lexicalrichness (readthedocs)
The two fundamental building blocks of all the measures are the total number of words in the text (w) and the number of unique terms (t). This addendum exposes the underlying lexicalrichness measures from attributes and methods in the LexicalRichness class.

* lexicalrichness.LexicalRichness.ttr()
  Type-token ratio (TTR) computed as t/w, where t is the number of unique terms/vocab and w is the total number of words. (Chotlos 1944, Templin 1957)
  Returns: Type-token ratio. Return type: Float.

* lexicalrichness.LexicalRichness.rttr()
  Root TTR (RTTR) computed as t/sqrt(w). Also known as Guiraud's R and Guiraud's index. (Guiraud 1954, 1960)
  Returns: Root type-token ratio. Return type: Float.

* lexicalrichness.LexicalRichness.cttr()
  Corrected TTR (CTTR) computed as t/sqrt(2 * w). (Carroll 1964)
  Returns: Corrected type-token ratio. Return type: Float.

* lexicalrichness.LexicalRichness.Herdan()
  Computed as log(t)/log(w). Also known as Herdan's C. (Herdan 1960, 1964)
  Returns: Herdan's C. Return type: Float.

* lexicalrichness.LexicalRichness.Summer()
  Computed as log(log(t)) / log(log(w)). (Summer 1966)
  Returns: Summer's index. Return type: Float.

* lexicalrichness.LexicalRichness.Dugast()
  Computed as (log(w) ** 2) / (log(w) - log(t)). (Dugast 1978)
  Returns: Dugast's index. Return type: Float.

* lexicalrichness.LexicalRichness.Maas()
  Maas's TTR, computed as (log(w) - log(t)) / (log(w) * log(w)), where t is the number of unique terms/vocab and w is the total number of words.
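Each of these measures is a direct function of only w and t. As an illustrative plain-Python sketch (the function name is made up; this is not the library's own code, which exposes the measures as attributes of LexicalRichness):

```python
from math import log, sqrt

def basic_measures(tokens):
    """Compute the classical TTR-family measures from w (total words)
    and t (unique terms), per the formulas documented above."""
    w = len(tokens)
    t = len(set(tokens))
    return {
        "ttr": t / w,                               # Chotlos 1944, Templin 1957
        "rttr": t / sqrt(w),                        # Guiraud's R
        "cttr": t / sqrt(2 * w),                    # Carroll 1964
        "herdan": log(t) / log(w),                  # Herdan's C
        "maas": (log(w) - log(t)) / (log(w) ** 2),  # Maas 1972; lower = richer
    }

tokens = "the quick brown fox jumps over the lazy dog".split()
m = basic_measures(tokens)  # w = 9, t = 8 ("the" appears twice)
```
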
Unlike the other measures, a lower Maas score indicates higher lexical richness. (Maas 1972)
  Returns: Maas's index. Return type: Float.

* lexicalrichness.LexicalRichness.yulek()
  Yule's K (Yule 1944, Tweedie and Baayen 1998):
  \[K = 10^4 \times \left\{\sum_{i=1}^n f(i,N) \left(\frac{i}{N}\right)^2 - \frac{1}{N} \right\}\]
  Returns: Yule's K. Return type: Float.

* lexicalrichness.LexicalRichness.yulei()
  Yule's I (Yule 1944):
  \[I = \frac{t^2}{\sum^{n_{\text{max}}}_{i=1} i^2 f(i,w) - t}\]
  Returns: Yule's I. Return type: Float.

* lexicalrichness.LexicalRichness.herdanvm()
  Herdan's Vm (Herdan 1955, Tweedie and Baayen 1998):
  \[V_m = \sqrt{\sum^{n_{\text{max}}}_{i=1} f(i,w) \left(\frac{i}{w} \right)^2 - \frac{1}{w}}\]
  Returns: Herdan's Vm. Return type: Float.

* lexicalrichness.LexicalRichness.simpsond()
  Simpson's D (Simpson 1949, Tweedie and Baayen 1998):
  \[D = \sum^{n_{\text{max}}}_{i=1} f(i,w) \frac{i}{w}\frac{i-1}{w-1}\]
  Returns: Simpson's D. Return type: Float.

msttr: Mean Segmental Type-Token Ratio (Johnson 1944)

* lexicalrichness.LexicalRichness.msttr(self, segment_window=100, discard=True)
  Mean segmental TTR (MSTTR), computed as the average of TTR scores for segments in a text: split the text into segments of length segment_window using the segment_generator helper, compute the TTR of each segment, and divide the sum of these scores by the number of segments. (Johnson 1944)
  - segment_window (int) – Size of each segment (default=100).
  - discard (bool) – If True, discard the remaining partial segment (e.g. for a text of 105 tokens and a segment_window of 100, the last 5 tokens are discarded). Default is True.
  Returns: Mean segmental type-token ratio (MSTTR). Return type: Float.

mattr: Moving Average Type-Token Ratio (Covington 2007, Covington and McFall 2010)

* lexicalrichness.LexicalRichness.mattr(self, window_size=100)
  Moving average TTR (MATTR) computed using the average of TTRs over successive segments of a text.
Estimate the TTR for tokens 1 to n, 2 to n+1, 3 to n+2, and so on until the end of the text (where n is the window size), then take the average. (Covington 2007, Covington and McFall 2010) Uses the list_sliding_window helper, which returns a sliding window generator (of size window_size) over a sequence.
  - window_size (int) – Size of each sliding window.
  Returns: Moving average type-token ratio (MATTR). Return type: Float.

mtld: Measure of Textual Lexical Diversity (McCarthy 2005, McCarthy and Jarvis 2010)

* lexicalrichness.LexicalRichness.mtld(self, threshold=0.72)
  Measure of textual lexical diversity (MTLD), computed as the mean length of sequential word runs in a text that maintain a minimum threshold TTR score: iterate over words until the TTR falls below the threshold, then increase the factor counter by 1 and start over. McCarthy and Jarvis (2010, p. 385) recommend a factor threshold in the range [0.660, 0.750]. (McCarthy 2005, McCarthy and Jarvis 2010)
  - threshold (float) – Factor threshold for MTLD; the algorithm starts a new segment when the TTR drops below the threshold (default=0.72).
  Returns: Measure of textual lexical diversity (MTLD). Return type: Float.

hdd: Hypergeometric Distribution Diversity (McCarthy and Jarvis 2007)

* lexicalrichness.LexicalRichness.hdd(self, draws=42)
  Hypergeometric distribution diversity (HD-D) score. For each term t in the text, compute the probability p of getting at least one appearance of t in a random draw of size n < N (the text size). The contribution of t to the final HD-D score is p * (1/n); the final HD-D score sums p * (1/n) over all terms t, with p computed for each term. Described in McCarthy and Jarvis (2007, pp. 465-466).
  - draws (int) – Number of random draws in the hypergeometric distribution (default=42).
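The HD-D contribution just described can be sketched exactly with the hypergeometric complement (a plain-Python illustration, not the package's implementation; the function name is made up):

```python
from collections import Counter
from math import comb

def hdd(tokens, draws=42):
    """HD-D sketch: for each term, the probability of at least one
    appearance in a random draw of `draws` tokens, weighted by 1/draws."""
    n = len(tokens)
    score = 0.0
    for freq in Counter(tokens).values():
        # P(term absent from a sample of size `draws`), hypergeometric
        p_none = comb(n - freq, draws) / comb(n, draws)
        score += (1 - p_none) * (1 / draws)
    return score
```
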
Returns: Hypergeometric distribution diversity (HD-D) score. Return type: Float.

vocd: voc-D (McKee, Malvern, and Richards 2010)

* lexicalrichness.LexicalRichness.vocd(self, ntokens=50, within_sample=100, iterations=3, seed=42)
  voc-D score of lexical diversity, derived from a series of TTR samplings and curve fittings. voc-D is meant as a measure of lexical diversity robust to varying text lengths. See also hdd. It is computed in four steps:
  Step 1: Take 100 random samples of 35 words from the text and compute the mean TTR over the samples.
  Step 2: Repeat this procedure for samples of 36 words, 37 words, and so on, up to ntokens (recommended as 50, the default), computing the mean TTR at each size. This yields an array of averaged TTR values for ntoken=35, ntoken=36, and so on until ntoken=50.
  Step 3: Find the best-fitting curve for the empirical function of TTR against word size (ntokens); the helper ttr_nd expresses TTR as a function of latent lexical diversity (d) and text length (n). The value of D that provides the best fit is the voc-D score.
  Step 4: Repeat steps 1 to 3 a number of times (default=3) before averaging D, which is the returned value.
  - ntokens (int) – Maximum token/word size in the random samplings (default=50).
  - within_sample (int) – Number of samples for each token/word size (default=100).
  - iterations (int) – Number of times to repeat steps 1 to 3 before averaging (default=3).
  - seed (int) – Seed for the pseudo-random number generator in random.sample() (default=42).
  Returns: voc-D. Return type: Float.

Helper: lexicalrichness.segment_generator

* lexicalrichness.segment_generator(List, segment_size)
  - List (list) – List of items to be segmented.
  - segment_size (int) – Size of each segment.
  Yields: List – Successive segments of segment_size items each (the last segment may be shorter).
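The segmentation helper just described amounts to a short generator; a minimal sketch (not the package's source, but equivalent to the documented behavior):

```python
def segment_generator(items, segment_size):
    """Yield successive segments of up to segment_size items each;
    the final segment may be shorter (callers such as msttr can discard it)."""
    for start in range(0, len(items), segment_size):
        yield items[start:start + segment_size]
```
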
Helper: lexicalrichness.list_sliding_window

* lexicalrichness.list_sliding_window(sequence, window_size=2)
  Returns a sliding window generator (of size window_size) over a sequence. Taken from https://docs.python.org/release/2.3.5/lib/itertools-example.html
  Example: for List = ['a', 'b', 'c', 'd'] and window_size = 2, list_sliding_window(List, 2) yields ('a', 'b'), ('b', 'c'), ('c', 'd').
  - sequence (string, unicode, list, tuple, etc.) – Sequence to be iterated over; window_size=1 is just a regular iterator.
  - window_size (int) – Size of each window.
  Yields: Tuples of window_size consecutive items from the sequence.

Helper: lexicalrichness.frequency_wordfrequency_table

* lexicalrichness.frequency_wordfrequency_table(bow)
  Get a table of each frequency i and the number of terms that appear i times in a text of length N, used by Yule's I, Yule's K, and Simpson's D. In the returned table, the freq column gives the frequency of appearance in the text, and the fv_i_N column gives the number of terms in the text of length N that appear freq times.
  - bow (array-like) – List of words.
  Return type: pandas.core.frame.DataFrame
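To make the role of this frequency table concrete, here is a plain-Python sketch of the spectrum f(i, N) and of Yule's K and Simpson's D built on it, per the formulas above (function names are illustrative, and a dict stands in for the pandas DataFrame the real helper returns):

```python
from collections import Counter

def frequency_spectrum(bow):
    """f(i, N): maps each frequency i to the number of terms that
    appear exactly i times (cf. the freq / fv_i_N columns above)."""
    return dict(Counter(Counter(bow).values()))

def yules_k(bow):
    """K = 10^4 * (sum_i f(i, N) * (i/N)^2 - 1/N)."""
    n = len(bow)
    spec = frequency_spectrum(bow)
    return 1e4 * (sum(f * (i / n) ** 2 for i, f in spec.items()) - 1 / n)

def simpsons_d(bow):
    """D = sum_i f(i, w) * (i/w) * ((i-1)/(w-1))."""
    w = len(bow)
    spec = frequency_spectrum(bow)
    return sum(f * (i / w) * ((i - 1) / (w - 1)) for i, f in spec.items())
```

For bow = ["a", "a", "a", "b", "b", "c"], the spectrum is {3: 1, 2: 1, 1: 1}: one term appears three times, one twice, one once.
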
biscale (CRAN, R)
Package ‘biscale’ — October 12, 2022

Type: Package
Title: Tools and Palettes for Bivariate Thematic Mapping
Version: 1.0.0
Description: Provides a 'ggplot2'-centric approach to bivariate mapping. This is a technique that maps two quantities simultaneously rather than the single value that most thematic maps display. The package provides a suite of tools for calculating breaks using multiple different approaches, a selection of palettes appropriate for bivariate mapping, and scale functions for 'ggplot2' calls that add those palettes to maps. Tools for creating bivariate legends are also included.
Depends: R (>= 3.5)
License: GPL-3
URL: https://chris-prener.github.io/biscale/
BugReports: https://github.com/chris-prener/biscale/issues
Encoding: UTF-8
LazyData: true
Imports: classInt, ggplot2, stats, utils
RoxygenNote: 7.1.2
Suggests: covr, cowplot, knitr, rmarkdown, sf, testthat
VignetteBuilder: knitr
NeedsCompilation: no
Author: <NAME> [aut, cre] (<https://orcid.org/0000-0002-4310-9888>), <NAME> [aut], <NAME> [aut], <NAME> [ctb]
Maintainer: <NAME> <<EMAIL>>
Repository: CRAN
Date/Publication: 2022-05-27 08:40:09 UTC

R topics documented: bi_class, bi_class_breaks, bi_legend, bi_pal, bi_scale_color, bi_scale_fill, bi_theme, stl_race_income, stl_race_income_point

bi_class Create Classes for Bivariate Maps

Description Creates mapping classes for a bivariate map. These data will be stored in a new variable named bi_class, which will be added to the given data object.

Usage bi_class(.data, x, y, style, dim = 3, keep_factors = FALSE, dig_lab = 3)

Arguments

.data A data frame, tibble, or sf object

x The x variable, either a numeric (including double and integer classes) or factor

y The y variable, either a numeric (including double and integer classes) or factor

style A string identifying the style used to calculate breaks. Currently supported styles are "quantile", "equal", "fisher", and "jenks". If both x and y are factors, this argument can be omitted.
Note that older versions of biscale used "quantile" as the default for this argument. Now that bi_class accepts factors, this argument no longer has a default, and older code that relied on the default will error.

dim The dimensions of the palette. To use the built-in palettes, this value must be either 2, 3, or 4. A value of 3, for example, would be used to create a three-by-three bivariate map with a total of 9 classes. If you are using a custom palette, this value may be larger (though these maps can be very hard to interpret). If you are using pre-made factors, both factors must have the same number of levels as this value.

keep_factors A logical scalar; if TRUE, the intermediate factor variables created as part of the calculation of bi_class will be retained. If FALSE (default), they will not be returned.

dig_lab An integer that is passed to base::cut()

Value A copy of .data with a new variable bi_class that contains combinations of values that correspond to an observation's values for x and y. This is the basis for applying a bivariate color palette.

Examples

# quantile breaks, 2x2
data <- bi_class(stl_race_income, x = pctWhite, y = medInc, style = "quantile", dim = 2)

# summarize quantile breaks, 2x2
table(data$bi_class)

# quantile breaks, 3x3
data <- bi_class(stl_race_income, x = pctWhite, y = medInc, style = "quantile", dim = 3)

# summarize quantile breaks, 3x3
table(data$bi_class)

bi_class_breaks Return Breaks

Description This function can be used to return a list containing vectors of either the ranges of values included in each category of x and y or, alternatively, the individual break values including the minimum and maximum values. This function supports simplified reporting as well as more descriptive legends.
Usage bi_class_breaks(.data, x, y, style, dim = 3, clean_levels = TRUE, dig_lab = 3, split = FALSE)

Arguments

.data A data frame, tibble, or sf object

x The x variable, either a numeric (including double and integer classes) or factor

y The y variable, either a numeric (including double and integer classes) or factor

style A string identifying the style used to calculate breaks. Currently supported styles are "quantile" (default), "equal", "fisher", and "jenks". If both x and y are factors, this argument can be omitted.

dim The dimensions of the palette. To use the built-in palettes, this value must be either 2, 3, or 4. A value of 3, for example, would be used to create a three-by-three bivariate map with a total of 9 classes. If you are using a custom palette, this value may be larger (though these maps can be very hard to interpret). If you are using pre-made factors, both factors must have the same number of levels as this value.

clean_levels A logical scalar; if TRUE (default), the brackets and parentheses will be stripped from the output. If FALSE, the levels will be returned with brackets and parentheses. If split is TRUE and clean_levels is FALSE, the clean_levels argument will be overridden.

dig_lab An integer that is passed to base::cut(); it determines the number of digits used in formatting break numbers. It can be either a scalar or a vector. If it is a scalar, the value will be applied to both the x and y variables. If it is a vector, the first element will be applied to the x variable and the second will be applied to the y variable.

split A logical scalar; if FALSE (default), the range of values for each factor level (corresponding to dim) will be returned for both the x and y variables. If TRUE, the individual values for each break (including the minimum and maximum values) will be returned.

Value A list where bi_x is a vector containing the breaks for the x variable and bi_y is a vector containing the breaks for the y variable.
Examples

# return ranges for each category of x and y
bi_class_breaks(stl_race_income, style = "quantile", x = pctWhite, y = medInc,
                dim = 4, dig_lab = c(4, 5), split = FALSE)

# ranges can be returned with brackets and parentheses
bi_class_breaks(stl_race_income, style = "quantile", x = pctWhite, y = medInc,
                clean_levels = FALSE, dim = 4, dig_lab = 3, split = FALSE)

# return breaks for each category of x and y
bi_class_breaks(stl_race_income, style = "quantile", x = pctWhite, y = medInc,
                dim = 4, dig_lab = c(4, 5), split = TRUE)

# optionally name the dig_lab vector for increased clarity of code
bi_class_breaks(stl_race_income, style = "quantile", x = pctWhite, y = medInc,
                dim = 4, dig_lab = c(x = 4, y = 5), split = TRUE)

# scalars can also be used for dig_lab, though results may be less optimal
bi_class_breaks(stl_race_income, style = "quantile", x = pctWhite, y = medInc,
                dim = 4, dig_lab = 3, split = TRUE)

bi_legend Create Object for Drawing Legend

Description Creates a ggplot object containing a legend that is specific to bivariate mapping.

Usage bi_legend(pal, dim = 3, xlab, ylab, size = 10, flip_axes = FALSE, rotate_pal = FALSE, pad_width = NA, pad_color = "#ffffff", breaks = NULL, arrows = TRUE)

Arguments

pal A palette name or a vector containing a custom palette. See the help file for bi_pal for a complete list of built-in palette names. If you are providing a custom palette, it must follow the formatting described in the 'Advanced Options' vignette.

dim The dimensions of the palette. To use the built-in palettes, this value must be either 2, 3, or 4. A value of 3, for example, would be used to create a three-by-three bivariate map with a total of 9 classes. If you are using a custom palette, this value may be larger (though these maps can be very hard to interpret). See the 'Advanced Options' vignette for details on the relationship between dim values and palette size.
xlab Text for the desired x axis label on the legend

ylab Text for the desired y axis label on the legend

size A numeric scalar; size of axis labels

flip_axes A logical scalar; if TRUE, the axes of the palette will be flipped. If FALSE (default), the palette will be displayed on its original axes. Custom palettes with 'dim' greater than 4 cannot take advantage of flipping axes.

rotate_pal A logical scalar; if TRUE, the palette will be rotated 180 degrees. If FALSE (default), the palette will be displayed in its original orientation. Custom palettes with 'dim' greater than 4 cannot take advantage of palette rotation.

pad_width An optional numeric scalar; controls the width of padding between tiles in the legend

pad_color An optional character scalar; controls the color of padding between tiles in the legend

breaks An optional list created by bi_class_breaks. Depending on the options selected when making the list, labels will be placed showing the corresponding range of values for each axis or, if split = TRUE, showing the individual breaks.

arrows A logical scalar; if TRUE (default), directional arrows will be added to both the x and y axes of the legend. If you want to suppress these arrows, especially if you are supplying breaks to create a more detailed legend, this parameter can be set to FALSE.

Value A ggplot object with a bivariate legend.
See Also bi_pal

Examples

# sample 3x3 legend
legend <- bi_legend(pal = "GrPink", dim = 3,
                    xlab = "Higher % White ",
                    ylab = "Higher Income ",
                    size = 16)

# print legend
legend

# sample 3x3 legend with breaks
## create vector of breaks
break_vals <- bi_class_breaks(stl_race_income, style = "quantile",
                              x = pctWhite, y = medInc, dim = 3,
                              dig_lab = c(x = 4, y = 5), split = TRUE)

## create legend
legend <- bi_legend(pal = "GrPink", dim = 3,
                    xlab = "Higher % White ",
                    ylab = "Higher Income ",
                    size = 16, breaks = break_vals, arrows = FALSE)

# print legend
legend

bi_pal Preview Palettes and Hex Values

Description Prints either a visual preview of each palette or the associated hex values.

Usage bi_pal(pal, dim = 3, preview = TRUE, flip_axes = FALSE, rotate_pal = FALSE)

Arguments

pal A palette name or a vector containing a custom palette. If you are providing a palette name, it must be one of: "Bluegill", "BlueGold", "BlueOr", "BlueYl", "Brown"/"Brown2", "DkBlue"/"DkBlue2", "DkCyan"/"DkCyan2", "DkViolet"/"DkViolet2", "GrPink"/"GrPink2", "PinkGrn", "PurpleGrn", or "PurpleOr". Pairs of palettes, such as "GrPink"/"GrPink2", are included for legacy support. The numbered palettes support four-by-four bivariate maps while the unnumbered ones, which were the five included in the original release of the package, only support two-by-two and three-by-three maps. If you are providing a custom palette, it must follow the formatting described in the 'Advanced Options' vignette.

dim The dimensions of the palette. To use the built-in palettes, this value must be either 2, 3, or 4. A value of 3, for example, would be used to create a three-by-three bivariate map with a total of 9 classes. If you are using a custom palette, this value may be larger (though these maps can be very hard to interpret). See the 'Advanced Options' vignette for details on the relationship between dim values and palette size.

preview A logical scalar; if TRUE (default), an image preview will be generated.
If FALSE, a vector with hex color values will be returned.

flip_axes A logical scalar; if TRUE, the axes of the palette will be flipped. If FALSE (default), the palette will be displayed on its original axes. Custom palettes with 'dim' greater than 4 cannot take advantage of flipping axes.

rotate_pal A logical scalar; if TRUE, the palette will be rotated 180 degrees. If FALSE (default), the palette will be displayed in its original orientation. Custom palettes with 'dim' greater than 4 cannot take advantage of palette rotation.

Details The "Brown", "DkBlue", "DkCyan", and "GrPink" palettes were made by <NAME>. The "DkViolet" palette was made by <NAME> and <NAME>. Many of the new palettes were inspired by <NAME>'s earlier work to expand biscale.

Value If preview = TRUE, an image preview of the legend will be returned. Otherwise, if preview = FALSE, a named vector with class values for names and their corresponding hex color values.

Examples

# gray pink palette, 2x2
bi_pal(pal = "GrPink", dim = 2)

# gray pink palette, 2x2 hex values
bi_pal(pal = "GrPink", dim = 2, preview = FALSE)

# gray pink palette, 3x3
bi_pal(pal = "GrPink", dim = 3)

# gray pink palette, 3x3 hex values
bi_pal(pal = "GrPink", dim = 3, preview = FALSE)

# custom palette
custom_pal <- c(
  "1-1" = "#cabed0", # low x, low y
  "2-1" = "#ae3a4e", # high x, low y
  "1-2" = "#4885c1", # low x, high y
  "2-2" = "#3f2949"  # high x, high y
)
bi_pal(pal = custom_pal, dim = 2, preview = FALSE)

bi_scale_color Apply Bivariate Color to ggplot Object

Description Applies the selected palette as the color aesthetic when geom_sf is used and the bi_class variable is given as the color in the aesthetic mapping.

Usage bi_scale_color(pal, dim = 3, flip_axes = FALSE, rotate_pal = FALSE, ...)

Arguments

pal A palette name or a vector containing a custom palette. See the help file for bi_pal for a complete list of built-in palette names.
If you are providing a custom palette, it must follow the formatting described in the 'Advanced Options' vignette.

dim The dimensions of the palette. To use the built-in palettes, this value must be either 2, 3, or 4. A value of 3, for example, would be used to create a three-by-three bivariate map with a total of 9 classes. If you are using a custom palette, this value may be larger (though these maps can be very hard to interpret). See the 'Advanced Options' vignette for details on the relationship between dim values and palette size.

flip_axes A logical scalar; if TRUE, the axes of the palette will be flipped. If FALSE (default), the palette will be displayed on its original axes. Custom palettes with 'dim' greater than 4 cannot take advantage of flipping axes.

rotate_pal A logical scalar; if TRUE, the palette will be rotated 180 degrees. If FALSE (default), the palette will be displayed in its original orientation. Custom palettes with 'dim' greater than 4 cannot take advantage of palette rotation.

... Arguments to pass to scale_color_manual

Value A ggplot object with the given bivariate palette applied to the data.

See Also bi_pal

Examples

# load dependencies
library(ggplot2)

# add breaks, 3x3
data <- bi_class(stl_race_income, x = pctWhite, y = medInc, style = "quantile", dim = 3)

# create map
plot <- ggplot() +
  geom_sf(data = data, aes(color = bi_class), size = 2, show.legend = FALSE) +
  bi_scale_color(pal = "GrPink", dim = 3)

bi_scale_fill Apply Bivariate Fill to ggplot Object

Description Applies the selected palette as the fill aesthetic when geom_sf is used and the bi_class variable is given as the fill in the aesthetic mapping.

Usage bi_scale_fill(pal, dim = 3, flip_axes = FALSE, rotate_pal = FALSE, ...)

Arguments

pal A palette name or a vector containing a custom palette. See the help file for bi_pal for a complete list of built-in palette names.
If you are providing a custom palette, it must follow the formatting described in the 'Advanced Options' vignette.

dim The dimensions of the palette, either 2 for a two-by-two palette, 3 for a three-by-three palette, or 4 for a four-by-four palette.

flip_axes A logical scalar; if TRUE, the axes of the palette will be flipped. If FALSE (default), the palette will be displayed on its original axes.

rotate_pal A logical scalar; if TRUE, the palette will be rotated 180 degrees. If FALSE (default), the palette will be displayed in its original orientation.

... Arguments to pass to scale_fill_manual

Value A ggplot object with the given bivariate palette applied to the data.

See Also bi_pal

Examples

# load dependencies
library(ggplot2)

# add breaks, 3x3
data <- bi_class(stl_race_income, x = pctWhite, y = medInc, style = "quantile", dim = 3)

# create map
plot <- ggplot() +
  geom_sf(data = data, aes(fill = bi_class), color = "white", size = 0.1, show.legend = FALSE) +
  bi_scale_fill(pal = "GrPink", dim = 3)

bi_theme Basic Theme for Bivariate Mapping

Description A theme for creating a simple, clean bivariate map using ggplot2.

Usage bi_theme(base_family = "sans", base_size = 24, bg_color = "#ffffff", font_color = "#000000", ...)

Arguments

base_family A character string representing the font family to be used in the map.

base_size A number representing the base size used in the map.

bg_color A character string containing the hex value for the desired color of the map's background.

font_color A character string containing the hex value for the desired color of the map's text.

...
Arguments to pass on to ggplot2's theme function

Examples

# load suggested dependencies
library(ggplot2)
library(sf)

# add breaks, 3x3
data <- bi_class(stl_race_income, x = pctWhite, y = medInc, style = "quantile", dim = 3)

# create map
ggplot() +
  geom_sf(data = data, aes(fill = bi_class), color = "white", size = 0.1, show.legend = FALSE) +
  bi_scale_fill(pal = "GrPink", dim = 3) +
  bi_theme()

stl_race_income Race and Median Income in St. Louis by Census Tract, 2017

Description A simple features data set containing the geometry and associated attributes for the 2013-2017 American Community Survey estimates for median household income and the percentage of white residents in St. Louis. This version of the sample data is stored as polygon data.

Usage data(stl_race_income)

Format A data frame with 106 rows and 4 variables:
GEOID full GEOID string
pctWhite Percent of white residents per tract
medInc Median household income of tract
geometry simple features geometry

Source tidycensus package

Examples
str(stl_race_income)
head(stl_race_income)
summary(stl_race_income$medInc)

stl_race_income_point Race and Median Income in St. Louis by Census Tract, 2017

Description A simple features data set containing the geometry and associated attributes for the 2013-2017 American Community Survey estimates for median household income and the percentage of white residents in St. Louis. This version of the sample data is stored as point data.

Usage data(stl_race_income_point)

Format A data frame with 106 rows and 4 variables:
GEOID full GEOID string
pctWhite Percent of white residents per tract
medInc Median household income of tract
geometry simple features geometry

Source tidycensus package

Examples
str(stl_race_income_point)
head(stl_race_income_point)
summary(stl_race_income_point$medInc)
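The reference entries above show each function in isolation; a typical end-to-end workflow combines them and composes the map with its legend. The sketch below follows the documented API, but the cowplot placement coordinates are illustrative choices, not package defaults:

```r
library(biscale)
library(ggplot2)
library(cowplot)

# classify observations into a 3x3 grid of bivariate classes
data <- bi_class(stl_race_income, x = pctWhite, y = medInc,
                 style = "quantile", dim = 3)

# draw the map itself
map <- ggplot() +
  geom_sf(data = data, aes(fill = bi_class), color = "white",
          size = 0.1, show.legend = FALSE) +
  bi_scale_fill(pal = "GrPink", dim = 3) +
  bi_theme()

# draw the matching legend
legend <- bi_legend(pal = "GrPink", dim = 3,
                    xlab = "Higher % White ",
                    ylab = "Higher Income ",
                    size = 8)

# compose map and legend; the placement values here are illustrative
finalPlot <- ggdraw() +
  draw_plot(map, 0, 0, 1, 1) +
  draw_plot(legend, 0.2, 0.65, 0.2, 0.2)
```

Because the legend is itself a ggplot object rather than a ggplot2 legend, composing it onto the map with cowplot (a Suggests dependency) is the usual final step.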
elastalert (readthedocs)
ElastAlert 0.0.1 documentation

ElastAlert - Easy & Flexible Alerting With Elasticsearch

ElastAlert is a simple framework for alerting on anomalies, spikes, or other patterns of interest from data in Elasticsearch. At Yelp, we use Elasticsearch, Logstash and Kibana for managing our ever-increasing amount of data and logs. Kibana is great for visualizing and querying data, but we quickly realized that it needed a companion tool for alerting on inconsistencies in our data. Out of this need, ElastAlert was created. If you have data being written into Elasticsearch in near real time and want to be alerted when that data matches certain patterns, ElastAlert is the tool for you.

### Overview

We designed ElastAlert to be [reliable](#reliability), highly [modular](#modularity), and easy to [set up](index.html#tutorial) and [configure](#configuration). It works by combining Elasticsearch with two types of components, rule types and alerts. Elasticsearch is periodically queried and the data is passed to the rule type, which determines when a match is found. When a match occurs, it is given to one or more alerts, which take action based on the match. This is configured by a set of rules, each of which defines a query, a rule type, and a set of alerts.
Several rule types with common monitoring paradigms are included with ElastAlert:

* "Match where there are X events in Y time" (`frequency` type)
* "Match when the rate of events increases or decreases" (`spike` type)
* "Match when there are fewer than X events in Y time" (`flatline` type)
* "Match when a certain field matches a blacklist/whitelist" (`blacklist` and `whitelist` types)
* "Match on any event matching a given filter" (`any` type)
* "Match when a field has two different values within some time" (`change` type)

Currently, we have support built in for these alert types: Command, Email, JIRA, OpsGenie, SNS, HipChat, Slack, Telegram, GoogleChat, Debug, Stomp, and theHive. Additional rule types and alerts can be easily imported or written. (See [Writing rule types](index.html#writingrules) and [Writing alerts](index.html#writingalerts))

In addition to this basic usage, there are many other features that make alerts more useful:

* Alerts link to Kibana dashboards
* Aggregate counts for arbitrary fields
* Combine alerts into periodic reports
* Separate alerts by using a unique key field
* Intercept and enhance match data

To get started, check out [Running ElastAlert For The First Time](index.html#tutorial).

### Reliability

ElastAlert has several features to make it more reliable in the event of restarts or Elasticsearch unavailability:

* ElastAlert [saves its state to Elasticsearch](index.html#metadata) and, when started, will resume where it previously stopped
* If Elasticsearch is unresponsive, ElastAlert will wait until it recovers before continuing
* Alerts which throw errors may be automatically retried for a period of time

### Modularity

ElastAlert has three main components that may be imported as a module or customized:

#### Rule types

The rule type is responsible for processing the data returned from Elasticsearch.
It is initialized with the rule configuration, passed data that is returned from querying Elasticsearch with the rule's filters, and outputs matches based on this data. See [Writing rule types](index.html#writingrules) for more information.

#### Alerts

Alerts are responsible for taking action based on a match. A match is generally a dictionary containing values from a document in Elasticsearch, but may contain arbitrary data added by the rule type. See [Writing alerts](index.html#writingalerts) for more information.

#### Enhancements

Enhancements are a way of intercepting an alert and modifying or enhancing it in some way. They are passed the match dictionary before it is given to the alerter. See [Enhancements](index.html#enhancements) for more information.

### Configuration

ElastAlert has a global configuration file, `config.yaml`, which defines several aspects of its operation:

`buffer_time`: ElastAlert will continuously query against a window from the present to `buffer_time` ago. This way, logs can be backfilled up to a certain extent and ElastAlert will still process the events. This may be overridden by individual rules. This option is ignored for rules where `use_count_query` or `use_terms_query` is set to true. Note that backfilled data may not always trigger count-based alerts as if it were queried in real time.

`es_host`: The host name of the Elasticsearch cluster where ElastAlert records metadata about its searches. When ElastAlert is started, it will query for information about the time that it was last run. This way, even if ElastAlert is stopped and restarted, it will never miss data or look at the same events twice. This field also specifies the default cluster for each rule to run on. The environment variable `ES_HOST` will override this field.

`es_port`: The port corresponding to `es_host`. The environment variable `ES_PORT` will override this field.
`use_ssl`: Optional; whether or not to connect to `es_host` using TLS; set to `True` or `False`. The environment variable `ES_USE_SSL` will override this field.

`verify_certs`: Optional; whether or not to verify TLS certificates; set to `True` or `False`. The default is `True`.

`client_cert`: Optional; path to a PEM certificate to use as the client certificate.

`client_key`: Optional; path to a private key file to use as the client key.

`ca_certs`: Optional; path to a CA cert bundle to use to verify SSL connections.

`es_username`: Optional; basic-auth username for connecting to `es_host`. The environment variable `ES_USERNAME` will override this field.

`es_password`: Optional; basic-auth password for connecting to `es_host`. The environment variable `ES_PASSWORD` will override this field.

`es_url_prefix`: Optional; URL prefix for the Elasticsearch endpoint. The environment variable `ES_URL_PREFIX` will override this field.

`es_send_get_body_as`: Optional; method for querying Elasticsearch - `GET`, `POST` or `source`. The default is `GET`.

`es_conn_timeout`: Optional; sets timeout for connecting to and reading from `es_host`; defaults to `20`.

`rules_loader`: Optional; sets the loader class to be used by ElastAlert to retrieve rules and hashes. Defaults to `FileRulesLoader` if not set.

`rules_folder`: The name of the folder which contains rule configuration files. ElastAlert will load all files in this folder, and all subdirectories, that end in .yaml. If the contents of this folder change, ElastAlert will load, reload or remove rules based on their respective config files. (only required when using `FileRulesLoader`).

`scan_subdirectories`: Optional; sets whether or not ElastAlert should recursively descend the rules directory - `true` or `false`. The default is `true`.

`run_every`: How often ElastAlert should query Elasticsearch. ElastAlert will remember the last time it ran the query for a given rule, and periodically query from that time until the present.
The format of this field is a nested unit of time, such as `minutes: 5`. This is how time is defined in every ElastAlert configuration.

`writeback_index`: The index on `es_host` to use.

`max_query_size`: The maximum number of documents that will be downloaded from Elasticsearch in a single query. The default is 10,000, and if you expect to get near this number, consider using `use_count_query` for the rule. If this limit is reached, ElastAlert will [scroll](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-scroll.html) using the size of `max_query_size` through the set amount of pages, when `max_scrolling_count` is set, or until all results have been processed.

`max_scrolling_count`: The maximum number of pages to scroll through. The default is `0`, which means the scrolling has no limit. For example, if this value is set to `5` and `max_query_size` is set to `10000`, then at most `50000` documents will be downloaded.

`scroll_keepalive`: The maximum time (formatted in [Time Units](https://www.elastic.co/guide/en/elasticsearch/reference/current/common-options.html#time-units)) the scrolling context should be kept alive. Avoid using high values as it abuses resources in Elasticsearch, but be mindful to allow sufficient time to finish processing all the results.

`max_aggregation`: The maximum number of alerts to aggregate together. If a rule has `aggregation` set, all alerts occurring within a timeframe will be sent together. The default is 10,000.

`old_query_limit`: The maximum time between queries for ElastAlert to start at the most recently run query. When ElastAlert starts, for each rule, it will search `elastalert_metadata` for the most recently run query and start from that time, unless it is older than `old_query_limit`, in which case it will start from the present time. The default is one week.

`disable_rules_on_error`: If true, ElastAlert will disable rules which throw uncaught (not EAException) exceptions.
It will upload a traceback message to `elastalert_metadata` and, if `notify_email` is set, send an email notification. The rule will no longer be run until either ElastAlert restarts or the rule file has been modified. This defaults to True.

`show_disabled_rules`: If true, ElastAlert will show the list of disabled rules when it finishes execution. This defaults to True.

`notify_email`: An email address, or list of email addresses, to which notification emails will be sent. Currently, only an uncaught exception will send a notification email. The from address, SMTP host, and reply-to header can be set using the `from_addr`, `smtp_host`, and `email_reply_to` options, respectively. By default, no emails will be sent.

`from_addr`: The address to use as the from header in email notifications. This value will be used for email alerts as well, unless overwritten in the rule config. The default value is “ElastAlert”.

`smtp_host`: The SMTP host used to send email notifications. This value will be used for email alerts as well, unless overwritten in the rule config. The default is “localhost”.

`email_reply_to`: This sets the Reply-To header in emails. The default is the recipient address.

`aws_region`: This makes ElastAlert sign HTTP requests when using Amazon Elasticsearch Service. It will use instance role keys to sign the requests. The environment variable `AWS_DEFAULT_REGION` will override this field.

`boto_profile`: Deprecated! Boto profile to use when signing requests to Amazon Elasticsearch Service, if you don’t want to use the instance role keys.

`profile`: AWS profile to use when signing requests to Amazon Elasticsearch Service, if you don’t want to use the instance role keys. The environment variable `AWS_DEFAULT_PROFILE` will override this field.

`replace_dots_in_field_names`: If `True`, ElastAlert replaces any dots in field names with an underscore before writing documents to Elasticsearch. The default value is `False`.
Elasticsearch 2.0 - 2.3 does not support dots in field names.

`string_multi_field_name`: If set, the suffix to use for the subfield for string multi-fields in Elasticsearch. The default value is `.raw` for Elasticsearch 2 and `.keyword` for Elasticsearch 5.

`add_metadata_alert`: If set, alerts will include metadata described in rules (`category`, `description`, `owner` and `priority`); set to `True` or `False`. The default is `False`.

`skip_invalid`: If `True`, skip invalid files instead of exiting.

By default, ElastAlert uses a simple basic logging configuration to print log messages to standard error. You can change the log level to `INFO` by using the `--verbose` or `--debug` command line options. If you need a more sophisticated logging configuration, you can provide a full logging configuration in the config file. This way you can also configure logging to a file, to Logstash, and adjust the logging format. For details, see the end of `config.yaml.example` where you can find an example logging configuration.

### Running ElastAlert[¶](#running-elastalert)

`$ python elastalert/elastalert.py`

Several arguments are available when running ElastAlert:

`--config` will specify the configuration file to use. The default is `config.yaml`.

`--debug` will run ElastAlert in debug mode. This will increase the logging verbosity, change all alerts to `DebugAlerter`, which prints alerts and suppresses their normal action, and skips writing search and alert metadata back to Elasticsearch. Not compatible with `--verbose`.

`--verbose` will increase the logging verbosity, which allows you to see information about the state of queries. Not compatible with `--debug`.

`--start <timestamp>` will force ElastAlert to begin querying from the given time, instead of the default, querying from the present. The timestamp should be ISO8601, e.g. `YYYY-MM-DDTHH:MM:SS` (UTC) or with timezone `YYYY-MM-DDTHH:MM:SS-08:00` (PST).
Note that if querying over a large date range, no alerts will be sent until that rule has finished querying over the entire time period. To force querying from the current time, use “NOW”.

`--end <timestamp>` will cause ElastAlert to stop querying at the specified timestamp, instead of the default, querying to the present time indefinitely. This really only makes sense when running standalone. The timestamp is formatted as `YYYY-MM-DDTHH:MM:SS` (UTC) or with timezone `YYYY-MM-DDTHH:MM:SS-XX:00` (UTC-XX).

`--rule <rule.yaml>` will only run the given rule. The rule file may be a complete file path or a filename in `rules_folder` or its subdirectories.

`--silence <unit>=<number>` will silence the alerts for a given rule for a period of time. The rule must be specified using `--rule`. <unit> is one of days, weeks, hours, minutes or seconds. <number> is an integer. For example, `--rule noisy_rule.yaml --silence hours=4` will stop noisy_rule from generating any alerts for 4 hours.

`--es_debug` will enable logging for all queries made to Elasticsearch.

`--es_debug_trace <trace.log>` will enable logging curl commands for all queries made to Elasticsearch to the specified log file. `--es_debug_trace` is passed through to [elasticsearch.py](http://elasticsearch-py.readthedocs.io/en/master/index.html#logging) which logs localhost:9200 instead of the actual `es_host`:`es_port`.

`--pin_rules` will stop ElastAlert from loading, reloading or removing rules based on changes to their config files.
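As a sketch, the global options described above fit together in a `config.yaml` like the following (hostnames and values here are illustrative, not project defaults):

```yaml
# Illustrative global config sketch -- adjust values for your cluster
rules_folder: example_rules          # directory scanned for *.yaml rule files
run_every:
  minutes: 5                         # how often ElastAlert queries Elasticsearch
buffer_time:
  minutes: 45                        # size of the rolling query window
es_host: elasticsearch.example.com   # cluster holding ElastAlert's metadata
es_port: 9200
writeback_index: elastalert_status   # index where ElastAlert saves its state
alert_time_limit:
  days: 2                            # retry window for failed alerts
```

Note how every duration uses the nested `unit: X` time format described above.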
Running ElastAlert for the First Time[¶](#running-elastalert-for-the-first-time)
---

### Requirements[¶](#requirements)

* Elasticsearch
* ISO8601 or Unix timestamped data
* Python 3.6
* pip, see requirements.txt
* Packages on Ubuntu 14.x: python-pip python-dev libffi-dev libssl-dev

### Downloading and Configuring[¶](#downloading-and-configuring)

You can either install the latest released version of ElastAlert using pip:

```
$ pip install elastalert
```

or you can clone the ElastAlert repository for the most recent changes:

```
$ git clone https://github.com/Yelp/elastalert.git
```

Install the module:

```
$ pip install "setuptools>=11.3"
$ python setup.py install
```

Depending on the version of Elasticsearch, you may need to manually install the correct version of elasticsearch-py.

Elasticsearch 5.0+:

```
$ pip install "elasticsearch>=5.0.0"
```

Elasticsearch 2.X:

```
$ pip install "elasticsearch<3.0.0"
```

Next, open up config.yaml.example. In it, you will find several configuration options. ElastAlert may be run without changing any of these settings.

`rules_folder` is where ElastAlert will load rule configuration files from. It will attempt to load every .yaml file in the folder. Without any valid rules, ElastAlert will not start. ElastAlert will also load new rules, stop running missing rules, and restart modified rules as the files in this folder change. For this tutorial, we will use the example_rules folder.

`run_every` is how often ElastAlert will query Elasticsearch.

`buffer_time` is the size of the query window, stretching backwards from the time each query is run. This value is ignored for rules where `use_count_query` or `use_terms_query` is set to true.

`es_host` is the address of an Elasticsearch cluster where ElastAlert will store data about its state, queries run, alerts, and errors. Each rule may also use a different Elasticsearch host to query against.

`es_port` is the port corresponding to `es_host`.
`use_ssl`: Optional; whether or not to connect to `es_host` using TLS; set to `True` or `False`.

`verify_certs`: Optional; whether or not to verify TLS certificates; set to `True` or `False`. The default is `True`.

`client_cert`: Optional; path to a PEM certificate to use as the client certificate.

`client_key`: Optional; path to a private key file to use as the client key.

`ca_certs`: Optional; path to a CA cert bundle to use to verify SSL connections.

`es_username`: Optional; basic-auth username for connecting to `es_host`.

`es_password`: Optional; basic-auth password for connecting to `es_host`.

`es_url_prefix`: Optional; URL prefix for the Elasticsearch endpoint.

`es_send_get_body_as`: Optional; method for querying Elasticsearch - `GET`, `POST` or `source`. The default is `GET`.

`writeback_index` is the name of the index in which ElastAlert will store data. We will create this index later.

`alert_time_limit` is the retry window for failed alerts.

Save the file as `config.yaml`.

### Setting Up Elasticsearch[¶](#setting-up-elasticsearch)

ElastAlert saves information and metadata about its queries and its alerts back to Elasticsearch. This is useful for auditing and debugging, and it allows ElastAlert to restart and resume exactly where it left off. This is not required for ElastAlert to run, but is highly recommended.

First, we need to create an index for ElastAlert to write to by running `elastalert-create-index` and following the instructions:

```
$ elastalert-create-index
New index name (Default elastalert_status)
Name of existing index to copy (Default None)
New index elastalert_status created
Done!
```

For information about what data will go here, see [ElastAlert Metadata Index](index.html#metadata).

### Creating a Rule[¶](#creating-a-rule)

Each rule defines a query to perform, parameters on what triggers a match, and a list of alerts to fire for each match.
We are going to use `example_rules/example_frequency.yaml` as a template:

```
# From example_rules/example_frequency.yaml
es_host: elasticsearch.example.com
es_port: 14900
name: Example rule
type: frequency
index: logstash-*
num_events: 50
timeframe:
  hours: 4
filter:
- term:
    some_field: "some_value"
alert:
- "email"
email:
- "<EMAIL>"
```

`es_host` and `es_port` should point to the Elasticsearch cluster we want to query.

`name` is the unique name for this rule. ElastAlert will not start if two rules share the same name.

`type`: Each rule has a different type which may take different parameters. The `frequency` type means “Alert when more than `num_events` occur within `timeframe`.” For information on other types, see [Rule types](index.html#ruletypes).

`index`: The name of the index(es) to query. If you are using Logstash, by default the indexes will match `"logstash-*"`.

`num_events`: This parameter is specific to the `frequency` type and is the threshold for when an alert is triggered.

`timeframe` is the time period in which `num_events` must occur.

`filter` is a list of Elasticsearch filters that are used to filter results. Here we have a single term filter for documents with `some_field` matching `some_value`. See [Writing Filters For Rules](index.html#writingfilters) for more information. If no filters are desired, it should be specified as an empty list: `filter: []`

`alert` is a list of alerts to run on each match. For more information on alert types, see [Alerts](index.html#alerts). The email alert requires an SMTP server for sending mail. By default, it will attempt to use localhost. This can be changed with the `smtp_host` option.

`email` is a list of addresses to which alerts will be sent.

There are many other optional configuration options, see [Common configuration options](index.html#commonconfig).

All documents must have a timestamp field. ElastAlert will try to use `@timestamp` by default, but this can be changed with the `timestamp_field` option.
By default, ElastAlert uses ISO8601 timestamps, though Unix timestamps are supported by setting `timestamp_type`.

As is, this rule means “Send an email to <EMAIL> when there are more than 50 documents with `some_field == some_value` within a 4 hour period.”

### Testing Your Rule[¶](#testing-your-rule)

Running the `elastalert-test-rule` tool will test that your config file successfully loads and run it in debug mode over the last 24 hours:

```
$ elastalert-test-rule example_rules/example_frequency.yaml
```

If you want to specify a configuration file to use, you can run it with the config flag:

```
$ elastalert-test-rule --config <path-to-config-file> example_rules/example_frequency.yaml
```

The configuration preferences will be loaded as follows:

1. Configurations specified in the yaml file.
2. Configurations specified in the config file, if specified.
3. Default configurations, for the tool to run.

See [the testing section for more details](index.html#testing)

### Running ElastAlert[¶](#running-elastalert)

There are two ways of invoking ElastAlert: as a daemon, through Supervisor (<http://supervisord.org/>), or directly with Python. For easier debugging purposes in this tutorial, we will invoke it directly:

```
$ python -m elastalert.elastalert --verbose --rule example_frequency.yaml  # or use the entry point: elastalert --verbose --rule ...
No handlers could be found for logger "Elasticsearch"
INFO:root:Queried rule Example rule from 1-15 14:22 PST to 1-15 15:07 PST: 5 hits
INFO:Elasticsearch:POST http://elasticsearch.example.com:14900/elastalert_status/elastalert_status?op_type=create [status:201 request:0.025s]
INFO:root:Ran Example rule from 1-15 14:22 PST to 1-15 15:07 PST: 5 query hits (0 already seen), 0 matches, 0 alerts sent
INFO:root:Sleeping for 297 seconds
```

ElastAlert uses the Python logging system and `--verbose` sets it to display INFO level messages.
`--rule example_frequency.yaml` specifies the rule to run, otherwise ElastAlert will attempt to load the other rules in the example_rules folder.

Let’s break down the response to see what’s happening.

`Queried rule Example rule from 1-15 14:22 PST to 1-15 15:07 PST: 5 hits`

ElastAlert periodically queries the most recent `buffer_time` (default 45 minutes) for data matching the filters. Here we see that it matched 5 hits.

`POST http://elasticsearch.example.com:14900/elastalert_status/elastalert_status?op_type=create [status:201 request:0.025s]`

This line shows that ElastAlert uploaded a document to the elastalert_status index with information about the query it just made.

`Ran Example rule from 1-15 14:22 PST to 1-15 15:07 PST: 5 query hits (0 already seen), 0 matches, 0 alerts sent`

This line means ElastAlert has finished processing the rule. For large time periods, sometimes multiple queries may be run, but their data will be processed together. `query hits` is the number of documents that are downloaded from Elasticsearch, `already seen` refers to documents that were already counted in a previous overlapping query and will be ignored, `matches` is the number of matches the rule type output, and `alerts sent` is the number of alerts actually sent. This may differ from `matches` because of options like `realert` and `aggregation` or because of an error.

`Sleeping for 297 seconds`

The default `run_every` is 5 minutes, meaning ElastAlert will sleep until 5 minutes have elapsed from the last cycle before running queries for each rule again with time ranges shifted forward 5 minutes.

Say, over the next 297 seconds, 46 more matching documents were added to Elasticsearch:

```
INFO:root:Queried rule Example rule from 1-15 14:27 PST to 1-15 15:12 PST: 51 hits
...
INFO:root:Sent email to ['<EMAIL>']
...
INFO:root:Ran Example rule from 1-15 14:27 PST to 1-15 15:12 PST: 51 query hits, 1 matches, 1 alerts sent
```

The body of the email will contain something like:

```
Example rule

At least 50 events occurred between 1-15 11:12 PST and 1-15 15:12 PST

@timestamp: 2015-01-15T15:12:00-08:00
```

If an error occurred, such as an unreachable SMTP server, you may see:

`ERROR:root:Error while running alert email: Error connecting to SMTP host: [Errno 61] Connection refused`

Note that if you stop ElastAlert and then run it again later, it will look up `elastalert_status` and begin querying at the end time of the last query. This is to prevent duplication or skipping of alerts if ElastAlert is restarted.

By using the `--debug` flag instead of `--verbose`, the body of the email will instead be logged and the email will not be sent. In addition, the queries will not be saved to `elastalert_status`.

Rule Types and Configuration Options[¶](#rule-types-and-configuration-options)
---

Examples of several types of rule configuration can be found in the example_rules folder.

Note

All “time” formats are of the form `unit: X` where unit is one of weeks, days, hours, minutes or seconds. Such as `minutes: 15` or `hours: 1`.
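For instance, the global `run_every` option and a rule's `timeframe` option both use this nested time form (shown together here only for illustration; `run_every` belongs in the global config, `timeframe` in a rule file):

```yaml
run_every:
  minutes: 5
timeframe:
  hours: 4
```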
### Rule Configuration Cheat Sheet[¶](#rule-configuration-cheat-sheet)

| FOR ALL RULES | |
| --- | --- |
| `es_host` (string) | Required |
| `es_port` (number) | |
| `index` (string) | |
| `type` (string) | |
| `alert` (string or list) | |
| `name` (string, defaults to the filename) | Optional |
| `use_strftime_index` (boolean, default False) | |
| `use_ssl` (boolean, default False) | |
| `verify_certs` (boolean, default True) | |
| `es_username` (string, no default) | |
| `es_password` (string, no default) | |
| `es_url_prefix` (string, no default) | |
| `es_send_get_body_as` (string, default “GET”) | |
| `aggregation` (time, no default) | |
| `description` (string, default empty string) | |
| `generate_kibana_link` (boolean, default False) | |
| `use_kibana_dashboard` (string, no default) | |
| `kibana_url` (string, default from es_host) | |
| `use_kibana4_dashboard` (string, no default) | |
| `kibana4_start_timedelta` (time, default: 10 min) | |
| `kibana4_end_timedelta` (time, default: 10 min) | |
| `use_local_time` (boolean, default True) | |
| `realert` (time, default: 1 min) | |
| `exponential_realert` (time, no default) | |
| `match_enhancements` (list of strs, no default) | |
| `top_count_number` (int, default 5) | |
| `top_count_keys` (list of strs) | |
| `raw_count_keys` (boolean, default True) | |
| `include` (list of strs, default [“*”]) | |
| `filter` (ES filter DSL, no default) | |
| `max_query_size` (int, default global max_query_size) | |
| `query_delay` (time, default 0 min) | |
| `owner` (string, default empty string) | |
| `priority` (int, default 2) | |
| `category` (string, default empty string) | |
| `scan_entire_timeframe` (bool, default False) | |
| `import` (string) IGNORED IF `use_count_query` or `use_terms_query` is true | |
| `buffer_time` (time, default from config.yaml) | |
| `timestamp_type` (string, default iso) | |
| `timestamp_format` (string, default “%Y-%m-%dT%H:%M:%SZ”) | |
| `timestamp_format_expr` (string, no default) | |
| `_source_enabled` (boolean, default True) | |
| `alert_text_args` (array of strs) | |
| `alert_text_kw` (object) | |
| `alert_missing_value` (string, default “<MISSING VALUE>”) | |
| `is_enabled` (boolean, default True) | |
| `search_extra_index` (boolean, default False) | |

| RULE TYPE | Any | Blacklist | Whitelist | Change | Frequency | Spike | Flatline | New_term | Cardinality |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| `compare_key` (list of strs, no default) | | Req | Req | Req | | | | | |
| `blacklist` (list of strs, no default) | | Req | | | | | | | |
| `whitelist` (list of strs, no default) | | | Req | | | | | | |
| `ignore_null` (boolean, no default) | | | Req | Req | | | | | |
| `query_key` (string, no default) | Opt | | | Req | Opt | Opt | Opt | Req | Opt |
| `aggregation_key` (string, no default) | Opt | | | | | | | | |
| `summary_table_fields` (list, no default) | Opt | | | | | | | | |
| `timeframe` (time, no default) | | | | Opt | Req | Req | Req | | Req |
| `num_events` (int, no default) | | | | | Req | | | | |
| `attach_related` (boolean, no default) | | | | | Opt | | | | |
| `use_count_query` (boolean, no default) `doc_type` (string, no default) | | | | | Opt | Opt | Opt | | |
| `use_terms_query` (boolean, no default) `doc_type` (string, no default) `query_key` (string, no default) `terms_size` (int, default 50) | | | | | Opt | Opt | | Opt | |
| `spike_height` (int, no default) | | | | | | Req | | | |
| `spike_type` ([up\|down\|both], no default) | | | | | | Req | | | |
| `alert_on_new_data` (boolean, default False) | | | | | | Opt | | | |
| `threshold_ref` (int, no default) | | | | | | Opt | | | |
| `threshold_cur` (int, no default) | | | | | | Opt | | | |
| `threshold` (int, no default) | | | | | | | Req | | |
| `fields` (string or list, no default) | | | | | | | | Req | |
| `terms_window_size` (time, default 30 days) | | | | | | | | Opt | |
| `window_step_size` (time, default 1 day) | | | | | | | | Opt | |
| `alert_on_missing_fields` (boolean, default False) | | | | | | | | Opt | |
| `cardinality_field` (string, no default) | | | | | | | | | Req |
| `max_cardinality` (boolean, no default) | | | | | | | | | Opt |
| `min_cardinality` (boolean, no default) | | | | | | | | | Opt |

### Common Configuration Options[¶](#common-configuration-options)

Every file that ends in `.yaml` in the `rules_folder` will be run by default. The following configuration settings are common to all types of rules.

#### Required Settings[¶](#required-settings)

##### es_host[¶](#es-host)

`es_host`: The hostname of the Elasticsearch cluster the rule will use to query. (Required, string, no default)

The environment variable `ES_HOST` will override this field.

##### es_port[¶](#es-port)

`es_port`: The port of the Elasticsearch cluster. (Required, number, no default)

The environment variable `ES_PORT` will override this field.

##### index[¶](#index)

`index`: The name of the index that will be searched. Wildcards can be used here, such as: `index: my-index-*` which will match `my-index-2014-10-05`. You can also use a format string containing `%Y` for year, `%m` for month, and `%d` for day. To use this, you must also set `use_strftime_index` to true. (Required, string, no default)

##### name[¶](#name)

`name`: The name of the rule. This must be unique across all rules. The name will be used in alerts and used as a key when writing and reading search metadata back from Elasticsearch. (Required, string, no default)

##### type[¶](#type)

`type`: The `RuleType` to use. This may either be one of the built in rule types, see [Rule Types](#ruletypes) section below for more information, or loaded from a module. For loading from a module, the type should be specified as `module.file.RuleName`. (Required, string, no default)

##### alert[¶](#alert)

`alert`: The `Alerter` type to use. This may be one or more of the built in alerts, see [Alert Types](#alerts) section below for more information, or loaded from a module. For loading from a module, the alert should be specified as `module.file.AlertName`.
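Collecting just the required settings above into a rule header might look like the following sketch (values are illustrative; a real rule also needs its type-specific options, e.g. `num_events` and `timeframe` for the `frequency` type):

```yaml
es_host: elasticsearch.example.com
es_port: 9200
index: logstash-*          # or a strftime pattern plus use_strftime_index: true
name: Unique example rule  # must be unique across all rules
type: frequency            # a built-in type, or module.file.RuleName
alert:
  - email
```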
(Required, string or list, no default)

#### Optional Settings[¶](#optional-settings)

##### import[¶](#import)

`import`: If specified, includes all the settings from this yaml file. This allows common config options to be shared. Note that imported files that aren’t complete rules should not have a `.yml` or `.yaml` suffix so that ElastAlert doesn’t treat them as rules. Filters in imported files are merged (ANDed) with any filters in the rule. You can only have one import per rule, though the imported file can import another file, recursively. The filename can be an absolute path or relative to the rules directory. (Optional, string, no default)

##### use_ssl[¶](#use-ssl)

`use_ssl`: Whether or not to connect to `es_host` using TLS. (Optional, boolean, default False)

The environment variable `ES_USE_SSL` will override this field.

##### verify_certs[¶](#verify-certs)

`verify_certs`: Whether or not to verify TLS certificates. (Optional, boolean, default True)

##### client_cert[¶](#client-cert)

`client_cert`: Path to a PEM certificate to use as the client certificate. (Optional, string, no default)

##### client_key[¶](#client-key)

`client_key`: Path to a private key file to use as the client key. (Optional, string, no default)

##### ca_certs[¶](#ca-certs)

`ca_certs`: Path to a CA cert bundle to use to verify SSL connections. (Optional, string, no default)

##### es_username[¶](#es-username)

`es_username`: Basic-auth username for connecting to `es_host`. (Optional, string, no default)

The environment variable `ES_USERNAME` will override this field.

##### es_password[¶](#es-password)

`es_password`: Basic-auth password for connecting to `es_host`. (Optional, string, no default)

The environment variable `ES_PASSWORD` will override this field.

##### es_url_prefix[¶](#es-url-prefix)

`es_url_prefix`: URL prefix for the Elasticsearch endpoint. (Optional, string, no default)

##### es_send_get_body_as[¶](#es-send-get-body-as)

`es_send_get_body_as`: Method for querying Elasticsearch.
(Optional, string, default “GET”)

##### use_strftime_index[¶](#use-strftime-index)

`use_strftime_index`: If this is true, ElastAlert will format the index using datetime.strftime for each query. See <https://docs.python.org/2/library/datetime.html#strftime-strptime-behavior> for more details. If a query spans multiple days, the formatted indexes will be concatenated with commas. This is useful as narrowing the number of indexes searched, compared to using a wildcard, may be significantly faster. For example, if `index` is `logstash-%Y.%m.%d`, the query url will be similar to `elasticsearch.example.com/logstash-2015.02.03/...` or `elasticsearch.example.com/logstash-2015.02.03,logstash-2015.02.04/...`.

##### search_extra_index[¶](#search-extra-index)

`search_extra_index`: If this is true, ElastAlert will add an extra index on the early side onto each search. For example, if it’s querying completely within 2018-06-28, it will actually use 2018-06-27,2018-06-28. This can be useful if your timestamp_field is not what’s being used to generate the index names; in that case, a query might otherwise miss the right index.

##### aggregation[¶](#aggregation)

`aggregation`: This option allows you to aggregate multiple matches together into one alert. Every time a match is found, ElastAlert will wait for the `aggregation` period, and send all of the matches that have occurred in that time for a particular rule together.

For example:

```
aggregation:
  hours: 2
```

means that if one match occurred at 12:00, another at 1:00, and a third at 2:30, one alert would be sent at 2:00, containing the first two matches, and another at 4:30, containing the third match plus any additional matches occurring before 4:30. This can be very useful if you expect a large number of matches and only want a periodic report. (Optional, time, default none)

If you wish to aggregate all your alerts and send them on a recurring interval, you can do that using the `schedule` field.
For example, if you wish to receive alerts every Monday and Friday:

```
aggregation:
  schedule: '2 4 * * mon,fri'
```

This uses Cron syntax, which you can read more about [here](http://www.nncron.ru/help/EN/working/cron-format.htm). Make sure to only include either a schedule field or standard datetime fields (such as `hours`, `minutes`, `days`), not both.

By default, all events that occur during an aggregation window are grouped together. However, if your rule has the `aggregation_key` field set, then each event sharing a common key value will be grouped together. A separate aggregation window will be made for each newly encountered key value.

For example, if you wish to receive alerts that are grouped by the user who triggered the event, you can set:

```
aggregation_key: 'my_data.username'
```

Then, assuming an aggregation window of 10 minutes, if you receive the following data points:

```
{'my_data': {'username': 'alice', 'event_type': 'login'}, '@timestamp': '2016-09-20T00:00:00'}
{'my_data': {'username': 'bob', 'event_type': 'something'}, '@timestamp': '2016-09-20T00:05:00'}
{'my_data': {'username': 'alice', 'event_type': 'something else'}, '@timestamp': '2016-09-20T00:06:00'}
```

This should result in 2 alerts: one containing alice’s two events, sent at `2016-09-20T00:10:00`, and one containing bob’s one event, sent at `2016-09-20T00:16:00`.

For aggregations, there can sometimes be a large number of documents present in the viewing medium (email, jira ticket, etc..). If you set the `summary_table_fields` field, Elastalert will provide a summary of the specified fields from all the results.
For example, if you wish to summarize the usernames and event_types that appear in the documents so that you can see the most relevant fields at a quick glance, you can set:

```
summary_table_fields:
    - my_data.username
    - my_data.event_type
```

Then, for the same sample data shown above listing alice and bob’s events, Elastalert will provide the following summary table in the alert medium:

```
+------------------+--------------------+
| my_data.username | my_data.event_type |
+------------------+--------------------+
| alice            | login              |
| bob              | something          |
| alice            | something else     |
+------------------+--------------------+
```

Note

By default, aggregation time is relative to the current system time, not the time of the match. This means that running elastalert over past events will result in different alerts than if elastalert had been running while those events occurred. This behavior can be changed by setting `aggregate_by_match_time`.

##### aggregate_by_match_time[¶](#aggregate-by-match-time)

Setting this to true will cause aggregations to be created relative to the timestamp of the first event, rather than the current time. This is useful for querying over historic data or if using a very large buffer_time and you want multiple aggregations to occur from a single query.

##### realert[¶](#realert)

`realert`: This option allows you to ignore repeating alerts for a period of time. If the rule uses a `query_key`, this option will be applied on a per key basis. All matches for a given rule, or for matches with the same `query_key`, will be ignored for the given time. All matches with a missing `query_key` will be grouped together using a value of `_missing`. This is applied to the time the alert is sent, not to the time of the event. It defaults to one minute, which means that if ElastAlert is run over a large time period which triggers many matches, only the first alert will be sent by default. If you want every alert, set realert to 0 minutes.
(Optional, time, default 1 minute)

##### exponential_realert

`exponential_realert`: This option causes the value of `realert` to exponentially increase while alerts continue to fire. If set, the value of `exponential_realert` is the maximum `realert` will increase to. If the time between alerts is less than twice `realert`, `realert` will double. For example, if `realert: minutes: 10` and `exponential_realert: hours: 1`, and an alert fires at 1:00 and another at 1:15, the next alert will not be sent until at least 1:35. If another alert fires between 1:35 and 2:15, `realert` will increase to the 1 hour maximum. If more than 2 hours elapse before the next alert, `realert` will go back down. Note that alerts that are ignored (e.g. one that occurred at 1:05) would not change `realert`. (Optional, time, no default)

##### buffer_time

`buffer_time`: This option allows the rule to override the `buffer_time` global setting defined in config.yaml. This value is ignored if `use_count_query` or `use_terms_query` is true. (Optional, time)

##### query_delay

`query_delay`: This option will cause ElastAlert to subtract a time delta from every query, causing the rule to run with a delay. This is useful if the data in Elasticsearch doesn't get indexed immediately. (Optional, time)

##### owner

`owner`: This value will be used to identify the stakeholder of the alert. Optionally, this field can be included in any alert type. (Optional, string)

##### priority

`priority`: This value will be used to identify the relative priority of the alert. Optionally, this field can be included in any alert type (e.g. for use in email subject/body text). (Optional, int, default 2)

##### category

`category`: This value will be used to identify the category of the alert. Optionally, this field can be included in any alert type (e.g. for use in email subject/body text).
(Optional, string, default empty string)

##### max_query_size

`max_query_size`: The maximum number of documents that will be downloaded from Elasticsearch in a single query. If you expect a large number of results, consider using `use_count_query` for the rule. If this limit is reached, a warning will be logged but ElastAlert will continue without downloading more results. This setting will override a global `max_query_size`. (Optional, int, default value of global `max_query_size`)

##### filter

`filter`: A list of Elasticsearch query DSL filters that is used to query Elasticsearch. ElastAlert will query Elasticsearch using the format `{'filter': {'bool': {'must': [config.filter]}}}` with an additional timestamp range filter. All of the results of querying with these filters are passed to the `RuleType` for analysis. For more information on writing filters, see [Writing Filters](index.html#writingfilters). (Required, Elasticsearch query DSL, no default)

##### include

`include`: A list of terms that should be included in query results and passed to rule types and alerts. When set, only those fields, along with `@timestamp`, `query_key`, `compare_key`, and `top_count_keys`, are included, if present. (Optional, list of strings, default all fields)

##### top_count_keys

`top_count_keys`: A list of fields. ElastAlert will perform a terms query for the top X most common values for each of the fields, where X is 5 by default, or `top_count_number` if it exists. For example, if `num_events` is 100, and `top_count_keys` is `- "username"`, the alert will say how many of the 100 events have each username, for the top 5 usernames. When this is computed, the time range used is from `timeframe` before the most recent event to 10 minutes past the most recent event.
Because ElastAlert uses an aggregation query to compute this, it will attempt to use the field name plus ".raw" to count unanalyzed terms. To turn this off, set `raw_count_keys` to false.

##### top_count_number

`top_count_number`: The number of terms to list if `top_count_keys` is set. (Optional, integer, default 5)

##### raw_count_keys

`raw_count_keys`: If true, all fields in `top_count_keys` will have `.raw` appended to them. (Optional, boolean, default true)

##### description

`description`: Text describing the purpose of the rule. Can be referenced in custom alerters to provide context as to why a rule might trigger. (Optional, string, default empty string)

##### generate_kibana_link

`generate_kibana_link`: This option is for Kibana 3 only. If true, ElastAlert will generate a temporary Kibana dashboard and include a link to it in alerts. The dashboard consists of an events-over-time graph and a table with the `include` fields selected. If the rule uses `query_key`, the dashboard will also contain a filter for the `query_key` of the alert. The dashboard schema will be uploaded to the kibana-int index as a temporary dashboard. (Optional, boolean, default False)

##### kibana_url

`kibana_url`: The URL to access Kibana. This will be used if `generate_kibana_link` or `use_kibana_dashboard` is true. If not specified, a URL will be constructed using `es_host` and `es_port`. (Optional, string, default `http://<es_host>:<es_port>/_plugin/kibana/`)

##### use_kibana_dashboard

`use_kibana_dashboard`: The name of a Kibana 3 dashboard to link to. Instead of generating a dashboard from a template, ElastAlert can use an existing dashboard. It will set the time range on the dashboard to around the match time, upload it as a temporary dashboard, add a filter to the `query_key` of the alert if applicable, and put the URL to the dashboard in the alert.
(Optional, string, no default)

##### use_kibana4_dashboard

`use_kibana4_dashboard`: A link to a Kibana 4 dashboard. For example, "<https://kibana.example.com/#/dashboard/My-Dashboard>". This will set the time setting on the dashboard from the match time minus the timeframe, to 10 minutes after the match time. Note that this does not support filtering by `query_key` like Kibana 3. This value can use $VAR and ${VAR} references to expand environment variables.

##### kibana4_start_timedelta

`kibana4_start_timedelta`: Defaults to 10 minutes. This option allows you to specify the start time for the generated Kibana 4 dashboard; it sets how far before the event the dashboard's time range begins. For example: `kibana4_start_timedelta: minutes: 2`

##### kibana4_end_timedelta

`kibana4_end_timedelta`: Defaults to 10 minutes. This option allows you to specify the end time for the generated Kibana 4 dashboard; it sets how far after the event the dashboard's time range ends. For example: `kibana4_end_timedelta: minutes: 2`

##### use_local_time

`use_local_time`: Whether to convert timestamps to the local time zone in alerts. If false, timestamps will be converted to UTC, which is what ElastAlert uses internally. (Optional, boolean, default true)

##### match_enhancements

`match_enhancements`: A list of enhancement modules to use with this rule. An enhancement module is a subclass of `enhancements.BaseEnhancement` that will be given the match dictionary and can modify it before it is passed to the alerter. The enhancements will be run after silence and realert are calculated, and in the case of aggregated alerts, right before the alert is sent. This can be changed by setting `run_enhancements_first`. The enhancements should be specified as `module.file.EnhancementName`. See [Enhancements](index.html#enhancements) for more information.
(Optional, list of strings, no default)

##### run_enhancements_first

`run_enhancements_first`: If set to true, enhancements will be run as soon as a match is found. This means that matches can be changed or dropped before affecting realert or being added to an aggregation. Silence stashes will still be created before the enhancement runs, meaning that even if a `DropMatchException` is raised, the rule will still be silenced. (Optional, boolean, default false)

##### query_key

`query_key`: Having a query key means that realert time will be counted separately for each unique value of `query_key`. For rule types which count documents, such as spike, frequency and flatline, it also means that these counts will be independent for each unique value of `query_key`. For example, if `query_key` is set to `username` and `realert` is set, and an alert triggers on a document with `{'username': 'bob'}`, additional alerts for `{'username': 'bob'}` will be ignored while other usernames will trigger alerts. Documents which are missing the `query_key` will be grouped together. A list of fields may also be used, which will create a compound query key. This compound key is treated as if it were a single field whose value is the component values, or "None", joined by commas. A new field with the key "field1,field2,etc" will be created in each document and may conflict with existing fields of the same name.

##### aggregation_key

`aggregation_key`: Having an aggregation key in conjunction with an aggregation will make it so that each new value encountered for the `aggregation_key` field will result in a new, separate aggregation window.
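For illustration, a compound `query_key` and an `aggregation_key` as described above might be combined in a rule fragment like the following sketch (all field names are hypothetical):

```
# Hypothetical fragment: realert is tracked per (username, hostname) pair,
# and aggregated alerts are split per source host.
query_key:
  - username
  - hostname
aggregation:
  minutes: 10
aggregation_key: 'my_data.source_host'
```

With this configuration, each match would also gain a "username,hostname" field holding the joined compound key value.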
##### summary_table_fields

`summary_table_fields`: Specifying `summary_table_fields` in conjunction with an aggregation will make it so that each aggregated alert will contain a table summarizing the values for the specified fields in all the matches that were aggregated together.

##### timestamp_type

`timestamp_type`: One of `iso`, `unix`, `unix_ms`, `custom`. This option will set the type of `@timestamp` (or `timestamp_field`) used to query Elasticsearch. `iso` will use ISO8601 timestamps, which will work with most Elasticsearch date type fields. `unix` will query using an integer unix timestamp (seconds since 1/1/1970). `unix_ms` will use a milliseconds unix timestamp. `custom` allows you to define your own `timestamp_format`. The default is `iso`. (Optional, string enum, default iso)

##### timestamp_format

`timestamp_format`: In case Elasticsearch uses a custom date format for a date type field, this option provides a way to define a custom timestamp format to match the type used for the Elasticsearch date type field. This option is only valid if `timestamp_type` is set to `custom`. (Optional, string, default '%Y-%m-%dT%H:%M:%SZ')

##### timestamp_format_expr

`timestamp_format_expr`: In case Elasticsearch uses a custom date format for a date type field, this option provides a way to adapt the value obtained by converting a datetime through `timestamp_format`, when the format cannot perfectly match what is defined in Elasticsearch. When set, this option is evaluated as a Python expression along with a *globals* dictionary containing the original datetime instance named `dt` and the timestamp to be refined, named `ts`. The returned value becomes the timestamp obtained from the datetime.
For example, when the date type field in Elasticsearch uses milliseconds (`yyyy-MM-dd'T'HH:mm:ss.SSS'Z'`) and the `timestamp_format` option is `'%Y-%m-%dT%H:%M:%S.%fZ'`, Elasticsearch would fail to parse query terms as they contain microsecond values - that is, it gets 6 digits instead of 3 - since the `%f` placeholder stands for microseconds in Python *strftime* method calls. Setting `timestamp_format_expr: 'ts[:23] + ts[26:]'` will truncate the value to milliseconds, granting Elasticsearch compatibility. This option is only valid if `timestamp_type` is set to `custom`. (Optional, string, no default)

##### _source_enabled

`_source_enabled`: If true, ElastAlert will use `_source` to retrieve fields from documents in Elasticsearch. If false, ElastAlert will use `fields` to retrieve stored fields. Both of these are represented internally as if they came from `_source`. See <https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-fields.html> for more details. The fields used come from `include`; see above for more details. (Optional, boolean, default True)

##### scan_entire_timeframe

`scan_entire_timeframe`: If true, when ElastAlert starts, it will always start querying at the current time minus the timeframe. `timeframe` must exist in the rule. This may be useful, for example, if you are using a flatline rule type with a large timeframe, and you want to be sure that if ElastAlert restarts, you can still get alerts. This may cause duplicate alerts for some rule types; for example, Frequency can alert multiple times in a single timeframe, and if ElastAlert were to restart with this setting, it may scan the same range again, triggering duplicate alerts.

Some rules and alerts require additional options, which also go in the top level of the rule configuration file.

### Testing Your Rule

Once you've written a rule configuration, you will want to validate it.
To do so, you can either run ElastAlert in debug mode, or use `elastalert-test-rule`, which is a script that makes various aspects of testing easier. It can:

* Check that the configuration file loaded successfully.
* Check that the Elasticsearch filter parses.
* Run against the last X day(s) and show the number of hits that match your filter.
* Show the available terms in one of the results.
* Save documents returned to a JSON file.
* Run ElastAlert using either a JSON file or actual results from Elasticsearch.
* Print out debug alerts or trigger real alerts.
* Check that, if they exist, the primary_key, compare_key and include terms are in the results.
* Show what metadata documents would be written to `elastalert_status`.

Without any optional arguments, it will run ElastAlert over the last 24 hours and print out any alerts that would have occurred. Here is an example test run which triggered an alert:

```
$ elastalert-test-rule my_rules/rule1.yaml
Successfully Loaded Example rule1
Got 105 hits from the last 1 day

Available terms in first hit:
    @timestamp
    field1
    field2
    ...
Included term this_field_doesnt_exist may be missing or null

INFO:root:Queried rule Example rule1 from 6-16 15:21 PDT to 6-17 15:21 PDT: 105 hits
INFO:root:Alert for Example rule1 at 2015-06-16T23:53:12Z:
INFO:root:Example rule1

At least 50 events occurred between 6-16 18:30 PDT and 6-16 20:30 PDT

field1:
value1: 25
value2: 25

@timestamp: 2015-06-16T20:30:04-07:00
field1: value1
field2: something

Would have written the following documents to elastalert_status:
silence - {'rule_name': 'Example rule1', '@timestamp': datetime.datetime( ... ), 'exponent': 0, 'until': datetime.datetime( ... )}
elastalert_status - {'hits': 105, 'matches': 1, '@timestamp': datetime.datetime( ... ), 'rule_name': 'Example rule1', 'starttime': datetime.datetime( ... ), 'endtime': datetime.datetime( ...
), 'time_taken': 3.1415926}
```

Note that everything between "Alert for Example rule1 at ..." and "Would have written the following ..." is the exact text body that an alert would have. See the section below on alert content for more details. Also note that datetime objects are converted to ISO8601 timestamps when uploaded to Elasticsearch. See [the section on metadata](index.html#metadata) for more details.

Other options include:

`--schema-only`: Only perform schema validation on the file. It will not load modules or query Elasticsearch. This may catch invalid YAML and missing or misconfigured fields.

`--count-only`: Only find the number of matching documents and list available fields. ElastAlert will not be run and documents will not be downloaded.

`--days N`: Instead of the default 1 day, query N days. For selecting more specific time ranges, you must run ElastAlert itself and use `--start` and `--end`.

`--save-json FILE`: Save all documents downloaded to a file as JSON. This is useful if you wish to modify data while testing or do offline testing in conjunction with `--data FILE`. A maximum of 10,000 documents will be downloaded.

`--data FILE`: Use a JSON file as a data source instead of Elasticsearch. The file should be a single list containing objects, rather than objects on separate lines. Note that this uses mock functions which mimic some Elasticsearch query methods and is not guaranteed to have the exact same results as with Elasticsearch. For example, analyzed string fields may behave differently.

`--alert`: Trigger real alerts instead of the debug (logging text) alert.

`--formatted-output`: Output results in formatted JSON.

Note

Results from running this script may not always be the same as if an actual ElastAlert instance was running. Some rule types, such as spike and flatline, require a minimum elapsed time before they begin alerting, based on their timeframe.
In addition, `use_count_query` and `use_terms_query` rely on `run_every` to determine their resolution. This script uses a fixed 5 minute window, which is the same as the default.

### Rule Types

The various `RuleType` classes, defined in `elastalert/ruletypes.py`, form the main logic behind ElastAlert. An instance is held in memory for each rule, passed all of the data returned by querying Elasticsearch with a given filter, and generates matches based on that data.

To select a rule type, set the `type` option to the name of the rule type in the rule configuration file: `type: <rule type>`

#### Any

`any`: The any rule will match everything. Every hit that the query returns will generate an alert.

#### Blacklist

`blacklist`: The blacklist rule will check a certain field against a blacklist, and match if it is in the blacklist. This rule requires two additional options:

`compare_key`: The name of the field to use to compare to the blacklist. If the field is null, those events will be ignored.

`blacklist`: A list of blacklisted values, and/or a list of paths to flat files which contain the blacklisted values using `- "!file /path/to/file"`; for example:

```
blacklist:
    - value1
    - value2
    - "!file /tmp/blacklist1.txt"
    - "!file /tmp/blacklist2.txt"
```

It is possible to mix both kinds of blacklist value definitions, or use either one alone. The `compare_key` term must be equal to one of these values for it to match.

#### Whitelist

`whitelist`: Similar to `blacklist`, this rule will compare a certain field to a whitelist, and match if the list does not contain the term. This rule requires three additional options:

`compare_key`: The name of the field to use to compare to the whitelist.

`ignore_null`: If true, events without a `compare_key` field will not match.
`whitelist`: A list of whitelisted values, and/or a list of paths to flat files which contain the whitelisted values using `- "!file /path/to/file"`; for example:

```
whitelist:
    - value1
    - value2
    - "!file /tmp/whitelist1.txt"
    - "!file /tmp/whitelist2.txt"
```

It is possible to mix both kinds of whitelist value definitions, or use either one alone. The `compare_key` term must be in this list or else it will match.

#### Change

For an example configuration file using this rule type, look at `example_rules/example_change.yaml`.

`change`: This rule will monitor a certain field and match if that field changes. The field must change with respect to the last event with the same `query_key`. This rule requires three additional options:

`compare_key`: The names of the fields to monitor for changes. Since this is a list of strings, multiple keys may be given. An alert will trigger if any of the fields change.

`ignore_null`: If true, events without a `compare_key` field will not count as changed. Currently this checks for all the fields in `compare_key`.

`query_key`: This rule is applied on a per-`query_key` basis. This field must be present in all of the events that are checked.

There is also an optional field:

`timeframe`: The maximum time between changes. After this time period, ElastAlert will forget the old value of the `compare_key` field.

#### Frequency

For an example configuration file using this rule type, look at `example_rules/example_frequency.yaml`.

`frequency`: This rule matches when there are at least a certain number of events in a given time frame. This may be counted on a per-`query_key` basis. This rule requires two additional options:

`num_events`: The number of events which will trigger an alert, inclusive.

`timeframe`: The time that `num_events` must occur within.

Optional:

`use_count_query`: If true, ElastAlert will poll Elasticsearch using the count api, and not download all of the matching documents.
This is useful if you care only about numbers and not the actual data. It should also be used if you expect a large number of query hits, in the order of tens of thousands or more. `doc_type` must be set to use this.

`doc_type`: Specify the `_type` of document to search for. This must be present if `use_count_query` or `use_terms_query` is set.

`use_terms_query`: If true, ElastAlert will make an aggregation query against Elasticsearch to get counts of documents matching each unique value of `query_key`. This must be used with `query_key` and `doc_type`. This will only return a maximum of `terms_size`, default 50, unique terms.

`terms_size`: When used with `use_terms_query`, this is the maximum number of terms returned per query. Default is 50.

`query_key`: Counts of documents will be stored independently for each value of `query_key`. Only `num_events` documents, all with the same value of `query_key`, will trigger an alert.

`attach_related`: Will attach all the related events to the event that triggered the frequency alert. For example, in an alert triggered with `num_events: 3`, the third event will trigger the alert on itself and add the other two events in a key named `related_events` that can be accessed in the alerter.

#### Spike

`spike`: This rule matches when the volume of events during a given time period is `spike_height` times larger or smaller than during the previous time period. It uses two sliding windows to compare the current and reference frequency of events. We will call these two windows "reference" and "current". This rule requires three additional options:

`spike_height`: The ratio of the number of events in the last `timeframe` to the previous `timeframe` that, when hit, will trigger an alert.

`spike_type`: Either 'up', 'down' or 'both'. 'Up' meaning the rule will only match when the number of events is `spike_height` times higher. 'Down' meaning the reference number is `spike_height` higher than the current number. 'Both' will match either.
`timeframe`: The rule will average out the rate of events over this time period. For example, `hours: 1` means that the 'current' window will span from the present to one hour ago, and the 'reference' window will span from one hour ago to two hours ago. The rule will not be active until the time elapsed since the first event is at least two timeframes. This is to prevent an alert from being triggered before a baseline rate has been established. This can be overridden using `alert_on_new_data`.

Optional:

`field_value`: When set, uses the value of the field in the document and not the number of matching documents. This is useful to monitor, for example, a temperature sensor and raise an alarm if the temperature grows too fast. Note that the means of the field on the reference and current windows are used to determine if the `spike_height` value is reached. Note also that the threshold parameters are ignored in this mode.

`threshold_ref`: The minimum number of events that must exist in the reference window for an alert to trigger. For example, if `spike_height: 3` and `threshold_ref: 10`, then the 'reference' window must contain at least 10 events and the 'current' window at least three times that for an alert to be triggered.

`threshold_cur`: The minimum number of events that must exist in the current window for an alert to trigger. For example, if `spike_height: 3` and `threshold_cur: 60`, then an alert will occur if the current window has more than 60 events and the reference window has less than a third as many.

To illustrate the use of `threshold_ref`, `threshold_cur`, `alert_on_new_data`, `timeframe` and `spike_height` together, consider the following examples:

```
" Alert if at least 15 events occur within two hours and less than a quarter of that number occurred within the previous two hours.
"

timeframe:
  hours: 2
spike_height: 4
spike_type: up
threshold_cur: 15

hour1: 5 events (ref: 0, cur: 5) - No alert because (a) threshold_cur not met, (b) ref window not filled
hour2: 5 events (ref: 0, cur: 10) - No alert because (a) threshold_cur not met, (b) ref window not filled
hour3: 10 events (ref: 5, cur: 15) - No alert because (a) spike_height not met, (b) ref window not filled
hour4: 35 events (ref: 10, cur: 45) - Alert because (a) spike_height met, (b) threshold_cur met, (c) ref window filled

hour1: 20 events (ref: 0, cur: 20) - No alert because ref window not filled
hour2: 21 events (ref: 0, cur: 41) - No alert because ref window not filled
hour3: 19 events (ref: 20, cur: 40) - No alert because (a) spike_height not met, (b) ref window not filled
hour4: 23 events (ref: 41, cur: 42) - No alert because spike_height not met

hour1: 10 events (ref: 0, cur: 10) - No alert because (a) threshold_cur not met, (b) ref window not filled
hour2: 0 events (ref: 0, cur: 10) - No alert because (a) threshold_cur not met, (b) ref window not filled
hour3: 0 events (ref: 10, cur: 0) - No alert because (a) threshold_cur not met, (b) ref window not filled, (c) spike_height not met
hour4: 30 events (ref: 10, cur: 30) - No alert because spike_height not met
hour5: 5 events (ref: 0, cur: 35) - Alert because (a) spike_height met, (b) threshold_cur met, (c) ref window filled

" Alert if at least 5 events occur within two hours, and twice as many events occur within the next two hours.
"

timeframe:
  hours: 2
spike_height: 2
spike_type: up
threshold_ref: 5

hour1: 20 events (ref: 0, cur: 20) - No alert because (a) threshold_ref not met, (b) ref window not filled
hour2: 100 events (ref: 0, cur: 120) - No alert because (a) threshold_ref not met, (b) ref window not filled
hour3: 100 events (ref: 20, cur: 200) - No alert because ref window not filled
hour4: 100 events (ref: 120, cur: 200) - No alert because spike_height not met

hour1: 0 events (ref: 0, cur: 0) - No alert because (a) threshold_ref not met, (b) ref window not filled
hour2: 20 events (ref: 0, cur: 20) - No alert because (a) threshold_ref not met, (b) ref window not filled
hour3: 100 events (ref: 0, cur: 120) - No alert because (a) threshold_ref not met, (b) ref window not filled
hour4: 100 events (ref: 20, cur: 200) - Alert because (a) spike_height met, (b) threshold_ref met, (c) ref window filled

hour1: 1 events (ref: 0, cur: 1) - No alert because (a) threshold_ref not met, (b) ref window not filled
hour2: 2 events (ref: 0, cur: 3) - No alert because (a) threshold_ref not met, (b) ref window not filled
hour3: 2 events (ref: 1, cur: 4) - No alert because (a) threshold_ref not met, (b) ref window not filled
hour4: 1000 events (ref: 3, cur: 1002) - No alert because threshold_ref not met
hour5: 2 events (ref: 4, cur: 1002) - No alert because threshold_ref not met
hour6: 4 events (ref: 1002, cur: 6) - No alert because spike_height not met

hour1: 1000 events (ref: 0, cur: 1000) - No alert because (a) threshold_ref not met, (b) ref window not filled
hour2: 0 events (ref: 0, cur: 1000) - No alert because (a) threshold_ref not met, (b) ref window not filled
hour3: 0 events (ref: 1000, cur: 0) - No alert because (a) spike_height not met, (b) ref window not filled
hour4: 0 events (ref: 1000, cur: 0) - No alert because spike_height not met
hour5: 1000 events (ref: 0, cur: 1000) - No alert because threshold_ref not met
hour6: 1050 events (ref: 0, cur: 2050) - No alert because threshold_ref not met
hour7: 1075 events (ref: 1000, cur: 2125) - Alert because (a) spike_height met, (b) threshold_ref met, (c) ref window filled

" Alert if at least 100 events occur within two hours and less than a fifth of that number occurred in the previous two hours. "

timeframe:
  hours: 2
spike_height: 5
spike_type: up
threshold_cur: 100

hour1: 1000 events (ref: 0, cur: 1000) - No alert because ref window not filled

hour1: 2 events (ref: 0, cur: 2) - No alert because (a) threshold_cur not met, (b) ref window not filled
hour2: 1 events (ref: 0, cur: 3) - No alert because (a) threshold_cur not met, (b) ref window not filled
hour3: 20 events (ref: 2, cur: 21) - No alert because (a) threshold_cur not met, (b) ref window not filled
hour4: 81 events (ref: 3, cur: 101) - Alert because (a) spike_height met, (b) threshold_cur met, (c) ref window filled

hour1: 10 events (ref: 0, cur: 10) - No alert because (a) threshold_cur not met, (b) ref window not filled
hour2: 20 events (ref: 0, cur: 30) - No alert because (a) threshold_cur not met, (b) ref window not filled
hour3: 40 events (ref: 10, cur: 60) - No alert because (a) threshold_cur not met, (b) ref window not filled
hour4: 80 events (ref: 30, cur: 120) - No alert because spike_height not met
hour5: 200 events (ref: 60, cur: 280) - No alert because spike_height not met
```

`alert_on_new_data`: This option is only used if `query_key` is set. When this is set to true, any new `query_key` encountered may trigger an immediate alert. When set to false, a baseline must be established for each new `query_key` value, and then subsequent spikes may cause alerts. A baseline is established after `timeframe` has elapsed twice since the first occurrence.

`use_count_query`: If true, ElastAlert will poll Elasticsearch using the count api, and not download all of the matching documents. This is useful if you care only about numbers and not the actual data. It should also be used if you expect a large number of query hits, in the order of tens of thousands or more.
`doc_type` must be set to use this.

`doc_type`: Specify the `_type` of document to search for. This must be present if `use_count_query` or `use_terms_query` is set.

`use_terms_query`: If true, ElastAlert will make an aggregation query against Elasticsearch to get counts of documents matching each unique value of `query_key`. This must be used with `query_key` and `doc_type`. This will only return a maximum of `terms_size`, default 50, unique terms.

`terms_size`: When used with `use_terms_query`, this is the maximum number of terms returned per query. Default is 50.

`query_key`: Counts of documents will be stored independently for each value of `query_key`.

#### Flatline

`flatline`: This rule matches when the total number of events is under a given `threshold` for a time period. This rule requires two additional options:

`threshold`: The minimum number of events for an alert not to be triggered.

`timeframe`: The time period that must contain less than `threshold` events.

Optional:

`use_count_query`: If true, ElastAlert will poll Elasticsearch using the count api, and not download all of the matching documents. This is useful if you care only about numbers and not the actual data. It should also be used if you expect a large number of query hits, in the order of tens of thousands or more. `doc_type` must be set to use this.

`doc_type`: Specify the `_type` of document to search for. This must be present if `use_count_query` or `use_terms_query` is set.

`use_terms_query`: If true, ElastAlert will make an aggregation query against Elasticsearch to get counts of documents matching each unique value of `query_key`. This must be used with `query_key` and `doc_type`. This will only return a maximum of `terms_size`, default 50, unique terms.

`terms_size`: When used with `use_terms_query`, this is the maximum number of terms returned per query. Default is 50.
`query_key`: With the flatline rule, `query_key` means that an alert will be triggered if any value of `query_key` has been seen at least once and then falls below the threshold.

`forget_keys`: Only valid when used with `query_key`. If this is set to true, ElastAlert will "forget" about the `query_key` value that triggers an alert, therefore preventing any more alerts for it until it's seen again.

#### New Term

`new_term`: This rule matches when a new value appears in a field that has never been seen before. When ElastAlert starts, it will use an aggregation query to gather all known terms for a list of fields. This rule requires one additional option:

`fields`: A list of fields to monitor for new terms. `query_key` will be used if `fields` is not set. Each entry in the list of fields can itself be a list. If a field entry is provided as a list, it will be interpreted as a set of fields that compose a composite key used for the Elasticsearch query.

Note

The composite fields may only refer to primitive types, otherwise the initial Elasticsearch query will not properly return the aggregation results, thus causing alerts to fire every time the ElastAlert service initially launches with the rule. A warning will be logged to the console if this scenario is encountered. However, future alerts will actually work as expected after the initial flurry.

Optional:

`terms_window_size`: The amount of time used for the initial query to find existing terms. No term that has occurred within this time frame will trigger an alert. The default is 30 days.

`window_step_size`: When querying for existing terms, split up the time range into steps of this size. For example, using the default 30 day window size, and the default 1 day step size, 30 individual queries will be made. This helps to avoid timeouts for very expensive aggregation queries. The default is 1 day.

`alert_on_missing_field`: Whether or not to alert when a field is missing from a document. The default is false.
`use_terms_query`: If true, ElastAlert will use aggregation queries to get terms instead of regular search queries. This is faster than regular searching if there is a large number of documents. If this is used, you may only specify a single field, and must also set `query_key` to that field. Also, note that `terms_size` (the number of buckets returned per query) defaults to 50. This means that if a new term appears but there are at least 50 terms which appear more frequently, it will not be found.

Note: When using `use_terms_query`, make sure that the field you are using is not analyzed. If it is, the results of each terms query may return tokens rather than full values. ElastAlert will by default turn on `use_keyword_postfix`, which attempts to use the non-analyzed version (.keyword or .raw) to gather initial terms. These will not match the partial values and result in false positives.

`use_keyword_postfix`: If true, ElastAlert will automatically try to add .keyword (ES5+) or .raw to the fields when making an initial query. These are non-analyzed fields added by Logstash. If the field used is analyzed, the initial query will return only the tokenized values, potentially causing false positives. Defaults to true.

#### Cardinality[¶](#cardinality)

`cardinality`: This rule matches when the total number of unique values for a certain field within a time frame is higher or lower than a threshold.

This rule requires:

`timeframe`: The time period in which the number of unique values will be counted.

`cardinality_field`: Which field to count the cardinality for.

This rule requires one of the two following options:

`max_cardinality`: If the cardinality of the data is greater than this number, an alert will be triggered. Each new event that raises the cardinality will trigger an alert.

`min_cardinality`: If the cardinality of the data is lower than this number, an alert will be triggered. The `timeframe` must have elapsed since the first event before any alerts will be sent.
When a match occurs, the `timeframe` will be reset and must elapse again before additional alerts will be sent.

Optional:

`query_key`: Group cardinality counts by this field. For each unique value of the `query_key` field, cardinality will be counted separately.

#### Metric Aggregation[¶](#metric-aggregation)

`metric_aggregation`: This rule matches when the value of a metric within the calculation window is higher or lower than a threshold. By default the calculation window is `buffer_time`.

This rule requires:

`metric_agg_key`: This is the name of the field over which the metric value will be calculated. The underlying type of this field must be supported by the specified aggregation type.

`metric_agg_type`: The type of metric aggregation to perform on the `metric_agg_key` field. This must be one of ‘min’, ‘max’, ‘avg’, ‘sum’, ‘cardinality’, ‘value_count’.

`doc_type`: Specify the `_type` of document to search for.

This rule also requires at least one of the two following options:

`max_threshold`: If the calculated metric value is greater than this number, an alert will be triggered. This threshold is exclusive.

`min_threshold`: If the calculated metric value is less than this number, an alert will be triggered. This threshold is exclusive.

Optional:

`query_key`: Group metric calculations by this field. For each unique value of the `query_key` field, the metric will be calculated and evaluated separately against the threshold(s).

`min_doc_count`: The minimum number of events in the current window needed for an alert to trigger. Used in conjunction with `query_key`, this will only consider terms which in their last `buffer_time` had at least `min_doc_count` records. Default 1.

`use_run_every_query_size`: By default the metric value is calculated over a `buffer_time` sized window. If this parameter is true the rule will use `run_every` as the calculation window.
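As a sketch, a `metric_aggregation` rule that alerts when average latency exceeds a threshold might look like this (the rule name, index pattern, field names, and threshold are hypothetical):

```
name: high-average-latency
type: metric_aggregation
index: logstash-web-*
doc_type: logs
buffer_time:
  minutes: 15
metric_agg_key: response_time_ms
metric_agg_type: avg
max_threshold: 500
query_key: service_name
alert:
  - email
email: "ops@example.com"
```

Because `query_key` is set, the average would be computed per `service_name` and each service compared against `max_threshold` independently.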
`allow_buffer_time_overlap`: This setting will only have an effect if `use_run_every_query_size` is false and `buffer_time` is greater than `run_every`. If true, it will allow the start of the metric calculation window to overlap the end time of a previous run. By default the start and end times will not overlap, so if the time elapsed since the last run is less than the metric calculation window size, rule execution will be skipped (to avoid calculations on partial data).

`bucket_interval`: If present, this will divide the metric calculation window into `bucket_interval` sized segments. The metric value will be calculated and evaluated against the threshold(s) for each segment. If `bucket_interval` is specified then `buffer_time` must be a multiple of `bucket_interval` (or `run_every` if `use_run_every_query_size` is true).

`sync_bucket_interval`: This only has an effect if `bucket_interval` is present. If true, it will sync the start and end times of the metric calculation window to the keys (timestamps) of the underlying date_histogram buckets. Because of the way Elasticsearch calculates date_histogram bucket keys, these usually round evenly to the nearest minute, hour, day, etc. (depending on the bucket size). By default the bucket keys are offset to align with the time ElastAlert runs (this both avoids calculations on partial data and ensures the very latest documents are included). See: <https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-datehistogram-aggregation.html#_offset> for a more comprehensive explanation.

#### Spike Aggregation[¶](#spike-aggregation)

`spike_aggregation`: This rule matches when the value of a metric within the calculation window is `spike_height` times larger or smaller than during the previous time period. It uses two sliding windows to compare the current and reference metric values. We will call these two windows “reference” and “current”.
This rule requires:

`metric_agg_key`: This is the name of the field over which the metric value will be calculated. The underlying type of this field must be supported by the specified aggregation type. If using a scripted field via `metric_agg_script`, this is the name for your scripted field.

`metric_agg_type`: The type of metric aggregation to perform on the `metric_agg_key` field. This must be one of ‘min’, ‘max’, ‘avg’, ‘sum’, ‘cardinality’, ‘value_count’.

`spike_height`: The ratio of the metric value in the last `timeframe` to the previous `timeframe` that when hit will trigger an alert.

`spike_type`: Either ‘up’, ‘down’ or ‘both’. ‘Up’ meaning the rule will only match when the metric value is `spike_height` times higher. ‘Down’ meaning the reference metric value is `spike_height` times higher than the current metric value. ‘Both’ will match either.

`buffer_time`: The rule will average out the rate of events over this time period. For example, `hours: 1` means that the ‘current’ window will span from present to one hour ago, and the ‘reference’ window will span from one hour ago to two hours ago. The rule will not be active until the time elapsed from the first event is at least two timeframes. This is to prevent an alert being triggered before a baseline rate has been established. This can be overridden using `alert_on_new_data`.

Optional:

`query_key`: Group metric calculations by this field. For each unique value of the `query_key` field, the metric will be calculated and evaluated separately against the ‘reference’/’current’ metric value and `spike_height`.

`metric_agg_script`: A Painless formatted script describing how to calculate your metric on-the-fly:

```
metric_agg_key: myScriptedMetric
metric_agg_script:
  script: doc['field1'].value * doc['field2'].value
```

`threshold_ref`: The minimum value of the metric in the reference window for an alert to trigger.
For example, if `spike_height: 3` and `threshold_ref: 10`, then the ‘reference’ window must have a metric value of at least 10 and the ‘current’ window at least three times that for an alert to be triggered.

`threshold_cur`: The minimum value of the metric in the current window for an alert to trigger. For example, if `spike_height: 3` and `threshold_cur: 60`, then an alert will occur if the current window has a metric value greater than 60 and the reference window is less than a third of that value.

`min_doc_count`: The minimum number of events in the current window needed for an alert to trigger. Used in conjunction with `query_key`, this will only consider terms which in their last `buffer_time` had at least `min_doc_count` records. Default 1.

#### Percentage Match[¶](#percentage-match)

`percentage_match`: This rule matches when the percentage of documents in the match bucket within a calculation window is higher or lower than a threshold. By default the calculation window is `buffer_time`.

This rule requires:

`match_bucket_filter`: ES filter DSL. This defines a filter for the match bucket, which should match a subset of the documents returned by the main query filter.

`doc_type`: Specify the `_type` of document to search for.

This rule also requires at least one of the two following options:

`min_percentage`: If the percentage of matching documents is less than this number, an alert will be triggered.

`max_percentage`: If the percentage of matching documents is greater than this number, an alert will be triggered.

Optional:

`query_key`: Group percentage by this field. For each unique value of the `query_key` field, the percentage will be calculated and evaluated separately against the threshold(s).
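Putting the required options together, a minimal `percentage_match` rule alerting when more than 5% of requests are server errors might look like this sketch (the filter, field names, and index pattern are hypothetical):

```
name: error-rate-too-high
type: percentage_match
index: logstash-web-*
doc_type: logs
buffer_time:
  minutes: 15
match_bucket_filter:
  - term:
      status_class: "5xx"
max_percentage: 5
alert:
  - email
email: "ops@example.com"
```

Here the match bucket counts documents with `status_class: "5xx"`, and that count is divided by the total documents matched by the rule’s main filter over each 15-minute window.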
`use_run_every_query_size`: See `use_run_every_query_size` in the Metric Aggregation rule.

`allow_buffer_time_overlap`: See `allow_buffer_time_overlap` in the Metric Aggregation rule.

`bucket_interval`: See `bucket_interval` in the Metric Aggregation rule.

`sync_bucket_interval`: See `sync_bucket_interval` in the Metric Aggregation rule.

`percentage_format_string`: An optional format string to apply to the percentage value in the alert match text. Must be a valid Python format string. For example, “%.2f” will round it to 2 decimal places. See: <https://docs.python.org/3.4/library/string.html#format-specification-mini-language>

`min_denominator`: Minimum number of documents on which the percentage calculation will apply. Default is 0.

### Alerts[¶](#alerts)

Each rule may have any number of alerts attached to it. Alerts are subclasses of `Alerter` and are passed a dictionary, or list of dictionaries, from ElastAlert which contain relevant information. They are configured in the rule configuration file similarly to rule types.

To set the alerts for a rule, set the `alert` option to the name of the alert, or a list of the names of alerts: `alert: email` or

```
alert:
  - email
  - jira
```

Options for each alerter can either be defined at the top level of the YAML file, or nested within the alert name, allowing for different settings for multiple of the same alerter. For example, consider sending multiple emails, but with different ‘To’ and ‘From’ fields:

```
alert:
  - email
from_addr: "<EMAIL>"
email: "<EMAIL>"
```

versus

```
alert:
  - email:
      from_addr: "<EMAIL>"
      email: "<EMAIL>"
  - email:
      from_addr: "<EMAIL>"
      email: "<EMAIL>"
```

If multiple of the same alerter type are used, top-level settings will be used as the default and inline settings will override those for each alerter.

#### Alert Subject[¶](#alert-subject)

E-mail subjects, JIRA issue summaries, PagerDuty alerts, or any alerter that has a “subject” can be customized by adding an `alert_subject` that contains a custom summary.
It can be further formatted using standard Python formatting syntax:

```
alert_subject: "Issue {0} occurred at {1}"
```

The arguments for the formatter will be fed from the matched objects related to the alert. The field names whose values will be used as the arguments can be passed with `alert_subject_args`:

```
alert_subject_args:
- issue.name
- "@timestamp"
```

It is mandatory to enclose the `@timestamp` field in quotes since in YAML format a token cannot begin with the `@` character. Not using the quotation marks will trigger a YAML parse error.

In case the rule matches multiple objects in the index, only the first match is used to populate the arguments for the formatter.

If the field(s) mentioned in the arguments list are missing, the email alert will have the text `alert_missing_value` in place of its expected value. This will also occur if `use_count_query` is set to true.

#### Alert Content[¶](#alert-content)

There are several ways to format the body text of the various types of events. In EBNF:

```
rule_name = name
alert_text = alert_text
ruletype_text = Depends on type
top_counts_header = top_count_key, ":"
top_counts_value = Value, ": ", Count
top_counts = top_counts_header, LF, top_counts_value
field_values = Field, ": ", Value
```

Similarly to `alert_subject`, `alert_text` can be further formatted using standard Python formatting syntax. The field names whose values will be used as the arguments can be passed with `alert_text_args` or `alert_text_kw`. You may also refer to any top-level rule property in the `alert_subject_args`, `alert_text_args`, `alert_missing_value`, and `alert_text_kw` fields. However, if the matched document has a key with the same name, that will take preference over the rule property.
By default:

```
body = rule_name

[alert_text]

ruletype_text

{top_counts}

{field_values}
```

With `alert_text_type: alert_text_only`:

```
body = rule_name

alert_text
```

With `alert_text_type: exclude_fields`:

```
body = rule_name

[alert_text]

ruletype_text

{top_counts}
```

With `alert_text_type: aggregation_summary_only`:

```
body = rule_name

aggregation_summary
```

ruletype_text is the string returned by RuleType.get_match_str.

field_values will contain every key-value pair included in the results from Elasticsearch. These fields include “@timestamp” (or the value of `timestamp_field`), every key in `include`, every key in `top_count_keys`, `query_key`, and `compare_key`. If the alert spans multiple events, these values may come from an individual event, usually the one which triggers the alert.

When using `alert_text_args`, you can access nested fields and index into arrays. For example, if your match was `{"data": {"ips": ["127.0.0.1", "12.34.56.78"]}}`, then by using `"data.ips[1]"` in `alert_text_args`, it would replace the value with `"12.34.56.78"`. This can go arbitrarily deep into fields and will still work on keys that contain dots themselves.

#### Command[¶](#command)

The command alert allows you to execute an arbitrary command and pass arguments or stdin from the match. Arguments to the command can use Python format string syntax to access parts of the match. The alerter will open a subprocess and optionally pass the match, or matches in the case of an aggregated alert, as a JSON array, to the stdin of the process.

This alert requires one option:

`command`: A list of arguments to execute or a string to execute. If in list format, the first argument is the name of the program to execute. If passed a string, the command is executed through the shell. Strings can be formatted using the old-style format (`%`) or the new-style format (`.format()`). When the old-style format is used, fields are accessed using `%(field_name)s`, or `%(field.subfield)s`.
When the new-style format is used, fields are accessed using `{field_name}`. New-style formatting allows accessing nested fields (e.g., `{field_1[subfield]}`). In an aggregated alert, these fields come from the first match.

Optional:

`pipe_match_json`: If true, the match will be converted to JSON and passed to stdin of the command. Note that this will cause ElastAlert to block until the command exits or sends an EOF to stdout.

`pipe_alert_text`: If true, the standard alert body text will be passed to stdin of the command. Note that this will cause ElastAlert to block until the command exits or sends an EOF to stdout. It cannot be used at the same time as `pipe_match_json`.

Example usage using old-style format:

```
alert:
  - command
command: ["/bin/send_alert", "--username", "%(username)s"]
```

Warning: Executing commands with untrusted data can leave you vulnerable to shell injection! If you use formatted data in your command, it is highly recommended that you use an args list format instead of a shell string.

Example usage using new-style format:

```
alert:
  - command
command: ["/bin/send_alert", "--username", "{match[username]}"]
```

#### Email[¶](#email)

This alert will send an email. It connects to an SMTP server located at `smtp_host`, or localhost by default. If available, it will use STARTTLS.

This alert requires one additional option:

`email`: An address or list of addresses to send the alert to.

Optional:

`email_from_field`: Use a field from the document that triggered the alert as the recipient. If the field cannot be found, the `email` value will be used as a default. Note that this field will not be available in every rule type, for example, if you have `use_count_query` or if it’s `type: flatline`. You can optionally add a domain suffix to the field to generate the address using `email_add_domain`. It can be a single recipient or list of recipients.
For example, with the following settings:

```
email_from_field: "data.user"
email_add_domain: "@example.com"
```

and a match `{"@timestamp": "2017", "data": {"foo": "bar", "user": "qlo"}}`, an email would be sent to `<EMAIL>`.

`smtp_host`: The SMTP host to use, defaults to localhost.

`smtp_port`: The port to use. Default is 25.

`smtp_ssl`: Connect to the SMTP host using TLS, defaults to `false`. If `smtp_ssl` is not used, ElastAlert will still attempt STARTTLS.

`smtp_auth_file`: The path to a file which contains SMTP authentication credentials. The path can be either absolute or relative to the given rule. It should be YAML formatted and contain two fields, `user` and `password`. If this is not present, no authentication will be attempted.

`smtp_cert_file`: Connect to the SMTP host using the given path to a TLS certificate file, defaults to `None`.

`smtp_key_file`: Connect to the SMTP host using the given path to a TLS key file, defaults to `None`.

`email_reply_to`: This sets the Reply-To header in the email. By default, the from address is ElastAlert@ and the domain will be set by the SMTP server.

`from_addr`: This sets the From header in the email. By default, the from address is ElastAlert@ and the domain will be set by the SMTP server.

`cc`: This adds the CC emails to the list of recipients. By default, this is left empty.

`bcc`: This adds the BCC emails to the list of recipients but does not show up in the email message. By default, this is left empty.

`email_format`: If set to `html`, the email’s MIME type will be set to HTML, and HTML content should correctly render. If you use this, you need to put your own HTML into `alert_text` and use `alert_text_type: alert_text_only`.

#### Jira[¶](#jira)

The JIRA alerter will open a ticket on JIRA whenever an alert is triggered. You must have a service account for ElastAlert to connect with. The credentials of the service account are loaded from a separate file.
The ticket number will be written to the alert pipeline, and if it is followed by an email alerter, a link will be included in the email.

This alert requires four additional options:

`jira_server`: The hostname of the JIRA server.

`jira_project`: The project to open the ticket under.

`jira_issuetype`: The type of issue that the ticket will be filed as. Note that this is case sensitive.

`jira_account_file`: The path to the file which contains JIRA account credentials. For an example JIRA account file, see `example_rules/jira_acct.yaml`. The account file is also YAML formatted and must contain two fields:

`user`: The username.

`password`: The password.

Optional:

`jira_component`: The name of the component or components to set the ticket to. This can be a single string or a list of strings. This is provided for backwards compatibility and will eventually be deprecated. It is preferable to use the plural `jira_components` instead.

`jira_components`: The name of the component or components to set the ticket to. This can be a single string or a list of strings.

`jira_description`: Similar to `alert_text`, this text is prepended to the JIRA description.

`jira_label`: The label or labels to add to the JIRA ticket. This can be a single string or a list of strings. This is provided for backwards compatibility and will eventually be deprecated. It is preferable to use the plural `jira_labels` instead.

`jira_labels`: The label or labels to add to the JIRA ticket. This can be a single string or a list of strings.

`jira_priority`: The index of the priority to set the issue to. In the JIRA dropdown for priorities, 0 would represent the first priority, 1 the 2nd, etc.

`jira_watchers`: A list of user names to add as watchers on a JIRA ticket. This can be a single string or a list of strings.

`jira_bump_tickets`: If true, ElastAlert will search for existing tickets newer than `jira_max_age` and comment on the ticket with information about the alert instead of opening another ticket.
ElastAlert finds the existing ticket by searching by summary. If the summary has changed or contains special characters, it may fail to find the ticket. If you are using a custom `alert_subject`, the two summaries must be exact matches, unless you use `jira_ignore_in_title` to ignore the value of a field when searching. For example, if the custom subject is “foo occurred at bar”, and “foo” is the value of field X in the match, you can set `jira_ignore_in_title` to “X” and it will only bump tickets with “bar” in the subject. Defaults to false.

`jira_ignore_in_title`: ElastAlert will attempt to remove the value for this field from the JIRA subject when searching for tickets to bump. See the `jira_bump_tickets` description above for an example.

`jira_max_age`: If `jira_bump_tickets` is true, the maximum age of a ticket, in days, such that ElastAlert will comment on the ticket instead of opening a new one. Default is 30 days.

`jira_bump_not_in_statuses`: If `jira_bump_tickets` is true, a list of statuses the ticket must **not** be in for ElastAlert to comment on the ticket instead of opening a new one. For example, to prevent comments being added to resolved or closed tickets, set this to ‘Resolved’ and ‘Closed’. This option should not be set if the `jira_bump_in_statuses` option is set.

Example usage:

```
jira_bump_not_in_statuses:
  - Resolved
  - Closed
```

`jira_bump_in_statuses`: If `jira_bump_tickets` is true, a list of statuses the ticket *must be in* for ElastAlert to comment on the ticket instead of opening a new one. For example, to only comment on ‘Open’ tickets – and thus not ‘In Progress’, ‘Analyzing’, ‘Resolved’, etc. tickets – set this to ‘Open’. This option should not be set if the `jira_bump_not_in_statuses` option is set.

Example usage:

```
jira_bump_in_statuses:
  - Open
```

`jira_bump_only`: Only update if a ticket is found to bump. This skips ticket creation for rules where you only want to affect existing tickets.
Example usage:

```
jira_bump_only: true
```

`jira_transition_to`: If `jira_bump_tickets` is true, transition the ticket to the given status when bumping. Must match the text of your JIRA implementation’s Status field.

Example usage:

```
jira_transition_to: 'Fixed'
```

`jira_bump_after_inactivity`: If this is set, ElastAlert will only comment on tickets that have been inactive for at least this many days. It only applies if `jira_bump_tickets` is true. Default is 0 days.

Arbitrary Jira fields: ElastAlert supports setting any arbitrary JIRA field that your JIRA issue supports. For example, if you had a custom field called “Affected User”, you can set it by providing that field name in `snake_case` prefixed with `jira_`. These fields can contain primitive strings or arrays of strings. Note that when you create a custom field in your JIRA server, internally, the field is represented as `customfield_1111`. In ElastAlert, you may refer to either the public-facing name OR the internal representation.

In addition, if you would like to use a field in the alert as the value for a custom JIRA field, use the field name plus a # symbol in front. For example, if you wanted to set a custom JIRA field called “user” to the value of the field “username” from the match, you would use the following.

Example:

```
jira_user: "#username"
```

Example usage:

```
jira_arbitrary_singular_field: My Name
jira_arbitrary_multivalue_field:
  - Name 1
  - Name 2
jira_customfield_12345: My Custom Value
jira_customfield_9999:
  - My Custom Value 1
  - My Custom Value 2
```

#### OpsGenie[¶](#opsgenie)

The OpsGenie alerter will create an alert which can be used to notify Operations people of issues or log information. An OpsGenie `API` integration must be created in order to acquire the necessary `opsgenie_key` rule variable. Currently the OpsGenieAlerter only creates an alert, however it could be extended to update or close existing alerts.
It is necessary for the user to create an OpsGenie Rest HTTPS API [integration page](https://app.opsgenie.com/integration) in order to create alerts.

The OpsGenie alert requires one option:

`opsgenie_key`: The randomly generated API Integration key created by OpsGenie.

Optional:

`opsgenie_account`: The OpsGenie account to integrate with.

`opsgenie_recipients`: A list of OpsGenie recipients who will be notified by the alert.

`opsgenie_recipients_args`: Map of arguments used to format opsgenie_recipients.

`opsgenie_default_recipients`: List of default recipients to notify when the formatting of opsgenie_recipients is unsuccessful.

`opsgenie_teams`: A list of OpsGenie teams to notify (useful for schedules with escalation).

`opsgenie_teams_args`: Map of arguments used to format opsgenie_teams (useful for assigning the alerts to teams based on some data).

`opsgenie_default_teams`: List of default teams to notify when the formatting of opsgenie_teams is unsuccessful.

`opsgenie_tags`: A list of tags for this alert.

`opsgenie_message`: Set the OpsGenie message to something other than the rule name. The message can be formatted with fields from the first match, e.g. “Error occurred for {app_name} at {timestamp}.”.

`opsgenie_alias`: Set the OpsGenie alias. The alias can be formatted with fields from the first match, e.g. “{app_name} error”.

`opsgenie_subject`: A string used to create the title of the OpsGenie alert. Can use Python string formatting.

`opsgenie_subject_args`: A list of fields to use to format `opsgenie_subject` if it contains formatters.

`opsgenie_priority`: Set the OpsGenie priority level. Possible values are P1, P2, P3, P4, P5.

#### SNS[¶](#sns)

The SNS alerter will send an SNS notification. The body of the notification is formatted the same as with other alerters. The SNS alerter uses boto3 and can use credentials in the rule yaml, in a standard AWS credential and config files, or via environment variables.
See <http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html> for details.

SNS requires one option:

`sns_topic_arn`: The SNS topic’s ARN. For example, `arn:aws:sns:us-east-1:123456789:somesnstopic`

Optional:

`aws_access_key`: An access key to connect to SNS with.

`aws_secret_key`: The secret key associated with the access key.

`aws_region`: The AWS region in which the SNS resource is located. Default is us-east-1.

`profile`: The AWS profile to use. If none specified, the default will be used.

#### HipChat[¶](#hipchat)

The HipChat alerter will send a notification to a predefined HipChat room. The body of the notification is formatted the same as with other alerters.

The alerter requires the following two options:

`hipchat_auth_token`: The randomly generated notification token created by HipChat. Go to <https://XXXXX.hipchat.com/account/api> and use the ‘Create new token’ section, choosing ‘Send notification’ in the Scopes list.

`hipchat_room_id`: The id associated with the HipChat room you want to send the alert to. Go to <https://XXXXX.hipchat.com/rooms> and choose the room you want to post to. The room ID will be the numeric part of the URL.

`hipchat_msg_color`: The color of the message background that is sent to HipChat. May be set to green, yellow or red. Default is red.

`hipchat_domain`: The custom domain in case you have your own HipChat server deployment. Default is api.hipchat.com.

`hipchat_ignore_ssl_errors`: Ignore TLS errors (self-signed certificates, etc.). Default is false.

`hipchat_proxy`: By default ElastAlert will not use a network proxy to send notifications to HipChat. Set this option using `hostname:port` if you need to use a proxy.

`hipchat_notify`: When set to true, triggers a HipChat bell as if it were a user. Default is true.

`hipchat_from`: When humans post to HipChat, a timestamp appears next to their name. For bots, the name is the name of the token.
In place of a timestamp, the ‘from’ text for bots defaults to empty unless set, which you can do with this option. This is optional.

`hipchat_message_format`: Determines how the message is treated by HipChat and rendered inside HipChat applications.

html: Message is rendered as HTML and receives no special treatment. Must be valid HTML and entities must be escaped (e.g.: ‘&amp;’ instead of ‘&’). May contain basic tags: a, b, i, strong, em, br, img, pre, code, lists, tables.

text: Message is treated just like a message sent by a user. Can include @mentions, emoticons, pastes, and auto-detected URLs (Twitter, YouTube, images, etc).

Valid values: html, text. Defaults to ‘html’.

`hipchat_mentions`: When using the `html` message format, it’s not possible to mention specific users using the `@user` syntax. In that case, you can set `hipchat_mentions` to a list of users who will first be mentioned in a separate text message, after which the normal ElastAlert message will be sent to HipChat. If set, it will mention the users regardless of whether the message format is set to HTML or text. Valid values: list of strings. Defaults to `[]`.

#### Stride[¶](#stride)

The Stride alerter will send a notification to a predefined Stride room. The body of the notification is formatted the same as with other alerters. Simple HTML such as <a> and <b> tags will be parsed into a format that Stride can consume.

The alerter requires the following three options:

`stride_access_token`: The randomly generated notification token created by Stride.

`stride_cloud_id`: The site_id associated with the Stride site you want to send the alert to.

`stride_conversation_id`: The conversation_id associated with the Stride conversation you want to send the alert to.

Optional:

`stride_ignore_ssl_errors`: Ignore TLS errors (self-signed certificates, etc.). Default is false.

`stride_proxy`: By default ElastAlert will not use a network proxy to send notifications to Stride. Set this option using `hostname:port` if you need to use a proxy.
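Putting the required Stride options together, a rule’s alert section might look like the following sketch (the token and IDs are placeholders):

```
alert:
  - stride
stride_access_token: "YOUR-NOTIFICATION-TOKEN"
stride_cloud_id: "YOUR-CLOUD-ID"
stride_conversation_id: "YOUR-CONVERSATION-ID"
```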
#### MS Teams[¶](#ms-teams)

The MS Teams alerter will send a notification to a predefined Microsoft Teams channel.

The alerter requires the following options:

`ms_teams_webhook_url`: The webhook URL that includes your auth data and the ID of the channel you want to post to. Go to the Connectors menu in your channel and configure an Incoming Webhook, then copy the resulting URL. You can use a list of URLs to send to multiple channels.

`ms_teams_alert_summary`: Summary should be configured according to [MS documentation](https://docs.microsoft.com/en-us/outlook/actionable-messages/card-reference), although Teams does not currently seem to display it.

Optional:

`ms_teams_theme_color`: By default the alert will be posted without any color line. To add color, set this attribute to an HTML color value, e.g. `#ff0000` for red.

`ms_teams_proxy`: By default ElastAlert will not use a network proxy to send notifications to MS Teams. Set this option using `hostname:port` if you need to use a proxy.

`ms_teams_alert_fixed_width`: By default this is `False` and the notification will be sent to MS Teams as-is. Teams supports a partial Markdown implementation, which means asterisk, underscore and other characters may be interpreted as Markdown. Currently, Teams does not fully implement code blocks. Setting this attribute to `True` will enable line-by-line code blocks. It is recommended to enable this to get clearer notifications in Teams.

#### Slack[¶](#slack)

The Slack alerter will send a notification to a predefined Slack channel. The body of the notification is formatted the same as with other alerters.

The alerter requires the following option:

`slack_webhook_url`: The webhook URL that includes your auth data and the ID of the channel (room) you want to post to. Go to the Incoming Webhooks section in your Slack account <https://XXXXX.slack.com/services/new/incoming-webhook>, choose the channel, click ‘Add Incoming Webhooks Integration’ and copy the resulting URL.
You can use a list of URLs to send to multiple channels.

Optional:

`slack_username_override`: By default Slack will use your username when posting to the channel. Use this option to change it (free text).

`slack_channel_override`: Incoming webhooks have a default channel, but it can be overridden. A public channel can be specified with “#other-channel”, and a Direct Message with “@username”.

`slack_emoji_override`: By default ElastAlert will use the :ghost: emoji when posting to the channel. You can use a different emoji per ElastAlert rule. Any Apple emoji can be used, see <http://emojipedia.org/apple/>. If the slack_icon_url_override parameter is provided, the emoji is ignored.

`slack_icon_url_override`: By default ElastAlert will use the :ghost: emoji when posting to the channel. You can provide an icon_url to use a custom image. Provide the absolute address of the picture, for example: <http://some.address.com/image.jpg>.

`slack_msg_color`: By default the alert will be posted with the ‘danger’ color. You can also use the ‘good’ or ‘warning’ colors.

`slack_proxy`: By default ElastAlert will not use a network proxy to send notifications to Slack. Set this option using `hostname:port` if you need to use a proxy.

`slack_alert_fields`: You can add additional fields to your Slack alerts using this option. Specify the title using title and a value for the field using value. Additionally, you can specify whether or not the field should be a short field using short: true.

`slack_title`: Sets a title for the message; this shows up as blue text at the start of the message.

`slack_title_link`: You can add a link in your Slack notification by setting this to a valid URL. Requires slack_title to be set.

`slack_timeout`: You can specify a timeout value, in seconds, for communicating with Slack. The default is 10. If a timeout occurs, the alert will be retried the next time ElastAlert cycles.
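Several of these options combined give a sketch like the following (the webhook URL, username, and channel are placeholders, not real values):

```
alert:
- slack
slack_webhook_url: "https://hooks.slack.com/services/XXXX/XXXX/XXXX"
slack_username_override: "elastalert-bot"
slack_channel_override: "#alerts"
slack_msg_color: "warning"
slack_title: "ElastAlert match"
```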
#### Mattermost[¶](#mattermost)

The Mattermost alerter will send a notification to a predefined Mattermost channel. The body of the notification is formatted the same as with other alerters.

The alerter requires the following option:

`mattermost_webhook_url`: The webhook URL. Follow the instructions on <https://docs.mattermost.com/developer/webhooks-incoming.html> to create an incoming webhook on your Mattermost installation.

Optional:

`mattermost_proxy`: By default ElastAlert will not use a network proxy to send notifications to Mattermost. Set this option using `hostname:port` if you need to use a proxy.

`mattermost_ignore_ssl_errors`: By default ElastAlert will verify the SSL certificate. Set this option to `False` if you want to ignore SSL errors.

`mattermost_username_override`: By default Mattermost will use your username when posting to the channel. Use this option to change it (free text).

`mattermost_channel_override`: Incoming webhooks have a default channel, but it can be overridden. A public channel can be specified with “#other-channel”, and a Direct Message with “@username”.

`mattermost_icon_url_override`: By default ElastAlert will use the default webhook icon when posting to the channel. You can provide an icon_url to use a custom image. Provide the absolute address of the picture (for example: <http://some.address.com/image.jpg>) or a Base64 data URL.

`mattermost_msg_pretext`: You can set the message attachment pretext using this option.

`mattermost_msg_color`: By default the alert will be posted with the ‘danger’ color. You can also use ‘good’, ‘warning’, or a hex color code.

`mattermost_msg_fields`: You can add fields to your Mattermost alerts using this option. You can specify the title using title and the text value using value. Additionally, you can specify whether the field should be a short field using short: true. If you set args and value is a formattable string, ElastAlert will format the value based on the provided array of fields from the rule or match.
See <https://docs.mattermost.com/developer/message-attachments.html#fields> for more information.

#### Telegram[¶](#telegram)

The Telegram alerter will send a notification to a predefined Telegram username or channel. The body of the notification is formatted the same as with other alerters.

The alerter requires the following two options:

`telegram_bot_token`: The token is a string along the lines of `110201543:AAHdqTcvCH1vGWJxfSeofSAs0K5PALDsaw` that will be required to authorize the bot and send requests to the Bot API. You can learn about obtaining tokens and generating new ones in this document: <https://core.telegram.org/bots#botfather>

`telegram_room_id`: Unique identifier for the target chat, or username of the target channel, using the Telegram chat_id (in the format “-xxxxxxxx”).

Optional:

`telegram_api_url`: Custom domain to call the Telegram Bot API. Defaults to api.telegram.org.

`telegram_proxy`: By default ElastAlert will not use a network proxy to send notifications to Telegram. Set this option using `hostname:port` if you need to use a proxy.

#### GoogleChat[¶](#googlechat)

The GoogleChat alerter will send a notification to a predefined GoogleChat channel. The body of the notification is formatted the same as with other alerters.

The alerter requires the following options:

`googlechat_webhook_url`: The webhook URL that includes the channel (room) you want to post to. Go to the Google Chat website <https://chat.google.com> and choose the channel in which you wish to receive the notifications. Select ‘Configure Webhooks’ to create a new webhook or to copy the URL from an existing one. You can use a list of URLs to send to multiple channels.

Optional:

`googlechat_format`: Formatting for the notification. Can be either ‘card’ or ‘basic’ (default).

`googlechat_header_title`: Sets the text for the card header title. (Only used if format=card)

`googlechat_header_subtitle`: Sets the text for the card header subtitle.
(Only used if format=card)

`googlechat_header_image`: URL for the card header icon. (Only used if format=card)

`googlechat_footer_kibanalink`: URL to Kibana to include in the card footer. (Only used if format=card)

#### PagerDuty[¶](#pagerduty)

The PagerDuty alerter will trigger an incident on a predefined PagerDuty service. The body of the notification is formatted the same as with other alerters.

The alerter requires the following option:

`pagerduty_service_key`: Integration Key generated after creating a service with the ‘Use our API directly’ option at Integration Settings.

`pagerduty_client_name`: The name of the monitoring client that is triggering this event.

`pagerduty_event_type`: Any of the following: trigger, resolve, or acknowledge. (Optional, defaults to trigger)

Optional:

`alert_subject`: If set, this will be used as the Incident description within PagerDuty. If not set, ElastAlert will default to using the rule name of the alert for the incident.

`alert_subject_args`: If set, and `alert_subject` is a formattable string, ElastAlert will format the subject based on the provided array of fields from the rule or match.

`pagerduty_incident_key`: If not set, PagerDuty will trigger a new incident for each alert sent. If set to a unique string per rule, PagerDuty will identify the incident to which this event should be applied. If there’s no open (i.e. unresolved) incident with this key, a new one will be created. If there’s already an open incident with a matching key, this event will be appended to that incident’s log.

`pagerduty_incident_key_args`: If set, and `pagerduty_incident_key` is a formattable string, ElastAlert will format the incident key based on the provided array of fields from the rule or match.

`pagerduty_proxy`: By default ElastAlert will not use a network proxy to send notifications to PagerDuty. Set this option using `hostname:port` if you need to use a proxy.
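For reference, a minimal PagerDuty fragment using the options above might be sketched as follows (the integration key and incident key are placeholder values):

```
alert:
- pagerduty
pagerduty_service_key: "<integration_key>"
pagerduty_client_name: "elastalert"
pagerduty_incident_key: "my_rule_incident"
```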
V2 API Options (Optional):

These options are specific to the PagerDuty V2 API. See <https://v2.developer.pagerduty.com/docs/send-an-event-events-api-v2>

`pagerduty_api_version`: Defaults to v1. Set to v2 to enable the PagerDuty V2 Event API.

`pagerduty_v2_payload_class`: Sets the class of the payload. (the event type in PagerDuty)

`pagerduty_v2_payload_class_args`: If set, and `pagerduty_v2_payload_class` is a formattable string, ElastAlert will format the class based on the provided array of fields from the rule or match.

`pagerduty_v2_payload_component`: Sets the component of the payload. (what program/interface/etc. the event came from)

`pagerduty_v2_payload_component_args`: If set, and `pagerduty_v2_payload_component` is a formattable string, ElastAlert will format the component based on the provided array of fields from the rule or match.

`pagerduty_v2_payload_group`: Sets the logical grouping (e.g. app-stack).

`pagerduty_v2_payload_group_args`: If set, and `pagerduty_v2_payload_group` is a formattable string, ElastAlert will format the group based on the provided array of fields from the rule or match.

`pagerduty_v2_payload_severity`: Sets the severity of the page. (defaults to critical; valid options: critical, error, warning, info)

`pagerduty_v2_payload_source`: Sets the source of the event, preferably the hostname or FQDN.

`pagerduty_v2_payload_source_args`: If set, and `pagerduty_v2_payload_source` is a formattable string, ElastAlert will format the source based on the provided array of fields from the rule or match.

#### PagerTree[¶](#pagertree)

The PagerTree alerter will trigger an incident on a predefined PagerTree integration URL.

The alerter requires the following options:

`pagertree_integration_url`: URL generated by PagerTree for the integration.

#### Exotel[¶](#exotel)

Developers in India can use the Exotel alerter; it will send an SMS to a mobile phone from your exophone. The alert name along with the message body will be sent as an SMS.
The alerter requires the following options:

`exotel_account_sid`: This is the SID of your Exotel account.

`exotel_auth_token`: Auth token associated with your Exotel account. If you don’t know how to find your account SID and auth token, refer to <http://support.exotel.in/support/solutions/articles/3000023019-how-to-find-my-exotel-token-and-exotel-sid>

`exotel_to_number`: The phone number to which you would like to send the notification.

`exotel_from_number`: Your exophone number from which the message will be sent.

The alerter has one optional argument:

`exotel_message_body`: Message you want to send in the SMS. If you don’t specify this argument, only the rule name is sent.

#### Twilio[¶](#twilio)

The Twilio alerter will send an SMS to a mobile phone from your Twilio phone number. The alert name will arrive as an SMS once this option is chosen.

The alerter requires the following options:

`twilio_account_sid`: This is the SID of your Twilio account.

`twilio_auth_token`: Auth token associated with your Twilio account.

`twilio_to_number`: The phone number to which you would like to send the notification.

`twilio_from_number`: Your Twilio phone number from which the message will be sent.

#### VictorOps[¶](#victorops)

The VictorOps alerter will trigger an incident on a predefined VictorOps routing key. The body of the notification is formatted the same as with other alerters.

The alerter requires the following options:

`victorops_api_key`: API key generated under the ‘REST Endpoint’ in the Integrations settings.

`victorops_routing_key`: VictorOps routing key to route the alert to.

`victorops_message_type`: VictorOps field to specify the severity level. Must be one of the following: INFO, WARNING, ACKNOWLEDGEMENT, CRITICAL, RECOVERY

Optional:

`victorops_entity_id`: The identity of the incident used by VictorOps to correlate incidents throughout the alert lifecycle. If not defined, VictorOps will assign a random string to each alert.
`victorops_entity_display_name`: Human-readable name of the alerting entity to summarize incidents without affecting the life-cycle workflow.

`victorops_proxy`: By default ElastAlert will not use a network proxy to send notifications to VictorOps. Set this option using `hostname:port` if you need to use a proxy.

#### Gitter[¶](#gitter)

The Gitter alerter will send a notification to a predefined Gitter channel. The body of the notification is formatted the same as with other alerters.

The alerter requires the following option:

`gitter_webhook_url`: The webhook URL that includes your auth data and the ID of the channel (room) you want to post to. Go to the Integration Settings of the channel <https://gitter.im/ORGA/CHANNEL#integrations>, click ‘CUSTOM’ and copy the resulting URL.

Optional:

`gitter_msg_level`: By default the alert will be posted with the ‘error’ level. You can use ‘info’ if you want the messages to be black instead of red.

`gitter_proxy`: By default ElastAlert will not use a network proxy to send notifications to Gitter. Set this option using `hostname:port` if you need to use a proxy.

#### ServiceNow[¶](#servicenow)

The ServiceNow alerter will create a new Incident in ServiceNow. The body of the notification is formatted the same as with other alerters.

The alerter requires the following options:

`servicenow_rest_url`: The ServiceNow REST API URL; this will look like <https://instancename.service-now.com/api/now/v1/table/incident>

`username`: The ServiceNow username to access the API.

`password`: The ServiceNow password to access the API.

`short_description`: A short description for the incident.

`comments`: Comments to be attached to the incident; this is the equivalent of work notes.

`assignment_group`: The group to assign the incident to.

`category`: The category to attach the incident to; use an existing category.

`subcategory`: The subcategory to attach the incident to; use an existing subcategory.
`cmdb_ci`: The configuration item to attach the incident to.

`caller_id`: The caller id (email address) of the user that created the incident (e.g. <EMAIL>).

Optional:

`servicenow_proxy`: By default ElastAlert will not use a network proxy to send notifications to ServiceNow. Set this option using `hostname:port` if you need to use a proxy.

#### Debug[¶](#debug)

The debug alerter will log the alert information using the Python logger at the info level. It is logged into a Python Logger object with the name `elastalert` that can easily be accessed using the `getLogger` command.

#### Stomp[¶](#stomp)

This alert type will use the STOMP protocol in order to push a message to a broker like ActiveMQ or RabbitMQ. The message body is a JSON string containing the alert details. The default values will work with a pristine ActiveMQ installation.

Optional:

`stomp_hostname`: The STOMP host to use, defaults to localhost.

`stomp_hostport`: The STOMP port to use, defaults to 61613.

`stomp_login`: The STOMP login to use, defaults to admin.

`stomp_password`: The STOMP password to use, defaults to admin.

`stomp_destination`: The STOMP destination to use, defaults to /queue/ALERT

The stomp_destination field depends on the broker; the /queue/ALERT example is the nomenclature used by ActiveMQ. Each broker has its own logic.

#### Alerta[¶](#alerta)

The Alerta alerter will post an alert to an Alerta server instance through the alert API endpoint. See <http://alerta.readthedocs.io/en/latest/api/alert.html> for more details on the Alerta JSON format.

For Alerta 5.0

Required:

`alerta_api_url`: API server URL.

Optional:

`alerta_api_key`: This is the API key for the Alerta server, sent in an `Authorization` HTTP header. If not defined, no Authorization header is sent.

`alerta_use_qk_as_resource`: If true and query_key is present, this will override the `alerta_resource` field with the `query_key` value (can be useful if `query_key` is a hostname).
`alerta_use_match_timestamp`: If true, it will use the timestamp of the first match as the `createTime` of the alert; otherwise, the current server time is used.

`alert_missing_value`: Text to replace any match field not found when formatting strings. Defaults to `<MISSING_TEXT>`.

The following options dictate the values of the API JSON payload:

`alerta_severity`: Defaults to “warning”.

`alerta_timeout`: Defaults to 86400 (1 day).

`alerta_type`: Defaults to “elastalert”.

The following options use Python-like string syntax `{<field>}` or `%(<field>)s` to access parts of the match, similar to the CommandAlerter, e.g. “Alert for {clientip}”. If the referenced key is not found in the match, it is replaced by the text indicated by the option `alert_missing_value`.

`alerta_resource`: Defaults to “elastalert”.

`alerta_service`: Defaults to “elastalert”.

`alerta_origin`: Defaults to “elastalert”.

`alerta_environment`: Defaults to “Production”.

`alerta_group`: Defaults to “”.

`alerta_correlate`: Defaults to an empty list.

`alerta_tags`: Defaults to an empty list.

`alerta_event`: Defaults to the rule’s name.

`alerta_text`: Defaults to the rule’s text according to its type.

`alerta_value`: Defaults to “”.

The `attributes` dictionary is built by joining the lists from `alerta_attributes_keys` and `alerta_attributes_values`, considered in order.
Example usage using old-style format:

```
alert:
- alerta
alerta_api_url: "http://youralertahost/api/alert"
alerta_attributes_keys: ["hostname", "TimestampEvent", "senderIP" ]
alerta_attributes_values: ["%(key)s", "%(logdate)s", "%(sender_ip)s" ]
alerta_correlate: ["ProbeUP","ProbeDOWN"]
alerta_event: "ProbeUP"
alerta_text: "Probe %(hostname)s is UP at %(logdate)s GMT"
alerta_value: "UP"
```

Example usage using new-style format:

```
alert:
- alerta
alerta_attributes_values: ["{key}", "{logdate}", "{sender_ip}" ]
alerta_text: "Probe {hostname} is UP at {logdate} GMT"
```

#### HTTP POST[¶](#http-post)

This alert type will send results to a JSON endpoint using HTTP POST. The key names are configurable, so this is compatible with almost any endpoint. By default, the JSON will contain all the items from the match, unless you specify http_post_payload, in which case it will only contain those items.

Required:

`http_post_url`: The URL to POST.

Optional:

`http_post_payload`: List of keys:values to use as the content of the POST. Example - ip:clientip will map the value from the clientip field of Elasticsearch to a JSON key named ip. If not defined, all the Elasticsearch keys will be sent.

`http_post_static_payload`: Key:value pairs of static parameters to be sent, along with the Elasticsearch results. Put your authentication or other information here.

`http_post_headers`: Key:value pairs of headers to be sent as part of the request.

`http_post_proxy`: URL of proxy, if required.

`http_post_all_values`: Boolean of whether or not to include every key/value pair from the match in addition to those in http_post_payload and http_post_static_payload. Defaults to True if http_post_payload is not specified, otherwise False.

`http_post_timeout`: The timeout value, in seconds, for making the POST. The default is 10. If a timeout occurs, the alert will be retried the next time ElastAlert cycles.
Example usage:

```
alert: post
http_post_url: "http://example.com/api"
http_post_payload:
  ip: clientip
http_post_static_payload:
  apikey: abc123
http_post_headers:
  authorization: Basic 123dr3234
```

#### Alerter[¶](#alerter)

For all Alerter subclasses, you may reference values from a top-level rule property in your Alerter fields by referring to the property name surrounded by dollar signs. This can be useful when you have rule-level properties that you would like to reference many times in your alert. For example:

Example usage:

```
jira_priority: $priority$
jira_alert_owner: $owner$
```

#### Line Notify[¶](#line-notify)

Line Notify will send a notification to a Line application. The body of the notification is formatted the same as with other alerters.

Required:

`linenotify_access_token`: The access token that you got from <https://notify-bot.line.me/my/>

#### theHive[¶](#thehive)

The theHive alert type will send a JSON request to theHive (Security Incident Response Platform) with the TheHive4py API. The sent request will be stored as a Hive alert with a description and observables.

Required:

`hive_connection`: The connection details as key:values. Required keys are `hive_host`, `hive_port` and `hive_apikey`.

`hive_alert_config`: Configuration options for the alert.

Optional:

`hive_proxies`: Proxy configuration.

`hive_observable_data_mapping`: If needed, matched data fields can be mapped to TheHive observable types using Python string formatting.
Example usage:

```
alert: hivealerter

hive_connection:
  hive_host: http://localhost
  hive_port: <hive_port>
  hive_apikey: <hive_apikey>
  hive_proxies:
    http: ''
    https: ''

hive_alert_config:
  title: 'Title'  ## This will default to {rule[index]_rule[name]} if not provided
  type: 'external'
  source: 'elastalert'
  description: '{match[field1]} {rule[name]} Sample description'
  severity: 2
  tags: ['tag1', 'tag2 {rule[name]}']
  tlp: 3
  status: 'New'
  follow: True

hive_observable_data_mapping:
  - domain: "{match[field1]}_{rule[name]}"
  - domain: "{match[field]}"
  - ip: "{match[ip_field]}"
```

#### Zabbix[¶](#zabbix)

Zabbix will send a notification to a Zabbix server. The item in the specified host receives a 1 value for each hit. For example, if the Elasticsearch query produces 3 hits in the last execution of ElastAlert, three ‘1’ (integer) values will be sent from ElastAlert to the Zabbix server. If the query has 0 hits, no value will be sent.

Required:

`zbx_sender_host`: The address where the Zabbix server is running.

`zbx_sender_port`: The port where the Zabbix server is listening.

`zbx_host`: This field sets up the host in Zabbix that receives the value sent by ElastAlert.

`zbx_item`: This field sets up the item in the host that receives the value sent by ElastAlert.

ElastAlert Metadata Index[¶](#elastalert-metadata-index)
---

ElastAlert uses Elasticsearch to store various information about its state. This not only allows for some level of auditing and debugging of ElastAlert’s operation, but also avoids loss of data or duplication of alerts when ElastAlert is shut down, restarted, or crashes. This cluster and index information is defined in the global config file with `es_host`, `es_port` and `writeback_index`. ElastAlert must be able to write to this index. The script `elastalert-create-index` will create the index with the correct mapping for you, and optionally copy the documents from an existing ElastAlert writeback index. Run it and it will prompt you for the cluster information.
ElastAlert will create three different types of documents in the writeback index:

### elastalert_status[¶](#elastalert-status)

`elastalert_status` is a log of the queries performed for a given rule and contains:

* `@timestamp`: The time when the document was uploaded to Elasticsearch. This is after a query has been run and the results have been processed.
* `rule_name`: The name of the corresponding rule.
* `starttime`: The beginning of the timestamp range the query searched.
* `endtime`: The end of the timestamp range the query searched.
* `hits`: The number of results from the query.
* `matches`: The number of matches that the rule returned after processing the hits. Note that this does not necessarily mean that alerts were triggered.
* `time_taken`: The number of seconds it took for this query to run.

`elastalert_status` is what ElastAlert will use to determine what time range to query when it first starts, to avoid duplicating queries. For each rule, it will start querying from the most recent endtime. If ElastAlert is running in debug mode, it will still attempt to base its start time on the most recent search performed, but it will not write the results of any query back to Elasticsearch.

### elastalert[¶](#elastalert)

`elastalert` is a log of information about every alert triggered and contains:

* `@timestamp`: The time when the document was uploaded to Elasticsearch. This is not the same as when the alert was sent, but rather when the rule outputs a match.
* `rule_name`: The name of the corresponding rule.
* `alert_info`: This contains the output of Alert.get_info, a function that alerts implement to give some relevant context to the alert type. This may contain alert_info.type, alert_info.recipient, or any number of other sub fields.
* `alert_sent`: A boolean value as to whether this alert was actually sent or not. It may be false in the case of an exception or if it is part of an aggregated alert.
* `alert_time`: The time that the alert was or will be sent. Usually, this is the same as @timestamp, but may be some time in the future, indicating when an aggregated alert will be sent.
* `match_body`: This is the contents of the match dictionary that is used to create the alert. The subfields may include a number of things containing information about the alert.
* `alert_exception`: This field is only present when the alert failed because of an exception occurring, and will contain the exception information.
* `aggregate_id`: This field is only present when the rule is configured to use aggregation. The first alert of the aggregation period will contain an alert_time set to the aggregation time in the future, and subsequent alerts will contain the document ID of the first. When the alert_time is reached, all alerts with that aggregate_id will be sent together.

### elastalert_error[¶](#elastalert-error)

When an error occurs in ElastAlert, it is written to both Elasticsearch and to stderr. The `elastalert_error` type contains:

* `@timestamp`: The time when the error occurred.
* `message`: The error or exception message.
* `traceback`: The traceback from when the error occurred.
* `data`: Extra information about the error. This often contains the name of the rule which caused the error.

### silence[¶](#silence)

`silence` is a record of when alerts for a given rule will be suppressed, either because of a `realert` setting or from using `--silence`. When an alert with `realert` is triggered, a `silence` record will be written with `until` set to the alert time plus `realert`.

* `@timestamp`: The time when the document was uploaded to Elasticsearch.
* `rule_name`: The name of the corresponding rule.
* `until`: The timestamp when alerts will begin being sent again.
* `exponent`: The exponential factor which multiplies `realert`. The length of this silence is equal to `realert` * 2**exponent. This will be 0 unless `exponential_realert` is set.
Whenever an alert is triggered, ElastAlert will check for a matching `silence` document, and if the `until` timestamp is in the future, it will ignore the alert completely. See the [Running ElastAlert](index.html#runningelastalert) section for information on how to silence an alert.

Adding a New Rule Type[¶](#adding-a-new-rule-type)
---

This document describes how to create a new rule type. Built-in rule types live in `elastalert/ruletypes.py` and are subclasses of `RuleType`. At the minimum, your rule needs to implement `add_data`. Your class may implement several functions from `RuleType`:

```
class AwesomeNewRule(RuleType):
    # ...
    def add_data(self, data):
        # ...
    def get_match_str(self, match):
        # ...
    def garbage_collect(self, timestamp):
        # ...
```

You can import new rule types by specifying the type as `module.file.RuleName`, where module is the name of a Python module, or folder containing `__init__.py`, and file is the name of the Python file containing a `RuleType` subclass named `RuleName`.

### Basics[¶](#basics)

The `RuleType` instance remains in memory while ElastAlert is running, receives data, keeps track of its state, and generates matches. Several important member properties are created in the `__init__` method of `RuleType`:

`self.rules`: This dictionary is loaded from the rule configuration file. If there is a `timeframe` configuration option, this will be automatically converted to a `datetime.timedelta` object when the rules are loaded.

`self.matches`: This is where ElastAlert checks for matches from the rule. Whatever information is relevant to the match (generally coming from the fields in Elasticsearch) should be put into a dictionary object and added to `self.matches`. ElastAlert will pop items out periodically and send alerts based on these objects. It is recommended that you use `self.add_match(match)` to add matches.
In addition to appending to `self.matches`, `self.add_match` will convert the datetime `@timestamp` back into an ISO8601 timestamp.

`self.required_options`: This is a set of options that must exist in the configuration file. ElastAlert will ensure that all of these fields exist before trying to instantiate a `RuleType` instance.

### add_data(self, data):[¶](#add-data-self-data)

When ElastAlert queries Elasticsearch, it will pass all of the hits to the rule type by calling `add_data`. `data` is a list of dictionary objects which contain all of the fields in `include`, `query_key` and `compare_key` if they exist, and `@timestamp` as a datetime object. They will always come in chronological order sorted by `@timestamp`.

### get_match_str(self, match):[¶](#get-match-str-self-match)

Alerts will call this function to get a human-readable string about a match for an alert. `match` will be the same object that was added to `self.matches`, and `rules` the same as `self.rules`. The `RuleType` base implementation will return an empty string. Note that by default, the alert text will already contain the key-value pairs from the match. This should return a string that gives some information about the match in the context of this specific RuleType.

### garbage_collect(self, timestamp):[¶](#garbage-collect-self-timestamp)

This will be called after ElastAlert has run over a time period ending in `timestamp` and should be used to clear any state that may be obsolete as of `timestamp`. `timestamp` is a datetime object.

### Tutorial[¶](#tutorial)

As an example, we are going to create a rule type for detecting suspicious logins. Let’s imagine the data we are querying is login events that contain an IP address, username and timestamp. Our configuration will take a list of usernames and a time range, and alert if a login occurs in the time range.
First, let’s create a modules folder in the base ElastAlert folder:

```
$ mkdir elastalert_modules
$ cd elastalert_modules
$ touch __init__.py
```

Now, in a file named `my_rules.py`, add

```
import dateutil.parser

from elastalert.ruletypes import RuleType

# elastalert.util includes useful utility functions
# such as converting from timestamp to datetime obj
from elastalert.util import ts_to_dt

class AwesomeRule(RuleType):
    # By setting required_options to a set of strings
    # You can ensure that the rule config file specifies all
    # of the options. Otherwise, ElastAlert will throw an exception
    # when trying to load the rule.
    required_options = set(['time_start', 'time_end', 'usernames'])

    # add_data will be called each time Elasticsearch is queried.
    # data is a list of documents from Elasticsearch, sorted by timestamp,
    # including all the fields that the config specifies with "include"
    def add_data(self, data):
        for document in data:
            # To access config options, use self.rules
            if document['username'] in self.rules['usernames']:
                # Convert the timestamp to a time object
                login_time = document['@timestamp'].time()

                # Convert time_start and time_end to time objects
                time_start = dateutil.parser.parse(self.rules['time_start']).time()
                time_end = dateutil.parser.parse(self.rules['time_end']).time()

                # If the time falls between start and end
                if login_time > time_start and login_time < time_end:
                    # To add a match, use self.add_match
                    self.add_match(document)

    # The results of get_match_str will appear in the alert text
    def get_match_str(self, match):
        return "%s logged in between %s and %s" % (match['username'],
                                                   self.rules['time_start'],
                                                   self.rules['time_end'])

    # garbage_collect is called indicating that ElastAlert has already been run up to timestamp
    # It is useful for knowing that there were no query results from Elasticsearch because
    # add_data will not be called with an empty list
    def garbage_collect(self, timestamp):
        pass
```

In the rule configuration file,
`example_rules/example_login_rule.yaml`, we are going to specify this rule by writing

```
name: "Example login rule"
es_host: elasticsearch.example.com
es_port: 14900
type: "elastalert_modules.my_rules.AwesomeRule"
# Alert if admin, userXYZ or foobaz log in between 8 PM and midnight
time_start: "20:00"
time_end: "24:00"
usernames:
- "admin"
- "userXYZ"
- "foobaz"
# We require the username field from documents
include:
- "username"
alert:
- debug
```

ElastAlert will attempt to import the rule with `from elastalert_modules.my_rules import AwesomeRule`. This means that the folder must be in a location where it can be imported as a Python module.

An alert from this rule will look something like:

```
Example login rule

userXYZ logged in between 20:00 and 24:00

@timestamp: 2015-03-02T22:23:24Z
username: userXYZ
```

Adding a New Alerter[¶](#adding-a-new-alerter)
---

Alerters are subclasses of `Alerter`, found in `elastalert/alerts.py`. They are given matches and perform some action based on them. Your alerter needs to implement two member functions, and will look something like this:

```
class AwesomeNewAlerter(Alerter):
    required_options = set(['some_config_option'])

    def alert(self, matches):
        ...

    def get_info(self):
        ...
```

You can import alert types by specifying the type as `module.file.AlertName`, where module is the name of a Python module, and file is the name of the Python file containing an `Alerter` subclass named `AlertName`.

### Basics[¶](#basics)

The alerter class will be instantiated when ElastAlert starts, and be periodically passed matches through the `alert` method. ElastAlert also writes back info about the alert into Elasticsearch that it obtains through `get_info`. Several important member properties:

`self.required_options`: This is a set containing names of configuration options that must be present. ElastAlert will not instantiate the alert if any are missing.

`self.rule`: The dictionary containing the rule configuration.
All options specific to the alert should be in the rule configuration file and can be accessed here. `self.pipeline`: This is a dictionary object that serves to transfer information between alerts. When an alert is triggered, a new empty pipeline object will be created, and each alerter can add or receive information from it. Note that alerters are called in the order they are defined in the rule file. For example, the JIRA alerter will add its ticket number to the pipeline, and the email alerter will add that link if it’s present in the pipeline. ### alert(self, matches):[¶](#alert-self-match) ElastAlert will call this function to send an alert. `matches` is a list of dictionary objects with information about the match. You can get a nice string representation of the match by calling `self.rule['type'].get_match_str(match, self.rule)`. If this method raises an exception, it will be caught by ElastAlert and the alert will be marked as unsent and saved for later. ### get_info(self):[¶](#get-info-self) This function is called to get information about the alert to save back to Elasticsearch. It should return a dictionary, which is uploaded directly to Elasticsearch, and should contain useful information about the alert such as the type, recipients, parameters, etc. ### Tutorial[¶](#tutorial) Let’s create a new alert that will write alerts to a local output file. First, create a modules folder in the base ElastAlert folder: ``` $ mkdir elastalert_modules
$ cd elastalert_modules
$ touch __init__.py ``` Now, in a file named `my_alerts.py`, add ``` from elastalert.alerts import Alerter, BasicMatchString


class AwesomeNewAlerter(Alerter):
    # By setting required_options to a set of strings,
    # you can ensure that the rule config file specifies all
    # of the options. Otherwise, ElastAlert will throw an exception
    # when trying to load the rule.
    required_options = set(['output_file_path'])

    # Alert is called
    def alert(self, matches):
        # Matches is a list of match dictionaries.
        # It contains more than one match when the alert has
        # the aggregation option set
        for match in matches:
            # Config options can be accessed with self.rule
            with open(self.rule['output_file_path'], "a") as output_file:
                # basic_match_string will transform the match into the default
                # human readable string format
                match_string = str(BasicMatchString(self.rule, match))
                output_file.write(match_string)

    # get_info is called after an alert is sent to get data that is written back
    # to Elasticsearch in the field "alert_info"
    # It should return a dict of information relevant to what the alert does
    def get_info(self):
        return {'type': 'Awesome Alerter',
                'output_file': self.rule['output_file_path']} ``` In the rule configuration file, we are going to specify the alert by writing ``` alert: "elastalert_modules.my_alerts.AwesomeNewAlerter"
output_file_path: "/tmp/alerts.log" ``` ElastAlert will attempt to import the alert with `from elastalert_modules.my_alerts import AwesomeNewAlerter`. This means that the folder must be in a location where it can be imported as a Python module. Writing Filters For Rules[¶](#writing-filters-for-rules)
---

This document describes how to create a filter section for your rule config file. The filters used in rules are part of the Elasticsearch query DSL, further documentation for which can be found at <https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html>. This document contains a small subset of particularly useful filters. The filter section is passed to Elasticsearch exactly as follows: ``` filter:
  and:
    filters:
    - [filters from rule.yaml] ``` Every result that matches these filters will be passed to the rule for processing. ### Common Filter Types:[¶](#common-filter-types) #### query_string[¶](#query-string) The query_string type follows the Lucene query format and can be used for partial or full matches to multiple fields.
See <http://lucene.apache.org/core/2_9_4/queryparsersyntax.html> for more information: ``` filter:
- query:
    query_string:
      query: "username: bob"
- query:
    query_string:
      query: "_type: login_logs"
- query:
    query_string:
      query: "field: value OR otherfield: othervalue"
- query:
    query_string:
      query: "this: that AND these: those" ``` #### term[¶](#term) The term type allows for exact field matches: ``` filter:
- term:
    name_field: "bob"
- term:
    _type: "login_logs" ``` Note that a term query may not behave as expected if a field is analyzed. By default, many string fields will be tokenized by whitespace, and a term query for “foo bar” may not match a field that appears to have the value “foo bar”, unless it is not analyzed. Conversely, a term query for “foo” will match analyzed strings “foo bar” and “foo baz”. For full-text matching on analyzed fields, use query_string. See <https://www.elastic.co/guide/en/elasticsearch/guide/current/term-vs-full-text.html> #### [terms](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-terms-query.html)[¶](#id1) Terms allows for easy combination of multiple term filters: ``` filter:
- terms:
    field: ["value1", "value2"] # value1 OR value2 ``` You can also match on multiple fields: ``` - terms:
    fieldX: ["value1", "value2"]
    fieldY: ["something", "something_else"]
    fieldZ: ["foo", "bar", "baz"] ``` #### wildcard[¶](#wildcard) For wildcard matches: ``` filter:
- query:
    wildcard:
      field: "foo*bar" ``` #### range[¶](#range) For ranges on fields: ``` filter:
- range:
    status_code:
      from: 500
      to: 599 ``` #### Negation, and, or[¶](#negation-and-or) For Elasticsearch 2.X, any of the filters can be embedded in `not`, `and`, and `or`: ``` filter:
- or:
    - term:
        field: "value"
    - wildcard:
        field: "foo*bar"
    - and:
        - not:
            term:
              field: "value"
        - not:
            term:
              _type: "something" ``` For Elasticsearch 5.x, this will not work; to implement boolean logic, use query strings: ``` filter:
- query:
    query_string:
      query: "somefield: somevalue OR foo: bar" ``` ### Loading Filters Directly From Kibana 3[¶](#loading-filters-directly-from-kibana-3) There are two ways to load filters directly from a Kibana 3 dashboard. You can set your filter to: ``` filter:
  download_dashboard: "My Dashboard Name" ``` and when ElastAlert starts, it will download the dashboard schema from Elasticsearch and use the filters from that. However, if the dashboard name changes or if there are connectivity problems when ElastAlert starts, the rule will not load and ElastAlert will exit with an error like “Could not download filters for ..” The second way is to generate a config file once using the Kibana dashboard. To do this, run `elastalert-rule-from-kibana`. ``` $ elastalert-rule-from-kibana
Elasticsearch host: elasticsearch.example.com
Elasticsearch port: 14900
Dashboard name: My Dashboard

Partial Config file
---
name: My Dashboard
es_host: elasticsearch.example.com
es_port: 14900
filter:
- query:
    query_string: {query: '_exists_:log.message'}
- query:
    query_string: {query: 'some_field:12345'} ``` Enhancements[¶](#enhancements)
---

Enhancements are modules which let you modify a match before an alert is sent. They should subclass `BaseEnhancement`, found in `elastalert/enhancements.py`. They can be added to rules using the `match_enhancements` option: ``` match_enhancements:
- module.file.MyEnhancement ``` where module is the name of a Python module, or folder containing `__init__.py`, and file is the name of the Python file containing a `BaseEnhancement` subclass named `MyEnhancement`. A special exception class `DropMatchException` can be used in enhancements to drop matches if custom conditions are met. For example: ``` class MyEnhancement(BaseEnhancement):
    def process(self, match):
        # Drops a match if "field_1" == "field_2"
        if match['field_1'] == match['field_2']:
            raise DropMatchException() ``` ### Example[¶](#example) As an example enhancement, let’s add a link to a whois website.
The match must contain a field named domain, and the enhancement will add an entry named domain_whois_link. First, create a modules folder for the enhancement in the ElastAlert directory. ``` $ mkdir elastalert_modules
$ cd elastalert_modules
$ touch __init__.py ``` Now, in a file named `my_enhancements.py`, add ``` from elastalert.enhancements import BaseEnhancement


class MyEnhancement(BaseEnhancement):
    # The enhancement is run against every match
    # The match is passed to the process function where it can be modified in any way
    # ElastAlert will do this for each enhancement linked to a rule
    def process(self, match):
        if 'domain' in match:
            url = "http://who.is/whois/%s" % (match['domain'])
            match['domain_whois_link'] = url ``` Enhancements will not automatically be run. Inside the rule configuration file, you need to point it to the enhancement(s) that it should run by setting the `match_enhancements` option: ``` match_enhancements:
- "elastalert_modules.my_enhancements.MyEnhancement" ``` Rules Loaders[¶](#rules-loaders)
---

RulesLoaders are subclasses of `RulesLoader`, found in `elastalert/loaders.py`. They are used to gather rules for a particular source. Your RulesLoader needs to implement three member functions, and will look something like this: ``` class AwesomeNewRulesLoader(RulesLoader):
    def get_names(self, conf, use_rule=None):
        ...

    def get_hashes(self, conf, use_rule=None):
        ...

    def get_yaml(self, rule):
        ... ``` You can import loaders by specifying the type as `module.file.RulesLoaderName`, where module is the name of a Python module, and file is the name of the Python file containing a `RulesLoader` subclass named `RulesLoaderName`. ### Example[¶](#example) As an example loader, let’s retrieve rules from a database rather than from the local file system. First, create a modules folder for the loader in the ElastAlert directory.
``` $ mkdir elastalert_modules
$ cd elastalert_modules
$ touch __init__.py ``` Now, in a file named `mongo_loader.py`, add ``` from pymongo import MongoClient
from elastalert.loaders import RulesLoader
import yaml


class MongoRulesLoader(RulesLoader):
    def __init__(self, conf):
        super(MongoRulesLoader, self).__init__(conf)
        self.client = MongoClient(conf['mongo_url'])
        self.db = self.client[conf['mongo_db']]
        self.cache = {}

    def get_names(self, conf, use_rule=None):
        if use_rule:
            return [use_rule]

        rules = []
        self.cache = {}
        for rule in self.db.rules.find():
            self.cache[rule['name']] = yaml.load(rule['yaml'])
            rules.append(rule['name'])

        return rules

    def get_hashes(self, conf, use_rule=None):
        if use_rule:
            return [use_rule]

        hashes = {}
        self.cache = {}
        for rule in self.db.rules.find():
            # Cache the parsed YAML so get_yaml always returns a dict
            self.cache[rule['name']] = yaml.load(rule['yaml'])
            hashes[rule['name']] = rule['hash']

        return hashes

    def get_yaml(self, rule):
        if rule in self.cache:
            return self.cache[rule]

        self.cache[rule] = yaml.load(self.db.rules.find_one({'name': rule})['yaml'])
        return self.cache[rule] ``` Finally, you need to specify in your ElastAlert configuration file that MongoRulesLoader should be used instead of the default FileRulesLoader, so in your `elastalert.conf` file: ``` rules_loader: "elastalert_modules.mongo_loader.MongoRulesLoader" ``` Signing requests to Amazon Elasticsearch service[¶](#signing-requests-to-amazon-elasticsearch-service)
---

When using the Amazon Elasticsearch service, you need to secure your Elasticsearch from the outside. Currently, there is no way to secure your Elasticsearch using network firewall rules, so the only way is to sign the requests using the access key and secret key for a role or user with permissions on the Elasticsearch service. You can sign requests to AWS using any of the standard AWS methods of providing credentials.
- Environment Variables, `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
- AWS Config or Credential Files, `~/.aws/config` and `~/.aws/credentials`
- AWS Instance Profiles, using the EC2 Metadata service

### Using an Instance Profile[¶](#using-an-instance-profile) Typically, you’ll deploy ElastAlert on a running EC2 instance on AWS. You can assign a role to this instance that gives it permissions to read from and write to the Elasticsearch service. When using an Instance Profile, you will need to specify the `aws_region` in the configuration file or set the `AWS_DEFAULT_REGION` environment variable. ### Using AWS profiles[¶](#using-aws-profiles) You can also create a user with permissions on the Elasticsearch service and tell ElastAlert to authenticate itself using that user. First, create an AWS profile on the machine where you’d like to run ElastAlert for the user with permissions. You can use the environment variables `AWS_DEFAULT_PROFILE` and `AWS_DEFAULT_REGION`, or add two options to the configuration file:

- `aws_region`: The AWS region where you want to operate.
- `profile`: The name of the AWS profile to use to sign the requests.

Indices and Tables[¶](#indices-and-tables)
===

* [Index](genindex.html)
* [Module Index](py-modindex.html)
* [Search Page](search.html)
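The time-window check used by the `AwesomeRule` custom rule tutorial above can be exercised on its own. This is a standalone sketch (no ElastAlert import; the config values and the sample document are made up, and `parse_time` stands in for `dateutil.parser`):

```python
import datetime

# Hypothetical rule options mirroring the tutorial's example config
rules = {'time_start': '20:00', 'time_end': '24:00',
         'usernames': ['admin', 'userXYZ', 'foobaz']}

def parse_time(value):
    # Stand-in for dateutil.parser.parse(...).time() on "HH:MM" strings;
    # "24:00" (midnight) is treated as the end of the day
    if value == '24:00':
        return datetime.time(23, 59, 59)
    hour, minute = map(int, value.split(':'))
    return datetime.time(hour, minute)

def is_match(document):
    # Same logic as AwesomeRule.add_data, applied to a single document
    if document['username'] not in rules['usernames']:
        return False
    login_time = document['@timestamp'].time()
    time_start = parse_time(rules['time_start'])
    time_end = parse_time(rules['time_end'])
    return time_start < login_time < time_end

doc = {'username': 'userXYZ',
       '@timestamp': datetime.datetime(2015, 3, 2, 22, 23, 24)}
print(is_match(doc))  # prints True: a 22:23 login by userXYZ is in the window
```

A document matches only when both conditions hold: the username is in the watch list and the login time falls strictly inside the window, exactly as in the tutorial's `add_data`.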
ArDec
cran
R
Package ‘ArDec’

October 12, 2022

Version 2.1-1
License GPL (>= 2)
Title Time Series Autoregressive-Based Decomposition
Description Autoregressive-based decomposition of a time series based on the approach in West (1997). Particular cases include the extraction of trend and seasonal components.
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Date 2022-05-31
NeedsCompilation no
Repository CRAN
Date/Publication 2022-05-31 22:20:26 UTC

R topics documented: ardec, ardec.lm, ardec.periodic, ardec.trend, tempEng

ardec: Time series autoregressive decomposition

Description

Decomposition of a time series into latent subseries from a fitted autoregressive model.

Usage

ardec(x, coef, ...)

Arguments

x: time series
coef: autoregressive parameters of AR(p) model
...: additional arguments for specific methods

Details

If an observed time series can be adequately described by a (possibly high order) autoregressive AR(p) process, a constructive result (West, 1997) yields a time series decomposition in terms of latent components following either AR(1) or AR(2) processes, depending on the eigenvalues of the state evolution matrix. Complex eigenvalues r exp(iw) correspond to pseudo-periodic oscillations as a damped sine wave with fixed period (2pi/w) and damping factor r. Real eigenvalues correspond to a first order autoregressive process with parameter r.

Value

A list with components:

period: periods of latent components
modulus: damping factors of latent components
comps: matrix of latent components

Author(s)

<NAME>

References

West, M. (1997), Time series decomposition. Biometrika, 84, 489-494.
West, M. and <NAME>. (1997), Bayesian Forecasting and Dynamic Models, Springer-Verlag.

Examples

data(tempEng)
coef=ardec.lm(tempEng)$coefficients
# warning: running the next command can be time consuming!
decomposition=ardec(tempEng,coef)

ardec.lm: Fit an autoregressive model as a linear regression

Description

Function ardec.lm fits an autoregressive model of order p, AR(p), to a time series through a linear least squares regression.

Usage

ardec.lm(x)

Arguments

x: time series

Value

For ardec.lm, an object of class "lm".

Author(s)

<NAME>

References

West, M. (1995), Bayesian inference in cyclical component dynamic linear models. Journal of the American Statistical Association, 90, 1301-1312.

See Also

ar, lm

Examples

data(tempEng)
model=ardec.lm(tempEng)

ardec.periodic: Extraction of individual periodic components from a monthly time series

Description

Function ardec.periodic extracts a periodic component from the autoregressive decomposition of a monthly time series.

Usage

ardec.periodic(x, per, tol = 0.95)

Arguments

x: time series
per: period of the component to be extracted
tol: tolerance for the period of the component

Value

A list with components:

period: period for the annual component
modulus: damping factor for the annual component
component: extracted component

Author(s)

<NAME>

Examples

data(tempEng)
ardec.periodic(tempEng,per=12)

ardec.trend: Estimation of the trend component from a monthly time series

Description

Function ardec.trend extracts the trend component from the autoregressive decomposition of a monthly time series.

Usage

ardec.trend(x)

Arguments

x: time series

Value

A list with components:

modulus: damping factor for the trend component
trend: trend component

Author(s)

<NAME>

Examples

data(co2)
ardec.trend(co2)

tempEng: Time series of monthly temperature values

Description

Monthly temperature in Central England from 1723-1970

Usage

data(tempEng)

Format

Time-Series [1:2976] from 1723 to 1971

Source

<NAME>. and <NAME>. (1994) Time Series Modelling of Water Resources and Environmental Systems, Elsevier

Examples

data(tempEng)
lodash310patch
npm
JavaScript
lodash v3.10.1
===

The [modern build](https://github.com/lodash/lodash/wiki/Build-Differences) of [lodash](https://lodash.com/) exported as [Node.js](http://nodejs.org/)/[io.js](https://iojs.org/) modules.

Generated using [lodash-cli](https://www.npmjs.com/package/lodash-cli):

```
$ lodash modularize modern exports=node -o ./
$ lodash modern -d -o ./index.js
```

Installation
---

Using npm:

```
$ {sudo -H} npm i -g npm
$ npm i --save lodash
```

In Node.js/io.js:

```
// load the modern build
var _ = require('lodash');
// or a method category
var array = require('lodash/array');
// or a method (great for smaller builds with browserify/webpack)
var chunk = require('lodash/array/chunk');
```

See the [package source](https://github.com/lodash/lodash/tree/3.10.1-npm) for more details.

**Note:** Don’t assign values to the [special variable](http://nodejs.org/api/repl.html#repl_repl_features) `_` when in the REPL. Install [n_](https://www.npmjs.com/package/n_) for a REPL that includes lodash by default.

Module formats
---

lodash is also available in a variety of other builds & module formats.
* npm packages for [modern](https://www.npmjs.com/package/lodash), [compatibility](https://www.npmjs.com/package/lodash-compat), & [per method](https://www.npmjs.com/browse/keyword/lodash-modularized) builds
* AMD modules for [modern](https://github.com/lodash/lodash/tree/3.10.1-amd) & [compatibility](https://github.com/lodash/lodash-compat/tree/3.10.1-amd) builds
* ES modules for the [modern](https://github.com/lodash/lodash/tree/3.10.1-es) build

Further Reading
---

* [API Documentation](https://lodash.com/docs)
* [Build Differences](https://github.com/lodash/lodash/wiki/Build-Differences)
* [Changelog](https://github.com/lodash/lodash/wiki/Changelog)
* [Roadmap](https://github.com/lodash/lodash/wiki/Roadmap)
* [More Resources](https://github.com/lodash/lodash/wiki/Resources)

Features
---

* ~100% [code coverage](https://coveralls.io/r/lodash)
* Follows [semantic versioning](http://semver.org/) for releases
* [Lazily evaluated](http://filimanjaro.com/blog/2014/introducing-lazy-evaluation/) chaining
* [_(…)](https://lodash.com/docs#_) supports implicit chaining
* [_.ary](https://lodash.com/docs#ary) & [_.rearg](https://lodash.com/docs#rearg) to change function argument limits & order
* [_.at](https://lodash.com/docs#at) for cherry-picking collection values
* [_.attempt](https://lodash.com/docs#attempt) to execute functions which may error without a try-catch
* [_.before](https://lodash.com/docs#before) to complement [_.after](https://lodash.com/docs#after)
* [_.bindKey](https://lodash.com/docs#bindKey) for binding [*“lazy”*](http://michaux.ca/articles/lazy-function-definition-pattern) defined methods
* [_.chunk](https://lodash.com/docs#chunk) for splitting an array into chunks of a given size
* [_.clone](https://lodash.com/docs#clone) supports shallow cloning of `Date` & `RegExp` objects
* [_.cloneDeep](https://lodash.com/docs#cloneDeep) for deep cloning arrays & objects
* [_.curry](https://lodash.com/docs#curry) & [_.curryRight](https://lodash.com/docs#curryRight) for creating [curried](http://hughfdjackson.com/javascript/why-curry-helps/) functions
* [_.debounce](https://lodash.com/docs#debounce) & [_.throttle](https://lodash.com/docs#throttle) are cancelable & accept options for more control
* [_.defaultsDeep](https://lodash.com/docs#defaultsDeep) for recursively assigning default properties
* [_.fill](https://lodash.com/docs#fill) to fill arrays with values
* [_.findKey](https://lodash.com/docs#findKey) for finding keys
* [_.flow](https://lodash.com/docs#flow) to complement [_.flowRight](https://lodash.com/docs#flowRight) (a.k.a `_.compose`)
* [_.forEach](https://lodash.com/docs#forEach) supports exiting early
* [_.forIn](https://lodash.com/docs#forIn) for iterating all enumerable properties
* [_.forOwn](https://lodash.com/docs#forOwn) for iterating own properties
* [_.get](https://lodash.com/docs#get) & [_.set](https://lodash.com/docs#set) for deep property getting & setting
* [_.gt](https://lodash.com/docs#gt), [_.gte](https://lodash.com/docs#gte), [_.lt](https://lodash.com/docs#lt), & [_.lte](https://lodash.com/docs#lte) relational methods
* [_.inRange](https://lodash.com/docs#inRange) for checking whether a number is within a given range
* [_.isNative](https://lodash.com/docs#isNative) to check for native functions
* [_.isPlainObject](https://lodash.com/docs#isPlainObject) & [_.toPlainObject](https://lodash.com/docs#toPlainObject) to check for & convert to `Object` objects
* [_.isTypedArray](https://lodash.com/docs#isTypedArray) to check for typed arrays
* [_.mapKeys](https://lodash.com/docs#mapKeys) for mapping keys to an object
* [_.matches](https://lodash.com/docs#matches) supports deep object comparisons
* [_.matchesProperty](https://lodash.com/docs#matchesProperty) to complement [_.matches](https://lodash.com/docs#matches) & [_.property](https://lodash.com/docs#property)
* [_.merge](https://lodash.com/docs#merge) for a deep [_.extend](https://lodash.com/docs#extend)
* [_.method](https://lodash.com/docs#method) & [_.methodOf](https://lodash.com/docs#methodOf) to create functions that invoke methods
* [_.modArgs](https://lodash.com/docs#modArgs) for more advanced functional composition
* [_.parseInt](https://lodash.com/docs#parseInt) for consistent cross-environment behavior
* [_.pull](https://lodash.com/docs#pull), [_.pullAt](https://lodash.com/docs#pullAt), & [_.remove](https://lodash.com/docs#remove) for mutating arrays
* [_.random](https://lodash.com/docs#random) supports returning floating-point numbers
* [_.restParam](https://lodash.com/docs#restParam) & [_.spread](https://lodash.com/docs#spread) for applying rest parameters & spreading arguments to functions
* [_.runInContext](https://lodash.com/docs#runInContext) for collisionless mixins & easier mocking
* [_.slice](https://lodash.com/docs#slice) for creating subsets of array-like values
* [_.sortByAll](https://lodash.com/docs#sortByAll) & [_.sortByOrder](https://lodash.com/docs#sortByOrder) for sorting by multiple properties & orders
* [_.support](https://lodash.com/docs#support) for flagging environment features
* [_.template](https://lodash.com/docs#template) supports [*“imports”*](https://lodash.com/docs#templateSettings-imports) options & [ES template delimiters](http://people.mozilla.org/~jorendorff/es6-draft.html#sec-template-literal-lexical-components)
* [_.transform](https://lodash.com/docs#transform) as a powerful alternative to [_.reduce](https://lodash.com/docs#reduce) for transforming objects
* [_.unzipWith](https://lodash.com/docs#unzipWith) & [_.zipWith](https://lodash.com/docs#zipWith) to specify how grouped values should be combined
* [_.valuesIn](https://lodash.com/docs#valuesIn) for getting values of all enumerable properties
* [_.xor](https://lodash.com/docs#xor) to complement [_.difference](https://lodash.com/docs#difference), [_.intersection](https://lodash.com/docs#intersection), & [_.union](https://lodash.com/docs#union)
* [_.add](https://lodash.com/docs#add), [_.round](https://lodash.com/docs#round), [_.sum](https://lodash.com/docs#sum), & [more](https://lodash.com/docs "_.ceil & _.floor") math methods
* [_.bind](https://lodash.com/docs#bind), [_.curry](https://lodash.com/docs#curry), [_.partial](https://lodash.com/docs#partial), & [more](https://lodash.com/docs "_.bindKey, _.curryRight, _.partialRight") support customizable argument placeholders
* [_.capitalize](https://lodash.com/docs#capitalize), [_.trim](https://lodash.com/docs#trim), & [more](https://lodash.com/docs "_.camelCase, _.deburr, _.endsWith, _.escapeRegExp, _.kebabCase, _.pad, _.padLeft, _.padRight, _.repeat, _.snakeCase, _.startCase, _.startsWith, _.trimLeft, _.trimRight, _.trunc, _.words") string methods
* [_.clone](https://lodash.com/docs#clone), [_.isEqual](https://lodash.com/docs#isEqual), & [more](https://lodash.com/docs "_.assign, _.cloneDeep, _.merge") accept customizer callbacks
* [_.dropWhile](https://lodash.com/docs#dropWhile), [_.takeWhile](https://lodash.com/docs#takeWhile), & [more](https://lodash.com/docs "_.drop, _.dropRight, _.dropRightWhile, _.take, _.takeRight, _.takeRightWhile") to complement [_.first](https://lodash.com/docs#first), [_.initial](https://lodash.com/docs#initial), [_.last](https://lodash.com/docs#last), & [_.rest](https://lodash.com/docs#rest)
* [_.findLast](https://lodash.com/docs#findLast), [_.findLastKey](https://lodash.com/docs#findLastKey), & [more](https://lodash.com/docs "_.curryRight, _.dropRight, _.dropRightWhile, _.flowRight, _.forEachRight, _.forInRight, _.forOwnRight, _.padRight, partialRight, _.takeRight, _.trimRight, _.takeRightWhile") right-associative methods
* [_.includes](https://lodash.com/docs#includes), [_.toArray](https://lodash.com/docs#toArray), & [more](https://lodash.com/docs "_.at, _.countBy, _.every, _.filter, _.find, _.findLast, _.findWhere, _.forEach, _.forEachRight, _.groupBy, _.indexBy, _.invoke, _.map, _.max, _.min, _.partition, _.pluck, _.reduce, _.reduceRight, _.reject, _.shuffle, _.size, _.some, _.sortBy, _.sortByAll, _.sortByOrder, _.sum, _.where") accept strings
* [_#commit](https://lodash.com/docs#prototype-commit) & [_#plant](https://lodash.com/docs#prototype-plant) for working with chain sequences
* [_#thru](https://lodash.com/docs#thru) to pass values thru a chain sequence

Support
---

Tested in Chrome 43-44, Firefox 38-39, IE 6-11, MS Edge, Safari 5-8, ChakraNode 0.12.2, io.js 2.5.0, Node.js 0.8.28, 0.10.40, & 0.12.7, PhantomJS 1.9.8, RingoJS 0.11, & Rhino 1.7.6. Automated [browser](https://saucelabs.com/u/lodash) & [CI](https://travis-ci.org/lodash/lodash/) test runs are available. Special thanks to [Sauce Labs](https://saucelabs.com/) for providing automated browser testing.

Readme
---

### Keywords

* modules
* stdlib
* util
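Some of the methods listed above are easiest to understand by example. Here is a minimal sketch of what `_.chunk` does, implemented in plain JavaScript so it runs without lodash installed (this is not lodash's source; the real method also guards and coerces its inputs):

```javascript
// chunk(array, size): split an array into groups of `size` elements,
// with the final group holding the remainder.
function chunk(array, size) {
  var result = [];
  for (var i = 0; i < array.length; i += size) {
    result.push(array.slice(i, i + size));
  }
  return result;
}

console.log(chunk(['a', 'b', 'c', 'd'], 2)); // [["a","b"],["c","d"]]
console.log(chunk(['a', 'b', 'c', 'd'], 3)); // [["a","b","c"],["d"]]
```

The two example outputs mirror the documented behavior: evenly divisible input yields equal groups, and any remainder forms a shorter final group.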
@types/http-link-header
npm
JavaScript
[Installation](#installation)
===

> `npm install --save @types/http-link-header`

[Summary](#summary)
===

This package contains type definitions for http-link-header (<https://github.com/jhermsmeier/node-http-link-header>).

[Details](#details)
===

Files were exported from <https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/http-link-header>.

[index.d.ts](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/http-link-header/index.d.ts)
---

```
/// <reference types="node" />

export = Link;

/**
 * Parse & format HTTP link headers according to RFC 8288
 */
declare class Link {
    /**
     * Creates a new Link by parsing a link header beginning at the provided
     * offset
     * @param value The header to parse
     * @param offset The offset to start at. Defaults to 0.
     * @return A new Link
     */
    static parse(value: string, offset?: number): Link;

    /**
     * Determines whether an encoding can be
     * natively handled with a `Buffer`
     */
    static isCompatibleEncoding(value: string): boolean;

    static isSingleOccurenceAttr(attr: string): boolean;

    static isTokenAttr(attr: string): boolean;

    static escapeQuotes(value: string): string;

    static formatExtendedAttribute(attr: string, data: Link.LinkData): string;

    /**
     * Format a given attribute and its value
     */
    static formatAttribute(attr: string, value: string | Buffer | Array<string | Buffer>): string;

    /**
     * Link
     * @param value Link header to parse
     */
    constructor(value?: string);

    refs: Link.Reference[];

    has(attribute: string, value: string): boolean;

    /**
     * Get refs where the given attribute has a given value
     * @param attribute Attribute name
     * @param value Value to match
     * @return An array of references
     */
    get(attribute: string, value: string): Link.Reference[];

    /**
     * Get refs with given relation type
     * @param value The rel value
     * @return An array of references
     */
    rel(value: string): Link.Reference[];

    set(ref: Link.Reference): Link;

    /**
     * Parse a link header beginning at the provided offset
     * @param value The header to parse
     * @param offset The offset to start at. Defaults to 0.
     * @return The calling instance
     */
    parse(value: string, offset?: number): Link;
}

declare namespace Link {
    interface Reference {
        uri: string;
        rel: string;
        [index: string]: string;
    }

    interface LinkData {
        /** @default 'utf-8' */
        encoding?: string | undefined;
        /** @default 'en' */
        language?: string | undefined;
        value: string | Buffer;
    }
}
```

### [Additional Details](#additional-details)

* Last updated: Wed, 18 Oct 2023 01:17:35 GMT
* Dependencies: [@types/node](https://npmjs.com/package/@types/node)

[Credits](#credits)
===

These definitions were written by [<NAME>](https://github.com/screendriver), [<NAME>](https://github.com/nloomans), [<NAME>](https://github.com/lummish), and [<NAME>](https://github.com/peterblazejewicz).

Readme
---

### Keywords

none
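To make the `Reference` shape in the typings above concrete (`{ uri, rel, ...extra attributes }`), here is a simplified sketch of how a Link header maps onto it. This is not the library's parser: it only handles the basic `<uri>; key="value"` form with comma-separated references, while http-link-header itself implements the full RFC 8288 grammar:

```javascript
// Parse a simple Link header into objects shaped like Link.Reference.
// Handles only `<uri>; key="value"` segments separated by commas.
function parseLinkHeader(header) {
  return header.split(/,\s*(?=<)/).map(function (part) {
    var segments = part.split(';');
    // First segment is the URI in angle brackets
    var uri = segments.shift().trim().replace(/^<|>$/g, '');
    var ref = { uri: uri };
    // Remaining segments are key="value" attributes (e.g. rel)
    segments.forEach(function (segment) {
      var eq = segment.indexOf('=');
      var key = segment.slice(0, eq).trim();
      var value = segment.slice(eq + 1).trim().replace(/^"|"$/g, '');
      ref[key] = value;
    });
    return ref;
  });
}

var refs = parseLinkHeader(
  '<https://example.com/?page=2>; rel="next", <https://example.com/?page=9>; rel="last"'
);
console.log(refs[0].rel, refs[0].uri);
```

Each parsed object carries `uri`, `rel`, and any further string attributes, which is exactly what the `Reference` index signature (`[index: string]: string`) allows.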
gdnative
rust
Rust
Crate gdnative
===

Rust bindings for the Godot game engine
---

This crate contains high-level wrappers around the Godot game engine’s GDNative API. Some of the types were automatically generated from the engine’s JSON API description, and some other types are hand-made wrappers around the core C types.

### Core types

Wrappers for most core types expose safe Rust interfaces, and it’s unnecessary to mind memory management most of the time. The exceptions are `VariantArray` and `Dictionary`, internally reference-counted collections with *interior mutability* in Rust parlance. These types are modelled using the *typestate* pattern to enforce the official thread-safety guidelines. For more information, read the type-level documentation for these types.

Since it is easy to expect containers and other types to allocate a copy of their content when using the `Clone` trait, some types do not implement `Clone` and instead implement `NewRef`, which provides a `new_ref(&self) -> Self` method to create references to the same collection or object.

### Generated API types

The `api` module contains high-level wrappers for all the API types generated from a JSON description of the API. The generated types are tied to a specific version, typically the latest Godot 3.x release (at the time of the godot-rust release). If you want to use the bindings with another version of the engine, read the notes on the `custom-godot` feature flag below.

#### Memory management

API types may be reference-counted or manually-managed. This is indicated by the `RefCounted` and `ManuallyManaged` marker traits.

The API types can exist in three reference forms: bare, `TRef` and `Ref`. Bare references to API types, like `&'a Node`, represent valid and safe references to Godot objects. As such, API methods may be called safely on them. `TRef` adds typestate tracking, which enables additional abilities, like being able to be passed to the engine.
`Ref`, or *persistent* references, have `'static` lifetime, but are not always safe to use. For more information on how to use persistent references safely, see the `object` module documentation or the corresponding book chapter.

### Feature flags

All features are disabled by default.

Functionality toggles:

* **`async`** Activates async functionality, see the `tasks` module for details.
* **`serde`** Enable for `serde` support of several core types. See also `Variant`.
* **`inventory`** Enables automatic class registration via `inventory`. **Attention:** Automatic registration is unsupported on some platforms, notably WASM. `inventory` can still be used for iterative development if such platforms are targeted, in which case the run-time diagnostic `init::diagnostics::missing_manual_registration` may be helpful. Please refer to the `rust-ctor` README for an up-to-date listing of platforms that *do* support automatic registration.

Bindings generation:

* **`custom-godot`** When active, tries to locate a Godot executable on your system, in this order:
  1. If a `GODOT_BIN` environment variable is defined, it will interpret it as a path to the binary (not directory).
  2. An executable called `godot`, accessible in your system’s PATH, is used.
  3. If neither of the above is found, an error is generated.

  The symbols in `api` will be generated in a way compatible with that engine version. This allows using Godot versions older than the currently supported ones. See Custom Godot builds for detailed instructions.
* **`formatted`** Enable if the generated binding source code should be human-readable and split into multiple files. This can also help IDEs that struggle with a single huge file.
* **`ptrcall`** Enables the `ptrcall` convention for calling Godot API methods. This increases performance, at the cost of forward binary compatibility with the engine.
Binaries built with `ptrcall` enabled **may exhibit undefined behavior** when loaded by a different version of Godot, even when there are no breaking API changes as far as GDScript is concerned. Notably, the addition of new default parameters breaks any code using `ptrcall`. Cargo features are additive, and as such, it's only necessary to enable this feature for the final `cdylib` crates, whenever desired.

Modules
---

* `api` Bindings for the Godot Class API.
* `core_types` Types that represent core types of Godot.
* `derive`
* `export` Functionality for user-defined types exported to the engine (native scripts).
* `globalscope` Port of selected GDScript built-in functions.
* `init` Global initialization and termination of the library.
* `log` Functions for using the engine's logging system in the editor.
* `object` Provides types to interact with the Godot `Object` class hierarchy.
* `prelude` Curated re-exports of common items.
* `profiler` Interface to Godot's built-in profiler.
* `tasks` Runtime async support for godot-rust.

Macros
---

* `godot_dbg` Prints and returns the value of a given expression for quick and dirty debugging, using the engine's logging system (visible in the editor).
* `godot_error` Print an error using the engine's logging system (visible in the editor).
* `godot_print` Print a message using the engine's logging system (visible in the editor).
* `godot_site` Creates a `Site` value from the current position in code, optionally with a function path for identification.
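The `NewRef` mechanism described in the core-types section can be illustrated in plain Rust. The `Handle` type below is a hypothetical stand-in, not part of gdnative: a reference-counted handle exposes `new_ref` to hand out cheap aliases, rather than a `Clone` that readers might expect to deep-copy the contents.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Hypothetical stand-in for a reference-counted Godot collection.
// `new_ref` hands out another handle to the SAME underlying storage,
// which is why gdnative offers `NewRef` instead of `Clone` for such types.
pub struct Handle(Rc<RefCell<Vec<i32>>>);

impl Handle {
    pub fn new() -> Self {
        Handle(Rc::new(RefCell::new(Vec::new())))
    }

    // Cheap: bumps the reference count instead of copying the contents.
    pub fn new_ref(&self) -> Self {
        Handle(Rc::clone(&self.0))
    }

    pub fn push(&self, v: i32) {
        self.0.borrow_mut().push(v);
    }

    pub fn len(&self) -> usize {
        self.0.borrow().len()
    }
}

fn main() {
    let a = Handle::new();
    let b = a.new_ref(); // a second reference, not a copy
    a.push(1);
    b.push(2);
    // Both handles observe the same storage.
    assert_eq!(a.len(), 2);
    assert_eq!(b.len(), 2);
}
```

Keeping the method name distinct from `Clone::clone` makes the aliasing explicit at every call site.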
Struct gdnative::core_types::VariantArray
===

```
pub struct VariantArray<Own = Shared>
where
    Own: Ownership,
{ /* private fields */ }
```

A reference-counted `Variant` vector. Godot's generic array data type. Negative indices can be used to count from the right.

Generic methods on this type perform `Variant` conversion every time. This could be significant for complex structures. Users may convert arguments to `Variant`s before calling to avoid this behavior if necessary.

Safety
---

This is a reference-counted collection with "interior mutability" in Rust parlance. To enforce that the official thread-safety guidelines are followed, this type uses the *typestate* pattern. The typestate `Ownership` tracks whether there is thread-local or unique access (where pretty much all operations are safe) or whether the value might be "shared", in which case not all operations are safe.
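The note above about per-call `Variant` conversion can be made concrete with a small sketch. `Variant` and `to_variant` here are hypothetical stand-ins, not the gdnative types; the point is only that when a generic method converts its argument on every call, the caller can hoist the conversion out of a hot loop.

```rust
// Illustrative sketch of hoisting argument conversion out of a loop.
// `Variant` and `to_variant` are hypothetical stand-ins, not gdnative's.
#[derive(Clone, PartialEq, Debug)]
pub struct Variant(String);

pub fn to_variant(s: &str) -> Variant {
    Variant(s.to_string()) // imagine this conversion being expensive
}

// Converts the argument on every call, like a generic method would.
pub fn push_naive(n: usize, out: &mut Vec<Variant>) -> usize {
    let mut conversions = 0;
    for _ in 0..n {
        out.push(to_variant("hello"));
        conversions += 1;
    }
    conversions
}

// Converts once up front and reuses the already-converted value.
pub fn push_hoisted(n: usize, out: &mut Vec<Variant>) -> usize {
    let v = to_variant("hello");
    for _ in 0..n {
        out.push(v.clone());
    }
    1
}

fn main() {
    let (mut a, mut b) = (Vec::new(), Vec::new());
    assert_eq!(push_naive(3, &mut a), 3); // three conversions
    assert_eq!(push_hoisted(3, &mut b), 1); // a single conversion
    assert_eq!(a, b); // identical results either way
}
```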
Implementations
---

### impl<Own> VariantArray<Own> where Own: Ownership
Operations allowed on all arrays at any point in time.

#### pub fn set<T>(&self, idx: i32, val: T) where T: OwnedToVariant
Sets the value of the element at the given offset.

#### pub fn get(&self, idx: i32) -> Variant
Returns a copy of the element at the given offset.

#### pub unsafe fn get_ref(&self, idx: i32) -> &Variant
Returns a reference to the element at the given offset.

##### Safety
The returned reference is invalidated if the same container is mutated through another reference. `Variant` is reference-counted and thus cheaply cloned. Consider using `get` instead.

#### pub unsafe fn get_mut_ref(&self, idx: i32) -> &mut Variant
Returns a mutable reference to the element at the given offset.

##### Safety
The returned reference is invalidated if the same container is mutated through another reference. It is possible to create two mutable references to the same memory location if the same `idx` is provided, causing undefined behavior.

#### pub fn count<T>(&self, val: T) -> i32 where T: ToVariant

#### pub fn is_empty(&self) -> bool
Returns `true` if the `VariantArray` contains no elements.

#### pub fn len(&self) -> i32
Returns the number of elements in the array.

#### pub fn find<T>(&self, what: T, from: i32) -> i32 where T: ToVariant
Searches the array for a value and returns its index. Pass an initial search index as the second argument. Returns `-1` if the value is not found.

#### pub fn contains<T>(&self, what: T) -> bool where T: ToVariant
Returns true if the `VariantArray` contains the specified value.

#### pub fn rfind<T>(&self, what: T, from: i32) -> i32 where T: ToVariant
Searches the array in reverse order. Pass an initial search index as the second argument. If negative, the start index is considered relative to the end of the array.

#### pub fn find_last<T>(&self, what: T) -> i32 where T: ToVariant
Searches the array in reverse order for a value.
Returns its index or `-1` if not found.

#### pub fn invert(&self)
Inverts the order of the elements in the array.

#### pub fn hash(&self) -> i32
Return a hashed i32 value representing the array contents.

#### pub fn sort(&self)

#### pub fn duplicate(&self) -> VariantArray<Unique>
Create a copy of the array. This creates a new array and is **not** a cheap reference count increment.

#### pub fn duplicate_deep(&self) -> VariantArray<Unique>
Create a deep copy of the array. This creates a new array and is **not** a cheap reference count increment.

#### pub fn iter(&self) -> Iter<'_, Own>
Returns an iterator through all values in the `VariantArray`. `VariantArray` is reference-counted and has interior mutability in Rust parlance. Modifying the same underlying collection while observing the safety assumptions will not violate memory safety, but may lead to surprising behavior in the iterator.

### impl<Own> VariantArray<Own> where Own: LocalThreadOwnership
Operations allowed on arrays that can only be referenced from the current thread.

#### pub fn clear(&self)
Clears the array, resizing to 0.

#### pub fn remove(&self, idx: i32)
Removes the element at `idx`.

#### pub fn erase<T>(&self, val: T) where T: ToVariant
Removes the first occurrence of `val`.

#### pub fn resize(&self, size: i32)
Resizes the array, filling with `Nil` if necessary.

#### pub fn push<T>(&self, val: T) where T: OwnedToVariant
Appends an element at the end of the array.

#### pub fn pop(&self) -> Variant
Removes an element at the end of the array.

#### pub fn push_front<T>(&self, val: T) where T: OwnedToVariant
Appends an element to the front of the array.

#### pub fn pop_front(&self) -> Variant
Removes an element at the front of the array.

#### pub fn insert<T>(&self, at: i32, val: T) where T: OwnedToVariant
Inserts a new element at a given position in the array.

### impl<Own> VariantArray<Own> where Own: NonUniqueOwnership
Operations allowed on non-unique arrays.
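The iterator caveat above can be reproduced in plain Rust. The `Rc<RefCell<Vec<_>>>` below stands in for the reference-counted array and is not the gdnative API: mutating through an alias mid-iteration is memory-safe, yet the iteration silently skips an element.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Illustrative stand-in: an index-based walk over a reference-counted,
// interior-mutable vector, mutated through a second handle mid-iteration.
pub fn iterate_with_mutation() -> Vec<i32> {
    let arr = Rc::new(RefCell::new(vec![10, 20, 30]));
    let alias = Rc::clone(&arr); // second reference to the same storage

    let mut seen = Vec::new();
    let mut i = 0;
    while i < arr.borrow().len() {
        seen.push(arr.borrow()[i]);
        if i == 0 {
            // Mutate through the alias while the walk is in progress.
            alias.borrow_mut().remove(0);
        }
        i += 1;
    }
    seen
}

fn main() {
    // Element 20 is silently skipped: no undefined behavior, just surprise.
    assert_eq!(iterate_with_mutation(), vec![10, 30]);
}
```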
#### pub unsafe fn assume_unique(self) -> VariantArray<Unique>
Assume that this is the only reference to this array, on which operations that change the container size can be safely performed.

##### Safety
It isn't thread-safe to perform operations that change the container size from multiple threads at the same time. Creating multiple `Unique` references to the same collection, or violating the thread-safety guidelines in non-Rust code, will cause undefined behavior.

### impl VariantArray<Unique>
Operations allowed on unique arrays.

#### pub fn new() -> VariantArray<Unique>
Creates an empty `VariantArray`.

#### pub fn into_shared(self) -> VariantArray<Shared>
Put this array under the "shared" access type.

#### pub fn into_thread_local(self) -> VariantArray<ThreadLocal>
Put this array under the "thread-local" access type.

### impl VariantArray<Shared>
Operations allowed on arrays that might be shared between different threads.

#### pub fn new_shared() -> VariantArray<Shared>
Create a new shared array.

### impl VariantArray<ThreadLocal>
Operations allowed on arrays that may only be shared on the current thread.

#### pub fn new_thread_local() -> VariantArray<ThreadLocal>
Create a new thread-local array.

Trait Implementations
---

### impl CoerceFromVariant for VariantArray<Shared>
#### fn coerce_from_variant(v: &Variant) -> VariantArray<Shared>

### impl<Own> Debug for VariantArray<Own> where Own: Ownership
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.

#### fn drop(&mut self)
Executes the destructor for this type.

A type-specific hint type that is valid for the type being exported. Returns `ExportInfo` given an optional typed hint.

### impl<T, Own> Extend<T> for VariantArray<Own> where T: ToVariant, Own: LocalThreadOwnership
#### fn extend<I>(&mut self, iter: I) where I: IntoIterator<Item = T>
Extends a collection with the contents of an iterator.

🔬 This is a nightly-only experimental API.
(`extend_one`) Extends a collection with exactly one element.

#### fn extend_reserve(&mut self, additional: usize)
🔬 This is a nightly-only experimental API. (`extend_one`) Reserves capacity in a collection for the given number of additional elements.

#### fn from_iter<I>(iter: I) -> VariantArray<Unique> where I: IntoIterator<Item = T>
Creates a value from an iterator.

#### type Item = Variant
The type of the elements being iterated over.

#### type IntoIter = Iter<'a, Own>
Which kind of iterator are we turning this into?

#### fn into_iter(self) -> <&'a VariantArray<Own> as IntoIterator>::IntoIter
Creates an iterator from a value.

#### type Item = Variant
The type of the elements being iterated over.

#### type IntoIter = IntoIter
Which kind of iterator are we turning this into?

#### fn into_iter(self) -> <VariantArray<Unique> as IntoIterator>::IntoIter
Creates an iterator from a value.

#### fn new_ref(&self) -> VariantArray<Own>
Creates a new reference to the underlying object.

### impl OwnedToVariant for VariantArray<Unique>
#### fn owned_to_variant(self) -> Variant

### impl ToVariant for VariantArray<Shared>
#### fn to_variant(&self) -> Variant

Auto Trait Implementations
---

### impl<Own> RefUnwindSafe for VariantArray<Own> where Own: RefUnwindSafe
### impl<Own> Send for VariantArray<Own> where Own: Send
### impl<Own> Sync for VariantArray<Own> where Own: Sync
### impl<Own> Unpin for VariantArray<Own> where Own: Unpin
### impl<Own> UnwindSafe for VariantArray<Own> where Own: UnwindSafe

Blanket Implementations
---

### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.

#### fn borrow(&self) -> &T
Immutably borrows from an owned value.

#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.

#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> OwnedToVariant for T where T: ToVariant
#### fn owned_to_variant(self) -> Variant

### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.

Struct gdnative::core_types::Dictionary
===

```
pub struct Dictionary<Own = Shared>
where
    Own: Ownership,
{ /* private fields */ }
```

A reference-counted `Dictionary` of `Variant` key-value pairs.

Generic methods on this type perform `Variant` conversion every time. This could be significant for complex structures. Users may convert arguments to `Variant`s before calling to avoid this behavior if necessary.

Safety
---

This is a reference-counted collection with "interior mutability" in Rust parlance. To enforce that the official thread-safety guidelines are followed, this type uses the *typestate* pattern. The typestate `Ownership` tracks whether there is thread-local or unique access (where pretty much all operations are safe) or whether the value might be "shared", in which case not all operations are safe.

Implementations
---

### impl<Own> Dictionary<Own> where Own: Ownership
Operations allowed on all Dictionaries at any point in time.

#### pub fn is_empty(&self) -> bool
Returns `true` if the `Dictionary` contains no elements.

#### pub fn len(&self) -> i32
Returns the number of elements in the `Dictionary`.
#### pub fn contains<K>(&self, key: K) -> bool where K: OwnedToVariant + ToVariantEq
Returns true if the `Dictionary` contains the specified key.

#### pub fn contains_all<ArrayOws>(&self, keys: &VariantArray<ArrayOws>) -> bool where ArrayOws: Ownership
Returns true if the `Dictionary` has all of the keys in the given array.

#### pub fn get<K>(&self, key: K) -> Option<Variant> where K: OwnedToVariant + ToVariantEq
Returns a copy of the value corresponding to the key if it exists.

#### pub fn get_or<K, D>(&self, key: K, default: D) -> Variant where K: OwnedToVariant + ToVariantEq, D: OwnedToVariant
Returns a copy of the value corresponding to the key, or `default` if it doesn't exist.

#### pub fn get_or_nil<K>(&self, key: K) -> Variant where K: OwnedToVariant + ToVariantEq
Returns a copy of the element corresponding to the key, or `Nil` if it doesn't exist. Shorthand for `self.get_or(key, Variant::nil())`.

#### pub fn update<K, V>(&self, key: K, val: V) where K: OwnedToVariant + ToVariantEq, V: OwnedToVariant
Update an existing element corresponding to the key.

##### Panics
Panics if the entry for `key` does not exist.

#### pub unsafe fn get_ref<K>(&self, key: K) -> &Variant where K: OwnedToVariant + ToVariantEq
Returns a reference to the value corresponding to the key, inserting a `Nil` value first if it does not exist.

##### Safety
The returned reference is invalidated if the same container is mutated through another reference, and other references may be invalidated if the entry does not already exist (which causes this function to insert `Nil` and thus possibly re-allocate). `Variant` is reference-counted and thus cheaply cloned. Consider using `get` instead.

#### pub unsafe fn get_mut_ref<K>(&self, key: K) -> &mut Variant where K: OwnedToVariant + ToVariantEq
Returns a mutable reference to the value corresponding to the key, inserting a `Nil` value first if it does not exist.
##### Safety
The returned reference is invalidated if the same container is mutated through another reference, and other references may be invalidated if the `key` does not already exist (which causes this function to insert `Nil` and thus possibly re-allocate). It is also possible to create two mutable references to the same memory location if the same `key` is provided, causing undefined behavior.

#### pub fn to_json(&self) -> GodotString
Returns a JSON representation of the `Dictionary` as a `GodotString`.

#### pub fn keys(&self) -> VariantArray<Unique>
Returns an array of the keys in the `Dictionary`.

#### pub fn values(&self) -> VariantArray<Unique>
Returns an array of the values in the `Dictionary`.

#### pub fn hash(&self) -> i32
Return a hashed i32 value representing the dictionary's contents.

#### pub fn iter(&self) -> Iter<'_, Own>
Returns an iterator through all key-value pairs in the `Dictionary`. `Dictionary` is reference-counted and has interior mutability in Rust parlance. Modifying the same underlying collection while observing the safety assumptions will not violate memory safety, but may lead to surprising behavior in the iterator.

#### pub fn duplicate(&self) -> Dictionary<Unique>
Create a copy of the dictionary. This creates a new dictionary and is **not** a cheap reference count increment.

### impl Dictionary<Shared>
Operations allowed on Dictionaries that might be shared between different threads.

#### pub fn new_shared() -> Dictionary<Shared>
Create a new shared dictionary.

### impl Dictionary<ThreadLocal>
Operations allowed on Dictionaries that may only be shared on the current thread.

#### pub fn new_thread_local() -> Dictionary<ThreadLocal>
Create a new thread-local dictionary.

### impl<Own> Dictionary<Own> where Own: NonUniqueOwnership
Operations allowed on Dictionaries that are not unique.
#### pub unsafe fn assume_unique(self) -> Dictionary<Unique>
Assume that this is the only reference to this dictionary, on which operations that change the container size can be safely performed.

##### Safety
It isn't thread-safe to perform operations that change the container size from multiple threads at the same time. Creating multiple `Unique` references to the same collection, or violating the thread-safety guidelines in non-Rust code, will cause undefined behavior.

### impl<Own> Dictionary<Own> where Own: LocalThreadOwnership
Operations allowed on Dictionaries that can only be referenced from the current thread.

#### pub fn insert<K, V>(&self, key: K, val: V) where K: OwnedToVariant + ToVariantEq, V: OwnedToVariant
Inserts or updates the value of the element corresponding to the key.

#### pub fn erase<K>(&self, key: K) where K: OwnedToVariant + ToVariantEq
Erase a key-value pair in the `Dictionary` by the specified key.

#### pub fn clear(&self)
Clears the `Dictionary`, removing all key-value pairs.

### impl Dictionary<Unique>
Operations allowed on unique Dictionaries.

#### pub fn new() -> Dictionary<Unique>
Creates an empty `Dictionary`.

#### pub fn into_shared(self) -> Dictionary<Shared>
Put this dictionary under the "shared" access type.

#### pub fn into_thread_local(self) -> Dictionary<ThreadLocal>
Put this dictionary under the "thread-local" access type.

Trait Implementations
---

### impl CoerceFromVariant for Dictionary<Shared>
#### fn coerce_from_variant(v: &Variant) -> Dictionary<Shared>

### impl<Own> Debug for Dictionary<Own> where Own: Ownership
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.

#### fn drop(&mut self)
Executes the destructor for this type.

A type-specific hint type that is valid for the type being exported.
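The contract of `assume_unique` has a checked parallel in the standard library: `Rc::get_mut` only permits mutation when uniqueness is provable, while `assume_unique` is `unsafe` precisely because the caller asserts uniqueness without any such check. A small sketch in plain `std` Rust, not gdnative:

```rust
use std::rc::Rc;

// Illustrative parallel to `assume_unique`: Rc::get_mut yields a mutable
// reference only when this handle is the sole owner of the allocation.
pub fn mutate_when_unique() -> usize {
    let mut data = Rc::new(vec![1, 2, 3]);

    let alias = Rc::clone(&data); // a second, "shared"-style reference
    assert!(Rc::get_mut(&mut data).is_none()); // not unique: mutation refused
    drop(alias);

    // With the alias gone, uniqueness is provable and mutation is allowed.
    Rc::get_mut(&mut data).unwrap().push(4);
    data.len()
}

fn main() {
    assert_eq!(mutate_when_unique(), 4);
}
```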
Returns `ExportInfo` given an optional typed hint.

### impl<K, V, Own> Extend<(K, V)> for Dictionary<Own> where Own: LocalThreadOwnership, K: ToVariantEq + OwnedToVariant, V: OwnedToVariant
#### fn extend<I>(&mut self, iter: I) where I: IntoIterator<Item = (K, V)>
Extends a collection with the contents of an iterator.

🔬 This is a nightly-only experimental API. (`extend_one`) Extends a collection with exactly one element.

#### fn extend_reserve(&mut self, additional: usize)
🔬 This is a nightly-only experimental API. (`extend_one`) Reserves capacity in a collection for the given number of additional elements.

#### fn from_iter<I>(iter: I) -> Dictionary<Unique> where I: IntoIterator<Item = (K, V)>
Creates a value from an iterator.

#### type Item = (Variant, Variant)
The type of the elements being iterated over.

#### type IntoIter = Iter<'a, Own>
Which kind of iterator are we turning this into?

#### fn into_iter(self) -> <&'a Dictionary<Own> as IntoIterator>::IntoIter
Creates an iterator from a value.

#### type Item = (Variant, Variant)
The type of the elements being iterated over.

#### type IntoIter = IntoIter
Which kind of iterator are we turning this into?

#### fn into_iter(self) -> <Dictionary<Unique> as IntoIterator>::IntoIter
Creates an iterator from a value.

#### fn new_ref(&self) -> Dictionary<Own>
Creates a new reference to the underlying object.

### impl OwnedToVariant for Dictionary<Unique>
#### fn owned_to_variant(self) -> Variant

### impl ToVariant for Dictionary<Shared>
#### fn to_variant(&self) -> Variant

Auto Trait Implementations
---

### impl<Own> RefUnwindSafe for Dictionary<Own> where Own: RefUnwindSafe
### impl<Own> Send for Dictionary<Own> where Own: Send
### impl<Own> Sync for Dictionary<Own> where Own: Sync
### impl<Own> Unpin for Dictionary<Own> where Own: Unpin
### impl<Own> UnwindSafe for Dictionary<Own> where Own: UnwindSafe

Blanket Implementations
---

### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.

#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.

#### fn from(t: T) -> T
Returns the argument unchanged.

### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> OwnedToVariant for T where T: ToVariant
#### fn owned_to_variant(self) -> Variant

### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.

Trait gdnative::object::NewRef
===

```
pub trait NewRef {
    fn new_ref(&self) -> Self;
}
```

A trait for incrementing the reference count to a Godot object.

Required Methods
---

#### fn new_ref(&self) -> Self
Creates a new reference to the underlying object.

Implementors
---

### impl NewRef for GodotString
### impl NewRef for NodePath
### impl<Own> NewRef for Dictionary<Own> where Own: NonUniqueOwnership
### impl<Own> NewRef for VariantArray<Own> where Own: NonUniqueOwnership
### impl<T> NewRef for PoolArray<T> where T: PoolElement

Crate gdnative::api
===

Bindings for the Godot Class API.
Modules
---

* `aes_context` This module contains types related to the API class `AESContext`.
* `animation` This module contains types related to the API class `Animation`.
* `animation_node` This module contains types related to the API class `AnimationNode`.
* `animation_node_blend_space_2d` This module contains types related to the API class `AnimationNodeBlendSpace2D`.
* `animation_node_one_shot` This module contains types related to the API class `AnimationNodeOneShot`.
* `animation_node_state_machine_transition` This module contains types related to the API class `AnimationNodeStateMachineTransition`.
* `animation_player` This module contains types related to the API class `AnimationPlayer`.
* `animation_tree` This module contains types related to the API class `AnimationTree`.
* `animation_tree_player` This module contains types related to the API class `AnimationTreePlayer`.
* `area` This module contains types related to the API class `Area`.
* `area_2d` This module contains types related to the API class `Area2D`.
* `array_mesh` This module contains types related to the API class `ArrayMesh`.
* `arvr_interface` This module contains types related to the API class `ARVRInterface`.
* `arvr_positional_tracker` This module contains types related to the API class `ARVRPositionalTracker`.
* `arvr_server` This module contains types related to the API class `ARVRServer`.
* `aspect_ratio_container` This module contains types related to the API class `AspectRatioContainer`.
* `audio_effect_distortion` This module contains types related to the API class `AudioEffectDistortion`.
* `audio_effect_filter` This module contains types related to the API class `AudioEffectFilter`.
* `audio_effect_pitch_shift` This module contains types related to the API class `AudioEffectPitchShift`.
* `audio_effect_spectrum_analyzer` This module contains types related to the API class `AudioEffectSpectrumAnalyzer`.
* `audio_effect_spectrum_analyzer_instance` This module contains types related to the API class `AudioEffectSpectrumAnalyzerInstance`.
* `audio_server` This module contains types related to the API class `AudioServer`.
* `audio_stream_player` This module contains types related to the API class `AudioStreamPlayer`.
* `audio_stream_player_3d` This module contains types related to the API class `AudioStreamPlayer3D`.
* `audio_stream_sample` This module contains types related to the API class `AudioStreamSample`.
* `back_buffer_copy` This module contains types related to the API class `BackBufferCopy`.
* `baked_lightmap` This module contains types related to the API class `BakedLightmap`.
* `base_button` This module contains types related to the API class `BaseButton`.
* `box_container` This module contains types related to the API class `BoxContainer`.
* `button` This module contains types related to the API class `Button`.
* `camera` This module contains types related to the API class `Camera`.
* `camera_2d` This module contains types related to the API class `Camera2D`.
* `camera_feed` This module contains types related to the API class `CameraFeed`.
* `camera_server` This module contains types related to the API class `CameraServer`.
* `canvas_item` This module contains types related to the API class `CanvasItem`.
* `canvas_item_material` This module contains types related to the API class `CanvasItemMaterial`.
* `clipped_camera` This module contains types related to the API class `ClippedCamera`.
* `collision_polygon_2d` This module contains types related to the API class `CollisionPolygon2D`.
* `cone_twist_joint` This module contains types related to the API class `ConeTwistJoint`.
* `control` This module contains types related to the API class `Control`.
* `cpu_particles` This module contains types related to the API class `CPUParticles`.
* `cpu_particles_2d` This module contains types related to the API class `CPUParticles2D`.
* `csg_polygon` This module contains types related to the API class `CSGPolygon`.
* `csg_shape` This module contains types related to the API class `CSGShape`.
* `cube_map` This module contains types related to the API class `CubeMap`.
* `cull_instance` This module contains types related to the API class `CullInstance`.
* `curve` This module contains types related to the API class `Curve`.
* `directional_light` This module contains types related to the API class `DirectionalLight`.
* `dynamic_font` This module contains types related to the API class `DynamicFont`.
* `dynamic_font_data` This module contains types related to the API class `DynamicFontData`.
* `editor_feature_profile` This module contains types related to the API class `EditorFeatureProfile`.
* `editor_file_dialog` This module contains types related to the API class `EditorFileDialog`.
* `editor_plugin` This module contains types related to the API class `EditorPlugin`.
* `editor_vcs_interface` This module contains types related to the API class `EditorVCSInterface`.
* `environment` This module contains types related to the API class `Environment`.
* `file` This module contains types related to the API class `File`.
* `file_dialog` This module contains types related to the API class `FileDialog`.
* `font` This module contains types related to the API class `Font`.
* `generic_6dof_joint` This module contains types related to the API class `Generic6DOFJoint`.
* `geometry` This module contains types related to the API class `Geometry`.
* `geometry_instance` This module contains types related to the API class `GeometryInstance`.
* `gi_probe` This module contains types related to the API class `GIProbe`.
* `gradient` This module contains types related to the API class `Gradient`.
* `gradient_texture_2d` This module contains types related to the API class `GradientTexture2D`.
* `graph_node` This module contains types related to the API class `GraphNode`.
* `hashing_context` This module contains types related to the API class `HashingContext`.
* `hinge_joint` This module contains types related to the API class `HingeJoint`.
* `http_client` This module contains types related to the API class `HTTPClient`.
* `http_request` This module contains types related to the API class `HTTPRequest`.
* `image` This module contains types related to the API class `Image`.
* `image_texture` This module contains types related to the API class `ImageTexture`.
* `input` This module contains types related to the API class `Input`.
* `interpolated_camera` This module contains types related to the API class `InterpolatedCamera`.
* `ip` This module contains types related to the API class `IP`.
* `item_list` This module contains types related to the API class `ItemList`.
* `jsonrpc` This module contains types related to the API class `JSONRPC`.
* `kinematic_body` This module contains types related to the API class `KinematicBody`.
* `kinematic_body_2d` This module contains types related to the API class `KinematicBody2D`.
* `label` This module contains types related to the API class `Label`.
* `label_3d` This module contains types related to the API class `Label3D`.
* `light` This module contains types related to the API class `Light`.
* `light_2d` This module contains types related to the API class `Light2D`.
* `line_2d` This module contains types related to the API class `Line2D`.
* `line_edit` This module contains types related to the API class `LineEdit`.
* `link_button` This module contains types related to the API class `LinkButton`.
* `mesh` This module contains types related to the API class `Mesh`.
* `multi_mesh` This module contains types related to the API class `MultiMesh`.
* `multiplayer_api` This module contains types related to the API class `MultiplayerAPI`.
* `navigation_mesh` This module contains types related to the API class `NavigationMesh`.
* `networked_multiplayer_enet` This module contains types related to the API class `NetworkedMultiplayerENet`.
* `networked_multiplayer_peer` This module contains types related to the API class `NetworkedMultiplayerPeer`.
* `nine_patch_rect` This module contains types related to the API class `NinePatchRect`.
* `node` This module contains types related to the API class `Node`.
* `object` This module contains types related to the API class `Object`.
* `occluder_polygon_2d` This module contains types related to the API class `OccluderPolygon2D`.
* `omni_light` This module contains types related to the API class `OmniLight`.
* `os` This module contains types related to the API class `OS`.
* `packed_scene` This module contains types related to the API class `PackedScene`.
* `packet_peer_dtls` This module contains types related to the API class `PacketPeerDTLS`.
* `particles` This module contains types related to the API class `Particles`.
* `particles_2d` This module contains types related to the API class `Particles2D`.
* `particles_material` This module contains types related to the API class `ParticlesMaterial`.
* `path_follow` This module contains types related to the API class `PathFollow`.
* `performance` This module contains types related to the API class `Performance`.
* `physical_bone` This module contains types related to the API class `PhysicalBone`.
* `physics_2d_server` This module contains types related to the API class `Physics2DServer`.
* `physics_server` This module contains types related to the API class `PhysicsServer`.
* `pin_joint` This module contains types related to the API class `PinJoint`.
* `procedural_sky` This module contains types related to the API class `ProceduralSky`.
* `proximity_group` This module contains types related to the API class `ProximityGroup`.
* `reflection_probe` This module contains types related to the API class `ReflectionProbe`.
* `resource_importer` This module contains types related to the API class `ResourceImporter`.
* `resource_saver` This module contains types related to the API class `ResourceSaver`.
* `rich_text_label` This module contains types related to the API class `RichTextLabel`.
* `rigid_body` This module contains types related to the API class `RigidBody`.
* `rigid_body_2d` This module contains types related to the API class `RigidBody2D`.
* `room_manager` This module contains types related to the API class `RoomManager`.
* `scene_state` This module contains types related to the API class `SceneState`.
* `scene_tree` This module contains types related to the API class `SceneTree`.
* `scene_tree_tween` This module contains types related to the API class `SceneTreeTween`.
* `shader` This module contains types related to the API class `Shader`.
* `sky` This module contains types related to the API class `Sky`.
* `slider_joint` This module contains types related to the API class
`SliderJoint`.spatial_materialThis module contains types related to the API class `SpatialMaterial`.split_containerThis module contains types related to the API class `SplitContainer`.sprite_base_3dThis module contains types related to the API class `SpriteBase3D`.stream_peer_sslThis module contains types related to the API class `StreamPeerSSL`.stream_peer_tcpThis module contains types related to the API class `StreamPeerTCP`.style_box_textureThis module contains types related to the API class `StyleBoxTexture`.tab_containerThis module contains types related to the API class `TabContainer`.tabsThis module contains types related to the API class `Tabs`.text_editThis module contains types related to the API class `TextEdit`.text_meshThis module contains types related to the API class `TextMesh`.textureThis module contains types related to the API class `Texture`.texture_buttonThis module contains types related to the API class `TextureButton`.texture_layeredThis module contains types related to the API class `TextureLayered`.texture_progressThis module contains types related to the API class `TextureProgress`.texture_rectThis module contains types related to the API class `TextureRect`.themeThis module contains types related to the API class `Theme`.threadThis module contains types related to the API class `Thread`.tile_mapThis module contains types related to the API class `TileMap`.tile_setThis module contains types related to the API class `TileSet`.timeThis module contains types related to the API class `Time`.timerThis module contains types related to the API class `Timer`.touch_screen_buttonThis module contains types related to the API class `TouchScreenButton`.treeThis module contains types related to the API class `Tree`.tree_itemThis module contains types related to the API class `TreeItem`.tweenThis module contains types related to the API class `Tween`.undo_redoThis module contains types related to the API class `UndoRedo`.upnpThis module contains types 
related to the API class `UPNP`.upnp_deviceThis module contains types related to the API class `UPNPDevice`.utilsUtility functions and extension traits that depend on generated bindingsviewportThis module contains types related to the API class `Viewport`.visibility_enablerThis module contains types related to the API class `VisibilityEnabler`.visibility_enabler_2dThis module contains types related to the API class `VisibilityEnabler2D`.visual_script_builtin_funcThis module contains types related to the API class `VisualScriptBuiltinFunc`.visual_script_custom_nodeThis module contains types related to the API class `VisualScriptCustomNode`.visual_script_function_callThis module contains types related to the API class `VisualScriptFunctionCall`.visual_script_input_actionThis module contains types related to the API class `VisualScriptInputAction`.visual_script_math_constantThis module contains types related to the API class `VisualScriptMathConstant`.visual_script_property_getThis module contains types related to the API class `VisualScriptPropertyGet`.visual_script_property_setThis module contains types related to the API class `VisualScriptPropertySet`.visual_script_yieldThis module contains types related to the API class `VisualScriptYield`.visual_script_yield_signalThis module contains types related to the API class `VisualScriptYieldSignal`.visual_serverThis module contains types related to the API class `VisualServer`.visual_shaderThis module contains types related to the API class `VisualShader`.visual_shader_nodeThis module contains types related to the API class `VisualShaderNode`.visual_shader_node_color_funcThis module contains types related to the API class `VisualShaderNodeColorFunc`.visual_shader_node_color_opThis module contains types related to the API class `VisualShaderNodeColorOp`.visual_shader_node_compareThis module contains types related to the API class `VisualShaderNodeCompare`.visual_shader_node_cube_mapThis module contains types related to 
the API class `VisualShaderNodeCubeMap`.visual_shader_node_isThis module contains types related to the API class `VisualShaderNodeIs`.visual_shader_node_scalar_derivative_funcThis module contains types related to the API class `VisualShaderNodeScalarDerivativeFunc`.visual_shader_node_scalar_funcThis module contains types related to the API class `VisualShaderNodeScalarFunc`.visual_shader_node_scalar_opThis module contains types related to the API class `VisualShaderNodeScalarOp`.visual_shader_node_scalar_uniformThis module contains types related to the API class `VisualShaderNodeScalarUniform`.visual_shader_node_textureThis module contains types related to the API class `VisualShaderNodeTexture`.visual_shader_node_texture_uniformThis module contains types related to the API class `VisualShaderNodeTextureUniform`.visual_shader_node_transform_funcThis module contains types related to the API class `VisualShaderNodeTransformFunc`.visual_shader_node_transform_multThis module contains types related to the API class `VisualShaderNodeTransformMult`.visual_shader_node_transform_vec_multThis module contains types related to the API class `VisualShaderNodeTransformVecMult`.visual_shader_node_vector_derivative_funcThis module contains types related to the API class `VisualShaderNodeVectorDerivativeFunc`.visual_shader_node_vector_funcThis module contains types related to the API class `VisualShaderNodeVectorFunc`.visual_shader_node_vector_opThis module contains types related to the API class `VisualShaderNodeVectorOp`.web_rtc_data_channelThis module contains types related to the API class `WebRTCDataChannel`.web_rtc_peer_connectionThis module contains types related to the API class `WebRTCPeerConnection`.web_socket_peerThis module contains types related to the API class `WebSocketPeer`.web_xr_interfaceThis module contains types related to the API class `WebXRInterface`.xml_parserThis module contains types related to the API class `XMLParser`.Structs --- AESContext`core class 
AESContext` inherits `Reference` (reference-counted).ARVRAnchor`core class ARVRAnchor` inherits `Spatial` (manually managed).ARVRCamera`core class ARVRCamera` inherits `Camera` (manually managed).ARVRController`core class ARVRController` inherits `Spatial` (manually managed).ARVRInterface`core class ARVRInterface` inherits `Reference` (reference-counted).ARVRInterfaceGDNative`core class ARVRInterfaceGDNative` inherits `ARVRInterface` (reference-counted).ARVROrigin`core class ARVROrigin` inherits `Spatial` (manually managed).ARVRPositionalTracker`core class ARVRPositionalTracker` inherits `Reference` (reference-counted).ARVRServer`core singleton class ARVRServer` inherits `Object` (manually managed).AStar`core class AStar` inherits `Reference` (reference-counted).AStar2D`core class AStar2D` inherits `Reference` (reference-counted).AcceptDialog`core class AcceptDialog` inherits `WindowDialog` (manually managed).AnimatedSprite`core class AnimatedSprite` inherits `Node2D` (manually managed).AnimatedSprite3D`core class AnimatedSprite3D` inherits `SpriteBase3D` (manually managed).AnimatedTexture`core class AnimatedTexture` inherits `Texture` (reference-counted).Animation`core class Animation` inherits `Resource` (reference-counted).AnimationNode`core class AnimationNode` inherits `Resource` (reference-counted).AnimationNodeAdd2`core class AnimationNodeAdd2` inherits `AnimationNode` (reference-counted).AnimationNodeAdd3`core class AnimationNodeAdd3` inherits `AnimationNode` (reference-counted).AnimationNodeAnimation`core class AnimationNodeAnimation` inherits `AnimationRootNode` (reference-counted).AnimationNodeBlend2`core class AnimationNodeBlend2` inherits `AnimationNode` (reference-counted).AnimationNodeBlend3`core class AnimationNodeBlend3` inherits `AnimationNode` (reference-counted).AnimationNodeBlendSpace1D`core class AnimationNodeBlendSpace1D` inherits `AnimationRootNode` (reference-counted).AnimationNodeBlendSpace2D`core class AnimationNodeBlendSpace2D` inherits 
`AnimationRootNode` (reference-counted).AnimationNodeBlendTree`core class AnimationNodeBlendTree` inherits `AnimationRootNode` (reference-counted).AnimationNodeOneShot`core class AnimationNodeOneShot` inherits `AnimationNode` (reference-counted).AnimationNodeOutput`core class AnimationNodeOutput` inherits `AnimationNode` (reference-counted).AnimationNodeStateMachine`core class AnimationNodeStateMachine` inherits `AnimationRootNode` (reference-counted).AnimationNodeStateMachinePlayback`core class AnimationNodeStateMachinePlayback` inherits `Resource` (reference-counted).AnimationNodeStateMachineTransition`core class AnimationNodeStateMachineTransition` inherits `Resource` (reference-counted).AnimationNodeTimeScale`core class AnimationNodeTimeScale` inherits `AnimationNode` (reference-counted).AnimationNodeTimeSeek`core class AnimationNodeTimeSeek` inherits `AnimationNode` (reference-counted).AnimationNodeTransition`core class AnimationNodeTransition` inherits `AnimationNode` (reference-counted).AnimationPlayer`core class AnimationPlayer` inherits `Node` (manually managed).AnimationRootNode`core class AnimationRootNode` inherits `AnimationNode` (reference-counted).AnimationTrackEditPlugin`tools class AnimationTrackEditPlugin` inherits `Reference` (reference-counted).AnimationTree`core class AnimationTree` inherits `Node` (manually managed).AnimationTreePlayer`core class AnimationTreePlayer` inherits `Node` (manually managed).Area`core class Area` inherits `CollisionObject` (manually managed).Area2D`core class Area2D` inherits `CollisionObject2D` (manually managed).ArrayMesh`core class ArrayMesh` inherits `Mesh` (reference-counted).AspectRatioContainer`core class AspectRatioContainer` inherits `Container` (manually managed).AtlasTexture`core class AtlasTexture` inherits `Texture` (reference-counted).AudioBusLayout`core class AudioBusLayout` inherits `Resource` (reference-counted).AudioEffect`core class AudioEffect` inherits `Resource` 
(reference-counted).AudioEffectAmplify`core class AudioEffectAmplify` inherits `AudioEffect` (reference-counted).AudioEffectBandLimitFilter`core class AudioEffectBandLimitFilter` inherits `AudioEffectFilter` (reference-counted).AudioEffectBandPassFilter`core class AudioEffectBandPassFilter` inherits `AudioEffectFilter` (reference-counted).AudioEffectCapture`core class AudioEffectCapture` inherits `AudioEffect` (reference-counted).AudioEffectChorus`core class AudioEffectChorus` inherits `AudioEffect` (reference-counted).AudioEffectCompressor`core class AudioEffectCompressor` inherits `AudioEffect` (reference-counted).AudioEffectDelay`core class AudioEffectDelay` inherits `AudioEffect` (reference-counted).AudioEffectDistortion`core class AudioEffectDistortion` inherits `AudioEffect` (reference-counted).AudioEffectEQ`core class AudioEffectEQ` inherits `AudioEffect` (reference-counted).AudioEffectEQ6`core class AudioEffectEQ6` inherits `AudioEffectEQ` (reference-counted).AudioEffectEQ10`core class AudioEffectEQ10` inherits `AudioEffectEQ` (reference-counted).AudioEffectEQ21`core class AudioEffectEQ21` inherits `AudioEffectEQ` (reference-counted).AudioEffectFilter`core class AudioEffectFilter` inherits `AudioEffect` (reference-counted).AudioEffectHighPassFilter`core class AudioEffectHighPassFilter` inherits `AudioEffectFilter` (reference-counted).AudioEffectHighShelfFilter`core class AudioEffectHighShelfFilter` inherits `AudioEffectFilter` (reference-counted).AudioEffectInstance`core class AudioEffectInstance` inherits `Reference` (reference-counted).AudioEffectLimiter`core class AudioEffectLimiter` inherits `AudioEffect` (reference-counted).AudioEffectLowPassFilter`core class AudioEffectLowPassFilter` inherits `AudioEffectFilter` (reference-counted).AudioEffectLowShelfFilter`core class AudioEffectLowShelfFilter` inherits `AudioEffectFilter` (reference-counted).AudioEffectNotchFilter`core class AudioEffectNotchFilter` inherits `AudioEffectFilter` 
(reference-counted).AudioEffectPanner`core class AudioEffectPanner` inherits `AudioEffect` (reference-counted).AudioEffectPhaser`core class AudioEffectPhaser` inherits `AudioEffect` (reference-counted).AudioEffectPitchShift`core class AudioEffectPitchShift` inherits `AudioEffect` (reference-counted).AudioEffectRecord`core class AudioEffectRecord` inherits `AudioEffect` (reference-counted).AudioEffectReverb`core class AudioEffectReverb` inherits `AudioEffect` (reference-counted).AudioEffectSpectrumAnalyzer`core class AudioEffectSpectrumAnalyzer` inherits `AudioEffect` (reference-counted).AudioEffectSpectrumAnalyzerInstance`core class AudioEffectSpectrumAnalyzerInstance` inherits `AudioEffectInstance` (reference-counted).AudioEffectStereoEnhance`core class AudioEffectStereoEnhance` inherits `AudioEffect` (reference-counted).AudioServer`core singleton class AudioServer` inherits `Object` (manually managed).AudioStream`core class AudioStream` inherits `Resource` (reference-counted).AudioStreamGenerator`core class AudioStreamGenerator` inherits `AudioStream` (reference-counted).AudioStreamGeneratorPlayback`core class AudioStreamGeneratorPlayback` inherits `AudioStreamPlaybackResampled` (reference-counted).AudioStreamMP3`core class AudioStreamMP3` inherits `AudioStream` (reference-counted).AudioStreamMicrophone`core class AudioStreamMicrophone` inherits `AudioStream` (reference-counted).AudioStreamOGGVorbis`core class AudioStreamOGGVorbis` inherits `AudioStream` (reference-counted).AudioStreamPlayback`core class AudioStreamPlayback` inherits `Reference` (reference-counted).AudioStreamPlaybackResampled`core class AudioStreamPlaybackResampled` inherits `AudioStreamPlayback` (reference-counted).AudioStreamPlayer`core class AudioStreamPlayer` inherits `Node` (manually managed).AudioStreamPlayer2D`core class AudioStreamPlayer2D` inherits `Node2D` (manually managed).AudioStreamPlayer3D`core class AudioStreamPlayer3D` inherits `Spatial` (manually 
managed).AudioStreamRandomPitch`core class AudioStreamRandomPitch` inherits `AudioStream` (reference-counted).AudioStreamSample`core class AudioStreamSample` inherits `AudioStream` (reference-counted).BackBufferCopy`core class BackBufferCopy` inherits `Node2D` (manually managed).BakedLightmap`core class BakedLightmap` inherits `VisualInstance` (manually managed).BakedLightmapData`core class BakedLightmapData` inherits `Resource` (reference-counted).BaseButton`core class BaseButton` inherits `Control` (manually managed).BitMap`core class BitMap` inherits `Resource` (reference-counted).BitmapFont`core class BitmapFont` inherits `Font` (reference-counted).Bone2D`core class Bone2D` inherits `Node2D` (manually managed).BoneAttachment`core class BoneAttachment` inherits `Spatial` (manually managed).BoxContainer`core class BoxContainer` inherits `Container` (manually managed).BoxShape`core class BoxShape` inherits `Shape` (reference-counted).BulletPhysicsServer`core class BulletPhysicsServer` inherits `PhysicsServer` (manually managed).Button`core class Button` inherits `BaseButton` (manually managed).ButtonGroup`core class ButtonGroup` inherits `Resource` (reference-counted).CPUParticles`core class CPUParticles` inherits `GeometryInstance` (manually managed).CPUParticles2D`core class CPUParticles2D` inherits `Node2D` (manually managed).CSGBox`core class CSGBox` inherits `CSGPrimitive` (manually managed).CSGCombiner`core class CSGCombiner` inherits `CSGShape` (manually managed).CSGCylinder`core class CSGCylinder` inherits `CSGPrimitive` (manually managed).CSGMesh`core class CSGMesh` inherits `CSGPrimitive` (manually managed).CSGPolygon`core class CSGPolygon` inherits `CSGPrimitive` (manually managed).CSGPrimitive`core class CSGPrimitive` inherits `CSGShape` (manually managed).CSGShape`core class CSGShape` inherits `GeometryInstance` (manually managed).CSGSphere`core class CSGSphere` inherits `CSGPrimitive` (manually managed).CSGTorus`core class CSGTorus` inherits 
`CSGPrimitive` (manually managed).CallbackTweener`core class CallbackTweener` inherits `Tweener` (reference-counted).Camera`core class Camera` inherits `Spatial` (manually managed).Camera2D`core class Camera2D` inherits `Node2D` (manually managed).CameraFeed`core class CameraFeed` inherits `Reference` (reference-counted).CameraServer`core singleton class CameraServer` inherits `Object` (manually managed).CameraTexture`core class CameraTexture` inherits `Texture` (reference-counted).CanvasItem`core class CanvasItem` inherits `Node` (manually managed).CanvasItemMaterial`core class CanvasItemMaterial` inherits `Material` (reference-counted).CanvasLayer`core class CanvasLayer` inherits `Node` (manually managed).CanvasModulate`core class CanvasModulate` inherits `Node2D` (manually managed).CapsuleMesh`core class CapsuleMesh` inherits `PrimitiveMesh` (reference-counted).CapsuleShape`core class CapsuleShape` inherits `Shape` (reference-counted).CapsuleShape2D`core class CapsuleShape2D` inherits `Shape2D` (reference-counted).CenterContainer`core class CenterContainer` inherits `Container` (manually managed).CharFXTransform`core class CharFXTransform` inherits `Reference` (reference-counted).CheckBox`core class CheckBox` inherits `Button` (manually managed).CheckButton`core class CheckButton` inherits `Button` (manually managed).CircleShape2D`core class CircleShape2D` inherits `Shape2D` (reference-counted).ClassDB`core singleton class ClassDB` inherits `Object` (manually managed).ClippedCamera`core class ClippedCamera` inherits `Camera` (manually managed).CollisionObject`core class CollisionObject` inherits `Spatial` (manually managed).CollisionObject2D`core class CollisionObject2D` inherits `Node2D` (manually managed).CollisionPolygon`core class CollisionPolygon` inherits `Spatial` (manually managed).CollisionPolygon2D`core class CollisionPolygon2D` inherits `Node2D` (manually managed).CollisionShape`core class CollisionShape` inherits `Spatial` (manually 
managed).CollisionShape2D`core class CollisionShape2D` inherits `Node2D` (manually managed).ColorPicker`core class ColorPicker` inherits `BoxContainer` (manually managed).ColorPickerButton`core class ColorPickerButton` inherits `Button` (manually managed).ColorRect`core class ColorRect` inherits `Control` (manually managed).ConcavePolygonShape`core class ConcavePolygonShape` inherits `Shape` (reference-counted).ConcavePolygonShape2D`core class ConcavePolygonShape2D` inherits `Shape2D` (reference-counted).ConeTwistJoint`core class ConeTwistJoint` inherits `Joint` (manually managed).ConfigFile`core class ConfigFile` inherits `Reference` (reference-counted).ConfirmationDialog`core class ConfirmationDialog` inherits `AcceptDialog` (manually managed).Container`core class Container` inherits `Control` (manually managed).Control`core class Control` inherits `CanvasItem` (manually managed).ConvexPolygonShape`core class ConvexPolygonShape` inherits `Shape` (reference-counted).ConvexPolygonShape2D`core class ConvexPolygonShape2D` inherits `Shape2D` (reference-counted).Crypto`core class Crypto` inherits `Reference` (reference-counted).CryptoKey`core class CryptoKey` inherits `Resource` (reference-counted).CubeMap`core class CubeMap` inherits `Resource` (reference-counted).CubeMesh`core class CubeMesh` inherits `PrimitiveMesh` (reference-counted).CullInstance`core class CullInstance` inherits `Spatial` (manually managed).Curve`core class Curve` inherits `Resource` (reference-counted).Curve2D`core class Curve2D` inherits `Resource` (reference-counted).Curve3D`core class Curve3D` inherits `Resource` (reference-counted).CurveTexture`core class CurveTexture` inherits `Texture` (reference-counted).CylinderMesh`core class CylinderMesh` inherits `PrimitiveMesh` (reference-counted).CylinderShape`core class CylinderShape` inherits `Shape` (reference-counted).DTLSServer`core class DTLSServer` inherits `Reference` (reference-counted).DampedSpringJoint2D`core class DampedSpringJoint2D` 
inherits `Joint2D` (manually managed).DirectionalLight`core class DirectionalLight` inherits `Light` (manually managed).Directory`core class Directory` inherits `Reference` (reference-counted).DynamicFont`core class DynamicFont` inherits `Font` (reference-counted).DynamicFontData`core class DynamicFontData` inherits `Resource` (reference-counted).EditorExportPlugin`tools class EditorExportPlugin` inherits `Reference` (reference-counted).EditorFeatureProfile`tools class EditorFeatureProfile` inherits `Reference` (reference-counted).EditorFileDialog`tools class EditorFileDialog` inherits `ConfirmationDialog` (manually managed).EditorFileSystem`tools class EditorFileSystem` inherits `Node` (manually managed).EditorFileSystemDirectory`tools class EditorFileSystemDirectory` inherits `Object` (manually managed).EditorImportPlugin`tools class EditorImportPlugin` inherits `ResourceImporter` (reference-counted).EditorInspector`tools class EditorInspector` inherits `ScrollContainer` (manually managed).EditorInspectorPlugin`tools class EditorInspectorPlugin` inherits `Reference` (reference-counted).EditorInterface`tools class EditorInterface` inherits `Node` (manually managed).EditorPlugin`tools class EditorPlugin` inherits `Node` (manually managed).EditorProperty`tools class EditorProperty` inherits `Container` (manually managed).EditorResourceConversionPlugin`tools class EditorResourceConversionPlugin` inherits `Reference` (reference-counted).EditorResourcePicker`tools class EditorResourcePicker` inherits `HBoxContainer` (manually managed).EditorResourcePreview`tools class EditorResourcePreview` inherits `Node` (manually managed).EditorResourcePreviewGenerator`tools class EditorResourcePreviewGenerator` inherits `Reference` (reference-counted).EditorSceneImporter`tools class EditorSceneImporter` inherits `Reference` (reference-counted).EditorSceneImporterFBX`tools class EditorSceneImporterFBX` inherits `EditorSceneImporter` (reference-counted).EditorSceneImporterGLTF`tools 
class EditorSceneImporterGLTF` inherits `EditorSceneImporter` (reference-counted).EditorScenePostImport`tools class EditorScenePostImport` inherits `Reference` (reference-counted).EditorScript`tools class EditorScript` inherits `Reference` (reference-counted).EditorScriptPicker`tools class EditorScriptPicker` inherits `EditorResourcePicker` (manually managed).EditorSelection`tools class EditorSelection` inherits `Object` (manually managed).EditorSettings`tools class EditorSettings` inherits `Resource` (reference-counted).EditorSpatialGizmo`tools class EditorSpatialGizmo` inherits `SpatialGizmo` (reference-counted).EditorSpatialGizmoPlugin`tools class EditorSpatialGizmoPlugin` inherits `Resource` (reference-counted).EditorSpinSlider`tools class EditorSpinSlider` inherits `Range` (manually managed).EditorVCSInterface`tools class EditorVCSInterface` inherits `Object` (manually managed).EncodedObjectAsID`core class EncodedObjectAsID` inherits `Reference` (reference-counted).Engine`core singleton class Engine` inherits `Object` (manually managed).Environment`core class Environment` inherits `Resource` (reference-counted).Expression`core class Expression` inherits `Reference` (reference-counted).ExternalTexture`core class ExternalTexture` inherits `Texture` (reference-counted).File`core class File` inherits `Reference` (reference-counted).FileDialog`core class FileDialog` inherits `ConfirmationDialog` (manually managed).FileSystemDock`tools class FileSystemDock` inherits `VBoxContainer` (manually managed).FlowContainer`core class FlowContainer` inherits `Container` (manually managed).Font`core class Font` inherits `Resource` (reference-counted).FuncRef`core class FuncRef` inherits `Reference` (reference-counted).GDNative`core class GDNative` inherits `Reference` (reference-counted).GDNativeLibrary`core class GDNativeLibrary` inherits `Resource` (reference-counted).GDScript`core class GDScript` inherits `Script` (reference-counted).GDScriptFunctionState`core class 
GDScriptFunctionState` inherits `Reference` (reference-counted).GIProbe`core class GIProbe` inherits `VisualInstance` (manually managed).GIProbeData`core class GIProbeData` inherits `Resource` (reference-counted).GLTFAccessor`core class GLTFAccessor` inherits `Resource` (reference-counted).GLTFAnimation`core class GLTFAnimation` inherits `Resource` (reference-counted).GLTFBufferView`core class GLTFBufferView` inherits `Resource` (reference-counted).GLTFCamera`core class GLTFCamera` inherits `Resource` (reference-counted).GLTFDocument`core class GLTFDocument` inherits `Resource` (reference-counted).GLTFLight`core class GLTFLight` inherits `Resource` (reference-counted).GLTFMesh`tools class GLTFMesh` inherits `Resource` (reference-counted).GLTFNode`core class GLTFNode` inherits `Resource` (reference-counted).GLTFSkeleton`core class GLTFSkeleton` inherits `Resource` (reference-counted).GLTFSkin`core class GLTFSkin` inherits `Resource` (reference-counted).GLTFSpecGloss`core class GLTFSpecGloss` inherits `Resource` (reference-counted).GLTFState`core class GLTFState` inherits `Resource` (reference-counted).GLTFTexture`core class GLTFTexture` inherits `Resource` (reference-counted).Generic6DOFJoint`core class Generic6DOFJoint` inherits `Joint` (manually managed).Geometry`core singleton class Geometry` inherits `Object` (manually managed).GeometryInstance`core class GeometryInstance` inherits `VisualInstance` (manually managed).GlobalConstants`core singleton class GlobalConstants` (reference-counted)Gradient`core class Gradient` inherits `Resource` (reference-counted).GradientTexture`core class GradientTexture` inherits `Texture` (reference-counted).GradientTexture2D`core class GradientTexture2D` inherits `Texture` (reference-counted).GraphEdit`core class GraphEdit` inherits `Control` (manually managed).GraphNode`core class GraphNode` inherits `Container` (manually managed).GridContainer`core class GridContainer` inherits `Container` (manually managed).GridMap`core class 
Struct gdnative::object::TRef
===

```
pub struct TRef<'a, T, Own = Shared>
where
    T: GodotObject,
    Own: Ownership,
{ /* private fields */ }
```

A temporary safe pointer to Godot objects that tracks thread access status. `TRef` can be coerced into bare references with `Deref`.

See the type-level documentation on `Ref` for detailed documentation on the reference system of `godot-rust`.

Using as method arguments or return values
---

`TRef<T, Shared>` can be passed into methods.

Using as `owner` arguments in NativeScript methods
---

It’s possible to use `TRef` as the `owner` argument in NativeScript methods. This can make passing `owner` to methods easier.

Implementations
---

### impl<'a, T, Own> TRef<'a, T, Own> where T: GodotObject, Own: Ownership,

#### pub fn as_ref(self) -> &'a T

Returns the underlying reference without thread access.

#### pub fn cast<U>(self) -> Option<TRef<'a, U, Own>> where U: GodotObject + SubClass<T>,

Performs a dynamic reference cast to the target type, keeping the thread access info.

#### pub fn upcast<U>(&self) -> TRef<'a, U, Own> where U: GodotObject, T: SubClass<U>,

Performs a static reference upcast to a supertype that is guaranteed to be valid, keeping the thread access info. This is guaranteed to be a no-op at runtime.

#### pub fn cast_instance<C>(self) -> Option<TInstance<'a, C, Own>> where C: NativeClass<Base = T>,

Convenience method to downcast to `TInstance` where `self` is the base object.

### impl<'a, Kind, T, Own> TRef<'a, T, Own> where Kind: Memory, T: GodotObject<Memory = Kind>, Own: NonUniqueOwnership,

#### pub fn claim(self) -> Ref<T, Own>

Persists this reference into a persistent `Ref` with the same thread access. This is only available for non-`Unique` accesses.

### impl<'a, T> TRef<'a, T, Shared> where T: GodotObject,

#### pub unsafe fn try_from_instance_id(id: i64) -> Option<TRef<'a, T, Shared>>

Recovers an instance ID previously returned by `Object::get_instance_id` if the object is still alive.
##### Safety

During the entirety of `'a`, the thread from which `try_from_instance_id` is called must have exclusive access to the underlying object, if it is still alive.

#### pub unsafe fn from_instance_id(id: i64) -> TRef<'a, T, Shared>

Recovers an instance ID previously returned by `Object::get_instance_id` if the object is still alive, and panics otherwise. This does **NOT** guarantee that the resulting reference is safe to use.

##### Panics

Panics if the given id refers to a destroyed object. For a non-panicking version, see `try_from_instance_id`.

##### Safety

During the entirety of `'a`, the thread from which `try_from_instance_id` is called must have exclusive access to the underlying object, if it is still alive.

Trait Implementations
---

### impl<'a, T, Own> AsRef<T> for TRef<'a, T, Own> where T: GodotObject, Own: Ownership,

#### fn as_ref(&self) -> &T

Converts this type into a shared reference of the (usually inferred) input type.

### impl<'a, T> AsVariant for TRef<'a, T, Shared> where T: GodotObject,

#### type Target = T

### impl<'a, T, Own> Borrow<T> for TRef<'a, T, Own> where T: GodotObject, Own: Ownership,

#### fn borrow(&self) -> &T

Immutably borrows from an owned value.

#### fn clone(&self) -> TRef<'a, T, Own>

Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>

Formats the value using the given formatter.
#### type Target = T

The resulting type after dereferencing.

#### fn deref(&self) -> &<TRef<'a, T, Own> as Deref>::Target

Dereferences the value.

### impl<'a, T> ToVariant for TRef<'a, T, Shared> where T: GodotObject,

#### fn to_variant(&self) -> Variant

### impl<'a, T, U> AsArg<U> for TRef<'a, T, Shared> where T: GodotObject + SubClass<U>, U: GodotObject,

### impl<'a, T, Own> Copy for TRef<'a, T, Own> where T: GodotObject, Own: Ownership,

### impl<'a, T, Own> OwnerArg<'a, T, Own> for TRef<'a, T, Own> where T: GodotObject, Own: Ownership + 'static,

Auto Trait Implementations
---

### impl<'a, T, Own> RefUnwindSafe for TRef<'a, T, Own> where Own: RefUnwindSafe, T: RefUnwindSafe,

### impl<'a, T, Own> Send for TRef<'a, T, Own> where Own: Send, T: Sync,

### impl<'a, T, Own> Sync for TRef<'a, T, Own> where Own: Sync, T: Sync,

### impl<'a, T, Own> Unpin for TRef<'a, T, Own> where Own: Unpin,

### impl<'a, T, Own> UnwindSafe for TRef<'a, T, Own> where Own: UnwindSafe, T: RefUnwindSafe,

Blanket Implementations
---

### impl<T> Any for T where T: 'static + ?Sized,

#### fn type_id(&self) -> TypeId

Gets the `TypeId` of `self`.

#### fn borrow(&self) -> &T

Immutably borrows from an owned value.

#### fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

### impl<T> From<T> for T

#### fn from(t: T) -> T

Returns the argument unchanged.

### impl<T, U> Into<U> for T where U: From<T>,

#### fn into(self) -> U

Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> OwnedToVariant for T where T: ToVariant,

#### fn owned_to_variant(self) -> Variant

### impl<T> ToOwned for T where T: Clone,

#### type Owned = T

The resulting type after obtaining ownership.

#### fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

#### fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

### impl<T, U> TryFrom<U> for T where U: Into<T>,

#### type Error = Infallible

The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>,

#### type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

Struct gdnative::object::Ref
===

```
pub struct Ref<T, Own = Shared>
where
    T: GodotObject,
    Own: Ownership,
{ /* private fields */ }
```

A polymorphic smart pointer for Godot objects whose behavior changes depending on the memory management method of the underlying type and the thread access status.

Manually-managed types
---

`Shared` references to manually-managed types, like `Ref<Node, Shared>`, act like raw pointers. They are safe to alias, can be sent between threads, and can also be taken as method arguments (converted from `Variant`). They can’t be used directly. Instead, it’s required to obtain a safe view first. See the “Obtaining a safe view” section below for more information.

`ThreadLocal` references to manually-managed types cannot normally be obtained, since it does not add anything over `Shared` ones.

`Unique` references to manually-managed types, like `Ref<Node, Unique>`, can’t be aliased or sent between threads, but can be used safely. However, they *won’t* be automatically freed on drop, and are *leaked* if not passed to the engine or freed manually with `free`. `Unique` references can be obtained through constructors safely, or `assume_unique` in unsafe contexts.

Reference-counted types
---

`Shared` references to reference-counted types, like `Ref<Reference, Shared>`, act like `Arc` smart pointers. New references can be created with `Clone`, and they can be sent between threads. The pointer is presumed to be always valid.
As such, more operations are available even when thread safety is not assumed. However, API methods still can’t be used directly, and users are required to obtain a safe view first. See the “Obtaining a safe view” section below for more information.

`ThreadLocal` references to reference-counted types, like `Ref<Reference, ThreadLocal>`, add the ability to call API methods safely. Unlike `Unique` references, it’s unsafe to convert them to `Shared` because there might be other `ThreadLocal` references in existence.

Obtaining a safe view
---

In a lot of cases, references obtained from the engine as return values or arguments aren’t safe to use, due to lack of pointer validity and thread safety guarantees in the API. As such, it’s usually required to use `unsafe` code to obtain safe views of the same object before API methods can be called. The ways to cast between different reference types are as follows:

| From | To | Method | Note |
| --- | --- | --- | --- |
| `Unique` | `&'a T` | `Deref` (API methods can be called directly) / `as_ref` | - |
| `ThreadLocal` | `&'a T` | `Deref` (API methods can be called directly) / `as_ref` | Only if `T` is a reference-counted type. |
| `Shared` | `&'a T` | `unsafe assume_safe::<'a>` | The underlying object must be valid, and exclusive to this thread during `'a`. |
| `Unique` | `ThreadLocal` | `into_thread_local` | - |
| `Unique` | `Shared` | `into_shared` | - |
| `Shared` | `ThreadLocal` | `unsafe assume_thread_local` | The reference must be local to the current thread. |
| `Shared` / `ThreadLocal` | `Unique` | `unsafe assume_unique` | The reference must be unique. |
| `ThreadLocal` | `Shared` | `unsafe assume_unique().into_shared()` | The reference must be unique. |

Using as method arguments or return values
---

In order to enforce thread safety statically, the ability to be passed to the engine is only given to some reference types. Specifically, they are:

* All *owned* `Ref<T, Unique>` references.
  The `Unique` access is lost if passed into a method.

* Owned and borrowed `Shared` references, including temporary ones (`TRef`).

It’s unsound to pass `ThreadLocal` references to the engine because there is no guarantee that the reference will stay on the same thread.

Conditional trait implementations
---

Many trait implementations for `Ref` are conditional, dependent on the type parameters. When viewing rustdoc documentation, you may expand the documentation on their respective `impl` blocks for more detailed explanations of the trait bounds.

Implementations
---

### impl<T> Ref<T, Unique> where T: GodotObject + Instanciable,

#### pub fn new() -> Ref<T, Unique>

Creates a new instance of `T`.

The lifetime of the returned object is *not* automatically managed if `T` is a manually-managed type.

### impl<T> Ref<T, Unique> where T: GodotObject,

#### pub fn by_class_name(class_name: &str) -> Option<Ref<T, Unique>>

Creates a new instance of a sub-class of `T` by its class name. Returns `None` if the class does not exist, cannot be constructed, has a different `Memory` from, or is not a sub-class of `T`.

The lifetime of the returned object is *not* automatically managed if `T` is a manually-managed type. This means that if `Object` is used as the type parameter, any `Reference` objects created, if returned, will be leaked. As a result, such calls will return `None`. Casting between `Object` and `Reference` is possible on `TRef` and bare references.

### impl<T, Own> Ref<T, Own> where T: GodotObject, Own: Ownership, RefImplBound: SafeDeref<<T as GodotObject>::Memory, Own>,

Methods for references that can be safely used.

#### pub fn as_ref(&self) -> TRef<'_, T, Own>

Returns a safe temporary reference that tracks thread access.

`Ref<T, Own>` can be safely dereferenced if either:

* `T` is reference-counted and `Ownership` is not `Shared`,
* or, `T` is manually-managed and `Ownership` is `Unique`.
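The constructor and safe-view rules above can be sketched as follows. This is a hedged example, not taken from the original page: it assumes the godot-rust `prelude` and a running Godot engine, and `node_ref` is a hypothetical argument standing in for a reference received from the engine.

```rust
use gdnative::prelude::*;

// Hypothetical helper: `node_ref` stands in for a reference obtained
// from the engine (e.g. a method argument converted from `Variant`).
fn use_references(node_ref: Ref<Node, Shared>) {
    // `Unique` references from constructors are safe to use directly
    // (API methods are reachable through `Deref`).
    let label = Label::new(); // Ref<Label, Unique>
    label.set_text("hello");

    // `Shared` references require an unsafe assumption first.
    // Safety: the object must be alive and exclusive to this thread
    // for the duration of the returned `TRef`.
    let node: TRef<Node> = unsafe { node_ref.assume_safe() };
    godot_print!("node name: {}", node.name());

    // The label was never handed to the engine; it is manually managed,
    // so free it explicitly to avoid a leak.
    label.free();
}
```

Because the example calls into the engine, it only compiles and runs inside a GDNative library loaded by Godot.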
### impl<T, Own> Ref<T, Own> where T: GodotObject, Own: Ownership, RefImplBound: SafeAsRaw<<T as GodotObject>::Memory, Own>,

Methods for references that point to valid objects, but are not necessarily safe to use.

* All `Ref`s to reference-counted types always point to valid objects.
* `Ref`s to manually-managed types are only guaranteed to be valid if `Unique`.

#### pub fn cast<U>(self) -> Option<Ref<U, Own>> where U: GodotObject<Memory = <T as GodotObject>::Memory> + SubClass<T>,

Performs a dynamic reference cast to the target type, keeping the reference count. Shorthand for `try_cast().ok()`.

The `cast` method can only be used for downcasts. For statically casting to a supertype, use `upcast` instead.

This is only possible between types with the same `Memory`, since otherwise the reference can get leaked. Casting between `Object` and `Reference` is possible on `TRef` and bare references.

#### pub fn upcast<U>(self) -> Ref<U, Own> where U: GodotObject<Memory = <T as GodotObject>::Memory>, T: SubClass<U>,

Performs a static reference upcast to a supertype, keeping the reference count. This is guaranteed to be valid.

This is only possible between types with the same `Memory`, since otherwise the reference can get leaked. Casting between `Object` and `Reference` is possible on `TRef` and bare references.

#### pub fn try_cast<U>(self) -> Result<Ref<U, Own>, Ref<T, Own>> where U: GodotObject<Memory = <T as GodotObject>::Memory> + SubClass<T>,

Performs a dynamic reference cast to the target type, keeping the reference count.

This is only possible between types with the same `Memory`, since otherwise the reference can get leaked. Casting between `Object` and `Reference` is possible on `TRef` and bare references.

##### Errors

Returns `Err(self)` if the cast failed.

#### pub fn cast_instance<C>(self) -> Option<Instance<C, Own>> where C: NativeClass<Base = T>,

Performs a downcast to a `NativeClass` instance, keeping the reference count.
Shorthand for `try_cast_instance().ok()`. The resulting `Instance` is not necessarily safe to use directly.

#### pub fn try_cast_instance<C>(self) -> Result<Instance<C, Own>, Ref<T, Own>> where C: NativeClass<Base = T>,

Performs a downcast to a `NativeClass` instance, keeping the reference count.

##### Errors

Returns `Err(self)` if the cast failed.

### impl<T> Ref<T, Shared> where T: GodotObject,

Methods for references that can’t be used directly, and have to be assumed safe `unsafe`ly.

#### pub unsafe fn assume_safe<'a, 'r>(&'r self) -> TRef<'a, T, Shared> where AssumeSafeLifetime<'a, 'r>: LifetimeConstraint<<T as GodotObject>::Memory>,

Assume that `self` is safe to use, returning a reference that can be used to call API methods.

This is guaranteed to be a no-op at runtime if `debug_assertions` is disabled. Runtime sanity checks may be added in debug builds to help catch bugs.

##### Safety

Suppose that the lifetime of the returned reference is `'a`. It’s safe to call `assume_safe` only if:

1. During the entirety of `'a`, the underlying object will always be valid. *This is always true for reference-counted types.* For them, the `'a` lifetime will be constrained to the lifetime of `&self`.

   This means that any methods called on the resulting reference will not free it, unless it’s the last operation within the lifetime. If any script methods are called, the code run as a consequence will also not free it. This can happen via virtual method calls on other objects, or signals connected in a non-deferred way.

2. During the entirety of `'a`, the thread from which `assume_safe` is called has exclusive access to the underlying object. This is because all Godot objects have “interior mutability” in Rust parlance, and can’t be shared across threads. The best way to guarantee this is to follow the official thread-safety guidelines across the codebase.

Failure to satisfy either of the conditions will lead to undefined behavior.
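A typical place where both `assume_safe` conditions hold is inside an exported NativeScript method: the engine passes a live object, and the call runs on the thread that owns it. The sketch below is an assumption-laden illustration, not from the original page; note also that the method attribute has changed name across godot-rust versions (`#[export]` in the 0.9/0.10 era, `#[method]` later).

```rust
use gdnative::prelude::*;

#[derive(NativeClass)]
#[inherit(Node)]
pub struct Watcher;

#[methods]
impl Watcher {
    fn new(_owner: &Node) -> Self {
        Watcher
    }

    #[export]
    fn describe(&self, _owner: &Node, target: Ref<Node, Shared>) {
        // Safety: Godot hands us a live object, and this exported method
        // runs on the thread that owns it, so both `assume_safe`
        // conditions hold for the duration of this call.
        let target = unsafe { target.assume_safe() };
        godot_print!("target: {}", target.name());
    }
}
```

Outside such a call context (e.g. on a background thread), neither condition can be taken for granted and `assume_safe` would be unsound.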
#### pub unsafe fn assume_unique(self) -> Ref<T, Unique>

Assume that `self` is the unique reference to the underlying object.

This is guaranteed to be a no-op at runtime if `debug_assertions` is disabled. Runtime sanity checks may be added in debug builds to help catch bugs.

##### Safety

Calling `assume_unique` when `self` isn’t the unique reference is instant undefined behavior. This is a much stronger assumption than `assume_safe` and should be used with care.

### impl<T> Ref<T, Shared> where T: GodotObject<Memory = ManuallyManaged>,

Extra methods with explicit sanity checks for manually-managed unsafe references.

#### pub unsafe fn is_instance_sane(&self) -> bool

Returns `true` if the pointer currently points to a valid object of the correct type. **This does NOT guarantee that it’s safe to use this pointer.**

##### Safety

This thread must have exclusive access to the object during the call.

#### pub unsafe fn assume_safe_if_sane<'a>(&self) -> Option<TRef<'a, T, Shared>>

Assume that `self` is safe to use, if a sanity check using `is_instance_sane` passed.

##### Safety

The same safety constraints as `assume_safe` apply. **The sanity check does NOT guarantee that the operation is safe.**

#### pub unsafe fn assume_unique_if_sane(self) -> Option<Ref<T, Unique>>

Assume that `self` is the unique reference to the underlying object, if a sanity check using `is_instance_sane` passed.

##### Safety

Calling `assume_unique_if_sane` when `self` isn’t the unique reference is instant undefined behavior. This is a much stronger assumption than `assume_safe` and should be used with care.

### impl<T> Ref<T, Shared> where T: GodotObject<Memory = RefCounted>,

Methods for conversion from `Shared` to `ThreadLocal` access. This is only available for reference-counted types.

#### pub unsafe fn assume_thread_local(self) -> Ref<T, ThreadLocal>

Assume that all references to the underlying object are local to the current thread.

This is guaranteed to be a no-op at runtime.
##### Safety

Calling `assume_thread_local` when there are references on other threads is instant undefined behavior. This is a much stronger assumption than `assume_safe` and should be used with care.

### impl<T> Ref<T, Unique> where T: GodotObject<Memory = RefCounted>,

Methods for conversion from `Unique` to `ThreadLocal` access. This is only available for reference-counted types.

#### pub fn into_thread_local(self) -> Ref<T, ThreadLocal>

Converts to a thread-local reference. This is guaranteed to be a no-op at runtime.

### impl<T> Ref<T, Unique> where T: GodotObject,

Methods for conversion from `Unique` to `Shared` access.

#### pub fn into_shared(self) -> Ref<T, Shared>

Converts to a shared reference. This is guaranteed to be a no-op at runtime.

### impl<T> Ref<T, Unique> where T: GodotObject<Memory = ManuallyManaged>,

Methods for freeing `Unique` references to manually-managed objects.

#### pub fn free(self)

Manually frees the object.

Manually-managed objects are not free-on-drop *even when the access is unique*, because it’s impossible to know whether methods take “ownership” of them or not. It’s up to the user to decide when they should be freed.

This is only available for `Unique` references. If you have a `Ref` with another access, and you are sure that it is unique, use `assume_unique` to convert it to a `Unique` one.

### impl<T> Ref<T, Unique> where T: GodotObject<Memory = ManuallyManaged> + QueueFree,

Methods for freeing `Unique` references to manually-managed objects.

#### pub fn queue_free(self)

Queues the object for deallocation in the near future. This is preferable for `Node`s compared to `Ref::free`.

This is only available for `Unique` references. If you have a `Ref` with another access, and you are sure that it is unique, use `assume_unique` to convert it to a `Unique` one.
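The free-on-drop caveat can be made concrete with a short sketch. This is an illustrative example under the assumption of a running engine, not from the original page:

```rust
use gdnative::prelude::*;

fn lifecycle_sketch() {
    // `Node` is manually managed: merely dropping the `Ref` would leak it.
    let node = Node::new(); // Ref<Node, Unique>
    node.set_name("temporary");

    // If the node were inside the scene tree, `queue_free` would be the
    // preferable, deferred way to deallocate it. Outside the tree, free
    // it immediately; `free` consumes the `Unique` reference.
    node.free();
}
```

Handing the node to the engine instead (e.g. via `add_child`) transfers responsibility for its lifetime, in which case neither call is made here.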
Trait Implementations
---

### impl<'a, T> AsVariant for &'a Ref<T, Shared> where T: GodotObject,

#### type Target = T

### impl<T> AsVariant for Ref<T, Shared> where T: GodotObject,

#### type Target = T

### impl<T> AsVariant for Ref<T, Unique> where T: GodotObject,

#### type Target = T

### impl<T, Own> Borrow<T> for Ref<T, Own> where T: GodotObject, Own: Ownership, RefImplBound: SafeDeref<<T as GodotObject>::Memory, Own>,

`Ref<T, Own>` can be safely dereferenced if either:

* `T` is reference-counted and `Ownership` is not `Shared`,
* or, `T` is manually-managed and `Ownership` is `Unique`.

#### fn borrow(&self) -> &T

Immutably borrows from an owned value.

`Ref` is `Clone` if the access is not `Unique`.

#### fn clone(&self) -> Ref<T, Own>

Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>

Formats the value using the given formatter.

`Ref<T, Own>` can be safely dereferenced if either:

* `T` is reference-counted and `Ownership` is not `Shared`,
* or, `T` is manually-managed and `Ownership` is `Unique`.

#### type Target = T

The resulting type after dereferencing.

#### fn deref(&self) -> &<Ref<T, Own> as Deref>::Target

Dereferences the value.

### impl<T> Export for Ref<T, Shared> where T: GodotObject,

#### type Hint = NoHint

A type-specific hint type that is valid for the type being exported. Returns `ExportInfo` given an optional typed hint.

### impl<T> FromVariant for Ref<T, Shared> where T: GodotObject,

#### fn from_variant(variant: &Variant) -> Result<Ref<T, Shared>, FromVariantError>

### impl<T, Own> Hash for Ref<T, Own> where T: GodotObject, Own: Ownership,

Hashes the raw pointer.

#### fn hash<H>(&self, state: &mut H) where H: Hasher,

Feeds this value into the given `Hasher`.

#### fn hash_slice<H>(data: &[Self], state: &mut H) where H: Hasher, Self: Sized,

Feeds a slice of this type into the given `Hasher`.

Ordering of the raw pointer value.

#### fn cmp(&self, other: &Ref<T, Own>) -> Ordering

This method returns an `Ordering` between `self` and `other`.

#### fn max(self, other: Self) -> Self where Self: Sized,

Compares and returns the maximum of two values.

#### fn min(self, other: Self) -> Self where Self: Sized,

Compares and returns the minimum of two values.

#### fn clamp(self, min: Self, max: Self) -> Self where Self: Sized + PartialOrd<Self>,

Restrict a value to a certain interval.

#### fn owned_to_variant(self) -> Variant

### impl<T, Own, RhsOws> PartialEq<Ref<T, RhsOws>> for Ref<T, Own> where T: GodotObject, Own: Ownership, RhsOws: Ownership,

Reference equality.

#### fn eq(&self, other: &Ref<T, RhsOws>) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &Rhs) -> bool

This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.

### impl<T, Own> PartialOrd<Ref<T, Own>> for Ref<T, Own> where T: GodotObject, Own: Ownership,

Ordering of the raw pointer value.

#### fn partial_cmp(&self, other: &Ref<T, Own>) -> Option<Ordering>

This method returns an ordering between `self` and `other` values if one exists.

#### fn lt(&self, other: &Rhs) -> bool

This method tests less than (for `self` and `other`) and is used by the `<` operator.

#### fn le(&self, other: &Rhs) -> bool

This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.

#### fn gt(&self, other: &Rhs) -> bool

This method tests greater than (for `self` and `other`) and is used by the `>` operator.

#### fn ge(&self, other: &Rhs) -> bool

This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.

#### fn to_variant(&self) -> Variant

### impl<'a, T, U> AsArg<U> for &'a Ref<T, Shared> where T: GodotObject + SubClass<U>, U: GodotObject,

### impl<T, U> AsArg<U> for Ref<T, Shared> where T: GodotObject + SubClass<U>, U: GodotObject,

### impl<T, U> AsArg<U> for Ref<T, Unique> where T: GodotObject + SubClass<U>, U: GodotObject,

### impl<T, Own> Copy for Ref<T, Own> where T: GodotObject<Memory = ManuallyManaged>, Own: NonUniqueOwnership,

`Ref` is `Copy` if the underlying object is manually-managed, and the access is not `Unique`.

### impl<T, Own> Eq for Ref<T, Own> where T: GodotObject, Own: Ownership,

Reference equality.

### impl<T, Own> Send for Ref<T, Own> where T: GodotObject, Own: Ownership + Send,

`Ref` is `Send` if the thread access is `Shared` or `Unique`.

### impl<T, Own> Sync for Ref<T, Own> where T: GodotObject, Own: Ownership + Sync,

`Ref` is `Sync` if the thread access is `Shared`.

Auto Trait Implementations
---

### impl<T, Own> RefUnwindSafe for Ref<T, Own> where Own: RefUnwindSafe, T: RefUnwindSafe, <<T as GodotObject>::Memory as MemorySpec>::PtrWrapper: RefUnwindSafe,

### impl<T, Own> Unpin for Ref<T, Own> where Own: Unpin, <<T as GodotObject>::Memory as MemorySpec>::PtrWrapper: Unpin,

### impl<T, Own> UnwindSafe for Ref<T, Own> where Own: UnwindSafe, T: RefUnwindSafe, <<T as GodotObject>::Memory as MemorySpec>::PtrWrapper: UnwindSafe,

Blanket Implementations
---

### impl<T> Any for T where T: 'static + ?Sized,

#### fn type_id(&self) -> TypeId

Gets the `TypeId` of `self`.

#### fn borrow(&self) -> &T

Immutably borrows from an owned value.

#### fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.
#### fn equivalent(&self, key: &K) -> bool

Compare self to `key` and return `true` if they are equal.

### impl<T> From<T> for T

#### fn from(t: T) -> T

Returns the argument unchanged.

### impl<T, U> Into<U> for T where U: From<T>,

#### fn into(self) -> U

Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> OwnedToVariant for T where T: ToVariant,

#### fn owned_to_variant(self) -> Variant

### impl<T> ToOwned for T where T: Clone,

#### type Owned = T

The resulting type after obtaining ownership.

#### fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

#### fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

### impl<T, U> TryFrom<U> for T where U: Into<T>,

#### type Error = Infallible

The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>,

#### type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

Module gdnative::object
===

Provides types to interact with the Godot `Object` class hierarchy.

This module contains wrappers and helpers to interact with Godot objects. In Godot, classes stand in an inheritance relationship, with the root at `Object`.

If you are looking for how to manage user-defined types (native scripts), check out the `export` module.

Modules
---

* `bounds`: Various traits to verify memory policy, ownership policy or lifetime bounds.
* `memory`: Marker types to express the memory management method of Godot types.
* `ownership`: Typestates to express ownership and thread safety of Godot types.

Structs
---

* `Instance`: A persistent reference to a GodotObject with a Rust NativeClass attached.
* `Null`: Represents an explicit null reference in method arguments. This works around type inference issues with `Option`. You may create `Null`s with `Null::null` or `GodotObject::null`.
* `RawObject`: An opaque struct representing Godot objects. This should never be created on the stack.
* `Ref`: A polymorphic smart pointer for Godot objects whose behavior changes depending on the memory management method of the underlying type and the thread access status.
* `TInstance`: A reference to a GodotObject with a Rust NativeClass attached that is assumed safe during a certain lifetime.
* `TRef`: A temporary safe pointer to Godot objects that tracks thread access status. `TRef` can be coerced into bare references with `Deref`.

Traits
---

* `AsArg`: Trait for safe conversion from Godot object references into API method arguments. This is a sealed trait with no public interface.
* `AsVariant`: Trait for safe conversion from Godot object references into Variant. This is a sealed trait with no public interface.
* `GodotObject`: Trait for Godot API objects. This trait is sealed, and implemented for generated wrapper types.
* `Instanciable`: GodotObjects that have a zero argument constructor.
* `NewRef`: A trait for incrementing the reference count to a Godot object.
* `QueueFree`: Manually managed Godot classes implementing `queue_free`. This trait has no public interface. See `Ref::queue_free`.
* `SubClass`: Marker trait for API types that are subclasses of another type. This trait is implemented by the bindings generator, and has no public interface. Users should not attempt to implement this trait.

Crate gdnative::tasks
===

Runtime async support for godot-rust. This crate contains types and functions that enable using async code with godot-rust.

Safety assumptions
---

This crate assumes that all user non-Rust code follows the official threading guidelines.

Structs
---

* `Async`: Adapter for async methods that implements `Method` and can be registered.
* `Context`: Context for creating `yield`-like futures in async methods.
* `Spawner`: A helper structure for working around naming future types.
  See `Spawner::spawn`.

* `StaticArgs`: Adapter for methods whose arguments are statically determined. If the arguments would fail to type check, the method will print the errors to Godot’s debug console and return `null`.
* `Yield`: Future that can be `await`ed for a signal or a `resume` call from Godot. See `Context` for methods that return this future.

Traits
---

* `AsyncMethod`: Trait for async methods. When exported, such methods return `FunctionState`-like objects that can be manually resumed or yielded to completion.
* `StaticArgsAsyncMethod`: Trait for async methods whose argument lists are known at compile time. Not to be confused with a “static method”. When exported, such methods return `FunctionState`-like objects that can be manually resumed or yielded to completion.

Functions
---

* `register_runtime`: Adds required supporting NativeScript classes to `handle`. This must be called once and only once per initialization.
* `set_boxed_executor`: Sets the global executor for the current thread to a `Box<dyn LocalSpawn>`. This value is leaked.
* `set_executor`: Sets the global executor for the current thread to a `&'static dyn LocalSpawn`.
* `terminate_runtime`: Releases all observers still in use. This should be called in the `godot_gdnative_terminate` callback.

Struct gdnative::core_types::Variant
===

```
pub struct Variant(_);
```

A `Variant` can represent all Godot values (core types or `Object` class instances).

The underlying data is either stored inline or reference-counted on the heap, depending on the size of the type and whether it is trivially copyable.

If you compile godot-rust with the `serde` feature enabled, you will have access to serialization/deserialization support: the traits `Serialize` and `Deserialize` will be automatically implemented on `VariantDispatch` as well as most of the types in `core_types`.

Implementations
---

### impl Variant

#### pub fn new<T>(from: T) -> Variant where T: OwnedToVariant,

Creates a `Variant` from a value that implements `ToVariant`.
#### pub fn nil() -> Variant Creates an empty `Variant`. #### pub fn to<T>(&self) -> Option<T> where T: FromVariant, Performs a strongly-typed, structure-aware conversion to `T` from this variant, if it is a valid representation of said type. This is the same as `T::from_variant(self).ok()`. This is the same conversion used to parse arguments of exported methods. See `FromVariant` for more details. #### pub fn try_to<T>(&self) -> Result<T, FromVariantError> where T: FromVariant, Performs a strongly-typed, structure-aware conversion to `T` from this variant, if it is a valid representation of said type. This is the same as `T::from_variant(self)`. This is the same conversion used to parse arguments of exported methods. See `FromVariant` for more details. #### pub fn coerce_to<T>(&self) -> T where T: CoerceFromVariant, Coerce a value of type `T` out of this variant, through what Godot presents as a “best-effort” conversion, possibly returning a default value. See `CoerceFromVariant` for more details. See also `Variant::to` and `Variant::try_to` for strongly-typed, structure-aware conversions into Rust types. #### pub fn to_object<T>(&self) -> Option<Ref<T, Shared>> where T: GodotObject, Convenience method to extract a `Ref<T, Shared>` from this variant, if the type matches. This is the same as `Ref::<T, Shared>::from_variant(self).ok()`. This is the same conversion used to parse arguments of exported methods. See `FromVariant` for more details. #### pub fn try_to_object<T>(&self) -> Result<Ref<T, Shared>, FromVariantError> where T: GodotObject, Convenience method to extract a `Ref<T, Shared>` from this variant, if the type matches. This is the same as `Ref::<T, Shared>::from_variant(self)`. This is the same conversion used to parse arguments of exported methods. See `FromVariant` for more details. #### pub fn get_type(&self) -> VariantType Returns this variant’s type.
#### pub fn dispatch(&self) -> VariantDispatch Converts this variant to a primitive value depending on its type. ##### Examples ``` let variant = 42.to_variant(); let number_as_float = match variant.dispatch() { VariantDispatch::I64(i) => i as f64, VariantDispatch::F64(f) => f, _ => panic!("not a number"), }; approx::assert_relative_eq!(42.0, number_as_float); ``` #### pub fn is_nil(&self) -> bool Returns true if this is an empty variant. #### pub fn has_method(&self, method: impl Into<GodotString>) -> bool #### pub unsafe fn call(&mut self, method: impl Into<GodotString>, args: &[Variant]) -> Result<Variant, CallError> Invokes a method on the held object. ##### Safety This method may invoke [Object::call()] internally, which is unsafe, as it allows execution of arbitrary code (including user-defined code in GDScript or unsafe Rust). #### pub fn evaluate(&self, op: VariantOperator, rhs: &Variant) -> Result<Variant, InvalidOp> Evaluates a variant operator on `self` and `rhs` and returns the result on success. ##### Errors Returns `Err(InvalidOp)` if the result is not valid. Trait Implementations --- ### impl Clone for Variant #### fn clone(&self) -> Variant Returns a copy of the value. #### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> Formats the value using the given formatter. #### fn default() -> Variant Returns the “default value” for a type. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> Formats the value using the given formatter. #### fn drop(&mut self) Executes the destructor for this type.
#### fn from(v: &'a Variant) -> VariantDispatch Converts to this type from the input type. ### impl<'a> From<&'a VariantDispatch> for Variant #### fn from(v: &'a VariantDispatch) -> Variant Converts to this type from the input type. ### impl FromVariant for Variant #### fn from_variant(variant: &Variant) -> Result<Variant, FromVariantError> ### impl Ord for Variant #### fn cmp(&self, other: &Variant) -> Ordering This method returns an `Ordering` between `self` and `other`. #### fn max(self, other: Self) -> Self where Self: Sized, Compares and returns the maximum of two values. #### fn min(self, other: Self) -> Self where Self: Sized, Compares and returns the minimum of two values. #### fn clamp(self, min: Self, max: Self) -> Self where Self: Sized + PartialOrd<Self>, Restrict a value to a certain interval. #### fn eq(&self, other: &Variant) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. #### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason. ### impl PartialOrd<Variant> for Variant #### fn partial_cmp(&self, other: &Variant) -> Option<Ordering> This method returns an ordering between `self` and `other` values if one exists. #### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. #### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. #### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator.
#### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### fn to_variant(&self) -> Variant ### impl Eq for Variant ### impl ToVariantEq for Variant Auto Trait Implementations --- ### impl RefUnwindSafe for Variant ### impl Send for Variant ### impl Sync for Variant ### impl Unpin for Variant ### impl UnwindSafe for Variant Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. #### fn borrow(&self) -> &T Immutably borrows from an owned value. #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn equivalent(&self, key: &K) -> bool Compare self to `key` and return `true` if they are equal. ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> OwnedToVariant for T where T: ToVariant, #### fn owned_to_variant(self) -> Variant ### impl<T> ToOwned for T where T: Clone, #### type Owned = T The resulting type after obtaining ownership. #### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. #### default fn to_string(&self) -> String Converts the given value to a `String`.
#### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. Function gdnative::init::diagnostics::missing_manual_registration === ``` pub fn missing_manual_registration() -> bool ``` Checks for any `NativeClass` types that are registered automatically, but not manually. Returns `true` if the test isn’t applicable, or if no such types were found. Some platforms may not have support for automatic registration. On such platforms, only manually registered classes are visible at run-time. Please refer to the `rust-ctor` README for an up-to-date listing of platforms that *do* support automatic registration. Module gdnative::core_types === Types that represent core types of Godot. In contrast to generated Godot class types from the `api` module, the types in here are hand-written in idiomatic Rust and are the counterparts to built-in types in GDScript. godot-rust provides optional serialization support for many core types. Enable the feature `serde` to make use of it. Structs --- AabbAxis-aligned bounding box.AlignedA pool array access that is (assumed to be) aligned.BasisA 3x3 matrix, typically used as an orthogonal basis for `Transform`.ColorRGBA color with 32-bit floating point components.DictionaryA reference-counted `Dictionary` of `Variant` key-value pairs.GodotCharType representing a character in Godot’s native encoding. Can be converted to and from `char`.
Depending on the platform, this might not always be able to represent a full code point.GodotStringGodot’s reference-counted string type.InvalidOpError indicating that an operator result is invalid.MarginErrorError indicating that an `i64` cannot be converted to a `Margin`.MaybeNotWrapper type around a `FromVariant` result that may not be a successMaybeUnalignedA pool array access that may be unaligned.NodePathA reference-counted relative or absolute path in a scene tree, for use with `Node.get_node()` and similar functions. It can reference a node, a resource within a node, or a property of a node or resource.OwnedA pool array write access with an owned aligned copy. The data is written back when this is dropped.Plane3D plane in Hessian form: `a*x + b*y + c*z + d = 0`PoolArrayA reference-counted CoW typed vector using Godot’s pool allocator, generic over possible element types.QuatQuaternion, used to represent 3D rotations.ReadGuardRAII read guard.Rect22D axis-aligned bounding box.RidA RID (“resource ID”) is an opaque handle that refers to a Godot `Resource`.StringNameInterned string.TransformAffine 3D transform (3x4 matrix).Transform2DAffine 2D transform (2x3 matrix).VariantA `Variant` can represent all Godot values (core types or `Object` class instances).VariantArrayA reference-counted `Variant` vector. Godot’s generic array data type.
Negative indices can be used to count from the right.Vector22D vector class.Vector33D vector class.WriteGuardRAII write guard.Enums --- AxisCallErrorFromVariantErrorError type returned by `FromVariant::from_variant`.GodotCharErrorError indicating that a `GodotChar` cannot be converted to a `char`.GodotErrorError codes used in various Godot APIs.MarginProvides compatibility with Godot’s `Margin` enum through the `TryFrom` trait.VariantDispatchRust enum associating each primitive variant type to its value.VariantEnumReprVariantOperatorGodot variant operator kind.VariantStructReprVariantTypeTraits --- CoerceFromVariantTypes that can be coerced from a `Variant`. Coercions are provided by Godot, with results consistent with GDScript. This cannot be implemented for custom types.FromVariantTypes that can be converted from a `Variant`. Conversions are performed in Rust, and can be implemented for custom types.GuardTrait for array access guardsOwnedToVariantTypes that can only be safely converted to a `Variant` as owned values. Such types cannot implement `ToVariant` in general, but can still be passed to API methods as arguments, or used as return values. Notably, this includes `Unique` arrays, dictionaries, and references to Godot objects and instances.PoolElementTrait for element types that can be contained in `PoolArray`. 
This trait is sealed and has no public interface.ToVariantTypes that can be converted to a `Variant`.ToVariantEqTrait for types whose `ToVariant` implementations preserve equivalence.WritePtrMarker trait for write access guardsType Definitions --- ByteArrayDeprecatedA reference-counted vector of `u8` that uses Godot’s pool allocator.ColorArrayDeprecatedA reference-counted vector of `Color` that uses Godot’s pool allocator.Float32ArrayDeprecatedA reference-counted vector of `f32` that uses Godot’s pool allocator.GodotResultResult type with GodotErrorInt32ArrayDeprecatedA reference-counted vector of `i32` that uses Godot’s pool allocator.ReadA RAII read access for Godot pool arrays.StringArrayDeprecatedA reference-counted vector of `GodotString` that uses Godot’s pool allocator.Vector2ArrayDeprecatedA reference-counted vector of `Vector2` that uses Godot’s pool allocator.Vector3ArrayDeprecatedA reference-counted vector of `Vector3` that uses Godot’s pool allocator.WriteA RAII write access for Godot pool arrays. This will only lock the CoW container once, as opposed to every time with methods like `push()`. Module gdnative::export === Functionality for user-defined types exported to the engine (native scripts). NativeScript allows users to have their own scripts in a native language (in this case Rust). It is *not* the same as GDNative, the native interface to call into Godot. Symbols in this module allow registration, exporting and management of user-defined types which are wrapped in native scripts. If you are looking for how to manage Godot core types or classes (objects), check out the `core_types` and `object` modules, respectively. To handle initialization and shutdown of godot-rust, take a look at the `init` module. For full examples, see `examples` in the godot-rust repository. 
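Several traits above (`PoolElement`, `AsArg`, `GodotObject`, `OwnerArg`, …) are described as “sealed”: visible and usable downstream, but implementable only inside the defining crate. For readers unfamiliar with the idiom, here is a minimal plain-Rust sketch of the usual sealing pattern; the names are illustrative, not gdnative’s actual internals:

```rust
mod pool {
    // The private supertrait lives in a non-public module, so code
    // outside this crate cannot even name it, let alone implement it.
    mod private {
        pub trait Sealed {}
        impl Sealed for u8 {}
        impl Sealed for i32 {}
    }

    /// Public trait with a private supertrait: callers can use it as a
    /// bound and call its methods, but cannot add new implementations.
    pub trait PoolElement: private::Sealed {
        fn type_name() -> &'static str;
    }

    impl PoolElement for u8 {
        fn type_name() -> &'static str { "u8" }
    }
    impl PoolElement for i32 {
        fn type_name() -> &'static str { "i32" }
    }
}

fn main() {
    use pool::PoolElement;
    // The trait is fully usable from the outside...
    assert_eq!(<u8 as PoolElement>::type_name(), "u8");
    assert_eq!(<i32 as PoolElement>::type_name(), "i32");
    // ...but `impl pool::PoolElement for MyType` elsewhere would fail to
    // compile, because `private::Sealed` is unreachable.
    println!("sealed trait demo ok");
}
```

This is why the docs can promise that element types of `PoolArray` are a closed set: no user crate can extend it.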
Modules --- hintStrongly typed property hints.user_dataCustomizable user-data wrappers.Macros --- godot_wrap_methodConvenience macro to wrap an object’s method into a `Method` implementor that can be passed to the engine when registering a class.Structs --- ArgBuilderBuilder for providing additional argument information for error reporting.ArgumentErrorError during argument parsing.ClassBuilderAllows registration of exported properties, methods and signals.ExportInfoMetadata about the exported property.IndexBoundsDefines which number of arguments is valid.MethodBuilderBuilder type used to register a method on a `NativeClass`.PropertyPlaceholder type for exported properties with no backing field.PropertyBuilderBuilder type used to register a property on a `NativeClass`.PropertyUsageSignalBuilderClass to construct a signal. Make sure to call `Self::done()` in the end.SignalParamParameter in a signal declaration.StaticArgsAdapter for methods whose arguments are statically determined. If the arguments would fail to type check, the method will print the errors to Godot’s debug console and return `null`.VarargsSafe interface to a list of borrowed method arguments with a convenient API for common operations with them.Enums --- RpcModeVarargsErrorAll possible errors that can occur when converting from Varargs.Traits --- ExportTrait for exportable types.FromVarargsTrait for structures that can be parsed from `Varargs`.MethodSafe low-level trait for stateful, variadic methods that can be called on a native script type.MixinTrait for mixins, manually registered `#[methods]` blocks that may be applied to multiple types.NativeClassTrait used for describing and initializing a Godot script class.NativeClassMethodsTrait used to provide information of Godot-exposed methods of a script class.OwnerArgTrait for types that can be used as the `owner` arguments of exported methods. 
This trait is sealed and has no public interface.StaticArgsMethodTrait for methods whose argument lists are known at compile time. Not to be confused with a “static method”.StaticallyNamedA NativeScript “class” that is statically named. `NativeClass` types that implement this trait can be registered using [`InitHandle::add_class`]. Module gdnative::globalscope === Port of selected GDScript built-in functions. This module contains *some* of the functions available in the @GDScript documentation. Reasons why a GDScript function may *not* be ported to Rust include: * they are in the Rust standard library (`abs`, `sin`, `floor`, `assert`, …) * they are already part of a godot-rust API + `print` -> `godot_print!` + `instance_from_id` -> `GodotObject::from_instance_id()` + … * they have a private implementation, i.e. a Rust port would have different semantics + `randi`, `randf` etc. – users should use `rand` crate + `str2var`, `bytes2var`, `hash` etc – to be verified This above list is not a definitive inclusion/exclusion criterion, just a rough guideline. Other noteworthy special cases: * GDScript `fmod` corresponds to Rust’s `%` operator on `f32` (also known as the `Rem` trait). 
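The `fmod` note above can be checked in plain Rust, and it is worth contrasting with `fposmod` from the function list that follows: `%` on floats keeps the sign of the dividend, while `fposmod` wraps equally in positive and negative ranges, which matches the behavior of the standard-library method `f32::rem_euclid` (plain Rust, no gdnative dependency):

```rust
fn main() {
    // GDScript `fmod(a, b)` corresponds to `%` on floats: the result
    // keeps the sign of the dividend `a`.
    assert_eq!(7.5_f32 % 2.0, 1.5);
    assert_eq!(-7.5_f32 % 2.0, -1.5);

    // GDScript `fposmod(a, b)` instead always wraps into [0, b); Rust's
    // `rem_euclid` behaves the same way. (All values here are exactly
    // representable, so exact float comparison is safe.)
    assert_eq!(7.5_f32.rem_euclid(2.0), 1.5);
    assert_eq!((-7.5_f32).rem_euclid(2.0), 0.5);
}
```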
Functions --- cartesian2polarCoordinate system conversion: cartesian -> polardb2linearConverts from decibels to linear energy (audio).easeReturns an “eased” value of x based on an easing function defined with `curve`.fposmodReturns the floating-point modulus of `a/b` that wraps equally in positive and negative.inverse_lerpFind linear interpolation weight from interpolated values.is_equal_approxReturns `true` if `a` and `b` are approximately equal to each other.is_zero_approxReturns true if `s` is zero or almost zero.lerpLinearly interpolates between two values, by the factor defined in weight.lerp_angleLinearly interpolates between two angles (in radians), by a normalized value.linear2dbConverts from linear energy to decibels (audio).loadLoads a resource from the filesystem located at `path`.move_towardMoves `range.start()` toward `range.end()` by the `delta` value.nearest_po2Returns the nearest equal or larger power of 2 for an integer value.polar2cartesianCoordinate system conversion: polar -> cartesianposmodReturns the integer modulus of `a/b` that wraps equally in positive and negative.range_lerpMaps a value from `range_from` to `range_to`, using linear interpolation.smoothstepSmooth (Hermite) interpolation.step_decimalsPosition of the first non-zero digit, after the decimal point.stepifySnaps float value `s` to a given `step`.wrapfWraps float value between `min` and `max`.wrapiWraps integer value between `min` and `max`. Module gdnative::init === Global initialization and termination of the library. This module provides all the plumbing required for global initialization and shutdown of godot-rust. ### Init and exit hooks Three endpoints are automatically invoked by the engine during startup and shutdown: * `godot_gdnative_init`, * `godot_nativescript_init`, * `godot_gdnative_terminate`, All three must be present. To quickly define all three endpoints using the default names, use `godot_init`. 
### Registering script classes `InitHandle` is the registry of all your exported symbols. To register script classes, call `InitHandle::add_class()` or `InitHandle::add_tool_class()` in your `godot_nativescript_init` or `godot_init` callback: ``` use gdnative::prelude::*; #[derive(NativeClass)] struct HelloWorld { /* ... */ } #[methods] impl HelloWorld { /* ... */ } fn init(handle: InitHandle) { handle.add_class::<HelloWorld>(); } godot_init!(init); ``` Modules --- diagnosticsRun-time tracing functions to help debug the init process.Macros --- godot_gdnative_initDeclare the API endpoint to initialize the gdnative API on startup.godot_gdnative_terminateDeclare the API endpoint invoked during shutdown.godot_initDeclare all the API endpoints necessary to initialize a NativeScript library.godot_nativescript_initDeclare the API endpoint to initialize export classes on startup.Structs --- InitHandleA handle that can register new classes to the engine during initialization.InitializeInfoContext for the `godot_gdnative_init` callback.TerminateInfoContext for the `godot_gdnative_terminate` callback. Module gdnative::log === Functions for using the engine’s logging system in the editor. Macros --- godot_dbgPrints and returns the value of a given expression for quick and dirty debugging, using the engine’s logging system (visible in the editor).godot_errorPrint an error using the engine’s logging system (visible in the editor).godot_printPrint a message using the engine’s logging system (visible in the editor).godot_siteCreates a `Site` value from the current position in code, optionally with a function path for identification.godot_warnPrint a warning using the engine’s logging system (visible in the editor).Structs --- SiteValue representing a call site for errors and warnings. Can be constructed using the `godot_site` macro, or manually.Functions --- errorPrint an error to the Godot console.printPrint a message to the Godot console.warnPrint a warning to the Godot console. 
Module gdnative::prelude === Curated re-exports of common items. Modules --- user_dataUser-data attributes from `export::user_data` module.Macros --- godot_dbgPrints and returns the value of a given expression for quick and dirty debugging, using the engine’s logging system (visible in the editor).godot_errorPrint an error using the engine’s logging system (visible in the editor).godot_initDeclare all the API endpoints necessary to initialize a NativeScript library.godot_printPrint a message using the engine’s logging system (visible in the editor).godot_warnPrint a warning using the engine’s logging system (visible in the editor).godot_wrap_methodConvenience macro to wrap an object’s method into a `Method` implementor that can be passed to the engine when registering a class.Structs --- AabbAxis-aligned bounding box.BasisA 3x3 matrix, typically used as an orthogonal basis for `Transform`.Button`core class Button` inherits `BaseButton` (manually managed).CanvasItem`core class CanvasItem` inherits `Node` (manually managed).CanvasLayer`core class CanvasLayer` inherits `Node` (manually managed).ClassBuilderAllows registration of exported properties, methods and signals.ColorRGBA color with 32-bit floating point components.ColorRect`core class ColorRect` inherits `Control` (manually managed).Control`core class Control` inherits `CanvasItem` (manually managed).DictionaryA reference-counted `Dictionary` of `Variant` key-value pairs.ExportInfoMetadata about the exported property.GodotStringGodot’s reference-counted string type.Image`core class Image` inherits `Resource` (reference-counted).InitHandleA handle that can register new classes to the engine during initialization.Input`core singleton class Input` inherits `Object` (manually managed).InputEvent`core class InputEvent` inherits `Resource` (reference-counted).InputEventKey`core class InputEventKey` inherits `InputEventWithModifiers` (reference-counted).InstanceA persistent reference to a GodotObject with a rust 
NativeClass attached.KinematicBody`core class KinematicBody` inherits `PhysicsBody` (manually managed).KinematicBody2D`core class KinematicBody2D` inherits `PhysicsBody2D` (manually managed).Label`core class Label` inherits `Control` (manually managed).MethodBuilderBuilder type used to register a method on a `NativeClass`.Node`core class Node` inherits `Object` (manually managed).Node2D`core class Node2D` inherits `CanvasItem` (manually managed).NodePathA reference-counted relative or absolute path in a scene tree, for use with `Node.get_node()` and similar functions. It can reference a node, a resource within a node, or a property of a node or resource.NullRepresents an explicit null reference in method arguments. This works around type inference issues with `Option`. You may create `Null`s with `Null::null` or `GodotObject::null`.ObjectThe base class of all classes in the Godot hierarchy.PackedScene`core class PackedScene` inherits `Resource` (reference-counted).Plane3D plane in Hessian form: `a*x + b*y + c*z + d = 0`PoolArrayA reference-counted CoW typed vector using Godot’s pool allocator, generic over possible element types.PropertyPlaceholder type for exported properties with no backing field.PropertyUsageQuatQuaternion, used to represent 3D rotations.Rect22D axis-aligned bounding box.RefA polymorphic smart pointer for Godot objects whose behavior changes depending on the memory management method of the underlying type and the thread access status.ReferenceBase class of all reference-counted types. Inherits `Object`.ResourceLoader`core singleton class ResourceLoader` inherits `Object` (manually managed).RidA RID (“resource ID”) is an opaque handle that refers to a Godot `Resource`.SceneTree`core class SceneTree` inherits `MainLoop` (manually managed).Shader`core class Shader` inherits `Resource` (reference-counted).SharedMarker that indicates that a value currently might be shared in the same or over multiple threads.SignalBuilderClass to construct a signal.
Make sure to call `Self::done()` in the end.SignalParamParameter in a signal declaration.Spatial`core class Spatial` inherits `Node` (manually managed).Sprite`core class Sprite` inherits `Node2D` (manually managed).StringNameInterned string.TInstanceA reference to a GodotObject with a rust NativeClass attached that is assumed safe during a certain lifetime.TRefA temporary safe pointer to Godot objects that tracks thread access status. `TRef` can be coerced into bare references with `Deref`.Texture`core class Texture` inherits `Resource` (reference-counted).ThreadLocalMarker that indicates that a value can currently only be shared in the same thread.Timer`core class Timer` inherits `Node` (manually managed).TransformAffine 3D transform (3x4 matrix).Transform2DAffine 2D transform (2x3 matrix).Tween`core class Tween` inherits `Node` (manually managed).UniqueMarker that indicates that a value currently only has a single unique reference.VariantA `Variant` can represent all Godot values (core types or `Object` class instances).VariantArrayA reference-counted `Variant` vector. Godot’s generic array data type. Negative indices can be used to count from the right.Vector22D vector class.Vector33D vector class.Viewport`core class Viewport` inherits `Node` (manually managed).Enums --- FromVariantErrorError type returned by `FromVariant::from_variant`.GodotErrorError codes used in various Godot APIs.ManuallyManagedMarker that indicates that a type is manually managed.RefCountedMarker that indicates that a type is reference-counted.VariantDispatchRust enum associating each primitive variant type to its value.VariantOperatorGodot variant operator kind.VariantTypeTraits --- AsArgTrait for safe conversion from Godot object references into API method arguments. This is a sealed trait with no public interface.FromVariantTypes that can be converted from a `Variant`. Conversions are performed in Rust, and can be implemented for custom types.GodotObjectTrait for Godot API objects. 
This trait is sealed, and implemented for generated wrapper types.InstanciableGodotObjects that have a zero argument constructor.MethodSafe low-level trait for stateful, variadic methods that can be called on a native script type.NativeClassTrait used for describing and initializing a Godot script class.NativeClassMethodsTrait used to provide information of Godot-exposed methods of a script class.NewRefA trait for incrementing the reference count to a Godot object.NodeResolveExtOwnedToVariantTypes that can only be safely converted to a `Variant` as owned values. Such types cannot implement `ToVariant` in general, but can still be passed to API methods as arguments, or used as return values. Notably, this includes `Unique` arrays, dictionaries, and references to Godot objects and instances.QueueFreeManually managed Godot classes implementing `queue_free`. This trait has no public interface. See `Ref::queue_free`.SubClassMarker trait for API types that are subclasses of another type. This trait is implemented by the bindings generator, and has no public interface. 
Users should not attempt to implement this trait.ToVariantTypes that can be converted to a `Variant`.ToVariantEqTrait for types whose `ToVariant` implementations preserve equivalence.Functions --- autoload⚠Convenience method to obtain a reference to an “auto-load” node, that is a child of the root node.loadLoads a resource from the filesystem located at `path`.Type Definitions --- ByteArrayDeprecatedA reference-counted vector of `u8` that uses Godot’s pool allocator.ColorArrayDeprecatedA reference-counted vector of `Color` that uses Godot’s pool allocator.Float32ArrayDeprecatedA reference-counted vector of `f32` that uses Godot’s pool allocator.Int32ArrayDeprecatedA reference-counted vector of `i32` that uses Godot’s pool allocator.StringArrayDeprecatedA reference-counted vector of `GodotString` that uses Godot’s pool allocator.Vector2ArrayDeprecatedA reference-counted vector of `Vector2` that uses Godot’s pool allocator.Vector3ArrayDeprecatedA reference-counted vector of `Vector3` that uses Godot’s pool allocator.Attribute Macros --- methodsCollects method signatures of all functions in a `NativeClass` that have the `#[method]` attribute and registers them with Godot.monomorphizeWires up necessary internals for a concrete monomorphization of a generic `NativeClass`, represented as a type alias, so it can be registered.profiledMakes a function profiled in Godot’s built-in profiler. This macro automatically creates a tag using the name of the current module and the function by default.Derive Macros --- FromVarargsEnable struct types to be parsed as argument lists.FromVariantNativeClassMakes it possible to use a type as a NativeScript. Automatically registers the type if the `inventory` feature is enabled on supported platforms.OwnedToVariantToVariant Module gdnative::profiler === Interface to Godot’s built-in profiler. 
Macros --- _profile_sigConvenience macro to create a profiling signature with a given tag.profile_sigConvenience macro to create a profiling signature with a given tag.Structs --- SignatureA string encoding information about the code being profiled for Godot’s built-in profiler.Functions --- add_dataAdd a data point to Godot’s built-in profiler. The profiler only has microsecond precision. Sub-microsecond time is truncated.profileTimes a closure and adds the measured time to Godot’s built-in profiler with the given signature, and then returns its return value. Macro gdnative::godot_dbg === ``` macro_rules! godot_dbg { () => { ... }; ($val:expr) => { ... }; ($val:expr,) => { ... }; ($($val:expr),+ $(,)?) => { ... }; } ``` Prints and returns the value of a given expression for quick and dirty debugging, using the engine’s logging system (visible in the editor). This behaves similarly to the `std::dbg!` macro. Macro gdnative::godot_error === ``` macro_rules! godot_error { ($($args:tt)*) => { ... }; } ``` Print an error using the engine’s logging system (visible in the editor). Guarantees --- It’s guaranteed that the expansion result of this macro may *only* panic if: * Any of the arguments for the message panicked in `fmt`. * The formatted message contains the NUL byte (`\0`) anywhere. Macro gdnative::godot_print === ``` macro_rules! godot_print { ($($args:tt)*) => { ... }; } ``` Print a message using the engine’s logging system (visible in the editor). Macro gdnative::godot_site === ``` macro_rules! godot_site { () => { ... }; ($($path:tt)+) => { ... }; } ``` Creates a `Site` value from the current position in code, optionally with a function path for identification.
Examples --- ``` use gdnative::log; // WARN: <unset>: foo At: path/to/file.rs:123 log::warn(log::godot_site!(), "foo"); // ERROR: Foo::my_func: bar At: path/to/file.rs:123 log::error(log::godot_site!(Foo::my_func), "bar"); ``` Struct gdnative::log::Site === ``` pub struct Site<'a> { /* private fields */ } ``` Value representing a call site for errors and warnings. Can be constructed using the `godot_site` macro, or manually. Implementations --- ### impl<'a> Site<'a> #### pub const fn new(file: &'a CStr, func: &'a CStr, line: u32) -> Site<'a> Construct a new `Site` value using values provided manually. Trait Implementations --- ### impl<'a> Clone for Site<'a> #### fn clone(&self) -> Site<'a> Returns a copy of the value. #### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. --- ### impl<'a> RefUnwindSafe for Site<'a> ### impl<'a> Send for Site<'a> ### impl<'a> Sync for Site<'a> ### impl<'a> Unpin for Site<'a> ### impl<'a> UnwindSafe for Site<'a> Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. #### fn borrow(&self) -> &T Immutably borrows from an owned value. #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for T where T: Clone, #### type Owned = T The resulting type after obtaining ownership. #### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. #### default fn to_string(&self) -> String Converts the given value to a `String`.
### impl<T, U> TryFrom<U> for T where U: Into<T>

#### type Error = Infallible

The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>

#### type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
Package ‘SLModels’

October 12, 2022

Type: Package
Title: Stepwise Linear Models for Binary Classification Problems under Youden Index Optimisation
Version: 0.1.2
Depends: stats, ROCR
Maintainer: <NAME> <<EMAIL>>
Description: Stepwise models for the optimal linear combination of continuous variables in binary classification problems under Youden Index optimisation. Information on the models implemented can be found at Aznar-Gimeno et al. (2021) <doi:10.3390/math9192497>.
License: GPL-3
Encoding: UTF-8
NeedsCompilation: no
Author: <NAME> [aut, cre] (<https://orcid.org/0000-0003-1415-146X>), <NAME> [aut] (<https://orcid.org/0000-0002-3007-302X>), <NAME> [aut] (<https://orcid.org/0000-0002-6474-2252>), <NAME> [aut] (<https://orcid.org/0000-0003-2755-5500>)
Repository: CRAN
Date/Publication: 2022-02-03 14:30:10 UTC

SLModels: Stepwise Linear Models for Binary Classification Problems under Youden Index Optimisation

Description

Stepwise models for the optimal linear combination of continuous variables in binary classification problems under Youden Index optimisation. Information on the models implemented can be found at Aznar-Gimeno et al. (2021) <doi:10.3390/math9192497>.

Usage

SLModels(data, algorithm="stepwise", scaling=FALSE)

Arguments

data — Data frame containing the input variables and the binary output variable. The last column must be reserved for the output variable.
algorithm — string; stepwise linear model to be applied. The options are: "stepwise", "minmax", "minmaxmedian", "minmaxiqr"; default value: "stepwise".
scaling — boolean; if TRUE, Min-Max Scaling is applied; if FALSE, no normalisation is applied to the input variables; default value: FALSE.

Details

The "stepwise" algorithm refers to our proposed stepwise algorithm on the original variables, which is the adaptation for the maximisation of the Youden index of the one proposed by Esteban et al. (2011) <doi:10.1080/02664761003692373>.
The general idea of this approach, as suggested by <NAME> Thompson (2000) <doi:10.1093/biostatistics/1.2.123>, is to follow a step-by-step algorithm that includes a new variable in each step, selecting the best combination (or combinations) of two variables in terms of maximising the Youden index.

The "minmax" algorithm refers to the distribution-free min–max approach proposed by Liu et al. (2011) <doi:10.1002/sim.4238>. The idea is to reduce the order of the linear combination beforehand by considering only two markers (the maximum and minimum values of all the variables/biomarkers). This algorithm was adapted in order to maximise the Youden index.

The "minmaxmedian" algorithm refers to our proposed algorithm that considers the linear combination of the following three variables: the minimum, maximum and median values of the original variables.

The "minmaxiqr" algorithm refers to our proposed algorithm that considers the linear combination of the following three variables: the minimum, maximum and interquartile range (IQR) values of the original variables.

More information on the implemented algorithms can be found in Aznar-Gimeno et al. (2021) <doi:10.3390/math9192497>.

Value

Optimal linear combination that maximises the Youden index. Specifically, the function returns the coefficients for each variable, the optimal cut-off point and the Youden index achieved.

Note

The "stepwise" algorithm becomes a computationally intensive problem when the number of variables exceeds 4.

Author(s)

<NAME>, <NAME>, <NAME>, <NAME>

References

<NAME>., <NAME>., <NAME>., del-Hoyo-Alonso, R., & <NAME>. (2021). Incorporating a New Summary Statistic into the Min–Max Approach: A Min–Max–Median, Min–Max–IQR Combination of Biomarkers for Maximising the Youden Index. Mathematics, 9(19), 2497, doi:10.3390/math9192497.

<NAME>., <NAME>., & <NAME>. (2011). A step-by-step algorithm for combining diagnostic tests. Journal of Applied Statistics, 38(5), 899-911, doi:10.1080/02664761003692373.
<NAME>., & <NAME>. (2000). Combining diagnostic test results to increase accuracy. Biostatistics, 1(2), 123-140, doi:10.1093/biostatistics/1.2.123.

<NAME>., <NAME>., & <NAME>. (2011). A min–max combination of biomarkers to improve diagnostic accuracy. Statistics in Medicine, 30(16), 2005-2014, doi:10.1002/sim.4238.

Examples

# Create data frame
x1 <- rnorm(100, sd = 1)
x2 <- rnorm(100, sd = 2)
x3 <- rnorm(100, sd = 3)
x4 <- rnorm(100, sd = 4)
z <- rep(c(1, 0), c(50, 50))
DT <- data.frame(cbind(x1, x2, x3, x4))
data <- cbind(DT, z)

# Example 1
SLModels(data)  # default values: algorithm="stepwise", scaling=FALSE

# Example 2
SLModels(data, algorithm="minmax")  # scaling=FALSE, default value

# Example 3
SLModels(data, algorithm="minmax", scaling=TRUE)

# Example 4
SLModels(data, algorithm="minmaxmedian", scaling=TRUE)

# Example 5
SLModels(data, algorithm="minmaxiqr", scaling=TRUE)
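All four algorithms select among candidate combinations by maximising the Youden index J = sensitivity + specificity − 1 over cut-off points. The core cut-off search can be sketched outside R; the Python function below is an illustration of that criterion (hypothetical names, not code from the package), scanning each observed score value as a candidate cut-off:

```python
# Illustrative sketch of the Youden-index criterion used by SLModels
# (hypothetical helper, not part of the package). Labels are 0/1 and
# higher scores are assumed to indicate the positive class.

def youden_best_cutoff(scores, labels):
    """Return (cutoff, J) maximising J = sensitivity + specificity - 1."""
    positives = sum(labels)
    negatives = len(labels) - positives
    best_cut, best_j = None, float("-inf")
    for cut in sorted(set(scores)):  # each observed score is a candidate cut-off
        tp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < cut and y == 0)
        j = tp / positives + tn / negatives - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

cut, j = youden_best_cutoff([0.1, 0.4, 0.35, 0.8, 0.7, 0.2], [0, 0, 1, 1, 1, 0])
print(cut, round(j, 4))  # prints: 0.35 0.6667
```

The stepwise algorithm applies this criterion repeatedly as variables are added, while the min–max variants first reduce each observation to its minimum, maximum and (optionally) median or IQR before optimising the combination.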
class dibi

Static container class for Dibi connections.

Constants: AFFECTED_ROWS, IDENTIFIER, VERSION, ASC, DESC

Static properties: $sql, $elapsedTime, $totalTime, $numOfQueries

```
static Connection connect(array $config = [], string $name = '0')
static bool isConnected()
static Connection getConnection(string|null $name = null)
static Connection setConnection(Connection $connection)
static __callStatic(string $name, array $args)
static DateTimeInterface stripMicroseconds(DateTimeInterface $dt)
static void disconnect()
static Result query(mixed $args)
static Result nativeQuery(mixed $args)
static bool test(mixed $args)
static DataSource dataSource(mixed $args)
static Row|null fetch(mixed $args)
static array fetchAll(mixed $args)
static mixed fetchSingle(mixed $args)
static array fetchPairs(mixed $args)
static int getAffectedRows()
static int getInsertId(string $sequence = null)
static void begin(string $savepoint = null)
static void rollback(string $savepoint = null)
static Database getDatabaseInfo()
static Fluent select(mixed $args)
static Fluent update(string|string[] $table, array $args)
static Fluent insert(string $table, array $args)
static Fluent delete(string $table)
static HashMap getSubstitutes()
static int loadFile(string $file)
```

class DibiExtension22 extends CompilerExtension

Dibi extension for Nette Framework 2.2. Creates 'connection' & 'panel' services.

```
loadConfiguration()
```

class Panel implements IBarPanel

Static properties: $maxLength
Properties: $explain, $filter

```
void register(Connection $connection)
static array|null renderException(Throwable|null $e)
string getTab()
string|null getPanel()
```

class DummyDriver implements Driver, ResultDriver, Reflector

The dummy driver for testing purposes.

```
array|null fetch(bool $assoc)
```

class FirebirdDriver implements Driver

Connection options:

* database => the path to database file (server:/path/database.fdb)
* username (or user)
* password (or pass)
* charset => character encoding to set
* buffers (int) => the number of database buffers to allocate for the server-side cache; if 0 or omitted, the server chooses its own default
* resource (resource) => existing connection resource

Constants: ERROR_EXCEPTION_THROWN

```
FirebirdResult createResultDriver(resource $resource)
```

class FirebirdReflector implements Reflector

```
__construct(Driver $driver)
array getIndices(string $table)
array getConstraints(string $table)
array getTriggersMeta(string|null $table = null)
array getTriggers(string|null $table = null)
array getProceduresMeta()
array getProcedures()
array getGenerators()
array getFunctions()
```

class MySqliDriver implements Driver

Connection options:

* host => the MySQL server host name
* port (int) => the port number to attempt to connect to the MySQL server
* socket => the socket or named pipe
* username (or user)
* password (or pass)
* database => the database name to select
* options (array) => array of driver specific constants (MYSQLI_*) and values
* flags (int) => driver specific constants (MYSQLI_CLIENT_*)
* charset => character encoding to set (default is utf8)
* persistent (bool) => try to find a persistent link?
* unbuffered (bool) => sends query without fetching and buffering the result rows automatically?
* sqlmode => see http://dev.mysql.com/doc/refman/5.0/en/server-sql-mode.html
* resource (mysqli) => existing connection resource

Constants: ERROR_ACCESS_DENIED, ERROR_DUPLICATE_ENTRY, ERROR_DATA_TRUNCATED

```
array getInfo()
MySqliResult createResultDriver(mysqli_result $result)
```

class OdbcDriver implements Driver

Connection options:

* dsn => driver specific DSN
* username (or user)
* password (or pass)
* persistent (bool) => try to find a persistent link?
* resource (resource) => existing connection resource
* microseconds (bool) => use microseconds in datetime format?

```
OdbcResult createResultDriver(resource $resource)
```

class OracleDriver implements Driver

Connection options:

* database => the name of the local Oracle instance or the name of the entry in tnsnames.ora
* username (or user)
* password (or pass)
* charset => character encoding to set
* schema => alters session schema
* nativeDate => use native date format (defaults to true)
* resource (resource) => existing connection resource
* persistent => creates persistent connections with oci_pconnect instead of oci_new_connect

```
OracleResult createResultDriver(resource $resource)
```

class PdoDriver implements Driver

Connection options:

* dsn => driver specific DSN
* username (or user)
* password (or pass)
* options (array) => driver specific options
* resource (PDO) => existing connection
* version

```
PdoResult createResultDriver(PDOStatement $result)
```

class PostgreDriver implements Driver

Connection options:

* host, hostaddr, port, dbname, user, password, connect_timeout, options, sslmode, service => see PostgreSQL API
* string => or use connection string
* schema => the schema search path
* charset => character encoding to set (default is utf8)
* persistent (bool) => try to find a persistent link?
* resource (resource) => existing connection resource
* connect_type (int) => see pg_connect()

```
static DriverException createException(string $message, $code = null, string|null $sql = null)
PostgreResult createResultDriver(resource $resource)
```

class PostgreReflector implements Reflector

```
__construct(Driver $driver, string $version)
```

class SqliteDriver implements Driver

Connection options:

* database (or file) => the filename of the SQLite3 database
* formatDate => how to format date in SQL (date)
* formatDateTime => how to format datetime in SQL (date)
* resource (SQLite3) => existing connection resource

```
SqliteResult createResultDriver(SQLite3Result $result)
void registerFunction(string $name, callable $callback, int $numArgs = -1)
void registerAggregateFunction(string $name, callable $rowCallback, callable $agrCallback, int $numArgs = -1)
```

class SqlsrvDriver implements Driver

Connection options:

* host => the MS SQL server host name; it can also include a port number (hostname:port)
* username (or user)
* password (or pass)
* database => the database name to select
* options (array) => connection options https://msdn.microsoft.com/en-us/library/cc296161(SQL.90).aspx
* charset => character encoding to set (default is UTF-8)
* resource (resource) => existing connection resource

```
SqlsrvResult createResultDriver(resource $resource)
```

class FileLogger

Properties: $file, $filter

class Column

Reflection metadata class for a table or result set column.

Read-only properties: string $name, string $fullName, Table $table, string $type, mixed $nativeType, int|null $size, bool $nullable, bool $autoIncrement, mixed $default

```
string getFullName()
bool hasTable()
Table getTable()
string|null getTableName()
string|null getType()
string getNativeType()
int|null getSize()
bool isNullable()
bool isAutoIncrement()
mixed getDefault()
mixed getVendorInfo(string $key)
```

class Database

Reflection metadata class for a database.

Read-only properties: string $name, array $tables, array $tableNames

```
string|null getName()
array getTableNames()
Table getTable(string $name)
bool hasTable(string $name)
protected void init()
```

class ForeignKey

Read-only properties: string $name, array $references

```
__construct(string $name, array $references)
array getReferences()
```

class Index

Read-only properties: string $name, array $columns, bool $unique, bool $primary

```
__construct(array $info)
bool isUnique()
bool isPrimary()
```

class Result

Read-only properties: array $columns, array $columnNames

```
__construct(ResultDriver $driver)
array getColumnNames(bool $fullNames = false)
```

class Table

Read-only properties: string $name, bool $view, array $columns, array $columnNames, array $foreignKeys, array $indexes, Index $primaryKey

```
__construct(Reflector $reflector, array $info)
bool isView()
array getColumnNames()
array getForeignKeys()
array getIndexes()
Index getPrimaryKey()
protected void initIndexes()
protected void initForeignKeys()
```

class Connection implements IConnection

Dibi connection.

Properties: $onEvent; read-only: int $affectedRows, int $insertId
```
__construct(array $config, string|null $name = null)
final void connect()
final void disconnect()
final bool isConnected()
final mixed getConfig(string|null $key = null, $default = null)
final Driver getDriver()
final Result query(...$args)
final string translate(mixed ...$args)
final bool test(mixed ...$args)
final DataSource dataSource(mixed ...$args)
final Result nativeQuery(string $sql)
Result createResultSet(ResultDriver $resultDriver)
Fluent select(...$args)
Fluent update(string|string[] $table, iterable $args)
Fluent insert(string $table, iterable $args)
Fluent delete(string $table)
HashMap getSubstitutes()
string substitute(string $value)
void setObjectTranslator(callable $translator)
Expression|null translateObject(object $object)
Row|null fetch(mixed ...$args)
array fetchAll(mixed ...$args)
mixed fetchSingle(mixed ...$args)
array fetchPairs(mixed ...$args)
static Literal literal(string $value)
static Expression expression(...$args)
int loadFile(string $file, callable|null $onProgress = null)
Database getDatabaseInfo()
__wakeup()   // prevents unserialization
__sleep()    // prevents serialization
```
```
protected void onEvent($arg)
```

class DataSource implements IDataSource

```
__construct(string $sql, Connection $connection)
DataSource select(string|array $col, string|null $as = null)
DataSource where($cond)
DataSource orderBy(string|array $row, string $direction = 'ASC')
DataSource applyLimit(int $limit, int|null $offset = null)
Result getResult()        // returns (and queries) Result
ResultIterator getIterator()
Row|null fetch()
array fetchAll()
void release()
Fluent toFluent()         // returns this data source wrapped in a Fluent object
int getTotalCount()
```

class DateTime extends DateTimeImmutable

DateTime.

interface Driver

class Event

Constants: CONNECT, SELECT, INSERT, DELETE, UPDATE, QUERY, BEGIN, COMMIT, ROLLBACK, TRANSACTION, ALL

Properties: $connection, $type, $sql, $result, $time, $count, $source

```
Event done(DriverException|null $result = null)
final string|null getSql()
```

class Expression

```
__construct(...$values)
array getValues()
```

class Fluent implements IDataSource

Constants: REMOVE

Static properties: $masks, $modifiers, $separators, $clauseSwitches (clauses)

```
Fluent __call(string $clause, array $args)        // appends new argument to the clause
Fluent clause(string $clause)                     // switch to a clause
Fluent removeClause(string $clause)
Fluent setFlag(string $flag, bool $value = true)  // change a SQL flag
```
```
final bool getFlag(string $flag)
final string|null getCommand()
Fluent setupResult(string $method)
Result|int|null execute(string|null $return = null)   // returns result set or number of affected rows
Row|array|null fetch()
ResultIterator getIterator(int|null $offset = null, int|null $limit = null)
bool test(string|null $clause = null)
final string __toString()
protected array _export(string|null $clause = null, array $args = [])
static string _formatClause(string $s)
__clone()
Fluent select(mixed $field)
Fluent distinct()
Fluent from(mixed $table, mixed $args)
Fluent where(mixed $cond)
Fluent groupBy(mixed $field)
Fluent having(mixed $cond)
Fluent orderBy(mixed $field)
Fluent limit(int $limit)
Fluent offset(int $offset)
Fluent join(mixed $table)
Fluent leftJoin(mixed $table)
Fluent innerJoin(mixed $table)
Fluent rightJoin(mixed $table)
Fluent outerJoin(mixed $table)
Fluent union(Fluent $fluent)
Fluent unionAll(Fluent $fluent)
Fluent as(mixed $field)
Fluent on(mixed $cond)
Fluent and(mixed $cond)
Fluent or(mixed $cond)
Fluent using(mixed $cond)
Fluent update(mixed $cond)
Fluent insert(mixed $cond)
Fluent delete(mixed $cond)
Fluent into(mixed $cond)
Fluent values(mixed $cond)
Fluent set(mixed $args)
Fluent asc()
Fluent desc()
```

final class HashMap extends HashMapBase

```
__set(string $nm, $val)
__get(string $nm)
```

abstract class HashMapBase

```
__construct(callable $callback)
void setCallback(callable $callback)
callable getCallback()
```

class Helpers

```
static dump(Result|null $sql = null, bool $return = false)   // prints out a syntax highlighted version of the SQL command or Result
static string|null getSuggestion(array $items, string $value)
static string escape(Driver $driver, $value, string $type)
static string|null detectType(string $type)                  // heuristic type detection
static HashMap getTypeCache()
static void alias(array $config, string $key, string $alias) // apply configuration alias or default values
static int loadFromFile(Connection $connection, string $file, callable|null $onProgress = null)  // returns count of SQL commands
static mixed false2Null(mixed $val)
static int intVal(mixed $value)
```

interface IConnection

```
void connect()
bool isConnected()
Driver getDriver()
Result query(...$args)
```

class Literal

```
__construct($value)
```

PCRE exception.

```
__construct()
```

Protected properties: $severity

```
string getSeverity()
```

interface Reflector

class Result implements IDataSource

Read-only properties: int $rowCount

```
__construct(ResultDriver $driver, bool $normalize = true)
final void free()
final ResultDriver getResultDriver()
final bool seek(int $row)
final ResultIterator getIterator()
final int getColumnCount()
Result setRowClass(string|null $class)
string|null getRowClass()
Result setRowFactory(callable $callback)
final mixed fetch()        // fetches the row and moves the internal cursor to the next position
final mixed fetchSingle()
final Result setType(string $column, string|null $type)    // define column type
final string|null getType(string $column)
final array getTypes()
final Result setFormat(string $type, string|null $format)  // sets type format
final Result setFormats(array $formats)
final string|null getFormat(string $type)
Result getInfo()           // returns meta information about the current result set
```

Examples of associative descriptors:

* col1[]col2->col3 builds a tree: $tree[$val1][$index][$val2]->col3[$val3] = {record}
* col1|col2->col3=col4 builds a tree: $tree[$val1][$val2]->col3[$val3] = val4
```
final array getColumns()
final void dump()          // displays complete result set as HTML or text table for debug purposes
```

interface ResultDriver

```
getRowCount()
seek(int $row)
array|null fetch(bool $type)
free()
getResultColumns()
getResultResource()
unescapeBinary(string $value)
```

class ResultIterator implements Iterator, Countable

```
__construct(Result $result)
void rewind()
mixed key()
mixed current()
void next()
bool valid()
```

class Row implements ArrayAccess, IteratorAggregate, Countable

```
__construct(array $arr)
array toArray()
DateTime|string|null __get(string $key)
bool __isset(string $key)
final ArrayIterator getIterator()
final void offsetSet($nm, $val)
final mixed offsetGet($nm)
final bool offsetExists($nm)
final void offsetUnset($nm)
```

final class Translator

```
__construct(Connection $connection)
string translate(array $args)
string formatValue(mixed $value, string|null $modifier)
string delimite(string $value)
```

class Type

Constants: TEXT, BINARY, JSON, BOOL, INTEGER, FLOAT, DATE, DATETIME, TIME, TIME_INTERVAL

class DriverException

Database server exception.
* SqliteDriver::escapeBinary() — Method in class SqliteDriver * SqliteDriver::escapeIdentifier() — Method in class SqliteDriver * SqliteDriver::escapeBool() — Method in class SqliteDriver * SqliteDriver::escapeDate() — Method in class SqliteDriver * SqliteDriver::escapeDateTime() — Method in class SqliteDriver * SqliteDriver::escapeDateInterval() — Method in class SqliteDriver * SqliteDriver::escapeLike() — Method in class SqliteDriver * Encodes string for use in a LIKE statement. * SqlsrvDriver::escapeText() — Method in class SqlsrvDriver * Encodes data for use in a SQL statement. * SqlsrvDriver::escapeBinary() — Method in class SqlsrvDriver * SqlsrvDriver::escapeIdentifier() — Method in class SqlsrvDriver * SqlsrvDriver::escapeBool() — Method in class SqlsrvDriver * SqlsrvDriver::escapeDate() — Method in class SqlsrvDriver * SqlsrvDriver::escapeDateTime() — Method in class SqlsrvDriver * SqlsrvDriver::escapeDateInterval() — Method in class SqlsrvDriver * SqlsrvDriver::escapeLike() — Method in class SqlsrvDriver * Encodes string for use in a LIKE statement. * Event — Class in namespace Dibi * Profiler & logger event. * Exception — Class in namespace Dibi * Dibi common exception. * Expression — Class in namespace Dibi * SQL expression. * Fluent::execute() — Method in class Fluent * Generates and executes SQL query. * Helpers::escape() — Method in class Helpers * $ dibi#elapsedTime — Property in class dibi * Elapsed time for last query ## F * Connection::fetch() — Method in class Connection * Executes SQL query and fetch result - shortcut for query() & fetch(). * Connection::fetchAll() — Method in class Connection * Connection::fetchPairs() — Method in class Connection * Executes SQL query and fetch pairs - shortcut for query() & fetchPairs(). * DataSource::fetch() — Method in class DataSource * Generates, executes SQL query and fetches the single row. * DataSource::fetchSingle() — Method in class DataSource * Like fetch(), but returns only first field. * DataSource::fetchAll() — Method in class DataSource * Fetches all records from table.
* DataSource::fetchAssoc() — Method in class DataSource * Fetches all records from table and returns associative tree. * DataSource::fetchPairs() — Method in class DataSource * Fetches all records from table like $key => $value pairs. * DummyDriver::fetch() — Method in class DummyDriver * Fetches the row at current position, process optional type conversion. * DummyDriver::free() — Method in class DummyDriver * Frees the resources allocated for this result set. * FirebirdDriver — Class in namespace Dibi\Drivers * The driver for Firebird/InterBase database. * FirebirdReflector — Class in namespace Dibi\Drivers * The reflector for Firebird/InterBase database. * FirebirdResult — Class in namespace Dibi\Drivers * The driver for Firebird/InterBase result set. * FirebirdResult::fetch() — Method in class FirebirdResult * Fetches the row at current position, process optional type conversion. * FirebirdResult::free() — Method in class FirebirdResult * Frees the resources allocated for this result set. * MySqliResult::fetch() — Method in class MySqliResult * Fetches the row at current position, process optional type conversion. * MySqliResult::free() — Method in class MySqliResult * Frees the resources allocated for this result set. * NoDataResult::fetch() — Method in class NoDataResult * Fetches the row at current position, process optional type conversion. * NoDataResult::free() — Method in class NoDataResult * Frees the resources allocated for this result set. * OdbcResult::fetch() — Method in class OdbcResult * Fetches the row at current position, process optional type conversion. * OdbcResult::free() — Method in class OdbcResult * Frees the resources allocated for this result set. * OracleResult::fetch() — Method in class OracleResult * Fetches the row at current position, process optional type conversion. * OracleResult::free() — Method in class OracleResult * Frees the resources allocated for this result set. * PdoResult::fetch() — Method in class PdoResult * Fetches the row at current position, process optional type conversion. * PdoResult::free() — Method in class PdoResult * Frees the resources allocated for this result set. * PostgreResult::fetch() — Method in class PostgreResult * Fetches the row at current position, process optional type conversion. * PostgreResult::free() — Method in class PostgreResult * Frees the resources allocated for this result set. * SqlsrvResult::fetch() — Method in class SqlsrvResult * Fetches the row at current position, process optional type conversion. * SqlsrvResult::free() — Method in class SqlsrvResult * Frees the resources allocated for this result set. * Fluent — Class in namespace Dibi * SQL builder via fluent interfaces. * Fluent::fetch() — Method in class Fluent * Generates, executes SQL query and fetches the single row. * Fluent::fetchSingle() — Method in class Fluent * Like fetch(), but returns only first field. * Fluent::fetchAll() — Method in class Fluent * Fetches all records from table. * Fluent::fetchAssoc() — Method in class Fluent * Fetches all records from table and returns associative tree.
* Fluent::fetchPairs() — Method in class Fluent * Fetches all records from table like $key => $value pairs. * Fluent::from() — Method in class Fluent * ForeignKeyConstraintViolationException — Class in namespace Dibi * Exception for a foreign key constraint violation. * Helpers::false2Null() — Method in class Helpers * FileLogger — Class in namespace Dibi\Loggers * Dibi file logger. * $ FileLogger#file — Property in class FileLogger * Name of the file where SQL errors should be logged * $ FileLogger#filter — Property in class FileLogger * $ Column#fullName — Property in class Column * ForeignKey — Class in namespace Dibi\Reflection * Reflection metadata class for a foreign key. * $ Table#foreignKeys — Property in class Table * Result::free() — Method in class Result * Frees the resources allocated for this result set. * Result::fetch() — Method in class Result * Fetches the row at current position, process optional type conversion. * Result::fetchSingle() — Method in class Result * Like fetch(), but returns only first field. * Result::fetchAll() — Method in class Result * Fetches all records from table. * Result::fetchAssoc() — Method in class Result * Fetches all records from table and returns associative tree. * Result::fetchPairs() — Method in class Result * Fetches all records from table like $key => $value pairs. * ResultDriver::fetch() — Method in class ResultDriver * Fetches the row at current position, process optional type conversion. * ResultDriver::free() — Method in class ResultDriver * Frees the resources allocated for this result set. * Translator::formatValue() — Method in class Translator * Apply modifier to single value. * dibi::fetch() — Method in class dibi * dibi::fetchAll() — Method in class dibi * dibi::fetchSingle() — Method in class dibi * dibi::fetchPairs() — Method in class dibi ## G * Panel::getTab() — Method in class Panel * Returns HTML code for custom tab. (Tracy\IBarPanel) * Panel::getPanel() — Method in class Panel * Returns HTML code for custom panel. (Tracy\IBarPanel) * Connection::getConfig() — Method in class Connection * Returns configuration variable. If no $key is passed, returns the entire array.
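The fetch family indexed above differs only in the shape of the returned data: fetch() yields one row, fetchSingle() the first field, fetchAll() an array of rows, and fetchPairs() a key => value map. A hedged sketch of the static shortcuts (table and column names are illustrative):

```php
<?php
// Assumes an active dibi connection and a `users` table; names are illustrative.
$row   = dibi::fetch('SELECT * FROM users WHERE id = %i', 1);          // single row
$name  = dibi::fetchSingle('SELECT name FROM users WHERE id = %i', 1); // first field only
$rows  = dibi::fetchAll('SELECT * FROM users');                        // array of rows
$pairs = dibi::fetchPairs('SELECT id, name FROM users');               // id => name map
```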
* Connection::getDriver() — Method in class Connection * Returns the driver and connects to a database in lazy mode. * Connection::getAffectedRows() — Method in class Connection * Gets the number of affected rows by the last INSERT, UPDATE or DELETE query. * Connection::getSubstitutes() — Method in class Connection * Returns substitution hashmap. * Connection::getDatabaseInfo() — Method in class Connection * Gets information about the current database. * DataSource::getConnection() — Method in class DataSource * DataSource::getResult() — Method in class DataSource * Returns (and queries) Result. * DataSource::getIterator() — Method in class DataSource * DataSource::getTotalCount() — Method in class DataSource * Driver::getResource() — Method in class Driver * Returns the connection resource. * Driver::getReflector() — Method in class Driver * Returns the connection reflector. * DummyDriver::getResource() — Method in class DummyDriver * Returns the connection resource. * DummyDriver::getReflector() — Method in class DummyDriver * Returns the connection reflector. * DummyDriver::getRowCount() — Method in class DummyDriver * Returns the number of rows in a result set. * DummyDriver::getResultResource() — Method in class DummyDriver * Returns the result set resource. * DummyDriver::getResultColumns() — Method in class DummyDriver * Returns metadata for all columns in a result set. * DummyDriver::getTables() — Method in class DummyDriver * Returns list of tables. * DummyDriver::getColumns() — Method in class DummyDriver * Returns metadata for all columns in a table. * DummyDriver::getIndexes() — Method in class DummyDriver * Returns metadata for all indexes in a table. * DummyDriver::getForeignKeys() — Method in class DummyDriver * Returns metadata for all foreign keys in a table. * FirebirdDriver::getAffectedRows() — Method in class FirebirdDriver * Gets the number of affected rows by the last INSERT, UPDATE or DELETE query. * FirebirdDriver::getResource() — Method in class FirebirdDriver * Returns the connection resource. * FirebirdDriver::getReflector() — Method in class FirebirdDriver * Returns the connection reflector. * FirebirdReflector::getTables() — Method in class FirebirdReflector * Returns list of tables. * FirebirdReflector::getColumns() — Method in class FirebirdReflector * Returns metadata for all columns in a table. * FirebirdReflector::getIndexes() — Method in class FirebirdReflector * Returns metadata for all indexes in a table (the constraints are included). * FirebirdReflector::getForeignKeys() — Method in class FirebirdReflector * Returns metadata for all foreign keys in a table. * FirebirdReflector::getIndices() — Method in class FirebirdReflector * Returns list of indices in given table (the constraints are not listed). * FirebirdReflector::getConstraints() — Method in class FirebirdReflector * Returns list of constraints in given table.
* FirebirdReflector::getTriggersMeta() — Method in class FirebirdReflector * Returns metadata for all triggers in a table or database. * FirebirdReflector::getTriggers() — Method in class FirebirdReflector * Returns list of triggers for given table. * FirebirdReflector::getProceduresMeta() — Method in class FirebirdReflector * Returns metadata from stored procedures and their input and output parameters. * FirebirdReflector::getProcedures() — Method in class FirebirdReflector * Returns list of stored procedures. * FirebirdReflector::getGenerators() — Method in class FirebirdReflector * Returns list of generators. * FirebirdReflector::getFunctions() — Method in class FirebirdReflector * Returns list of user defined functions (UDF). * FirebirdResult::getRowCount() — Method in class FirebirdResult * Returns the number of rows in a result set. * FirebirdResult::getResultResource() — Method in class FirebirdResult * Returns the result set resource. * FirebirdResult::getResultColumns() — Method in class FirebirdResult * Returns metadata for all columns in a result set. * MySqlReflector::getTables() — Method in class MySqlReflector * Returns list of tables. * MySqlReflector::getColumns() — Method in class MySqlReflector * Returns metadata for all columns in a table. * MySqliDriver::getInfo() — Method in class MySqliDriver * Retrieves information about the most recently executed query. * MySqliDriver::getAffectedRows() — Method in class MySqliDriver * Gets the number of affected rows by the last INSERT, UPDATE or DELETE query. * MySqliDriver::getReflector() — Method in class MySqliDriver * Returns the connection reflector. * MySqliResult::getRowCount() — Method in class MySqliResult * Returns the number of rows in a result set. * MySqliResult::getResultColumns() — Method in class MySqliResult * Returns metadata for all columns in a result set. * MySqliResult::getResultResource() — Method in class MySqliResult * Returns the result set resource. * NoDataResult::getRowCount() — Method in class NoDataResult * Returns the number of affected rows. * NoDataResult::getResultColumns() — Method in class NoDataResult * Returns metadata for all columns in a result set. * NoDataResult::getResultResource() — Method in class NoDataResult * Returns the result set resource. * OdbcDriver::getAffectedRows() — Method in class OdbcDriver * Gets the number of affected rows by the last INSERT, UPDATE or DELETE query. * OdbcDriver::getResource() — Method in class OdbcDriver * Returns the connection resource. * OdbcDriver::getReflector() — Method in class OdbcDriver * Returns the connection reflector.
* OdbcReflector::getTables() — Method in class OdbcReflector * Returns list of tables. * OdbcReflector::getColumns() — Method in class OdbcReflector * Returns metadata for all columns in a table. * OdbcReflector::getIndexes() — Method in class OdbcReflector * Returns metadata for all indexes in a table. * OdbcResult::getRowCount() — Method in class OdbcResult * Returns the number of rows in a result set. * OdbcResult::getResultColumns() — Method in class OdbcResult * Returns metadata for all columns in a result set. * OracleDriver::getAffectedRows() — Method in class OracleDriver * Gets the number of affected rows by the last INSERT, UPDATE or DELETE query. * OracleDriver::getInsertId() — Method in class OracleDriver * Retrieves the ID generated for an AUTO_INCREMENT column by the previous INSERT query. * OracleDriver::getReflector() — Method in class OracleDriver * Returns the connection reflector. * OracleReflector::getTables() — Method in class OracleReflector * Returns list of tables. * OracleReflector::getColumns() — Method in class OracleReflector * Returns metadata for all columns in a table. * OracleReflector::getIndexes() — Method in class OracleReflector * Returns metadata for all indexes in a table. * OracleReflector::getForeignKeys() — Method in class OracleReflector * Returns metadata for all foreign keys in a table. * OracleResult::getRowCount() — Method in class OracleResult * Returns the number of rows in a result set. * OracleResult::getResultColumns() — Method in class OracleResult * Returns metadata for all columns in a result set. * OracleResult::getResultResource() — Method in class OracleResult * Returns the result set resource. * PdoDriver::getInsertId() — Method in class PdoDriver * Retrieves the ID generated for an AUTO_INCREMENT column by the previous INSERT query. * PdoDriver::getResource() — Method in class PdoDriver * Returns the connection resource. * PdoDriver::getReflector() — Method in class PdoDriver * Returns the connection reflector. * PdoResult::getRowCount() — Method in class PdoResult * Returns the number of rows in a result set. * PdoResult::getResultColumns() — Method in class PdoResult * Returns metadata for all columns in a result set. * PdoResult::getResultResource() — Method in class PdoResult * Returns the result set resource. * PostgreDriver::getAffectedRows() — Method in class PostgreDriver * Gets the number of affected rows by the last INSERT, UPDATE or DELETE query. * PostgreDriver::getInsertId() — Method in class PostgreDriver * Retrieves the ID generated for an AUTO_INCREMENT column by the previous INSERT query. * PostgreDriver::getResource() — Method in class PostgreDriver * Returns the connection resource.
* PostgreDriver::getReflector() — Method in class PostgreDriver * Returns the connection reflector. * PostgreReflector::getTables() — Method in class PostgreReflector * Returns list of tables. * PostgreReflector::getColumns() — Method in class PostgreReflector * Returns metadata for all columns in a table. * PostgreReflector::getIndexes() — Method in class PostgreReflector * Returns metadata for all indexes in a table. * PostgreReflector::getForeignKeys() — Method in class PostgreReflector * Returns metadata for all foreign keys in a table. * PostgreResult::getResultResource() — Method in class PostgreResult * Returns the result set resource. * SqliteDriver::getAffectedRows() — Method in class SqliteDriver * Gets the number of affected rows by the last INSERT, UPDATE or DELETE query. * SqliteReflector::getTables() — Method in class SqliteReflector * Returns list of tables. * SqliteReflector::getColumns() — Method in class SqliteReflector * Returns metadata for all columns in a table. * SqliteResult::getRowCount() — Method in class SqliteResult * Returns the number of rows in a result set. * SqliteResult::getResultColumns() — Method in class SqliteResult * Returns metadata for all columns in a result set. * SqliteResult::getResultResource() — Method in class SqliteResult * Returns the result set resource. * SqlsrvDriver::getAffectedRows() — Method in class SqlsrvDriver * Gets the number of affected rows by the last INSERT, UPDATE or DELETE query. * SqlsrvDriver::getResource() — Method in class SqlsrvDriver * Returns the connection resource. * SqlsrvDriver::getReflector() — Method in class SqlsrvDriver * Returns the connection reflector. * SqlsrvReflector::getTables() — Method in class SqlsrvReflector * Returns list of tables. * SqlsrvReflector::getColumns() — Method in class SqlsrvReflector * Returns metadata for all columns in a table. * SqlsrvReflector::getIndexes() — Method in class SqlsrvReflector * Returns metadata for all indexes in a table. * SqlsrvReflector::getForeignKeys() — Method in class SqlsrvReflector * Returns metadata for all foreign keys in a table. * SqlsrvResult::getRowCount() — Method in class SqlsrvResult * Returns the number of rows in a result set. * SqlsrvResult::getResultColumns() — Method in class SqlsrvResult * Returns metadata for all columns in a result set. * SqlsrvResult::getResultResource() — Method in class SqlsrvResult * Returns the result set resource.
* Exception::getSql() — Method in class Exception * Expression::getValues() — Method in class Expression * Fluent::getFlag() — Method in class Fluent * Is a flag set? * Fluent::getCommand() — Method in class Fluent * Returns SQL command. * Fluent::getConnection() — Method in class Fluent * Fluent::getIterator() — Method in class Fluent * Required by the IteratorAggregate interface. * Fluent::groupBy() — Method in class Fluent * HashMapBase::getCallback() — Method in class HashMapBase * Helpers::getSuggestion() — Method in class Helpers * Finds the best suggestion. * Helpers::getTypeCache() — Method in class Helpers * IConnection::getDriver() — Method in class IConnection * Returns the driver and connects to a database in lazy mode. * IConnection::getAffectedRows() — Method in class IConnection * Gets the number of affected rows by the last INSERT, UPDATE or DELETE query. * ProcedureException::getSeverity() — Method in class ProcedureException * Gets the exception severity. * Column::getName() — Method in class Column * Column::getFullName() — Method in class Column * Column::getTable() — Method in class Column * Column::getTableName() — Method in class Column * Column::getType() — Method in class Column * Column::getNativeType() — Method in class Column * Column::getSize() — Method in class Column * Column::getDefault() — Method in class Column * Column::getVendorInfo() — Method in class Column * Database::getName() — Method in class Database * Database::getTables() — Method in class Database * Database::getTableNames() — Method in class Database * Database::getTable() — Method in class Database * ForeignKey::getName() — Method in class ForeignKey * ForeignKey::getReferences() — Method in class ForeignKey * Index::getName() — Method in class Index * Index::getColumns() — Method in class Index * Result::getColumns() — Method in class Result * Result::getColumnNames() — Method in class Result * Result::getColumn() — Method in class Result * Table::getName() — Method in class Table * Table::getColumns() — Method in class Table * Table::getColumnNames() — Method in class Table * Table::getColumn() — Method in class
Table * Table::getForeignKeys() — Method in class Table * Table::getIndexes() — Method in class Table * Table::getPrimaryKey() — Method in class Table * Reflector::getTables() — Method in class Reflector * Returns list of tables. * Reflector::getColumns() — Method in class Reflector * Returns metadata for all columns in a table. * Reflector::getIndexes() — Method in class Reflector * Returns metadata for all indexes in a table. * Reflector::getForeignKeys() — Method in class Reflector * Returns metadata for all foreign keys in a table. * Result::getResultDriver() — Method in class Result * Safe access to property $driver. * Result::getRowCount() — Method in class Result * Returns the number of rows in a result set. * Result::getIterator() — Method in class Result * Required by the IteratorAggregate interface. * Result::getColumnCount() — Method in class Result * Returns the number of columns in a result set. * Result::getRowClass() — Method in class Result * Returns fetched object class name. * Result::getType() — Method in class Result * Returns column type. * Result::getTypes() — Method in class Result * Returns column types. * Result::getFormat() — Method in class Result * Returns data format. * Result::getInfo() — Method in class Result * Returns meta information about the current result set. * Result::getColumns() — Method in class Result * ResultDriver::getRowCount() — Method in class ResultDriver * Returns the number of rows in a result set. * dibi::getConnection() — Method in class dibi * Retrieve active connection. * dibi::getAffectedRows() — Method in class dibi * dibi::getInsertId() — Method in class dibi * dibi::getDatabaseInfo() — Method in class dibi * dibi::getSubstitutes() — Method in class dibi ## H * Fluent::having() — Method in class Fluent * HashMap — Class in namespace Dibi * Lazy cached storage. * HashMapBase — Class in namespace Dibi * Lazy cached storage. * Helpers — Class in namespace Dibi * Column::hasTable() — Method in class Column * Database::hasTable() — Method in class Database * Result::hasColumn() — Method in class Result * Table::hasColumn() — Method in class Table ## I * $ Connection#insertId — Property in class Connection * Connection::isConnected() — Method in class Connection * Returns true when connection was established.
* Connection::insert() — Method in class Connection * FirebirdDriver::inTransaction() — Method in class FirebirdDriver * Is in transaction? * OdbcDriver::inTransaction() — Method in class OdbcDriver * Is in transaction? * PostgreDriver::inTransaction() — Method in class PostgreDriver * Is in transaction? * Fluent::innerJoin() — Method in class Fluent * Fluent::insert() — Method in class Fluent * Fluent::into() — Method in class Fluent * Helpers::intVal() — Method in class Helpers * IConnection — Class in namespace Dibi * Dibi connection. * IConnection::isConnected() — Method in class IConnection * Returns true when connection was established. * IDataSource — Class in namespace Dibi * Provides an interface between a dataset and data-aware components. * Column::isNullable() — Method in class Column * Column::isAutoIncrement() — Method in class Column * Database::init() — Method in class Database * Index — Class in namespace Dibi\Reflection * Reflection metadata class for an index or primary key. * Index::isUnique() — Method in class Index * Index::isPrimary() — Method in class Index * Result::initColumns() — Method in class Result * $ Table#indexes — Property in class Table * Table::isView() — Method in class Table * Table::initColumns() — Method in class Table * Table::initIndexes() — Method in class Table * Table::initForeignKeys() — Method in class Table * dibi::isConnected() — Method in class dibi * Returns true when connection was established. * dibi::insert() — Method in class dibi ## J * Fluent::join() — Method in class Fluent ## K * ResultIterator::key() — Method in class ResultIterator ## L * DibiExtension22::loadConfiguration() — Method in class DibiExtension22 * Panel::logEvent() — Method in class Panel * After event notification. * Connection::literal() — Method in class Connection * Connection::loadFile() — Method in class Connection * Import SQL dump from file.
* Fluent::limit() — Method in class Fluent * Fluent::leftJoin() — Method in class Fluent * Helpers::loadFromFile() — Method in class Helpers * Import SQL dump from file. * Literal — Class in namespace Dibi * SQL literal value. * FileLogger::logEvent() — Method in class FileLogger * After event notification. * dibi::loadFile() — Method in class dibi ## M * $ Panel#maxLength — Property in class Panel * MySqlReflector — Class in namespace Dibi\Drivers * The reflector for MySQL databases. * MySqliDriver — Class in namespace Dibi\Drivers * The driver for MySQL database. * MySqliResult — Class in namespace Dibi\Drivers * The driver for MySQL result set. * $ Fluent#masks — Property in class Fluent * $ Fluent#modifiers — Property in class Fluent * default modifiers for arrays ## N * Connection::nativeQuery() — Method in class Connection * Executes the SQL query. * NoDataResult — Class in namespace Dibi\Drivers * The driver for no result set. * NotImplementedException — Class in namespace Dibi * NotNullConstraintViolationException — Class in namespace Dibi * Exception for a NOT NULL constraint violation. * NotSupportedException — Class in namespace Dibi * $ Column#name — Property in class Column * $ Column#nativeType — Property in class Column * $ Column#nullable — Property in class Column * $ Database#name — Property in class Database * $ ForeignKey#name — Property in class ForeignKey * $ Index#name — Property in class Index * $ Table#name — Property in class Table * ResultIterator::next() — Method in class ResultIterator * Moves forward to next element. * $ dibi#numOfQueries — Property in class dibi * Number of queries * dibi::nativeQuery() — Method in class dibi ## O * $ Connection#onEvent — Property in class Connection * function (Event $event); Occurs after query is executed * Connection::onEvent() — Method in class Connection * DataSource::orderBy() — Method in class DataSource * Selects columns to order by. * OdbcDriver — Class in namespace Dibi\Drivers * The driver interacting with databases via ODBC connections.
* OdbcReflector — Class in namespace Dibi\Drivers * The reflector for ODBC connections. * OdbcResult — Class in namespace Dibi\Drivers * The driver interacting with result set via ODBC connections. * OracleDriver — Class in namespace Dibi\Drivers * The driver for Oracle database. * OracleReflector — Class in namespace Dibi\Drivers * The reflector for Oracle database. * OracleResult — Class in namespace Dibi\Drivers * The driver for Oracle result set. * Fluent::orderBy() — Method in class Fluent * Fluent::offset() — Method in class Fluent * Fluent::outerJoin() — Method in class Fluent * Fluent::on() — Method in class Fluent * Fluent::or() — Method in class Fluent * Row::offsetSet() — Method in class Row * Row::offsetGet() — Method in class Row * Row::offsetExists() — Method in class Row * Row::offsetUnset() — Method in class Row ## P * Panel — Class in namespace Dibi\Bridges\Tracy * Dibi panel for Tracy. * MySqliDriver::ping() — Method in class MySqliDriver * Pings a server connection, or tries to reconnect if the connection has gone down. * PdoDriver — Class in namespace Dibi\Drivers * The driver for PDO. * PdoResult — Class in namespace Dibi\Drivers * The driver for PDO result set. * PostgreDriver — Class in namespace Dibi\Drivers * The driver for PostgreSQL database. * PostgreDriver::ping() — Method in class PostgreDriver * Pings database. * PostgreReflector — Class in namespace Dibi\Drivers * The reflector for PostgreSQL database. * PostgreResult — Class in namespace Dibi\Drivers * The driver for PostgreSQL result set. * PcreException — Class in namespace Dibi * PCRE exception. * ProcedureException — Class in namespace Dibi * Database procedure exception. * $ Index#primary — Property in class Index * $ Table#primaryKey — Property in class Table ## Q * Driver::query() — Method in class Driver * Executes the SQL query. * MySqliDriver::query() — Method in class MySqliDriver * Executes the SQL query. * OracleDriver::query() — Method in class OracleDriver * Executes the SQL query. * PdoDriver::query() — Method in class PdoDriver * Executes the SQL query.
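The Fluent methods indexed here (select(), from(), where(), orderBy(), offset(), and friends) build a query clause by clause and only execute it on a terminal call such as fetchAll(). A sketch assuming an active connection; the table, columns, and the %i integer modifier usage are illustrative:

```php
<?php
// Illustrative fluent query built via dibi::select() (see the S section).
$rows = dibi::select('*')
	->from('products')
	->where('price > %i', 100)
	->orderBy('price DESC')
	->limit(10)
	->offset(20)
	->fetchAll();
```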
* PostgreDriver::query() — Method in class PostgreDriver * Executes the SQL query. * SqliteDriver::query() — Method in class SqliteDriver * Executes the SQL query. ## R * Panel::register() — Method in class Panel * Panel::renderException() — Method in class Panel * Returns blue-screen custom tab. * Connection::rollback() — Method in class Connection * Rollback changes in a transaction. * DataSource::release() — Method in class DataSource * Discards the internal cache. * Driver::rollback() — Method in class Driver * Rollback changes in a transaction. * FirebirdDriver::rollback() — Method in class FirebirdDriver * Rollback changes in a transaction. * MySqliDriver::rollback() — Method in class MySqliDriver * Rollback changes in a transaction. * OdbcDriver::rollback() — Method in class OdbcDriver * Rollback changes in a transaction. * OracleDriver::rollback() — Method in class OracleDriver * Rollback changes in a transaction. * PdoDriver::rollback() — Method in class PdoDriver * Rollback changes in a transaction. * PostgreDriver::rollback() — Method in class PostgreDriver * Rollback changes in a transaction. * SqliteDriver::rollback() — Method in class SqliteDriver * Rollback changes in a transaction. * SqliteDriver::registerFunction() — Method in class SqliteDriver * Registers a user defined function for use in SQL statements. * SqliteDriver::registerAggregateFunction() — Method in class SqliteDriver * Registers an aggregating user defined function for use in SQL statements. * SqlsrvDriver::rollback() — Method in class SqlsrvDriver * Rollback changes in a transaction. * $ Event#result — Property in class Event * Fluent::removeClause() — Method in class Fluent * Removes a clause. * Fluent::rightJoin() — Method in class Fluent * IConnection::rollback() — Method in class IConnection * Rollback changes in a transaction. * $ ForeignKey#references — Property in class ForeignKey * Result — Class in namespace Dibi\Reflection * Reflection metadata class for a result set. * Reflector — Class in namespace Dibi * Reflection driver. * Result — Class in namespace Dibi * Query result.
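The rollback() entries above pair with commit() from the C section; a common pattern wraps the statements in try/catch. A sketch assuming an active connection; dibi::begin() is assumed from the B section of the index, which falls outside this chunk, and the table and amounts are illustrative:

```php
<?php
// Manual transaction sketch; undoes both updates on any failure.
dibi::begin();
try {
	dibi::query('UPDATE account SET balance = balance - %i WHERE id = %i', 100, 1);
	dibi::query('UPDATE account SET balance = balance + %i WHERE id = %i', 100, 2);
	dibi::commit();
} catch (Dibi\Exception $e) {
	dibi::rollback();   // dibi::rollback() from the R section above
	throw $e;
}
```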
* $ Result#rowCount — Property in class Result * ResultDriver — Class in namespace Dibi * Result set driver interface. * ResultIterator — Class in namespace Dibi * External result set iterator. * ResultIterator::rewind() — Method in class ResultIterator * Rewinds the iterator to the first element. * Row — Class in namespace Dibi * Result set single row. * dibi::rollback() — Method in class dibi ## S * Connection::select() — Method in class Connection * Connection::substitute() — Method in class Connection * Provides substitution. * Connection::setObjectTranslator() — Method in class Connection * DataSource::select() — Method in class DataSource * Selects columns to query. * DummyDriver::seek() — Method in class DummyDriver * Moves cursor position without fetching row. * FirebirdResult::seek() — Method in class FirebirdResult * Moves cursor position without fetching row. * OracleResult::seek() — Method in class OracleResult * Moves cursor position without fetching row. * PostgreResult::seek() — Method in class PostgreResult * Moves cursor position without fetching row. * Sqlite3Driver — Class in namespace Dibi\Drivers * The driver for SQLite v3 database. * SqliteReflector — Class in namespace Dibi\Drivers * The reflector for SQLite database. * SqliteResult — Class in namespace Dibi\Drivers * The driver for SQLite result set. * SqliteResult::seek() — Method in class SqliteResult * Moves cursor position without fetching row. * SqlsrvDriver — Class in namespace Dibi\Drivers * The driver for Microsoft SQL Server and SQL Azure databases. * SqlsrvReflector — Class in namespace Dibi\Drivers * The reflector for Microsoft SQL Server and SQL Azure databases. * SqlsrvResult — Class in namespace Dibi\Drivers * The driver for Microsoft SQL Server and SQL Azure result set. * SqlsrvResult::seek() — Method in class SqlsrvResult * Moves cursor position without fetching row.
* $ Event#sql — Property in class Event * $ Event#source — Property in class Event * $ Fluent#separators — Property in class Fluent * clauses separators * Fluent::setFlag() — Method in class Fluent * Change a SQL flag. * Fluent::setupResult() — Method in class Fluent * Adds Result setup. * Fluent::select() — Method in class Fluent * Fluent::set() — Method in class Fluent * HashMapBase::setCallback() — Method in class HashMapBase * $ ProcedureException#severity — Property in class ProcedureException * $ Column#size — Property in class Column * Result::seek() — Method in class Result * Moves cursor position without fetching row. * Result::setRowClass() — Method in class Result * Set fetched object class. This class should extend the Row class. * Result::setRowFactory() — Method in class Result * Set a factory to create fetched object instances. These should extend the Row class. * Result::setType() — Method in class Result * Define column type. * Result::setFormat() — Method in class Result * Sets type format. * Result::setFormats() — Method in class Result * Sets type formats. * ResultDriver::seek() — Method in class ResultDriver * Moves cursor position without fetching row. * $ dibi#sql — Property in class dibi * Last SQL command, see dibi::query() * dibi::setConnection() — Method in class dibi * Sets connection. * dibi::stripMicroseconds() — Method in class dibi * Strips microseconds part. * dibi::select() — Method in class dibi ## T * Connection::translate() — Method in class Connection * Generates SQL query. * Connection::test() — Method in class Connection * Generates and prints SQL query. * Connection::transaction() — Method in class Connection * Connection::translateObject() — Method in class Connection * DataSource::toFluent() — Method in class DataSource * Returns this data source wrapped in Fluent object. * DataSource::toDataSource() — Method in class DataSource * Returns this data source wrapped in DataSource object.
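Result::setType() and Result::setFormat() above control per-column type conversion of fetched rows; the Type class from the T section supplies the type constants. A sketch (the query, column name, and date format are illustrative, and the exact setFormat() signature is assumed):

```php
<?php
// Illustrative per-column type handling on a query Result.
$result = dibi::query('SELECT id, created_at FROM log');
$result->setType('created_at', Dibi\Type::DATETIME); // define column type
$result->setFormat(Dibi\Type::DATETIME, 'Y-m-d');    // sets type format (assumed signature)
foreach ($result as $row) {
	// $row->created_at now honors the configured type/format
}
```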
* $ Event#type — Property in class Event * $ Event#time — Property in class Event * Fluent::test() — Method in class Fluent * Generates and prints SQL query or it's part. * Fluent::toDataSource() — Method in class Fluent * $ Column#table — Property in class Column * $ Column#type — Property in class Column * $ Database#tables — Property in class Database * $ Database#tableNames — Property in class Database * Table — Class in namespace Dibi\Reflection * Reflection metadata class for a database table. * Row::toArray() — Method in class Row * Translator — Class in namespace Dibi * SQL translator. * Translator::translate() — Method in class Translator * Generates SQL. Can be called only once. * Type — Class in namespace Dibi * Data types. * $ dibi#totalTime — Property in class dibi * Elapsed time for all queries * dibi::test() — Method in class dibi * dibi::transaction() — Method in class dibi ## U * Connection::update() — Method in class Connection * DummyDriver::unescapeBinary() — Method in class DummyDriver * Decodes data from result set. * FirebirdResult::unescapeBinary() — Method in class FirebirdResult * Decodes data from result set. * NoDataResult::unescapeBinary() — Method in class NoDataResult * Decodes data from result set. * OracleResult::unescapeBinary() — Method in class OracleResult * Decodes data from result set. * PdoResult::unescapeBinary() — Method in class PdoResult * Decodes data from result set. * PostgreResult::unescapeBinary() — Method in class PostgreResult * Decodes data from result set. * SqlsrvResult::unescapeBinary() — Method in class SqlsrvResult * Decodes data from result set. * Fluent::union() — Method in class Fluent * Fluent::unionAll() — Method in class Fluent * Fluent::using() — Method in class Fluent * Fluent::update() — Method in class Fluent * $ Index#unique — Property in class Index * ResultDriver::unescapeBinary() — Method in class ResultDriver * Decodes data from result set. 
* UniqueConstraintViolationException — Class in namespace Dibi * Exception for a unique constraint violation. * dibi::update() — Method in class dibi ## V * Fluent::values() — Method in class Fluent * $ Table#view — Property in class Table * ResultIterator::valid() — Method in class ResultIterator * ## W * DataSource::where() — Method in class DataSource * Adds conditions to query. * Fluent::where() — Method in class Fluent ## _ * DibiExtension22::__construct() — Method in class DibiExtension22 * Panel::__construct() — Method in class Panel * Connection::__construct() — Method in class Connection * Automatically frees the resources allocated for this result set. * Connection::__wakeup() — Method in class Connection * Prevents unserialization. * Connection::__sleep() — Method in class Connection * Prevents serialization. * DataSource::__construct() — Method in class DataSource * DataSource::__toString() — Method in class DataSource * Returns SQL query. * DateTime::__construct() — Method in class DateTime * DateTime::__toString() — Method in class DateTime * FirebirdDriver::__construct() — Method in class FirebirdDriver * FirebirdReflector::__construct() — Method in class FirebirdReflector * FirebirdResult::__construct() — Method in class FirebirdResult * MySqlReflector::__construct() — Method in class MySqlReflector * MySqliDriver::__construct() — Method in class MySqliDriver * MySqliResult::__construct() — Method in class MySqliResult * NoDataResult::__construct() — Method in class NoDataResult * OdbcDriver::__construct() — Method in class OdbcDriver * OdbcReflector::__construct() — Method in class OdbcReflector * OdbcResult::__construct() — Method in class OdbcResult * OracleDriver::__construct() — Method in class OracleDriver * OracleReflector::__construct() — Method in class OracleReflector * OracleResult::__construct() — Method in class OracleResult * PdoDriver::__construct() — Method in class PdoDriver * PdoResult::__construct() — Method in class PdoResult * 
PostgreDriver::__construct() — Method in class PostgreDriver * PostgreReflector::__construct() — Method in class PostgreReflector * PostgreResult::__construct() — Method in class PostgreResult * SqliteDriver::__construct() — Method in class SqliteDriver * SqliteReflector::__construct() — Method in class SqliteReflector * SqliteResult::__construct() — Method in class SqliteResult * SqlsrvDriver::__construct() — Method in class SqlsrvDriver * SqlsrvReflector::__construct() — Method in class SqlsrvReflector * SqlsrvResult::__construct() — Method in class SqlsrvResult * Event::__construct() — Method in class Event * Exception::__construct() — Method in class Exception * Exception::__toString() — Method in class Exception * Expression::__construct() — Method in class Expression * Fluent::__construct() — Method in class Fluent * Fluent::__call() — Method in class Fluent * Appends new argument to the clause. * Fluent::__toString() — Method in class Fluent * Returns SQL query. * Fluent::_export() — Method in class Fluent * Generates parameters for Translator. * Fluent::_formatClause() — Method in class Fluent * Format camelCase clause name to UPPER CASE. * Fluent::__clone() — Method in class Fluent * HashMap::__set() — Method in class HashMap * HashMap::__get() — Method in class HashMap * HashMapBase::__construct() — Method in class HashMapBase * Literal::__construct() — Method in class Literal * Literal::__toString() — Method in class Literal * FileLogger::__construct() — Method in class FileLogger * PcreException::__construct() — Method in class PcreException * ProcedureException::__construct() — Method in class ProcedureException * Construct the exception. 
* Column::__construct() — Method in class Column * Database::__construct() — Method in class Database * ForeignKey::__construct() — Method in class ForeignKey * Index::__construct() — Method in class Index * Result::__construct() — Method in class Result * Table::__construct() — Method in class Table * Result::__construct() — Method in class Result * ResultIterator::__construct() — Method in class ResultIterator * Row::__construct() — Method in class Row * Row::__get() — Method in class Row * Row::__isset() — Method in class Row * Translator::__construct() — Method in class Translator * Type::__construct() — Method in class Type * dibi::__construct() — Method in class dibi * Static class - cannot be instantiated. * dibi::__callStatic() — Method in class dibi *
distlib
readthedoc
Unknown
Distlib Documentation
Release 0.3.6

<NAME>

Aug 26, 2022

Contents

1.1 Distlib evolved out of packaging
1.2 What was the problem with packaging?
1.3 How Distlib can help
1.4 How you can help
1.5 Main features
1.6 Python version and platform compatibility
1.7 Project status
1.8 Change log for distlib
1.8.1 0.3.6 (future)
1.8.2 0.3.5
1.8.3 0.3.4
1.8.4 0.3.3
1.8.5 0.3.2
1.8.6 0.3.1
1.8.7 0.3.0
1.8.8 0.2.9
1.8.9 0.2.8
1.8.10 0.2.7
1.8.11 0.2.6
1.8.12 0.2.5
1.8.13 0.2.4
1.8.14 0.2.3
1.8.15 0.2.2
1.8.16 0.2.1
1.8.17 0.2.0
1.8.18 0.1.9
1.8.19 0.1.8
1.8.20 0.1.7
1.8.21 0.1.6
1.8.22 0.1.5
1.8.23 0.1.4
1.8.24 0.1.3
1.8.25 0.1.2
1.8.26 0.1.1
1.8.27 0.1.0
1.9 Next steps
2.1 Installation
2.2 Testing
2.2.1 PYPI availability
2.3 First steps
2.3.1 Using the database API
2.3.1.1 Distribution paths
2.3.1.2 Querying a path for distributions
2.3.1.3 Including legacy distributions in the search results
2.3.1.4 Distribution properties
2.3.1.5 Exporting things from Distributions
2.3.2 Distribution dependencies
2.3.3 Using the locators API
2.3.3.1 Overview
2.3.3.2 Under the hood
2.3.4 Using the index API
2.3.4.1 Overview
2.3.4.2 Registering a project
2.3.4.3 Uploading a source distribution
2.3.4.4 Uploading binary distributions
2.3.4.5 Signing a distribution
2.3.4.6 Downloading files
2.3.4.7 Verifying signatures
2.3.4.8 Uploading documentation
2.3.4.9 Authentication
2.3.4.10 Verifying HTTPS connections
2.3.4.11 Saving a default configuration
2.3.4.12 Searching PyPI
2.3.5 Using the metadata and markers APIs
2.3.5.1 Instantiating metadata
2.3.5.2 Reading metadata from files and streams
2.3.5.3 Writing metadata to paths and streams
2.3.5.4 Using markers
2.3.6 Using the resource API
2.3.6.1 Access to resources in the file system
2.3.6.2 Access to resources in .zip files
2.3.6.3 Iterating over resources
2.3.7 Using the scripts API
2.3.7.1 Specifying scripts to install
2.3.7.2 Wrapping callables with scripts
2.3.7.3 Specifying a custom executable for shebangs
2.3.7.4 Generating variants of a script
2.3.7.5 Avoiding overwriting existing scripts
2.3.7.6 Generating windowed scripts on Windows
2.3.8 Using the version API
2.3.8.1 Overview
2.3.8.2 Matching versions against constraints
2.3.9 Using the wheel API
2.3.9.1 Building wheels
2.3.9.2 Customising tags during build
2.3.9.3 Specifying a wheel's version
2.3.9.4 Installing from wheels
2.3.9.5 Verifying wheels
2.3.9.6 Modifying wheels
2.3.9.7 Mounting wheels
2.3.9.8 Using vanilla pip to build wheels for existing distributions on PyPI
2.3.10 Using the manifest API
2.3.10.1 The include directive
2.3.10.2 The exclude directive
2.3.10.3 The global-include directive
2.3.10.4 The global-exclude directive
2.3.10.5 The recursive-include directive
2.3.10.6 The recursive-exclude directive
2.3.10.7 The graft directive
2.3.10.8 The prune directive
2.4 Next steps
3.1 The locators API
3.1.1 The problem we're trying to solve
3.1.2 A minimal solution
3.1.2.1 Locating distributions
3.1.2.2 Finding dependencies
3.2 The index API
3.2.1 The problem we're trying to solve
3.2.2 A minimal solution
3.3 The resources API
3.3.1 The problem we're trying to solve
3.3.2 A minimal solution
3.3.3 Dealing with the requirement for access via file system files
3.4 The scripts API
3.4.1 The problem we're trying to solve
3.4.2 A minimal solution
3.4.2.1 Flag formats
3.5 The version API
3.5.1 The problem we're trying to solve
3.5.2 A minimal solution
3.5.2.1 Versions
3.5.2.2 Matchers
3.5.2.3 Version schemes
3.6 The wheel API
3.6.1 The problem we're trying to solve
3.6.2 A minimal solution
3.7 Next steps
4.1 The distlib.database package
4.1.1 Classes
4.2 The distlib.resources package
4.2.1 Attributes
4.2.2 Functions
4.2.3 Classes
4.3 The distlib.scripts package
4.3.1 Classes
4.3.2 Functions
4.4 The distlib.locators package
4.4.1 Classes
4.4.2 Functions
4.4.3 Variables
4.5 The distlib.index package
4.5.1 Classes
4.6 The distlib.util package
4.6.1 Classes
4.6.2 Functions
4.7 The distlib.wheel package
4.7.1 Attributes
4.7.2 Classes
4.7.3 Functions
4.8 Next steps
5.1 The pkg_resources resource API
5.1.1 Basic resource access
5.1.2 Resource extraction
5.1.3 Provider interface
5.2 The pkg_resources entry point API

Welcome to the documentation for distlib, a library of packaging functionality which is intended to be used as the basis for third-party packaging tools. Using a common layer will improve interoperability and consistency of user experience across those tools which use the library.

Please note: this documentation is a work in progress.

CHAPTER 1
Overview

Start here for all things distlib.

1.1 Distlib evolved out of packaging

Distlib is a library which implements low-level functions that relate to packaging and distribution of Python software. It consists in part of the functions in the packaging Python package, which was intended to be released as part of Python 3.3, but was removed shortly before Python 3.3 entered beta testing.
Note: The packaging package referred to here is not any packaging package currently available on PyPI, but a package which was never released on PyPI and existed under that name in the Python 3.3 alpha stdlib tree.

1.2 What was the problem with packaging?

The packaging software just wasn't ready for inclusion in the Python standard library. The amount of work needed to get it into the desired state was too great, given the number of people able to work on the project, the time they could devote to it, and the Python 3.3 release schedule.

The approach taken by packaging was seen to be a good one: to ensure interoperability and consistency between different tools in the packaging space by defining standards for data formats through PEPs, and to do away with the ad hoc nature of installation encouraged by the distutils approach of using executable Python code in setup.py. Where custom code was needed, it could be provided in a standardised way using installation hooks.

While some very good work was done in defining PEPs to codify some of the best practices, packaging suffered from some drawbacks, too:

• Not all the PEPs may have been functionally complete, because some important use cases were not considered – for example, built (binary) distributions for Windows.
• It continued the command-based design of distutils, which had resulted in distutils being difficult to extend in a consistent, easily understood, and maintainable fashion.
• Some important features required by distribution authors were not considered – for example:
  – Access to data files stored in Python packages.
  – Support for plug-in extension points.
  – Support for native script execution on Windows.
  These features are supported by third-party tools (like setuptools / Distribute) using pkg_resources, entry points and console scripts.
• There were a lot of rough edges in the packaging implementation, both in terms of bugs and in terms of incompletely implemented features. This can be seen (with the benefit of hindsight) as due to the goals being set too ambitiously; the project developers bit off more than they could chew.

1.3 How Distlib can help

The idea behind Distlib is expressed in this python-dev mailing-list post, though a different name was suggested for the library. Basically, Distlib contains the implementations of the packaging PEPs and other low-level features which relate to packaging, distribution, and deployment of Python software. If Distlib can be made genuinely useful, then it is possible for third-party packaging tools to transition to using it. Their developers and users then benefit from standardised implementation of low-level functions, time saved by not having to reinvent wheels, and improved interoperability between tools.

1.4 How you can help

If you have some time and the inclination to improve the state of Python packaging, then you can help by trying out Distlib, raising issues where you find problems, and contributing feedback and/or patches to the implementation, documentation, and underlying PEPs.

1.5 Main features

Distlib currently offers the following features:

• The package distlib.database, which implements a database of installed distributions, as defined by PEP 376, and distribution dependency graph logic. Support is also provided for non-installed distributions (i.e. distributions registered with metadata on an index like PyPI), including the ability to scan for dependencies and build dependency graphs.
• The package distlib.index, which implements an interface to perform operations on an index, such as registering a project, uploading a distribution or uploading documentation. Support is included for verifying SSL connections (with domain matching) and signing/verifying packages using GnuPG.
Note: Since this API was developed, a number of features of PyPI have been turned off for various reasons – for example, documentation uploads and the XML-RPC search API – and a number of APIs have changed (e.g. PyPI no longer shows GnuPG signatures). For now, the distlib.index API should be considered not fully reliable, mostly due to changes in PyPI with which there has not been enough time to catch up.

• The package distlib.metadata, which implements distribution metadata as defined by PEP 643, PEP 566, PEP 345, PEP 314 and PEP 241.

Note: In the past distlib has tracked metadata proposals in PEPs even when they were draft, but this has proven to be too time-consuming. The current policy is not to track standards proactively while they're still being thrashed out, but instead to look at starting to implement them once they're marked Final.

• The package distlib.markers, which implements environment markers as defined by PEP 508.
• The package distlib.manifest, which implements lists of files used in packaging source distributions.
• The package distlib.locators, which allows finding distributions, whether on PyPI (XML-RPC or via the "simple" interface), in local directories or from some other source.
• The package distlib.resources, which allows access to data files stored in Python packages, both in the file system and in .zip files.
• The package distlib.scripts, which allows installation of scripts with adjustment of shebang lines and support for native Windows executable launchers.
• The package distlib.version, which implements version specifiers as defined by PEP 440, but also supports working with "legacy" versions (setuptools/distribute) and semantic versions.
• The package distlib.wheel, which provides support for building and installing from the Wheel format for binary distributions (see PEP 427).
• The package distlib.util, which contains miscellaneous functions and classes which are useful in packaging, but which do not fit neatly into one of the other packages in distlib. The package implements enhanced globbing functionality such as the ability to use ** in patterns to specify recursing into subdirectories.

1.6 Python version and platform compatibility

Distlib is intended to be used on any Python version >= 2.7 and is tested on Python versions 2.7 and 3.6-3.10 on Linux, Windows, and macOS.

1.7 Project status

The project has reached a mature status in its development: there is a test suite and it has been exercised on Windows, Ubuntu and macOS. The project is used by well-known projects such as pip, virtualenv and caniusepython3.

To work with the project, you can download a release from PyPI, or clone the source repository or download a tarball from it. The source repository for the project is on GitHub: https://github.com/pypa/distlib/

Coverage results are available at: https://coveralls.io/github/vsajip/distlib/

Continuous integration test results are available at: https://github.com/pypa/distlib/actions/

You can leave feedback by raising a new issue on the issue tracker.

1.8 Change log for distlib

1.8.1 0.3.6 (future)

Released: Not yet.

• scripts
  – Fixed #175: Updated launcher executables to better handle the relationship between launcher and child process in the Job API.

1.8.2 0.3.5

Released: 2022-07-14

• database
  – Fixed #170: Corrected implementation of get_required_dists().
• index
  – Updated coverage pragmas for tests relating to obsolete PyPI APIs.
• locators
  – Changed the default locator configuration.
• metadata
  – Updates in support of PEP 643 / Metadata 2.2.
• scripts
  – Updated launcher executables. Thanks to <NAME> for his help with the launcher changes.
  – Fixed #164: Improved support for reproducible builds by allowing a fixed date/time to be inserted into created .exe files.
    Thanks to Somber Night for the patch.
• util
  – Fixed #161: Updated test case.
• wheel
  – Updated to write archive path of RECORD to RECORD instead of staging path. Thanks to <NAME> for the patch.
  – Fixed #169: Removed usage of deprecated imp module in favour of importlib.
  – Fixed #172: Compute ABI correctly for Python < 3.8.

In addition to the above, setup.py was replaced by setup.cfg and pyproject.toml.

1.8.3 0.3.4

Released: 2021-12-08

• database
  – Fixed #153: Raise warnings in get_distributions() if bad metadata seen, but keep going.
• markers
  – Fixed #154: Determine Python versions correctly for Python >= 3.10.
• scripts
  – Updated launcher executables. Code relating to support for Python 2.6 was also removed.

1.8.4 0.3.3

Released: 2021-09-22

• compat
  – Fixed #152: Removed splituser() function which wasn't used and is deprecated.
• markers
  – Fixed #149: Handle version comparisons correctly in environment markers.
• scripts
  – Added ARM-64 launchers and support code to use them. Thanks to <NAME> and <NAME> for their contributions.
• util
  – Fixed #148: Handle a single trailing comma following a version. Thanks to <NAME> for the report and suggested fix.
• version
  – Fixed #150: Fix incorrect handling of epochs.
• wheel
  – Reverted handling of tags for Python >= 3.10 (use 310 rather than 3_10). This is because PEP 641 was rejected.
• tests
  – Made changes relating to implementing CI using GitHub Actions.

1.8.5 0.3.2

Released: 2021-05-29

• locators
  – Fixed #141: removed unused regular expression.
• metadata
  – Fixed #140: allowed "Obsoletes" in more scenarios, to better handle faulty metadata already on PyPI.
• resources
  – Fixed #146: added entry for SourcelessFileLoader to the finder registry.
• scripts
  – Made the generation of scripts more configurable:
    * The variant_separator attribute can be set to determine the separator used between a script basename and its X.Y variant.
      The default value is '-' and would result in a final script basename like 'foo-X.Y', whereas setting it to '' would result in a final script basename like 'fooX.Y'.
    * You can also subclass and override the get_script_filenames() method to provide a more customised set of file basenames.
• util
  – Fixed #140: allowed a trailing comma in constraints, to better handle faulty metadata already on PyPI.
  – Moved get_platform() logic from distutils to here.
  – Fixed #143: removed normcase() to avoid some problems on Windows.
• wheel
  – Dropped any trailing data when computing the Python tag.
  – Added support for manylinux tags.
  – Changed handling of tags for Python >= 3.10 (use 3_10 rather than 310).
  – Fixed #147: permission bits are now preserved on POSIX when installing from a wheel.
• tests
  – Fixed #139: improved handling of errors related to the test PyPI server.

1.8.6 0.3.1

Released: 2020-06-27

The repository has been migrated to Git. References to earlier changesets (commits) in issue comments, etc. will be invalid.

• scripts
  – Fixed #132: Added documentation to help with relative interpreter paths. Thanks to <NAME> for the patch.
  – Fixed #134: Allowed specifying a different target Python version when generating scripts.
  – Fixed #135: Exposed the enquote_executable function previously implemented as an internal function.
  – Addressed #138: Improved metadata support for newer metadata versions. Thanks to <NAME> for the patch.
• wheel
  – Changed the output of flags in entry point definitions. Thanks to frostming () for the patch.
  – Stopped writing JSON metadata. Only old-style metadata is written now.

1.8.7 0.3.0

Released: 2019-10-29

• database
  – Issue #102 (partial): modules attribute of InstalledDistribution was incorrectly computed as a list of bytes, rather than a list of str. This has now been corrected.
• locators
  – Updated Locator._get_digest to check PyPI JSON responses for a "digests" dictionary before trying "algo_digest" keys. Thanks to <NAME> for the patch.
• scripts
  – Fixed #123: Improved error message if a resource isn't found.
  – Fixed #124: Stopped norm-casing the executable written into shebangs, as it doesn't work for some non-ASCII paths.
  – Fixed #125: Updated launchers with versions that correctly report errors containing non-ASCII characters. The updated launchers now also support relative paths (see http://bit.ly/2JxmOoi for more information).
  – Changed Python version handling to accommodate versions like e.g. 3.10 (no longer assume a version X.Y where X and Y are single digits).
• util
  – Fixed #127: Allowed hyphens in flags in export specifications.
• wheel
  – Changed Python version handling to accommodate versions like e.g. 3.10 (no longer assume a version X.Y where X and Y are single digits).

1.8.8 0.2.9

Released: 2019-05-14

• index
  – Updated default PyPI URL to https://pypi.org/pypi.
• locators
  – Updated default PyPI URL to https://pypi.org/pypi.
• metadata
  – Relaxed metadata format checks to ignore 'Provides'.
• scripts
  – Fixed #33, #34: Simplified script template.
  – Updated Windows launchers.
• util
  – Fixed #116: Corrected parsing of credentials from URLs.
• wheel
  – Fixed #115: Relaxed check for '..' in wheel archive entries by not checking filename parts, only directory segments.
  – Skip entries in archive entries ending with '/' (directories) when verifying or installing.
• docs
  – Updated default PyPI URL to https://pypi.org/pypi.
  – Commented out Disqus comment section.
  – Changed theme configuration.
  – Updated some out-of-date argument lists.
• tests
  – Updated default PyPI URL to https://pypi.org/pypi.
  – Preserved umask on POSIX across a test.
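The 0.2.9 wheel change above (testing only directory segments, not filenames, for '..') is easy to picture in code. The following is an illustrative stdlib-only sketch of that idea, not distlib's actual implementation:

```python
def unsafe_archive_entry(arcname: str) -> bool:
    """Return True if a wheel archive entry name could escape the target dir.

    Only directory segments are tested for '..', so a filename that merely
    contains '..' (e.g. 'pkg/data..bin') is allowed. Hypothetical helper
    for illustration only.
    """
    parts = arcname.split('/')
    # Every non-final segment is a directory segment; also reject an
    # entry whose final segment is exactly '..'.
    return '..' in parts[:-1] or parts[-1] == '..'

print(unsafe_archive_entry('foo/../bar.py'))   # directory segment is '..'
print(unsafe_archive_entry('pkg/data..bin'))   # '..' only inside a filename
```

Under the earlier, stricter check, a harmless filename like 'data..bin' would have been rejected; restricting the test to directory segments keeps the path-traversal protection while accepting such entries.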
1.8.9 0.2.8

Released: 2018-10-01

• database
  – Fixed #108: Updated metadata scan to look for the METADATA file as well as the JSON formats.
• locators
  – Fixed #112: Handled wheel tags and platform-dependent downloads correctly in SimpleScrapingLocator.
• metadata
  – Fixed #107: Updated documentation on testing to include information on setting PYTHONHASHSEED.
• scripts
  – Fixed #111: Avoided unnecessary newlines in script preambles, which caused problems with detecting encoding declarations. Thanks to <NAME> for the report and patch.
• util
  – Fixed #109: Removed existing files (which might have been symlinks) before overwriting.

1.8.10 0.2.7

Released: 2018-04-16

• compat
  – Fixed #105: cache_from_source is now imported from importlib.util where available.
• database
  – Addressed #102: InstalledDistributions now have a modules attribute which is a list of top-level modules as read from top_level.txt, if that is in the distribution info.
• locators
  – Fixed #103: Thanks to <NAME> for the patch.
• metadata
  – Added support for PEP 566 / Metadata 1.3.
• scripts
  – Fixed #104: Updated launcher binaries. Thanks to <NAME> for the diagnosis and fix.

1.8.11 0.2.6
• tests – Numerous test refinements, not detailed further here. 1.8.12 0.2.5 Released: 2017-05-06 • general – Changed regular expressions to be compatible with 3.6 as regards escape sequences. Thanks to <NAME> for the patch. – closed some resource leaks related to XML-RPC proxies. – Removed Python 2.6 from the support list. • locators – Made downloadability a factor in scoring URLs for preferences. • markers – Replaced the implementation with code which parses requirements in accordance with PEP 508 and eval- uates marker expressions according to PEP 508. • util – Changed _csv_open to use utf-8 across all platforms on Python 3.x. Thanks to <NAME> for the patch. • wheel – Changed to look for metadata in metadata.json as well as pydist.json. • version Distlib Documentation, Release 0.3.6 – Updated requirement parsing in version matchers to use the new PEP 508-compliant code. • tests – Numerous test refinements, not detailed further here. 1.8.13 0.2.4 Released: 2016-09-30 • compat – Updated to not fail on import if SSL is unavailable. • index – Switch from using gpg in preference to gpg2 for signing. This is to avoid gpg2’s behaviour of prompting for passwords, which interferes with the tests on some machines. • locators – Changed project name comparisons to follow PEP 503. Thanks to <NAME> for the patch. – Added errors queue to Locator. • manifest – Changed match logic to work under Python 3.6, due to differences in how fnmatch.translate behaves. • resources – Updated finder registry logic to reflect changes in Python 3.6. • scripts – Fixed regular expression in generated script boilerplate. • util – Updated to not fail on import if SSL is unavailable. – Added normalize_name for project name comparisons using PEP 503. • tests – Updated to skip certain tests if SSL is unavailable. – Numerous other test refinements, not detailed further here. 1.8.14 0.2.3 Released: 2016-04-30 • util – Changed get_executable to return Unicode rather than bytes. 
  – Fixed #84: Allow + character in output script names.
  – Relaxed too-stringent test looking for application/json in headers.
• wheel
  – Sorted the entries in RECORD before writing to file.
• tests
  – Numerous test refinements, not detailed further here.

1.8.15 0.2.2

Released: 2016-01-30

• database
  – Issue #81: Added support for detecting distributions installed by wheel versions >= 0.23 (which use metadata.json rather than pydist.json). Thanks to <NAME> for the patch.
• locators
  – Updated default PyPI URL to https://pypi.python.org/pypi
• metadata
  – Updated to use different formatting for description field for V1.1 metadata.
  – Corrected "classifier" to "classifiers" in the mapping for V1.0 metadata.
• scripts
  – Improved support for Jython when quoting executables in output scripts.
• util
  – Issue #77: Made the internal URL used for extended metadata fetches configurable via a module attribute.
  – Issue #78: Improved entry point parsing to handle leading spaces in ini-format files.
• docs
  – Numerous documentation updates, not detailed further here.
• tests
  – Renamed environment variable SKIP_SLOW to SKIP_ONLINE in tests and applied it to some more tests.
  – Numerous other test refinements, not detailed further here.

1.8.16 0.2.1

Released: 2015-07-07

• locators
  – Issue #58: Return a Distribution instance or None from locate().
  – Issue #59: Skipped special keys when looking for versions.
  – Improved behaviour of PyPIJSONLocator to be analogous to that of other locators.
• resource
  – Added resource iterator functionality.
• scripts
  – Issue #71: Updated launchers to decode shebangs using UTF-8. This allows non-ASCII pathnames to be correctly handled.
  – Ensured that the executable written to shebangs is normcased.
  – Changed ScriptMaker to work better under Jython.
• util
  – Changed the mode setting method to work better under Jython.
  – Changed get_executable() to return a normcased value.
• wheel
  – Handled multiple-architecture wheel filenames correctly.
• docs
  – Numerous documentation updates, not detailed further here.
• tests
  – Numerous test refinements, not detailed further here.

1.8.17 0.2.0

Released: 2014-12-17

• compat
  – Updated match_hostname to use the latest Python implementation.
• database
  – Added download_urls and digests attributes to Distribution.
• locators
  – Issue #48: Fixed the problem of adding a tuple containing a set (unhashable) to a set, by wrapping with frozenset().
  – Issue #55: Return multiple download URLs for distributions, if available.
• manifest
  – Issue #57: Remove unhelpful warnings about pattern matches.
• metadata
  – Updated to reflect changes to PEP 426.
• resources
  – Issue #50: The type of the path needs to be preserved on 2.x.
• scripts
  – Updated (including launchers) to support providing arguments to interpreters in shebang lines.
  – The launcher sources are now included in the repository and the source distribution (they are to be found in the PC directory).
  – Added frames support in IronPython (patch by <NAME>).
  – Issue #51: encode shebang executable using utf-8 rather than fsencode.
• util
  – Removed reference to __PYVENV_LAUNCHER__ when determining executable for scripts (relevant only on macOS).
  – Updated to support changes to PEP 426.
• version
  – Updated to reflect changes to versioning proposed in PEP 440.
• wheel
  – Updated build() code to respect interpreter arguments in prebuilt scripts.
  – Updated to support changes to PEP 426 / PEP 440.
• docs
  – Numerous documentation updates, not detailed further here.
• tests
  – Numerous test refinements, not detailed further here.

1.8.18 0.1.9

Released: 2014-05-19

• index
  – Added keystore keyword argument to signing and verification APIs.
• scripts
  – Issue #47: Updated binary launchers to fix double-quoting bug where script executable paths have spaces.
• docs
  – Numerous documentation updates, not detailed further here.
• tests – Numerous test refinements, not detailed further here. 1.8.19 0.1.8 Released: 2014-03-18 • index – Improved thread-safety in SimpleScrapingLocator (issue #45). – Replaced absolute imports with relative ones. – Added search method to PackageIndex. • locators – Improved thread-safety in SimpleScrapingLocator (issue #45). • metadata – Fixed bug in add_requirements implementation. • resources – The Cache class was refactored into distlib.util.Cache and distlib.resources.ResourceCache classes. • scripts – Implement quoting for executables with spaces in them. • util – Gained the Cache class, which is also used in distlib.wheel. • version – Allowed versions with a single numeric component and a local version component. – Adjusted pre-release computation for legacy versions to be the same as the logic in the setuptools documentation. • wheel – Added verify, update, is_compatible and is_mountable methods to the Wheel class. – Converted local version separators from ‘-’ to ‘_’ and back. – If SOABI not available, used Py_DEBUG, Py_UNICODE_SIZE and WITH_PYMALLOC to derive the ABI. – Added “exists” property to Wheel instances. – Factored out RECORD writing and zip building to separate methods. – Provided the ability to determine the location where extensions are extracted, by using the distlib.util.Cache class. – Avoided using pydist.json in 1.0 wheels (bdist_wheel writes a non-conforming pydist.json.) – Improved computation of compatible tags on macOS, and made COMPATIBLE_TAGS a set. • _backport/sysconfig – Replaced an absolute import with a relative one. • docs – Numerous documentation updates, not detailed further here. • tests – Numerous test refinements, not detailed further here. 1.8.20 0.1.7 Released: 2014-01-16 • metadata – Added some more fields to the metadata for the index. • resources – Use native literal string in cache path. – Issue #40: Now does path adjustments differently for files and zips.
• scripts – Improved checking for venvs when generating scripts. • util – Issue #39: Fall back to temporary directory for cache if home directory unavailable. • wheel – Use native literal string in cache path. 1.8.21 0.1.6 Released: 2013-12-31 • scripts – Updated binary launchers because the wrong variant was shipped with the previous release. • version – Added support for local component in PEP 440 versions. • tests – Numerous test refinements, not detailed further here. 1.8.22 0.1.5 Released: 2013-12-15 • compat – Changed source of import for unescape in Python >= 3.4. • index – Used dummy_threading when threading isn’t available. – Used https for default index. • locators – Used dummy_threading when threading isn’t available. • scripts – Defaulted to setting script mode bits on POSIX. – Use uncompressed executable launchers, since some anti-virus products raise false positive errors. • util – Used dummy_threading when threading isn’t available. • docs – Updated out-of-date links in overview. • tests – Used dummy_threading when threading isn’t available. 1.8.23 0.1.4 Released: 2013-10-31 • scripts – Updated the logic for finding the distlib package using a relative, rather than absolute method. This fixes a problem for pip, where distlib is kept in the pip.vendor.distlib package. • _backport/sysconfig – The analogous change to that made for scripts, described above. 1.8.24 0.1.3 Released: 2013-10-18 • database – Added support for PEP 426 JSON metadata (pydist.json). – Generalised digests to support e.g. SHA256. – Fixed a bug in parsing legacy metadata from .egg directories. – Removed duplicated code relating to parsing “provides” fields. • index – Changes relating to support for PEP 426 JSON metadata (pydist.json). • locators – Changes relating to support for PEP 426 JSON metadata (pydist.json). – Fixed a bug in scoring download URLs for preference when multiple URLs are available.
– The legacy scheme is used for the default locator. – Made changes relating to parsing “provides” fields. – Generalised digests to support e.g. SHA256. – If no release version is found for a requirement, prereleases are now considered even if not explicitly requested. • markers – Added support for markers as specified in PEP 426. • metadata – Added support for PEP 426 JSON metadata (pydist.json). The old metadata class is renamed to LegacyMetadata, and the (new) Metadata class wraps the JSON format (and also the legacy format, through LegacyMetadata). – Removed code which was only used if docutils was installed. This code implemented validation of .rst descriptions, which is not done in distlib. • scripts – Updated the logic for writing executable files to deal as best we can with files which are already in use and hence cannot be deleted on Windows. – Changed the script generation when launchers are used to write a single executable which wraps a script (whether pre-built or generated) and includes a manifest to avoid UAC prompts on Windows. – Changed the interface for script generation options: the make and make_multiple methods of ScriptMaker now take an optional options dictionary. • util – Added extract_by_key() to copy selected keys from one dict to another. – Added parse_name_and_version() for use in parsing “provides” fields. – Made split_filename more flexible. • version – Added support for PEP 440 version matching. – Removed AdaptiveVersion, AdaptiveMatcher etc. as they don’t add sufficient value to justify keeping them in. • wheel – Added wheel_version kwarg to Wheel.build API. – Changed Wheel.install API (after consultation on distutils-sig). – Added support for PEP 426 JSON metadata (pydist.json). – Added lib_only flag to install() method. • docs – Numerous documentation updates, not detailed further here. • tests – Numerous test refinements, not detailed further here.
1.8.25 0.1.2 Released: 2013-04-30 • compat – Added BaseConfigurator backport for 2.6. • database – Return RECORD path from write_installed_files (or None if dry_run). – Explicitly return None from write_shared_locations if dry run. • metadata – Added missing condition in todict(). • scripts – Add variants and clobber flag for generation of foo/fooX/foo-X.Y. – Added .exe manifests for Windows. • util – Regularised recording of written files. – Added Configurator. • version – Tidyups, most suggested by <NAME>: Made key functions private, removed _Common class, removed checking for huge version numbers, made UnsupportedVersionError a ValueError. • wheel – Replaced absolute import with relative. – Handle None return from write_shared_locations correctly. – Fixed bug in Mounter for extension modules not in sub-packages. – Made dylib-cache Python version-specific. • docs – Numerous documentation updates, not detailed further here. • tests – Numerous test refinements, not detailed further here. • other – Corrected setup.py to ensure that sysconfig.cfg is included. 1.8.26 0.1.1 Released: 2013-03-22 • database – Updated requirements logic to use extras and environment markers. – Made it easier to subclass Distribution and EggInfoDistribution. • locators – Added method to clear locator caches. – Added the ability to skip pre-releases. • manifest – Fixed bug which caused side-effect when sorting a manifest. • metadata – Updated to handle most 2.0 fields, though PEP 426 is still a draft. – Added the option to skip unset fields when writing. • resources – Made separate subclasses ResourceBase, Resource and ResourceContainer from Resource. Thanks to <NAME> for the suggestion and patch. • scripts – Fixed bug which prevented writing shebang lines correctly on Windows. • util – Made get_cache_base more useful by parameterising the suffix to use. – Fixed a bug when reading CSV streams from .zip files under 3.x.
• version – Added is_prerelease property to versions. – Moved to PEP 426 version formats and sorting. • wheel – Fixed CSV stream reading under 3.x and handled UTF-8 in zip entries correctly. – Added metadata and info properties, and updated the install method to return the installed distribution. – Added mount/unmount functionality. – Removed compatible_tags() function in favour of COMPATIBLE_TAGS attribute. • docs – Numerous documentation updates, not detailed further here. • tests – Numerous test refinements, not detailed further here. 1.8.27 0.1.0 Released: 2013-03-02 • Initial release. 1.9 Next steps You might find it helpful to look at the Tutorial, or the API Reference. CHAPTER 2 Tutorial 2.1 Installation Distlib is a pure-Python library. You should be able to install it using: pip install distlib for installing distlib into a virtualenv or other directory where you have write permissions. On Posix platforms, you may need to invoke using sudo if you need to install distlib in a protected location such as your system Python’s site-packages directory. 2.2 Testing A full test suite is included with distlib. To run it, you’ll need to download the source distribution, unpack it and run python tests/test_all.py in the top-level directory of the package. If running the tests under Python >= 3.2.3, remember to first set the environment variable PYTHONHASHSEED=0 to disable hash randomisation, which is needed for the tests. (The environment variable also needs to be set if running Python 2.x with -R, which is only available in Python 2.6.8 and later.) Continuous integration test results are available at: https://github.com/pypa/distlib/actions Coverage results are available at: https://app.codecov.io/gh/pypa/distlib Note that the actual coverage is higher than that shown, because coverage under Windows is not included in the above coverage figures.
Note that the index tests are configured, by default, to use a local test server, though they can be configured to run against PyPI itself. This local test server is not bundled with distlib, but is available from: https://raw.github.com/vsajip/pypiserver/standalone/pypi-server-standalone.py This is a slightly modified version of <NAME>’s pypiserver. To use it, the script needs to be copied to the tests folder of the distlib distribution. If the server script is not available, the tests which use it will be skipped. Naturally, this will also affect the coverage statistics. 2.2.1 PYPI availability If PyPI is unavailable or slow, then some of the tests can fail or become painfully slow. To skip tests that might sometimes be slow, set the SKIP_SLOW environment variable: $ SKIP_SLOW=1 PYTHONHASHSEED=0 python tests/test_all.py on Posix, or: C:\> set SKIP_SLOW=1 C:\> set PYTHONHASHSEED=0 C:\> python tests/test_all.py on Windows. 2.3 First steps For now, we just list how to use particular parts of the API as they take shape. 2.3.1 Using the database API You can use the distlib.database package to access information about installed distributions. This information is available through the following classes: • DistributionPath, which represents a set of distributions installed on a path. • Distribution, which represents an individual distribution, conforming to recent packaging PEPs (PEP 643, PEP 566, PEP 508, PEP 440, PEP 386, PEP 376, PEP 345, PEP 314 and PEP 241). • EggInfoDistribution, which represents a legacy distribution in egg format. 2.3.1.1 Distribution paths The Distribution and EggInfoDistribution classes are normally not instantiated directly; rather, they are returned by querying DistributionPath for distributions.
To create a DistributionPath instance, you can do: >>> from distlib.database import DistributionPath >>> dist_path = DistributionPath() 2.3.1.2 Querying a path for distributions In its most basic form, dist_path will provide access to all non-legacy distributions on sys.path. To get these distributions, you invoke the get_distributions() method, which returns an iterable. Let’s try it: >>> list(dist_path.get_distributions()) [] This may seem surprising if you’ve just started looking at distlib, as you won’t have any non-legacy distributions. 2.3.1.3 Including legacy distributions in the search results To include distributions created and installed using setuptools or distribute, you need to create the DistributionPath by specifying an additional keyword argument, like so: >>> dist_path = DistributionPath(include_egg=True) and then you’ll get a less surprising result: >>> len(list(dist_path.get_distributions())) 77 The exact number returned will be different for you, of course.
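Under the hood, DistributionPath discovers distributions by scanning the directories on the path for metadata directories such as *.dist-info (and, with include_egg=True, legacy *.egg-info entries). The following is a simplified, self-contained sketch of that discovery logic, not distlib's actual implementation; the directory and distribution names are invented for illustration:

```python
import os
import tempfile

def find_dist_names(path_entries, include_egg=False):
    """Yield distribution names found as *.dist-info (and optionally
    *.egg-info) directories under each path entry -- a simplified
    sketch of what DistributionPath does."""
    suffixes = ['.dist-info'] + (['.egg-info'] if include_egg else [])
    for entry in path_entries:
        if not os.path.isdir(entry):
            continue
        for item in sorted(os.listdir(entry)):
            for suffix in suffixes:
                if item.endswith(suffix):
                    # 'requests-2.31.0.dist-info' -> 'requests'
                    yield item[:-len(suffix)].split('-')[0]

# Example against a throwaway directory:
tmp = tempfile.mkdtemp()
os.mkdir(os.path.join(tmp, 'requests-2.31.0.dist-info'))
os.mkdir(os.path.join(tmp, 'legacy-1.0.egg-info'))
print(list(find_dist_names([tmp])))                    # ['requests']
print(list(find_dist_names([tmp], include_egg=True)))  # ['legacy', 'requests']
```

As with DistributionPath itself, the legacy entries only show up when explicitly asked for.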
You can ask for a particular distribution by name, using the get_distribution() method: >>> dist_path.get_distribution('setuptools') <EggInfoDistribution u'setuptools' 0.6c11 at '/usr/lib/python2.7/dist-packages/ ˓→setuptools.egg-info'> If you want to look at a specific path other than sys.path, you specify it as a positional argument to the DistributionPath constructor: >>> from pprint import pprint >>> special_dists = DistributionPath(['tests/fake_dists'], include_egg=True) >>> pprint([d.name for d in special_dists.get_distributions()]) ['babar', 'choxie', 'towel-stuff', 'grammar', 'truffles', 'coconuts-aster', 'nut', 'bacon', 'banana', 'cheese', 'strawberry'] or, if you leave out egg-based distributions: >>> special_dists = DistributionPath(['tests/fake_dists']) >>> pprint([d.name for d in special_dists.get_distributions()]) ['babar', 'choxie', 'towel-stuff', 'grammar'] 2.3.1.4 Distribution properties Once you have a Distribution instance, you can use it to get more information about the distribution. For example: • The metadata attribute gives access to the distribution’s metadata (see Using the metadata and markers APIs for more information). • The name_and_version attribute shows the name and version in the format name (X.Y). • The key attribute holds the distribution’s name in lower-case, as you generally want to search for distributions without regard to case sensitivity.
The entries can be used for many purposes, and can point to callable code or data. A common purpose is for publishing callables in the distribution which adhere to a particular protocol. To give a concrete example, the Babel library for internationalisation support provides a mechanism for extracting, from a variety of sources, message text to be internationalised. Babel itself provides functionality to extract messages from e.g. Python and JavaScript source code, but helpfully offers a mechanism whereby providers of other sources of message text can provide their own extractors. It does this by providing a category 'babel.extractors', under which other software can register extractors for their sources. The Jinja2 template engine, for example, makes use of this to provide a message extractor for Jinja2 templates. Babel itself registers its own extractors under the same category, so that a unified view of all extractors in a given Python environment can be obtained, and Babel’s extractors are treated by other parts of Babel in exactly the same way as extractors from third parties. Any installed distribution can offer up values for any category, and a set of distributions (such as the set of installed distributions on sys.path) conceptually has an aggregation of these values. The values associated with a category are a list of strings with the format: name = prefix [ ":" suffix ] [ "[" flags "]" ] where name, prefix, and suffix are pkgnames. suffix and flags are optional and flags follow the description in Flag formats.
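The entry format above can be illustrated with a small parser. This is a simplified sketch of the syntax only, not distlib's own parsing code; the Jinja2/Babel entry string is modelled on the example discussed above:

```python
import re

# name = prefix [ ":" suffix ] [ "[" flags "]" ] -- a simplified pattern
ENTRY_RE = re.compile(
    r'''^(?P<name>\S+)\s*=\s*(?P<prefix>[\w.]+)
        (?::(?P<suffix>[\w.]+))?
        (?:\s*\[(?P<flags>[^\]]*)\])?$''',
    re.VERBOSE,
)

def parse_export_entry(spec):
    """Split an export entry string into its components."""
    m = ENTRY_RE.match(spec.strip())
    if m is None:
        raise ValueError('not a valid export entry: %r' % spec)
    d = m.groupdict()
    d['flags'] = [f.strip() for f in d['flags'].split(',')] if d['flags'] else []
    return d

entry = parse_export_entry('jinja2 = jinja2.ext:babel_extract [i18n]')
print(entry)
# {'name': 'jinja2', 'prefix': 'jinja2.ext', 'suffix': 'babel_extract', 'flags': ['i18n']}
```

The suffix and flags parts are optional, so a bare `name = package.module` entry parses too, with suffix None and an empty flag list.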
For callables, the prefix is the package or module name which contains the callable, suffix is the path to the callable in the module, and flags can be used for any purpose determined by the distribution author (for example, the extras feature in distribute / setuptools). This entry format is used in the distlib.scripts package for installing scripts based on Python callables. 2.3.2 Distribution dependencies You can use the distlib.locators package to locate the dependencies that a distribution has. The distlib.database package has code which allows you to analyse the relationships between a set of distributions: • make_graph(), which generates a dependency graph from a list of distributions. • get_dependent_dists(), which takes a list of distributions and a specific distribution in that list, and returns the distributions that are dependent on that specific distribution. • get_required_dists(), which takes a list of distributions and a specific distribution in that list, and returns the distributions that are required by that specific distribution. The graph returned by make_graph() is an instance of DependencyGraph. 2.3.3 Using the locators API 2.3.3.1 Overview To locate a distribution in an index, we can use the locate() function. This returns a potentially downloadable distribution (in the sense that it has a download URL – of course, there are no guarantees that there will actually be a downloadable resource at that URL). The return value is an instance of distlib.database.Distribution which can be queried for any distributions it requires, so that they can also be located if desired. Here is a basic example: >>> from distlib.locators import locate >>> flask = locate('flask') >>> flask <Distribution Flask (0.10.1) [https://pypi.org/packages/source/F/Flask/Flask-0.10.1.
˓→tar.gz]> >>> dependencies = [locate(r) for r in flask.run_requires] >>> from pprint import pprint >>> pprint(dependencies) [<Distribution Werkzeug (0.9.1) [https://pypi.org/packages/source/W/Werkzeug/Werkzeug- ˓→0.9.1.tar.gz]>, <Distribution Jinja2 (2.7) [https://pypi.org/packages/source/J/Jinja2/Jinja2-2.7.tar. ˓→gz]>, <Distribution itsdangerous (0.21) [https://pypi.org/packages/source/i/itsdangerous/ ˓→itsdangerous-0.21.tar.gz]>] >>> The values in the run_requires property are just strings. Here’s another example, showing a little more detail: >>> authy = locate('authy') >>> authy.run_requires set(['httplib2 (>= 0.7, < 0.8)', 'simplejson']) >>> authy <Distribution authy (1.0.0) [http://pypi.org/packages/source/a/authy/authy-1.0.0.tar. ˓→gz]> >>> deps = [locate(r) for r in authy.run_requires] >>> pprint(deps) [<Distribution httplib2 (0.7.7) [http://pypi.org/packages/source/h/httplib2/httplib2- ˓→0.7.7.zip]>, <Distribution simplejson (3.3.0) [http://pypi.org/packages/source/s/simplejson/ ˓→simplejson-3.3.0.tar.gz]>] >>> Note that the constraints on the dependencies were honoured by locate(). 2.3.3.2 Under the hood Under the hood, locate() uses locators. Locators are a mechanism for finding distributions from a range of sources. Although the pypi subpackage has been copied from distutils2 to distlib, there may be benefits in a higher-level API, and so the distlib.locators package has been created as an experiment. Locators are objects which locate distributions. A locator instance’s get_project() method is called, passing in a project name. The method returns a dictionary containing information about distribution releases found for that project. The keys of the returned dictionary are versions, and the values are instances of distlib.database.Distribution. The following locators are provided: • DirectoryLocator – this is instantiated with a base directory and will look for archives in the file system tree under that directory.
Name and version information is inferred from the filenames of archives, and the amount of information returned about the download is minimal. The locator searches all subdirectories by default, but can be set to only look in the specified directory by setting the recursive keyword argument to False. • PyPIRPCLocator – this takes a base URL for the RPC service and will locate packages using PyPI’s XML-RPC API. This locator is a little slow (the scraping interface seems to work faster) and case-sensitive. For example, searching for 'flask' will throw up no results, but you get the expected results when searching for 'Flask'. This appears to be a limitation of the underlying XML-RPC API. Note that 20 versions of a project necessitate 41 network calls (one to get the versions, and two more for each version – one to get the metadata, and another to get the downloads information). • PyPIJSONLocator – this takes a base URL for the JSON service and will locate packages using PyPI’s JSON API. This locator is case-sensitive. For example, searching for 'flask' will throw up no results, but you get the expected results when searching for 'Flask'. This appears to be a limitation of the underlying JSON API. Note that unlike the XML-RPC service, only non-hidden releases will be returned. • SimpleScrapingLocator – this takes a base URL for the site to scrape, and locates packages using a similar approach to the PackageFinder class in pip, or as documented in the setuptools documentation as the approach used by easy_install. • DistPathLocator – this takes a DistributionPath instance and locates installed distributions. This can be used with AggregatingLocator to satisfy requirements from installed distributions before looking elsewhere for them. • JSONLocator – this uses an improved JSON metadata schema and returns data on all versions of a distribution, including dependencies, using a single network request.
• AggregatingLocator – this takes a list of other locators and delegates finding projects to them. It can either return the first result found (i.e. from the first locator in the list provided which returns a non-empty result), or a merged result from all the locators in the list. There is a default locator, available at distlib.locators.default_locator. The locators package also contains a function, get_all_distribution_names(), which retrieves the names of all distributions registered on PyPI: >>> from distlib.locators import get_all_distribution_names >>> names = get_all_distribution_names() >>> len(names) 31905 >>> This is implemented using the XML-RPC API. Apart from JSONLocator, none of the locators currently returns enough metadata to allow dependency resolution to be carried out, but that is a result of the fact that metadata relating to dependencies is not indexed: obtaining it would require not just downloading the distribution archives and inspecting the contained metadata files, but potentially also introspecting setup.py! This is the downside of having vital information only available via keyword arguments to the setup() call: hopefully, a move to fully declarative metadata will facilitate indexing it and allowing the provision of improved features. The locators will skip binary distributions other than wheels (.egg files are currently treated as binary distributions). The PyPI locator classes don’t yet support the use of mirrors, but that can be added in due course – once the basic functionality is working satisfactorily.
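The first-match behaviour of AggregatingLocator can be sketched with plain functions standing in for locators. Everything below (the stand-in "locators" and their version data) is invented for illustration; it only mirrors the get_project() contract described above, where each locator maps a project name to a dictionary of versions:

```python
def aggregate_first_match(locators, project):
    """Return the result from the first locator that finds anything,
    mirroring AggregatingLocator's default (non-merging) behaviour."""
    for loc in locators:
        result = loc(project)
        if result:  # first non-empty {version: info} mapping wins
            return result
    return {}

# Stand-in "locators": each maps a project name to {version: info}.
installed = lambda name: {'1.0': 'installed'} if name == 'flask' else {}
index = lambda name: {'1.1': 'sdist'} if name in ('flask', 'authy') else {}

# 'flask' is satisfied by the first locator; 'authy' falls through.
print(aggregate_first_match([installed, index], 'flask'))   # {'1.0': 'installed'}
print(aggregate_first_match([installed, index], 'authy'))   # {'1.1': 'sdist'}
print(aggregate_first_match([installed, index], 'nosuch'))  # {}
```

Placing a locally-backed locator first is the same pattern described above for DistPathLocator: satisfy requirements from installed distributions before going to the network.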
This is instantiated with the URL of the repository (which can be omitted if you want to use PyPI itself): >>> from distlib.index import PackageIndex >>> index = PackageIndex() >>> index.url 'http://pypi.org/pypi' To use a local test server, you might do this: >>> index = PackageIndex('http://localhost:8080/') 2.3.4.2 Registering a project Registering a project can be done using a Metadata instance which holds the index metadata used for registering. A simple example: >>> from distlib.metadata import Metadata >>> metadata = Metadata() >>> metadata.name = 'tatterdemalion' >>> metadata.version = '0.1' >>> # other fields omitted >>> response = index.register(metadata) The register() method returns an HTTP response, such as might be returned by a call to urlopen. If an error occurs, an HTTPError will be raised. Otherwise, the response.code should be 200. 2.3.4.3 Uploading a source distribution To upload a source distribution, you need to do the following as a minimum: >>> metadata = ... # get a populated Metadata instance >>> response = index.upload_file(metadata, archive_name) The upload_file() method returns an HTTP response or, in case of error, raises an HTTPError. 2.3.4.4 Uploading binary distributions When uploading binary distributions, you need to specify the file type and Python version, as in the following example: >>> response = index.upload_file(metadata, archive_name, ... filetype='bdist_dumb', ... pyversion='2.6') 2.3.4.5 Signing a distribution To sign a distribution, you will typically need GnuPG. The default implementation looks for gpg or gpg2 on the path, but if not available there, you can explicitly specify an absolute path indicating where the signing program is to be found: >>> index.gpg = '/path/to/gpg' Once this is set, you can sign the archive before uploading, as follows: >>> response = index.upload_file(metadata, archive_name, ... signer='Test User', ...
sign_password='secret', keystore='/path/to/keys') As an alternative to passing the keystore with each call, you can specify that in an instance attribute: >>> index.gpg_home = '/path/to/keys' The keystore is a directory which contains the GnuPG key database (files like pubring.gpg, secring.gpg, and trustdb.gpg). When you sign a distribution, both the distribution and the signature are uploaded to the index. 2.3.4.6 Downloading files The PackageIndex class contains a utility method which allows you to download distributions (and other files, such as signatures): >>> index.download_file(url, destfile, digest=None, reporthook=None) This is similar in function to urlretrieve() in the standard library. Provide a digest if you want the call to check that the hash digest of the downloaded file matches a specific value; if not provided, no matching is done. The value passed can just be a plain string in the case of an MD5 digest or, if you want to specify the hashing algorithm to use, specify a tuple such as ('sha1', '0123456789abcdef...'). The hashing algorithm must be one that’s supported by the hashlib module. Benefits to using this method over plain urlretrieve() are: • It will use the ssl_verifier, if set, to ensure that the download is coming from where you think it is (see Verifying HTTPS connections). • It will compute the digest as it downloads, saving you from having to read the whole of the downloaded file just to compute its digest. Note that the url you download from doesn’t actually need to be on the index – in theory, it could be from some other site. Note that if you have an ssl_verifier set on the index, it will perform its checks according to whichever url you supply – whether it’s a resource on the index or not. 2.3.4.7 Verifying signatures For any archive downloaded from an index, you can retrieve any signature by just appending .asc to the path portion of the download URL for the archive, and downloading that.
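The incremental digest computation described for download_file can be sketched as follows: update a hash object chunk by chunk while reading, then compare against the expected value at the end. This is a simplified standalone version operating on an in-memory stream, not the method's actual code:

```python
import hashlib
import io

def read_with_digest_check(stream, expected, chunk_size=8192):
    """Read a stream in chunks, updating a digest as we go, and raise
    if the final digest does not match. `expected` is either a hex
    string (treated as MD5, as described above) or an
    (algorithm, hexdigest) tuple."""
    if isinstance(expected, tuple):
        algo, hexdigest = expected
    else:
        algo, hexdigest = 'md5', expected
    hasher = hashlib.new(algo)
    data = bytearray()
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        hasher.update(chunk)  # digest computed as the data is "downloaded"
        data.extend(chunk)
    if hasher.hexdigest() != hexdigest:
        raise ValueError('digest mismatch: got %s' % hasher.hexdigest())
    return bytes(data)

payload = b'hello world'
good = ('sha1', hashlib.sha1(payload).hexdigest())
print(read_with_digest_check(io.BytesIO(payload), good))  # b'hello world'
```

A wrong expected digest raises ValueError, which is the useful property: the data never has to be re-read after the transfer just to check it.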
The index class offers a verify_signature() method for validating a signature. If you have files ‘good.bin’, ‘bad.bin’ which are different from each other, and ‘good.bin.asc’ has the signature for ‘good.bin’, then you can verify signatures like this: >>> index.verify_signature('good.bin.asc', 'good.bin', '/path/to/keys') True >>> index.verify_signature('good.bin.asc', 'bad.bin', '/path/to/keys') False The last argument, which is optional, specifies a directory which holds the GnuPG keys used for verification – the keystore. Instead of specifying the keystore location in each call, you can specify the location in an instance attribute: >>> index.gpg_home = '/path/to/keys' If you do this, you don’t need to pass the keystore location. Note that if you don’t have the gpg or gpg2 programs on the path, you may need to specify the location of the verifier program explicitly: >>> index.gpg = '/path/to/gpg' Some caveats about verified signatures In order to be able to perform signature verification, you’ll have to ensure that the public keys of whoever signed those distributions are in your key store. However, having these keys shouldn’t give you a false sense of security; unless you can be sure that those keys actually belong to the people or organisations they purport to represent, the signature has no real value, even if it is verified without error. For you to be able to trust a key, it would need to be signed by someone you trust, who vouches for it – and this requires there to be either a signature from a valid certifying authority (e.g. Verisign, Thawte etc.) or a Web of Trust around the keys that you want to rely on. An index may itself countersign distributions (so it deals with the keys of the distribution publishers, but you need only deal with the public signing key belonging to the index). If you trust the index, you can trust the verified signature if it’s signed by the index. 
2.3.4.8 Uploading documentation To upload documentation, you need to specify the metadata and the directory which is the root of the documentation (typically, if you use Sphinx to build your documentation, this will be something like <project>/docs/_build/html): >>> response = index.upload_documentation(metadata, doc_dir) The upload_documentation() method returns an HTTP response or, in case of error, raises an HTTPError. The call will zip up the entire contents of the passed directory doc_dir and upload the zip file to the index. 2.3.4.9 Authentication Operations which update the index (all of the above) will require authenticated requests. You can specify a username and password to use for requests sent to the index: >>> index.username = 'test' >>> index.password = 'secret' For your convenience, these will be automatically read from any .pypirc file which you have; if it contains entries for multiple indexes, a repository key in .pypirc must match index.url to identify which username and password are to be read from .pypirc. Note that to ensure compatibility, distlib uses distutils code to read the .pypirc configuration. Thus, given the .pypirc file:

[distutils]
index-servers =
    pypi
    test

[pypi]
username: me
password: my_strong_password

[test]
repository: http://localhost:8080/
username: test
password: secret

you would see the following: >>> index = PackageIndex() >>> index.username 'me' >>> index.password 'my_strong_password' >>> index = PackageIndex('http://localhost:8080/') >>> index.username 'test' >>> index.password 'secret' 2.3.4.10 Verifying HTTPS connections Although Python has full support for SSL, it does not, by default, verify SSL connections to servers. That’s because in order to do so, a set of certificates which certify the identity of the server needs to be provided (see the relevant Python documentation for details).
Support for verifying SSL connections is provided in distlib through a handler, distlib.util.HTTPSHandler. To use it, set the ssl_verifier attribute of the index to a suitably configured instance. For example: >>> from distlib.util import HTTPSHandler >>> verifier = HTTPSHandler('/path/to/root/certs.pem') >>> index.ssl_verifier = verifier By default, the handler will attempt to match domains, including wildcard matching. This means that (for example) if you access foo.org or www.foo.org and the server has a certificate for *.foo.org, the domains will match. If the domains don’t match, the handler raises a CertificateError (a subclass of ValueError). Domain mismatches can, however, happen for valid reasons. Say a hosting server bar.com hosts www.foo.org, which we are trying to access using SSL. If the server holds a certificate for www.foo.org, it will present it to the client, as long as both support Server Name Indication (SNI). While distlib supports SNI where Python supports it, Python 2.x does not include SNI support. For this or some other reason, you may wish to turn domain matching off. To do so, instantiate the verifier like this: >>> verifier = HTTPSHandler('/path/to/root/certs.pem', False) Ensuring that only HTTPS connections are made You may want to ensure that traffic is only HTTPS for a particular interaction with a server – for example: • Deal with a Man-In-The-Middle proxy server which listens on port 443 but talks HTTP rather than HTTPS • Deal with situations where an index page obtained via HTTPS contains links with a scheme of http rather than https. To do this, instead of using HTTPSHandler as shown above, use the HTTPSOnlyHandler class instead, which disallows any HTTP traffic.
It’s used in the same way as HTTPSHandler:

>>> from distlib.util import HTTPSOnlyHandler
>>> verifier = HTTPSOnlyHandler('/path/to/root/certs.pem')
>>> index.ssl_verifier = verifier

Note that with this handler, you can’t make any HTTP connections at all - it will raise URLError if you try.

Getting hold of root certificates

At the time of writing, you can find a file in the appropriate format on the cURL website. Just download the cacert.pem file and pass the path to it when instantiating your verifier.

2.3.4.11 Saving a default configuration

If you don’t have a .pypirc file but want to save one, you can do this by setting the username and password and calling the save_configuration() method:

>>> index = PackageIndex()
>>> index.username = 'fred'
>>> index.password = 'flintstone'
>>> index.save_configuration()

This will use distutils code to save a default .pypirc file which specifies a single index – PyPI – with the specified username and password.

2.3.4.12 Searching PyPI

You can use the search() method of PackageIndex to search for distributions on PyPI:

>>> index = PackageIndex()
>>> from pprint import pprint
>>> pprint(index.search('tatterdema'))
[{'_pypi_ordering': 0,
  'name': 'tatterdemalion',
  'summary': 'A dummy distribution',
  'version': '0.1.0'}]

If a string is specified, just the name is searched for. Alternatively, you can specify a dictionary of attributes to search for, along with values to match.
For example: >>> pprint(index.search({'summary': 'dummy'})) [{'_pypi_ordering': 5, 'name': 'collective.lorem', 'summary': 'A package that provides dummy content generation.', 'version': '0.2.3'}, {'_pypi_ordering': 7, 'name': 'collective.loremipsum', 'summary': 'Creates dummy content with populated Lorem Ipsum.', 'version': '0.8'}, {'_pypi_ordering': 1, 'name': 'cosent.dummypackage', 'summary': 'A dummy package for buildtools testing', 'version': '0.4'}, {'_pypi_ordering': 0, 'name': 'django-dummyimage', 'summary': 'Dynamic Dummy Image Generator For Django!', 'version': '0.1.1'}, {'_pypi_ordering': 1, 'name': 'django-plainpasswordhasher', 'summary': 'Dummy (plain text) password hashing for Django.', 'version': '0.2'}, {'_pypi_ordering': 2, 'name': 'django-plainpasswordhasher', 'summary': 'Dummy (plain text) password hashing for Django.', 'version': '0.3'}, {'_pypi_ordering': 1, 'name': 'dummycache', 'summary': 'A dummy in-memory cache for development and testing. (Not recommended ˓→for production use.)', 'version': '0.0.2'}, {'_pypi_ordering': 0, 'name': 'dummy-txredis', 'summary': 'Dummy txRedis client and factory.', 'version': '0.5'}, {'_pypi_ordering': 7, 'name': 'eea.eggmonkeytesttarget', 'summary': 'A dummy package to test eea.eggmonkey', 'version': '5.7'}, {'_pypi_ordering': 8, 'name': 'invewrapper', 'summary': 'dummy/transitional package that depends on "pew"', 'version': '0.1.8'}, {'_pypi_ordering': 0, 'name': 'monoprocessing', 'summary': 'A dummy implementation of multiprocessing.Pool', 'version': '0.1'}, {'_pypi_ordering': 0, 'name': 'myFun', 'summary': 'This is a dummy function which prints given list data.', 'version': '1.0.0'}, {'_pypi_ordering': 0, 'name': 'ReadableDict-a-dict-without-brackets', 'summary': 'provides a dummy implementation of a dict without brackets', 'version': '0.0'}, {'_pypi_ordering': 4, (continues on next page) Distlib Documentation, Release 0.3.6 (continued from previous page) 'name': 'setuptools_dummy', 'summary': 'Setuptools 
Dummy Filefinder', 'version': '0.1.0.4'}, {'_pypi_ordering': 0, 'name': 'tatterdemalion', 'summary': 'A dummy distribution', 'version': '0.1.0'}] If you specify multiple attributes, then the search returns the intersection of matches – an and operation: >>> pprint(index.search({'summary': 'dummy', 'name': 'ta'})) [{'_pypi_ordering': 7, 'name': 'eea.eggmonkeytesttarget', 'summary': 'A dummy package to test eea.eggmonkey', 'version': '5.7'}, {'_pypi_ordering': 0, 'name': 'tatterdemalion', 'summary': 'A dummy distribution', 'version': '0.1.0'}] If you want a union of matches – an or operation – specify a second argument to the PackageIndex.search() method with the value 'or': >>> pprint(index.search({'version': '2013.9', 'name': 'pytzp'}, 'or')) [{'_pypi_ordering': 65, 'name': 'pytz', 'summary': 'World timezone definitions, modern and historical', 'version': '2013.9'}, {'_pypi_ordering': 2, 'name': 'pytzpure', 'summary': 'A pure-Python version of PYTZ (timezones).', 'version': '0.2.4'}] The search functionality makes use of PyPI’s XML-RPC interface, so it will only work for indexes which supply a compatible implementation. The following search attributes are currently supported: • name • version • stable_version • author • author_email • maintainer • maintainer_email • home_page • license • summary • description • keywords • platform Distlib Documentation, Release 0.3.6 • download_url • classifiers (list of classifier strings) • project_url • docs_url (URL of the pythonhosted.org docs if they’ve been supplied) 2.3.5 Using the metadata and markers APIs The metadata API is exposed through a Metadata class. This class can read and write metadata files complying with any of the defined versions: 1.0 (PEP 241), 1.1 (PEP 314), 1.2 (PEP 345), 2.1 (PEP 566) and 2.2 (PEP 643). It implements methods to parse and write metadata files. 
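Before looking at the API in detail, it may help to see what these files contain. Metadata files such as PKG-INFO and METADATA use an RFC 822-style key/value format, so their gist can be sketched with the stdlib email parser alone. This is an illustration with made-up field values; the Metadata class adds validation, version schemes and the JSON forms.

```python
# Parse a minimal PKG-INFO-style document with the stdlib email parser.
from email import message_from_string

PKG_INFO = """\
Metadata-Version: 1.0
Name: CLVault
Version: 0.5
Summary: Command-line utility to store and retrieve passwords
"""

msg = message_from_string(PKG_INFO)
print(msg['Name'], msg['Version'])  # CLVault 0.5
```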
2.3.5.1 Instantiating metadata

You can simply instantiate a Metadata instance and start populating it:

>>> from distlib.metadata import Metadata
>>> md = Metadata()
>>> md.name = 'foo'
>>> md.version = '1.0'

An instance so created may not be valid unless it has some minimal properties which meet certain constraints, as specified in the Core metadata specifications. These constraints aren’t applicable to legacy metadata. Therefore, when creating Metadata instances to deal with such metadata, you can specify the scheme keyword when creating the instance:

>>> legacy_metadata = Metadata(scheme='legacy')

The term ‘legacy’ refers to the version scheme. Whether dealing with current or legacy metadata, an instance’s validate() method can be called to ensure that the metadata has no missing or invalid data. This raises a DistlibException (either MetadataMissingError or MetadataInvalidError) if the metadata isn’t valid. You can initialise an instance with a dictionary using the following form:

>>> metadata = Metadata(mapping=a_dictionary)

2.3.5.2 Reading metadata from files and streams

The Metadata class can be instantiated with the path of the metadata file. Here’s an example with legacy metadata:

>>> from distlib.metadata import Metadata
>>> metadata = Metadata(path='PKG-INFO')
>>> metadata.name
'CLVault'
>>> metadata.version
'0.5'
>>> metadata.run_requires
['keyring']

Instead of using the path keyword argument to specify a file location, you can also specify a fileobj keyword argument to specify a file-like object which contains the data.

2.3.5.3 Writing metadata to paths and streams

Writing metadata can be done using the write method:

>>> metadata.write(path='/to/my/pydist.json')

You can also specify a file-like object to write to, using the fileobj keyword argument.

2.3.5.4 Using markers

Environment markers are implemented in the distlib.markers package and accessed via a single function, interpret().
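To give a flavour of what such a function evaluates, here is a toy evaluator for a single comparison of the form NAME OP "LITERAL". This is an illustrative simplification, not distlib's implementation – the real interpret() handles the full marker grammar.

```python
# Toy marker evaluator: one comparison, dotted-integer versions only.
import re
import sys

def toy_interpret(expr, context=None):
    m = re.match(r'\s*(\w+)\s*(==|!=|<=|>=|<|>)\s*"([^"]*)"\s*$', expr)
    if not m:
        raise ValueError('unsupported expression: %r' % expr)
    name, op, literal = m.groups()
    env = {'python_version': '%s.%s' % sys.version_info[:2]}
    # a context dictionary is checked for values before the environment
    value = context[name] if context and name in context else env[name]
    key = lambda v: tuple(int(p) for p in v.split('.'))
    a, b = key(value), key(literal)
    return {'==': a == b, '!=': a != b, '<': a < b,
            '<=': a <= b, '>': a > b, '>=': a >= b}[op]

print(toy_interpret('python_version >= "1.0"'))                             # True
print(toy_interpret('python_version >= "1.0"', {'python_version': '0.5'}))  # False
```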
See PEP 508 for more information about environment markers. The interpret() function takes a string argument which represents a Boolean expression, and returns either True or False: >>> from distlib.markers import interpret >>> interpret('python_version >= "1.0"') True You can pass in a context dictionary which is checked for values before the environment: >>> interpret('python_version >= "1.0"', {'python_version': '0.5'}) False You won’t normally need to work with markers in this way – they are dealt with by the Metadata and Distribution logic when needed. 2.3.6 Using the resource API You can use the distlib.resources package to access data stored in Python packages, whether in the file system or .zip files. Consider a package which contains data alongside Python code: foofoo bar bar_resource.bin baz.py __init__.py foo_resource.bin __init__.py nested nested_resource.bin 2.3.6.1 Access to resources in the file system You can access these resources like so: >>> from distlib.resources import finder >>> f = finder('foofoo') >>> r = f.find('foo_resource.bin') >>> r.is_container False >>> r.size 10 (continues on next page) Distlib Documentation, Release 0.3.6 (continued from previous page) >>> r.bytes b'more_data\n' >>> s = r.as_stream() >>> s.read() b'more_data\n' >>> s.close() >>> r = f.find('nested') >>> r.is_container True >>> r.resources {'nested_resource.bin'} >>> r = f.find('nested/nested_resource.bin') >>> r.size 12 >>> r.bytes b'nested data\n' >>> f = finder('foofoo.bar') >>> r = f.find('bar_resource.bin') >>> r.is_container False >>> r.bytes b'data\n' 2.3.6.2 Access to resources in the .zip files It works the same way if the package is in a .zip file. 
Given the zip file foo.zip: $ unzip -l foo.zip Archive: foo.zip Length Date Time Name --------- ---------- ----- ---- 10 2012-09-20 21:34 foo/foo_resource.bin 8 2012-09-20 21:42 foo/__init__.py 14 2012-09-20 21:42 foo/bar/baz.py 8 2012-09-20 21:42 foo/bar/__init__.py 5 2012-09-20 21:33 foo/bar/bar_resource.bin --------- ------- 45 5 files You can access its resources as follows: >>> import sys >>> sys.path.append('foo.zip') >>> from distlib.resources import finder >>> f = finder('foo') >>> r = f.find('foo_resource.bin') >>> r.is_container False >>> r.size 10 >>> r.bytes 'more_data\n' and so on. Distlib Documentation, Release 0.3.6 2.3.6.3 Iterating over resources You can iterate over resources as shown in the following example: >>> from distlib.resources import finder >>> f = finder('foofoo') >>> iterator = f.iterator('') >>> for r in iterator: print('%-20s %s' % (r.name, r.is_container)) ... True foo_resource.bin False __init__.py False bar True bar/bar_resource.bin False bar/baz.py False bar/__init__.py False nested True nested/nested_resource.bin False It works with zipped resources, too: >>> import sys >>> sys.path.append('foo.zip') >>> from distlib.resources import finder >>> f = finder('foo') >>> iterator = f.iterator('') >>> for r in iterator: print('%-20s %s' % (r.name, r.is_container)) ... True foo_resource.bin False __init__.py False bar True bar/bar_resource.bin False bar/baz.py False bar/__init__.py False 2.3.7 Using the scripts API You can use the distlib.scripts API to install scripts. Installing scripts is slightly more involved than just copying files: • You may need to adjust shebang lines in scripts to point to the interpreter to be used to run scripts. This is important in virtual environments (venvs), and also in other situations where you may have multiple Python installations on a single computer. 
• On Windows, on systems where the PEP 397 launcher isn’t installed, it is not easy to ensure that the correct Python interpreter is used for a script. You may wish to install native Windows executable launchers which run the correct interpreter, based on a shebang line in the script.

2.3.7.1 Specifying scripts to install

To install scripts, create a ScriptMaker instance, giving it the source and target directories for scripts:

>>> from distlib.scripts import ScriptMaker
>>> maker = ScriptMaker(source_dir, target_dir)

You can then install a script foo.py like this:

>>> maker.make('foo.py')

The string passed to make can take one of the following forms:

• A filename, relative to the source directory for scripts, such as foo.py or subdir/bar.py.
• A reference to a callable, given in the form:

name = some_package.some_module:some_callable [flags]

where the flags part is optional. For more information about flags, see Flag formats. Note that this format is exactly the same as for export entries in a distribution (see Exporting things from Distributions). When this form is passed to the ScriptMaker.make() method, a Python stub script is created with the appropriate shebang line and with code to load and call the specified callable with no arguments, returning its value as the return code from the script.

You can pass an optional options dictionary to the make() method. This is meant to contain options which control script generation. There are two options currently in use:

gui: This Boolean value, if True, indicates on Windows that a Windows executable launcher (rather than a launcher which is a console application) should be used. (This only applies if add_launchers is true.)

interpreter_args: If provided, this should be a list of strings which are added to the shebang line following the interpreter. If there are values with spaces, you will need to surround them with double quotes.
Note: Use of this feature may affect portability, since POSIX does not standardise how these arguments are passed to the interpreter (see https://en.wikipedia.org/wiki/Shebang_line#Portability for more information).

For example, you can pass {'gui': True} to generate a windowed script.

2.3.7.2 Wrapping callables with scripts

Let’s see how wrapping a callable works. Consider the following file:

$ cat scripts/foo.py
def main():
    print('Hello from foo')

def other_main():
    print('Hello again from foo')

We can try wrapping main and other_main as callables:

>>> from distlib.scripts import ScriptMaker
>>> maker = ScriptMaker('scripts', '/tmp/scratch')
>>> maker.make_multiple(('foo = foo:main', 'bar = foo:other_main'))
['/tmp/scratch/foo', '/tmp/scratch/bar']

We can inspect the resulting scripts. First, foo:

$ ls /tmp/scratch/
bar foo
$ cat /tmp/scratch/foo
#!/usr/bin/python
if __name__ == '__main__':
    import sys, re

    def _resolve(module, func):
        __import__(module)
        mod = sys.modules[module]
        parts = func.split('.')
        result = getattr(mod, parts.pop(0))
        for p in parts:
            result = getattr(result, p)
        return result

    try:
        sys.argv[0] = re.sub('-script.pyw?$', '', sys.argv[0])
        func = _resolve('foo', 'main')
        rc = func() # None interpreted as 0
    except Exception as e:  # only supporting Python >= 2.6
        sys.stderr.write('%s\n' % e)
        rc = 1
    sys.exit(rc)

The other script, bar, is different only in the essentials:

$ diff /tmp/scratch/foo /tmp/scratch/bar
16c16
<         func = _resolve('foo', 'main')
---
>         func = _resolve('foo', 'other_main')

2.3.7.3 Specifying a custom executable for shebangs

You may need to specify a custom executable for shebang lines. To do this, set the executable attribute of a ScriptMaker instance to the absolute Unicode path of the executable which you want to be written to the shebang lines of scripts. If not specified, the executable running the ScriptMaker code is used.
If the value has spaces, you should surround it with double quotes. You can use the enquote_executable() function for this.

Changed in version 0.3.1: The enquote_executable() function was an internal function _enquote_executable() in earlier versions.

For relocatable .exe files under Windows, you can specify the location of the python executable relative to the script by putting <launcher_dir> as the beginning of the executable path. Since Windows places python.exe in the root install directory and the application scripts in the Scripts subdirectory, setting maker.executable = r"<launcher_dir>\..\python.exe" will allow you to move a python installation which is installed together with an application to a different path or a different machine and the .exe files will still run.

2.3.7.4 Generating variants of a script

When installing a script foo, it is not uncommon to want to install version-specific variants such as foo3 or foo-3.2. You can control exactly which variants of the script get written through the ScriptMaker instance’s variants attribute. This defaults to set(('', 'X.Y')), which means that by default a script foo would be installed as foo and foo-3.2 under Python 3.2. If the value of the variants attribute were set(('', 'X', 'X.Y')) then the foo script would be installed as foo, foo3 and foo-3.2 when run under Python 3.2.

Note: If you need to generate variants for a different version of Python than the one running the ScriptMaker code, set the version_info attribute of the ScriptMaker instance to a 2-tuple holding the major and minor version numbers of the target Python version. New in version 0.3.1.

2.3.7.5 Avoiding overwriting existing scripts

In some scenarios, you might overwrite existing scripts when you shouldn’t. For example, if you use Python 2.7 to install a distribution with script foo in the user site (see PEP 370), you will write (on POSIX) scripts ~/.local/bin/foo and ~/.local/bin/foo-2.7.
If you then install the same distribution with Python 3.2, you would write (on POSIX) scripts ~/.local/bin/foo and ~/.local/bin/foo-3.2. However, by overwriting the ~/.local/bin/foo script, you may cause verification or removal of the 2.7 installation to fail, because the overwritten file may be different (and so have a different hash from what was computed during the 2.7 installation). To control overwriting of generated scripts this way, you can use the clobber attribute of a ScriptMaker instance. This is set to False by default, which prevents overwriting; to force overwriting, set it to True.

2.3.7.6 Generating windowed scripts on Windows

The make() and make_multiple() methods take an optional second options argument, which can be used to control script generation. If specified, this should be a dictionary of options. Currently, only the value for the gui key in the dictionary is inspected: if True, it generates scripts with .pyw extensions (rather than .py) and, if add_launchers is specified as True in the ScriptMaker instance, then (on Windows) a windowed native executable launcher is created (otherwise, the native executable launcher will be a console application).
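The variant naming rules described above can be sketched as follows. This is illustrative only; ScriptMaker derives these names internally from its variants and version_info attributes.

```python
# Derive script names from a base name, a set of variant codes and a
# (major, minor) version tuple, mirroring the rules described above.
def variant_names(basename, variants, version_info=(3, 2)):
    major, minor = version_info[:2]
    forms = {'': basename,
             'X': '%s%s' % (basename, major),
             'X.Y': '%s-%s.%s' % (basename, major, minor)}
    return sorted(forms[v] for v in variants)

print(variant_names('foo', {'', 'X.Y'}))       # ['foo', 'foo-3.2']
print(variant_names('foo', {'', 'X', 'X.Y'}))  # ['foo', 'foo-3.2', 'foo3']
```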
2.3.8 Using the version API 2.3.8.1 Overview The NormalizedVersion class implements a PEP 440 compatible version: >>> from distlib.version import NormalizedVersion >>> v1 = NormalizedVersion('1.0') >>> v2 = NormalizedVersion('1.0a1') >>> v3 = NormalizedVersion('1.0b1') >>> v4 = NormalizedVersion('1.0c1') >>> v5 = NormalizedVersion('1.0.post1') >>> These sort in the expected order: >>> v2 < v3 < v4 < v1 < v5 True >>> You can’t pass any old thing as a version number: Distlib Documentation, Release 0.3.6 >>> NormalizedVersion('foo') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "distlib/version.py", line 49, in __init__ self._parts = parts = self.parse(s) File "distlib/version.py", line 254, in parse def parse(self, s): return normalized_key(s) File "distlib/version.py", line 199, in normalized_key raise UnsupportedVersionError(s) distlib.version.UnsupportedVersionError: foo >>> 2.3.8.2 Matching versions against constraints The NormalizedMatcher is used to match version constraints against versions: >>> from distlib.version import NormalizedMatcher >>> m = NormalizedMatcher('foo (1.0b1)') >>> m NormalizedMatcher('foo (1.0b1)') >>> [m.match(v) for v in v1, v2, v3, v4, v5] [False, False, True, False, False] >>> Specifying 'foo (1.0b1)' is equivalent to specifying 'foo (==1.0b1)', i.e. only the exact version is matched. You can also specify inequality constraints: >>> m = NormalizedMatcher('foo (<1.0c1)') >>> [m.match(v) for v in v1, v2, v3, v4, v5] [False, True, True, False, False] >>> and multiple constraints: >>> m = NormalizedMatcher('foo (>= 1.0b1, <1.0.post1)') >>> [m.match(v) for v in v1, v2, v3, v4, v5] [True, False, True, True, False] >>> You can do exactly the same thing as above with setuptools/ distribute version numbering (use LegacyVersion and LegacyMatcher) or with semantic versioning (use SemanticVersion and SemanticMatcher). 
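The ordering shown above (1.0a1 < 1.0b1 < 1.0c1 < 1.0 < 1.0.post1) can be reproduced with a toy sort key – a drastic simplification of PEP 440, which NormalizedVersion implements in full (epochs, dev releases, local versions and so on).

```python
# Toy PEP 440-style sort key: pre-release phases rank below the final
# release, and .post releases rank above it.
import re

def toy_key(v):
    m = re.match(r'(\d+)\.(\d+)(?:([abc])(\d+))?(?:\.post(\d+))?$', v)
    major, minor, phase, pre, post = m.groups()
    phase_rank = {'a': 0, 'b': 1, 'c': 2, None: 3}[phase]
    return (int(major), int(minor), phase_rank, int(pre or 0), int(post or 0))

versions = ['1.0', '1.0a1', '1.0b1', '1.0c1', '1.0.post1']
print(sorted(versions, key=toy_key))
# ['1.0a1', '1.0b1', '1.0c1', '1.0', '1.0.post1']
```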
However, you can’t mix and match versions of different types: >>> from distlib.version import SemanticVersion, LegacyVersion >>> nv = NormalizedVersion('1.0.0') >>> lv = LegacyVersion('1.0.0') >>> sv = SemanticVersion('1.0.0') >>> lv == sv Traceback (most recent call last): File "<stdin>", line 1, in <module> File "distlib/version.py", line 61, in __eq__ self._check_compatible(other) File "distlib/version.py", line 58, in _check_compatible raise TypeError('cannot compare %r and %r' % (self, other)) TypeError: cannot compare LegacyVersion('1.0.0') and SemanticVersion('1.0.0') (continues on next page) Distlib Documentation, Release 0.3.6 (continued from previous page) >>> nv == sv Traceback (most recent call last): File "<stdin>", line 1, in <module> File "distlib/version.py", line 61, in __eq__ self._check_compatible(other) File "distlib/version.py", line 58, in _check_compatible raise TypeError('cannot compare %r and %r' % (self, other)) TypeError: cannot compare NormalizedVersion('1.0.0') and SemanticVersion('1.0.0') >>> 2.3.9 Using the wheel API You can use the distlib.wheel package to build and install from files in the Wheel format, defined in PEP 427. 2.3.9.1 Building wheels Building wheels is straightforward: from distlib.wheel import Wheel wheel = Wheel() # Set the distribution's identity wheel.name = 'name_of_distribution' wheel.version = '0.1' # Indicate where the files to go in the wheel are to be found paths = { 'prefix': '/path/to/installation/prefix', 'purelib': '/path/to/purelib', # only one of purelib 'platlib': '/path/to/platlib', # or platlib should be set 'scripts': '/path/to/scripts', 'headers': '/path/to/headers', 'data': '/path/to/data', } wheel.dirname = '/where/you/want/the/wheel/to/go' # Now build wheel.build(paths) If the 'data', 'headers' and 'scripts' keys are absent, or point to paths which don’t exist, nothing will be added to the wheel for these categories. 
The 'prefix' key and one of 'purelib' or 'platlib' must be provided, and the paths referenced should exist. 2.3.9.2 Customising tags during build By default, the build() method will use default tags depending on whether or not the build is a pure-Python build: • For a pure-Python build, the pyver will be set to pyXY where XY is the version of the building Python. The abi tag will be none and the arch tag will be any. • For a build which is not pure-Python (i.e. contains C code), the pyver will be set to e.g. cpXY, and the abi and arch tags will be set according to the building Python. Distlib Documentation, Release 0.3.6 If you want to override these default tags, you can pass a tags parameter to the build() method which has the tags you want to declare. For example, for a pure build where we know that the code in the wheel will be compatible with the major version of the building Python: from wheel import PYVER tags = { 'pyver': [PYVER[:-1], PYVER], } wheel.build(paths, tags) This would set the pyver tags to be pyX.pyXY where X and Y relate to the building Python. You can similarly pass values using the abi and arch keys in the tags dictionary. 2.3.9.3 Specifying a wheel’s version You can also specify a particular “Wheel-Version” to be written to the wheel metadata of a wheel you’re building. Simply pass a (major, minor) tuple in the wheel_version keyword argument to build(). If not specified, the most recent version supported is written. 2.3.9.4 Installing from wheels Installing from wheels is similarly straightforward. You just need to indicate where you want the files in the wheel to be installed: from distlib.wheel import Wheel from distlib.scripts import ScriptMaker wheel = Wheel('/path/to/my_dist-0.1-py32-none-any.whl') # Indicate where the files in the wheel are to be installed to. # All the keys should point to writable paths. 
paths = {
    'prefix': '/path/to/installation/prefix',
    'purelib': '/path/to/purelib',
    'platlib': '/path/to/platlib',
    'scripts': '/path/to/scripts',
    'headers': '/path/to/headers',
    'data': '/path/to/data',
}

maker = ScriptMaker(None, None)

# You can specify a custom executable in script shebang lines, whether
# or not to install native executable launchers, whether to do a dry run
# etc. by setting attributes on the maker, either when creating it or
# subsequently.

# Now install. The method accepts optional keyword arguments:
#
# - A ``warner`` argument which, if specified, should be a callable that
#   will be called with (software_wheel_version, file_wheel_version) if
#   they differ. They will both be in the form (major_ver, minor_ver).
#
# - A ``lib_only`` argument which indicates that only the library portion
#   of the wheel should be installed - no scripts, header files or
#   non-package data.

wheel.install(paths, maker)

Only one of the purelib or platlib paths will actually be written to (assuming that they are different, which isn’t often the case). Which one it is depends on whether the wheel metadata declares that the wheel contains pure Python code.

2.3.9.5 Verifying wheels

You can verify that a wheel’s contents match the declared contents in the wheel’s RECORD entry, by calling the verify() method. This will raise a DistlibException if a size or digest mismatch is found.

2.3.9.6 Modifying wheels

Note: In an ideal world one would not need to modify wheels, but in the short term there might be a need to do so (for example, to add dependency information which is missing). If you are working with wheels on your own projects, you shouldn’t use the method described here, as you will have full control of the wheels you build yourself.
However, if you are working with third party wheels which you don’t build yourself but you need to modify in some way, then the approach described below might be useful. You can update existing wheels with distlib by calling the update() method of a wheel. This is called as follows: modified = wheel.update(modifier, dest_dir, **kwargs) where the modifier is a callable which you specify, and kwargs are options you want to pass to it (currently, the update() method passes kwargs unchanged to the modifier). The dest_dir argument indicates where you want any new wheel to be written - it is optional and if not specified, the existing wheel will be overwritten. The update() method extracts the entire contents of the wheel to a temporary location, and then calls modifier as follows: modified = modifier(path_map, **kwargs) where path_map is a dictionary mapping archive paths to the location of the corresponding extracted archive entry, and kwargs is whatever was passed to the update method. If the modifier returns True, a new wheel is built from the (possibly updated) contents of path_map and its path name. The passed path_map will contain all of the wheel’s entries other than the RECORD entry (which will be recreated if a new wheel is built). For example, if you wanted to add numpy as a dependency in a scipy wheel, you might do something like this: def add_numpy_dependency(path_map, **kwargs): mdpath = path_map['scipy-0.11.dist-info/pydist.json'] md = Metadata(path=mdpath) md.add_requirements(['numpy']) md.write(path=mdpath) return True wheel = Wheel('scipy-0.11-py27-abi3-linux_x86_64.whl') wheel.update(add_numpy_dependency) Distlib Documentation, Release 0.3.6 In the above example, the modifier doesn’t actually use kwargs, but you could pass useful information which can be used to control the modifier’s operation. 
For example, you might make the function work with other distributions than scipy, or other versions of scipy:

def add_numpy_dependency(path_map, **kwargs):
    name = kwargs.get('name', 'scipy')
    version = kwargs.get('version', '0.11')
    key = '%s-%s.dist-info/pydist.json' % (name, version)
    mdpath = path_map[key]
    md = Metadata(path=mdpath)
    md.add_requirements(['numpy'])
    md.write(path=mdpath)
    return True

2.3.9.7 Mounting wheels

One of Python’s perhaps under-used features is zipimport, which gives the ability to import Python source from .zip files. Since wheels are .zip files, they can sometimes be used to provide functionality without needing to be installed. Whereas .zip files contain no convention for indicating compatibility with a particular Python, wheels do contain this compatibility information. Thus, it is possible to check if a wheel can be directly imported from, and the wheel support in distlib allows you to take advantage of this using the mount() and unmount() methods.

When you mount a wheel, its absolute path name is added to sys.path, allowing the Python code in it to be imported. (A DistlibException is raised if the wheel isn’t compatible with the Python which calls the mount() method.) The mount() method takes an optional keyword parameter append which defaults to False, meaning that a mounted wheel’s pathname is added to the beginning of sys.path. If you pass True, the pathname is appended to sys.path.

The mount() method goes further than just enabling Python imports – any C extensions in the wheel are also made available for import. For this to be possible, the wheel has to be built with additional metadata about extensions – a JSON file called EXTENSIONS which serialises an extension mapping dictionary. This maps extension module names to the names in the wheel of the shared libraries which implement those modules. Running unmount() on the wheel removes its absolute pathname from sys.path and makes its C extensions, if any, also unavailable for import.
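The effect of the append parameter on sys.path ordering can be sketched with a plain list standing in for sys.path. This is illustrative only – mount() also performs the compatibility check and extension registration described above.

```python
# Where a mounted wheel's path lands, depending on the append flag.
def mount_path(path_list, wheel_path, append=False):
    if append:
        path_list.append(wheel_path)       # lowest import priority
    else:
        path_list.insert(0, wheel_path)    # default: highest import priority
    return path_list

print(mount_path(['a', 'b'], 'w.whl'))               # ['w.whl', 'a', 'b']
print(mount_path(['a', 'b'], 'w.whl', append=True))  # ['a', 'b', 'w.whl']
```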
Note: The C extension mounting functionality may not work in all cases, though it should work in a useful subset of cases. Use with care. Note that extension information is currently only available in wheels built using distil – for wheels built using e.g. pip, this note will not apply, because C extensions will never be available for import. • There might be subtle differences in binary compatibility between the extension and the running Python, because the compatibility tag framework currently does not capture all the relevant ABI information. This is a situation which can be expected to improve over time. • If the extension uses custom dynamically linked libraries which are bundled with the extension, it may not be found by the dynamic loading machinery, for reasons that are platform-dependent. In such cases, you should have a good understanding of how dynamic loading works on your platforms, before taking advantage of this feature. 2.3.9.8 Using vanilla pip to build wheels for existing distributions on PyPI Although work is afoot to add wheel support to pip, you don’t need this to build wheels for existing PyPI distributions if you use distlib. The following script shows how you can use an unpatched, vanilla pip to build wheels: Distlib Documentation, Release 0.3.6 #!/usr/bin/env python # -*- coding: utf-8 -*- # # Copyright (C) 2013 <NAME>. 
License: MIT # import logging import optparse # for 2.6 import os import re import shutil import subprocess import sys import tempfile logger = logging.getLogger('wheeler') from distlib.compat import configparser, filter from distlib.database import DistributionPath, Distribution, make_graph from distlib.locators import (JSONLocator, SimpleScrapingLocator, AggregatingLocator, DependencyFinder) from distlib.manifest import Manifest from distlib.metadata import Metadata from distlib.util import parse_requirement, get_package_data from distlib.wheel import Wheel EGG_INFO_RE = re.compile(r'(-py\d\.\d)?\.egg-info', re.I) INSTALLED_DISTS = DistributionPath(include_egg=True) def get_requirements(data): lines = [] for line in data.splitlines(): line = line.strip() if not line or line[0] == '#': continue lines.append(line) reqts = [] extras = {} result = {'install': reqts, 'extras': extras} for line in lines: if line[0] != '[': reqts.append(line) else: i = line.find(']', 1) if i < 0: raise ValueError('unrecognised line: %r' % line) extra = line[1:i] extras[extra] = reqts = [] return result def convert_egg_info(libdir, prefix, options): files = os.listdir(libdir) ei = list(filter(lambda d: d.endswith('.egg-info'), files))[0] olddn = os.path.join(libdir, ei) (continues on next page) Distlib Documentation, Release 0.3.6 (continued from previous page) di = EGG_INFO_RE.sub('.dist-info', ei) newdn = os.path.join(libdir, di) os.rename(olddn, newdn) if options.compatible: renames = {} else: renames = { 'entry_points.txt': 'EXPORTS', } excludes = set([ 'SOURCES.txt', # of no interest in/post WHEEL 'installed-files.txt', # replaced by RECORD, so not needed 'requires.txt', # added to METADATA, so not needed 'PKG-INFO', # replaced by METADATA 'not-zip-safe', # not applicable ]) files = os.listdir(newdn) metadata = mdname = reqts = None for oldfn in files: pn = os.path.join(newdn, oldfn) if oldfn in renames: os.rename(pn, os.path.join(newdn, renames[oldfn])) else: if oldfn == 
'requires.txt': with open(pn, 'r') as f: reqts = get_requirements(f.read()) elif oldfn == 'PKG-INFO': metadata = Metadata(path=pn) pd = get_package_data(metadata.name, metadata.version) metadata = Metadata(mapping=pd['index-metadata']) mdname = os.path.join(newdn, 'pydist.json') if oldfn in excludes or not options.compatible: os.remove(pn) if metadata: # Use Metadata 1.2 or later metadata.provides += ['%s (%s)' % (metadata.name, metadata.version)] # Update if not set up by get_package_data if reqts and not metadata.run_requires: metadata.dependencies = reqts metadata.write(path=mdname) manifest = Manifest(os.path.dirname(libdir)) manifest.findall() paths = manifest.allfiles dp = DistributionPath([libdir]) dist = next(dp.get_distributions()) dist.write_installed_files(paths, prefix) def install_dist(distname, workdir, options): pfx = '--install-option=' purelib = pfx + '--install-purelib=%s/purelib' % workdir platlib = pfx + '--install-platlib=%s/platlib' % workdir headers = pfx + '--install-headers=%s/headers' % workdir scripts = pfx + '--install-scripts=%s/scripts' % workdir data = pfx + '--install-data=%s/data' % workdir # Use the pip adjacent to sys.executable, if any (for virtualenvs) (continues on next page) Distlib Documentation, Release 0.3.6 (continued from previous page) d = os.path.dirname(sys.executable) files = filter(lambda o: o in ('pip', 'pip.exe'), os.listdir(d)) if not files: prog = 'pip' else: prog = os.path.join(d, next(files)) cmd = [prog, 'install', '--no-deps', '--quiet', '--index-url', 'http://pypi.org/simple/', '--timeout', '3', '--default-timeout', '3', purelib, platlib, headers, scripts, data, distname] result = { 'scripts': os.path.join(workdir, 'scripts'), 'headers': os.path.join(workdir, 'headers'), 'data': os.path.join(workdir, 'data'), } print('Pipping %s ...' 
% distname) p = subprocess.Popen(cmd, shell=False, stdout=sys.stdout, stderr=subprocess.STDOUT) stdout, _ = p.communicate() if p.returncode: raise ValueError('pip failed to install %s:\n%s' % (distname, stdout)) for dn in ('purelib', 'platlib'): libdir = os.path.join(workdir, dn) if os.path.isdir(libdir): result[dn] = libdir break convert_egg_info(libdir, workdir, options) dp = DistributionPath([libdir]) dist = next(dp.get_distributions()) md = dist.metadata result['name'] = md.name result['version'] = md.version return result def build_wheel(distname, options): result = None r = parse_requirement(distname) if not r: print('Invalid requirement: %r' % distname) else: dist = INSTALLED_DISTS.get_distribution(r.name) if dist: print('Can\'t build a wheel from already-installed ' 'distribution %s' % dist.name_and_version) else: workdir = tempfile.mkdtemp() # where the Wheel input files will live try: paths = install_dist(distname, workdir, options) paths['prefix'] = workdir wheel = Wheel() wheel.name = paths.pop('name') wheel.version = paths.pop('version') wheel.dirname = options.destdir wheel.build(paths) result = wheel (continues on next page) Distlib Documentation, Release 0.3.6 (continued from previous page) finally: shutil.rmtree(workdir) return result def main(args=None): parser = optparse.OptionParser(usage='%prog [options] requirement [requirement ... ˓→]') parser.add_option('-d', '--dest', dest='destdir', metavar='DESTDIR', default=os.getcwd(), help='Where you want the wheels ' 'to be put.') parser.add_option('-n', '--no-deps', dest='deps', default=True, action='store_false', help='Don\'t build dependent wheels.') options, args = parser.parse_args(args) options.compatible = True # may add flag to turn off later if not args: parser.print_usage() else: # Check if pip is available; no point in continuing, otherwise try: with open(os.devnull, 'w') as f: p = subprocess.call(['pip', '--version'], stdout=f, stderr=subprocess. 
˓→STDOUT) except Exception: p = 1 if p: print('pip appears not to be available. Wheeler needs pip to ' 'build wheels.') return 1 if options.deps: # collect all the requirements, including dependencies u = 'http://pypi.org/simple/' locator = AggregatingLocator(JSONLocator(), SimpleScrapingLocator(u, timeout=3.0), scheme='legacy') finder = DependencyFinder(locator) wanted = set() for arg in args: r = parse_requirement(arg) if not r.constraints: dname = r.name else: dname = '%s (%s)' % (r.name, ', '.join(r.constraints)) print('Finding the dependencies of %s ...' % arg) dists, problems = finder.find(dname) if problems: print('There were some problems resolving dependencies ' 'for %r.' % arg) for _, info in problems: print(' Unsatisfied requirement %r' % info) wanted |= dists want_ordered = True # set to False to skip ordering if not want_ordered: wanted = list(wanted) else: graph = make_graph(wanted, scheme=locator.scheme) (continues on next page) Distlib Documentation, Release 0.3.6 (continued from previous page) slist, cycle = graph.topological_sort() if cycle: # Now sort the remainder on dependency count. 
cycle = sorted(cycle, reverse=True, key=lambda d: len(graph.reverse_list[d])) wanted = slist + cycle # get rid of any installed distributions from the list for w in list(wanted): dist = INSTALLED_DISTS.get_distribution(w.name) if dist or w.name in ('setuptools', 'distribute'): wanted.remove(w) s = w.name_and_version print('Skipped already-installed distribution %s' % s) # converted wanted list to pip-style requirements args = ['%s==%s' % (dist.name, dist.version) for dist in wanted] # Now go build built = [] for arg in args: wheel = build_wheel(arg, options) if wheel: built.append(wheel) if built: if options.destdir == os.getcwd(): dest = '' else: dest = ' in %s' % options.destdir print('The following wheels were built%s:' % dest) for wheel in built: print(' %s' % wheel.filename) if __name__ == '__main__': logging.basicConfig(format='%(levelname)-8s %(name)s %(message)s', filename='wheeler.log', filemode='w') try: rc = main() except Exception as e: print('Failed - sorry! Reason: %s\nPlease check the log.' % e) logger.exception('Failed.') rc = 1 sys.exit(rc) This script, wheeler.py, is also available here. Note that by default, it downloads dependencies of any distribution you specify and builds separate wheels for each distribution. It’s smart about not repeating work if dependencies are common across multiple distributions you specify: $ python wheeler.py sphinx flask Finding the dependencies of sphinx ... Finding the dependencies of flask ... Pipping Jinja2==2.6 ... Pipping docutils==0.10 ... Pipping Pygments==1.6 ... Pipping Werkzeug==0.8.3 ... Pipping Sphinx==1.1.3 ... (continues on next page) Distlib Documentation, Release 0.3.6 (continued from previous page) Pipping Flask==0.9 ... The following wheels were built: Jinja2-2.6-py27-none-any.whl docutils-0.10-py27-none-any.whl Pygments-1.6-py27-none-any.whl Werkzeug-0.8.3-py27-none-any.whl Sphinx-1.1.3-py27-none-any.whl Flask-0.9-py27-none-any.whl Note that the common dependency – Jinja2 – was only built once. 
You can opt to not build dependent wheels by specifying --no-deps on the command line. Note that the script also currently uses an http: URL for PyPI – this may need to change to an https: URL in the future.

Note: The script can’t be used to build wheels from distributions which are already installed in the environment it runs in, as pip will either refuse to install to custom locations (because it views a distribution as already installed), or will try to upgrade and thus uninstall the existing distribution, even though installation is requested to a custom location (and uninstallation is not desirable). For best results, run it in a fresh venv:

$ my_env/bin/python wheeler.py some_dist

It should use the venv’s pip, if one is found.

2.3.10 Using the manifest API

You can use the distlib.manifest API to construct lists of files when creating distributions. This functionality is an improved version of the equivalent functionality in distutils, where it was not a public API.

You can create instances of the Manifest class to work with a set of files rooted in a particular directory:

>>> from distlib.manifest import Manifest
>>> manifest = Manifest('/path/to/my/sources')

This sets the base attribute to the passed-in root directory. You can add one or multiple files using names relative to the base directory:

>>> manifest.add('abc')
>>> manifest.add_many(['def', 'ghi'])

As a result of the above two statements, the manifest will consist of '/path/to/my/sources/abc', '/path/to/my/sources/def' and '/path/to/my/sources/ghi'. No check is made regarding the existence of these files.

You can get all the files below the base directory of the manifest:

>>> manifest.findall()

This will populate the allfiles attribute of manifest with a list of all files in the directory tree rooted at the base. However, the manifest is still empty:

>>> manifest.files
set()

You can populate the manifest – the files attribute – by running a number of directives, using the process_directive() method.
Each directive will either add files from allfiles to files, or remove files from files if they were added by a previous directive. A directive is a string which must have a specific syntax: malformed lines will result in a DistlibException being raised.

The following directives are available: they are compatible with the syntax of MANIFEST.in files processed by distutils.

Consider the following directory tree:

testsrc/
    keep/
        keep.txt
    LICENSE
    README.txt
    subdir/
        lose/
            lose.txt
        somedata.txt
        subsubdir/
            somedata.bin

This will be used to illustrate how the directives work, in the following sections.

2.3.10.1 The include directive

This takes the form of the word include (case-sensitive) followed by a number of file-name patterns (as used in MANIFEST.in in distutils). All files in allfiles matching the patterns (considered relative to the base directory) are added to files. For example:

>>> manifest.process_directive('include R*.txt LIC* keep/*.txt')

This will add README.txt, LICENSE and keep/keep.txt to the manifest.

2.3.10.2 The exclude directive

This takes the form of the word exclude (case-sensitive) followed by a number of file-name patterns (as used in MANIFEST.in in distutils). All files in files matching the patterns (considered relative to the base directory) are removed from files. For example:

>>> manifest.process_directive('exclude LIC*')

This will remove LICENSE from the manifest, as it was added in the section above.

2.3.10.3 The global-include directive

This works just like include, but will add matching files at all levels of the directory tree:

>>> manifest.process_directive('global-include *.txt')

This will add subdir/somedata.txt and subdir/lose/lose.txt to the manifest.
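The behaviour of these pattern-based directives can be illustrated with a small self-contained sketch. This is not distlib’s implementation: it simply mimics the add/remove semantics over a set of relative paths for the testsrc tree, matching patterns with fnmatch (a simplification – distutils translates patterns slightly differently, but the results agree for the examples in this section):

```python
from fnmatch import fnmatch

# Files below the manifest base, as findall() would discover them
# (relative paths for the testsrc tree shown above).
ALLFILES = [
    'keep/keep.txt', 'LICENSE', 'README.txt',
    'subdir/lose/lose.txt', 'subdir/somedata.txt',
    'subdir/subsubdir/somedata.bin',
]

def basename(path):
    return path.rsplit('/', 1)[-1]

def process(files, directive):
    """Apply one MANIFEST.in-style directive to the current set of files."""
    action, *args = directive.split()
    if action == 'include':            # match patterns against whole paths
        files |= {f for f in ALLFILES if any(fnmatch(f, p) for p in args)}
    elif action == 'exclude':
        files -= {f for f in files if any(fnmatch(f, p) for p in args)}
    elif action == 'global-include':   # match basenames at any level
        files |= {f for f in ALLFILES
                  if any(fnmatch(basename(f), p) for p in args)}
    elif action == 'global-exclude':
        files -= {f for f in files
                  if any(fnmatch(basename(f), p) for p in args)}
    elif action == 'recursive-include':  # like global-include, under a dir
        d, *pats = args
        files |= {f for f in ALLFILES if f.startswith(d + '/')
                  and any(fnmatch(basename(f), p) for p in pats)}
    elif action == 'recursive-exclude':
        d, *pats = args
        files -= {f for f in files if f.startswith(d + '/')
                  and any(fnmatch(basename(f), p) for p in pats)}
    elif action == 'graft':            # everything under a directory
        files |= {f for f in ALLFILES if f.startswith(args[0] + '/')}
    elif action == 'prune':
        files -= {f for f in files if f.startswith(args[0] + '/')}
    else:
        raise ValueError('unknown directive: %r' % directive)
    return files

files = process(set(), 'include R*.txt LIC* keep/*.txt')
# files is now {'README.txt', 'LICENSE', 'keep/keep.txt'}
```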
2.3.10.4 The global-exclude directive

This works just like exclude, but will remove matching files at all levels of the directory tree:

>>> manifest.process_directive('global-exclude l*.txt')

This will remove subdir/lose/lose.txt from the manifest.

2.3.10.5 The recursive-include directive

This directive takes a directory name (relative to the base) and a set of patterns. The patterns are used as in global-include, but only for files under the specified directory:

>>> manifest.process_directive('recursive-include subdir l*.txt')

This will add subdir/lose/lose.txt back to the manifest.

2.3.10.6 The recursive-exclude directive

This works like recursive-include, but excludes matching files under the specified directory if they were already added by a previous directive:

>>> manifest.process_directive('recursive-exclude subdir lose*')

This will remove subdir/lose/lose.txt from the manifest again.

2.3.10.7 The graft directive

This directive takes the name of a directory (relative to the base) and copies all the names under it from allfiles to files.

2.3.10.8 The prune directive

This directive takes the name of a directory (relative to the base) and removes all the names under it from files.

2.4 Next steps

You might find it helpful to look at information about Distlib’s design – or peruse the API Reference.

CHAPTER 3 Distlib’s design

This is the section containing some discussion of how distlib’s design was arrived at, as and when time permits.

3.1 The locators API

This section describes the design of the distlib API relating to accessing distribution metadata, whether stored locally or in indexes like PyPI.

3.1.1 The problem we’re trying to solve

People who use distributions need to locate, download and install them. Distributions can be found in a number of places, such as:

• An Internet index such as The Python Packages Index (PyPI), or a mirror thereof.
• Other Internet resources, such as the developer’s website, or a source code repository such as GitHub, BitBucket, Google Code or similar.
• File systems, whether local to one computer or shared between several.
• Distributions which have already been installed, and are available in the sys.path of a running Python interpreter.

When we’re looking for distributions, we don’t always know exactly what we want: often, we just want the latest version, but it’s not uncommon to want a specific older version, or perhaps the most recent version that meets some constraints on the version. Since we need to be concerned with matching versions, we need to consider the version schemes in use (see The version API).

It’s useful to separate the notion of a project from a distribution: the project is the version-independent part of the distribution, i.e. it’s described by the name of the distribution and encompasses all released distributions which use that name.

We often don’t just want a single distribution, either: a common requirement, when installing a distribution, is to locate all distributions that it relies on, which aren’t already installed. So we need a dependency finder, which itself needs to locate depended-upon distributions, and recursively search for dependencies until all that are available have been found. We may need to distinguish between different types of dependencies:

• Post-installation dependencies. These are needed by the distribution after it has been installed, and is in use.
• Build dependencies. These are needed for building and/or installing the distribution, but are not needed by the distribution itself after installation.
• Test dependencies. These are only needed for testing the distribution, but are not needed by the distribution itself after installation.

When testing a distribution, we need all three types of dependencies. When installing a distribution, we need the first two, but not the third.
3.1.2 A minimal solution 3.1.2.1 Locating distributions It seems that the simplest API to locate a distribution would look like locate(requirement), where requirement is a string giving the distribution name and optional version constraints. Given that we know that dis- tributions can be found in different places, it’s best to consider a Locator class which has a locate() method with a corresponding signature, with subclasses for each of the different types of location that distributions inhabit. It’s also reasonable to provide a default locator in a module attribute default_locator, and a module-level locate() function which calls the locate() method on the default locator. Since we’ll often need to locate all the versions of a project before picking one, we can imagine that a locator would need a get_project() method for fetching all versions of a project; and since we will be likely to want to use caching, we can assume there will be a _get_project() method to do the actual work of fetching the version data, which the higher-level get_project() will call (and probably cache). So our locator base class will look something like this: class Locator(object): """ Locate distributions. """ def __init__(self, scheme='default'): """ Initialise a locator with the specified version scheme. """ def locate(self, requirement): """ Locate the highest-version distribution which satisfies the constraints in ``requirement``, and return a ``Distribution`` instance if found, or else ``None``. """ def get_project(self, name): """ Return all known distributions for a project named ``name``, returning a dictionary mapping version to ``Distribution`` instance, or an empty dictionary if nothing was found. Use _get_project to do the actual work, and cache the results for future use. 
(continues on next page) Distlib Documentation, Release 0.3.6 (continued from previous page) """ def _get_project(self, name): """ Return all known distributions for a project named ``name``, returning a dictionary mapping version to ``Distribution`` instance, or an empty dictionary if nothing was found. """ When attempting to locate(), it would be useful to pass requirement information to get_project() / _get_project(). This can be done in a matcher attribute which is normally None but set to a distlib. version.Matcher instance when a locate() call is in progress. Note that in order to work with legacy version numbers (those not complying with PEP 440), you need to pass scheme='legacy' to the initializer for a locator. 3.1.2.2 Finding dependencies A dependency finder will depend on a locator to locate dependencies. A simple approach will be to consider a DependencyFinder class which takes a locator as a constructor argument. It might look something like this: class DependencyFinder(object): """ Locate dependencies for distributions. """ def __init__(self, locator): """ Initialise an instance, using the specified locator to locate distributions. """ def find(self, requirement, meta_extras=None, prereleases=False): """ Find a distribution matching requirement and all distributions it depends on. Use the ``meta_extras`` argument to determine whether distributions used only for build, test etc. should be included in the results. Allow ``requirement`` to be either a :class:`Distribution` instance or a string expressing a requirement. If ``prereleases`` is True, treat pre-releases as normal releases; otherwise only return pre-releases if they're all that's available. Return a set of :class:`Distribution` instances and a set of problems. 
The distributions returned should be such that they have the :attr:`required` attribute set to ``True`` if they were from the ``requirement`` passed to ``find()``, and they have the :attr:`build_time_dependency` attribute set to ``True`` unless they are post-installation dependencies of the ``requirement``. The problems should be a tuple consisting of the string ``'unsatisfied'`` and the requirement which couldn't be satisfied by any distribution known to the locator. """ Distlib Documentation, Release 0.3.6 3.2 The index API This section describes the design of the distlib API relating to performing certain operations on Python package indexes like PyPI. Note that this API does not support finding distributions - the locators API is used for that. 3.2.1 The problem we’re trying to solve Operations on a package index that are commonly performed by distribution developers are: • Register projects on the index. • Upload distributions relating to projects on the index, with support for signed distributions. • Upload documentation relating to projects. Less common operations are: • Find a list of hosts which mirror the index. • Save a default .pypirc file with default username and password to use. 3.2.2 A minimal solution The distutils approach was to have several separate command classes called register, upload and upload_doc, where really all that was needed was some methods. That’s the approach distlib takes, by implementing a PackageIndex class with register(), upload_file() and upload_documentation() methods. The PackageIndex class contains no user interface code whatsoever: that’s assumed to be the domain of the packaging tool. The packaging tool is expected to get the required information from a user using whatever means the developers of that tool deem to be the most appropriate; the required attributes are then set on the PackageIndex instance. 
(Examples of this kind of information: user name, password, whether the user wants to save a default configuration, where the signing program and its keys live.) The minimal interface to provide the required functionality thus looks like this: class PackageIndex(object): def __init__(self, url=None, mirror_host=None): """ Initialise an instance using a specific index URL, and a DNS name for a mirror host which can be used to determine available mirror hosts for the index. """ def save_configuration(self): """ Save the username and password attributes of this instance in a default .pypirc file. """ def register(self, metadata): """ Register a project on the index, using the specified metadata. """ def upload_file(self, metadata, filename, signer=None, sign_password=None, filetype='sdist', pyversion='source'): """ (continues on next page) Distlib Documentation, Release 0.3.6 (continued from previous page) Upload a distribution file to the index using the specified metadata to identify it, with options for signing and for binary distributions which are specific to Python versions. """ def upload_documentation(self, metadata, doc_dir): """ Upload documentation files in a specified directory using the specified metadata to identify it, after archiving the directory contents into a .zip file. """ The following additional attributes can be identified on PackageIndex instances: • username - the username to use for authentication. • password - the password to use for authentication. • mirrors (read-only) - a list of hostnames of known mirrors. 3.3 The resources API This section describes the design of the distlib API relating to accessing ‘resources’, which is a convenient label for data files associated with Python packages. 3.3.1 The problem we’re trying to solve Developers often have a need to co-locate data files with their Python packages. 
Examples of these might be:

• Templates, commonly used in web applications
• Translated messages used in internationalisation/localisation

The stdlib does not provide a uniform API to access these resources. A common approach is to use __file__ like this:

base = os.path.dirname(__file__)
data_filename = os.path.join(base, 'data.bin')
with open(data_filename, 'rb') as f:
    # read the data from f

However, this approach fails if the package is deployed in a .zip file. To consider how to provide a minimal uniform API to access resources in Python packages, we’ll assume that the requirements are as follows:

• All resources are regarded as binary. The consuming application is expected to know how to convert resources to text, where appropriate.
• All resources are read-only.
• It should be possible to access resources either as streams, or as their entire data as a byte-string.
• Resources will have a unique, identifying name which is text. Resources will be hierarchical and named using filesystem-like paths using ‘/’ as a separator. The library will be responsible for converting resource names to the names of the underlying representations (e.g. encoding of file names corresponding to resource names).
• Some resources are containers of other resources, some are not. For example, a resource nested/nested_resource.bin in a package would not contain other resources, but implies the existence of a resource nested, which contains nested_resource.bin.
• Resources can only be associated with packages, not with modules. That’s because with peer modules a.py and b.py, there’s no obvious location for data associated only with a: both a and b are in the same directory. With a package, there’s no ambiguity, as a package is associated with a specific directory, and no other package can be associated with that directory.
• Support should be provided for access to data deployed in the file system or in packages contained in .zip files, and third parties should be able to extend the facilities to work with other storage formats which support import of Python packages.
• It should be possible to access the contents of any resource through a file on the file system. This is to cater for any external APIs which need to access the resource data as files (examples would be a shared library for linking using dlopen() on POSIX, or any APIs which need access to resource data via OS-level file handles rather than Python streams).

3.3.2 A minimal solution

We know that we will have to deal with resources, so it seems natural that there would be a Resource class in the solution. From the requirements, we can see that a Resource would have the following:

• A name property identifying the resource.
• An as_stream method allowing access to the resource data as a binary stream. This is not a property, because a new stream is returned each time this method is called. The returned stream should be closed by the caller.
• A bytes property returning the entire contents of the resource as a byte string.
• A size property indicating the size of the resource (in bytes).
• An is_container property indicating whether the resource is a container of other resources.
• A resources property returning the names of resources contained within the resource.

The Resource class would be the logical place to perform sanity checks which relate to all resources. For example:

• It doesn’t make sense to ask for the bytes or size properties or call the as_stream method of a container resource.
• It doesn’t make sense to ask for the resources property of a resource which is not a container.

It seems reasonable to raise exceptions for incorrect property or method accesses.
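The properties and sanity checks just described can be sketched in a few lines. This is an illustrative stand-in, not distlib’s Resource: the DictFinder class is a hypothetical in-memory finder invented here so the sketch is self-contained.

```python
import io

class DictFinder:
    """Toy finder backed by a dict: name -> bytes (data) or set (container)."""
    def __init__(self, data):
        self.data = data

    def is_container(self, resource):
        return isinstance(self.data[resource.name], set)

    def get_bytes(self, resource):
        return self.data[resource.name]

    def get_size(self, resource):
        return len(self.data[resource.name])

    def get_stream(self, resource):
        return io.BytesIO(self.data[resource.name])

    def get_resources(self, resource):
        return self.data[resource.name]

class Resource:
    """Sketch of the Resource described above; delegates to its finder."""
    def __init__(self, finder, name):
        self.finder = finder
        self.name = name

    @property
    def is_container(self):
        return self.finder.is_container(self)

    def _check_leaf(self):
        # Sanity check: containers have no data of their own.
        if self.is_container:
            raise TypeError('%r is a container resource' % self.name)

    def as_stream(self):
        # A new stream per call; the caller should close it.
        self._check_leaf()
        return self.finder.get_stream(self)

    @property
    def bytes(self):
        self._check_leaf()
        return self.finder.get_bytes(self)

    @property
    def size(self):
        self._check_leaf()
        return self.finder.get_size(self)

    @property
    def resources(self):
        # Sanity check: only containers contain other resources.
        if not self.is_container:
            raise TypeError('%r is not a container resource' % self.name)
        return self.finder.get_resources(self)
```

With a finder in hand, `Resource(finder, 'data.bin').bytes` returns the data, while asking a container for its bytes raises TypeError, as argued above.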
We know that we need to support resource access in the file system as well as .zip files, and to support other sources of storage which might be used to import Python packages. Since import and loading of Python packages happens through PEP 302 importers and loaders, we can deduce that the mechanism used to find resources in a package will be closely tied to the loader for that package. We could consider an API for finding resources in a package like this:

def find_resource(pkg, resource_name):
    # return a Resource instance for the resource

and then use it like this:

r1 = find_resource(pkg, 'foo')
r2 = find_resource(pkg, 'bar')

However, we’ll often have situations where we will want to get multiple resources from a package, and in certain applications we might want to implement caching or other processing of resources before returning them. The above API doesn’t facilitate this, so let’s consider delegating the finding of resources in a package to a finder for that package. Once we get a finder, we can hang on to it and ask it to find multiple resources. Finders can be extended to provide whatever caching and preprocessing an application might need. To get a finder for a package, let’s assume there’s a finder function:

def finder(pkg):
    # return a finder for the specified package

We can use it like this:

f = finder(pkg)
r1 = f.find('foo')
r2 = f.find('bar')

The finder function knows what kind of finder to return for a particular package through the use of a registry. Given a package, finder can determine the loader for that package, and based on the type of loader, it can instantiate the right kind of finder. The registry maps loader types to callables that return finders. The callable is called with a single argument – the Python module object for the package. Given that we have finders in the design, we can identify ResourceFinder and ZipResourceFinder classes for the two import systems we’re going to support.
We’ll make ResourceFinder a concrete class rather than an interface – it’ll implement getting resources from packages stored in the file system. ZipResourceFinder will be a subclass of ResourceFinder. Since there is no loader for file system packages when the C-based import system is used, the registry will come with the following mappings:

• type(None) -> ResourceFinder
• _frozen_importlib.SourceFileLoader -> ResourceFinder
• zipimport.zipimporter -> ZipResourceFinder

Users of the API can add new or override existing mappings using the following function:

def register_finder(loader, finder_maker):
    # register ``finder_maker`` to make finders for packages with a loader
    # of the same type as ``loader``.

Typically, the finder_maker will be a class like ResourceFinder or ZipResourceFinder, but it can be any callable which takes the Python module object for a package and returns a finder.

Let’s consider in more detail what finders look like and how they interact with the Resource class. We’ll keep the Resource class minimal; API users never instantiate Resource directly, but call a finder’s find method to return a Resource instance. A finder could return an instance of a Resource subclass if really needed, though it shouldn’t be necessary in most cases. If a finder can’t find a resource, it should return None. The Resource constructor will look like this:

def __init__(self, finder, name):
    self.finder = finder
    self.name = name
    # other initialisation, not specified

and delegate as much work as possible to its finder. That way, new import loader types can be supported just by implementing a suitable XXXResourceFinder for that loader type.
What a finder needs to do can be exemplified by the following skeleton for ResourceFinder: Distlib Documentation, Release 0.3.6 class ResourceFinder(object): def __init__(self, module): "Initialise finder for the specified package" def find(self, resource_name): "Find and return a ``Resource`` instance or ``None``" def is_container(self, resource): "Return whether resource is a container" def get_bytes(self, resource): "Return the resource's data as bytes" def get_size(self, resource): "Return the size of the resource's data in bytes" def get_stream(self, resource): "Return the resource's data as a binary stream" def get_resources(self, resource): """ Return the resources contained in this resource as a set of (relative) resource names """ 3.3.3 Dealing with the requirement for access via file system files To cater for the requirement that the contents of some resources be made available via a file on the file system, we’ll assume a simple caching solution that saves any such resources to a local file system cache, and returns the filename of the resource in the cache. We need to divide the work between the finder and the cache. We’ll deliver the cache function through a Cache class, which will have the following methods: • A constructor which takes an optional base directory for the cache. If none is provided, we’ll construct a base directory of the form: <rootdir>/.distlib/resource-cache where <rootdir> is the user’s home directory. On Windows, if the environment specifies a variable named LOCALAPPDATA, its value will be used as <rootdir> – otherwise, the user’s home directory will be used. • A get() method which takes a Resource and returns a file system filename, such that the contents of that named file will be the contents of the resource. • An is_stale() method which takes a Resource and its corresponding file system filename, and returns whether the file system file is stale when compared with the resource. 
Knowing that cache invalidation is hard, the default implementation just returns True.

• A prefix_to_dir() method which converts a prefix to a directory name. We’ll assume that for the cache, a resource path can be divided into two parts: the prefix and the subpath. For resources in a .zip file, the prefix would be the pathname of the archive, while the subpath would be the path inside the archive. For a file system resource, since it is already in the file system, the prefix would be None and the subpath would be the absolute path name of the resource. The prefix_to_dir() method’s job is to convert a prefix (if not None) to a subdirectory in the cache that holds the cached files for all resources with that prefix.

We’ll delegate the determination of a resource’s prefix and subpath to its finder, using a get_cache_info() method on finders, which takes a Resource and returns a (prefix, subpath) tuple. The default implementation will use os.path.splitdrive() to see if there’s a Windows drive; if present, its ':' is converted to '---'. The rest of the prefix will be converted by replacing '/' by '--', and appending '.cache' to the result.

The cache will be activated when the file_path property of a Resource is accessed. This will be a cached property, and will call the cache’s get() method to obtain the file system path.

3.4 The scripts API

This section describes the design of the distlib API relating to installing scripts.

3.4.1 The problem we’re trying to solve

Installing scripts is slightly more involved than simply copying files from source to target, for the following reasons:

• On POSIX systems, scripts need to be made executable. To cater for scenarios where there are multiple Python versions installed on a computer, scripts need to have their shebang lines adjusted to point to the correct interpreter.
  This requirement is commonly found when virtual environments (venvs) are in use, but also in other multiple-interpreter scenarios.

• On Windows systems, which don't support shebang lines natively, some alternative means of finding the correct interpreter needs to be provided. Following the acceptance and implementation of PEP 397, a shebang-interpreting launcher will be available in Python 3.3 and later, and a standalone version of it for use with earlier Python versions is also available. However, where this can't be used, an alternative approach using executable launchers installed with the scripts may be necessary. (That is the approach taken by setuptools.) Windows also has two types of launchers - console applications and Windows applications. The appropriate launcher needs to be used for scripts.

• Some scripts are effectively just callable code in a Python package, with boilerplate for importing that code, calling it and returning its return value as the script's return code. It would be useful to have the boilerplate standardised, so that developers need only specify which callables to expose as scripts, and under what names, using e.g. a name = callable syntax. (This is the approach taken by setuptools for the popular console_scripts feature.)

3.4.2 A minimal solution

Script handling in distutils and setuptools is done in two phases: 'build' and 'install'. Whether a particular packaging tool chooses to do the 'heavy lifting' of script creation (i.e. the things referred to above, beyond simple copying) in the 'build' or 'install' phase, the job is the same. To abstract out just the functionality relating to scripts, in an extensible way, we can delegate the work to a class, unimaginatively called ScriptMaker. Given the above requirements, together with the more basic requirement of being able to do a 'dry-run' installation, we need to provide a ScriptMaker with the following items of information:

• Where source scripts are to be found.
• Where scripts are to be installed.
• Whether, on Windows, executable launchers should be added.
• Whether a dry-run mode is in effect.

These dictate the form that ScriptMaker.__init__() will take. In addition, other methods suggest themselves for ScriptMaker:

• A make() method, which takes a specification, which is either a filename or a 'wrap me a callable' indicator which looks like this:

    name = some_package.some_module:some_callable [ flag(=value) ... ]

  The name would need to be a valid filename for a script, and the some_package.some_module part would indicate the module where the callable resides. The some_callable part identifies the callable, and optionally you can have flags, which the ScriptMaker instance must know how to interpret. One flag would be 'gui', indicating that the launcher should be a Windows application rather than a console application, for GUI-based scripts which shouldn't show a console window. The above specification is used by setuptools for the 'console_scripts' feature. See Flag formats for more information about flags. It seems sensible for this method to return a list of absolute paths of files that were installed (or would have been installed, but for the dry-run mode being in effect).

• A make_multiple() method, which takes an iterable of specifications and just calls make() on each item iterated over, aggregating the results to return a list of absolute paths of all files that were installed (or would have been installed, but for the dry-run mode being in effect). One advantage of having this method is that you can override it in a subclass for post-processing, e.g. to run a tool like 2to3, or an analysis tool, over all the installed files.

• The details of the callable specification can be encapsulated in a utility function, get_exports_entry().
  This would take a specification and return None, if the specification didn't match the callable format, or an instance of ExportEntry if it did match.

In addition, the following attributes on a ScriptMaker could be further used to refine its behaviour:

• force to indicate when scripts should be copied from source to target even when timestamps show the target is up to date.
• set_mode to indicate whether, on POSIX, the execute mode bits should be set on the target script.

3.4.2.1 Flag formats

Flags, if present, are enclosed by square brackets. Each flag can have the format of just an alphanumeric string, optionally followed by an '=' and a value (with no intervening spaces). Multiple flags can be separated by ',' and whitespace. The following would be valid flag sections:

    [a,b,c]
    [a, b, c]
    [a=b, c=d, e, f=g, 9=8]

whereas the following would be invalid:

    []
    [\]
    [a,]
    [a,,b]
    [a=,b,c]

3.5 The version API

This section describes the design of the distlib API relating to versions.

3.5.1 The problem we're trying to solve

Distribution releases are named by versions, and versions have two principal uses:

• Identifying a particular release and determining whether or not it is earlier or later than some other release.

• When specifying other distributions that a distribution release depends on, specifying constraints governing the releases of those distributions that are depended upon.

In addition, qualitative information may be given by the version format about the quality of the release: e.g. alpha versions, beta versions, stable releases, hot-fixes following a stable release. The following excerpt from PEP 386 defines the requirements for versions:

• It should be possible to express more than one versioning level (usually this is expressed as major and minor revision and, sometimes, also a micro revision).
• A significant number of projects need special meaning versions for "pre-releases" (such as "alpha", "beta", "rc"), and these have widely used aliases ("a" stands for "alpha", "b" for "beta" and "c" for "rc"). These pre-release versions make it impossible to use a simple alphanumerical ordering of the version string components. (e.g. 3.1a1 < 3.1)

• Some projects also need "post-releases" of regular versions, mainly for maintenance purposes, which can't be clearly expressed otherwise.

• Development versions allow packagers of unreleased work to avoid version clashes with later stable releases.

There are a number of version schemes in use. The ones of most interest in the Python ecosystem are:

• Loose versioning in distutils. Any version number is allowed, with lexicographical ordering. No support exists for pre- and post-releases, and lexicographical ordering can be unintuitive (e.g. '1.10' < '1.2.1').

• Strict versioning in distutils, which supports slightly more structure. It allows up to three dot-separated numeric components, and supports multiple alpha and beta releases. However, there is no support for release candidates, nor for post-release versions.

• Versioning in setuptools/distribute, as described in PEP 386 – it's perhaps the most widely used Python version scheme, but since it tries to be very flexible and work with a wide range of conventions, it ends up allowing a very chaotic mess of version conventions in the Python community as a whole.

• The proposed versioning scheme described in PEP 440.

• Semantic versioning, which is rational, simple and well-regarded in the software community in general.

Although the new versioning scheme mentioned in PEP 386 was implemented in distutils2 and that code has been copied over to distlib, there are many projects on PyPI which do not conform to it, but rather to the "legacy" versioning schemes of distutils/setuptools/distribute.
These schemes are deserving of some support not because of their intrinsic qualities, but due to their ubiquity in projects registered on PyPI. Below are some results from testing actual projects on PyPI:

    Packages processed:                                24891
    Packages with no versions:                           217
    Packages with versions:                            24674
    Number of packages clean for all schemes:          19010 (77%)
    Number of packages clean for PEP 386:              21072 (85%)
    Number of packages clean for PEP 386 + suggestion: 23685 (96%)
    Number of packages clean for legacy:               24674 (100%, as you would expect)
    Number of packages clean for semantic:             19278 (78%)

where "+ suggestion" refers to using the suggested version algorithm to derive a version from a version which would otherwise be incompatible with PEP 386.

3.5.2 A minimal solution

Since distlib is a low-level library which might be used by tools which work with existing projects, the internal implementation of versions has changed slightly from distutils2 to allow better support for legacy version numbering. Since the re-implementation facilitated adding semantic version support at minimal cost, this has also been provided.

3.5.2.1 Versions

The basic scheme is as follows. The differences between versioning schemes are catered for by having a single function for each scheme which converts a string version to an appropriate tuple which acts as a key for sorting and comparison of versions. We have a base class, Version, which defines any common code. Then we can have subclasses NormalizedVersion (PEP 386), LegacyVersion (distribute/setuptools) and SemanticVersion. To compare versions, we just check type compatibility and then compare the corresponding tuples.

3.5.2.2 Matchers

Matchers take a name followed by a set of constraints in parentheses. Each constraint is an operation together with a version string, which needs to be converted to the corresponding version instance.
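The key-tuple idea described above can be sketched with a toy converter. This is purely illustrative - the name toy_key and its tiny grammar (dotted integers plus an optional a/b/c pre-release suffix) are invented here, and are far simpler than the real normalized_key for PEP 386:

```python
import re

def toy_key(s):
    """Convert a version string to a sortable key tuple (toy sketch).

    Handles only N(.N)* releases with an optional trailing pre-release
    segment like 'a1', 'b2' or 'c1' - nothing like the full PEP 386 grammar.
    """
    m = re.match(r'^(\d+(?:\.\d+)*)([abc]\d+)?$', s)
    if m is None:
        raise ValueError('cannot parse version %r' % s)
    release = tuple(int(part) for part in m.group(1).split('.'))
    pre = m.group(2)
    if pre:
        # ('a', 1) sorts before ('final', 0), so 3.1a1 < 3.1 as required
        return (release, (pre[0], int(pre[1:])))
    return (release, ('final', 0))
```

Comparing the resulting tuples, rather than the strings, gives the orderings the PEP 386 excerpt calls for: toy_key('3.1a1') < toy_key('3.1'), and toy_key('1.2.1') < toy_key('1.10'), even though the plain strings compare the other way.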
In summary, the following attributes can be identified for Version and Matcher:

Version:
• version string passed in to constructor (stripped)
• parser to convert string to tuple
• compare functions to compare with other versions of the same type

Matcher:
• version string passed in to constructor (stripped)
• name of distribution
• list of constraints
• parser to convert string to name and set of constraints, using the same function as for Version to convert the version strings in the constraints to version instances
• method to match version to constraints and return True/False

Given the above, it appears that all the functionality could be provided with a single class per versioning scheme, with the only difference between them being the function to convert from version string to tuple. Any instance would act as either a version or a predicate, would display itself differently according to which it is, and would raise exceptions if the wrong type of operation is performed on it (matching only allowed for predicate instances; <=, <, >=, > comparisons only allowed for version instances; and == and != allowed for either). However, the use of the same class to implement versions and predicates leads to ambiguity, because of the very loose project naming and versioning schemes allowed by PyPI. For example, "Hello 2.0" could be a valid project name, and "5" is a project name actually registered on PyPI. If distribution names can look like versions, it's hard to discern the developer's intent when creating an instance with the string "5". So, we make separate classes for Version and Matcher.
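The separation just described can be illustrated with a pair of toy classes. The names ToyVersion/ToyMatcher, the dotted-integer version grammar and the "name (op version, ...)" requirement syntax are simplifications made up for this sketch - they are not distlib's NormalizedVersion/NormalizedMatcher:

```python
import re

class ToyVersion(object):
    """A version: parses its string to a comparison key in the constructor."""
    def __init__(self, s):
        self._string = s.strip()
        self._key = tuple(int(p) for p in self._string.split('.'))

class ToyMatcher(object):
    """A matcher: a distribution name plus a list of (op, version) constraints."""
    _ops = {
        '>=': lambda a, b: a >= b, '<=': lambda a, b: a <= b,
        '>':  lambda a, b: a > b,  '<':  lambda a, b: a < b,
        '==': lambda a, b: a == b, '!=': lambda a, b: a != b,
    }

    def __init__(self, requirement):
        # e.g. "foo (>= 1.0, < 2.0)" -> name 'foo', two constraints
        m = re.match(r'^(\S+)\s*\(([^)]+)\)$', requirement.strip())
        if m is None:
            raise ValueError('cannot parse requirement %r' % requirement)
        self.name = m.group(1)
        self.constraints = []
        for clause in m.group(2).split(','):
            cm = re.match(r'^(>=|<=|==|!=|>|<)\s*(\S+)$', clause.strip())
            if cm is None:
                raise ValueError('bad constraint %r' % clause)
            self.constraints.append((cm.group(1), ToyVersion(cm.group(2))))

    def match(self, version):
        """Return whether version satisfies every constraint."""
        if isinstance(version, str):
            version = ToyVersion(version)
        return all(self._ops[op](version._key, other._key)
                   for op, other in self.constraints)
```

For example, ToyMatcher('foo (>= 1.0, < 2.0)') accepts '1.5' but rejects '2.0' - and because names and versions live in separate classes, a string like "5" is unambiguous: it is a version if handed to ToyVersion, and a requirement if handed to ToyMatcher.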
For ease of testing, the module will define, for each of the supported schemes, a function to do the parsing (as no information is needed other than the string), and the parse method of the class will call that function:

    def normalized_key(s):
        "parse using PEP-386 logic"

    def legacy_key(s):
        "parse using distribute/setuptools logic"

    def semantic_key(s):
        "parse using semantic versioning logic"

    class Version:
        # defines all common code

        def parse(self, s):
            raise NotImplementedError('Please implement in a subclass')

and then:

    class NormalizedVersion(Version):
        def parse(self, s):
            return normalized_key(s)

    class LegacyVersion(Version):
        def parse(self, s):
            return legacy_key(s)

    class SemanticVersion(Version):
        def parse(self, s):
            return semantic_key(s)

And a custom versioning scheme can be devised to work in the same way:

    def custom_key(s):
        """
        convert s to tuple using custom logic, raise
        UnsupportedVersionError on problems
        """

    class CustomVersion(Version):
        def parse(self, s):
            return custom_key(s)

The matcher classes are pretty minimal, too:

    class Matcher(object):
        version_class = None

        def match(self, string_or_version):
            """
            If passed a string, convert to version using version_class,
            then do matching in a way independent of version scheme in use
            """

and then:

    class NormalizedMatcher(Matcher):
        version_class = NormalizedVersion

    class LegacyMatcher(Matcher):
        version_class = LegacyVersion

    class SemanticMatcher(Matcher):
        version_class = SemanticVersion

3.5.2.3 Version schemes

Ideally one would want to work with the PEP 386 scheme, but there might be times when one needs to work with the legacy scheme (for example, when investigating dependency graphs of existing PyPI projects).
Hence, the important aspects of each scheme are bundled into a simple VersionScheme class:

    class VersionScheme(object):
        def __init__(self, key, matcher):
            self.key = key          # version string -> tuple converter
            self.matcher = matcher  # Matcher subclass for the scheme

Of course, the version class is also available through the matcher's version_class attribute. VersionScheme makes it easier to work with alternative version schemes. For example, say we decide to experiment with an "adaptive" version scheme, which is based on the PEP 386 scheme, but when handed a non-conforming version, automatically tries to convert it to a normalized version using suggest_normalized_version(). Then, code which has to deal with version schemes just has to pick the appropriate scheme by name. Creating the adaptive scheme is easy:

    def adaptive_key(s):
        try:
            result = normalized_key(s, False)
        except UnsupportedVersionError:
            s = suggest_normalized_version(s)
            if s is None:
                raise
            result = normalized_key(s, False)
        return result

    class AdaptiveVersion(NormalizedVersion):
        def parse(self, s):
            return adaptive_key(s)

    class AdaptiveMatcher(Matcher):
        version_class = AdaptiveVersion

The appropriate scheme can be fetched by using the get_scheme() function, which is defined thus:

    def get_scheme(scheme_name):
        "Get a VersionScheme for the given scheme_name."

Allowed names are 'normalized', 'legacy', 'semantic', 'adaptive' and 'default' (which points to the same scheme as 'adaptive'). If an unrecognised name is passed in, a ValueError is raised.

The reimplemented distlib.version module is shorter than the corresponding module in distutils2, but the entire test suite passes and there is support for working with three versioning schemes as opposed to just one.
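A scheme registry along the lines of get_scheme() could be sketched as follows. The _stub_key converter and the aliasing of 'adaptive' to 'normalized' are simplifications for this sketch only; the real module wires in the actual key functions and matcher classes:

```python
class VersionScheme(object):
    def __init__(self, key, matcher):
        self.key = key          # version string -> tuple converter
        self.matcher = matcher  # Matcher subclass for the scheme

def _stub_key(s):
    # Stand-in converter for the sketch; a real scheme would use
    # normalized_key, legacy_key or semantic_key here.
    return tuple(s.split('.'))

_SCHEMES = {
    'normalized': VersionScheme(_stub_key, None),
    'legacy':     VersionScheme(_stub_key, None),
    'semantic':   VersionScheme(_stub_key, None),
}
_SCHEMES['adaptive'] = _SCHEMES['normalized']  # simplification for the sketch
_SCHEMES['default'] = _SCHEMES['adaptive']

def get_scheme(scheme_name):
    "Get a VersionScheme for the given scheme_name."
    if scheme_name not in _SCHEMES:
        raise ValueError('unknown scheme name: %r' % scheme_name)
    return _SCHEMES[scheme_name]
```

Client code then just asks for a scheme by name - get_scheme('default') - and uses its key and matcher attributes, without hard-coding a particular versioning policy.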
However, the concept of "final" versions, which is not in the PEP but which was in the distutils2 implementation, has been removed, because it appears to be of little value (there's no way to determine the "final" status of versions for many of the project releases registered on PyPI).

3.6 The wheel API

This section describes the design of the wheel API, which facilitates building and installing from wheels, the binary distribution format for Python described in PEP 427.

3.6.1 The problem we're trying to solve

There are basically two operations which need to be performed on wheels:

• Building a wheel from a source distribution.
• Installing a distribution which has been packaged as a wheel.

3.6.2 A minimal solution

Since we're talking about wheels, it seems likely that a Wheel class would be part of the design. This allows for extensibility over a purely function-based API. The Wheel would be expected to have methods that support the required operations:

    class Wheel(object):
        def __init__(self, spec):
            """
            Initialise an instance from a specification. This can either be
            a valid filename for a wheel (for when you want to work with an
            existing wheel), or just the ``name-version-buildver`` portion
            of a wheel's filename (for when you're going to build a wheel
            for a known version and build of a named project).
            """

        def build(self, paths, tags=None):
            """
            Build a wheel. The ``name``, ``version`` and ``buildver``
            should already have been set correctly. The ``paths`` should
            be a dictionary with keys 'prefix', 'scripts', 'headers',
            'data' and one of 'purelib' and 'platlib'. These must point
            to valid paths if they are to be included in the wheel. The
            optional ``tags`` argument should, if specified, be a
            dictionary with optional keys 'pyver', 'abi' and 'arch'
            indicating lists of tags which indicate environments with
            which the wheel is compatible.
            """

        def install(self, paths, maker, **kwargs):
            """
            Install from a wheel.
            The ``paths`` should be a dictionary with keys 'prefix',
            'scripts', 'headers', 'data', 'purelib' and 'platlib'. These
            must point to valid paths to which files may be written if
            they are in the wheel. Only one of the 'purelib' and
            'platlib' paths will be used (in the case where they are
            different), depending on whether the wheel is for a
            pure-Python distribution.

            The ``maker`` argument should be a suitably configured
            :class:`ScriptMaker` instance. The ``source_dir`` and
            ``target_dir`` arguments can be set to ``None`` when creating
            the instance - these will be set to appropriate values inside
            this method.

            The following keyword arguments are recognised:

            * ``warner``, if specified, should be a callable that will be
              called with (software_wheel_ver, file_wheel_ver) if they
              differ. They will both be in the form of tuples
              (major_ver, minor_ver). The ``warner`` defaults to ``None``.

            * It's conceivable that one might want to install only the
              library portion of a package - not installing scripts,
              headers, data and so on. If ``lib_only`` is specified as
              ``True``, only the ``site-packages`` contents will be
              installed. The default value is ``False`` (meaning
              everything will be installed).
            """

In addition to the above, the following attributes can be identified for a Wheel instance:

• name – the name of the distribution
• version – the version of the distribution
• buildver – the build tag for the distribution
• pyver – a list of Python versions with which the wheel is compatible
• abi – a list of application binary interfaces (ABIs) with which the wheel is compatible
• arch – a list of architectures with which the wheel is compatible
• dirname – the directory in which a wheel file is found / is to be created
• filename – the filename of the wheel (computed from the other attributes)

3.7 Next steps

You might find it helpful to look at the API Reference.
CHAPTER 4

API Reference

This is the place where the functions and classes in distlib's public API are described.

4.1 The distlib.database package

4.1.1 Classes

class DistributionPath
    This class represents a set of distributions which are installed on a Python path (like PYTHONPATH / sys.path). Both new-style (distlib) and legacy (egg) distributions are catered for.

    Methods:

    __init__(path=None, include_egg=False)
        Initialise the instance using a particular path.

        Parameters
            • path (list of str) – The path to use when looking for distributions. If None is specified, sys.path is used.
            • include_egg – If True, legacy distributions (eggs) are included in the search; otherwise, they aren't.

    enable_cache()
        Enables a cache, so that metadata information doesn't have to be fetched from disk. The cache is per DistributionPath instance and is enabled by default. It can be disabled using disable_cache() and cleared using clear_cache() (disabling won't automatically clear it).

    disable_cache()
        Disables the cache, but doesn't clear it.

    clear_cache()
        Clears the cache, but doesn't change its enabled/disabled status. If enabled, the cache will be re-populated when querying for distributions.

    get_distributions()
        The main querying method if you want to look at all the distributions. It returns an iterator which yields instances of Distribution and, if include_egg was specified as True for the instance, also instances of EggInfoDistribution for any legacy distributions found.

    get_distribution(name)
        Looks for a distribution by name. It returns the first one found with that name (there should only be one distribution with a given name on a given search path). Returns None if no distribution was found, or else an instance of Distribution (or, if include_egg was specified as True for the instance, an instance of EggInfoDistribution if a legacy distribution was found with that name).
        Parameters
            name (str) – The name of the distribution to search for.

    get_exported_entries(category, name=None)
        Returns an iterator for entries exported by distributions on the path.

        Parameters
            • category (str) – The export category to look in.
            • name (str) – A specific name to search for. If not specified, all entries in the category are returned.

        Returns
            An iterator which iterates over exported entries (instances of ExportEntry).

class Distribution
    A class representing a distribution, typically one which hasn't been installed (most likely, one which has been obtained from an index like PyPI).

    Properties:

    name
        The name of the distribution.

    version
        The version of the distribution.

    metadata
        The metadata for the distribution. This is a distlib.metadata.Metadata instance.

    download_url
        The download URL for the distribution. If there are multiple URLs, this will be one of the values in download_urls.

    download_urls
        A set of known download URLs for the distribution.

        New in version 0.2.0: The download_urls attribute was added.

    digest
        The digest for the source distribution. This is either None or a 2-tuple consisting of the hashing algorithm and the digest using that algorithm, e.g. ('sha256', '01234...').

    digests
        A dictionary mapping download URLs to digests, if and when digests are available.

        New in version 0.2.0: The digests attribute was added.

    locator
        The locator for an instance which has been retrieved through a locator. This is None for an installed distribution.

class InstalledDistribution(Distribution)
    A class representing an installed distribution. This class is not instantiated directly, except by packaging tools. Instances of it are returned from querying a DistributionPath.

    Properties:

    requested
        Whether the distribution was installed by user request (if not, it may have been installed as a dependency of some other distribution).

    exports
        The distribution's exports, as described in Exporting things from Distributions.
        This is a cached property.

    Methods:

    list_installed_files()
        Returns an iterator over all of the individual files installed as part of the distribution, including metadata files. The iterator returns tuples of the form (path, hash, size). The list of files is written by the installer to the RECORD metadata file.

    list_distinfo_files()
        Similar to list_installed_files(), but only returns metadata files.

    check_installed_files()
        Runs over all the installed files to check that the size and checksum are unchanged from the values in the RECORD file, written when the distribution was installed. It returns a list of mismatches. If the files in the distribution haven't been corrupted, an empty list will be returned; otherwise, a list of mismatches will be returned.

        Returns
            A list which, if non-empty, will contain tuples with the following elements:
            • The path in RECORD which failed to match.
            • One of the strings 'exists', 'size' or 'hash' according to what didn't match (existence is checked first, then size, then hash).
            • The expected value of what didn't match (as obtained from RECORD).
            • The actual value of what didn't match (as obtained from the file system).

    read_exports(filename=None)
        Read exports information from a file. Normal access to a distribution's exports should be through its exports attribute. This method is called from there as needed. If no filename is specified, the EXPORTS file in the .dist-info directory is read (it is expected to be present).

        Parameters
            filename (str) – The filename to read from, or None to read from the default location.

        Returns
            The exports read from the file.

        Return type
            dict

    write_exports(exports, filename=None)
        Write exports information to a file. If no filename is specified, the EXPORTS file in the .dist-info directory is written.

        Parameters
            • exports (dict) – A dictionary whose keys are categories and whose values are dictionaries which contain ExportEntry instances keyed on their name.
            • filename (str) – The filename to write to, or None to write to the default location.

class EggInfoDistribution
    Analogous to Distribution, but covering legacy distributions. This class is not instantiated directly. Instances of it are returned from querying a DistributionPath.

    Properties:

    name
        The name of the distribution.

    version
        The version of the distribution.

    metadata
        The metadata for the distribution. This is a distlib.metadata.Metadata instance.

    Methods:

    list_installed_files()
        Returns a list of all of the individual files installed as part of the distribution.

class DependencyGraph
    This class represents a dependency graph between releases. The nodes are distribution instances; the edges model dependencies. An edge from a to b means that a depends on b.

    add_distribution(distribution)
        Add distribution to the graph.

    add_edge(x, y, label=None)
        Add an edge from distribution x to distribution y with the given label (a string).

    add_missing(distribution, requirement)
        Add a missing requirement (a string) for the given distribution.

    repr_node(dist, level=1)
        Print a subgraph starting from dist. level gives the depth of the subgraph.

    Direct access to the graph nodes and edges is provided through these attributes:

    adjacency_list
        Dictionary mapping distributions to a list of (other, label) tuples, where other is a distribution and the edge is labelled with label (i.e. the version specifier, if such was provided).

    reverse_list
        Dictionary mapping distributions to a list of predecessors. This allows efficient traversal.

    missing
        Dictionary mapping distributions to a list of requirements that were not provided by any distribution.

4.2 The distlib.resources package

4.2.1 Attributes

cache
    An instance of ResourceCache. This can be set after module import, but before calling any functionality which uses it, to ensure that the cache location is entirely under your control.
    If you access the file_path property of a Resource instance, the cache will be needed, and if not set by you, an instance with a default location will be created. See distlib.util.get_cache_base() for more information.

4.2.2 Functions

finder(package)
    Get a finder for the specified package. If the package hasn't been imported yet, an attempt will be made to import it. If importing fails, an ImportError will be raised.

    Parameters
        package (str) – The name of the package for which a finder is desired.

    Returns
        A finder for the package.

register_finder(loader, finder_maker)
    Register a callable which makes finders for a particular type of PEP 302 loader.

    Parameters
        • loader – The loader for which a finder is to be returned.
        • finder_maker – A callable to be registered, which is called when a loader of the specified type is used to load a package. The callable is called with a single argument – the Python module object corresponding to the package – and must return a finder for that package.

4.2.3 Classes

class Resource
    A class representing resources. It is never instantiated directly, but always through calling a finder's find method.

    Properties:

    is_container
        Whether this instance is a container of other resources.

    bytes
        All of the resource data as a byte string. Raises an exception if accessed on a container resource.

    size
        The size of the resource data in bytes. Raises an exception if accessed on a container resource.

    resources
        The relative names of all the contents of this resource. Raises an exception if accessed on a resource which is not a container.

    path
        This attribute is set by the resource's finder. It is a textual representation of the path, such that if a PEP 302 loader's get_data() method is called with the path, the resource's bytes are returned by the loader. This attribute is analogous to the resource_filename API in setuptools.
        Note that for resources in zip files, the path will be a pointer to the resource in the zip file, and not directly usable as a filename. While setuptools deals with this by extracting zip entries to a cache and returning filenames from the cache, this does not seem an appropriate thing to do in this package, as a resource is already made available to callers either as a stream or as a string of bytes.

    file_path
        This attribute is the same as the path for file-based resources. For resources in a .zip file, the relevant resource is extracted to a file in a cache in the file system, and the name of the cached file is returned. This is for use with APIs that need file names, or that need to be able to access data through OS-level file handles. See the Cache documentation for more information about the cache.

    Methods:

    as_stream()
        A binary stream of the resource's data. This must be closed by the caller when it's finished with. Raises an exception if called on a container resource.

class ResourceFinder
    A base class for resource finders, which finds resources for packages stored in the file system.

    __init__(module)
        Initialise the finder for the package specified by module.

        Parameters
            module – The Python module object representing a package.

    find(resource_name)
        Find a resource with the name specified by resource_name and return a Resource instance which represents it.

        Parameters
            resource_name – A fully qualified resource name, with hierarchical components separated by '/'.

        Returns
            A Resource instance, or None if a resource with that name wasn't found.

    iterator(resource_name)
        Return a generator which walks through the resources available through resource_name.

        Parameters
            resource_name – A fully qualified resource name, with hierarchical components separated by '/'. You can use '' to mean the 'root' resource. If the resource name refers to a non-container resource, only that resource is returned.
            Otherwise, the named resource is returned, followed by its children, recursively. If there is no resource named resource_name, None is returned.

        Returns
            A generator to iterate over resources, or None.

    is_container(resource)
        Return whether a resource is a container of other resources.

        Parameters
            resource (a Resource instance) – The resource whose status as a container is wanted.

        Returns
            True or False.

    get_stream(resource)
        Return a binary stream for the specified resource.

        Parameters
            resource (a Resource instance) – The resource for which a stream is wanted.

        Returns
            A binary stream for the resource.

    get_bytes(resource)
        Return the contents of the specified resource as a byte string.

        Parameters
            resource (a Resource instance) – The resource for which the bytes are wanted.

        Returns
            The data in the resource as a byte string.

    get_size(resource)
        Return the size of the specified resource in bytes.

        Parameters
            resource (a Resource instance) – The resource for which the size is wanted.

        Returns
            The size of the resource in bytes.

class ZipResourceFinder
    This has the same interface as ResourceFinder.

class ResourceCache
    This class implements a cache for resources which must be accessible as files in the file system. It is based on Cache, and adds resource-specific methods.

    __init__(base=None)
        Initialise a cache instance with a specific directory which holds the cache. If base is not specified, the value resource-cache in the directory returned by get_cache_base() is used.

    get(resource)
        Ensures that the resource is available as a file in the file system, and returns the name of that file. This method calls the resource's finder's get_cache_info() method.

    is_stale(resource, path)
        Returns whether the data in the resource which is cached in the file system is stale compared to the resource's current data. The default implementation returns True, causing the resource's data to be rewritten to the file every time.
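The get()/is_stale() interplay described above can be sketched with a toy cache. This is illustrative only: a "resource" here is just a (name, data) pair, whereas the real ResourceCache works with Resource objects and asks the resource's finder for cache placement details via get_cache_info():

```python
import os

class ToyResourceCache(object):
    def __init__(self, base):
        self.base = base
        if not os.path.isdir(base):
            os.makedirs(base)

    def is_stale(self, resource, path):
        # Cache invalidation is hard: the default pessimistically
        # treats the cached file as always stale.
        return True

    def get(self, resource):
        """Ensure the resource is available as a file; return its filename."""
        name, data = resource
        path = os.path.join(self.base, name)
        if not os.path.exists(path) or self.is_stale(resource, path):
            with open(path, 'wb') as f:
                f.write(data)
        return path
```

A subclass with a smarter is_stale() (comparing sizes or timestamps, say) would avoid rewriting the cached file on every access, at the cost of possibly serving stale data.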
4.3 The distlib.scripts package

4.3.1 Classes

class ScriptMaker
    A class used to install scripts based on specifications.

    Attributes:

    source_dir
        The directory where script sources are to be found.
    target_dir
        The directory where scripts are to be created.
    add_launchers
        Whether to create native executable launchers on Windows.
    force
        Whether to overwrite scripts even when timestamps show they're up to date.
    set_mode
        Whether, on POSIX, the scripts should have their execute mode set.
    script_template
        The text of a template which should contain %(shebang)s, %(module)s and %(func)s in the appropriate places. The attribute is defined at class level. You can override it at the instance level to customise your scripts.
    version_info
        A two-tuple of the Python version to be used when generating scripts, where version-specific variants such as foo3 or foo-3.8 are created. This defaults to sys.version_info. The provided tuple can have more elements, but only the first two are used.
        New in version 0.3.1.
    variant_separator
        A string value placed between the root basename and the version information in a variant-specific filename. This defaults to '-', which means that a script with root basename foo and a variant X.Y will have a base filename of foo-3.8 for target Python version 3.8. If you wanted to write foo3.8 instead of foo-3.8, this attribute could be set to ''. If you need more control over filename generation, you can subclass ScriptMaker and override the get_script_filenames() method.
        New in version 0.3.2.

    Methods:

    __init__(source_directory, target_directory, add_launchers=True, dry_run=False)
        Initialise the instance with options that control its behaviour.
        Parameters:
            source_directory (str) – Where to find scripts to install.
            target_directory (str) – Where to install scripts to.
            add_launchers (bool) – If true, create executable launchers on Windows.
            The executables are currently generated from the following project: https://bitbucket.org/vinay.sajip/simple_launcher/
            dry_run – If true, don't actually install scripts – just pretend to.

    make(specification, options=None)
        Make a script in the target directory.
        Parameters:
            specification (str) – A specification, which can take one of the following forms:
                - A filename, relative to source_directory, such as foo.py or subdir/bar.py.
                - A reference to a callable, given in the form:

                      name = some_package.some_module:some_callable [flags]

                  where the flags part is optional. When this form is passed, a Python stub script is created with the appropriate shebang line and with code to load and call the specified callable with no arguments, returning its value as the return code from the script. For more information about flags, see Flag formats.
            options (dict) – If specified, a dictionary of options used to control script creation. Currently, the following keys are checked:
                gui: This should be a bool which, if True, indicates that the script is a windowed application. This distinction is only drawn on Windows if add_launchers is True, and results in a windowed native launcher application if options['gui'] is True (otherwise, the native executable launcher is a console application).
                interpreter_args: If specified, this should be a list of strings which are appended to the interpreter executable in the shebang line. If there are values with spaces, you will need to surround them with double quotes.
        Note: Linux does not handle passing arguments to interpreters particularly well – multiple arguments are bundled up into one when passing to the interpreter – see https://en.wikipedia.org/wiki/Shebang_line#Portability for more information. This may also affect other POSIX platforms – consult the OS documentation for your system if necessary.
        On Windows, the distlib native executable launchers do parse multiple arguments and pass them to the interpreter.
        Returns: A list of absolute pathnames of files installed (or which would have been installed, but for dry_run being true).

    make_multiple(specifications, options)
        Make multiple scripts from an iterable. This method just calls make() once for each value returned by the iterable, but it might be convenient to override this method in some scenarios to do post-processing of the installed files (for example, running 2to3 on them).
        Parameters:
            specifications – An iterable giving the specifications to follow.
            options – As for the make() method.
        Returns: A list of absolute pathnames of files installed (or which would have been installed, but for dry_run being true).

    get_script_filenames(name)
        Get the names of scripts to be written for the specified base name, based on the variants and version_info for this instance. You can override this if you need to customise the filenames to be written.
        Parameters:
            name (str) – The basename of the script to be written.
        Returns: A set of filenames of files to be written as scripts, based on what variants are specified. For example, if the name is foo and the variants are {'X', 'X.Y'} and the version_info is (3, 8), then the result would be {'foo3', 'foo-3.8'}.
        New in version 0.3.2.

4.3.2 Functions

enquote_executable(path)
    Cover an executable path in quotes. This only applies quotes if the passed path contains any spaces. It's at least a little careful when doing the quoting – for example, producing /usr/bin/env "/dir with spaces/bin/jython" instead of "/usr/bin/env /dir with spaces/bin/jython".
    Changed in version 0.3.1: This was an internal function _enquote_executable() in earlier versions.

4.4 The distlib.locators package

4.4.1 Classes

class Locator
    The base class for locators. Implements logic common to multiple locators.

    __init__(scheme='default')
        Initialise an instance of the locator.
        Parameters:
            scheme (str) – The version scheme to use.

    get_project(name)
        This method should be implemented in subclasses. It returns a (potentially empty) dictionary whose keys are the versions located for the project named by name, and whose values are instances of distlib.util.Distribution.

    convert_url_to_download_info(url, project_name)
        Extract information from a URL about the name and version of a distribution.
        Parameters:
            url (str) – The URL potentially of an archive (though it needn't be).
            project_name (str) – This must match the project name determined from the archive (case-insensitive matching is used).
        Returns: None if the URL does not appear to be that of a distribution archive for the named project. Otherwise, a dictionary is returned with the following keys at a minimum:
            url – the URL passed in, minus any fragment portion.
            filename – a suitable filename to use for the archive locally.
        Optional keys returned are:
            md5_digest – the MD5 hash of the archive, for verification after downloading. This is extracted from the fragment portion, if any, of the passed-in URL.
            sha256_digest – the SHA256 hash of the archive, for verification after downloading. This is extracted from the fragment portion, if any, of the passed-in URL.
        Return type: dict

    get_distribution_names()
        Get the names of all distributions known to this locator. The base class raises NotImplementedError; this method should be implemented in a subclass.
        Returns: All distributions known to this locator.
        Return type: set

    locate(requirement, prereleases=False)
        This tries to locate the latest version of a potentially downloadable distribution which matches a requirement (name and version constraints). If a potentially downloadable distribution (i.e. one with a download URL) is not found, None is returned – otherwise, an instance of Distribution is returned. The returned instance will have, at a minimum, name, version and source_url populated.
        Parameters:
            requirement (str) – The name and optional version constraints of the distribution to locate, e.g. 'Flask' or 'Flask (>= 0.7, < 0.9)'.
            prereleases (bool) – If True, prereleases are treated like normal releases. The default behaviour is to not return any prereleases unless they are the only ones which match the requirement.
        Returns: A matching instance of Distribution, or None.

    get_errors()
        This returns a (possibly empty) list of error messages relating to a recent get_project() or locate() call. Fetching the errors clears the error list.
        New in version 0.2.4.

class DirectoryLocator(Locator)
    This locator scans the file system under a base directory, looking for distribution archives. The locator scans all subdirectories recursively, unless the recursive flag is set to False.

    __init__(base_dir, **kwargs)
        Parameters:
            base_dir (str) – The base directory to scan for distribution archives.
            kwargs – Passed to base class constructor, apart from the following keyword arguments:
                recursive (defaults to True) – if False, no recursion into subdirectories occurs.

class PyPIRPCLocator(Locator)
    This locator uses the PyPI XML-RPC interface to locate distribution archives and other data about downloads.

    __init__(url, **kwargs)
        Parameters:
            url (str) – The base URL to use for the XML-RPC service.
            kwargs – Passed to base class constructor.

    get_project(name)
        See Locator.get_project().

class PyPIJSONLocator(Locator)
    This locator uses the PyPI JSON interface to locate distribution archives and other data about downloads. It gets the metadata and URL information in a single call, so it should perform better than the XML-RPC locator.

    __init__(url, **kwargs)
        Parameters:
            url (str) – The base URL to use for the JSON service.
            kwargs – Passed to base class constructor.

    get_project(name)
        See Locator.get_project().
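The digest handling that convert_url_to_download_info() performs on URL fragments can be sketched in pure Python. This is an illustration, not distlib's implementation; it assumes the common algo=hexdigest fragment form used on package indexes:

```python
from urllib.parse import urlsplit, urlunsplit

def split_digest(url):
    """Return (url_without_fragment, digest_info), where digest_info is a
    {'<algo>_digest': '<hex>'} dict, or {} if the fragment isn't recognised.
    Sketch only: real locators handle more cases."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    clean = urlunsplit((scheme, netloc, path, query, ''))
    info = {}
    if '=' in fragment:
        algo, _, value = fragment.partition('=')
        if algo in ('md5', 'sha256'):
            info['%s_digest' % algo] = value
    return clean, info

url = 'https://example.com/packages/Flask-0.8.tar.gz#md5=abc123'
print(split_digest(url))
```

The example URL and digest are made up for illustration; only the url/filename and optional digest keys described above are modelled.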
class SimpleScrapingLocator(Locator)
    This locator uses the PyPI 'simple' interface – a Web scraping interface – to locate distribution archives.

    __init__(url, timeout=None, num_workers=10, **kwargs)
        Parameters:
            url (str) – The base URL to use for the simple service HTML pages.
            timeout (float) – How long (in seconds) to wait before giving up on a remote resource.
            num_workers (int) – The number of worker threads created to perform scraping activities.
            kwargs – Passed to base class constructor.

class DistPathLocator(Locator)
    This locator uses a DistributionPath instance to locate installed distributions.

    __init__(distpath, **kwargs)
        Parameters:
            distpath (DistributionPath) – The distribution path to use.
            kwargs – Passed to base class constructor.

class AggregatingLocator(Locator)
    This locator uses a list of other locators and delegates finding projects to them. It can either return the first result found (i.e. from the first locator in the list provided which returns a non-empty result), or a merged result from all the locators in the list.

    __init__(*locators, **kwargs)
        Parameters:
            locators (sequence of locators) – A list of locators to delegate finding projects to.
            merge (bool) – If this kwarg is True, each locator in the list is asked to provide results, which are aggregated into a results dictionary. If False, the first non-empty return value from the list of locators is returned. The locators are consulted in the order in which they're passed in.

class DependencyFinder
    This class allows you to recursively find all the distributions which a particular distribution depends on.

    __init__(locator)
        Initialise an instance with the locator to be used for locating distributions.

    find(requirement, meta_extras=None, prereleases=False)
        Find all the distributions needed to fulfill requirement.
        Parameters:
            requirement – A string of the form name (version), where version can include an inequality constraint, or an instance of Distribution (e.g. representing a distribution on the local hard disk).
            meta_extras – A list of meta extras such as :test:, :build: and so on, to be included in the dependencies.
            prereleases – If True, allow pre-release versions to be returned – otherwise, don't return prereleases unless they're all that's available.
        Returns: A 2-tuple. The first element is a set of Distribution instances. The second element is a set of problems encountered during dependency resolution. Currently, if this set is non-empty, it will contain 2-tuples whose first element is the string 'unsatisfied' and whose second element is a requirement which couldn't be satisfied.
        In the set of Distribution instances returned, some attributes will be set:
            - The instance representing the passed-in requirement will have the requested attribute set to True.
            - All requirements which are not installation requirements (in other words, are needed only for build and test) will have the build_time_dependency attribute set to True.

4.4.2 Functions

get_all_distribution_names(url=None)
    Retrieve the names of all distributions registered on an index.
    Parameters:
        url (str) – The XML-RPC service URL of the node to query. If not specified, the main PyPI index is queried.
    Returns: A list of the names of distributions registered on the index. Note that some of the names may be Unicode.
    Return type: list

locate(requirement, prereleases=False)
    This convenience function returns the latest version of a potentially downloadable distribution which matches a requirement (name and version constraints). If a potentially downloadable distribution (i.e. one with a download URL) is not found, None is returned – otherwise, an instance of Distribution is returned. The returned instance will have, at a minimum, name, version, download_url and download_urls populated.
    Parameters:
        requirement (str) – The name and optional version constraints of the distribution to locate, e.g. 'Flask' or 'Flask (>= 0.7, < 0.9)'.
        prereleases (bool) – If True, prereleases are treated like normal releases. The default behaviour is to not return any prereleases unless they are the only ones which match the requirement.
    Returns: A matching instance of Distribution, or None.

4.4.3 Variables

default_locator
    This attribute holds a locator which is used by locate() to locate distributions.

4.5 The distlib.index package

4.5.1 Classes

class PackageIndex
    This class represents a package index which is compatible with PyPI, the Python Package Index. It allows you to register projects, upload source and binary distributions (with support for digital signatures), upload documentation, verify signatures and get a list of hosts which are mirrors for the index.

    Methods:

    __init__(url=None, mirror_host=None)
        Initialise an instance, setting instance attributes named from the keyword arguments.
        Parameters:
            url – The root URL for the index. If not specified, the URL for PyPI is used ('http://pypi.org/pypi').
            mirror_host – The DNS name for a host which can be used to determine available mirror hosts for the index. If not specified, the value 'last.pypi.python.org' is used.

    register(metadata)
        Register a project with the index.
        Parameters:
            metadata – A Metadata instance. This should have at least the Name and Version fields set, and ideally as much metadata as possible about this distribution. Though it might seem odd to have to specify a version when you are initially registering a project, this is required by PyPI. You can see this in PyPI's Web UI when you click the "Package submission" link in the left-hand side menu.
        Returns: An urllib HTTP response returned by the index. If an error occurs, an HTTPError exception will be raised.
    upload_file(metadata, filename, signer=None, sign_password=None, filetype='sdist', pyversion='source', keystore=None)
        Upload a distribution to the index.
        Parameters:
            metadata – A Metadata instance. This should have at least the Name and Version fields set, and ideally as much metadata as possible about this distribution.
            filename – The path to the file which is to be uploaded.
            signer – If specified, this needs to be a string identifying the GnuPG private key which is to be used for signing the distribution.
            sign_password – The passphrase which allows access to the private key used for the signature.
            filetype – The type of the file being uploaded. This would have values such as sdist (for a source distribution), bdist_wininst for a Windows installer, and so on. Consult the distutils documentation for the full set of possible values.
            pyversion – The Python version this distribution is compatible with. If it's a pure-Python distribution, the value to use would be source; for distributions which are for specific Python versions, you would use the Python version in the form X.Y.
            keystore – The path to a directory which contains the keys used in signing. If not specified, the instance's gpg_home attribute is used instead. This parameter is not used unless a signer is specified.
        Returns: An urllib HTTP response returned by the index. If an error occurs, an HTTPError exception will be raised.
        Changed in version 0.1.9: The keystore argument was added.

    upload_documentation(metadata, doc_dir)
        Upload HTML documentation to the index. The contents of the specified directory tree will be packed into a .zip file which is then uploaded to the index.
        Parameters:
            metadata – A Metadata instance. This should have at least the Name and Version fields set.
            doc_dir – The path to the root directory for the HTML documentation. This directory should be the one that contains index.html.
        Returns: An urllib HTTP response returned by the index.
        If an error occurs, an HTTPError exception will be raised.

    verify_signature(signature_filename, data_filename, keystore=None)
        Verify a digital signature against a downloaded distribution.
        Parameters:
            signature_filename – The path to the file which contains the digital signature.
            data_filename – The path to the file which was supposedly signed to obtain the signature in signature_filename.
            keystore – The path to a directory which contains the keys used in verification. If not specified, the instance's gpg_home attribute is used instead.
        Returns: True if the signature can be verified, else False. If an error occurs (e.g. unable to locate the public key used to verify the signature), a ValueError is raised.
        Changed in version 0.1.9: The keystore argument was added.

    search(query, operation=None)
        Search the index for distributions matching a search query.
        Parameters:
            query – The query, either as a string or a dictionary. If a string 'foo' is passed, it will be treated equivalently to passing the dictionary {'name': 'foo'}. The dictionary can have the following keys:
                - name
                - version
                - stable_version
                - author
                - author_email
                - maintainer
                - maintainer_email
                - home_page
                - license
                - summary
                - description
                - keywords
                - platform
                - download_url
                - classifiers (list of classifier strings)
                - project_url
                - docs_url (URL of the pythonhosted.org docs if they've been supplied)
            operation – If specified, it should be either 'and' or 'or'. If not specified, 'and' is assumed. This is only used if a passed dictionary has multiple keys. It determines whether the intersection or the union of matches is returned.
        Returns: A (possibly empty) list of the distributions matching the query.
        Each entry in the list will be a dictionary with the following keys:
            - _pypi_ordering – the internal ordering value (an integer)
            - name – the name of the distribution
            - version – the version of the distribution
            - summary – the summary for that version
        New in version 0.1.8.

    Additional attributes:

    username
        The username to use when authenticating with the index.
    password
        The password to use when authenticating with the index.
    gpg
        The path to the signing and verification program.
    gpg_home
        The location of the key database for the signing and verification program.
    mirrors
        The list of hosts which are mirrors for this index.
    boundary
        The boundary value to use when MIME-encoding requests to be sent to the index. This should be a byte-string.

4.6 The distlib.util package

4.6.1 Classes

class Cache
    This base class implements common operations for distlib caches.

    __init__(base)
        Initialise a cache instance with a specific directory which holds the cache.
        Warning: If base is specified, the directory should exist and its permissions (relevant on POSIX only) should be set to 0700 – i.e. only the user of the running process has any rights over the directory. If this is not done, the application using this functionality may be vulnerable to security breaches as a result of other processes being able to interfere with the cache.

    prefix_to_dir(prefix)
        Converts a prefix (e.g. the name of a resource's containing .zip, or a wheel pathname) into a directory name in the cache. This implementation delegates the work to path_to_cache_dir().

class ExportEntry
    A class holding information about an exports entry.

    Attributes:

    name
        The name of the entry.
    prefix
        The prefix part of the entry. For a callable or data item in a module, this is the name of the package or module containing the item.
    suffix
        The suffix part of the entry. For a callable or data item in a module, this is a dotted path which points to the item in the module.
    flags
        A list of flags. See Flag formats for more information.
    value
        The actual value of the entry (a callable or data item in a module, or perhaps just a module). This is a cached property of the instance, and is determined by calling resolve() with the prefix and suffix properties.
    dist
        The distribution which exports this entry. This is normally an instance of InstalledDistribution.

4.6.2 Functions

get_cache_base()
    Return the base directory which will hold distlib caches. If the directory does not exist, it is created.
    On Windows, if LOCALAPPDATA is defined in the environment, then it is assumed to be a directory, and will be the parent directory of the result. On POSIX, and on Windows if LOCALAPPDATA is not defined, the user's home directory – as determined using os.path.expanduser('~') – will be the parent directory of the result. The result is just the directory '.distlib' in the parent directory as determined above. If a home directory is unavailable (no such directory, or if it's write-protected), a parent directory for the cache is determined using tempfile.mkdtemp(). This returns a directory to which only the running process has access (permission mask 0700 on POSIX), meaning that the cache should be isolated from possible malicious interference by other processes.
    Note: This cache is used for the following purposes:
        - As a place to cache package resources which need to be in the file system, because they are used by APIs which either expect filesystem paths, or need to be able to use OS-level file handles. An example of the former is the SSLContext.load_verify_locations() method in Python's ssl module. The subdirectory resource-cache is used for this purpose.
        - As a place to cache shared libraries which are extracted as a result of calling the mount() method of the Wheel class. The subdirectory dylib-cache is used for this purpose.
    The application using this cache functionality, whether through the above mechanisms or through using the value returned from here directly, is responsible for any cache cleanup that is desired. Note that on Windows, you may not be able to do cache cleanup if any of the cached files are open (this will generally be the case with shared libraries, i.e. DLLs). The best way to do cache cleanup in this scenario may be on application startup, before any resources have been cached or wheels mounted.

path_to_cache_dir(path)
    Converts a path (e.g. the name of an archive) into a directory name suitable for use in a cache. The following algorithm is used:
        1. On Windows, any ':' in the drive is replaced with '---'.
        2. Any occurrence of os.sep is replaced with '--'.
        3. '.cache' is appended.

get_export_entry(specification)
    Return an export entry from a specification, if it matches the expected format, or else None.
    Parameters:
        specification (str) – A specification, as documented for the distlib.scripts.ScriptMaker.make() method.
    Returns: None if the specification didn't match the expected form for an entry, or else an instance of ExportEntry holding information about the entry.

resolve(module_name, dotted_path)
    Given a module name and a dotted_path representing an object in that module, resolve the passed parameters to an object and return that object.
    If the module has not already been imported, this function attempts to import it, then access the object represented by dotted_path in the module's namespace. If dotted_path is None, the module is returned. If import or attribute access fails, an ImportError or AttributeError will be raised.
    Parameters:
        module_name (str) – The name of a Python module or package, e.g. os or os.path.
        dotted_path (str) – The path of an object expected to be in the module's namespace, e.g. 'environ', 'sep' or 'path.supports_unicode_filenames'.
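The relationship between get_export_entry() and resolve() can be illustrated with a self-contained sketch: a deliberately simplified parser for the name = prefix:suffix form (distlib's real parser also handles flags and other details), whose result is then resolved using only the standard library:

```python
import importlib
import re
from functools import reduce

# Simplified sketch of get_export_entry(): handles only name = prefix:suffix.
# distlib's real implementation also accepts flags and other forms.
_SPEC = re.compile(r'^(?P<name>\w+)\s*=\s*(?P<prefix>[\w.]+)(?::(?P<suffix>[\w.]+))?$')

def parse_entry(specification):
    m = _SPEC.match(specification.strip())
    return m.groupdict() if m else None

def resolve_sketch(module_name, dotted_path):
    """Sketch of resolve(): import the module if needed, then walk
    dotted_path through its namespace; AttributeError propagates."""
    module = importlib.import_module(module_name)
    if dotted_path is None:
        return module
    return reduce(getattr, dotted_path.split('.'), module)

entry = parse_entry('join = os.path:join')
value = resolve_sketch(entry['prefix'], entry['suffix'])
print(entry['name'], value.__name__)
```

Feeding an invalid specification to parse_entry() returns None, mirroring the documented get_export_entry() behaviour.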
4.7 The distlib.wheel package

This package has functionality which allows you to work with wheels (see PEP 427).

4.7.1 Attributes

cache
    An instance of distlib.util.Cache. This can be set after module import, but before calling any functionality which uses it, to ensure that the cache location is entirely under your control. If you call the mount() method of a Wheel instance, and the wheel is successfully mounted and contains C extensions, the cache will be needed, and if not set by you, an instance with a default location will be created. See distlib.util.get_cache_base() for more information.

COMPATIBLE_TAGS
    A set of (pyver, abi, arch) tags which are compatible with this Python implementation.

4.7.2 Classes

class Wheel
    This class represents wheels – either existing wheels, or wheels to be built.

    __init__(spec)
        Initialise an instance from a specification.
        Parameters:
            spec (str) – This can either be a valid filename for a wheel (for when you want to work with an existing wheel), or just the name-version-buildver portion of a wheel's filename (for when you're going to build a wheel for a known version and build of a named project).

    build(paths, tags=None, wheel_version=None)
        Build a wheel. The name, version and buildver should already have been set correctly.
        Parameters:
            paths – This should be a dictionary with keys 'prefix', 'scripts', 'headers', 'data' and one of 'purelib' or 'platlib'. These must point to valid paths if they are to be included in the wheel.
            tags – If specified, this should be a dictionary with optional keys 'pyver', 'abi' and 'arch' indicating lists of tags which indicate environments with which the wheel is compatible.
            wheel_version – If specified, this is written to the wheel's "Wheel-Version" metadata. If not specified, the implementation's latest supported wheel version is used.

    install(paths, maker, **kwargs)
        Install from a wheel.
        Parameters:
            paths – This should be a dictionary with keys 'prefix', 'scripts', 'headers', 'data', 'purelib' and 'platlib'. These must point to valid paths to which files may be written if they are in the wheel. Only one of the 'purelib' and 'platlib' paths will be used (in the case where they are different), depending on whether the wheel is for a pure-Python distribution.
            maker – This should be set to a suitably configured instance of ScriptMaker. The source_dir and target_dir arguments can be set to None when creating the instance – these will be set to appropriate values inside this method.
            warner – If specified, should be a callable that will be called with (software_wheel_ver, file_wheel_ver) if they differ. They will both be in the form of tuples (major_ver, minor_ver).
            lib_only – It's conceivable that one might want to install only the library portion of a package – not installing scripts, headers, data and so on. If lib_only is specified as True, only the site-packages contents will be installed.

    is_compatible()
        Determine whether this wheel instance is compatible with the running Python.
        Returns: True if compatible, else False.

    is_mountable()
        Determine whether this wheel instance is indicated suitable for mounting in its metadata.
        Returns: True if mountable, else False.

    mount(append=False)
        Mount the wheel so that its contents can be imported directly, without the need to install the wheel. If the wheel contains C extensions and has metadata about these extensions, the extensions are also available for import.
        If the wheel tags indicate it is not compatible with the running Python, a DistlibException is raised. (The is_compatible() method is used to determine compatibility.) If the wheel is indicated as not suitable for mounting, a DistlibException is raised. (The is_mountable() method is used to determine mountability.)
        Parameters:
            append – If True, the wheel's pathname is added to the end of sys.path. By default, it is added to the beginning.
        Note: Wheels may state in their metadata that they are not intended to be mountable, in which case this method will raise a DistlibException with a suitable message. If C extensions are extracted, the location for extraction will be under the directory dylib-cache in the directory returned by get_cache_base(). Wheels may be marked by their publisher as unmountable to indicate that running directly from a zip is not supported by the packaged software.

    unmount()
        Unmount the wheel so that its contents can no longer be imported directly. If the wheel contains C extensions and has metadata about these extensions, the extensions are also made unavailable for import.
        Note: Unmounting does not automatically clean up any extracted C extensions, as that may not be desired (and not possible, on Windows, because the files will be open). See the get_cache_base() documentation for suggested cleanup scenarios.

    verify()
        Verify sizes and hashes of the wheel's contents against the sizes and hashes declared in the wheel's RECORD. Raise a DistlibException if a size or digest mismatch is detected.
        New in version 0.1.8.

    update(modifier, dest_dir=None, **kwargs)
        Allows a user-defined callable access to the contents of a wheel. The callable can modify the contents of the wheel, add new entries or remove entries. The method first extracts the wheel's contents to a temporary location, and then calls the modifier like this:

            modified = modifier(path_map, **kwargs)

        where path_map is a dictionary mapping archive paths to the location of the corresponding extracted archive entry, and kwargs is whatever was passed to the update method. If the modifier returns True, a new wheel is built from the (possibly updated) contents of path_map and, as a final step, copied to the location of the original wheel (hence effectively modifying it in-place).
The passed path_map will contain all of the wheel's entries other than the RECORD entry (which will be recreated if a new wheel is built).

New in version 0.1.8.

name
The name of the distribution.

version
The version of the distribution.

buildver
The build tag for the distribution.

pyver
A list of Python versions with which the wheel is compatible. See PEP 427 and PEP 425 for details.

abi
A list of application binary interfaces (ABIs) with which the wheel is compatible. See PEP 427 and PEP 425 for details.

arch
A list of architectures with which the wheel is compatible. See PEP 427 and PEP 425 for details.

dirname
The directory in which a wheel file is found/to be created.

filename
The filename of the wheel (computed from the other attributes).

metadata
The metadata for the distribution in the wheel, as a Metadata instance.

info
The wheel metadata (contents of the WHEEL metadata file) as a dictionary.

exists
Whether the wheel file exists.

New in version 0.1.8.

4.7.3 Functions

is_compatible(wheel, tags=None)
Indicate if a wheel is compatible with a set of tags. If any combination of the tags of wheel is found in tags, then the wheel is considered to be compatible.

Parameters
• wheel – A Wheel instance or the filename of a wheel.
• tags – A set of tags to check for compatibility. If not specified, it defaults to the set of tags which are compatible with this Python implementation.

Returns True if compatible, else False.

4.8 Next steps

You might find it helpful to look at the mailing list.

Chapter 5: Migrating from older APIs

This section has information on migrating from older APIs.

5.1 The pkg_resources resource API

5.1.1 Basic resource access

resource_exists(package, resource_name)
finder(package).find(resource_name) is not None

resource_stream(package, resource_name)
finder(package).find(resource_name).as_stream()

resource_string(package, resource_name)
finder(package).find(resource_name).bytes

resource_isdir(package, resource_name)
finder(package).find(resource_name).is_container

resource_listdir(package, resource_name)
finder(package).find(resource_name).resources

5.1.2 Resource extraction

resource_filename(package, resource_name)
finder(package).find(resource_name).file_path

set_extraction_path(extraction_path)
This has no direct analogue, but you can achieve equivalent results by doing something like the following:

from distlib import resources
resources.cache = resources.Cache(extraction_path)

before accessing the file_path property of any Resource. Note that if you have accessed the file_path property for a resource before doing this, the cache may already have extracted files.

cleanup_resources(force=False)
This is not actually implemented in pkg_resources – it's a no-op. You could achieve the analogous result using:

from distlib import resources
not_removed = resources.cache.clear()

5.1.3 Provider interface

You can provide an XXXResourceFinder class which finds resources in custom storage containers, and works like ResourceFinder. Although it shouldn't be necessary, you could also return a subclass of Resource from your finders, to deal with custom requirements which aren't catered for.

get_cache_path(archive_name, names=())
There's no analogue for this, as you shouldn't need to care about whether particular resources are implemented in archives or not. If you need this API, please give feedback with more information about your use cases.

extraction_error()
There's no analogue for this. The Cache.get() method, which writes a resource's bytes to a file in the cache, will raise any exception caused by underlying I/O. If you need to handle this in the cache layer, you can subclass Cache and override get(). If that doesn't work for you, please give feedback with more information about your use cases.
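The subclass-and-override pattern suggested for handling I/O errors in the cache layer can be sketched as follows. Note this is not distlib's actual Cache class: the base class, its store, and FakeResource are simplified stand-ins used only to show the overriding pattern.

```python
# Minimal stand-in illustrating "subclass Cache and override get()".
# The classes here are hypothetical simplifications, not distlib's API.

class Cache:
    def __init__(self):
        self.store = {}

    def get(self, resource):
        # Writes a resource's bytes into the cache; any underlying I/O
        # error would propagate to the caller, as described in the text.
        self.store[resource.name] = resource.bytes
        return self.store[resource.name]

class TolerantCache(Cache):
    def get(self, resource):
        # Override to handle errors in the cache layer itself.
        try:
            return super().get(resource)
        except OSError:
            return b""  # fall back to empty contents on I/O failure

class FakeResource:
    def __init__(self, name, data):
        self.name = name
        self.bytes = data

cache = TolerantCache()
data = cache.get(FakeResource("README.txt", b"hello"))
print(data)
```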
postprocess(tempname, filename)
There's no analogue for this. The Cache.get() method, which writes a resource's bytes to a file in the cache, can be overridden to perform any custom post-processing. If that doesn't work for you, please give feedback with more information about your use cases.

5.2 The pkg_resources entry point API

Entry points in pkg_resources are equivalent to the per-distribution exports dictionary (see Exporting things from Distributions). The keys to the dictionary are just names in a hierarchical namespace delineated with periods (like Python packages). These keys are called groups in pkg_resources documentation, though that term is a little ambiguous. In Eclipse, for example, they are called extension point IDs, which is a little closer to the intended usage, but a bit of a mouthful. In distlib, we'll use the term category or export category.

In distlib, the implementation of exports is slightly different from entry points of pkg_resources. A Distribution instance has an exports attribute, which is a dictionary keyed by category and whose values are dictionaries that map names to ExportEntry instances.

Below are the pkg_resources functions and how to achieve the equivalent in distlib. In cases where the pkg_resources functions take distribution names, in distlib you get the corresponding Distribution instance, using:

dist = dist_path.get_distribution(distname)

and then ask that instance (or the dist_path instance) for the things you need.

load_entry_point(distname, groupname, name)
dist.exports[groupname][name].value

get_entry_info(distname, groupname, name)
dist.exports[groupname][name]

get_entry_map(distname, groupname=None)
dist.exports or dist.exports[groupname]

iter_entry_points(groupname, name=None)
dist_path.get_exported_entries(groupname, name=None)
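The exports data shape described above (a dict keyed by category, whose values map names to ExportEntry instances) can be sketched with plain Python objects. The ExportEntry class below is a hypothetical stand-in carrying only a value attribute; the category and entry names are invented for illustration.

```python
# Sketch of the exports structure: {category: {name: ExportEntry}}.
# ExportEntry here is a simplified stand-in, not distlib's class.

class ExportEntry:
    def __init__(self, name, value):
        self.name = name
        self.value = value  # e.g. a "module:attr" specifier (hypothetical)

exports = {
    "console_scripts": {
        "mytool": ExportEntry("mytool", "mypkg.cli:main"),
    }
}

# pkg_resources: load_entry_point(distname, "console_scripts", "mytool")
# distlib:       dist.exports["console_scripts"]["mytool"].value
entry = exports["console_scripts"]["mytool"]
print(entry.value)
```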
Package ‘ds4psy’
September 15, 2023

Type Package
Title Data Science for Psychologists
Version 1.0.0
Date 2023-09-15
Maintainer <NAME> <<EMAIL>>
Description All datasets and functions required for the examples and exercises of the book "Data Science for Psychologists" (by <NAME>, Konstanz University, 2022), available at <https://bookdown.org/hneth/ds4psy/>. The book and course introduce principles and methods of data science to students of psychology and other biological or social sciences. The 'ds4psy' package primarily provides datasets, but also functions for data generation and manipulation (e.g., of text and time data) and graphics that are used in the book and its exercises. All functions included in 'ds4psy' are designed to be explicit and instructive, rather than efficient or elegant.
Depends R (>= 3.5.0)
Imports ggplot2, unikn
Suggests knitr, rmarkdown, spelling
Collate 'util_fun.R' 'num_util_fun.R' 'text_util_fun.R' 'time_util_fun.R' 'color_fun.R' 'data.R' 'data_fun.R' 'text_fun.R' 'time_fun.R' 'num_fun.R' 'theme_fun.R' 'plot_fun.R' 'start.R'
Encoding UTF-8
LazyData true
License CC BY-SA 4.0
URL https://bookdown.org/hneth/ds4psy/, https://github.com/hneth/ds4psy/
BugReports https://github.com/hneth/ds4psy/issues
VignetteBuilder knitr
RoxygenNote 7.2.3
Language en-US
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0001-5427-3141>)
Repository CRAN
Date/Publication 2023-09-15 07:30:02 UTC

R topics documented: (an alphabetical index of all help topics, base2dec through zodiac; the entries follow in full below, so the page-number index is omitted here)

base2dec    Convert a string of numeral digits from some base into decimal notation.

Description
base2dec converts a sequence of numeral symbols (digits) from its notation as positional numerals (with some base or radix) into standard decimal notation (using the base or radix of 10).

Usage
base2dec(x, base = 2)

Arguments
x       A (required) sequence of numeric symbols (as a character sequence or vector of digits).
base    The base or radix of the symbols in x. Default: base = 2 (binary).

Details
The individual digits provided in x (e.g., from "0" to "9", "A" to "F") must be defined in the specified base (i.e., every digit value must be lower than the base or radix value). See base_digits for the sequence of default digits. base2dec is the complement of dec2base.

Value
An integer number (in decimal notation).

See Also
dec2base converts decimal numbers into numerals in another base; as.roman converts integers into Roman numerals.
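The positional-numeral conversion that base2dec performs can be sketched in Python (a cross-language illustration only; the package itself is R, and the digit set here is simplified to 0-9 and A-F, whereas base_digits defines 62 digits):

```python
# Convert a string of digits in a given base into a decimal integer,
# mirroring the positional-numeral logic described for base2dec.
DIGITS = "0123456789ABCDEF"  # simplified digit set (base_digits has 62)

def base2dec(x, base=2):
    value = 0
    for ch in x:
        d = DIGITS.index(ch.upper())
        if d >= base:
            # every digit value must be lower than the base (see Details)
            raise ValueError(f"digit {ch!r} not defined in base {base}")
        value = value * base + d  # shift left one position, add digit
    return value

print(base2dec("11"))           # binary
print(base2dec("1010"))         # binary
print(base2dec("11", base=16))  # hexadecimal
```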
Other numeric functions: base_digits, dec2base(), is_equal(), is_wholenumber(), num_as_char(), num_as_ordinal(), num_equal()
Other utility functions: base_digits, dec2base(), is_equal(), is_vect(), is_wholenumber(), num_as_char(), num_as_ordinal(), num_equal()

Examples
# (a) single string input:
base2dec("11")  # default base = 2
base2dec("0101")
base2dec("1010")
base2dec("11", base = 3)
base2dec("11", base = 5)
base2dec("11", base = 10)
base2dec("11", base = 12)
base2dec("11", base = 14)
base2dec("11", base = 16)
# (b) numeric vectors as inputs:
base2dec(c(0, 1, 0))
base2dec(c(0, 1, 0), base = 3)
# (c) character vector as inputs:
base2dec(c("0", "1", "0"))
base2dec(c("0", "1", "0"), base = 3)
# (d) multi-digit vectors:
base2dec(c(1, 1))
base2dec(c(1, 1), base = 3)
# Extreme values:
base2dec(rep("1", 32))          # 32 x "1"
base2dec(c("1", rep("0", 32)))  # 2^32
base2dec(rep("1", 33))          # 33 x "1"
base2dec(c("1", rep("0", 33)))  # 2^33
# Non-standard inputs:
base2dec(" ", 2)      # no non-spaces: NA
base2dec(" ?! ", 2)   # no base digits: NA
base2dec(" 100 ", 2)  # remove leading and trailing spaces
base2dec("- 100", 2)  # handle negative inputs (value < 0)
base2dec("- -100", 2) # handle double negations
base2dec("---100", 2) # handle multiple negations
# Special cases:
base2dec(NA)
base2dec(0)
base2dec(c(3, 3), base = 3)  # Note message!
# Note:
base2dec(dec2base(012340, base = 9), base = 9)
dec2base(base2dec(043210, base = 11), base = 11)

base_digits    Base digits: Sequence of numeric symbols (as named vector)

Description
base_digits provides numeral symbols (digits) for notational place-value systems with arbitrary bases (as a named character vector).

Usage
base_digits

Format
An object of class character of length 62.

Details
Note that the elements (digits) are character symbols (i.e., numeral digits "0"-"9", "A"-"F", etc.), whereas their names correspond to their numeric values (from 0 to length(base_digits) - 1).
Thus, the maximum base value in conversions by base2dec or dec2base is length(base_digits).

See Also
base2dec converts numerals in some base into decimal numbers; dec2base converts decimal numbers into numerals in another base; as.roman converts integers into Roman numerals.
Other numeric functions: base2dec(), dec2base(), is_equal(), is_wholenumber(), num_as_char(), num_as_ordinal(), num_equal()
Other utility functions: base2dec(), dec2base(), is_equal(), is_vect(), is_wholenumber(), num_as_char(), num_as_ordinal(), num_equal()

Examples
base_digits          # named character vector, zero-indexed names
length(base_digits)  # 62 (maximum base value)
base_digits[10]      # 10th element ("9" with name "9")
base_digits["10"]    # named element "10" ("A" with name "10")
base_digits[["10"]]  # element named "10" ("A")

Bushisms    Data: Bushisms.

Description
Bushisms contains phrases spoken by or attributed to U.S. president <NAME> (the 43rd president of the United States, in office from January 2001 to January 2009).

Usage
Bushisms

Format
A vector of type character with length(Bushisms) = 22.

Source
Data based on https://en.wikipedia.org/wiki/Bushism.

See Also
Other datasets: Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb

capitalize    Capitalize initial characters in strings of text x.

Description
capitalize converts the case of each element's (i.e., character string or word in text) n initial characters to upper or lowercase.

Usage
capitalize(x, n = 1, upper = TRUE, as_text = FALSE)

Arguments
x          A string of text (required).
n          Number of initial characters to convert. Default: n = 1.
upper      Convert to uppercase? Default: upper = TRUE.
as_text    Treat and return x as a text (i.e., one character string)? Default: as_text = FALSE.
Details
If as_text = TRUE, the input x is merged into one string of text and the arguments are applied to each word.

Value
A character vector.

See Also
caseflip for converting the case of all letters; words_to_text and text_to_words for converting character vectors and texts.
Other text objects and functions: Umlaut, caseflip(), cclass, chars_to_text(), collapse_chars(), count_chars_words(), count_chars(), count_words(), invert_rules(), l33t_rul35, map_text_chars(), map_text_coord(), map_text_regex(), metachar, read_ascii(), text_to_chars(), text_to_sentences(), text_to_words(), transl33t(), words_to_text()

Examples
x <- c("Hello world!", "this is a TEST sentence.", "the end.")
capitalize(x)
capitalize(tolower(x))
# Options:
capitalize(x, n = 3)                  # leaves strings intact
capitalize(x, n = 3, as_text = TRUE)  # treats strings as text
capitalize(x, n = 3, upper = FALSE)   # first n in lowercase

caseflip    Flip the case of characters in a string of text x.

Description
caseflip flips the case of all characters in a string of text x.

Usage
caseflip(x)

Arguments
x    A string of text (required).

Details
Internally, caseflip uses the letters and LETTERS constants of base R and the chartr function for replacing characters in strings of text.

Value
A character vector.

See Also
capitalize for converting the case of initial letters; chartr for replacing characters in strings of text.
Other text objects and functions: Umlaut, capitalize(), cclass, chars_to_text(), collapse_chars(), count_chars_words(), count_chars(), count_words(), invert_rules(), l33t_rul35, map_text_chars(), map_text_coord(), map_text_regex(), metachar, read_ascii(), text_to_chars(), text_to_sentences(), text_to_words(), transl33t(), words_to_text()

Examples
x <- c("Hello world!", "This is a 1st sentence.", "This is the 2nd sentence.", "The end.")
caseflip(x)

cclass    cclass provides character classes (as a named vector).

Description
cclass provides different character classes (as a named character vector).
Usage
cclass

Format
An object of class character of length 6.

Details
cclass allows illustrating matching character classes via regular expressions. See ?base::regex for details on regular expressions and ?"'" for a list of character constants/quotes in R.

See Also
metachar for a vector of metacharacters.
Other text objects and functions: Umlaut, capitalize(), caseflip(), chars_to_text(), collapse_chars(), count_chars_words(), count_chars(), count_words(), invert_rules(), l33t_rul35, map_text_chars(), map_text_coord(), map_text_regex(), metachar, read_ascii(), text_to_chars(), text_to_sentences(), text_to_words(), transl33t(), words_to_text()

Examples
cclass["hex"]  # select by name
writeLines(cclass["pun"])
grep("[[:alpha:]]", cclass, value = TRUE)

change_time    Change time and time zone (without changing time display).

Description
change_time changes the time and time zone without changing the time display.

Usage
change_time(time, tz = "")

Arguments
time    Time (as a scalar or vector). If time is not a local time (of the "POSIXlt" class) the function first tries coercing time into "POSIXlt" without changing the time display.
tz      Time zone (as character string). Default: tz = "" (i.e., current system time zone, Sys.timezone()). See OlsonNames() for valid options.

Details
change_time expects inputs to time to be local time(s) (of the "POSIXlt" class) and a valid time zone argument tz (as a string) and returns the same time display (but different actual times) as calendar time(s) (of the "POSIXct" class).

Value
A calendar time of class "POSIXct".

See Also
change_tz function which preserves time but changes time display; Sys.time() function of base R.
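The distinction between change_time (same displayed time, different actual instant) and its counterpart change_tz (same instant, different display) can be sketched with Python's datetime module. This is a cross-language illustration only (the package functions are R), and fixed UTC offsets stand in for named time zones:

```python
from datetime import datetime, timezone, timedelta

berlin = timezone(timedelta(hours=1))     # stand-in for "Europe/Berlin"
auckland = timezone(timedelta(hours=13))  # stand-in for "Pacific/Auckland"

t = datetime(2020, 1, 1, 10, 20, 30, tzinfo=berlin)

# change_time: keep the displayed wall-clock time, attach a new zone
# (which yields a different actual instant).
same_display = t.replace(tzinfo=auckland)

# change_tz: keep the actual instant, display it in another zone.
same_instant = t.astimezone(auckland)

print(same_display.isoformat())
print(same_instant.isoformat())
```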
Other date and time functions: change_tz(), cur_date(), cur_time(), days_in_month(), diff_dates(), diff_times(), diff_tz(), is_leap_year(), what_date(), what_month(), what_time(), what_wday(), what_week(), what_year(), zodiac()

Examples
change_time(as.POSIXlt(Sys.time()), tz = "UTC")
# from "POSIXlt" time:
t1 <- as.POSIXlt("2020-01-01 10:20:30", tz = "Europe/Berlin")
change_time(t1, "Pacific/Auckland")
change_time(t1, "America/Los_Angeles")
# from "POSIXct" time:
tc <- as.POSIXct("2020-07-01 12:00:00", tz = "UTC")
change_time(tc, "Pacific/Auckland")
# from "Date":
dt <- as.Date("2020-12-31", tz = "Pacific/Honolulu")
change_time(dt, tz = "Pacific/Auckland")
# from time "string":
ts <- "2020-12-31 20:30:45"
change_time(ts, tz = "America/Los_Angeles")
# from other "string" times:
tx <- "7:30:45"
change_time(tx, tz = "Asia/Calcutta")
ty <- "1:30"
change_time(ty, tz = "Europe/London")
# convert into local times:
(l1 <- as.POSIXlt("2020-06-01 10:11:12"))
change_tz(change_time(l1, "Pacific/Auckland"), tz = "UTC")
change_tz(change_time(l1, "Europe/Berlin"), tz = "UTC")
change_tz(change_time(l1, "America/New_York"), tz = "UTC")
# with vector of "POSIXlt" times:
(l2 <- as.POSIXlt("2020-12-31 23:59:55", tz = "America/Los_Angeles"))
(tv <- c(l1, l2))  # uses tz of l1
change_time(tv, "America/Los_Angeles")  # change time and tz

change_tz    Change time zone (without changing represented time).

Description
change_tz changes the nominal time zone (i.e., the time display) without changing the actual time.

Usage
change_tz(time, tz = "")

Arguments
time    Time (as a scalar or vector). If time is not a calendar time (of the "POSIXct" class) the function first tries coercing time into "POSIXct" without changing the denoted time.
tz      Time zone (as character string). Default: tz = "" (i.e., current system time zone, Sys.timezone()). See OlsonNames() for valid options.
Details
change_tz expects inputs to time to be calendar time(s) (of the "POSIXct" class) and a valid time zone argument tz (as a string) and returns the same time(s) as local time(s) (of the "POSIXlt" class).

Value
A local time of class "POSIXlt".

See Also
change_time function which preserves time display but changes time; Sys.time() function of base R.
Other date and time functions: change_time(), cur_date(), cur_time(), days_in_month(), diff_dates(), diff_times(), diff_tz(), is_leap_year(), what_date(), what_month(), what_time(), what_wday(), what_week(), what_year(), zodiac()

Examples
change_tz(Sys.time(), tz = "Pacific/Auckland")
change_tz(Sys.time(), tz = "Pacific/Honolulu")
# from "POSIXct" time:
tc <- as.POSIXct("2020-07-01 12:00:00", tz = "UTC")
change_tz(tc, "Australia/Melbourne")
change_tz(tc, "Europe/Berlin")
change_tz(tc, "America/Los_Angeles")
# from "POSIXlt" time:
tl <- as.POSIXlt("2020-07-01 12:00:00", tz = "UTC")
change_tz(tl, "Australia/Melbourne")
change_tz(tl, "Europe/Berlin")
change_tz(tl, "America/Los_Angeles")
# from "Date":
dt <- as.Date("2020-12-31")
change_tz(dt, "Pacific/Auckland")
change_tz(dt, "Pacific/Honolulu")  # Note different date!
# with a vector of "POSIXct" times:
t2 <- as.POSIXct("2020-12-31 23:59:55", tz = "America/Los_Angeles")
tv <- c(tc, t2)
tv  # Note: Both times in tz of tc
change_tz(tv, "America/Los_Angeles")

chars_to_text    Combine character inputs x into a single string of text.

Description
chars_to_text combines multi-element character inputs x into a single string of text (i.e., a character object of length 1), while preserving punctuation and spaces.

Usage
chars_to_text(x, sep = "")

Arguments
x      A vector (required), typically a character vector.
sep    Character to insert between the elements of a multi-element character vector as input x? Default: sep = "" (i.e., add nothing).

Details
chars_to_text is an inverse function of text_to_chars. Note that using paste(x, collapse = "") would remove spaces.
See collapse_chars for a simpler alternative.

Value
A character vector (of length 1).

See Also
collapse_chars for collapsing character vectors; text_to_chars for splitting text into a vector of characters; text_to_words for splitting text into a vector of words; strsplit for splitting strings.
Other text objects and functions: Umlaut, capitalize(), caseflip(), cclass, collapse_chars(), count_chars_words(), count_chars(), count_words(), invert_rules(), l33t_rul35, map_text_chars(), map_text_coord(), map_text_regex(), metachar, read_ascii(), text_to_chars(), text_to_sentences(), text_to_words(), transl33t(), words_to_text()

Examples
# (a) One string (with spaces and punctuation):
t1 <- "Hello world! This is _A TEST_. Does this work?"
(cv <- unlist(strsplit(t1, split = "")))
(t2 <- chars_to_text(cv))
t1 == t2
# (b) Multiple strings (nchar from 0 to >1):
s <- c("Hi", " ", "", "there!", " ", "", "Does THIS work?")
chars_to_text(s)
# Note: Using sep argument:
chars_to_text(c("Hi there!", "How are you today?"), sep = " ")
chars_to_text(1:3, sep = " | ")

coin    Flip a fair coin (with 2 sides "H" and "T") n times.

Description
coin generates a sequence of events that represent the results of flipping a fair coin n times.

Usage
coin(n = 1, events = c("H", "T"))

Arguments
n         Number of coin flips. Default: n = 1.
events    Possible outcomes (as a vector). Default: events = c("H", "T").

Details
By default, the 2 possible events for each flip are "H" (for "heads") and "T" (for "tails").

See Also
Other sampling functions: dice_2(), dice(), sample_char(), sample_date(), sample_time()

Examples
# Basics:
coin()
table(coin(n = 100))
table(coin(n = 100, events = LETTERS[1:3]))
# Note an oddity:
coin(10, events = 8:9)  # works as expected, but
coin(10, events = 9:9)  # odd: see sample() for an explanation.
# Limits:
coin(2:3)
coin(NA)
coin(0)
coin(1/2)
coin(3, events = "X")
coin(3, events = NA)
coin(NULL, NULL)

collapse_chars    Collapse character inputs x into a single string.
Description
collapse_chars converts multi-element character inputs x into a single string of text (i.e., a character object of length 1), separating its elements by sep.

Usage
collapse_chars(x, sep = " ")

Arguments
x      A vector (required), typically a character vector.
sep    A character inserted as separator/delimiter between elements when collapsing multi-element strings of x. Default: sep = " " (i.e., insert 1 space between elements).

Details
collapse_chars is a wrapper around paste(x, collapse = sep). It preserves spaces within the elements of x. The separator sep is only used when collapsing multi-element vectors and inserted between elements. See chars_to_text for combining character vectors into text.

Value
A character vector (of length 1).

See Also
chars_to_text for combining character vectors into text; text_to_chars for splitting text into a vector of characters; text_to_words for splitting text into a vector of words; strsplit for splitting strings.
Other text objects and functions: Umlaut, capitalize(), caseflip(), cclass, chars_to_text(), count_chars_words(), count_chars(), count_words(), invert_rules(), l33t_rul35, map_text_chars(), map_text_coord(), map_text_regex(), metachar, read_ascii(), text_to_chars(), text_to_sentences(), text_to_words(), transl33t(), words_to_text()

Examples
collapse_chars(c("Hello", "world", "!"))
collapse_chars(c("_", " _ ", " _ "), sep = "|")  # preserves spaces
writeLines(collapse_chars(c("Hello", "world", "!"), sep = "\n"))
collapse_chars(1:3, sep = "")

countries    Data: Names of countries.

Description
countries is a dataset containing the names of 197 countries (as a vector of text strings).

Usage
countries

Format
A vector of type character with length(countries) = 197.

Source
Data from https://www.gapminder.org: Original data at https://www.gapminder.org/data/documentation/gd004/.
See Also
Other datasets: Bushisms, Trumpisms, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb

count_chars    Count the frequency of characters in a string of text x.

Description
count_chars provides frequency counts of the characters in a string of text x as a named numeric vector.

Usage
count_chars(x, case_sense = TRUE, rm_specials = TRUE, sort_freq = TRUE)

Arguments
x              A string of text (required).
case_sense     Boolean: Distinguish lower- vs. uppercase characters? Default: case_sense = TRUE.
rm_specials    Boolean: Remove special characters? Default: rm_specials = TRUE.
sort_freq      Boolean: Sort output by character frequency? Default: sort_freq = TRUE.

Details
If rm_specials = TRUE (as per default), most special (or non-word) characters are removed and not counted. (Note that this currently works without using regular expressions.) The quantification is case-sensitive and the resulting vector is sorted by name (alphabetically) or by frequency (per default).

Value
A named numeric vector.

See Also
count_words for counting the frequency of words; count_chars_words for counting both characters and words; plot_chars for a corresponding plotting function.
Other text objects and functions: Umlaut, capitalize(), caseflip(), cclass, chars_to_text(), collapse_chars(), count_chars_words(), count_words(), invert_rules(), l33t_rul35, map_text_chars(), map_text_coord(), map_text_regex(), metachar, read_ascii(), text_to_chars(), text_to_sentences(), text_to_words(), transl33t(), words_to_text()

Examples
# Default:
x <- c("Hello world!", "This is a 1st sentence.", "This is the 2nd sentence.", "THE END.")
count_chars(x)
# Options:
count_chars(x, case_sense = FALSE)
count_chars(x, rm_specials = FALSE)
count_chars(x, sort_freq = FALSE)

count_chars_words    Count the frequency of characters and words in a string of text x.

Description
count_chars_words provides frequency counts of the characters and words of a string of text x on a per character basis.

Usage
count_chars_words(x, case_sense = TRUE, sep = "|", rm_sep = TRUE)

Arguments
x             A string of text (required).
case_sense    Boolean: Distinguish lower- vs. uppercase characters? Default: case_sense = TRUE.
sep           Dummy character(s) to insert between elements/lines when parsing a multi-element character vector x as input. This character is inserted to mark word boundaries in multi-element inputs x (without punctuation at the boundary). It should NOT occur anywhere in x, so that it can be removed again (by rm_sep = TRUE). Default: sep = "|" (i.e., insert a vertical bar between lines).
rm_sep        Should sep be removed from output? Default: rm_sep = TRUE.

Details
count_chars_words calls both count_chars and count_words and maps their results to a data frame that contains a row for each character of x. The quantifications are case-sensitive. Special characters (e.g., parentheses, punctuation, and spaces) are counted as characters, but removed from word counts. If input x consists of multiple text strings, they are collapsed with an added " " (space) between them.

Value
A data frame with 4 variables (char, char_freq, word, word_freq).
See Also
count_chars for counting the frequency of characters; count_words for counting the frequency of words; plot_chars for a character plotting function.
Other text objects and functions: Umlaut, capitalize(), caseflip(), cclass, chars_to_text(), collapse_chars(), count_chars(), count_words(), invert_rules(), l33t_rul35, map_text_chars(), map_text_coord(), map_text_regex(), metachar, read_ascii(), text_to_chars(), text_to_sentences(), text_to_words(), transl33t(), words_to_text()

Examples
s1 <- ("This test is to test this function.")
head(count_chars_words(s1))
head(count_chars_words(s1, case_sense = FALSE))
s3 <- c("A 1st sentence.", "The 2nd sentence.", "A 3rd --- and also THE FINAL --- SENTENCE.")
tail(count_chars_words(s3))
tail(count_chars_words(s3, case_sense = FALSE))

count_words    Count the frequency of words in a string of text x.

Description
count_words provides frequency counts of the words in a string of text x as a named numeric vector.

Usage
count_words(x, case_sense = TRUE, sort_freq = TRUE)

Arguments
x             A string of text (required).
case_sense    Boolean: Distinguish lower- vs. uppercase characters? Default: case_sense = TRUE.
sort_freq     Boolean: Sort output by word frequency? Default: sort_freq = TRUE.

Details
Special (or non-word) characters are removed and not counted. The quantification is case-sensitive and the resulting vector is sorted by name (alphabetically) or by frequency (per default).

Value
A named numeric vector.

See Also
count_chars for counting the frequency of characters; count_chars_words for counting both characters and words; plot_chars for a character plotting function.
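The kind of tallying that count_chars and count_words perform can be sketched in Python with collections.Counter. This is an illustration of the counting logic only; the ds4psy functions are R and differ in details such as their handling of special characters:

```python
from collections import Counter
import re

s = "This test is to test this function."

# count_words analogue: split on non-word characters, drop empties,
# tally case-sensitively (like the default case_sense = TRUE).
words = [w for w in re.split(r"\W+", s) if w]
word_freq = Counter(words)

# count_chars analogue: tally characters, skipping spaces and
# punctuation (akin to rm_specials = TRUE).
char_freq = Counter(c for c in s if c.isalnum())

print(word_freq.most_common(3))
print(char_freq["t"])
```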
Other text objects and functions: Umlaut, capitalize(), caseflip(), cclass, chars_to_text(), collapse_chars(), count_chars_words(), count_chars(), invert_rules(), l33t_rul35, map_text_chars(), map_text_coord(), map_text_regex(), metachar, read_ascii(), text_to_chars(), text_to_sentences(), text_to_words(), transl33t(), words_to_text()

Examples
# Default:
s3 <- c("A first sentence.", "The second sentence.", "A third --- and also THE FINAL --- SENTENCE.")
count_words(s3)  # case-sensitive, sorts by frequency
# Options:
count_words(s3, case_sense = FALSE)  # case insensitive
count_words(s3, sort_freq = FALSE)   # sorts alphabetically

cur_date    Current date (in yyyy-mm-dd or dd-mm-yyyy format).

Description
cur_date provides a relaxed version of Sys.time() that is sufficient for most purposes.

Usage
cur_date(rev = FALSE, as_string = TRUE, sep = "-")

Arguments
rev          Boolean: Reverse from "yyyy-mm-dd" to "dd-mm-yyyy" format? Default: rev = FALSE.
as_string    Boolean: Return as character string? Default: as_string = TRUE. If as_string = FALSE, a "Date" object is returned.
sep          Character: Separator to use. Default: sep = "-".

Details
By default, cur_date returns Sys.Date as a character string (using current system settings and sep for formatting). If as_string = FALSE, a "Date" object is returned. Alternatively, consider using Sys.Date or Sys.time() to obtain the "%Y-%m-%d" format according to the ISO 8601 standard. For more options, see the documentations of the date and Sys.Date functions of base R and the formatting options for Sys.time().

Value
A character string or object of class "Date".

See Also
what_date() function to print dates with more options; date() and today() functions of the lubridate package; date(), Sys.Date(), and Sys.time() functions of base R.
Other date and time functions: change_time(), change_tz(), cur_time(), days_in_month(), diff_dates(), diff_times(), diff_tz(), is_leap_year(), what_date(), what_month(), what_time(), what_wday(), what_week(), what_year(), zodiac()

Examples
cur_date()
cur_date(sep = "/")
cur_date(rev = TRUE)
cur_date(rev = TRUE, sep = ".")
# return a "Date" object:
from <- cur_date(as_string = FALSE)
class(from)

cur_time    Current time (in hh:mm or hh:mm:ss format).

Description
cur_time provides a satisficing version of Sys.time() that is sufficient for most purposes.

Usage
cur_time(seconds = FALSE, as_string = TRUE, sep = ":")

Arguments
seconds      Boolean: Show time with seconds? Default: seconds = FALSE.
as_string    Boolean: Return as character string? Default: as_string = TRUE. If as_string = FALSE, a "POSIXct" object is returned.
sep          Character: Separator to use. Default: sep = ":".

Details
By default, cur_time returns Sys.time() as a character string (in "%H:%M" or "%H:%M:%S" format) using current system settings. If as_string = FALSE, a "POSIXct" (calendar time) object is returned. For a time zone argument, see the what_time function, or the now() function of the lubridate package.

Value
A character string or object of class "POSIXct".

See Also
what_time() function to print times with more options; now() function of the lubridate package; Sys.time() function of base R.
Other date and time functions: change_time(), change_tz(), cur_date(), days_in_month(), diff_dates(), diff_times(), diff_tz(), is_leap_year(), what_date(), what_month(), what_time(), what_wday(), what_week(), what_year(), zodiac()

Examples
cur_time()
cur_time(seconds = TRUE)
cur_time(sep = ".")
# return a "POSIXct" object:
t <- cur_time(as_string = FALSE)
format(t, "%T %Z")

data_1    Data import data_1.

Description
data_1 is a fictitious dataset to practice importing data (from a DELIMITED file).

Usage
data_1

Format
A table with 100 cases (rows) and 4 variables (columns).

Source
See DELIMITED data at http://rpository.com/ds4psy/data/data_1.dat.
See Also
Other datasets: Bushisms, Trumpisms, countries, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb

data_2    Data import data_2.

Description
data_2 is a fictitious dataset to practice importing data (from a FWF file).

Usage
data_2

Format
A table with 100 cases (rows) and 4 variables (columns).

Source
See FWF data at http://rpository.com/ds4psy/data/data_2.dat.

See Also
Other datasets: Bushisms, Trumpisms, countries, data_1, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb

data_t1    Data table data_t1.

Description
data_t1 is a fictitious dataset to practice importing and joining data (from a CSV file).

Usage
data_t1

Format
A table with 20 cases (rows) and 4 variables (columns).

Source
See CSV data at http://rpository.com/ds4psy/data/data_t1.csv.

See Also
Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb

data_t1_de    Data import data_t1_de.

Description
data_t1_de is a fictitious dataset to practice importing data (from a CSV file, de/European style).

Usage
data_t1_de

Format
A table with 20 cases (rows) and 4 variables (columns).

Source
See CSV data at http://rpository.com/ds4psy/data/data_t1_de.csv.
See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb data_t1_tab Data import data_t1_tab. Description data_t1_tab is a fictitious dataset to practice importing data (from a TAB file). Usage data_t1_tab Format A table with 20 cases (rows) and 4 variables (columns). Source See TAB-delimited data at http://rpository.com/ds4psy/data/data_t1_tab.csv. See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb data_t2 Data table data_t2. Description data_t2 is a fictitious dataset to practice importing and joining data (from a CSV file). Usage data_t2 Format A table with 20 cases (rows) and 4 variables (columns). Source See CSV data at http://rpository.com/ds4psy/data/data_t2.csv. See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb data_t3 Data table data_t3. Description data_t3 is a fictitious dataset to practice importing and joining data (from a CSV file). Usage data_t3 Format A table with 20 cases (rows) and 4 variables (columns). Source See CSV data at http://rpository.com/ds4psy/data/data_t3.csv. 
See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb data_t4 Data table data_t4. Description data_t4 is a fictitious dataset to practice importing and joining data (from a CSV file). Usage data_t4 Format A table with 20 cases (rows) and 4 variables (columns). Source See CSV data at http://rpository.com/ds4psy/data/data_t4.csv. See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb days_in_month How many days are in a month (of given date)? Description days_in_month computes the number of days in the months of given dates (provided as a date or time dt, or number/string denoting a 4-digit year). Usage days_in_month(dt = Sys.Date(), ...) Arguments dt Date or time (scalar or vector). Default: dt = Sys.Date(). Numbers or strings with dates are parsed into 4-digit numbers denoting the year. ... Other parameters (passed to as.Date()). Details The function requires dt as "Dates", rather than month names or numbers, to check for leap years (in which February has 29 days). Value A named (numeric) vector. See Also is_leap_year to check for leap years; diff_tz for time zone-based time differences; days_in_month function of the lubridate package. 
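The leap-year-aware lookup that days_in_month performs can be sketched in base R (a simplified illustration, not the package's implementation):

```r
# Sketch: month lengths with leap-year handling (base R only).
days_in_month_sketch <- function(d) {
  y <- as.integer(format(d, "%Y"))
  m <- as.integer(format(d, "%m"))
  leap <- (y %% 4 == 0 & y %% 100 != 0) | (y %% 400 == 0)
  c(31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31)[m] + (m == 2) * leap
}
days_in_month_sketch(as.Date("2020-02-10"))  # 29 (2020 is a leap year)
days_in_month_sketch(as.Date("2021-02-10"))  # 28
```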
Other date and time functions: change_time(), change_tz(), cur_date(), cur_time(), diff_dates(), diff_times(), diff_tz(), is_leap_year(), what_date(), what_month(), what_time(), what_wday(), what_week(), what_year(), zodiac() Examples days_in_month() # Robustness: days_in_month(Sys.Date()) # Date days_in_month(Sys.time()) # POSIXct days_in_month("2020-07-01") # string days_in_month(20200901) # number days_in_month(c("2020-02-10 01:02:03", "2021-02-11", "2024-02-12")) # vectors of strings # For leap years: ds <- as.Date("2020-02-20") + (365 * 0:4) days_in_month(ds) # (2020/2024 are leap years) dec2base Convert an integer from decimal notation into a string of numeric digits in some base. Description dec2base converts an integer from its standard decimal notation (i.e., using positional numerals with a base or radix of 10) into a sequence of numeric symbols (digits) in some other base. Usage dec2base(x, base = 2) Arguments x A (required) integer in decimal (base 10) notation or corresponding string of digits (i.e., digits 0-9). base The base or radix of the digits in the output. Default: base = 2 (binary). Details See base_digits for the sequence of default digits. To prevent erroneous interpretations of numeric outputs, dec2base returns a sequence of digits (as a character string). dec2base is the complement of base2dec. Value A character string of digits (in base notation). See Also base2dec converts numerals in some base into decimal numbers; as.roman converts integers into Roman numerals.
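The positional conversion that dec2base describes can be sketched in base R, using digits 0-9 followed by A-Z (mirroring the idea of base_digits; an illustration, not the package's code):

```r
# Sketch: repeated division by the base, collecting remainders.
dec2base_sketch <- function(x, base = 2) {
  digits <- c(0:9, LETTERS)  # coerced to character: "0"-"9", "A"-"Z"
  out <- character(0)
  repeat {
    out <- c(digits[x %% base + 1], out)  # prepend least significant digit
    x <- x %/% base
    if (x == 0) break
  }
  paste(out, collapse = "")
}
dec2base_sketch(8, base = 2)    # "1000"
dec2base_sketch(47, base = 16)  # "2F"
```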
Other numeric functions: base2dec(), base_digits, is_equal(), is_wholenumber(), num_as_char(), num_as_ordinal(), num_equal() Other utility functions: base2dec(), base_digits, is_equal(), is_vect(), is_wholenumber(), num_as_char(), num_as_ordinal(), num_equal() Examples # (a) single numeric input: dec2base(3) # base = 2 dec2base(8, base = 2) dec2base(8, base = 3) dec2base(8, base = 7) dec2base(100, base = 5) dec2base(100, base = 10) dec2base(100, base = 15) dec2base(14, base = 14) dec2base(15, base = 15) dec2base(16, base = 16) dec2base(15, base = 16) dec2base(31, base = 16) dec2base(47, base = 16) # (b) single string input: dec2base("7", base = 2) dec2base("8", base = 3) # Extreme values: dec2base(base2dec(rep("1", 32))) # 32 x "1" dec2base(base2dec(c("1", rep("0", 32)))) # 2^32 dec2base(base2dec(rep("1", 33))) # 33 x "1" dec2base(base2dec(c("1", rep("0", 33)))) # 2^33 # Non-standard inputs: dec2base(" ") # only spaces: NA dec2base("?") # no decimal digits: NA dec2base(" 10 ", 2) # remove leading and trailing spaces dec2base("-10", 2) # handle negative inputs (in character strings) dec2base(" -- 10", 2) # handle multiple negations dec2base("xy -10 ", 2) # ignore non-decimal digit prefixes # Note: base2dec(dec2base(012340, base = 9), base = 9) dec2base(base2dec(043210, base = 11), base = 11) dice Throw a fair dice (with a given number of sides) n times. Description dice generates a sequence of events that represent the results of throwing a fair dice (with a given number of events or number of sides) n times. Usage dice(n = 1, events = 1:6) Arguments n Number of dice throws. Default: n = 1. events Events to draw from (or number of sides). Default: events = 1:6. Details By default, the 6 possible events for each throw of the dice are the numbers from 1 to 6. 
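The behavior described above (n independent draws from events) can be mimicked with base R's sample(); dice() presumably adds input checks on top of this idea:

```r
# Sketch: a fair dice as sampling with replacement (not the package code).
# Note: a length-1 numeric `events` triggers sample()'s 1:x behavior,
# the "oddity" mentioned in the Examples.
dice_sketch <- function(n = 1, events = 1:6) {
  sample(events, size = n, replace = TRUE)
}
set.seed(1)
dice_sketch(5)
table(dice_sketch(10000))  # roughly uniform counts over 1:6
```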
See Also Other sampling functions: coin(), dice_2(), sample_char(), sample_date(), sample_time() Examples # Basics: dice() table(dice(10^4)) # 5-sided dice: dice(events = 1:5) table(dice(100, events = 5)) # Strange dice: dice(5, events = 8:9) table(dice(100, LETTERS[1:3])) # Note: dice(10, 1) table(dice(100, 2)) # Note an oddity: dice(10, events = 8:9) # works as expected, but dice(10, events = 9:9) # odd: see sample() for an explanation. # Limits: dice(NA) dice(0) dice(1/2) dice(2:3) dice(5, events = NA) dice(5, events = 1/2) dice(NULL, NULL) dice_2 Throw a questionable dice (with a given number of sides) n times. Description dice_2 is a variant of dice that generates a sequence of events that represent the results of throwing a dice (with a given number of sides) n times. Usage dice_2(n = 1, sides = 6) Arguments n Number of dice throws. Default: n = 1. sides Number of sides. Default: sides = 6. Details Something is wrong with this dice. Can you examine it and measure its problems in a quantitative fashion? See Also Other sampling functions: coin(), dice(), sample_char(), sample_date(), sample_time() Examples # Basics: dice_2() table(dice_2(100)) # 10-sided dice: dice_2(sides = 10) table(dice_2(100, sides = 10)) # Note: dice_2(10, 1) table(dice_2(5000, sides = 5)) # Note an oddity: dice_2(n = 10, sides = 8:9) # works, but dice_2(n = 10, sides = 9:9) # odd: see sample() for an explanation. diff_dates Get the difference between two dates (in human units). Description diff_dates computes the difference between two dates (i.e., from some from_date to some to_date) in human measurement units (periods). Usage diff_dates( from_date, to_date = Sys.Date(), unit = "years", as_character = TRUE ) Arguments from_date From date (required, scalar or vector, as "Date"). Date of birth (DOB), assumed to be of class "Date", and coerced into "Date" when of class "POSIXt". to_date To date (optional, scalar or vector, as "Date"). Default: to_date = Sys.Date(). 
Maximum date/date of death (DOD), assumed to be of class "Date", and coerced into "Date" when of class "POSIXt". unit Largest measurement unit for representing results. Units represent human time periods, rather than chronological time differences. Default: unit = "years" for completed years, months, and days. Options available: 1. unit = "years": completed years, months, and days (default) 2. unit = "months": completed months, and days 3. unit = "days": completed days Units may be abbreviated. as_character Boolean: Return output as character? Default: as_character = TRUE. If as_character = FALSE, results are returned as columns of a data frame and include from_date and to_date. Details diff_dates answers questions like "How much time has elapsed between two dates?" or "How old are you?" in human time periods of (full) years, months, and days. Key characteristics: • If to_date or from_date are not "Date" objects, diff_dates aims to coerce them into "Date" objects. • If to_date is missing (i.e., NA), to_date is set to today’s date (i.e., Sys.Date()). • If to_date is specified, any intermittent missing values (i.e., NA) are set to today’s date (i.e., Sys.Date()). Thus, dead people (with both birth dates and death dates specified) do not age any further, but people still alive (with is.na(to_date)) are measured to today’s date (i.e., Sys.Date()). • If to_date precedes from_date (i.e., from_date > to_date), computations are performed on swapped days and the result is marked as negative (by a character "-") in the output. • If the lengths of from_date and to_date differ, the shorter vector is recycled to the length of the longer one. By default, diff_dates provides output as (signed) character strings. For numeric outputs, use as_character = FALSE. Value A character vector or data frame (with dates, sign, and numeric columns for units). See Also Time spans (interval as.period) in the lubridate package.
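The "completed periods" idea can be sketched for the years unit (a simplification that ignores the swapping and NA handling described above; not the package's code):

```r
# Sketch: completed (full) years between two dates.
full_years <- function(from, to = Sys.Date()) {
  from <- as.Date(from); to <- as.Date(to)
  y <- as.integer(format(to, "%Y")) - as.integer(format(from, "%Y"))
  # one year less if the month-day of `to` precedes that of `from`:
  y - (format(to, "%m%d") < format(from, "%m%d"))
}
full_years("2000-06-15", "2020-06-14")  # 19
full_years("2000-06-15", "2020-06-15")  # 20
```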
Other date and time functions: change_time(), change_tz(), cur_date(), cur_time(), days_in_month(), diff_times(), diff_tz(), is_leap_year(), what_date(), what_month(), what_time(), what_wday(), what_week(), what_year(), zodiac() Examples y_100 <- Sys.Date() - (100 * 365.25) + -1:1 diff_dates(y_100) # with "to_date" argument: y_050 <- Sys.Date() - (50 * 365.25) + -1:1 diff_dates(y_100, y_050) diff_dates(y_100, y_050, unit = "d") # days (with decimals) # Time unit and output format: ds_from <- as.Date("2010-01-01") + 0:2 ds_to <- as.Date("2020-03-01") # (2020 is leap year) diff_dates(ds_from, ds_to, unit = "y", as_character = FALSE) # years diff_dates(ds_from, ds_to, unit = "m", as_character = FALSE) # months diff_dates(ds_from, ds_to, unit = "d", as_character = FALSE) # days # Robustness: days_cur_year <- 365 + is_leap_year(Sys.Date()) diff_dates(Sys.time() - (1 * (60 * 60 * 24) * days_cur_year)) # for POSIXt times diff_dates("10-08-11", "20-08-10") # for strings diff_dates(20200228, 20200301) # for numbers (2020 is leap year) # Recycling "to_date" to length of "from_date": y_050_2 <- Sys.Date() - (50 * 365.25) diff_dates(y_100, y_050_2) # Note maxima and minima: diff_dates("0000-01-01", "9999-12-31") # max. d + m + y diff_dates("1000-06-01", "1000-06-01") # min. d + m + y # If from_date == to_date: diff_dates("2000-01-01", "2000-01-01") # If from_date > to_date: diff_dates("2000-01-02", "2000-01-01") # Note negation "-" diff_dates("2000-02-01", "2000-01-01", as_character = TRUE) diff_dates("2001-02-02", "2000-02-02", as_character = FALSE) # Test random date samples: f_d <- sample_date(size = 10) t_d <- sample_date(size = 10) diff_dates(f_d, t_d, as_character = TRUE) # Using 'fame' data: dob <- as.Date(fame$DOB, format = "%B %d, %Y") dod <- as.Date(fame$DOD, format = "%B %d, %Y") head(diff_dates(dob, dod)) # Note: Deceased people do not age further. 
head(diff_dates(dob, dod, as_character = FALSE)) # numeric outputs diff_times Get the difference between two times (in human units). Description diff_times computes the difference between two times (i.e., from some from_time to some to_time) in human measurement units (periods). Usage diff_times(from_time, to_time = Sys.time(), unit = "days", as_character = TRUE) Arguments from_time From time (required, scalar or vector, as "POSIXct"). Origin time, assumed to be of class "POSIXct", and coerced into "POSIXct" when of class "Date" or "POSIXlt". to_time To time (optional, scalar or vector, as "POSIXct"). Default: to_time = Sys.time(). Maximum time, assumed to be of class "POSIXct", and coerced into "POSIXct" when of class "Date" or "POSIXlt". unit Largest measurement unit for representing results. Units represent human time periods, rather than chronological time differences. Default: unit = "days" for completed days, hours, minutes, and seconds. Options available: 1. unit = "years": completed years, months, and days 2. unit = "months": completed months, and days 3. unit = "days": completed days (default) 4. unit = "hours": completed hours 5. unit = "minutes": completed minutes 6. unit = "seconds": completed seconds Units may be abbreviated. as_character Boolean: Return output as character? Default: as_character = TRUE. If as_character = FALSE, results are returned as columns of a data frame and include from_time and to_time. Details diff_times answers questions like "How much time has elapsed between two times?" or "How old are you?" in human time periods of (full) years, months, and days. Key characteristics: • If to_time or from_time are not "POSIXct" objects, diff_times aims to coerce them into "POSIXct" objects. • If to_time is missing (i.e., NA), to_time is set to the current time (i.e., Sys.time()). • If to_time is specified, any intermittent missing values (i.e., NA) are set to the current time (i.e., Sys.time()).
• If to_time precedes from_time (i.e., from_time > to_time), computations are performed on swapped times and the result is marked as negative (by a character "-") in the output. • If the lengths of from_time and to_time differ, the shorter vector is recycled to the length of the longer one. By default, diff_times provides output as (signed) character strings. For numeric outputs, use as_character = FALSE. Value A character vector or data frame (with times, sign, and numeric columns for units). See Also diff_dates for date differences; time spans (an interval as.period) in the lubridate package. Other date and time functions: change_time(), change_tz(), cur_date(), cur_time(), days_in_month(), diff_dates(), diff_tz(), is_leap_year(), what_date(), what_month(), what_time(), what_wday(), what_week(), what_year(), zodiac() Examples t1 <- as.POSIXct("1969-07-13 13:53 CET") # (before UNIX epoch) diff_times(t1, unit = "years", as_character = TRUE) diff_times(t1, unit = "secs", as_character = TRUE) diff_tz Get the time zone difference between two times. Description diff_tz computes the time difference between two times t1 and t2 that is exclusively due to both times being in different time zones. Usage diff_tz(t1, t2, in_min = FALSE) Arguments t1 First time (required, as "POSIXt" time point/moment). t2 Second time (required, as "POSIXt" time point/moment). in_min Return time-zone based time difference in minutes (Boolean)? Default: in_min = FALSE. Details diff_tz ignores all differences in nominal times, but allows adjusting time-based computations for time shifts that are due to time zone differences (e.g., different locations, or changes to/from daylight saving time, DST), rather than differences in actual times. Internally, diff_tz determines and contrasts the POSIX conversion specification "%z" (i.e., each time’s signed offset from UTC, in numeric form). If the lengths of t1 and t2 differ, the shorter vector is recycled to the length of the longer one.
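The offset arithmetic described above can be sketched with base R's "%z" conversion specification, which yields a signed HHMM offset from UTC (an illustration, not the package's code):

```r
# Sketch: time-zone offset of a time, in minutes from UTC.
tz_offset_min <- function(t) {
  z <- as.integer(format(t, "%z"))  # e.g., 1300 for UTC+13:00
  sign(z) * (abs(z) %/% 100 * 60 + abs(z) %% 100)
}
t1 <- as.POSIXct("2020-01-01 01:00:00", tz = "Pacific/Auckland")
t2 <- as.POSIXct("2020-01-01 01:00:00", tz = "Europe/Berlin")
tz_offset_min(t2) - tz_offset_min(t1)  # minutes purely due to time zones
```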
Value A character (in "HH:MM" format) or numeric vector (number of minutes). See Also days_in_month for the number of days in given months; is_leap_year to check for leap years. Other date and time functions: change_time(), change_tz(), cur_date(), cur_time(), days_in_month(), diff_dates(), diff_times(), is_leap_year(), what_date(), what_month(), what_time(), what_wday(), what_week(), what_year(), zodiac() Examples # Time zone differences: tm <- "2020-01-01 01:00:00" # nominal time t1 <- as.POSIXct(tm, tz = "Pacific/Auckland") t2 <- as.POSIXct(tm, tz = "Europe/Berlin") t3 <- as.POSIXct(tm, tz = "Pacific/Honolulu") # as character (in "HH:MM"): diff_tz(t1, t2) diff_tz(t2, t3) diff_tz(t1, t3) # as numeric (in minutes): diff_tz(t1, t3, in_min = TRUE) # Compare local times (POSIXlt): t4 <- as.POSIXlt(Sys.time(), tz = "Pacific/Auckland") t5 <- as.POSIXlt(Sys.time(), tz = "Europe/Berlin") diff_tz(t4, t5) diff_tz(t4, t5, in_min = TRUE) # DST shift: Spring ahead (on 2020-03-29: 02:00:00 > 03:00:00): s6 <- "2020-03-29 01:00:00 CET" # before DST switch s7 <- "2020-03-29 03:00:00 CEST" # after DST switch t6 <- as.POSIXct(s6, tz = "Europe/Berlin") # CET t7 <- as.POSIXct(s7, tz = "Europe/Berlin") # CEST diff_tz(t6, t7) # 1 hour forward diff_tz(t6, t7, in_min = TRUE) ds4psy.guide Opens user guide of the ds4psy package. Description Opens user guide of the ds4psy package. Usage ds4psy.guide() dt_10 Data from 10 Danish people. Description dt_10 contains precise DOB information of 10 non-existent, but definitely Danish people. Usage dt_10 Format A table with 10 cases (rows) and 7 variables (columns). Source See CSV data file at http://rpository.com/ds4psy/data/dt_10.csv.
See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb exp_num_dt Data from an experiment with numeracy and date-time variables. Description exp_num_dt is a fictitious dataset describing 1000 non-existing, but surprisingly friendly people. Usage exp_num_dt Format A table with 1000 cases (rows) and 15 variables (columns). Details Codebook The table contains 15 columns/variables: • 1. name: Participant initials. • 2. gender: Self-identified gender. • 3. bday: Day (within month) of DOB. • 4. bmonth: Month (within year) of DOB. • 5. byear: Year of DOB. • 6. height: Height (in cm). • 7. blood_type: Blood type. • 8. bnt_1 to 11. bnt_4: Correct response to BNT question? (1: correct, 0: incorrect). • 12. g_iq and 13. s_iq: Scores from two IQ tests (general vs. social). • 14. t_1 and 15. t_2: Start and end time. exp_num_dt was generated for analyzing test scores (e.g., IQ, numeracy), for converting data from wide into long format, and for dealing with date- and time-related variables. Source See CSV data files at http://rpository.com/ds4psy/data/numeracy.csv and http://rpository. com/ds4psy/data/dt.csv. See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb exp_wide Data exp_wide. Description exp_wide is a fictitious dataset to practice tidying data (here: converting from wide to long format). Usage exp_wide Format A table with 10 cases (rows) and 7 variables (columns). 
Source See CSV data at http://rpository.com/ds4psy/data/exp_wide.csv. See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb falsePosPsy_all False Positive Psychology data. Description falsePosPsy_all is a dataset containing the data from 2 studies designed to highlight problematic research practices within psychology. Usage falsePosPsy_all Format A table with 78 cases (rows) and 19 variables (columns): Details <NAME> and Simonsohn (2011) published a controversial article with a necessarily false finding. By conducting simulations and 2 simple behavioral experiments, the authors show that flexibility in data collection, analysis, and reporting dramatically increases the rate of false-positive findings. study Study ID. id Participant ID. aged Days since participant was born (based on their self-reported birthday). aged365 Age in years. female Is participant a woman? 1: yes, 2: no. dad Father’s age (in years). mom Mother’s age (in years). potato Did the participant hear the song ’Hot Potato’ by The Wiggles? 1: yes, 2: no. when64 Did the participant hear the song ’When I am 64’ by The Beatles? 1: yes, 2: no. kalimba Did the participant hear the song ’Kalimba’ by Mr. Scrub? 1: yes, 2: no. cond In which condition was the participant? control: Subject heard the song ’Kalimba’ by Mr. Scrub; potato: Subject heard the song ’Hot Potato’ by The Wiggles; 64: Subject heard the song ’When I am 64’ by The Beatles. root Could participant report the square root of 100? 1: yes, 2: no. bird Imagine a restaurant you really like offered a 30 percent discount for dining between 4pm and 6pm. How likely would you be to take advantage of that offer? Scale from 1: very unlikely, 7: very likely. 
political In the political spectrum, where would you place yourself? Scale: 1: very liberal, 2: liberal, 3: centrist, 4: conservative, 5: very conservative. quarterback If you had to guess who was chosen the quarterback of the year in Canada last year, which of the following four options would you choose? 1: <NAME>, 2: <NAME>, 3: <NAME>, 4: <NAME>. olddays How often have you referred to some past part of your life as “the good old days”? Scale: 11: never, 12: almost never, 13: sometimes, 14: often, 15: very often. feelold How old do you feel? Scale: 1: very young, 2: young, 3: neither young nor old, 4: old, 5: very old. computer Computers are complicated machines. Scale from 1: strongly disagree, to 5: strongly agree. diner Imagine you were going to a diner for dinner tonight, how much do you think you would like the food? Scale from 1: dislike extremely, to 9: like extremely. See https://bookdown.org/hneth/ds4psy/B-2-datasets-false.html for codebook and more information. Source Articles • <NAME>., <NAME>., & <NAME>. (2011). False-positive psychology: Undis- closed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. doi: 10.1177/0956797611417632 • <NAME>., <NAME>., & <NAME>. (2014). Data from paper "False-Positive Psy- chology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant". Journal of Open Psychology Data, 2(1), e1. doi: 10.5334/jopd.aa See files at https://openpsychologydata.metajnl.com/articles/10.5334/jopd.aa/ and the archive at https://zenodo.org/record/7664 for original dataset. See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb fame Data table fame. 
Description fame is a dataset to practice working with dates. fame contains the names, areas, dates of birth (DOB), and — if applicable — the dates of death (DOD) of famous people. Usage fame Format A table with 67 cases (rows) and 4 variables (columns). Source Student solutions to exercises, dates mostly from https://www.wikipedia.org/. See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb flowery Data: Flowery phrases. Description flowery contains versions and variations of Gertrude Stein’s popular phrase "A rose is a rose is a rose". Usage flowery Format A vector of type character with length(flowery) = 60. Details The phrase stems from Gertrude Stein’s poem "<NAME>" (written in 1913 and published in 1922, in "Geography and Plays"). The verbatim line in the poem actually reads "Rose is a rose is a rose is a rose". See https://en.wikipedia.org/wiki/Rose_is_a_rose_is_a_rose_is_a_rose for additional variations and sources. Source Data based on https://en.wikipedia.org/wiki/Rose_is_a_rose_is_a_rose_is_a_rose. See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb fruits Data: Names of fruits. Description fruits is a dataset containing the names of 122 fruits (as a vector of text strings). Usage fruits Format A vector of type character with length(fruits) = 122. Details Botanically, "fruits" are the seed-bearing structures of flowering plants (angiosperms) formed from the ovary after flowering. 
In common usage, "fruits" refer to the fleshy seed-associated structures of a plant that taste sweet or sour, and are edible in their raw state. Source Data based on https://simple.wikipedia.org/wiki/List_of_fruits. See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb get_set Get a set of x-y coordinates. Description get_set obtains a set of x/y coordinates and returns it (as a data frame). Usage get_set(n = 1) Arguments n Number of set (as an integer from 1 to 4). Default: n = 1. Details Each set stems from Anscombe’s Quartet (see datasets::anscombe, hence 1 <= n <= 4) and is returned as an 11 x 2 data frame. Source See ?datasets::anscombe for details and references. See Also Other data functions: make_grid() Examples get_set(1) plot(get_set(2), col = "red") invert_rules invert_rules inverts a set of encoding rules. Description invert_rules allows decoding messages that were encoded by a set of rules x. Usage invert_rules(x) Arguments x The rules used for encoding a message (as a named vector). Details x is assumed to be a named vector. invert_rules replaces the elements of x by the names of x, and vice versa. A message is issued if the elements of x are repeated (i.e., decoding is non-unique). Value A character vector. See Also transl33t for encoding text (e.g., into leet slang); l33t_rul35 for default rules used.
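Swapping elements and names, as described above, can be sketched with base R's setNames() (using a hypothetical two-rule set; not the package's code):

```r
# Sketch: inverting a named encoding vector.
rules    <- c(a = "4", e = "3")          # encode: a -> 4, e -> 3
inverted <- setNames(names(rules), rules)
inverted                                  # decode: 4 -> a, 3 -> e
```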
Other text objects and functions: Umlaut, capitalize(), caseflip(), cclass, chars_to_text(), collapse_chars(), count_chars_words(), count_chars(), count_words(), l33t_rul35, map_text_chars(), map_text_coord(), map_text_regex(), metachar, read_ascii(), text_to_chars(), text_to_sentences(), text_to_words(), transl33t(), words_to_text() Examples invert_rules(l33t_rul35) # Note repeated elements # Encoding and decoding a message: (txt_0 <- "Hello world! How are you doing today?") # message (txt_1 <- transl33t(txt_0, rules = l33t_rul35)) # encoding (txt_2 <- transl33t(txt_1, rules = invert_rules(l33t_rul35))) # decoding is_equal Test two vectors for pairwise (near) equality. Description is_equal tests if two vectors x and y are pairwise equal. Usage is_equal(x, y, ...) Arguments x 1st vector to compare (required). y 2nd vector to compare (required). ... Other parameters (passed to num_equal()). Details If both x and y are numeric, is_equal calls num_equal(x, y, ...) (allowing for some tolerance threshold tol). Otherwise, x and y are compared by x == y. is_equal is a safer way to verify the (near) equality of numeric vectors than ==, as numbers may exhibit floating point effects. See Also num_equal function for comparing numeric vectors; all.equal function of the R base package; near function of the dplyr package. 
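The tolerance-based comparison underlying is_equal/num_equal can be sketched as follows (the tol default is an assumption borrowed from common practice; this is not the package's exact code):

```r
# Sketch: "near equality" for numeric vectors.
near_sketch <- function(x, y, tol = .Machine$double.eps^0.5) abs(x - y) < tol
2 == sqrt(2)^2             # FALSE: floating-point representation error
near_sketch(2, sqrt(2)^2)  # TRUE
```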
Other numeric functions: base2dec(), base_digits, dec2base(), is_wholenumber(), num_as_char(), num_as_ordinal(), num_equal() Other utility functions: base2dec(), base_digits, dec2base(), is_vect(), is_wholenumber(), num_as_char(), num_as_ordinal(), num_equal() Examples # numeric data: is_equal(2, sqrt(2)^2) is_equal(2, sqrt(2)^2, tol = 0) is_equal(c(2, 3), c(sqrt(2)^2, sqrt(3)^2, 4/2, 9/3)) # other data types: is_equal((1:3 > 1), (1:3 > 2)) # logical is_equal(c("A", "B", "c"), toupper(c("a", "b", "c"))) # character is_equal(as.Date("2020-08-16"), Sys.Date()) # dates # as factors: is_equal((1:3 > 1), as.factor((1:3 > 2))) is_equal(c(1, 2, 3), as.factor(c(1, 2, 3))) is_equal(c("A", "B", "C"), as.factor(c("A", "B", "C"))) is_leap_year Is some year a so-called leap year? Description is_leap_year checks whether a given year (provided as a date or time dt, or number/string denoting a 4-digit year) lies in a so-called leap year (i.e., a year containing a date of Feb-29). Usage is_leap_year(dt) Arguments dt Date or time (scalar or vector). Numbers or strings with dates are parsed into 4-digit numbers denoting the year. Details When dt is not recognized as "Date" or "POSIXt" object(s), is_leap_year aims to parse a string dt as describing year(s) in a "dddd" (4-digit year) format, as a valid "Date" string (to retrieve the 4-digit year "%Y"), or a numeric dt as 4-digit integer(s). is_leap_year then solves the task by verifying the numeric definition of a "leap year" (see https://en.wikipedia.org/wiki/Leap_year). An alternative solution that tried using as.Date() for defining a "Date" of Feb-29 in the corresponding year(s) was removed, as it evaluated NA values as FALSE. Value Boolean vector. Source See https://en.wikipedia.org/wiki/Leap_year for definition. See Also days_in_month for the number of days in given months; diff_tz for time zone-based time differences; leap_year function of the lubridate package.
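The numeric definition that is_leap_year verifies can be written directly in base R:

```r
# Sketch of the leap-year rule: divisible by 4, except centuries
# that are not divisible by 400.
is_leap_sketch <- function(y) (y %% 4 == 0 & y %% 100 != 0) | (y %% 400 == 0)
is_leap_sketch(c(1900, 2000, 2020, 2021))  # FALSE TRUE TRUE FALSE
```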
Other date and time functions: change_time(), change_tz(), cur_date(), cur_time(), days_in_month(), diff_dates(), diff_times(), diff_tz(), what_date(), what_month(), what_time(), what_wday(), what_week(), what_year(), zodiac() Examples is_leap_year(2020) (days_this_year <- 365 + is_leap_year(Sys.Date())) # from dates: is_leap_year(Sys.Date()) is_leap_year(as.Date("2022-02-28")) # from times: is_leap_year(Sys.time()) is_leap_year(as.POSIXct("2022-10-11 10:11:12")) is_leap_year(as.POSIXlt("2022-10-11 10:11:12")) # from non-integers: is_leap_year(2019.5) # For vectors: is_leap_year(2020:2028) # with dt as strings: is_leap_year(c("2020", "2021")) is_leap_year(c("2020-02-29 01:02:03", "2021-02-28 01:02")) # Note: Invalid date string yields error: # is_leap_year("2021-02-29") is_vect Test for a vector (i.e., atomic vector or list). Description is_vect tests if x is a vector. Usage is_vect(x) Arguments x Vector(s) to test (required). Details is_vect does what the base R function is.vector is not designed to do: • is_vect() returns TRUE if x is an atomic vector or a list (irrespective of its attributes). • is.vector() returns TRUE if x is a vector of the specified mode having no attributes other than names, otherwise FALSE. Internally, the function is a wrapper for is.atomic(x) | is.list(x). Note that data frames are also vectors. See the is_vector function of the purrr package and the base R functions is.atomic, is.list, and is.vector, for details. See Also is_vector function of the purrr package; is.atomic function of the R base package; is.list function of the R base package; is.vector function of the R base package.
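The wrapper described in the Details is a one-liner, which makes the contrast with is.vector() easy to demonstrate:

```r
# Sketch of the is.atomic-or-is.list test (as stated in the Details):
is_vect_sketch <- function(x) is.atomic(x) | is.list(x)
v <- 1:3
attr(v, "my_attr") <- "foo"
is.vector(v)       # FALSE (extra attribute)
is_vect_sketch(v)  # TRUE
```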
Other utility functions: base2dec(), base_digits, dec2base(), is_equal(), is_wholenumber(), num_as_char(), num_as_ordinal(), num_equal() Examples # Define 3 types of vectors: v1 <- 1:3 # (a) atomic vector names(v1) <- LETTERS[v1] # with names v2 <- v1 # (b) copy vector attr(v2, "my_attr") <- "foo" # add an attribute ls <- list(1, 2, "C") # (c) list # Compare: is.vector(v1) is.list(v1) is_vect(v1) is.vector(v2) # FALSE is.list(v2) is_vect(v2) # TRUE is.vector(ls) is.list(ls) is_vect(ls) # Data frames are also vectors: df <- as.data.frame(1:3) is_vect(df) # is TRUE is_wholenumber Test for whole numbers (i.e., integers). Description is_wholenumber tests if x contains only integer numbers. Usage is_wholenumber(x, tol = .Machine$double.eps^0.5) Arguments x Number(s) to test (required, accepts numeric vectors). tol Numeric tolerance value. Default: tol = .Machine$double.eps^0.5 (see ?.Machine for details). Details is_wholenumber does what the base R function is.integer is not designed to do: • is_wholenumber() returns TRUE or FALSE depending on whether its numeric argument x is an integer value (i.e., a "whole" number). • is.integer() returns TRUE or FALSE depending on whether its argument is of integer type, and FALSE if its argument is a factor. See the documentation of is.integer for definition and details. See Also is.integer function of the R base package. Other numeric functions: base2dec(), base_digits, dec2base(), is_equal(), num_as_char(), num_as_ordinal(), num_equal() Other utility functions: base2dec(), base_digits, dec2base(), is_equal(), is_vect(), num_as_char(), num_as_ordinal(), num_equal() Examples is_wholenumber(1) # is TRUE is_wholenumber(1/2) # is FALSE x <- seq(1, 2, by = 0.5) is_wholenumber(x) # Compare: is.integer(1+2) is_wholenumber(1+2) l33t_rul35 l33t_rul35 provides rules for translating text into leet/l33t slang. 
Description l33t_rul35 specifies rules for translating characters into other characters (typically symbols) to mimic leet/l33t slang (as a named character vector). Usage l33t_rul35 Format An object of class character of length 13. Details Old (i.e., to be replaced) characters are paste(names(l33t_rul35), collapse = ""). New (i.e., replaced) characters are paste(l33t_rul35, collapse = ""). See https://en.wikipedia.org/wiki/Leet for details. See Also transl33t for a corresponding function. Other text objects and functions: Umlaut, capitalize(), caseflip(), cclass, chars_to_text(), collapse_chars(), count_chars_words(), count_chars(), count_words(), invert_rules(), map_text_chars(), map_text_coord(), map_text_regex(), metachar, read_ascii(), text_to_chars(), text_to_sentences(), text_to_words(), transl33t(), words_to_text() make_grid Generate a grid of x-y coordinates. Description make_grid generates a grid of x/y coordinates and returns it (as a data frame). Usage make_grid(x_min = 0, x_max = 2, y_min = 0, y_max = 1) Arguments x_min Minimum x coordinate. Default: x_min = 0. x_max Maximum x coordinate. Default: x_max = 2. y_min Minimum y coordinate. Default: y_min = 0. y_max Maximum y coordinate. Default: y_max = 1. See Also Other data functions: get_set() Examples make_grid() make_grid(x_min = -3, x_max = 3, y_min = -2, y_max = 2) map_text_chars map_text_chars maps the characters of a text string into a table (with x/y coordinates). Description map_text_chars parses text (from a text string x) into a table that contains a row for each character and x/y-coordinates corresponding to the character positions in x. Usage map_text_chars(x, flip_y = FALSE) Arguments x The text string(s) to map (required). If length(x) > 1, elements are mapped to different lines (i.e., y-coordinates). flip_y Boolean: Should y-coordinates be flipped, so that the lowest line in the text file becomes y = 1, and the top line in the text file becomes y = n_lines? Default: flip_y = FALSE. 
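The kind of character-to-coordinate mapping that map_text_chars performs can be sketched in a few lines of base R. The following is a simplified stand-in for illustration only, not the package's actual implementation; the function name map_chars_sketch is made up:

```r
# Minimal sketch of a text-to-coordinate map (base R only;
# a simplified stand-in for map_text_chars, not its actual code):
map_chars_sketch <- function(x, flip_y = FALSE) {
  chars <- strsplit(x, split = "")           # split each line into characters
  n_lines <- length(chars)
  do.call(rbind, lapply(seq_len(n_lines), function(i) {
    y <- if (flip_y) n_lines - i + 1 else i  # optionally flip line order
    data.frame(x = seq_along(chars[[i]]), y = y, char = chars[[i]])
  }))
}

tm <- map_chars_sketch(c("Hi", "ho!"))
tm
#   x y char
# 1 1 1    H
# 2 2 1    i
# 3 1 2    h
# 4 2 2    o
# 5 3 2    !
```

Each element of x becomes one y-coordinate (line), and each character within a line one x-coordinate, mirroring the 3-variable table structure described in the Value section.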
Details map_text_chars creates a data frame with 3 variables: Each character’s x- and y-coordinates (from top to bottom) and a variable char for the character at these coordinates. Note that map_text_chars was originally a part of read_ascii, but has been separated to enable independent access to separate functionalities. Note that map_text_chars is replaced by the simpler map_text_coord function. Value A data frame with 3 variables: Each character’s x- and y-coordinates (from top to bottom) and a variable char for the character at this coordinate. See Also read_ascii for parsing text from file or user input; plot_chars for a character plotting function. Other text objects and functions: Umlaut, capitalize(), caseflip(), cclass, chars_to_text(), collapse_chars(), count_chars_words(), count_chars(), count_words(), invert_rules(), l33t_rul35, map_text_coord(), map_text_regex(), metachar, read_ascii(), text_to_chars(), text_to_sentences(), text_to_words(), transl33t(), words_to_text() map_text_coord map_text_coord maps the characters of a text string into a table (with x/y-coordinates). Description map_text_coord parses text (from a text string x) into a table that contains a row for each character and x/y-coordinates corresponding to the character positions in x. Usage map_text_coord(x, flip_y = FALSE, sep = "") Arguments x The text string(s) to map (required). If length(x) > 1, elements are mapped to different lines (i.e., y-coordinates). flip_y Boolean: Should y-coordinates be flipped, so that the lowest line in the text file becomes y = 1, and the top line in the text file becomes y = n_lines? Default: flip_y = FALSE. sep Character to insert between the elements of a multi-element character vector as input x? Default: sep = "" (i.e., add nothing). Details map_text_coord creates a data frame with 3 variables: Each character’s x- and y-coordinates (from top to bottom) and a variable char for the character at these coordinates. 
Note that map_text_coord was originally a part of read_ascii, but has been separated to enable independent access to separate functionalities. Value A data frame with 3 variables: Each character’s x- and y-coordinates (from top to bottom) and a variable char for the character at this coordinate. See Also map_text_regex for mapping text to a character table and matching patterns; plot_charmap for plotting character maps; plot_chars for creating and plotting character maps; read_ascii for parsing text from file or user input. Other text objects and functions: Umlaut, capitalize(), caseflip(), cclass, chars_to_text(), collapse_chars(), count_chars_words(), count_chars(), count_words(), invert_rules(), l33t_rul35, map_text_chars(), map_text_regex(), metachar, read_ascii(), text_to_chars(), text_to_sentences(), text_to_words(), transl33t(), words_to_text() Examples map_text_coord("Hello world!") # 1 line of text map_text_coord(c("Hello", "world!")) # 2 lines of text map_text_coord(c("Hello", " ", "world!")) # 3 lines of text ## Read text from file: ## Create a temporary file "test.txt": # cat("Hello world!", "This is a test.", # "Can you see this text?", "Good! Please carry on...", # file = "test.txt", sep = "\n") # txt <- read_ascii("test.txt") # map_text_coord(txt) # unlink("test.txt") # clean up (by deleting file). map_text_regex Map text to character table (allowing for matching patterns). Description map_text_regex parses text (from a file or user input) into a data frame that contains a row for each character of x. Usage map_text_regex( x = NA, file = "", lbl_hi = NA, lbl_lo = NA, bg_hi = NA, bg_lo = "[[:space:]]", lbl_rotate = NA, case_sense = TRUE, lbl_tiles = TRUE, col_lbl = "black", col_lbl_hi = pal_ds4psy[[1]], col_lbl_lo = pal_ds4psy[[9]], col_bg = pal_ds4psy[[7]], col_bg_hi = pal_ds4psy[[4]], col_bg_lo = "white", col_sample = FALSE, rseed = NA, angle_fg = c(-90, 90), angle_bg = 0 ) Arguments x The text to map or plot (as a character vector). 
Different elements denote different lines of text. If x = NA (as per default), the file argument is used to read a text file or user input from the Console. file A text file to read (or its path). If file = "" (as per default), scan is used to read user input from the Console. If a text file is stored in a sub-directory, enter its path and name here (without any leading or trailing "." or "/"). lbl_hi Labels to highlight (as regex). Default: lbl_hi = NA. lbl_lo Labels to de-emphasize (as regex). Default: lbl_lo = NA. bg_hi Background tiles to highlight (as regex). Default: bg_hi = NA. bg_lo Background tiles to de-emphasize (as regex). Default: bg_lo = "[[:space:]]". lbl_rotate Labels to rotate (as regex). Default: lbl_rotate = NA. case_sense Boolean: Distinguish lower- vs. uppercase characters in pattern matches? Default: case_sense = TRUE. lbl_tiles Are character labels shown? This enables pattern matching for (fg) color and angle aesthetics. Default: lbl_tiles = TRUE (i.e., show labels). col_lbl Default color of text labels. Default: col_lbl = "black". col_lbl_hi Highlighting color of text labels. Default: col_lbl_hi = pal_ds4psy[[1]]. col_lbl_lo De-emphasizing color of text labels. Default: col_lbl_lo = pal_ds4psy[[9]]. col_bg Default color to fill background tiles. Default: col_bg = pal_ds4psy[[7]]. col_bg_hi Highlighting color to fill background tiles. Default: col_bg_hi = pal_ds4psy[[4]]. col_bg_lo De-emphasizing color to fill background tiles. Default: col_bg_lo = "white". col_sample Boolean: Sample color vectors (within category)? Default: col_sample = FALSE. rseed Random seed (number). Default: rseed = NA (using random seed). angle_fg Angle(s) for rotating character labels matching the pattern of the lbl_rotate expression. Default: angle_fg = c(-90, 90). If length(angle_fg) > 1, a random value in uniform range(angle_fg) is used for every character. angle_bg Angle(s) of rotating character labels not matching the pattern of the lbl_rotate expression.
Default: angle_bg = 0 (i.e., no rotation). If length(angle_bg) > 1, a random value in uniform range(angle_bg) is used for every character. Details map_text_regex allows using regular expressions (regex) to match text patterns and create corresponding variables (e.g., for color or orientation). Five regular expressions and corresponding color and angle arguments allow identifying, marking (highlighting or de-emphasizing), and rotating those sets of characters (i.e., their text labels or fill colors) that match the provided patterns. The plot generated by plot_chars is character-based: Individual characters are plotted at equidistant x-y-positions, using the aesthetic settings provided for text labels and tile fill colors. map_text_regex returns a plot description (as a data frame). Using this output as an input to plot_charmap plots text in a character-based fashion (i.e., individual characters are plotted at equidistant x-y-positions). Together, both functions replace the over-specialized plot_chars and plot_text functions. Value A data frame describing a plot. See Also map_text_coord for mapping text to a table of character coordinates; plot_charmap for plotting character maps; plot_chars for creating and plotting character maps; plot_text for plotting characters and color tiles by frequency; read_ascii for reading text inputs into a character string.
Other text objects and functions: Umlaut, capitalize(), caseflip(), cclass, chars_to_text(), collapse_chars(), count_chars_words(), count_chars(), count_words(), invert_rules(), l33t_rul35, map_text_chars(), map_text_coord(), metachar, read_ascii(), text_to_chars(), text_to_sentences(), text_to_words(), transl33t(), words_to_text() Examples ## (1) From text string(s): ts <- c("Hello world!", "This is a test to test this splendid function", "Does this work?", "That's good.", "Please carry on.") sum(nchar(ts)) # (a) simple use: map_text_regex(ts) # (b) matching patterns (regex): map_text_regex(ts, lbl_hi = "\\b\\w{4}\\b", bg_hi = "[good|test]", lbl_rotate = "[^aeiou]", angle_fg = c(-45, +45)) ## (2) From user input: # map_text_regex() # (enter text in Console) ## (3) From text file: # cat("Hello world!", "This is a test file.", # "Can you see this text?", # "Good! Please carry on...", # file = "test.txt", sep = "\n") # # map_text_regex(file = "test.txt") # default # map_text_regex(file = "test.txt", lbl_hi = "[[:upper:]]", lbl_lo = "[[:punct:]]", # col_lbl_hi = "red", col_lbl_lo = "blue") # # map_text_regex(file = "test.txt", lbl_hi = "[aeiou]", col_lbl_hi = "red", # col_bg = "white", bg_hi = "see") # mark vowels and "see" (in bg) # map_text_regex(file = "test.txt", bg_hi = "[aeiou]", col_bg_hi = "gold") # mark (bg of) vowels # # # Label options: # map_text_regex(file = "test.txt", bg_hi = "see", lbl_tiles = FALSE) # map_text_regex(file = "test.txt", angle_bg = c(-20, 20)) # # unlink("test.txt") # clean up (by deleting file). metachar metachar provides metacharacters (as a character vector). Description metachar provides the metacharacters of extended regular expressions (as a character vector). Usage metachar Format An object of class character of length 12. Details metachar allows illustrating the notion of meta-characters in regular expressions (and provides corresponding exemplars). 
See ?base::regex for details on regular expressions and ?"'" for a list of character constants/quotes in R. See Also cclass for a vector of character classes. Other text objects and functions: Umlaut, capitalize(), caseflip(), cclass, chars_to_text(), collapse_chars(), count_chars_words(), count_chars(), count_words(), invert_rules(), l33t_rul35, map_text_chars(), map_text_coord(), map_text_regex(), read_ascii(), text_to_chars(), text_to_sentences(), text_to_words(), transl33t(), words_to_text() Examples metachar length(metachar) # 12 nchar(paste0(metachar, collapse = "")) # 12 num_as_char Convert a number into a character sequence. Description num_as_char converts a number into a character sequence (of a specific length). Usage num_as_char(x, n_pre_dec = 2, n_dec = 2, sym = "0", sep = ".") Arguments x Number(s) to convert (required, accepts numeric vectors). n_pre_dec Number of digits before the decimal separator. Default: n_pre_dec = 2. This value is used to add zeros to the front of numbers. If the number of meaningful digits prior to decimal separator is greater than n_pre_dec, this value is ignored. n_dec Number of digits after the decimal separator. Default: n_dec = 2. sym Symbol to add to front or back. Default: sym = "0". Using sym = " " or sym = "_" can make sense; digits other than "0" do not. sep Decimal separator to use. Default: sep = ".". Details The arguments n_pre_dec and n_dec set a number of desired digits before and after the decimal separator sep. num_as_char tries to meet these digit numbers by adding zeros to the front and end of x. However, when n_pre_dec is lower than the number of relevant (pre-decimal) digits, all relevant digits are shown. n_pre_dec also works for negative numbers, but the minus symbol is not counted as a (pre-decimal) digit. Caveat: Note that this function illustrates how numbers, characters, for loops, and paste() can be combined when writing functions. It is not written efficiently or well.
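The zero-padding behavior described above can be illustrated with base R's formatC (shown only to clarify the idea; num_as_char itself is implemented differently, for didactic purposes):

```r
# Pad 10/3 to 2 pre-decimal and 2 decimal digits with zeros
# (base-R illustration of the padding idea, not num_as_char itself):
formatC(10/3, format = "f", digits = 2, width = 5, flag = "0")
# "03.33"

# Vectorized over inputs; wider numbers keep all their digits:
formatC(c(1, 10/3, 1000/6), format = "f", digits = 2, width = 5, flag = "0")
# "01.00" "03.33" "166.67"
```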
See Also Other numeric functions: base2dec(), base_digits, dec2base(), is_equal(), is_wholenumber(), num_as_ordinal(), num_equal() Other utility functions: base2dec(), base_digits, dec2base(), is_equal(), is_vect(), is_wholenumber(), num_as_ordinal(), num_equal() Examples num_as_char(1) num_as_char(10/3) num_as_char(1000/6) # rounding down: num_as_char((1.3333), n_pre_dec = 0, n_dec = 0) num_as_char((1.3333), n_pre_dec = 2, n_dec = 0) num_as_char((1.3333), n_pre_dec = 2, n_dec = 1) # rounding up: num_as_char(1.6666, n_pre_dec = 1, n_dec = 0) num_as_char(1.6666, n_pre_dec = 1, n_dec = 1) num_as_char(1.6666, n_pre_dec = 2, n_dec = 2) num_as_char(1.6666, n_pre_dec = 2, n_dec = 3) # Note: If n_pre_dec is too small, actual number is kept: num_as_char(11.33, n_pre_dec = 0, n_dec = 1) num_as_char(11.66, n_pre_dec = 1, n_dec = 1) # Note: num_as_char(1, sep = ",") num_as_char(2, sym = " ") num_as_char(3, sym = " ", n_dec = 0) # for vectors: num_as_char(1:10/1, n_pre_dec = 1, n_dec = 1) num_as_char(1:10/3, n_pre_dec = 2, n_dec = 2) # for negative numbers (adding relevant pre-decimals): mix <- c(10.33, -10.33, 10.66, -10.66) num_as_char(mix, n_pre_dec = 1, n_dec = 1) num_as_char(mix, n_pre_dec = 1, n_dec = 0) # Beware of bad inputs: num_as_char(4, sym = "8") num_as_char(5, sym = "99") num_as_ordinal Convert a number into an ordinal character sequence. Description num_as_ordinal converts a given (cardinal) number into an ordinal character sequence. Usage num_as_ordinal(x, sep = "") Arguments x Number(s) to convert (required, scalar or vector). sep Decimal separator to use. Default: sep = "" (i.e., no separator). Details The function currently only works for the English language and does not accept inputs that are characters, dates, or times. Note that the toOrdinal() function of the toOrdinal package works for multiple languages and provides a toOrdinalDate() function.
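The English suffix rule that such a conversion implements can be sketched via vector indexing in base R (a hypothetical helper for illustration; not the actual num_as_ordinal definition):

```r
# Sketch of English ordinal suffixes via vector indexing
# (hypothetical helper, not the actual num_as_ordinal code):
ordinal_sketch <- function(x) {
  sfx <- c("st", "nd", "rd", rep("th", 6))  # suffixes for last digits 1-9
  last2 <- x %% 100                         # 11, 12, 13 are special cases
  last1 <- x %% 10                          # otherwise the last digit rules
  s <- ifelse(last2 %in% 11:13 | last1 == 0, "th", sfx[pmax(last1, 1)])
  paste0(x, s)
}

ordinal_sketch(c(1, 2, 3, 4, 11, 21, 112, 123))
# "1st" "2nd" "3rd" "4th" "11th" "21st" "112th" "123rd"
```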
Caveat: Note that this function illustrates how numbers, characters, for loops, and paste() can be combined when writing functions. It is instructive, but not written efficiently or well (see the function definition for an alternative solution using vector indexing). See Also toOrdinal() function of the toOrdinal package. Other numeric functions: base2dec(), base_digits, dec2base(), is_equal(), is_wholenumber(), num_as_char(), num_equal() Other utility functions: base2dec(), base_digits, dec2base(), is_equal(), is_vect(), is_wholenumber(), num_as_char(), num_equal() Examples num_as_ordinal(1:4) num_as_ordinal(10:14) # all with "th" num_as_ordinal(110:114) # all with "th" num_as_ordinal(120:124) # 4 different suffixes num_as_ordinal(1:15, sep = "-") # using sep # Note special cases: num_as_ordinal(NA) num_as_ordinal("1") num_as_ordinal(Sys.Date()) num_as_ordinal(Sys.time()) num_as_ordinal(seq(1.99, 2.14, by = .01)) num_equal Test two numeric vectors for pairwise (near) equality. Description num_equal tests if two numeric vectors x and y are pairwise equal (within some tolerance value tol). Usage num_equal(x, y, tol = .Machine$double.eps^0.5) Arguments x 1st numeric vector to compare (required, assumes a numeric vector). y 2nd numeric vector to compare (required, assumes a numeric vector). tol Numeric tolerance value. Default: tol = .Machine$double.eps^0.5 (see ?.Machine for details). Details num_equal is a safer way to verify the (near) equality of numeric vectors than ==, as numbers may exhibit floating point effects. See Also is_equal function for generic vectors; all.equal function of the R base package; near function of the dplyr package.
Other numeric functions: base2dec(), base_digits, dec2base(), is_equal(), is_wholenumber(), num_as_char(), num_as_ordinal() Other utility functions: base2dec(), base_digits, dec2base(), is_equal(), is_vect(), is_wholenumber(), num_as_char(), num_as_ordinal() Examples num_equal(2, sqrt(2)^2) # Recycling: num_equal(c(2, 3), c(sqrt(2)^2, sqrt(3)^2, 4/2, 9/3)) # Contrast: .1 == .3/3 num_equal(.1, .3/3) # Contrast: v <- c(.9 - .8, .8 - .7, .7 - .6, .6 - .5, .5 - .4, .4 - .3, .3 - .2, .2 -.1, .1) unique(v) .1 == v num_equal(.1, v) outliers Outlier data. Description outliers is a fictitious dataset containing the id, sex, and height of 1000 non-existing, but otherwise normal people. Usage outliers Format A table with 1000 cases (rows) and 3 variables (columns). Details Codebook id Participant ID (as character code) sex Gender (female vs. male) height Height (in cm) Source See CSV data at http://rpository.com/ds4psy/data/out.csv. See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb pal_ds4psy ds4psy default color palette. Description pal_ds4psy provides a dedicated color palette. Usage pal_ds4psy Format An object of class data.frame with 1 rows and 11 columns. Details By default, pal_ds4psy is based on pal_unikn of the unikn package. See Also Other color objects and functions: pal_n_sq() pal_n_sq Get n-by-n dedicated colors of a color palette. Description pal_n_sq returns n^2 dedicated colors of a color palette pal (up to a maximum of n = "all" colors). Usage pal_n_sq(n = "all", pal = pal_ds4psy) Arguments n The desired number of colors of pal (as a number) or the character string "all" (to get all colors of pal). Default: n = "all". pal A color palette (as a data frame). Default: pal = pal_ds4psy.
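The n-by-n selection that pal_n_sq performs — taking at most the first n^2 colors of a palette — can be sketched with a generic color vector in base R (pal below is a made-up stand-in, not the actual pal_ds4psy palette):

```r
# Take up to n^2 colors from a palette vector (sketch of the n-by-n idea;
# 'pal' is a generic stand-in, not the actual pal_ds4psy palette):
pal <- colorRampPalette(c("white", "steelblue", "black"))(11)
n <- 3
pal[seq_len(min(n^2, length(pal)))]  # the first 9 of 11 colors
```

With n = 4, min() caps the selection at all 11 available colors, matching the "up to a maximum" behavior described above.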
Details Use the more specialized function unikn::usecol for choosing n dedicated colors of a known color palette. See Also plot_tiles to plot tile plots. Other color objects and functions: pal_ds4psy Examples pal_n_sq(1) # 1 color: seeblau3 pal_n_sq(2) # 4 colors pal_n_sq(3) # 9 colors (5: white) pal_n_sq(4) # 11 colors (6: white) pi_100k Data: 100k digits of pi. Description pi_100k is a dataset containing the first 100k digits of pi. Usage pi_100k Format A character of nchar(pi_100k) = 100001. Source See TXT data at http://rpository.com/ds4psy/data/pi_100k.txt. Original data at http://www.geom.uiuc.edu/~huberty/math5337/groupe/digits.html. See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb plot_charmap Plot a character map as a tile plot with text labels. Description plot_charmap plots a character map and some aesthetics as a tile plot with text labels (using ggplot2). Usage plot_charmap( x = NA, file = "", lbl_tiles = TRUE, col_lbl = "black", angle = 0, cex = 3, fontface = 1, family = "sans", col_bg = "grey80", borders = FALSE, border_col = "white", border_size = 0.5 ) Arguments x A character map, as generated by map_text_coord or map_text_regex (as df). Alternatively, some text to map or plot (as a character vector). Different elements denote different lines of text. If x = NA (as per default), the file argument is used to read a text file or user input from the Console. file A text file to read (or its path). If file = "" (as per default), scan is used to read user input from the Console. If a text file is stored in a sub-directory, enter its path and name here (without any leading or trailing "." or "/"). lbl_tiles Add character labels to tiles?
Default: lbl_tiles = TRUE (i.e., show labels). col_lbl Default color of text labels (unless specified as a column col_fg of x). Default: col_lbl = "black". angle Default angle of text labels (unless specified as a column of x). Default: angle = 0. cex Character size (numeric). Default: cex = 3. fontface Font face of text labels (numeric). Default: fontface = 1 (from 1 to 4). family Font family of text labels (name). Default: family = "sans". Alternative options: "sans", "serif", or "mono". col_bg Default color to fill background tiles (unless specified as a column col_bg of x). Default: col_bg = "grey80". borders Boolean: Add borders to tiles? Default: borders = FALSE (i.e., no borders). border_col Color of tile borders. Default: border_col = "white". border_size Size of tile borders. Default: border_size = 0.5. Details plot_charmap is based on plot_chars. As it only contains the plotting-related parts, it assumes a character map generated by map_text_regex as input. The plot generated by plot_charmap is character-based: Individual characters are plotted at equidistant x-y-positions and aesthetic variables are used for text labels and tile fill colors. Value A plot generated by ggplot2. See Also plot_chars for creating and plotting character maps; plot_text for plotting characters and color tiles by frequency; map_text_regex for mapping text to a character table and matching patterns; map_text_coord for mapping text to a table of character coordinates; read_ascii for reading text inputs into a character string; pal_ds4psy for default color palette.
Other plot functions: plot_chars(), plot_fn(), plot_fun(), plot_n(), plot_text(), plot_tiles(), theme_clean(), theme_ds4psy(), theme_empty() Examples # (0) Prepare: ts <- c("Hello world!", "This is a test to test this splendid function", "Does this work?", "That's good.", "Please carry on.") sum(nchar(ts)) # (1) From character map: # (a) simple: cm_1 <- map_text_coord(x = ts, flip_y = TRUE) plot_charmap(cm_1) # (b) pattern matching (regex): cm_2 <- map_text_regex(ts, lbl_hi = "\\b\\w{4}\\b", bg_hi = "[good|test]", lbl_rotate = "[^aeiou]", angle_fg = c(-45, +45)) plot_charmap(cm_2) # (2) Alternative inputs: # (a) From text string(s): plot_charmap(ts) # (b) From user input: # plot_charmap() # (enter text in Console) # (c) From text file: # cat("Hello world!", "This is a test file.", # "Can you see this text?", # "Good! Please carry on...", # file = "test.txt", sep = "\n") # plot_charmap(file = "test.txt") # unlink("test.txt") # clean up (by deleting file). plot_chars Plot text characters (from file or user input) and match patterns. Description plot_chars parses text (from a file or user input) into a table and then plots its individual characters as a tile plot (using ggplot2). Usage plot_chars( x = NA, file = "", lbl_hi = NA, lbl_lo = NA, bg_hi = NA, bg_lo = "[[:space:]]", lbl_rotate = NA, case_sense = TRUE, lbl_tiles = TRUE, angle_fg = c(-90, 90), angle_bg = 0, col_lbl = "black", col_lbl_hi = pal_ds4psy[[1]], col_lbl_lo = pal_ds4psy[[9]], col_bg = pal_ds4psy[[7]], col_bg_hi = pal_ds4psy[[4]], col_bg_lo = "white", col_sample = FALSE, rseed = NA, cex = 3, fontface = 1, family = "sans", borders = FALSE, border_col = "white", border_size = 0.5 ) Arguments x The text to plot (as a character vector). Different elements denote different lines of text. If x = NA (as per default), the file argument is used to read a text file or user input from the Console. file A text file to read (or its path). 
If file = "" (as per default), scan is used to read user input from the Console. If a text file is stored in a sub-directory, enter its path and name here (without any leading or trailing "." or "/"). lbl_hi Labels to highlight (as regex). Default: lbl_hi = NA. lbl_lo Labels to de-emphasize (as regex). Default: lbl_lo = NA. bg_hi Background tiles to highlight (as regex). Default: bg_hi = NA. bg_lo Background tiles to de-emphasize (as regex). Default: bg_lo = "[[:space:]]". lbl_rotate Labels to rotate (as regex). Default: lbl_rotate = NA. case_sense Boolean: Distinguish lower- vs. uppercase characters in pattern matches? De- fault: case_sense = TRUE. lbl_tiles Add character labels to tiles? Default: lbl_tiles = TRUE (i.e., show labels). angle_fg Angle(s) for rotating character labels matching the pattern of the lbl_rotate expression. Default: angle_fg = c(-90, 90). If length(angle_fg) > 1, a ran- dom value in uniform range(angle_fg) is used for every character. angle_bg Angle(s) of rotating character labels not matching the pattern of the lbl_rotate expression. Default: angle_bg = 0 (i.e., no rotation). If length(angle_bg) > 1, a random value in uniform range(angle_bg) is used for every character. col_lbl Default color of text labels. Default: col_lbl = "black". col_lbl_hi Highlighting color of text labels. Default: col_lbl_hi = pal_ds4psy[[1]]. col_lbl_lo De-emphasizing color of text labels. Default: col_lbl_lo = pal_ds4psy[[9]]. col_bg Default color to fill background tiles. Default: col_bg = pal_ds4psy[[7]]. col_bg_hi Highlighting color to fill background tiles. Default: col_bg_hi = pal_ds4psy[[4]]. col_bg_lo De-emphasizing color to fill background tiles. Default: col_bg_lo = "white". col_sample Boolean: Sample color vectors (within category)? Default: col_sample = FALSE. rseed Random seed (number). Default: rseed = NA (using random seed). cex Character size (numeric). Default: cex = 3. fontface Font face of text labels (numeric). 
Default: fontface = 1 (from 1 to 4). family Font family of text labels (name). Default: family = "sans". Alternative options: "sans", "serif", or "mono". borders Boolean: Add borders to tiles? Default: borders = FALSE (i.e., no borders). border_col Color of tile borders. Default: border_col = "white". border_size Size of tile borders. Default: border_size = 0.5. Details plot_chars blurs the boundary between a text and its graphical representation by combining options for matching patterns of text with visual features for displaying characters (e.g., their color or orientation). plot_chars is based on plot_text, but provides additional support for detecting and displaying characters (i.e., text labels, their orientation, and color options) based on matching regular expressions (regex). Internally, plot_chars is a wrapper that calls (1) map_text_regex for creating a character map (allowing for matching patterns for some aesthetics) and (2) plot_charmap for plotting this character map. However, in contrast to plot_charmap, plot_chars invisibly returns a description of the plot (as a data frame). The plot generated by plot_chars is character-based: Individual characters are plotted at equidistant x-y-positions, using the aesthetic settings provided for text labels and tile fill colors. Five regular expressions and corresponding color and angle arguments allow identifying, marking (highlighting or de-emphasizing), and rotating those sets of characters (i.e., their text labels or fill colors) that match the provided patterns. Value An invisible data frame describing the plot. See Also plot_charmap for plotting character maps; plot_text for plotting characters and color tiles by frequency; map_text_coord for mapping text to a table of character coordinates; map_text_regex for mapping text to a character table and matching patterns; read_ascii for reading text inputs into a character string; pal_ds4psy for default color palette.
Other plot functions: plot_charmap(), plot_fn(), plot_fun(), plot_n(), plot_text(), plot_tiles(), theme_clean(), theme_ds4psy(), theme_empty() Examples # (A) From text string(s): plot_chars(x = c("Hello world!", "Does this work?", "That's good.", "Please carry on...")) # (B) From user input: # plot_chars() # (enter text in Console) # (C) From text file: # Create and use a text file: # cat("Hello world!", "This is a test file.", # "Can you see this text?", # "Good! Please carry on...", # file = "test.txt", sep = "\n") # plot_chars(file = "test.txt") # default # plot_chars(file = "test.txt", lbl_hi = "[[:upper:]]", lbl_lo = "[[:punct:]]", # col_lbl_hi = "red", col_lbl_lo = "blue") # plot_chars(file = "test.txt", lbl_hi = "[aeiou]", col_lbl_hi = "red", # col_bg = "white", bg_hi = "see") # mark vowels and "see" (in bg) # plot_chars(file = "test.txt", bg_hi = "[aeiou]", col_bg_hi = "gold") # mark (bg of) vowels ## Label options: # plot_chars(file = "test.txt", bg_hi = "see", lbl_tiles = FALSE) # plot_chars(file = "test.txt", cex = 5, family = "mono", fontface = 4, lbl_angle = c(-20, 20)) ## Note: plot_chars() invisibly returns a description of the plot (as df): # tb <- plot_chars(file = "test.txt", lbl_hi = "[aeiou]", lbl_rotate = TRUE) # head(tb) # unlink("test.txt") # clean up (by deleting file). ## (B) From text file (in subdir): # plot_chars(file = "data-raw/txt/hello.txt") # requires txt file # plot_chars(file = "data-raw/txt/ascii.txt", lbl_hi = "[2468]", bg_lo = "[[:digit:]]", # col_lbl_hi = "red", cex = 10, fontface = 2) ## (C) User input: # plot_chars() # (enter text in Console) plot_fn A function to plot a plot. Description plot_fn is a function that uses parameters for plotting a plot. Usage plot_fn( x = NA, y = 1, A = TRUE, B = FALSE, C = TRUE, D = FALSE, E = FALSE, F = FALSE, f = c(rev(pal_seeblau), "white", pal_pinky), g = "white" ) Arguments x Numeric (integer > 0). Default: x = NA. y Numeric (double). Default: y = 1. A Boolean. Default: A = TRUE. 
B Boolean. Default: B = FALSE. C Boolean. Default: C = TRUE. D Boolean. Default: D = FALSE. E Boolean. Default: E = FALSE. F Boolean. Default: F = FALSE. f A color palette (as a vector). Default: f = c(rev(pal_seeblau), "white", pal_pinky). Note: Using colors of the unikn package by default. g A color (e.g., a color name, as a character). Default: g = "white". Details plot_fn is deliberately kept cryptic and obscure to illustrate how function parameters can be explored. plot_fn also shows that brevity in argument names should not come at the expense of clarity. In fact, transparent argument names are absolutely essential for understanding and using a function. plot_fn currently requires pal_seeblau and pal_pinky (from the unikn package) for its default colors. See Also plot_fun for a related function; pal_ds4psy for a color palette. Other plot functions: plot_charmap(), plot_chars(), plot_fun(), plot_n(), plot_text(), plot_tiles(), theme_clean(), theme_ds4psy(), theme_empty() Examples # Basics: plot_fn() # Exploring options: plot_fn(x = 2, A = TRUE) plot_fn(x = 3, A = FALSE, E = TRUE) plot_fn(x = 4, A = TRUE, B = TRUE, D = TRUE) plot_fn(x = 5, A = FALSE, B = TRUE, E = TRUE, f = c("black", "white", "gold")) plot_fn(x = 7, A = TRUE, B = TRUE, F = TRUE, f = c("steelblue", "white", "forestgreen")) plot_fun Another function to plot some plot. Description plot_fun is a function that provides options for plotting a plot. Usage plot_fun( a = NA, b = TRUE, c = TRUE, d = 1, e = FALSE, f = FALSE, g = FALSE, c1 = c(rev(pal_seeblau), "white", pal_grau, "black", Bordeaux), c2 = "black" ) Arguments a Numeric (integer > 0). Default: a = NA. b Boolean. Default: b = TRUE. c Boolean. Default: c = TRUE. d Numeric (double). Default: d = 1.0. e Boolean. Default: e = FALSE. f Boolean. Default: f = FALSE. g Boolean. Default: g = FALSE. c1 A color palette (as a vector). 
Default: c1 = c(rev(pal_seeblau), "white", pal_grau, "black", Bordeaux) (i.e., using colors of the unikn package by default). c2 A color (e.g., color name, as character). Default: c2 = "black". Details plot_fun is deliberately kept cryptic and obscure to illustrate how function parameters can be explored. plot_fun also shows that brevity in argument names should not come at the expense of clarity. In fact, transparent argument names are absolutely essential for understanding and using a function. plot_fun currently requires pal_seeblau, pal_grau, and Bordeaux (from the unikn package) for its default colors. See Also plot_fn for a related function; pal_ds4psy for a color palette. Other plot functions: plot_charmap(), plot_chars(), plot_fn(), plot_n(), plot_text(), plot_tiles(), theme_clean(), theme_ds4psy(), theme_empty() Examples # Basics: plot_fun() # Exploring options: plot_fun(a = 3, b = FALSE, e = TRUE) plot_fun(a = 4, f = TRUE, g = TRUE, c1 = c("steelblue", "white", "firebrick")) plot_n Plot n tiles. Description plot_n plots a row or column of n tiles on fixed or polar coordinates. Usage plot_n( n = NA, row = TRUE, polar = FALSE, pal = pal_ds4psy, sort = TRUE, borders = TRUE, border_col = "black", border_size = 0, lbl_tiles = FALSE, lbl_title = FALSE, rseed = NA, save = FALSE, save_path = "images/tiles", prefix = "", suffix = "" ) Arguments n Basic number of tiles (on either side). row Plot as a row? Default: row = TRUE (else plotted as a column). polar Plot on polar coordinates? Default: polar = FALSE (i.e., using fixed coordinates). pal A color palette (automatically extended to n colors). Default: pal = pal_ds4psy. sort Sort tiles? Default: sort = TRUE (i.e., sorted tiles). borders Add borders to tiles? Default: borders = TRUE (i.e., use borders). border_col Color of borders (if borders = TRUE). Default: border_col = "black". border_size Size of borders (if borders = TRUE). Default: border_size = 0 (i.e., invisible). lbl_tiles Add numeric labels to tiles? 
Default: lbl_tiles = FALSE (i.e., no labels). lbl_title Add numeric label (of n) to plot? Default: lbl_title = FALSE (i.e., no title). rseed Random seed (number). Default: rseed = NA (using random seed). save Save plot as png file? Default: save = FALSE. save_path Path to save plot (if save = TRUE). Default: save_path = "images/tiles". prefix Prefix to plot name (if save = TRUE). Default: prefix = "". suffix Suffix to plot name (if save = TRUE). Default: suffix = "". Details Note that a polar row makes a tasty pie, whereas a polar column makes a target plot. See Also pal_ds4psy for default color palette. Other plot functions: plot_charmap(), plot_chars(), plot_fn(), plot_fun(), plot_text(), plot_tiles(), theme_clean(), theme_ds4psy(), theme_empty() Examples # (1) Basics (as ROW or COL): plot_n() # default plot (random n, row = TRUE, with borders, no labels) plot_n(row = FALSE) # default plot (random n, with borders, no labels) plot_n(n = 4, sort = FALSE) # random order plot_n(n = 6, borders = FALSE) # no borders plot_n(n = 8, lbl_tiles = TRUE, # with tile + lbl_title = TRUE) # title labels # Set colors: plot_n(n = 5, row = TRUE, lbl_tiles = TRUE, lbl_title = TRUE, pal = c("orange", "white", "firebrick"), border_col = "white", border_size = 2) # Fixed rseed: plot_n(n = 4, sort = FALSE, borders = FALSE, lbl_tiles = TRUE, lbl_title = TRUE, rseed = 101) # (2) polar plot (as PIE or TARGET): plot_n(polar = TRUE) # PIE plot (with borders, no labels) plot_n(polar = TRUE, row = FALSE) # TARGET plot (with borders, no labels) plot_n(n = 4, polar = TRUE, sort = FALSE) # PIE in random order plot_n(n = 5, polar = TRUE, row = FALSE, borders = FALSE) # TARGET no borders plot_n(n = 5, polar = TRUE, lbl_tiles = TRUE) # PIE with tile labels plot_n(n = 5, polar = TRUE, row = FALSE, lbl_title = TRUE) # TARGET with title label # plot_n(n = 4, row = TRUE, sort = FALSE, borders = TRUE, # border_col = "white", border_size = 2, # polar = TRUE, rseed = 132) # plot_n(n = 4, row = FALSE, sort 
= FALSE, borders = TRUE, # border_col = "white", border_size = 2, # polar = TRUE, rseed = 134) plot_text Plot text characters (from file or user input). Description plot_text parses text (from a file or from user input) and plots its individual characters as a tile plot (using ggplot2). Usage plot_text( x = NA, file = "", char_bg = " ", lbl_tiles = TRUE, lbl_rotate = FALSE, cex = 3, fontface = 1, family = "sans", col_lbl = "black", col_bg = "white", pal = pal_ds4psy[1:5], pal_extend = TRUE, case_sense = FALSE, borders = TRUE, border_col = "white", border_size = 0.5 ) Arguments x The text to plot (as a character vector). Different elements denote different lines of text. If x = NA (as per default), the file argument is used to read a text file or scan user input (entering text in Console). file A text file to read (or its path). If file = "" (as per default), scan is used to read user input from the Console. If a text file is stored in a sub-directory, enter its path and name here (without any leading or trailing "." or "/"). char_bg Character used as background. Default: char_bg = " ". If char_bg = NA, the most frequent character is used. lbl_tiles Add character labels to tiles? Default: lbl_tiles = TRUE (i.e., show labels). lbl_rotate Rotate character labels? Default: lbl_rotate = FALSE (i.e., no rotation). cex Character size (numeric). Default: cex = 3. fontface Font face of text labels (numeric). Default: fontface = 1 (from 1 to 4). family Font family of text labels (name). Default: family = "sans". Alternative options: "sans", "serif", or "mono". col_lbl Color of text labels. Default: col_lbl = "black" (if lbl_tiles = TRUE). col_bg Color of char_bg (if defined), or the most frequent character in text (typically " "). Default: col_bg = "white". pal Color palette for filling tiles of text (used in order of character frequency). Default: pal = pal_ds4psy[1:5] (i.e., shades of Seeblau). 
pal_extend Boolean: Should pal be extended to match the number of different characters in text? Default: pal_extend = TRUE. If pal_extend = FALSE, only the tiles of the length(pal) most frequent characters will be filled by the colors of pal. case_sense Boolean: Distinguish lower- vs. uppercase characters? Default: case_sense = FALSE. borders Boolean: Add borders to tiles? Default: borders = TRUE (i.e., use borders). border_col Color of borders (if borders = TRUE). Default: border_col = "white". border_size Size of borders (if borders = TRUE). Default: border_size = 0.5. Details plot_text blurs the boundary between a text and its graphical representation by adding visual options for coloring characters based on their frequency counts. (Note that plot_chars provides additional support for matching regular expressions.) plot_text is character-based: Individual characters are plotted at equidistant x-y-positions with color settings for text labels and tile fill colors. By default, the color palette pal (used for tile fill colors) is scaled to indicate character frequency. plot_text invisibly returns a description of the plot (as a data frame). Value An invisible data frame describing the plot. See Also plot_charmap for plotting character maps; plot_chars for creating and plotting character maps; map_text_coord for mapping text to a table of character coordinates; map_text_regex for mapping text to a character table and matching patterns; read_ascii for parsing text from file or user input; pal_ds4psy for default color palette. 
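The frequency-based fills described above can be sketched in base R. This is an illustration only (the actual plot_text internals may differ): count character frequencies with table(), then stretch a base palette to one color per distinct character, as pal_extend = TRUE suggests, via grDevices::colorRampPalette().

```r
# Sketch of the two steps behind frequency-based tile colors
# (illustration only; not the package's actual implementation):
txt   <- c("Hello world!", "How are you today?")
chars <- unlist(strsplit(tolower(paste(txt, collapse = " ")), split = ""))
freq  <- sort(table(chars), decreasing = TRUE)   # most frequent characters first

# pal_extend = TRUE: interpolate the palette to one color per distinct character:
pal     <- c("steelblue", "white", "firebrick")
pal_ext <- grDevices::colorRampPalette(pal)(length(freq))
length(pal_ext) == length(freq)                  # one fill color per character
```

With pal_extend = FALSE, only the length(pal) most frequent characters would receive a color from pal, matching the argument description above.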
Other plot functions: plot_charmap(), plot_chars(), plot_fn(), plot_fun(), plot_n(), plot_tiles(), theme_clean(), theme_ds4psy(), theme_empty() Examples # (A) From text string(s): plot_text(x = c("Hello", "world!")) plot_text(x = c("Hello world!", "How are you today?")) # (B) From user input: # plot_text() # (enter text in Console) # (C) From text file: ## Create a temporary file "test.txt": # cat("Hello world!", "This is a test file.", # "Can you see this text?", # "Good! Please carry on...", # file = "test.txt", sep = "\n") # plot_text(file = "test.txt") ## Set colors, pal_extend, and case_sense: # cols <- c("steelblue", "skyblue", "lightgrey") # cols <- c("firebrick", "olivedrab", "steelblue", "orange", "gold") # plot_text(file = "test.txt", pal = cols, pal_extend = TRUE) # plot_text(file = "test.txt", pal = cols, pal_extend = FALSE) # plot_text(file = "test.txt", pal = cols, pal_extend = FALSE, case_sense = TRUE) ## Customize text and grid options: # plot_text(file = "test.txt", col_lbl = "darkblue", cex = 4, family = "sans", fontface = 3, # pal = "gold1", pal_extend = TRUE, border_col = NA) # plot_text(file = "test.txt", family = "serif", cex = 6, lbl_rotate = TRUE, # pal = NA, borders = FALSE) # plot_text(file = "test.txt", col_lbl = "white", pal = c("green3", "black"), # border_col = "black", border_size = .2) ## Color ranges: # plot_text(file = "test.txt", pal = c("red2", "orange", "gold")) # plot_text(file = "test.txt", pal = c("olivedrab4", "gold")) # unlink("test.txt") # clean up. ## (B) From text file (in subdir): # plot_text(file = "data-raw/txt/hello.txt") # requires txt file # plot_text(file = "data-raw/txt/ascii.txt", cex = 5, # col_bg = "grey", char_bg = "-") ## (C) From user input: # plot_text() # (enter text in Console) plot_tiles Plot n-by-n tiles. Description plot_tiles plots an area of n-by-n tiles on fixed or polar coordinates. 
Usage plot_tiles( n = NA, pal = pal_ds4psy, sort = TRUE, borders = TRUE, border_col = "black", border_size = 0.2, lbl_tiles = FALSE, lbl_title = FALSE, polar = FALSE, rseed = NA, save = FALSE, save_path = "images/tiles", prefix = "", suffix = "" ) Arguments n Basic number of tiles (on either side). pal Color palette (automatically extended to n x n colors). Default: pal = pal_ds4psy. sort Boolean: Sort tiles? Default: sort = TRUE (i.e., sorted tiles). borders Boolean: Add borders to tiles? Default: borders = TRUE (i.e., use borders). border_col Color of borders (if borders = TRUE). Default: border_col = "black". border_size Size of borders (if borders = TRUE). Default: border_size = 0.2. lbl_tiles Boolean: Add numeric labels to tiles? Default: lbl_tiles = FALSE (i.e., no labels). lbl_title Boolean: Add numeric label (of n) to plot? Default: lbl_title = FALSE (i.e., no title). polar Boolean: Plot on polar coordinates? Default: polar = FALSE (i.e., using fixed coordinates). rseed Random seed (number). Default: rseed = NA (using random seed). save Boolean: Save plot as png file? Default: save = FALSE. save_path Path to save plot (if save = TRUE). Default: save_path = "images/tiles". prefix Prefix to plot name (if save = TRUE). Default: prefix = "". suffix Suffix to plot name (if save = TRUE). Default: suffix = "". See Also pal_ds4psy for default color palette. 
Other plot functions: plot_charmap(), plot_chars(), plot_fn(), plot_fun(), plot_n(), plot_text(), theme_clean(), theme_ds4psy(), theme_empty() Examples # (1) Tile plot: plot_tiles() # default plot (random n, with borders, no labels) plot_tiles(n = 4, sort = FALSE) # random order plot_tiles(n = 6, borders = FALSE) # no borders plot_tiles(n = 8, lbl_tiles = TRUE, # with tile + lbl_title = TRUE) # title labels # Set colors: plot_tiles(n = 4, pal = c("orange", "white", "firebrick"), lbl_tiles = TRUE, lbl_title = TRUE, sort = TRUE) plot_tiles(n = 6, sort = FALSE, border_col = "white", border_size = 2) # Fixed rseed: plot_tiles(n = 4, sort = FALSE, borders = FALSE, lbl_tiles = TRUE, lbl_title = TRUE, rseed = 101) # (2) polar plot: plot_tiles(polar = TRUE) # default polar plot (with borders, no labels) plot_tiles(n = 4, polar = TRUE, sort = FALSE) # random order plot_tiles(n = 6, polar = TRUE, sort = TRUE, # sorted and with lbl_tiles = TRUE, lbl_title = TRUE) # tile + title labels plot_tiles(n = 4, sort = FALSE, borders = TRUE, border_col = "white", border_size = 2, polar = TRUE, rseed = 132) # fixed rseed posPsy_AHI_CESD Positive Psychology: AHI CESD data. Description posPsy_AHI_CESD is a dataset containing answers to the 24 items of the Authentic Happiness Inventory (AHI) and answers to the 20 items of the Center for Epidemiological Studies Depression (CES-D) scale (Radloff, 1977) for multiple (1 to 6) measurement occasions. Usage posPsy_AHI_CESD Format A table with 992 cases (rows) and 50 variables (columns). Details Codebook • 1. id: Participant ID. • 2. occasion: Measurement occasion: 0: Pretest (i.e., at enrolment), 1: Posttest (i.e., 7 days after pretest), 2: 1-week follow-up (i.e., 14 days after pretest, 7 days after posttest), 3: 1-month follow-up (i.e., 38 days after pretest, 31 days after posttest), 4: 3-month follow-up (i.e., 98 days after pretest, 91 days after posttest), 5: 6-month follow-up (i.e., 189 days after pretest, 182 days after posttest). 
• 3. elapsed.days: Time since enrolment measured in fractional days. • 4. intervention: Type of intervention: 3 positive psychology interventions (PPIs), plus 1 control condition: 1: "Using signature strengths", 2: "Three good things", 3: "Gratitude visit", 4: "Recording early memories" (control condition). • 5.-28. (from ahi01 to ahi24): Responses on 24 AHI items. • 29.-48. (from cesd01 to cesd20): Responses on 20 CES-D items. • 49. ahiTotal: Total AHI score. • 50. cesdTotal: Total CES-D score. See codebook and references at https://bookdown.org/hneth/ds4psy/B-1-datasets-pos.html. Source Articles • <NAME>., <NAME>., <NAME>., & <NAME>. (2017). Web-based positive psychology interventions: A reexamination of effectiveness. Journal of Clinical Psychology, 73(3), 218–232. doi: 10.1002/jclp.22328 • <NAME>., <NAME>., <NAME>. and <NAME>. (2018). Data from, ‘Web-based positive psychology interventions: A reexamination of effectiveness’. Journal of Open Psychology Data, 6(1). doi: 10.5334/jopd.35 See https://openpsychologydata.metajnl.com/articles/10.5334/jopd.35/ for details and doi:10.6084/m9.figshare.1577563.v1 for original dataset. Additional references at https://bookdown.org/hneth/ds4psy/B-1-datasets-pos.html. See Also posPsy_long for a corrected version of this file (in long format). Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb posPsy_long Positive Psychology: AHI CESD corrected data (in long format). Description posPsy_long is a dataset containing answers to the 24 items of the Authentic Happiness Inventory (AHI) and answers to the 20 items of the Center for Epidemiological Studies Depression (CES-D) scale (see Radloff, 1977) for multiple (1 to 6) measurement occasions. 
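The contrast between this long format (one row per participant and measurement occasion) and the wide format of posPsy_wide can be illustrated with base R's reshape() on a toy data frame. The column names below are illustrative only, not the actual codebook:

```r
# Toy long-format data: one row per (id, occasion) pair
# (column names are illustrative, not those of posPsy_long):
long <- data.frame(id       = rep(1:2, each = 2),
                   occasion = rep(0:1, times = 2),
                   score    = c(10, 12, 20, 19))

# Pivot to wide format: one row per id, one score column per occasion:
wide <- stats::reshape(long, idvar = "id", timevar = "occasion",
                       direction = "wide")
names(wide)  # "id", plus one "score.<occasion>" column per occasion
```

The same logic, applied to 6 occasions and 44 item responses per occasion, explains why posPsy_wide has far more columns than the long-format table.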
Usage posPsy_long Format A table with 990 cases (rows) and 50 variables (columns). Details This dataset is a corrected version of posPsy_AHI_CESD and is in long format. Source Articles • <NAME>., <NAME>., <NAME>., & <NAME>. (2017). Web-based positive psychology interventions: A reexamination of effectiveness. Journal of Clinical Psychology, 73(3), 218–232. doi: 10.1002/jclp.22328 • <NAME>., <NAME>., <NAME>. and <NAME>. (2018). Data from, ‘Web-based positive psychology interventions: A reexamination of effectiveness’. Journal of Open Psychology Data, 6(1). doi: 10.5334/jopd.35 See https://openpsychologydata.metajnl.com/articles/10.5334/jopd.35/ for details and doi:10.6084/m9.figshare.1577563.v1 for original dataset. Additional references at https://bookdown.org/hneth/ds4psy/B-1-datasets-pos.html. See Also posPsy_AHI_CESD for the source of this file and codebook information; posPsy_wide for a version of this file (in wide format). Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb posPsy_p_info Positive Psychology: Participant data. Description posPsy_p_info is a dataset containing details of 295 participants. Usage posPsy_p_info Format A table with 295 cases (rows) and 6 variables (columns). Details id Participant ID. intervention Type of intervention: 3 positive psychology interventions (PPIs), plus 1 control condition: 1: "Using signature strengths", 2: "Three good things", 3: "Gratitude visit", 4: "Recording early memories" (control condition). sex Sex: 1 = female, 2 = male. age Age (in years). educ Education level: Scale from 1: less than 12 years, to 5: postgraduate degree. income Income: Scale from 1: below average, to 3: above average. 
See codebook and references at https://bookdown.org/hneth/ds4psy/B-1-datasets-pos.html. Source Articles • <NAME>., <NAME>., <NAME>., & <NAME>. (2017). Web-based positive psychology interventions: A reexamination of effectiveness. Journal of Clinical Psychology, 73(3), 218–232. doi: 10.1002/jclp.22328 • <NAME>., <NAME>., <NAME>. and <NAME>. (2018). Data from, ‘Web-based positive psychology interventions: A reexamination of effectiveness’. Journal of Open Psychology Data, 6(1). doi: 10.5334/jopd.35 See https://openpsychologydata.metajnl.com/articles/10.5334/jopd.35/ for details and doi:10.6084/m9.figshare.1577563.v1 for original dataset. Additional references at https://bookdown.org/hneth/ds4psy/B-1-datasets-pos.html. See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb posPsy_wide Positive Psychology: All corrected data (in wide format). Description posPsy_wide is a dataset containing answers to the 24 items of the Authentic Happiness Inventory (AHI) and answers to the 20 items of the Center for Epidemiological Studies Depression (CES-D) scale (see Radloff, 1977) for multiple (1 to 6) measurement occasions. Usage posPsy_wide Format An object of class spec_tbl_df (inherits from tbl_df, tbl, data.frame) with 295 rows and 294 columns. Details This dataset is based on posPsy_AHI_CESD and posPsy_long, but is in wide format. Source Articles • <NAME>., <NAME>., <NAME>., & <NAME>. (2017). Web-based positive psychology interventions: A reexamination of effectiveness. Journal of Clinical Psychology, 73(3), 218–232. doi: 10.1002/jclp.22328 • <NAME>., <NAME>., <NAME>. and <NAME>. (2018). Data from, ‘Web-based positive psychology interventions: A reexamination of effectiveness’. 
Journal of Open Psychology Data, 6(1). doi: 10.5334/jopd.35 See https://openpsychologydata.metajnl.com/articles/10.5334/jopd.35/ for details and doi:10.6084/m9.figshare.1577563.v1 for original dataset. Additional references at https://bookdown.org/hneth/ds4psy/B-1-datasets-pos.html. See Also posPsy_AHI_CESD for the source of this file, posPsy_long for a version of this file (in long format). Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb read_ascii read_ascii parses text (from file or user input) into string(s) of text. Description read_ascii parses text inputs (from a file or from user input in the Console) into a character vector. Usage read_ascii(file = "", quiet = FALSE) Arguments file The text file to read (or its path). If file = "" (the default), scan is used to read user input from the Console. If a text file is stored in a sub-directory, enter its path and name here (without any leading or trailing "." or "/"). Default: file = "". quiet Boolean: Provide user feedback? Default: quiet = FALSE. Details Different lines of text are represented by different elements of the character vector returned. The getwd function is used to determine the current working directory. This replaces the here package, which was previously used to determine an (absolute) file path. Note that read_ascii originally contained map_text_coord, but has been separated to enable independent access to separate functionalities. Value A character vector, with its elements denoting different lines of text. See Also map_text_coord for mapping text to a table of character coordinates; plot_chars for a character plotting function. 
Other text objects and functions: Umlaut, capitalize(), caseflip(), cclass, chars_to_text(), collapse_chars(), count_chars_words(), count_chars(), count_words(), invert_rules(), l33t_rul35, map_text_chars(), map_text_coord(), map_text_regex(), metachar, text_to_chars(), text_to_sentences(), text_to_words(), transl33t(), words_to_text() Examples ## Create a temporary file "test.txt": # cat("Hello world!", "This is a test.", # "Can you see this text?", # "Good! Please carry on...", # file = "test.txt", sep = "\n") ## (a) Read text (from file): # read_ascii("test.txt") # read_ascii("test.txt", quiet = TRUE) # no user feedback # unlink("test.txt") # clean up (by deleting file). ## (b) Read text (from file in subdir): # read_ascii("data-raw/txt/ascii.txt") # requires txt file ## (c) Scan user input (from console): # read_ascii() sample_char Draw a sample of n random characters (from given characters). Description sample_char draws a sample of n random characters from a given range of characters. Usage sample_char(x_char = c(letters, LETTERS), n = 1, replace = FALSE, ...) Arguments x_char Population of characters to sample from. Default: x_char = c(letters, LETTERS). n Number of characters to draw. Default: n = 1. replace Boolean: Sample with replacement? Default: replace = FALSE. ... Other arguments. (Use for specifying prob, as passed to sample().) Details By default, sample_char draws n = 1 random alphabetic character from x_char = c(letters, LETTERS). As with sample(), the sample size n must not exceed the number of available characters nchar(x_char), unless replace = TRUE (i.e., sampling with replacement). Value A text string (scalar character vector). 
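The behavior described above can be approximated in a few lines of base R. This is a sketch only; the actual sample_char additionally validates its inputs and passes prob on to sample():

```r
# Sketch: sample n characters from a character population and collapse
# them into a single string (a scalar character vector, as in the Value field):
pool <- c(letters, LETTERS)                       # population of characters
set.seed(42)                                      # for reproducibility
smp  <- sample(pool, size = 10, replace = FALSE)  # as in sample_char(n = 10)
paste(smp, collapse = "")                         # one 10-character string
```

As noted in the Details, sample() stops when size exceeds the population unless replace = TRUE, which explains the error behavior shown in the examples below.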
See Also Other sampling functions: coin(), dice_2(), dice(), sample_date(), sample_time() Examples sample_char() # default sample_char(n = 10) sample_char(x_char = "abc", n = 10, replace = TRUE) sample_char(x_char = c("x y", "6 9"), n = 6, replace = FALSE) sample_char(x_char = c("x y", "6 9"), n = 20, replace = TRUE) # Biased sampling: sample_char(x_char = "abc", n = 20, replace = TRUE, prob = c(3/6, 2/6, 1/6)) # Note: By default, n must not exceed nchar(x_char): sample_char(n = 52, replace = FALSE) # works, but # sample_char(n = 53, replace = FALSE) # would yield ERROR; sample_char(n = 53, replace = TRUE) # works again. sample_date Draw a sample of n random dates (from a given range). Description sample_date draws a sample of n random dates from a given range. Usage sample_date(from = "1970-01-01", to = Sys.Date(), size = 1, ...) Arguments from Earliest date (as "Date" or string). Default: from = "1970-01-01" (as a scalar). to Latest date (as "Date" or string). Default: to = Sys.Date() (as a scalar). size Size of date samples to draw. Default: size = 1. ... Other arguments. (Use for specifying replace, as passed to sample().) Details By default, sample_date draws n = 1 random date (as a "Date" object) in the range from = "1970-01-01" to = Sys.Date() (current date). Both from and to currently need to be scalars (i.e., with a length of 1). Value A vector of class "Date". See Also Other sampling functions: coin(), dice_2(), dice(), sample_char(), sample_time() Examples sample_date() sort(sample_date(size = 10)) sort(sample_date(from = "2020-02-28", to = "2020-03-01", size = 10, replace = TRUE)) # 2020 is a leap year # Note: Oddity with sample(): sort(sample_date(from = "2020-01-01", to = "2020-01-01", size = 10, replace = TRUE)) # range of 0! # see sample(9:9, size = 10, replace = TRUE) sample_time Draw a sample of n random times (from a given range). Description sample_time draws a sample of n random times from a given range. 
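Both sample_date and sample_time can be emulated by sampling numeric offsets from the start of the range; a base-R sketch for dates (an illustration, not the actual implementation):

```r
# Sketch: sample integer day offsets within [from, to] and convert
# them back to class "Date" via Date arithmetic:
from   <- as.Date("2020-02-28")
to     <- as.Date("2020-03-01")
n_days <- as.integer(to - from) + 1L               # days in range, inclusive
d <- from + (sample.int(n_days, size = 10, replace = TRUE) - 1L)
inherits(d, "Date")                                # a "Date" vector
all(d >= from & d <= to)                           # all draws fall in the range
```

Because 2020 is a leap year, this range contains 2020-02-29, mirroring the leap-year example for sample_date above.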
Usage sample_time( from = "1970-01-01 00:00:00", to = Sys.time(), size = 1, as_POSIXct = TRUE, tz = "", ... ) Arguments from Earliest date-time (as string). Default: from = "1970-01-01 00:00:00" (as a scalar). to Latest date-time (as string). Default: to = Sys.time() (as a scalar). size Size of time samples to draw. Default: size = 1. as_POSIXct Boolean: Return calendar time ("POSIXct") object? Default: as_POSIXct = TRUE. If as_POSIXct = FALSE, a local time ("POSIXlt") object is returned (as a list). tz Time zone. Default: tz = "" (i.e., current system time zone, see Sys.timezone()). Use tz = "UTC" for Universal Time, Coordinated. ... Other arguments. (Use for specifying replace, as passed to sample().) Details By default, sample_time draws n = 1 random calendar time (as a "POSIXct" object) in the range from = "1970-01-01 00:00:00" to = Sys.time() (current time). Both from and to currently need to be scalars (i.e., with a length of 1). If as_POSIXct = FALSE, a local time ("POSIXlt") object is returned (as a list). The tz argument allows specifying time zones (see Sys.timezone() for current setting and OlsonNames() for options.) Value A vector of class "POSIXct" or "POSIXlt". 
See Also Other sampling functions: coin(), dice_2(), dice(), sample_char(), sample_date() Examples # Basics: sample_time() sample_time(size = 10) # Specific ranges: sort(sample_time(from = (Sys.time() - 60), size = 10)) # within last minute sort(sample_time(from = (Sys.time() - 1 * 60 * 60), size = 10)) # within last hour sort(sample_time(from = Sys.time(), to = (Sys.time() + 1 * 60 * 60), size = 10, replace = FALSE)) # within next hour sort(sample_time(from = "2020-12-31 00:00:00 CET", to = "2020-12-31 00:00:01 CET", size = 10, replace = TRUE)) # within 1 sec range # Local time (POSIXlt) objects (as list): (lt_sample <- sample_time(as_POSIXct = FALSE)) unlist(lt_sample) # Time zones: sample_time(size = 3, tz = "UTC") sample_time(size = 3, tz = "America/Los_Angeles") # Note: Oddity with sample(): sort(sample_time(from = "2020-12-31 00:00:00 CET", to = "2020-12-31 00:00:00 CET", size = 10, replace = TRUE)) # range of 0! # see sample(9:9, size = 10, replace = TRUE) t3 Data table t3. Description t3 is a fictitious dataset to practice importing and joining data (from a CSV file). Usage t3 Format A table with 10 cases (rows) and 4 variables (columns). Source See CSV data at http://rpository.com/ds4psy/data/t3.csv. See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb t4 Data table t4. Description t4 is a fictitious dataset to practice importing and joining data (from a CSV file). Usage t4 Format A table with 10 cases (rows) and 4 variables (columns). Source See CSV data at http://rpository.com/ds4psy/data/t4.csv. 
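Since t3 and t4 are meant for practicing importing and joining, a minimal base-R workflow could look as follows. It is kept as comments (like the file-based examples elsewhere in this manual) because it requires internet access; the join key is a hypothetical placeholder, not documented here:

```r
# t3 <- read.csv("http://rpository.com/ds4psy/data/t3.csv")  # import t3
# t4 <- read.csv("http://rpository.com/ds4psy/data/t4.csv")  # import t4
# str(t3); str(t4)            # inspect both tables to find a shared key variable
# merge(t3, t4, by = "name")  # join by key ("name" is a hypothetical guess)
```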
See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb table6 Data table6. Description table6 is a fictitious dataset to practice reshaping and tidying data. Usage table6 Format A table with 6 cases (rows) and 2 variables (columns). Details This dataset is a further variant of the table1 to table5 datasets of the tidyr package. Source See CSV data at http://rpository.com/ds4psy/data/table6.csv. See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table7, table8, table9, tb table7 Data table7. Description table7 is a fictitious dataset to practice reshaping and tidying data. Usage table7 Format A table with 6 cases (rows) and 1 (horrendous) variable (column). Details This dataset is a further variant of the table1 to table5 datasets of the tidyr package. Source See CSV data at http://rpository.com/ds4psy/data/table7.csv. See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table8, table9, tb table8 Data table8. Description table8 is a fictitious dataset to practice reshaping and tidying data. Usage table8 Format A table with 3 cases (rows) and 5 variables (columns). Details This dataset is a further variant of the table1 to table5 datasets of the tidyr package. 
Source See CSV data at http://rpository.com/ds4psy/data/table8.csv. See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table9, tb table9 Data table9. Description table9 is a fictitious dataset to practice reshaping and tidying data. Usage table9 Format A 3 x 2 x 2 array (of type "xtabs") with 12 elements (frequency counts). Details This dataset is a further variant of the table1 to table5 datasets of the tidyr package. Source Generated by using stats::xtabs(formula = count ~., data = tidyr::table2). See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, tb tb Data table tb. Description tb is a fictitious dataset describing 100 non-existing, but otherwise ordinary people. Usage tb Format A table with 100 cases (rows) and 5 variables (columns). Details Codebook The table contains 5 columns/variables: • 1. id: Participant ID. • 2. age: Age (in years). • 3. height: Height (in cm). • 4. shoesize: Shoesize (EU standard). • 5. IQ: IQ score (according to Raven’s Regressive Tables). tb was originally created to practice loops and iterations (as a CSV file). Source See CSV data file at http://rpository.com/ds4psy/data/tb.csv. 
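The stats::xtabs() call named in table9's Source field can be illustrated on a small toy data frame, so that tidyr::table2 itself is not needed:

```r
# Toy data in the shape of tidyr's table2: one count per row
# (values are made up for illustration):
df <- data.frame(country = rep(c("A", "B"), each = 2),
                 type    = rep(c("cases", "population"), times = 2),
                 count   = c(1, 10, 2, 20))
xt <- stats::xtabs(count ~ ., data = df)  # cross-tabulate all other variables
dim(xt)                                    # a 2 x 2 array of frequency counts
```

With table2's three cross-classifying variables (3 x 2 x 2 levels), the same call yields the 12-element "xtabs" array described in table9's Format field.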
See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9 text_to_chars Split string(s) of text x into its characters. Description text_to_chars splits a string of text x (consisting of one or more character strings) into a vector of its individual characters. Usage text_to_chars(x, rm_specials = FALSE, sep = "") Arguments x A string of text (required). rm_specials Boolean: Remove special characters? Default: rm_specials = FALSE. sep Character to insert between the elements of a multi-element character vector as input x? Default: sep = "" (i.e., add nothing). Details If rm_specials = TRUE, most special (or non-word) characters are removed. (Note that this currently works without using regular expressions.) text_to_chars is an inverse function of chars_to_text. Value A character vector (containing individual characters). See Also chars_to_text for combining character vectors into text; text_to_sentences for splitting text into a vector of sentences; text_to_words for splitting text into a vector of words; count_chars for counting the frequency of characters; count_words for counting the frequency of words; strsplit for splitting strings.
Other text objects and functions: Umlaut, capitalize(), caseflip(), cclass, chars_to_text(), collapse_chars(), count_chars_words(), count_chars(), count_words(), invert_rules(), l33t_rul35, map_text_chars(), map_text_coord(), map_text_regex(), metachar, read_ascii(), text_to_sentences(), text_to_words(), transl33t(), words_to_text() Examples s3 <- c("A 1st sentence.", "The 2nd sentence.", "A 3rd --- and FINAL --- sentence.") text_to_chars(s3) text_to_chars(s3, sep = "\n") text_to_chars(s3, rm_specials = TRUE) text_to_sentences Split strings of text x into sentences. Description text_to_sentences splits text x (consisting of one or more character strings) into a vector of its constituting sentences. Usage text_to_sentences( x, sep = " ", split_delim = "\\.|\\?|!", force_delim = FALSE ) Arguments x A string of text (required), typically a character vector. sep A character inserted as separator/delimiter between elements when collapsing multi-element strings of x. Default: sep = " " (i.e., insert 1 space between elements). split_delim Sentence delimiters (as regex) used to split the collapsed string of x into substrings. Default: split_delim = "\\.|\\?|!" (rather than "[[:punct:]]"). force_delim Boolean: Enforce splitting at split_delim? If force_delim = FALSE (as per default), a standard sentence-splitting pattern is assumed: split_delim is followed by one or more blank spaces and a capital letter. If force_delim = TRUE, splits at split_delim are enforced (without considering spacing or capitalization). Details The splits of x will occur at given punctuation marks (provided as a regular expression, default: split_delim = "\\.|\\?|!"). Empty leading and trailing spaces are removed before returning a vector of the remaining character sequences (i.e., the sentences). The Boolean argument force_delim distinguishes between two splitting modes: 1.
If force_delim = FALSE (as per default), a standard sentence-splitting pattern is assumed: A sentence delimiter in split_delim must be followed by one or more blank spaces and a capital letter starting the next sentence. Sentence delimiters in split_delim are not removed from the output. 2. If force_delim = TRUE, the function enforces splits at each delimiter in split_delim. For instance, any dot (i.e., the metacharacter "\\.") is interpreted as a full stop, so that sentences containing dots mid-sentence (e.g., for abbreviations, etc.) are split into parts. Sentence delimiters in split_delim are removed from the output. Internally, text_to_sentences first uses paste to collapse strings (adding sep between elements) and then strsplit to split strings at split_delim. Value A character vector (of sentences). See Also text_to_words for splitting text into a vector of words; text_to_chars for splitting text into a vector of characters; count_words for counting the frequency of words; strsplit for splitting strings. Other text objects and functions: Umlaut, capitalize(), caseflip(), cclass, chars_to_text(), collapse_chars(), count_chars_words(), count_chars(), count_words(), invert_rules(), l33t_rul35, map_text_chars(), map_text_coord(), map_text_regex(), metachar, read_ascii(), text_to_chars(), text_to_words(), transl33t(), words_to_text() Examples x <- c("A first sentence. Exclamation sentence!", "Any questions? But etc. can be tricky. A fourth --- and final --- sentence.") text_to_sentences(x) text_to_sentences(x, force_delim = TRUE) # Changing split delimiters: text_to_sentences(x, split_delim = "\\.") # only split at "." text_to_sentences("Buy apples, berries, and coconuts.") text_to_sentences("Buy apples, berries; and coconuts.", split_delim = ",|;|\\.", force_delim = TRUE) text_to_sentences(c("123. 456? 789!
007 etc."), force_delim = TRUE) # Split multi-element strings (w/o punctuation): e3 <- c("12", "34", "56") text_to_sentences(e3, sep = " ") # Default: Collapse strings adding 1 space, but: text_to_sentences(e3, sep = ".", force_delim = TRUE) # insert sep and force split. # Punctuation within sentences: text_to_sentences("Dr. who is left intact.") text_to_sentences("Dr. Who is problematic.") text_to_words Split string(s) of text x into words. Description text_to_words splits a string of text x (consisting of one or more character strings) into a vector of its constituting words. Usage text_to_words(x) Arguments x A string of text (required), typically a character vector. Details text_to_words removes all (standard) punctuation marks and empty spaces in the resulting text parts, before returning a vector of the remaining character symbols (as its words). Internally, text_to_words uses strsplit to split strings at punctuation marks (split = "[[:punct:]]") and blank spaces (split = "( ){1,}"). Value A character vector (of words). See Also text_to_sentences for splitting text into a vector of sentences; text_to_chars for splitting text into a vector of characters; count_words for counting the frequency of words; strsplit for splitting strings. Other text objects and functions: Umlaut, capitalize(), caseflip(), cclass, chars_to_text(), collapse_chars(), count_chars_words(), count_chars(), count_words(), invert_rules(), l33t_rul35, map_text_chars(), map_text_coord(), map_text_regex(), metachar, read_ascii(), text_to_chars(), text_to_sentences(), transl33t(), words_to_text() Examples # Default: x <- c("Hello!", "This is a 1st sentence.", "This is the 2nd sentence.", "The end.") text_to_words(x) theme_clean A clean alternative theme for ggplot2. Description theme_clean provides an alternative ds4psy theme to use in ggplot2 commands.
Usage theme_clean( base_size = 11, base_family = "", base_line_size = base_size/22, base_rect_size = base_size/22, col_title = grey(0, 1), col_panel = grey(0.85, 1), col_gridx = grey(1, 1), col_gridy = grey(1, 1), col_ticks = grey(0.1, 1) ) Arguments base_size Base font size (optional, numeric). Default: base_size = 11. base_family Base font family (optional, character). Default: base_family = "". Options include "mono", "sans" (default), and "serif". base_line_size Base line size (optional, numeric). Default: base_line_size = base_size/22. base_rect_size Base rectangle size (optional, numeric). Default: base_rect_size = base_size/22. col_title Color of plot title (and tag). Default: col_title = grey(.0, 1) (i.e., "black"). col_panel Color of panel background(s). Default: col_panel = grey(.85, 1) (i.e., light "grey"). col_gridx Color of (major) panel lines (through x/vertical). Default: col_gridx = grey(1.0, 1) (i.e., "white"). col_gridy Color of (major) panel lines (through y/horizontal). Default: col_gridy = grey(1.0, 1) (i.e., "white"). col_ticks Color of axes text and ticks. Default: col_ticks = grey(.10, 1) (i.e., near "black"). Details theme_clean is more minimal than theme_ds4psy and fills panel backgrounds with a color col_panel. This theme works well for plots with multiple panels, strong colors and bright color accents, but is of limited use with transparent colors. Value A ggplot2 theme. See Also theme_ds4psy for default theme. 
Other plot functions: plot_charmap(), plot_chars(), plot_fn(), plot_fun(), plot_n(), plot_text(), plot_tiles(), theme_ds4psy(), theme_empty() Examples # Plotting iris dataset (using ggplot2, theme_clean, and unikn colors): library('ggplot2') # theme_clean() requires ggplot2 library('unikn') # for colors and usecol() function ggplot(datasets::iris) + geom_jitter(aes(x = Sepal.Length, y = Sepal.Width, color = Species), size = 3, alpha = 3/4) + facet_wrap(~Species) + scale_color_manual(values = usecol(pal = c(Pinky, Karpfenblau, Seegruen))) + labs(tag = "B", title = "Iris sepals", caption = "Data from datasets::iris") + coord_fixed(ratio = 3/2) + theme_clean() theme_ds4psy A basic and flexible plot theme (using ggplot2 and unikn). Description theme_ds4psy provides a generic ds4psy theme to use in ggplot2 commands. Usage theme_ds4psy( base_size = 11, base_family = "", base_line_size = base_size/22, base_rect_size = base_size/22, col_title = grey(0, 1), col_txt_1 = grey(0.1, 1), col_txt_2 = grey(0.2, 1), col_txt_3 = grey(0.1, 1), col_bgrnd = "transparent", col_panel = grey(1, 1), col_strip = "transparent", col_axes = grey(0, 1), col_gridx = grey(0.75, 1), col_gridy = grey(0.75, 1), col_brdrs = "transparent" ) Arguments base_size Base font size (optional, numeric). Default: base_size = 11. base_family Base font family (optional, character). Default: base_family = "". Options include "mono", "sans" (default), and "serif". base_line_size Base line size (optional, numeric). Default: base_line_size = base_size/22. base_rect_size Base rectangle size (optional, numeric). Default: base_rect_size = base_size/22. col_title Color of plot title (and tag). Default: col_title = grey(.0, 1) (i.e., "black"). col_txt_1 Color of primary text (headings and axis labels). Default: col_txt_1 = grey(.1, 1). col_txt_2 Color of secondary text (caption, legend, axes labels/ticks). Default: col_txt_2 = grey(.2, 1). col_txt_3 Color of other text (facet strip labels).
Default: col_txt_3 = grey(.1, 1). col_bgrnd Color of plot background. Default: col_bgrnd = "transparent". col_panel Color of panel background(s). Default: col_panel = grey(1.0, 1) (i.e., "white"). col_strip Color of facet strips. Default: col_strip = "transparent". col_axes Color of (x and y) axes. Default: col_axes = grey(.00, 1) (i.e., "black"). col_gridx Color of (major and minor) panel lines (through x/vertical). Default: col_gridx = grey(.75, 1) (i.e., light "grey"). col_gridy Color of (major and minor) panel lines (through y/horizontal). Default: col_gridy = grey(.75, 1) (i.e., light "grey"). col_brdrs Color of (panel and strip) borders. Default: col_brdrs = "transparent". Details The theme is lightweight and no-nonsense, but somewhat opinionated (e.g., in using transparency and grid lines, and relying on grey tones for emphasizing data with color accents). Basic sizes and the colors of text elements, backgrounds, and lines can be specified. However, excessive customization rarely yields aesthetic improvements over the standard ggplot2 themes. Value A ggplot2 theme. See Also unikn::theme_unikn inspired the current theme.
Other plot functions: plot_charmap(), plot_chars(), plot_fn(), plot_fun(), plot_n(), plot_text(), plot_tiles(), theme_clean(), theme_empty() Examples # Plotting iris dataset (using ggplot2 and unikn): library('ggplot2') # theme_ds4psy() requires ggplot2 library('unikn') # for colors and usecol() function ggplot(datasets::iris) + geom_jitter(aes(x = Petal.Length, y = Petal.Width, color = Species), size = 3, alpha = 2/3) + scale_color_manual(values = usecol(pal = c(Pinky, Seeblau, Seegruen))) + labs(title = "Iris petals", subtitle = "The subtitle of this plot", caption = "Data from datasets::iris") + theme_ds4psy() ggplot(datasets::iris) + geom_jitter(aes(x = Sepal.Length, y = Sepal.Width, color = Species), size = 3, alpha = 2/3) + facet_wrap(~Species) + scale_color_manual(values = usecol(pal = c(Pinky, Seeblau, Seegruen))) + labs(tag = "A", title = "Iris sepals", subtitle = "Demo plot with facets and default colors", caption = "Data from datasets::iris") + coord_fixed(ratio = 3/2) + theme_ds4psy() # A unikn::Seeblau look: ggplot(datasets::iris) + geom_jitter(aes(x = Sepal.Length, y = Sepal.Width, color = Species), size = 3, alpha = 2/3) + facet_wrap(~Species) + scale_color_manual(values = usecol(pal = c(Pinky, Seeblau, Seegruen))) + labs(tag = "B", title = "Iris sepals", subtitle = "Demo plot in unikn::Seeblau colors", caption = "Data from datasets::iris") + coord_fixed(ratio = 3/2) + theme_ds4psy(col_title = pal_seeblau[[4]], col_strip = pal_seeblau[[1]], col_brdrs = Grau) theme_empty A basic and flexible plot theme (using ggplot2 and unikn). Description theme_empty provides an empty (blank) theme to use in ggplot2 commands. Usage theme_empty( font_size = 12, font_family = "", rel_small = 12/14, plot_mar = c(0, 0, 0, 0) ) Arguments font_size Overall font size. Default: font_size = 12. font_family Base font family. Default: font_family = "". rel_small Relative size of smaller text. Default: rel_small = 12/14. plot_mar Plot margin sizes (on top, right, bottom, left).
Default: plot_mar = c(0, 0, 0, 0) (in lines). Details theme_empty shows nothing but the plot panel. theme_empty is based on theme_nothing of the cowplot package and uses theme_void of the ggplot2 package. Value A ggplot2 theme. See Also cowplot::theme_nothing is the inspiration and source of this theme. Other plot functions: plot_charmap(), plot_chars(), plot_fn(), plot_fun(), plot_n(), plot_text(), plot_tiles(), theme_clean(), theme_ds4psy() Examples # Plotting iris dataset (using ggplot2): library('ggplot2') # theme_empty() requires ggplot2 ggplot(datasets::iris) + geom_point(aes(x = Petal.Length, y = Petal.Width, color = Species), size = 4, alpha = 1/2) + scale_color_manual(values = c("firebrick3", "deepskyblue3", "olivedrab3")) + labs(title = "NOT SHOWN: Title", subtitle = "NOT SHOWN: Subtitle", caption = "NOT SHOWN: Data from datasets::iris") + theme_empty(plot_mar = c(2, 0, 1, 0)) # margin lines (top, right, bot, left) transl33t transl33t translates text into leet slang. Description transl33t translates text into leet (or l33t) slang given a set of rules. Usage transl33t(txt, rules = l33t_rul35, in_case = "no", out_case = "no") Arguments txt The text (character string) to translate. rules Rules which existing character in txt is to be replaced by which new character (as a named character vector). Default: rules = l33t_rul35. in_case Change case of input string txt. Default: in_case = "no". Set to "lo" or "up" for lower or uppercase, respectively. out_case Change case of output string. Default: out_case = "no". Set to "lo" or "up" for lower or uppercase, respectively. Details The current version of transl33t only uses base R commands, rather than the stringr package. Value A character vector. See Also l33t_rul35 for default rules used; invert_rules for inverting rules. 
Other text objects and functions: Umlaut, capitalize(), caseflip(), cclass, chars_to_text(), collapse_chars(), count_chars_words(), count_chars(), count_words(), invert_rules(), l33t_rul35, map_text_chars(), map_text_coord(), map_text_regex(), metachar, read_ascii(), text_to_chars(), text_to_sentences(), text_to_words(), words_to_text() Examples # Use defaults: transl33t(txt = "hello world") transl33t(txt = c(letters)) transl33t(txt = c(LETTERS)) # Specify rules: transl33t(txt = "hello world", rules = c("e" = "3", "l" = "1", "o" = "0")) # Set input and output case: transl33t(txt = "hello world", in_case = "up", rules = c("e" = "3", "l" = "1", "o" = "0")) # e only capitalized transl33t(txt = "hEllo world", in_case = "lo", out_case = "up", rules = c("e" = "3", "l" = "1", "o" = "0")) # e transl33ted Trumpisms Data: Trumpisms. Description Trumpisms contains frequent words and characteristic phrases by U.S. president Donald Trump (the 45th president of the United States, in office from January 20, 2017, to January 20, 2021). Usage Trumpisms Format A vector of type character with length(Trumpisms) = 168 (on 2021-01-28). Source Data originally based on a collection of Donald Trump’s 20 most frequently used words on https://www.yourdictionary.com and expanded by interviews, public speeches, and Twitter tweets from https://twitter.com/realDonaldTrump. See Also Other datasets: Bushisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, t_4, table6, table7, table8, table9, tb t_1 Data t_1. Description t_1 is a fictitious dataset to practice tidying data. Usage t_1 Format A table with 8 cases (rows) and 9 variables (columns). Source See CSV data at http://rpository.com/ds4psy/data/t_1.csv.
See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_2, t_3, t_4, table6, table7, table8, table9, tb t_2 Data t_2. Description t_2 is a fictitious dataset to practice tidying data. Usage t_2 Format A table with 8 cases (rows) and 5 variables (columns). Source See CSV data at http://rpository.com/ds4psy/data/t_2.csv. See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_3, t_4, table6, table7, table8, table9, tb t_3 Data t_3. Description t_3 is a fictitious dataset to practice tidying data. Usage t_3 Format A table with 16 cases (rows) and 6 variables (columns). Source See CSV data at http://rpository.com/ds4psy/data/t_3.csv. See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_4, table6, table7, table8, table9, tb t_4 Data t_4. Description t_4 is a fictitious dataset to practice tidying data. Usage t_4 Format A table with 16 cases (rows) and 8 variables (columns). Source See CSV data at http://rpository.com/ds4psy/data/t_4.csv. 
See Also Other datasets: Bushisms, Trumpisms, countries, data_1, data_2, data_t1_de, data_t1_tab, data_t1, data_t2, data_t3, data_t4, dt_10, exp_num_dt, exp_wide, falsePosPsy_all, fame, flowery, fruits, outliers, pi_100k, posPsy_AHI_CESD, posPsy_long, posPsy_p_info, posPsy_wide, t3, t4, t_1, t_2, t_3, table6, table7, table8, table9, tb Umlaut Umlaut provides German Umlaut letters (as Unicode characters). Description Umlaut provides the German Umlaut letters (aka. diaeresis/diacritic) as a named character vector. Usage Umlaut Format An object of class character of length 7. Details For Unicode details, see https://home.unicode.org/. For details on German Umlaut letters (aka. diaeresis/diacritic), see https://en.wikipedia.org/wiki/Diaeresis_(diacritic) and https://en.wikipedia.org/wiki/Germanic_umlaut. See Also Other text objects and functions: capitalize(), caseflip(), cclass, chars_to_text(), collapse_chars(), count_chars_words(), count_chars(), count_words(), invert_rules(), l33t_rul35, map_text_chars(), map_text_coord(), map_text_regex(), metachar, read_ascii(), text_to_chars(), text_to_sentences(), text_to_words(), transl33t(), words_to_text() Examples Umlaut names(Umlaut) paste0("Hansj", Umlaut["o"], "rg i", Umlaut["s"], "t s", Umlaut["u"], "sse ", Umlaut["A"], "pfel.") paste0("Das d", Umlaut["u"], "nne M", Umlaut["a"], "dchen l", Umlaut["a"], "chelt.") paste0("Der b", Umlaut["o"], "se Mann macht ", Umlaut["u"], "blen ", Umlaut["A"], "rger.") paste0("Das ", Umlaut["U"], "ber-Ich ist ", Umlaut["a"], "rgerlich.") what_date What date is it? Description what_date provides a satisficing version of Sys.Date() that is sufficient for most purposes. Usage what_date( when = NA, rev = FALSE, as_string = TRUE, sep = "-", month_form = "m", tz = "" ) Arguments when Date(s) (as a scalar or vector). Default: when = NA. Using as.Date(when) to convert strings into dates, and Sys.Date(), if when = NA. rev Boolean: Reverse date (to "dd-mm-yyyy" format)? Default: rev = FALSE.
as_string Boolean: Return as character string? Default: as_string = TRUE. If as_string = FALSE, a "Date" object is returned. sep Character: Separator to use. Default: sep = "-". month_form Character: Month format. Default: month_form = "m" for numeric month (01- 12). Use month_form = "b" for short month name and month_form = "B" for full month name (in current locale). tz Time zone. Default: tz = "" (i.e., current system time zone, see Sys.timezone()). Use tz = "UTC" for Coordinated Universal Time. Details By default, what_date returns either Sys.Date() or the dates provided by when as a character string (using current system settings and sep for formatting). If as_string = FALSE, a "Date" object is returned. The tz argument allows specifying time zones (see Sys.timezone() for current setting and OlsonNames() for options.) However, tz is merely used to represent the dates provided to the when argument. Thus, there currently is no active conversion of dates into other time zones (see the today function of lubridate package). Value A character string or object of class "Date". See Also what_wday() function to obtain (week)days; what_time() function to obtain times; cur_time() function to print the current time; cur_date() function to print the current date; now() function of the lubridate package; Sys.time() function of base R. 
Other date and time functions: change_time(), change_tz(), cur_date(), cur_time(), days_in_month(), diff_dates(), diff_times(), diff_tz(), is_leap_year(), what_month(), what_time(), what_wday(), what_week(), what_year(), zodiac() Examples what_date() what_date(sep = "/") what_date(rev = TRUE) what_date(rev = TRUE, sep = ".") what_date(rev = TRUE, sep = " ", month_form = "B") # with "POSIXct" times: what_date(when = Sys.time()) # with time vector (of "POSIXct" objects): ts <- c("1969-07-13 13:53 CET", "2020-12-31 23:59:59") what_date(ts) what_date(ts, rev = TRUE, sep = ".") what_date(ts, rev = TRUE, month_form = "b") # return a "Date" object: dt <- what_date(as_string = FALSE) class(dt) # with time zone: ts <- ISOdate(2020, 12, 24, c(0, 12)) # midnight and midday UTC what_date(when = ts, tz = "Pacific/Honolulu", as_string = FALSE) what_month What month is it? Description what_month provides a satisficing function to determine the month corresponding to a given date. Usage what_month(when = Sys.Date(), abbr = FALSE, as_integer = FALSE) Arguments when Date (as a scalar or vector). Default: when = Sys.Date(). Using as.Date(when) to convert strings into dates, and Sys.Date(), if when = NA. abbr Boolean: Return abbreviated? Default: abbr = FALSE. as_integer Boolean: Return as integer? Default: as_integer = FALSE. Details what_month returns the month of when or Sys.Date() (as a name or number). See Also what_week() function to obtain weeks; what_date() function to obtain dates; cur_time() function to print the current time; cur_date() function to print the current date; now() function of the lubridate package; Sys.time() function of base R.
Other date and time functions: change_time(), change_tz(), cur_date(), cur_time(), days_in_month(), diff_dates(), diff_times(), diff_tz(), is_leap_year(), what_date(), what_time(), what_wday(), what_week(), what_year(), zodiac() Examples what_month() what_month(abbr = TRUE) what_month(as_integer = TRUE) # with date vector (as characters): ds <- c("2020-01-01", "2020-02-29", "2020-12-24", "2020-12-31") what_month(when = ds) what_month(when = ds, abbr = TRUE, as_integer = FALSE) what_month(when = ds, abbr = TRUE, as_integer = TRUE) # with time vector (strings of POSIXct times): ts <- c("2020-02-29 10:11:12 CET", "2020-12-31 23:59:59") what_month(ts) what_time What time is it? Description what_time provides a satisficing version of Sys.time() that is sufficient for most purposes. Usage what_time(when = NA, seconds = FALSE, as_string = TRUE, sep = ":", tz = "") Arguments when Time (as a scalar or vector). Default: when = NA. Returning Sys.time(), if when = NA. seconds Boolean: Show time with seconds? Default: seconds = FALSE. as_string Boolean: Return as character string? Default: as_string = TRUE. If as_string = FALSE, a "POSIXct" object is returned. sep Character: Separator to use. Default: sep = ":". tz Time zone. Default: tz = "" (i.e., current system time zone, see Sys.timezone()). Use tz = "UTC" for Coordinated Universal Time. Details By default, what_time prints a simple version of when or Sys.time() as a character string (in "HH:MM" or "HH:MM:SS" format, depending on seconds), using current default system settings. If as_string = FALSE, a "POSIXct" (calendar time) object is returned. The tz argument allows specifying time zones (see Sys.timezone() for current setting and OlsonNames() for options). However, tz is merely used to represent the times provided to the when argument. Thus, there currently is no active conversion of times into other time zones (see the now function of lubridate package). Value A character string or object of class "POSIXct".
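The formatting behavior described above can be sketched in base R; time_string below is a hypothetical helper for illustration, not the actual what_time implementation:

```r
# Hedged base-R sketch of a "satisficing" time string
# (time_string is a made-up name, not ds4psy's code):
time_string <- function(when = Sys.time(), seconds = FALSE, sep = ":") {
  fmt <- if (seconds) "%H:%M:%S" else "%H:%M"   # drop seconds by default
  out <- format(when, fmt)                       # uses the tzone attribute of when
  if (sep != ":") out <- gsub(":", sep, out, fixed = TRUE)
  out
}

t0 <- as.POSIXct("2020-02-29 10:11:12", tz = "UTC")
time_string(t0)                             # "10:11"
time_string(t0, seconds = TRUE, sep = ".")  # "10.11.12"
```

As in what_time, the sketch only formats the time it is given; it performs no time-zone conversion.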
See Also cur_time() function to print the current time; cur_date() function to print the current date; now() function of the lubridate package; Sys.time() function of base R. Other date and time functions: change_time(), change_tz(), cur_date(), cur_time(), days_in_month(), diff_dates(), diff_times(), diff_tz(), is_leap_year(), what_date(), what_month(), what_wday(), what_week(), what_year(), zodiac() Examples what_time() # with vector (of "POSIXct" objects): tm <- c("2020-02-29 01:02:03", "2020-12-31 14:15:16") what_time(tm) # with time zone: ts <- ISOdate(2020, 12, 24, c(0, 12)) # midnight and midday UTC t1 <- what_time(when = ts, tz = "Pacific/Honolulu") t1 # time display changed, due to tz # return "POSIXct" object(s): # Same time in different tz: t2 <- what_time(as.POSIXct("2020-02-29 10:00:00"), as_string = FALSE, tz = "Pacific/Honolulu") format(t2, "%F %T %Z (UTC %z)") # from string: t3 <- what_time("2020-02-29 10:00:00", as_string = FALSE, tz = "Pacific/Honolulu") format(t3, "%F %T %Z (UTC %z)") what_wday What day of the week is it? Description what_wday provides a satisficing function to determine the day of the week corresponding to a given date. Usage what_wday(when = Sys.Date(), abbr = FALSE) Arguments when Date (as a scalar or vector). Default: when = Sys.Date(). Aiming to convert when into "Date" if a different object class is provided. abbr Boolean: Return abbreviated? Default: abbr = FALSE. Details what_wday returns the name of the weekday of when or of Sys.Date() (as a character string). See Also what_date() function to obtain dates; what_time() function to obtain times; cur_time() function to print the current time; cur_date() function to print the current date; now() function of the lubridate package; Sys.time() function of base R.
Other date and time functions: change_time(), change_tz(), cur_date(), cur_time(), days_in_month(), diff_dates(), diff_times(), diff_tz(), is_leap_year(), what_date(), what_month(), what_time(), what_week(), what_year(), zodiac() Examples what_wday() what_wday(abbr = TRUE) what_wday(Sys.Date() + -1:1) # Date (as vector) what_wday(Sys.time()) # POSIXct what_wday("2020-02-29") # string (of valid date) what_wday(20200229) # number (of valid date) # date vector (as characters): ds <- c("2020-01-01", "2020-02-29", "2020-12-24", "2020-12-31") what_wday(when = ds) what_wday(when = ds, abbr = TRUE) # time vector (strings of POSIXct times): ts <- c("1969-07-13 13:53 CET", "2020-12-31 23:59:59") what_wday(ts) # fame data: greta_dob <- as.Date(fame[grep(fame$name, pattern = "Greta") , ]$DOB, "%B %d, %Y") what_wday(greta_dob) # Friday, of course. what_week What week is it? Description what_week provides a satisficing function to determine the week corresponding to a given date. Usage what_week(when = Sys.Date(), unit = "year", as_integer = FALSE) Arguments when Date (as a scalar or vector). Default: when = Sys.Date(). Using as.Date(when) to convert strings into dates if a different when is provided. unit Character: Unit of week? Possible values are "month", "year". Default: unit = "year" (for week within year). as_integer Boolean: Return as integer? Default: as_integer = FALSE. Details what_week returns the week of when or Sys.Date() (as a name or number). See Also what_wday() function to obtain (week)days; what_date() function to obtain dates; cur_time() function to print the current time; cur_date() function to print the current date; now() function of the lubridate package; Sys.time() function of base R.
Other date and time functions: change_time(), change_tz(), cur_date(), cur_time(), days_in_month(), diff_dates(), diff_times(), diff_tz(), is_leap_year(), what_date(), what_month(), what_time(), what_wday(), what_year(), zodiac() Examples what_week() what_week(as_integer = TRUE) # Other dates/times: d1 <- as.Date("2020-12-24") what_week(when = d1, unit = "year") what_week(when = d1, unit = "month") what_week(Sys.time()) # with POSIXct time # with date vector (as characters): ds <- c("2020-01-01", "2020-02-29", "2020-12-24", "2020-12-31") what_week(when = ds) what_week(when = ds, unit = "month", as_integer = TRUE) what_week(when = ds, unit = "year", as_integer = TRUE) # with time vector (strings of POSIXct times): ts <- c("2020-12-25 10:11:12 CET", "2020-12-31 23:59:59") what_week(ts) what_year What year is it? Description what_year provides a satisficing function to determine the year corresponding to a given date. Usage what_year(when = Sys.Date(), abbr = FALSE, as_integer = FALSE) Arguments when Date (as a scalar or vector). Default: when = Sys.Date(). Using as.Date(when) to convert strings into dates, and Sys.Date(), if when = NA. abbr Boolean: Return abbreviated? Default: abbr = FALSE. as_integer Boolean: Return as integer? Default: as_integer = FALSE. Details what_year returns the year of when or Sys.Date() (as a name or number). See Also what_week() function to obtain weeks; what_month() function to obtain months; cur_time() function to print the current time; cur_date() function to print the current date; now() function of the lubridate package; Sys.time() function of base R.
Other date and time functions: change_time(), change_tz(), cur_date(), cur_time(), days_in_month(), diff_dates(), diff_times(), diff_tz(), is_leap_year(), what_date(), what_month(), what_time(), what_wday(), what_week(), zodiac() Examples what_year() what_year(abbr = TRUE) what_year(as_integer = TRUE) # with date vectors (as characters): ds <- c("2020-01-01", "2020-02-29", "2020-12-24", "2020-12-31") what_year(when = ds) what_year(when = ds, abbr = TRUE, as_integer = FALSE) what_year(when = ds, abbr = TRUE, as_integer = TRUE) # with time vector (strings of POSIXct times): ts <- c("2020-02-29 10:11:12 CET", "2020-12-31 23:59:59") what_year(ts) words_to_text Paste or collapse words x into a text. Description words_to_text pastes or collapses a character string x into a single text string. Usage words_to_text(x, collapse = " ") Arguments x A string of text (required), typically a character vector. collapse A character string to separate the elements of x in the resulting text. Default: collapse = " ". Details words_to_text is essentially identical to collapse_chars. Internally, both functions are wrappers around paste with a collapse argument. Value A text (as a collapsed character vector). See Also text_to_words for splitting a text into its words; text_to_sentences for splitting text into a vector of sentences; text_to_chars for splitting text into a vector of characters; count_words for counting the frequency of words; collapse_chars for collapsing character vectors; strsplit for splitting strings. 
Other text objects and functions: Umlaut, capitalize(), caseflip(), cclass, chars_to_text(), collapse_chars(), count_chars_words(), count_chars(), count_words(), invert_rules(), l33t_rul35, map_text_chars(), map_text_coord(), map_text_regex(), metachar, read_ascii(), text_to_chars(), text_to_sentences(), text_to_words(), transl33t()

Examples

s <- c("Hello world!", "A 1st sentence.", "A 2nd sentence.", "The end.")
words_to_text(s)
cat(words_to_text(s, collapse = "\n"))

zodiac                   Get zodiac (corresponding to date x).

Description

zodiac provides the tropical zodiac sign or symbol for given date(s) x.

Usage

zodiac(
  x,
  out = "en",
  zodiac_swap_mmdd = c(120, 219, 321, 421, 521, 621, 723, 823, 923, 1023, 1123, 1222)
)

Arguments

x          Date (as a scalar or vector, required). If x is not a date (of class "Date"), the function tries to coerce x into a "Date".
out        Output format (as character). Available output formats are: English/Latin (out = "en", by default), German/Deutsch (out = "de"), HTML (out = "html"), or Unicode (out = "Unicode") symbols.
zodiac_swap_mmdd   Monthly dates on which the 12 zodiac signs switch (in mmdd format, ordered chronologically within a calendar year). Default: zodiac_swap_mmdd = c(0120, 0219, 0321, 0421, 0521, 0621, 0723, 0823, 0923, 1023, 1123, 1222).

Details

zodiac is flexible by providing different output formats (in Latin/English, German, or Unicode/HTML, see out) and by allowing users to adjust the calendar dates on which a new zodiac sign is assigned (via zodiac_swap_mmdd).

Value

Zodiac label or symbol (as a factor).

Source

See https://en.wikipedia.org/wiki/Zodiac or https://de.wikipedia.org/wiki/Tierkreiszeichen for alternative date ranges.

See Also

Zodiac() function of the DescTools package.
Other date and time functions: change_time(), change_tz(), cur_date(), cur_time(), days_in_month(), diff_dates(), diff_times(), diff_tz(), is_leap_year(), what_date(), what_month(), what_time(), what_wday(), what_week(), what_year()

Examples

zodiac(Sys.Date())

# Works with vectors:
dt <- sample_date(size = 10)
zodiac(dt)
levels(zodiac(dt))

# Alternative outputs:
zodiac(dt, out = "de")       # German/deutsch
zodiac(dt, out = "Unicode")  # Unicode
zodiac(dt, out = "HTML")     # HTML

# Alternative date breaks:
zodiac("2000-08-23")  # 0823 is "Virgo" by default
zodiac("2000-08-23",  # change to 0824 (i.e., August 24):
       zodiac_swap_mmdd = c(0120, 0219, 0321, 0421, 0521, 0621,
                            0723, 0824, 0923, 1023, 1123, 1222))
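The breakpoint mechanism behind zodiac_swap_mmdd can be illustrated with a short sketch. Note that this is not the package's implementation: the function name mmdd_to_sign and the English label vector are assumptions for illustration; only the default mmdd breakpoints come from the documentation above.

```r
# Hedged sketch of the zodiac_swap_mmdd logic (not the ds4psy source code):
# map each date's mmdd value to the sign whose start date it has passed.
mmdd_to_sign <- function(dates,
                         swap_mmdd = c(120, 219, 321, 421, 521, 621,
                                       723, 823, 923, 1023, 1123, 1222),
                         labels = c("Capricornus", "Aquarius", "Pisces", "Aries",
                                    "Taurus", "Gemini", "Cancer", "Leo",
                                    "Virgo", "Libra", "Scorpius", "Sagittarius",
                                    "Capricornus")) {
  dates <- as.Date(dates)
  mmdd <- as.integer(format(dates, "%m%d"))
  # findInterval() returns 0 for mmdd values before the first breakpoint
  # (Jan 01-19), so "+ 1" maps those back to the first label (Capricornus):
  idx <- findInterval(mmdd, swap_mmdd) + 1
  factor(labels[idx], levels = unique(labels))
}

mmdd_to_sign("2000-08-23")  # "Virgo": 0823 is the default Virgo start date
```

Because findInterval() treats breakpoints as left-closed, a date falling exactly on a swap date (such as 0823) already belongs to the new sign, matching the "0823 is Virgo" behavior shown in the Examples.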
Package ‘groupdata2’                                      June 18, 2023

Title Creating Groups from Data
Version 2.0.3
Description Methods for dividing data into groups. Create balanced partitions and cross-validation folds. Perform time series windowing and general grouping and splitting of data. Balance existing groups with up- and downsampling or collapse them to fewer groups.
Depends R (>= 3.5)
License MIT + file LICENSE
URL https://github.com/ludvigolsen/groupdata2
BugReports https://github.com/ludvigolsen/groupdata2/issues
Encoding UTF-8
Imports checkmate (>= 2.0.0), dplyr (>= 0.8.4), numbers (>= 0.7-5), lifecycle, plyr (>= 1.8.5), purrr, rearrr (>= 0.3.0), rlang (>= 0.4.4), stats, tibble (>= 2.1.3), tidyr, utils
RoxygenNote 7.2.3
Suggests broom, covr, ggplot2, knitr, lmerTest, rmarkdown, testthat, xpectr (>= 0.4.1)
RdMacros lifecycle
Roxygen list(markdown = TRUE)
VignetteBuilder knitr

R topics documented:

all_groups_identical
balance
collapse_groups
collapse_groups_by
differs_from_previous
downsample
find_missing_starts
find_starts
fold
group
groupdata2
group_factor
partition
ranked_balances
splt
summarize_balances
summarize_group_cols
upsample
%primes%
%staircase%

all_groups_identical     Test if two grouping factors contain the same groups

Description

[Maturing]
Checks whether two grouping factors contain the same groups, looking only at the group members, allowing for different group names / identifiers.

Usage

all_groups_identical(x, y)

Arguments

x, y   Two grouping factors (vectors/factors with group identifiers) to compare.
       N.B. Both are converted to character vectors.

Details

Both factors are sorted by `x`. A grouping factor is created with new groups starting at the values in `y` which differ from the previous row (i.e. group() with method = "l_starts" and n = "auto"). A similar grouping factor is created for `x`, to have group identifiers range from 1 to the number of groups.
The two generated grouping factors are tested for equality.

Value

Whether all groups in `x` are the same in `y`, memberwise. (logical)

Author(s)

<NAME>, <<EMAIL>>

See Also

Other grouping functions: collapse_groups_by, collapse_groups(), fold(), group_factor(), group(), partition(), splt()

Examples

# Attach groupdata2
library(groupdata2)

# Same groups, different identifiers
x1 <- c(1, 1, 2, 2, 3, 3)
x2 <- c(2, 2, 1, 1, 4, 4)
all_groups_identical(x1, x2)  # TRUE

# Same groups, different identifier types
x1 <- c(1, 1, 2, 2, 3, 3)
x2 <- c("a", "a", "b", "b", "c", "c")
all_groups_identical(x1, x2)  # TRUE

# Not same groups
# Note that all groups must be the same to return TRUE
x1 <- c(1, 1, 2, 2, 3, 3)
x2 <- c(1, 2, 2, 3, 3, 3)
all_groups_identical(x1, x2)  # FALSE

# Different number of groups
x1 <- c(1, 1, 2, 2, 3, 3)
x2 <- c(1, 1, 1, 2, 2, 2)
all_groups_identical(x1, x2)  # FALSE

balance                  Balance groups by up- and downsampling

Description

[Maturing]
Uses up- and/or downsampling to fix the group sizes to the min, max, mean, or median group size or to a specific number of rows. Has a range of methods for balancing on ID level.

Usage

balance(
  data,
  size,
  cat_col,
  id_col = NULL,
  id_method = "n_ids",
  mark_new_rows = FALSE,
  new_rows_col_name = ".new_row"
)

Arguments

data   data.frame. Can be grouped, in which case the function is applied group-wise.
size   Size to fix group sizes to. Can be a specific number, given as a whole number, or one of the following strings: "min", "max", "mean", "median".
       number: Fix each group to have the size of the specified number of rows. Uses downsampling for groups with too many rows and upsampling for groups with too few rows.
       min: Fix each group to have the size of the smallest group in the dataset. Uses downsampling on all groups that have too many rows.
       max: Fix each group to have the size of the largest group in the dataset. Uses upsampling on all groups that have too few rows.
       mean: Fix each group to have the mean group size in the dataset.
The mean is rounded. Uses downsampling for groups with too many rows and upsampling for groups with too few rows.
       median: Fix each group to have the median group size in the dataset. The median is rounded. Uses downsampling for groups with too many rows and upsampling for groups with too few rows.
cat_col   Name of categorical variable to balance by. (Character)
id_col    Name of factor with IDs. (Character)
          IDs are considered entities, e.g. allowing us to add or remove all rows for an ID. How this is used is up to the `id_method`.
          E.g. if we have measured a participant multiple times and want to make sure that we keep all these measurements, then we would either remove/add all measurements for the participant or leave in all measurements for the participant.
          N.B. When `data` is a grouped data.frame (see dplyr::group_by()), IDs that appear in multiple groupings are considered separate entities within those groupings.
id_method   Method for balancing the IDs. (Character)
            "n_ids", "n_rows_c", "distributed", or "nested".
            n_ids (default): Balances on ID level only. It makes sure there are the same number of IDs for each category. This might lead to a different number of rows between categories.
            n_rows_c: Attempts to level the number of rows per category, while only removing/adding entire IDs. This is done in 2 steps:
            1. If a category needs to add all its rows one or more times, the data is repeated.
            2. Iteratively, the ID with the number of rows closest to the lacking/excessive number of rows is added/removed. This happens until adding/removing the closest ID would lead to a size further from the target size than the current size. If multiple IDs are closest, one is randomly sampled.
            distributed: Distributes the lacking/excess rows equally between the IDs. If the number to distribute can not be equally divided, some IDs will have 1 row more/less than the others.
            nested: Calls balance() on each category with IDs as cat_col. I.e.
if size is "min", IDs will have the size of the smallest ID in their category.
mark_new_rows   Add column with 1s for added rows, and 0s for original rows. (Logical)
new_rows_col_name   Name of column marking new rows. Defaults to ".new_row".

Details

Without `id_col`: Upsampling is done with replacement for added rows, while the original data remains intact. Downsampling is done without replacement, meaning that rows are not duplicated but only removed.
With `id_col`: See `id_method` description.

Value

data.frame with added and/or deleted rows. Ordered by potential grouping variables, `cat_col` and (potentially) `id_col`.

Author(s)

<NAME>, <<EMAIL>>

See Also

Other sampling functions: downsample(), upsample()

Examples

# Attach packages
library(groupdata2)

# Create data frame
df <- data.frame(
  "participant" = factor(c(1, 1, 2, 3, 3, 3, 3, 4, 4, 5, 5, 5, 5)),
  "diagnosis" = factor(c(0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0)),
  "trial" = c(1, 2, 1, 1, 2, 3, 4, 1, 2, 1, 2, 3, 4),
  "score" = sample(c(1:100), 13)
)

# Using balance() with specific number of rows
balance(df, 3, cat_col = "diagnosis")

# Using balance() with min
balance(df, "min", cat_col = "diagnosis")

# Using balance() with max
balance(df, "max", cat_col = "diagnosis")

# Using balance() with id_method "n_ids"
# With column specifying added rows
balance(df, "max",
  cat_col = "diagnosis",
  id_col = "participant",
  id_method = "n_ids",
  mark_new_rows = TRUE
)

# Using balance() with id_method "n_rows_c"
# With column specifying added rows
balance(df, "max",
  cat_col = "diagnosis",
  id_col = "participant",
  id_method = "n_rows_c",
  mark_new_rows = TRUE
)

# Using balance() with id_method "distributed"
# With column specifying added rows
balance(df, "max",
  cat_col = "diagnosis",
  id_col = "participant",
  id_method = "distributed",
  mark_new_rows = TRUE
)

# Using balance() with id_method "nested"
# With column specifying added rows
balance(df, "max",
  cat_col = "diagnosis",
  id_col = "participant",
  id_method = "nested",
  mark_new_rows =
TRUE
)

collapse_groups          Collapse groups with categorical, numerical, ID, and size balancing

Description

[Experimental]
Collapses a set of groups into a smaller set of groups.
Attempts to balance the new groups by specified numerical columns, categorical columns, level counts in ID columns, and/or the number of rows (size).
Note: The more of these you balance at a time, the less balanced each of them may become. While, on average, the balancing works better than without, this is not guaranteed on every run. Enabling `auto_tune` can yield a much better overall balance than without in most contexts. This generates a larger set of group columns using all combinations of the balancing columns and selects the most balanced group column(s). This is slower, and we recommend enabling parallelization (see `parallel`).
While this balancing algorithm will not be optimal in all cases, it allows balancing a large number of columns at once. Especially with auto-tuning enabled, this can be very powerful.
Tip: Check the balances of the new groups with summarize_balances() and ranked_balances().
Note: The categorical and ID balancing algorithms are different to those in fold() and partition().

Usage

collapse_groups(
  data,
  n,
  group_cols,
  cat_cols = NULL,
  cat_levels = NULL,
  num_cols = NULL,
  id_cols = NULL,
  balance_size = TRUE,
  auto_tune = FALSE,
  weights = NULL,
  method = "balance",
  group_aggregation_fn = mean,
  num_new_group_cols = 1,
  unique_new_group_cols_only = TRUE,
  max_iters = 5,
  extreme_pairing_levels = 1,
  combine_method = "avg_standardized",
  col_name = ".coll_groups",
  parallel = FALSE,
  verbose = TRUE
)

Arguments

data   data.frame. Can be grouped, in which case the function is applied group-wise.
n      Number of new groups.
       When `num_new_group_cols` > 1, `n` can also be a vector with one `n` per new group column. This allows trying multiple `n` settings at a time. Note that the generated group columns are not guaranteed to be in the order of `n`.
group_cols   Names of factors in `data` for identifying the existing groups that should be collapsed.
             Multiple names are treated as in dplyr::group_by() (i.e., a hierarchy of groups), where each leaf group within each parent group is considered a unique group to be collapsed. Parent groups are not considered during collapsing, which is why leaf groups from different parent groups can be collapsed together.
             Note: Do not confuse these group columns with potential columns that `data` is grouped by. `group_cols` identifies the groups to be collapsed. When `data` is grouped with dplyr::group_by(), the function is applied separately to each of those subsets.
cat_cols     Names of categorical columns to balance the average frequency of one or more levels of.
cat_levels   Names of the levels in the `cat_cols` columns to balance the average frequencies of. When `NULL` (default), all levels are balanced. Can be weights indicating the balancing importance of each level (within each column). The weights are automatically scaled to sum to 1. Can be ".minority" or ".majority", in which case the minority/majority level is found and used.
             When `cat_cols` has a single column name: Either a vector with level names or a named numeric vector with weights. E.g. c("dog", "pidgeon", "mouse") or c("dog" = 5, "pidgeon" = 1, "mouse" = 3).
             When `cat_cols` has multiple column names: A named list with vectors for each column name in `cat_cols`. When not providing a vector for a `cat_cols` column, all levels are balanced in that column. E.g. list("col1" = c("dog" = 5, "pidgeon" = 1, "mouse" = 3), "col2" = c("hydrated", "dehydrated")).
num_cols     Names of numerical columns to balance between groups.
id_cols      Names of factor columns with IDs to balance the counts of between groups. E.g. useful to get a similar number of participants in each group.
balance_size Whether to balance the size of the collapsed groups.
(logical)
auto_tune   Whether to create a larger set of collapsed group columns from all combinations of the balancing dimensions and select the overall most balanced group column(s).
            This tends to create much more balanced collapsed group columns.
            Can be slow, which is why we recommend enabling parallelization (see `parallel`).
weights     Named vector with balancing importance weights for each of the balancing columns. Besides the columns in `cat_cols`, `num_cols`, and `id_cols`, the size balancing weight can be given as "size". The weights are automatically scaled to sum to 1. Dimensions that are not given a weight are automatically given the weight 1.
            E.g. c("size" = 1, "cat" = 1, "num1" = 4, "num2" = 7, "id" = 2).
method      "balance", "ascending", or "descending":
            After calculating a combined balancing column from each of the balancing columns (see Details >> Balancing columns):
            • "balance" balances the combined balancing column between the groups.
            • "ascending" orders the combined balancing column and groups from the lowest to highest value.
            • "descending" orders the combined balancing column and groups from the highest to lowest value.
group_aggregation_fn   Function for aggregating values in the `num_cols` columns for each group in `group_cols`.
            Default is mean(), where the average value(s) are balanced across the new groups.
            When using sum(), the groups will have similar sums across the new groups.
            N.B. Only used when `num_cols` is specified.
num_new_group_cols   Number of group columns to create.
            When `num_new_group_cols` > 1, columns are named with a combination of `col_name` and "_1", "_2", etc. E.g. ".coll_groups_1", ".coll_groups_2", ...
            N.B. When `unique_new_group_cols_only` is `TRUE`, we may end up with fewer columns than specified, see `max_iters`.
unique_new_group_cols_only   Whether to only return unique new group columns.
            As the number of column comparisons can be quite time consuming, we recommend enabling parallelization. See `parallel`.
            N.B.
We can end up with fewer columns than specified in `num_new_group_cols`, see `max_iters`.
            N.B. Only used when `num_new_group_cols` > 1.
max_iters   Maximum number of attempts at reaching `num_new_group_cols` unique new group columns.
            When only keeping unique new group columns, we risk having fewer columns than expected. Hence, we repeatedly create the missing columns and remove those that are not unique. This is done until we have `num_new_group_cols` unique group columns or we have attempted `max_iters` times.
            In some cases, it is not possible to create `num_new_group_cols` unique combinations of the dataset. `max_iters` specifies when to stop trying. Note that we can end up with fewer columns than specified in `num_new_group_cols`.
            N.B. Only used when `num_new_group_cols` > 1.
extreme_pairing_levels   How many levels of extreme pairing to do when balancing the groups by the combined balancing column (see Details).
            Extreme pairing: Rows/pairs are ordered as smallest, largest, second smallest, second largest, etc. If extreme_pairing_levels > 1, this is done "recursively" on the extreme pairs.
            N.B. Larger values work best with large datasets. If set too high, the result might not be stochastic. Always check if an increase actually makes the groups more balanced.
combine_method   Method to combine the balancing columns by. One of "avg_standardized" or "avg_min_max_scaled".
            For each balancing column (all columns in num_cols, cat_cols, and id_cols, plus size), we calculate a normalized, numeric group summary column, which indicates the "size" of each group in that dimension. These are then combined to a single combined balancing column.
            The three steps are:
            1. Calculate a numeric representation of the balance for each column. E.g. the number of unique levels within each group of an ID column (see Details > Balancing columns for more on this).
            2.
Normalize each column separately with standardization ("avg_standardized"; default) or MinMax scaling to the [0, 1] range ("avg_min_max_scaled").
            3. Average the columns rowwise to get a single column with one value per group. The averaging is weighted by `weights`, which is useful when one of the dimensions is more important to get a good balance of.
            `combine_method` chooses whether to use standardization or MinMax scaling in step 2.
col_name    Name of the new group column. When creating multiple new group columns (`num_new_group_cols` > 1), this is the prefix for the names, which will be suffixed with an underscore and a number (_1, _2, _3, etc.).
parallel    Whether to parallelize the group column comparisons when `unique_new_group_cols_only` is `TRUE`.
            Especially highly recommended when `auto_tune` is enabled.
            Requires a registered parallel backend. Like doParallel::registerDoParallel.
verbose     Whether to print information about the process. May make the function slightly slower.
            N.B. Currently only used during auto-tuning.

Details

The goal of collapse_groups() is to combine existing groups to a lower number of groups while (optionally) balancing one or more numeric, categorical and/or ID columns, along with the group size.
For each of these columns (and size), we calculate a normalized, numeric "balancing column" that, when balanced between the groups, leads to its original column being balanced as well.
To balance multiple columns at once, we combine their balancing columns with weighted averaging (see `combine_method` and `weights`) to a single combined balancing column.
Finally, we create groups where this combined balancing column is balanced between the groups, using the numerical balancing in fold().

Auto-tuning:
This strategy is not guaranteed to produce balanced groups in all contexts, e.g. when the balancing columns cancel out.
To increase the probability of balanced groups, we can produce multiple group columns with all combinations of the balancing columns and select the overall most balanced group column(s). We refer to this as auto-tuning (see `auto_tune`).
We find the overall most balanced group column by ranking the across-group standard deviations for each of the balancing columns, as found with summarize_balances().

Example of finding the overall most balanced group column(s):
Given a group column with the following average age per group: `c(16, 18, 25, 21)`, the standard deviation hereof (3.92) is a measure of how balanced the age column is. Another group column can thus have a lower/higher standard deviation and be considered more/less balanced.
We find the rankings of these standard deviations for all the balancing columns and average them (again weighted by `weights`). We select the group column(s) with the, on average, highest rank (i.e. lowest standard deviations).

Checking balances:
We highly recommend using summarize_balances() and ranked_balances() to check how balanced the created groups are on the various dimensions. When applying ranked_balances() to the output of summarize_balances(), we get a data.frame with the standard deviations for each balancing dimension (lower means more balanced), ordered by the average rank (see Examples).

Balancing columns:
The following describes the creation of the balancing columns for each of the supported column types:

cat_cols: For each column in `cat_cols`:
• Count each level within each group. This creates a data.frame with one count column per level, with one row per group.
• Standardize the count columns.
• Average the standardized counts rowwise to create one combined column representing the balance of the levels for each group. When cat_levels contains weights for each of the levels, we apply weighted averaging.

Example: Consider a factor column with the levels c("A", "B", "C").
We count each level per group, normalize the counts and combine them with weighted averaging:

Group    A    B    C   ->     nA     nB     nC   ->  Combined
1        5   57    1   |    0.24   0.55  -0.77   |     0.007
2        7   69    2   |    0.93   0.64  -0.77   |     0.267
3        2   34   14   |   -1.42   0.29   1.34   |      0.07
4        5    0    4   |    0.24  -1.48   0.19   |     -0.35
...    ...  ...  ...   |     ...    ...    ...   |       ...

id_cols: For each column in `id_cols`:
• Count the unique IDs (levels) within each group. (Note: The same ID can be counted in multiple groups.)

num_cols: For each column in `num_cols`:
• Aggregate the numeric columns by group using the `group_aggregation_fn`.

size:
• Count the number of rows per group.

Combining balancing columns:
• Apply standardization or MinMax scaling to each of the balancing columns (see `combine_method`).
• Perform weighted averaging to get a single balancing column (see `weights`).

Example: We apply standardization and perform weighted averaging:

Group  Size  Num    Cat   ID   ->  nSize   nNum   nCat    nID   ->  Combined
1        34  1.3  0.007    3   |   -0.33  -0.82   0.03  -0.46   |    -0.395
2        23  4.6  0.267    4   |   -1.12   0.34   1.04    0.0   |     0.065
3        56  7.2  0.07     7   |    1.27   1.26   0.28   1.39   |      1.05
4        41  1.4  -0.35    2   |    0.18  -0.79  -1.35  -0.93   |    -0.723
...     ...  ...    ...  ...   |     ...    ...    ...    ...   |       ...

Creating the groups:
Finally, we get to the group creation. There are three methods for creating groups based on the combined balancing column: "balance" (default), "ascending", and "descending".

method is "balance":
To create groups that are balanced by the combined balancing column, we use the numerical balancing in fold(). The following describes the numerical balancing in broad terms:
1. Rows are shuffled. Note that this will only affect rows with the same value in the combined balancing column.
2. Extreme pairing 1: Rows are ordered as smallest, largest, second smallest, second largest, etc. Each small+large pair gets an extreme-group identifier. (See rearrr::pair_extremes())
3.
If `extreme_pairing_levels` > 1: These extreme-group identifiers are reordered as smallest, largest, second smallest, second largest, etc., by the sum of the combined balancing column in the represented rows. These pairs (of pairs) get a new set of extreme-group identifiers, and the process is repeated `extreme_pairing_levels` - 2 times. Note that the extreme-group identifiers at the last level will represent 2 ^ `extreme_pairing_levels` rows, which is why you should be careful when choosing a larger setting.
4. The extreme-group identifiers from the last pairing are randomly divided into the final groups and these final identifiers are transferred to the original rows.
N.B. When doing extreme pairing of an unequal number of rows, the row with the smallest value is placed in a group by itself, and the order is instead: (smallest), (second smallest, largest), (third smallest, second largest), etc.
A similar approach with extreme triplets (i.e. smallest, closest to median, largest, second smallest, second closest to median, second largest, etc.) may also be utilized in some scenarios. (See rearrr::triplet_extremes())

Example: We order the data.frame by smallest "Num" value, largest "Num" value, second smallest, and so on. We could further (when `extreme_pairing_levels` > 1) find the sum of "Num" for each pair and perform extreme pairing on the pairs. Finally, we group the data.frame:

Group     Num   ->  Group     Num  Pair   ->  New group
1      -0.395   |       5   -1.23     1   |   3
2       0.065   |       3    1.05     1   |   3
3        1.05   |       4  -0.723     2   |   1
4      -0.723   |       2   0.065     2   |   1
5       -1.23   |       1  -0.395     3   |   2
6       -0.15   |       6   -0.15     3   |   2
...       ...   |     ...     ...   ...   |   ...

method is "ascending" or "descending":
These methods order the data by the combined balancing column and create groups such that the sums get increasingly larger (`ascending`) or smaller (`descending`). This will in turn lead to a pattern of increasing/decreasing sums in the balancing columns (e.g.
increasing/decreasing counts of the categorical levels, counts of IDs, number of rows and sums of numeric columns).

Value

data.frame with one or more new grouping factors.

Author(s)

<NAME>, <<EMAIL>>

See Also

fold() for creating balanced folds/groups.
partition() for creating balanced partitions.
Other grouping functions: all_groups_identical(), collapse_groups_by, fold(), group_factor(), group(), partition(), splt()

Examples

# Attach packages
library(groupdata2)
library(dplyr)

# Set seed
if (requireNamespace("xpectr", quietly = TRUE)){
  xpectr::set_test_seed(42)
}

# Create data frame
df <- data.frame(
  "participant" = factor(rep(1:20, 3)),
  "age" = rep(sample(c(1:100), 20), 3),
  "answer" = factor(sample(c("a", "b", "c", "d"), 60, replace = TRUE)),
  "score" = sample(c(1:100), 20 * 3)
)
df <- df %>% dplyr::arrange(participant)
df$session <- rep(c("1", "2", "3"), 20)

# Sample rows to get unequal sizes per participant
df <- dplyr::sample_n(df, size = 53)

# Create the initial groups (to be collapsed)
df <- fold(
  data = df,
  k = 8,
  method = "n_dist",
  id_col = "participant"
)

# Ungroup the data frame
# Otherwise `collapse_groups()` would be
# applied to each fold separately!
df <- dplyr::ungroup(df)

# NOTE: Make sure to check the examples with `auto_tune`
# in the end, as this is where the magic lies

# Collapse to 3 groups with size balancing
# Creates new `.coll_groups` column
df_coll <- collapse_groups(
  data = df,
  n = 3,
  group_cols = ".folds",
  balance_size = TRUE  # enabled by default
)

# Check balances
(coll_summary <- summarize_balances(
  data = df_coll,
  group_cols = ".coll_groups",
  cat_cols = 'answer',
  num_cols = c('score', 'age'),
  id_cols = 'participant'
))

# Get ranked balances
# NOTE: When we only have a single new group column
# we don't get ranks - but this is good to use
# when comparing multiple group columns!
# The scores are standard deviations across groups
ranked_balances(coll_summary)

# Collapse to 3 groups with size + *categorical* balancing
# We create 2 new `.coll_groups_1/2` columns
df_coll <- collapse_groups(
  data = df,
  n = 3,
  group_cols = ".folds",
  cat_cols = "answer",
  balance_size = TRUE,
  num_new_group_cols = 2
)

# Check balances
# To simplify the output, we only find the
# balance of the `answer` column
(coll_summary <- summarize_balances(
  data = df_coll,
  group_cols = paste0(".coll_groups_", 1:2),
  cat_cols = 'answer'
))

# Get ranked balances
# All scores are standard deviations across groups or (average) ranks
# Rows are ranked by most to least balanced
# (i.e. lowest average SD rank)
ranked_balances(coll_summary)

# Collapse to 3 groups with size + categorical + *numerical* balancing
# We create 2 new `.coll_groups_1/2` columns
df_coll <- collapse_groups(
  data = df,
  n = 3,
  group_cols = ".folds",
  cat_cols = "answer",
  num_cols = "score",
  balance_size = TRUE,
  num_new_group_cols = 2
)

# Check balances
(coll_summary <- summarize_balances(
  data = df_coll,
  group_cols = paste0(".coll_groups_", 1:2),
  cat_cols = 'answer',
  num_cols = 'score'
))

# Get ranked balances
# All scores are standard deviations across groups or (average) ranks
ranked_balances(coll_summary)

# Collapse to 3 groups with size and *ID* balancing
# We create 2 new `.coll_groups_1/2` columns
df_coll <- collapse_groups(
  data = df,
  n = 3,
  group_cols = ".folds",
  id_cols = "participant",
  balance_size = TRUE,
  num_new_group_cols = 2
)

# Check balances
# To simplify the output, we only find the
# balance of the `participant` column
(coll_summary <- summarize_balances(
  data = df_coll,
  group_cols = paste0(".coll_groups_", 1:2),
  id_cols = 'participant'
))

# Get ranked balances
# All scores are standard deviations across groups or (average) ranks
ranked_balances(coll_summary)

###################
#### Auto-tune ####

# As you might have seen, the balancing does not always
# perform as optimally as we might want or
# need. To get a better balance, we can enable `auto_tune`,
# which will create a larger set of collapsings
# and select the most balanced new group columns

# While it is not required, we recommend
# enabling parallelization

## Not run:
# Uncomment for parallelization
# library(doParallel)
# doParallel::registerDoParallel(7) # use 7 cores

# Collapse to 3 groups with lots of balancing
# We enable `auto_tune` to get a more balanced set of columns
# We create 10 new `.coll_groups_1/2/...` columns
df_coll <- collapse_groups(
  data = df,
  n = 3,
  group_cols = ".folds",
  cat_cols = "answer",
  num_cols = "score",
  id_cols = "participant",
  balance_size = TRUE,
  num_new_group_cols = 10,
  auto_tune = TRUE,
  parallel = FALSE  # Set to TRUE for parallelization!
)

# Check balances
# To simplify the output, we only find the
# balance of the `participant` column
(coll_summary <- summarize_balances(
  data = df_coll,
  group_cols = paste0(".coll_groups_", 1:10),
  cat_cols = "answer",
  num_cols = "score",
  id_cols = 'participant'
))

# Get ranked balances
# All scores are standard deviations across groups or (average) ranks
ranked_balances(coll_summary)

# Now we can choose the .coll_groups_* column(s)
# that we favor the balance of
# and move on with our lives!

## End(Not run)

collapse_groups_by       Collapse groups balanced by a single attribute

Description

[Experimental]
Collapses a set of groups into a smaller set of groups.
Balance the new groups by:
• The number of rows with collapse_groups_by_size()
• Numerical columns with collapse_groups_by_numeric()
• One or more levels of categorical columns with collapse_groups_by_levels()
• Level counts in ID columns with collapse_groups_by_ids()
• Any combination of these with collapse_groups()
These functions wrap collapse_groups() to provide a simpler interface. To balance more than one of the attributes at a time and/or create multiple new unique grouping columns at once, use collapse_groups() directly.
While, on average, the balancing works better than without, this is not guaranteed on every run. `auto_tune` (enabled by default) can yield a much better overall balance than without in most contexts. This generates a larger set of group columns using all combinations of the balancing columns and selects the most balanced group column(s). This is slower and can be sped up by enabling parallelization (see `parallel`).
Tip: When speed is more important than balancing, disable `auto_tune`.
Tip: Check the balances of the new groups with summarize_balances() and ranked_balances().
Note: The categorical and ID balancing algorithms are different to those in fold() and partition().

Usage

collapse_groups_by_size(
  data,
  n,
  group_cols,
  auto_tune = TRUE,
  method = "balance",
  col_name = ".coll_groups",
  parallel = FALSE,
  verbose = FALSE
)

collapse_groups_by_numeric(
  data,
  n,
  group_cols,
  num_cols,
  balance_size = FALSE,
  auto_tune = TRUE,
  method = "balance",
  group_aggregation_fn = mean,
  col_name = ".coll_groups",
  parallel = FALSE,
  verbose = FALSE
)

collapse_groups_by_levels(
  data,
  n,
  group_cols,
  cat_cols,
  cat_levels = NULL,
  balance_size = FALSE,
  auto_tune = TRUE,
  method = "balance",
  col_name = ".coll_groups",
  parallel = FALSE,
  verbose = FALSE
)

collapse_groups_by_ids(
  data,
  n,
  group_cols,
  id_cols,
  balance_size = FALSE,
  auto_tune = TRUE,
  method = "balance",
  col_name = ".coll_groups",
  parallel = FALSE,
  verbose = FALSE
)

Arguments

data         data.frame. Can be grouped, in which case the function is applied group-wise.
n            Number of new groups.
group_cols   Names of factors in `data` for identifying the existing groups that should be collapsed.
             Multiple names are treated as in dplyr::group_by() (i.e., a hierarchy of groups), where each leaf group within each parent group is considered a unique group to be collapsed. Parent groups are not considered during collapsing, which is why leaf groups from different parent groups can be collapsed together.
Note: Do not confuse these group columns with potential columns that `data` is grouped by. `group_cols` identifies the groups to be collapsed. When `data` is grouped with dplyr::group_by(), the function is applied separately to each of those subsets. auto_tune Whether to create a larger set of collapsed group columns from all combinations of the balancing dimensions and select the overall most balanced group column(s). This tends to create much more balanced collapsed group columns. Can be slow, which is why we recommend enabling parallelization (see `parallel`). method "balance", "ascending", or "descending". • "balance" balances the attribute between the groups. • "ascending" orders by the attribute and groups from the lowest to highest value. • "descending" orders by the attribute and groups from the highest to lowest value. col_name Name of the new group column. When creating multiple new group columns (`num_new_group_cols` > 1), this is the prefix for the names, which will be suffixed with an underscore and a number (_1, _2, _3, etc.). parallel Whether to parallelize the group column comparisons when `auto_tune` is enabled. Requires a registered parallel backend, e.g. via doParallel::registerDoParallel(). verbose Whether to print information about the process. May make the function slightly slower. N.B. Currently only used during auto-tuning. num_cols Names of numerical columns to balance between groups. balance_size Whether to balance the size of the collapsed groups. (logical) group_aggregation_fn Function for aggregating values in the `num_cols` columns for each group in `group_cols`. Default is mean(), where the average value(s) are balanced across the new groups. When using sum(), the group sums are balanced across the new groups. N.B. Only used when `num_cols` is specified. cat_cols Names of categorical columns to balance the average frequency of one or more levels of.
cat_levels Names of the levels in the `cat_cols` columns to balance the average frequencies of. When `NULL` (default), all levels are balanced. Can be weights indicating the balancing importance of each level (within each column). The weights are automatically scaled to sum to 1. Can be ".minority" or ".majority", in which case the minority/majority level is found and used. When `cat_cols` has a single column name: Either a vector with level names or a named numeric vector with weights: E.g. c("dog", "pidgeon", "mouse") or c("dog" = 5, "pidgeon" = 1, "mouse" = 3). When `cat_cols` has multiple column names: A named list with vectors for each column name in `cat_cols`. When not providing a vector for a `cat_cols` column, all levels are balanced in that column. E.g. list("col1" = c("dog" = 5, "pidgeon" = 1, "mouse" = 3), "col2" = c("hydrated", "dehydrated")). id_cols Names of factor columns with IDs to balance the counts of between groups. E.g. useful to get a similar number of participants in each group. Details See details in collapse_groups(). Value `data` with a new grouping factor column.
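As a brief sketch of the `cat_levels` weighting interface described above (hypothetical data; the weights are scaled to sum to 1 internally):

```r
library(groupdata2)

# Hypothetical data with an existing grouping to collapse
set.seed(1)
df <- data.frame(
  "answer" = factor(sample(c("a", "b", "c"), 60, replace = TRUE))
)
df <- fold(df, k = 8)      # creates the `.folds` column
df <- dplyr::ungroup(df)

# Collapse the 8 folds into 3 groups, weighting the balance of
# level "b" three times as heavily as level "a"
df_coll <- collapse_groups_by_levels(
  data = df,
  n = 3,
  group_cols = ".folds",
  cat_cols = "answer",
  cat_levels = c("b" = 3, "a" = 1)
)
```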
Author(s) <NAME>, <<EMAIL>> See Also Other grouping functions: all_groups_identical(), collapse_groups(), fold(), group_factor(), group(), partition(), splt() Examples # Attach packages library(groupdata2) library(dplyr) # Set seed if (requireNamespace("xpectr", quietly = TRUE)){ xpectr::set_test_seed(42) } # Create data frame df <- data.frame( "participant" = factor(rep(1:20, 3)), "age" = rep(sample(c(1:100), 20), 3), "answer" = factor(sample(c("a", "b", "c", "d"), 60, replace = TRUE)), "score" = sample(c(1:100), 20 * 3) ) df <- df %>% dplyr::arrange(participant) df$session <- rep(c("1", "2", "3"), 20) # Sample rows to get unequal sizes per participant df <- dplyr::sample_n(df, size = 53) # Create the initial groups (to be collapsed) df <- fold( data = df, k = 8, method = "n_dist", id_col = "participant" ) # Ungroup the data frame # Otherwise `collapse_groups*()` would be # applied to each fold separately! df <- dplyr::ungroup(df) # When `auto_tune` is enabled for larger datasets # we recommend enabling parallelization # This can be done with: # library(doParallel) # doParallel::registerDoParallel(7) # use 7 cores ## Not run: # Collapse to 3 groups with size balancing # Creates new `.coll_groups` column df_coll <- collapse_groups_by_size( data = df, n = 3, group_cols = ".folds" ) # Check balances (coll_summary <- summarize_balances( data = df_coll, group_cols = ".coll_groups" )) # Get ranked balances # This is most useful when having created multiple # new group columns with `collapse_groups()` # The scores are standard deviations across groups ranked_balances(coll_summary) # Collapse to 3 groups with *categorical* balancing df_coll <- collapse_groups_by_levels( data = df, n = 3, group_cols = ".folds", cat_cols = "answer" ) # Check balances (coll_summary <- summarize_balances( data = df_coll, group_cols = ".coll_groups", cat_cols = 'answer' )) # Collapse to 3 groups with *numerical* balancing # Also balance size to get similar sums # as well as means df_coll <- 
collapse_groups_by_numeric( data = df, n = 3, group_cols = ".folds", num_cols = "score", balance_size = TRUE ) # Check balances (coll_summary <- summarize_balances( data = df_coll, group_cols = ".coll_groups", num_cols = 'score' )) # Collapse to 3 groups with *ID* balancing # This should give us a similar number of IDs per group df_coll <- collapse_groups_by_ids( data = df, n = 3, group_cols = ".folds", id_cols = "participant" ) # Check balances (coll_summary <- summarize_balances( data = df_coll, group_cols = ".coll_groups", id_cols = 'participant' )) # Collapse to 3 groups with balancing of ALL attributes # We create 5 new grouping factors and compare them # The latter is in-general a good strategy even if you # only need a single collapsed grouping factor # as you can choose your preferred balances # based on the summary # NOTE: This is slow (up to a few minutes) # consider enabling parallelization df_coll <- collapse_groups( data = df, n = 3, num_new_group_cols = 5, group_cols = ".folds", cat_cols = "answer", num_cols = 'score', id_cols = "participant", auto_tune = TRUE # Disabled by default in `collapse_groups()` # parallel = TRUE # Add comma above and uncomment ) # Check balances (coll_summary <- summarize_balances( data = df_coll, group_cols = paste0(".coll_groups_", 1:5), cat_cols = "answer", num_cols = 'score', id_cols = 'participant' )) # Compare the new grouping columns # The lowest across-group standard deviation # is the most balanced ranked_balances(coll_summary) ## End(Not run) differs_from_previous Find values in a vector that differ from the previous value Description [Maturing] Finds values, or indices of values, that differ from the previous value by some threshold(s). Operates with both a positive and a negative threshold. Depending on `direction`, it checks if the difference to the previous value is: • greater than or equal to the positive threshold. • less than or equal to the negative threshold. 
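As a quick illustration of the threshold directions described above, here is a minimal sketch on a plain numeric vector (the function also accepts vectors directly, not just data.frames):

```r
library(groupdata2)

x <- c(1, 3, 6, 2, 2, 4)

# Values at least 2 greater than the previous value (3, 6 and 4)
differs_from_previous(x, threshold = 2, direction = "positive")

# Values at least 4 less than the previous value (the first 2)
differs_from_previous(x, threshold = 4, direction = "negative")

# Both at once, given as c(negative threshold, positive threshold)
differs_from_previous(x, threshold = c(-4, 2), direction = "both")
```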
Usage differs_from_previous( data, col = NULL, threshold = NULL, direction = "both", return_index = FALSE, include_first = FALSE, handle_na = "ignore", factor_conversion_warning = TRUE ) Arguments data data.frame or vector. N.B. If checking a factor, it is converted to a character vector. This means that factors can only be used when `threshold` is NULL. Conversion will generate a warning, which can be turned off by setting `factor_conversion_warning` to FALSE. N.B. If `data` is a grouped data.frame, the function is applied group-wise and the output is a list of vectors. The names are based on the group indices (see dplyr::group_indices()). col Name of column to find values that differ in. Used when `data` is a data.frame. (Character) threshold Threshold to compare the difference to the previous value against. NULL, numeric scalar or numeric vector with length 2. NULL: Checks if the value is different from the previous value. Ignores `direction`. N.B. Works for both numeric and character vectors. Numeric scalar: Positive number. Negative threshold is the negated number. N.B. Only works for numeric vectors. Numeric vector with length 2: Given as c(negative threshold, positive threshold). Negative threshold must be a negative number and positive threshold must be a positive number. N.B. Only works for numeric vectors. direction both, positive or negative. (character) both: Checks whether the difference to the previous value is • greater than or equal to the positive threshold. • less than or equal to the negative threshold. positive: Checks whether the difference to the previous value is • greater than or equal to the positive threshold. negative: Checks whether the difference to the previous value is • less than or equal to the negative threshold. return_index Return indices of values that differ. (Logical) include_first Whether to include the first element of the vector in the output. (Logical) handle_na How to handle NAs in the column.
"ignore": Removes the NAs before finding the differing values, ensuring that the first value after an NA will be correctly identified as new, if it differs from the value before the NA(s). "as_element": Treats all NAs as the string "NA". This means that `threshold` must be NULL when using this method. Numeric scalar: A numeric value to replace NAs with. factor_conversion_warning Whether to throw a warning when converting a factor to a character. (Logical) Value vector with either the differing values or the indices of the differing values. N.B. If `data` is a grouped data.frame, the output is a list of vectors with the differing values. The names are based on the group indices (see dplyr::group_indices()). Author(s) <NAME>, <<EMAIL>> See Also Other l_starts tools: find_missing_starts(), find_starts(), group_factor(), group() Examples # Attach packages library(groupdata2) # Create a data frame df <- data.frame( "a" = factor(c("a", "a", "b", "b", "c", "c")), "n" = c(1, 3, 6, 2, 2, 4) ) # Get differing values in column 'a' with no threshold. # This simply checks whether each value differs from the previous value. differs_from_previous(df, col = "a") # Get indices of differing values in column 'a' with no threshold. differs_from_previous(df, col = "a", return_index = TRUE) # Get values that are 2 or more greater than the previous value differs_from_previous(df, col = "n", threshold = 2, direction = "positive") # Get values that are 4 or more less than the previous value differs_from_previous(df, col = "n", threshold = 4, direction = "negative") # Get values that are either 2 or more greater than the previous value # or 4 or more less than the previous value differs_from_previous(df, col = "n", threshold = c(-4, 2), direction = "both") downsample Downsampling of rows in a data frame Description [Maturing] Uses random downsampling to fix the group sizes to the size of the smallest group in the data.frame. Wraps balance().
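To make the size-fixing concrete, a minimal sketch (hypothetical data; without an `id_col`, rows are removed at random until every category matches the smallest category's count):

```r
library(groupdata2)

df <- data.frame(
  "x" = 1:10,
  "diagnosis" = factor(c(rep("a", 6), rep("b", 3), rep("c", 1)))
)

table(df$diagnosis)   # a: 6, b: 3, c: 1
ds <- downsample(df, cat_col = "diagnosis")
table(ds$diagnosis)   # every category now has 1 row (the size of "c")
```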
Usage downsample(data, cat_col, id_col = NULL, id_method = "n_ids") Arguments data data.frame. Can be grouped, in which case the function is applied group-wise. cat_col Name of categorical variable to balance by. (Character) id_col Name of factor with IDs. (Character) IDs are considered entities, e.g. allowing us to add or remove all rows for an ID. How this is used is up to the `id_method`. E.g. if we have measured a participant multiple times and want to make sure that we keep all these measurements, we would either remove/add all measurements for the participant or leave them all in. N.B. When `data` is a grouped data.frame (see dplyr::group_by()), IDs that appear in multiple groupings are considered separate entities within those groupings. id_method Method for balancing the IDs. (Character) "n_ids", "n_rows_c", "distributed", or "nested". n_ids (default): Balances on ID level only. It makes sure that each category has the same number of IDs. This might lead to a different number of rows between categories. n_rows_c: Attempts to level the number of rows per category, while only removing/adding entire IDs. This is done in 2 steps: 1. If a category needs to add all its rows one or more times, the data is repeated. 2. Iteratively, the ID with the number of rows closest to the lacking/excessive number of rows is added/removed. This happens until adding/removing the closest ID would lead to a size further from the target size than the current size. If multiple IDs are closest, one is randomly sampled. distributed: Distributes the lacking/excess rows equally between the IDs. If the number to distribute cannot be divided equally, some IDs will have 1 row more/less than the others. nested: Calls balance() on each category with IDs as cat_col. I.e. if size is "min", IDs will have the size of the smallest ID in their category.
Details Without `id_col`: Downsampling is done without replacement, meaning that rows are not duplicated but only removed. With `id_col`: See `id_method` description. Value data.frame with some rows removed. Ordered by potential grouping variables, `cat_col` and (potentially) `id_col`. Author(s) <NAME>, <<EMAIL>> See Also Other sampling functions: balance(), upsample() Examples # Attach packages library(groupdata2) # Create data frame df <- data.frame( "participant" = factor(c(1, 1, 2, 3, 3, 3, 3, 4, 4, 5, 5, 5, 5)), "diagnosis" = factor(c(0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0)), "trial" = c(1, 2, 1, 1, 2, 3, 4, 1, 2, 1, 2, 3, 4), "score" = sample(c(1:100), 13) ) # Using downsample() downsample(df, cat_col = "diagnosis") # Using downsample() with id_method "n_ids" downsample(df, cat_col = "diagnosis", id_col = "participant", id_method = "n_ids" ) # Using downsample() with id_method "n_rows_c" downsample(df, cat_col = "diagnosis", id_col = "participant", id_method = "n_rows_c" ) # Using downsample() with id_method "distributed" downsample(df, cat_col = "diagnosis", id_col = "participant", id_method = "distributed" ) # Using downsample() with id_method "nested" downsample(df, cat_col = "diagnosis", id_col = "participant", id_method = "nested" ) find_missing_starts Find start positions that cannot be found in `data` Description [Maturing] Tells you which values and (optionally) skip-to-numbers are recursively removed when using the "l_starts" method with `remove_missing_starts` set to TRUE. Usage find_missing_starts(data, n, starts_col = NULL, return_skip_numbers = TRUE) Arguments data data.frame or vector. N.B. If `data` is a grouped data.frame, the function is applied group-wise and the output is a list of either vectors or lists. The names are based on the group indices (see dplyr::group_indices()). n List of starting positions.
Skip values by c(value, skip_to_number) where skip_to_number is the nth appearance of the value in the vector. See group_factor() for explanations and examples of using the "l_starts" method. starts_col Name of column with values to match when `data` is a data.frame. Pass 'index' to use row names. (Character) return_skip_numbers Return skip-to-numbers along with values (Logical). Value List of start values and skip-to-numbers or a vector with the start values. Returns NULL if no values were found. N.B. If `data` is a grouped data.frame, the function is applied group-wise and the output is a list of either vectors or lists. The names are based on the group indices (see dplyr::group_indices()). Author(s) <NAME>, <<EMAIL>> See Also Other l_starts tools: differs_from_previous(), find_starts(), group_factor(), group() Examples # Attach packages library(groupdata2) # Create a data frame df <- data.frame( "a" = c("a", "a", "b", "b", "c", "c"), stringsAsFactors = FALSE ) # Create list of starts starts <- c("a", "e", "b", "d", "c") # Find missing starts with skip_to numbers find_missing_starts(df, starts, starts_col = "a") # Find missing starts without skip_to numbers find_missing_starts(df, starts, starts_col = "a", return_skip_numbers = FALSE ) find_starts Find start positions of groups in data Description [Maturing] Finds values or indices of values that are not the same as the previous value. E.g. to use with the "l_starts" method. Wraps differs_from_previous(). Usage find_starts( data, col = NULL, return_index = FALSE, handle_na = "ignore", factor_conversion_warning = TRUE ) Arguments data data.frame or vector. N.B. If checking a factor, it is converted to a character vector. Conversion will generate a warning, which can be turned off by setting `factor_conversion_warning` to FALSE. N.B. If `data` is a grouped data.frame, the function is applied group-wise and the output is a list of vectors. The names are based on the group indices (see dplyr::group_indices()). 
col Name of column to find starts in. Used when `data` is a data.frame. (Character) return_index Whether to return indices of starts. (Logical) handle_na How to handle NAs in the column. "ignore": Removes the NAs before finding the differing values, ensuring that the first value after an NA will be correctly identified as new, if it differs from the value before the NA(s). "as_element": Treats all NAs as the string "NA". This means that `threshold` must be NULL when using this method. Numeric scalar: A numeric value to replace NAs with. factor_conversion_warning Whether to throw a warning when converting a factor to a character. (Logical) Value vector with either the start values or the indices of the start values. N.B. If `data` is a grouped data.frame, the output is a list of vectors. The names are based on the group indices (see dplyr::group_indices()). Author(s) <NAME>, <<EMAIL>> See Also Other l_starts tools: differs_from_previous(), find_missing_starts(), group_factor(), group() Examples # Attach packages library(groupdata2) # Create a data frame df <- data.frame( "a" = c("a", "a", "b", "b", "c", "c"), stringsAsFactors = FALSE ) # Get start values for new groups in column 'a' find_starts(df, col = "a") # Get indices of start values for new groups # in column 'a' find_starts(df, col = "a", return_index = TRUE ) ## Use found starts with l_starts method # Notice: This is equivalent to n = 'auto' # with l_starts method # Get start values for new groups in column 'a' starts <- find_starts(df, col = "a") # Use starts in group() with 'l_starts' method group(df, n = starts, method = "l_starts", starts_col = "a" ) # Similar but with indices instead of values # Get indices of start values for new groups # in column 'a' starts_ind <- find_starts(df, col = "a", return_index = TRUE ) # Use starts in group() with 'l_starts' method group(df, n = starts_ind, method = "l_starts", starts_col = "index" ) fold Create balanced folds for cross-validation Description [Stable] Divides data into
groups by a wide range of methods. Balances a given categorical variable and/or numerical variable between folds and keeps (if possible) all data points with a shared ID (e.g. participant_id) in the same fold. Can create multiple unique fold columns for repeated cross-validation. Usage fold( data, k = 5, cat_col = NULL, num_col = NULL, id_col = NULL, method = "n_dist", id_aggregation_fn = sum, extreme_pairing_levels = 1, num_fold_cols = 1, unique_fold_cols_only = TRUE, max_iters = 5, use_of_triplets = "fill", handle_existing_fold_cols = "keep_warn", parallel = FALSE ) Arguments data data.frame. Can be grouped, in which case the function is applied group-wise. k Depends on `method`. Number of folds (default), fold size, and more (see `method`). When `num_fold_cols` > 1, `k` can also be a vector with one `k` per fold column. This allows trying multiple `k` settings at a time. Note that the generated fold columns are not guaranteed to be in the order of `k`. Given as whole number or percentage (0 < `k` < 1). cat_col Name of categorical variable to balance between folds. E.g. when predicting a binary variable (a or b), we usually want both classes represented in every fold. N.B. If also passing an `id_col`, `cat_col` should be constant within each ID. num_col Name of numerical variable to balance between folds. N.B. When used with `id_col`, values for each ID are aggregated using `id_aggregation_fn` before being balanced. N.B. When passing `num_col`, the `method` parameter is ignored. id_col Name of factor with IDs. This will be used to keep all rows that share an ID in the same fold (if possible). E.g. if we have measured a participant multiple times and want to see the effect of time, we want to have all observations of this participant in the same fold. N.B. When `data` is a grouped data.frame (see dplyr::group_by()), IDs that appear in multiple groupings might end up in different folds in those groupings.
method "n_dist", "n_fill", "n_last", "n_rand", "greedy", or "staircase". Notice: examples are sizes of the generated groups based on a vector with 57 elements. n_dist (default): Divides the data into a specified number of groups and distributes excess data points across groups (e.g. 11, 11, 12, 11, 12). `k` is number of groups n_fill: Divides the data into a specified number of groups and fills up groups with excess data points from the beginning (e.g. 12, 12, 11, 11, 11). `k` is number of groups n_last: Divides the data into a specified number of groups. It finds the most equal group sizes possible, using all data points. Only the last group is able to differ in size (e.g. 11, 11, 11, 11, 13). `k` is number of groups n_rand: Divides the data into a specified number of groups. Excess data points are placed randomly in groups (only 1 per group) (e.g. 12, 11, 11, 11, 12). `k` is number of groups greedy: Divides up the data greedily given a specified group size (e.g. 10, 10, 10, 10, 10, 7). `k` is group size staircase: Uses step size to divide up the data. Group size increases with 1 step for every group, until there is no more data (e.g. 5, 10, 15, 20, 7). `k` is step size id_aggregation_fn Function for aggregating values in `num_col` for each ID, before balancing `num_col`. N.B. Only used when `num_col` and `id_col` are both specified. extreme_pairing_levels How many levels of extreme pairing to do when balancing folds by a numerical column (i.e. `num_col` is specified). Extreme pairing: Rows/pairs are ordered as smallest, largest, second smallest, second largest, etc. If extreme_pairing_levels > 1, this is done "recursively" on the extreme pairs. See `Details/num_col` for more. N.B. Larger values work best with large datasets. If set too high, the result might not be stochastic. Always check if an increase actually makes the folds more balanced. See example. num_fold_cols Number of fold columns to create. Useful for repeated cross-validation.
If num_fold_cols > 1, columns will be named ".folds_1", ".folds_2", etc. Otherwise simply ".folds". N.B. If `unique_fold_cols_only` is TRUE, we can end up with fewer columns than specified, see `max_iters`. N.B. If `data` has existing fold columns, see `handle_existing_fold_cols`. unique_fold_cols_only Check if fold columns are identical and keep only unique columns. As the number of column comparisons can be time-consuming, we can run this part in parallel. See `parallel`. N.B. We can end up with fewer columns than specified in `num_fold_cols`, see `max_iters`. N.B. Only used when `num_fold_cols` > 1 or `data` has existing fold columns. max_iters Maximum number of attempts at reaching `num_fold_cols` unique fold columns. When only keeping unique fold columns, we risk having fewer columns than expected. Hence, we repeatedly create the missing columns and remove those that are not unique. This is done until we have `num_fold_cols` unique fold columns or we have attempted `max_iters` times. In some cases, it is not possible to create `num_fold_cols` unique combinations of the dataset, e.g. when specifying `cat_col`, `id_col` and `num_col`. `max_iters` specifies when to stop trying. Note that we can end up with fewer columns than specified in `num_fold_cols`. N.B. Only used when `num_fold_cols` > 1. use_of_triplets "fill", "instead" or "never". When to use extreme triplet grouping in numerical balancing (when `num_col` is specified). fill (default): When extreme pairing cannot create enough unique fold columns, use extreme triplet grouping to create additional unique fold columns. instead: Use extreme triplet grouping instead of extreme pairing. For some datasets, grouping in triplets gives better balancing than grouping in pairs. This can be worth exploring when numerical balancing is important. Tip: Compare the balances with summarize_balances() and ranked_balances(). never: Never use extreme triplet grouping.
Extreme triplet grouping: Similar to extreme pairing (see Details >> num_col), extreme triplet grouping orders the rows as smallest, closest to the median, largest, second smallest, second closest to the median, second largest, etc. Each triplet gets a group identifier and we either perform recursive extreme triplet grouping on the identifiers or fold the identifiers and transfer the fold IDs to the original rows. For some datasets, this can give more balanced groups than extreme pairing, but on average, extreme pairing works better. Due to the grouping into triplets instead of pairs, they tend to create different groupings though, so when creating many fold columns and extreme pairing cannot create enough unique fold columns, we can create the remaining (or at least some additional number) with extreme triplet grouping. Extreme triplet grouping is implemented in rearrr::triplet_extremes(). handle_existing_fold_cols How to handle existing fold columns. Either "keep_warn", "keep", or "remove". To add extra fold columns, use "keep" or "keep_warn". Note that existing fold columns might be renamed. To replace the existing fold columns, use "remove". parallel Whether to parallelize the fold column comparisons, when `unique_fold_cols_only` is TRUE. Requires a registered parallel backend, e.g. via doParallel::registerDoParallel(). Details cat_col: 1. `data` is subset by `cat_col`. 2. Subsets are grouped and merged. id_col: 1. Groups are created from unique IDs. num_col: 1. Rows are shuffled. Note that this will only affect rows with the same value in `num_col`. 2. Extreme pairing 1: Rows are ordered as smallest, largest, second smallest, second largest, etc. Each pair gets a group identifier. (See rearrr::pair_extremes()) 3. If `extreme_pairing_levels` > 1: These group identifiers are reordered as smallest, largest, second smallest, second largest, etc., by the sum of `num_col` in the represented rows.
These pairs (of pairs) get a new set of group identifiers, and the process is repeated `extreme_pairing_levels` - 2 times. Note that the group identifiers at the last level will represent 2^`extreme_pairing_levels` rows, which is why you should be careful when choosing that setting. 4. The group identifiers from the last pairing are folded (randomly divided into groups), and the fold identifiers are transferred to the original rows. N.B. When doing extreme pairing of an unequal number of rows, the row with the smallest value is placed in a group by itself, and the order is instead: smallest, second smallest, largest, third smallest, second largest, etc. N.B. When `num_fold_cols` > 1 and fewer than `num_fold_cols` fold columns have been created after `max_iters` attempts, we try with extreme triplets instead (see rearrr::triplet_extremes()). It groups the elements as smallest, closest to the median, largest, second smallest, second closest to the median, second largest, etc. We can also choose to never/only use extreme triplets via `use_of_triplets`. cat_col AND id_col: 1. `data` is subset by `cat_col`. 2. Groups are created from unique IDs in each subset. 3. Subsets are merged. cat_col AND num_col: 1. `data` is subset by `cat_col`. 2. Subsets are grouped by `num_col`. 3. Subsets are merged such that the largest group (by sum of `num_col`) from the first category is merged with the smallest group from the second category, etc. num_col AND id_col: 1. Values in `num_col` are aggregated for each ID, using `id_aggregation_fn`. 2. The IDs are grouped, using the aggregated values as "num_col". 3. The groups of the IDs are transferred to the rows. cat_col AND num_col AND id_col: 1. Values in `num_col` are aggregated for each ID, using `id_aggregation_fn`. 2. IDs are subset by `cat_col`. 3. The IDs in each subset are grouped, by using the aggregated values as "num_col". 4.
The subsets are merged such that the largest group (by sum of the aggregated values) from the first category is merged with the smallest group from the second category, etc. 5. The groups of the IDs are transferred to the rows. Value data.frame with grouping factor for subsetting in cross-validation. Author(s) <NAME>, <<EMAIL>> See Also partition for balanced partitions Other grouping functions: all_groups_identical(), collapse_groups_by, collapse_groups(), group_factor(), group(), partition(), splt() Examples # Attach packages library(groupdata2) library(dplyr) # Create data frame df <- data.frame( "participant" = factor(rep(c("1", "2", "3", "4", "5", "6"), 3)), "age" = rep(sample(c(1:100), 6), 3), "diagnosis" = factor(rep(c("a", "b", "a", "a", "b", "b"), 3)), "score" = sample(c(1:100), 3 * 6) ) df <- df %>% arrange(participant) df$session <- rep(c("1", "2", "3"), 6) # Using fold() ## Without balancing df_folded <- fold(data = df, k = 3, method = "n_dist") ## With cat_col df_folded <- fold( data = df, k = 3, cat_col = "diagnosis", method = "n_dist" ) ## With id_col df_folded <- fold( data = df, k = 3, id_col = "participant", method = "n_dist" ) ## With num_col # Note: 'method' would not be used in this case df_folded <- fold(data = df, k = 3, num_col = "score") # With cat_col and id_col df_folded <- fold( data = df, k = 3, cat_col = "diagnosis", id_col = "participant", method = "n_dist" ) ## With cat_col, id_col and num_col df_folded <- fold( data = df, k = 3, cat_col = "diagnosis", id_col = "participant", num_col = "score" ) # Order by folds df_folded <- df_folded %>% arrange(.folds) ## Multiple fold columns # Useful for repeated cross-validation # Note: Consider running in parallel df_folded <- fold( data = df, k = 3, cat_col = "diagnosis", id_col = "participant", num_fold_cols = 5, unique_fold_cols_only = TRUE, max_iters = 4 ) # Different `k` per fold column # Note: `length(k) == num_fold_cols` df_folded <- fold( data = df, k = c(2, 3), cat_col = "diagnosis", 
id_col = "participant", num_fold_cols = 2, unique_fold_cols_only = TRUE, max_iters = 4 ) # Check the generated columns # with `summarize_group_cols()` summarize_group_cols( data = df_folded, group_cols = paste0('.folds_', 1:2) ) ## Check if additional `extreme_pairing_levels` ## improve the numerical balance set.seed(2) # try with seed 1 as well df_folded_1 <- fold( data = df, k = 3, num_col = "score", extreme_pairing_levels = 1 ) df_folded_1 %>% dplyr::ungroup() %>% summarize_balances(group_cols = '.folds', num_cols = 'score') set.seed(2) # Try with seed 1 as well df_folded_2 <- fold( data = df, k = 3, num_col = "score", extreme_pairing_levels = 2 ) df_folded_2 %>% dplyr::ungroup() %>% summarize_balances(group_cols = '.folds', num_cols = 'score') # We can directly compare how balanced the 'score' is # in the two fold columns using a combination of # `summarize_balances()` and `ranked_balances()` # We see that the second fold column (made with `extreme_pairing_levels = 2`) # has a lower standard deviation of its mean scores - meaning that they # are more similar and thus more balanced df_folded_1$.folds_2 <- df_folded_2$.folds df_folded_1 %>% dplyr::ungroup() %>% summarize_balances(group_cols = c('.folds', '.folds_2'), num_cols = 'score') %>% ranked_balances() group Create groups from your data Description [Stable] Divides data into groups by a wide range of methods. Creates a grouping factor with 1s for group 1, 2s for group 2, etc. Returns a data.frame grouped by the grouping factor for easy use in magrittr `%>%` pipelines. By default*, the data points in a group are connected sequentially (e.g. c(1, 1, 2, 2, 3, 3)) and splitting is done from top to bottom. *Except in the "every" method. There are five types of grouping methods: The "n_*" methods split the data into a given number of groups. They differ in how they handle excess data points. The "greedy" method uses a group size to split the data into groups, greedily grabbing `n` data points from the top. 
The last group may thus differ in size (e.g. c(1, 1, 2, 2, 3)). The "l_*" methods use a list of either starting points ("l_starts") or group sizes ("l_sizes"). The "l_starts" method can also auto-detect group starts (when a value differs from the previous value). The "every" method puts every `n`th data point into the same group (e.g. c(1, 2, 3, 1, 2, 3)). The step methods "staircase" and "primes" increase the group size by a step for each group. Note: To create groups balanced by a categorical and/or numerical variable, see the fold() and partition() functions. Usage group( data, n, method = "n_dist", starts_col = NULL, force_equal = FALSE, allow_zero = FALSE, return_factor = FALSE, descending = FALSE, randomize = FALSE, col_name = ".groups", remove_missing_starts = FALSE ) Arguments data data.frame or vector. When a grouped data.frame, the function is applied group-wise. n Depends on `method`. Number of groups (default), group size, list of group sizes, list of group starts, number of data points between group members, step size or prime number to start at. See `method`. Passed as whole number(s) and/or percentage(s) (0 < n < 1) and/or character. Method "l_starts" allows 'auto'. method "greedy", "n_dist", "n_fill", "n_last", "n_rand", "l_sizes", "l_starts", "every", "staircase", or "primes". Note: examples are sizes of the generated groups based on a vector with 57 elements. greedy: Divides up the data greedily given a specified group size (e.g. 10, 10, 10, 10, 10, 7). `n` is group size. n_dist (default): Divides the data into a specified number of groups and distributes excess data points across groups (e.g. 11, 11, 12, 11, 12). `n` is number of groups. n_fill: Divides the data into a specified number of groups and fills up groups with excess data points from the beginning (e.g. 12, 12, 11, 11, 11). `n` is number of groups. n_last: Divides the data into a specified number of groups. It finds the most equal group sizes possible, using all data points.
Only the last group is able to differ in size (e.g. 11, 11, 11, 11, 13). `n` is number of groups. n_rand: Divides the data into a specified number of groups. Excess data points are placed randomly in groups (max. 1 per group) (e.g. 12, 11, 11, 11, 12). `n` is number of groups. l_sizes: Divides up the data by a list of group sizes. Excess data points are placed in an extra group at the end. E.g. n = list(0.2, 0.3) outputs groups with sizes (11, 17, 29). `n` is a list of group sizes. l_starts: Starts new groups at specified values in the `starts_col` vector. n is a list of starting positions. Skip values by c(value, skip_to_number) where skip_to_number is the nth appearance of the value in the vector after the previous group start. The first data point is automatically a starting position. E.g. n = c(1, 3, 7, 25, 50) outputs groups with sizes (2, 4, 18, 25, 8). To skip: given vector c("a", "e", "o", "a", "e", "o"), n = list("a", "e", c("o", 2)) outputs groups with sizes (1, 4, 1). If passing n = 'auto' the starting positions are automatically found such that a group is started whenever a value differs from the previous value (see find_starts()). Note that all NAs are first replaced by a single unique value, meaning that they will also cause group starts. See differs_from_previous() to set a threshold for what is considered "different". E.g. n = "auto" for c(10, 10, 7, 8, 8, 9) would start groups at the first 10, 7, 8 and 9, and give c(1, 1, 2, 3, 3, 4). every: Combines every `n`th data point into a group (e.g. 12, 12, 11, 11, 11 with n = 5). `n` is the number of data points between group members ("every n"). staircase: Uses step size to divide up the data. Group size increases with 1 step for every group, until there is no more data (e.g. 5, 10, 15, 20, 7). `n` is step size. primes: Uses prime numbers as group sizes. Group size increases to the next prime number until there is no more data (e.g. 5, 7, 11, 13, 17, 4). `n` is the prime number to start at.
starts_col Name of column with values to match in method "l_starts" when `data` is a data.frame. Pass 'index' to use row names. (Character) force_equal Create equal groups by discarding excess data points. Implementation varies between methods. (Logical) allow_zero Whether `n` can be passed as 0. Can be useful when programmatically finding n. (Logical) return_factor Only return the grouping factor. (Logical) descending Change the direction of the method. (Not fully implemented) (Logical) randomize Randomize the grouping factor. (Logical) col_name Name of the added grouping factor. remove_missing_starts Recursively remove elements from the list of starts that are not found. For method "l_starts" only. (Logical) Value data.frame grouped by existing grouping variables and the new grouping factor. Author(s) <NAME>, <<EMAIL>> See Also Other grouping functions: all_groups_identical(), collapse_groups_by, collapse_groups(), fold(), group_factor(), partition(), splt() Other staircase tools: %primes%(), %staircase%(), group_factor() Other l_starts tools: differs_from_previous(), find_missing_starts(), find_starts(), group_factor() Examples # Attach packages library(groupdata2) library(dplyr) # Create data frame df <- data.frame( "x" = c(1:12), "species" = factor(rep(c("cat", "pig", "human"), 4)), "age" = sample(c(1:100), 12) ) # Using group() df_grouped <- group(df, n = 5, method = "n_dist") # Using group() in pipeline to get mean age df_means <- df %>% group(n = 5, method = "n_dist") %>% dplyr::summarise(mean_age = mean(age)) # Using group() with `l_sizes` df_grouped <- group( data = df, n = list(0.2, 0.3), method = "l_sizes" ) # Using group_factor() with `l_starts` # `c('pig', 2)` skips to the second appearance of # 'pig' after the first appearance of 'cat' df_grouped <- group( data = df, n = list("cat", c("pig", 2), "human"), method = "l_starts", starts_col = "species" ) groupdata2 groupdata2: A package for creating groups from data Description Methods for dividing data 
into groups. Create balanced partitions and cross-validation folds. Perform time series windowing and general grouping and splitting of data. Balance existing groups with up- and downsampling. Details The groupdata2 package provides six main functions: group(), group_factor(), splt(), partition(), fold(), and balance(). group Create groups from your data. Divides data into groups by a wide range of methods. Creates a grouping factor with 1s for group 1, 2s for group 2, etc. Returns a data.frame grouped by the grouping factor for easy use in magrittr pipelines. Go to group() group_factor Create grouping factor for subsetting your data. Divides data into groups by a wide range of methods. Creates and returns a grouping factor with 1s for group 1, 2s for group 2, etc. Go to group_factor() splt Split data by a wide range of methods. Divides data into groups by a wide range of methods. Splits data by these groups. Go to splt() partition Create balanced partitions (e.g. training/test sets). Splits data into partitions. Balances a given categorical variable between partitions and keeps (if possible) all data points with a shared ID (e.g. participant_id) in the same partition. Go to partition() fold Create balanced folds for cross-validation. Divides data into groups (folds) by a wide range of methods. Balances a given categorical variable between folds and keeps (if possible) all data points with the same ID (e.g. participant_id) in the same fold. Go to fold() balance Balance the sizes of your groups with up- and downsampling. Uses up- and/or downsampling to fix the group sizes to the min, max, mean, or median group size or to a specific number of rows. Has a set of methods for balancing on ID level. Go to balance() Author(s) <NAME>, <<EMAIL>> group_factor Create grouping factor for subsetting your data Description [Stable] Divides data into groups by a wide range of methods. Creates and returns a grouping factor with 1s for group 1, 2s for group 2, etc.
By default*, the data points in a group are connected sequentially (e.g. c(1, 1, 2, 2, 3, 3)) and splitting is done from top to bottom. *Except in the "every" method. There are five types of grouping methods: The "n_*" methods split the data into a given number of groups. They differ in how they handle excess data points. The "greedy" method uses a group size to split the data into groups, greedily grabbing `n` data points from the top. The last group may thus differ in size (e.g. c(1, 1, 2, 2, 3)). The "l_*" methods use a list of either starting points ("l_starts") or group sizes ("l_sizes"). The "l_starts" method can also auto-detect group starts (when a value differs from the previous value). The "every" method puts every `n`th data point into the same group (e.g. c(1, 2, 3, 1, 2, 3)). The step methods "staircase" and "primes" increase the group size by a step for each group. Note: To create groups balanced by a categorical and/or numerical variable, see the fold() and partition() functions. Usage group_factor( data, n, method = "n_dist", starts_col = NULL, force_equal = FALSE, allow_zero = FALSE, descending = FALSE, randomize = FALSE, remove_missing_starts = FALSE ) Arguments data data.frame or vector. When a grouped data.frame, the function is applied group-wise. n Depends on `method`. Number of groups (default), group size, list of group sizes, list of group starts, number of data points between group members, step size or prime number to start at. See `method`. Passed as whole number(s) and/or percentage(s) (0 < n < 1) and/or character. Method "l_starts" allows 'auto'. method "greedy", "n_dist", "n_fill", "n_last", "n_rand", "l_sizes", "l_starts", "every", "staircase", or "primes". Note: examples are sizes of the generated groups based on a vector with 57 elements. greedy: Divides up the data greedily given a specified group size (e.g. 10, 10, 10, 10, 10, 7). `n` is group size.
n_dist (default): Divides the data into a specified number of groups and distributes excess data points across groups (e.g. 11, 11, 12, 11, 12). `n` is number of groups. n_fill: Divides the data into a specified number of groups and fills up groups with excess data points from the beginning (e.g. 12, 12, 11, 11, 11). `n` is number of groups. n_last: Divides the data into a specified number of groups. It finds the most equal group sizes possible, using all data points. Only the last group is able to differ in size (e.g. 11, 11, 11, 11, 13). `n` is number of groups. n_rand: Divides the data into a specified number of groups. Excess data points are placed randomly in groups (max. 1 per group) (e.g. 12, 11, 11, 11, 12). `n` is number of groups. l_sizes: Divides up the data by a list of group sizes. Excess data points are placed in an extra group at the end. E.g. n = list(0.2, 0.3) outputs groups with sizes (11, 17, 29). `n` is a list of group sizes. l_starts: Starts new groups at specified values in the `starts_col` vector. n is a list of starting positions. Skip values by c(value, skip_to_number) where skip_to_number is the nth appearance of the value in the vector after the previous group start. The first data point is automatically a starting position. E.g. n = c(1, 3, 7, 25, 50) outputs groups with sizes (2, 4, 18, 25, 8). To skip: given vector c("a", "e", "o", "a", "e", "o"), n = list("a", "e", c("o", 2)) outputs groups with sizes (1, 4, 1). If passing n = 'auto' the starting positions are automatically found such that a group is started whenever a value differs from the previous value (see find_starts()). Note that all NAs are first replaced by a single unique value, meaning that they will also cause group starts. See differs_from_previous() to set a threshold for what is considered "different". E.g. n = "auto" for c(10, 10, 7, 8, 8, 9) would start groups at the first 10, 7, 8 and 9, and give c(1, 1, 2, 3, 3, 4). every: Combines every `n`th data point into a group (e.g. 12, 12, 11, 11, 11 with n = 5).
`n` is the number of data points between group members ("every n"). staircase: Uses step size to divide up the data. Group size increases with 1 step for every group, until there is no more data (e.g. 5, 10, 15, 20, 7). `n` is step size. primes: Uses prime numbers as group sizes. Group size increases to the next prime number until there is no more data (e.g. 5, 7, 11, 13, 17, 4). `n` is the prime number to start at. starts_col Name of column with values to match in method "l_starts" when `data` is a data.frame. Pass 'index' to use row names. (Character) force_equal Create equal groups by discarding excess data points. Implementation varies between methods. (Logical) allow_zero Whether `n` can be passed as 0. Can be useful when programmatically finding n. (Logical) descending Change the direction of the method. (Not fully implemented) (Logical) randomize Randomize the grouping factor. (Logical) remove_missing_starts Recursively remove elements from the list of starts that are not found. For method "l_starts" only. (Logical) Value Grouping factor with 1s for group 1, 2s for group 2, etc. N.B. If `data` is a grouped data.frame, the output is a data.frame with the existing groupings and the generated grouping factor. The row order from `data` is maintained.
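To make the two most common methods concrete, here is a short sketch of the "n_dist" and "greedy" logic described under `method`, written in Python for illustration. This is not groupdata2's implementation and the function names are hypothetical; in particular, this "n_dist" sketch places the excess data points in the last groups, whereas groupdata2 distributes them across the groups (e.g. 11, 11, 12, 11, 12 for 57 elements and 5 groups).

```python
# Hypothetical re-implementations of two grouping methods described above.
# Each returns a grouping "factor": a list with the group number per element.

def group_factor_n_dist(n_elements, n_groups):
    """'n_dist': `n_groups` groups whose sizes differ by at most 1.
    Here the excess elements go to the last groups; groupdata2's exact
    placement of the excess differs."""
    base, excess = divmod(n_elements, n_groups)
    factor = []
    for group in range(1, n_groups + 1):
        # The last `excess` groups each get one extra element.
        size = base + (1 if group > n_groups - excess else 0)
        factor.extend([group] * size)
    return factor

def group_factor_greedy(n_elements, group_size):
    """'greedy': grab `group_size` elements from the top per group;
    only the last group can be smaller."""
    return [i // group_size + 1 for i in range(n_elements)]
```

With 57 elements, `group_factor_greedy(57, 10)` yields six groups of sizes 10, 10, 10, 10, 10 and 7, matching the "greedy" example above.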
Author(s) <NAME>, <<EMAIL>> See Also Other grouping functions: all_groups_identical(), collapse_groups_by, collapse_groups(), fold(), group(), partition(), splt() Other staircase tools: %primes%(), %staircase%(), group() Other l_starts tools: differs_from_previous(), find_missing_starts(), find_starts(), group() Examples # Attach packages library(groupdata2) library(dplyr) # Create a data frame df <- data.frame( "x" = c(1:12), "species" = factor(rep(c("cat", "pig", "human"), 4)), "age" = sample(c(1:100), 12) ) # Using group_factor() with n_dist groups <- group_factor(df, 5, method = "n_dist") df$groups <- groups # Using group_factor() with greedy groups <- group_factor(df, 5, method = "greedy") df$groups <- groups # Using group_factor() with l_sizes groups <- group_factor(df, list(0.2, 0.3), method = "l_sizes") df$groups <- groups # Using group_factor() with l_starts groups <- group_factor(df, list("cat", c("pig", 2), "human"), method = "l_starts", starts_col = "species" ) df$groups <- groups partition Create balanced partitions Description [Stable] Splits data into partitions. Balances a given categorical variable and/or numerical variable between partitions and keeps (if possible) all data points with a shared ID (e.g. participant_id) in the same partition. Usage partition( data, p = 0.2, cat_col = NULL, num_col = NULL, id_col = NULL, id_aggregation_fn = sum, extreme_pairing_levels = 1, force_equal = FALSE, list_out = TRUE ) Arguments data data.frame. Can be grouped, in which case the function is applied group-wise. p List or vector of partition sizes. Given as whole number(s) and/or percentage(s) (0 < `p` < 1). E.g. c(0.2, 3, 0.1). cat_col Name of categorical variable to balance between partitions. E.g. when training and testing a model for predicting a binary variable (a or b), we usually want both classes represented in both the training set and the test set. N.B. If also passing an `id_col`, `cat_col` should be constant within each ID. 
num_col Name of numerical variable to balance between partitions. N.B. When used with `id_col`, values in `num_col` for each ID are aggregated using `id_aggregation_fn` before being balanced. id_col Name of factor with IDs. Used to keep all rows that share an ID in the same partition (if possible). E.g. If we have measured a participant multiple times and want to see the effect of time, we want to have all observations of this participant in the same partition. N.B. When `data` is a grouped data.frame (see dplyr::group_by()), IDs that appear in multiple groupings might end up in different partitions in those groupings. id_aggregation_fn Function for aggregating values in `num_col` for each ID, before balancing `num_col`. N.B. Only used when `num_col` and `id_col` are both specified. extreme_pairing_levels How many levels of extreme pairing to do when balancing partitions by a numerical column (i.e. `num_col` is specified). Extreme pairing: Rows/pairs are ordered as smallest, largest, second smallest, second largest, etc. If `extreme_pairing_levels` > 1, this is done "recursively" on the extreme pairs. See `Details/num_col` for more. N.B. Larger values work best with large datasets. If set too high, the result might not be stochastic. Always check if an increase actually makes the partitions more balanced. See `Examples`. force_equal Whether to discard excess data. (Logical) list_out Whether to return partitions in a list. (Logical) N.B. When `data` is a grouped data.frame, the output is always a data.frame with partition identifiers. Details cat_col: 1. `data` is subset by `cat_col`. 2. Subsets are partitioned and merged. id_col: 1. Partitions are created from unique IDs. num_col: 1. Rows are shuffled. Note that this will only affect rows with the same value in `num_col`. 2. Extreme pairing 1: Rows are ordered as smallest, largest, second smallest, second largest, etc. Each pair gets a group identifier. 3.
If `extreme_pairing_levels` > 1: The group identifiers are reordered as smallest, largest, second smallest, second largest, etc., by the sum of `num_col` in the represented rows. These pairs (of pairs) get a new set of group identifiers, and the process is repeated `extreme_pairing_levels`-2 times. Note that the group identifiers at the last level will represent 2^`extreme_pairing_levels` rows, which is why you should be careful when choosing that setting. 4. The final group identifiers are shuffled, and their order is applied to the full dataset. 5. The ordered dataset is split by the sizes in `p`. N.B. When doing extreme pairing of an unequal number of rows, the row with the largest value is placed in a group by itself, and the order is instead: smallest, second largest, second smallest, third largest, ... , largest. cat_col AND id_col: 1. `data` is subset by `cat_col`. 2. Partitions are created from unique IDs in each subset. 3. Subsets are merged. cat_col AND num_col: 1. `data` is subset by `cat_col`. 2. Subsets are partitioned by `num_col`. 3. Subsets are merged. num_col AND id_col: 1. Values in `num_col` are aggregated for each ID, using id_aggregation_fn. 2. The IDs are partitioned, using the aggregated values as "num_col". 3. The partition identifiers are transferred to the rows of the IDs. cat_col AND num_col AND id_col: 1. Values in `num_col` are aggregated for each ID, using id_aggregation_fn. 2. IDs are subset by `cat_col`. 3. The IDs for each subset are partitioned, by using the aggregated values as "num_col". 4. The partition identifiers are transferred to the rows of the IDs. Value If `list_out` is TRUE: A list of partitions where partitions are data.frames. If `list_out` is FALSE: A data.frame with grouping factor for subsetting. N.B. When `data` is a grouped data.frame, the output is always a data.frame with a grouping factor.
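The first level of the extreme-pairing scheme described under `Details/num_col` can be sketched as follows. This Python sketch illustrates the pairing idea only (it is not groupdata2's implementation): values are paired smallest-with-largest so that pair sums are roughly similar, and the pairs can then be dealt out to partitions to balance `num_col`.

```python
# Sketch of one level of "extreme pairing" (illustrative only).

def extreme_pairs(values):
    """Return (pair_id, value) tuples with values paired
    smallest-with-largest. With an odd count, the largest value
    gets a pair group of its own, as described above."""
    ordered = sorted(values)
    out = []
    next_id = 0
    if len(ordered) % 2 == 1:
        out.append((next_id, ordered.pop()))  # largest value alone
        next_id += 1
    lo, hi = 0, len(ordered) - 1
    while lo < hi:
        out.append((next_id, ordered[lo]))  # smallest remaining
        out.append((next_id, ordered[hi]))  # largest remaining
        next_id += 1
        lo += 1
        hi -= 1
    return out
```

For c(1, 2, 3, 4, 5, 6) this gives the pairs (1, 6), (2, 5) and (3, 4), all summing to 7; repeating the procedure on the pair sums corresponds to `extreme_pairing_levels` > 1.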
Author(s) <NAME>, <<EMAIL>> See Also Other grouping functions: all_groups_identical(), collapse_groups_by, collapse_groups(), fold(), group_factor(), group(), splt() Examples # Attach packages library(groupdata2) library(dplyr) # Create data frame df <- data.frame( "participant" = factor(rep(c("1", "2", "3", "4", "5", "6"), 3)), "age" = rep(sample(c(1:100), 6), 3), "diagnosis" = factor(rep(c("a", "b", "a", "a", "b", "b"), 3)), "score" = sample(c(1:100), 3 * 6) ) df <- df %>% arrange(participant) df$session <- rep(c("1", "2", "3"), 6) # Using partition() # Without balancing partitions <- partition(data = df, p = c(0.2, 0.3)) # With cat_col partitions <- partition(data = df, p = 0.5, cat_col = "diagnosis") # With id_col partitions <- partition(data = df, p = 0.5, id_col = "participant") # With num_col partitions <- partition(data = df, p = 0.5, num_col = "score") # With cat_col and id_col partitions <- partition( data = df, p = 0.5, cat_col = "diagnosis", id_col = "participant" ) # With cat_col, num_col and id_col partitions <- partition( data = df, p = 0.5, cat_col = "diagnosis", num_col = "score", id_col = "participant" ) # Return data frame with grouping factor # with list_out = FALSE partitions <- partition(df, c(0.5), list_out = FALSE) # Check if additional extreme_pairing_levels # improve the numerical balance set.seed(2) # try with seed 1 as well partitions_1 <- partition( data = df, p = 0.5, num_col = "score", extreme_pairing_levels = 1, list_out = FALSE ) partitions_1 %>% dplyr::group_by(.partitions) %>% dplyr::summarise( sum_score = sum(score), mean_score = mean(score) ) set.seed(2) # try with seed 1 as well partitions_2 <- partition( data = df, p = 0.5, num_col = "score", extreme_pairing_levels = 2, list_out = FALSE ) partitions_2 %>% dplyr::group_by(.partitions) %>% dplyr::summarise( sum_score = sum(score), mean_score = mean(score) ) ranked_balances Extract ranked standard deviations from summary Description [Experimental] Extract the standard deviations 
(default) from the "Summary" data.frame from the output of summarize_balances(), ordered by the `SD_rank` column. See examples of usage in summarize_balances(). Usage ranked_balances(summary, measure = "SD") Arguments summary "Summary" data.frame from output of summarize_balances(). Can also be the direct output list of summarize_balances(), in which case the "Summary" element is used. measure The measure to extract rows for. One of: "mean", "median", "SD", "IQR", "min", "max". The most meaningful measures to consider as metrics of balance are `SD` and `IQR`, as a smaller spread of variables across group summaries means they are more similar. NOTE: Ranks are of standard deviations and not affected by this argument. Value The rows in `summary` where `measure` == "SD", ordered by the `SD_rank` column. Author(s) <NAME>, <<EMAIL>> See Also Other summarization functions: summarize_balances(), summarize_group_cols() splt Split data by a range of methods Description [Stable] Divides data into groups by a wide range of methods. Splits data by these groups. Wraps group() with split(). Usage splt( data, n, method = "n_dist", starts_col = NULL, force_equal = FALSE, allow_zero = FALSE, descending = FALSE, randomize = FALSE, remove_missing_starts = FALSE ) Arguments data data.frame or vector. When a grouped data.frame, the function is applied group-wise. n Depends on ‘method‘. Number of groups (default), group size, list of group sizes, list of group starts, number of data points between group members, step size or prime number to start at. See `method`. Passed as whole number(s) and/or percentage(s) (0 < n < 1) and/or character. Method "l_starts" allows 'auto'. method "greedy", "n_dist", "n_fill", "n_last", "n_rand", "l_sizes", "l_starts", "every", "staircase", or "primes". Note: examples are sizes of the generated groups based on a vector with 57 elements. greedy: Divides up the data greedily given a specified group size (e.g.10, 10, 10, 10, 10, 7). `n` is group size. 
n_dist (default): Divides the data into a specified number of groups and distributes excess data points across groups (e.g. 11, 11, 12, 11, 12). `n` is number of groups. n_fill: Divides the data into a specified number of groups and fills up groups with excess data points from the beginning (e.g. 12, 12, 11, 11, 11). `n` is number of groups. n_last: Divides the data into a specified number of groups. It finds the most equal group sizes possible, using all data points. Only the last group is able to differ in size (e.g. 11, 11, 11, 11, 13). `n` is number of groups. n_rand: Divides the data into a specified number of groups. Excess data points are placed randomly in groups (max. 1 per group) (e.g. 12, 11, 11, 11, 12). `n` is number of groups. l_sizes: Divides up the data by a list of group sizes. Excess data points are placed in an extra group at the end. E.g. n = list(0.2, 0.3) outputs groups with sizes (11, 17, 29). `n` is a list of group sizes. l_starts: Starts new groups at specified values in the `starts_col` vector. n is a list of starting positions. Skip values by c(value, skip_to_number) where skip_to_number is the nth appearance of the value in the vector after the previous group start. The first data point is automatically a starting position. E.g. n = c(1, 3, 7, 25, 50) outputs groups with sizes (2, 4, 18, 25, 8). To skip: given vector c("a", "e", "o", "a", "e", "o"), n = list("a", "e", c("o", 2)) outputs groups with sizes (1, 4, 1). If passing n = 'auto' the starting positions are automatically found such that a group is started whenever a value differs from the previous value (see find_starts()). Note that all NAs are first replaced by a single unique value, meaning that they will also cause group starts. See differs_from_previous() to set a threshold for what is considered "different". E.g. n = "auto" for c(10, 10, 7, 8, 8, 9) would start groups at the first 10, 7, 8 and 9, and give c(1, 1, 2, 3, 3, 4). every: Combines every `n`th data point into a group (e.g. 12, 12, 11, 11, 11 with n = 5).
`n` is the number of data points between group members ("every n"). staircase: Uses step size to divide up the data. Group size increases with 1 step for every group, until there is no more data (e.g. 5, 10, 15, 20, 7). `n` is step size. primes: Uses prime numbers as group sizes. Group size increases to the next prime number until there is no more data (e.g. 5, 7, 11, 13, 17, 4). `n` is the prime number to start at. starts_col Name of column with values to match in method "l_starts" when `data` is a data.frame. Pass 'index' to use row names. (Character) force_equal Create equal groups by discarding excess data points. Implementation varies between methods. (Logical) allow_zero Whether `n` can be passed as 0. Can be useful when programmatically finding n. (Logical) descending Change the direction of the method. (Not fully implemented) (Logical) randomize Randomize the grouping factor. (Logical) remove_missing_starts Recursively remove elements from the list of starts that are not found. For method "l_starts" only. (Logical) Value list of the split `data`. N.B. If `data` is a grouped data.frame, there’s an outer list for each group. The names are based on the group indices (see dplyr::group_indices()). Author(s) <NAME>, <<EMAIL>> See Also Other grouping functions: all_groups_identical(), collapse_groups_by, collapse_groups(), fold(), group_factor(), group(), partition() Examples # Attach packages library(groupdata2) library(dplyr) # Create data frame df <- data.frame( "x" = c(1:12), "species" = factor(rep(c("cat", "pig", "human"), 4)), "age" = sample(c(1:100), 12) ) # Using splt() df_list <- splt(df, 5, method = "n_dist") summarize_balances Summarize group balances Description [Experimental] Summarize the balances of numeric, categorical, and ID columns in and between groups in one or more group columns. This tool allows you to quickly and thoroughly assess the balance of different columns between groups.
This is for instance useful after creating groups with fold(), partition(), or collapse_groups() to check how well they did and to compare multiple groupings. The output contains: 1. `Groups`: a summary per group (per grouping column). 2. `Summary`: statistical descriptors of the group summaries. 3. `Normalized Summary`: statistical descriptors of a set of "normalized" group summaries. (Disabled by default) When comparing how balanced the grouping columns are, we can use the standard deviations of the group summary columns. The lower a standard deviation is, the more similar the groups are in that column. To quickly extract these standard deviations, ordered by an aggregated rank, use ranked_balances() on the "Summary" data.frame in the output. Usage summarize_balances( data, group_cols, cat_cols = NULL, num_cols = NULL, id_cols = NULL, summarize_size = TRUE, include_normalized = FALSE, rank_weights = NULL, cat_levels_rank_weights = NULL, num_normalize_fn = function(x) { rearrr::min_max_scale(x, old_min = quantile(x, 0.025), old_max = quantile(x, 0.975), new_min = 0, new_max = 1) } ) Arguments data data.frame with group columns to summarize by. Can be grouped (see dplyr::group_by()), in which case the function is applied group-wise. This is not to be confused with `group_cols`. group_cols Names of columns with group identifiers to summarize columns in `data` by. cat_cols Names of categorical columns to summarize. Each categorical level is counted per group. To distinguish between levels with the same name from different `cat_col` columns, we prefix the count column name for each categorical level with parts of the name of the categorical column. This amount can be controlled with `max_cat_prefix_chars`. Normalization when `include_normalized` is enabled: The count of each categorical level is normalized with log(1 + count). num_cols Names of numerical columns to summarize. For each column, the mean and sum are calculated per group.
Normalization when `include_normalized` is enabled: Each column is normalized with `num_normalize_fn` before calculating the mean and sum per group. id_cols Names of factor columns with IDs to summarize. The number of unique IDs is counted per group. Normalization when `include_normalized` is enabled: The count of unique IDs is normalized with log(1 + count). summarize_size Whether to summarize the number of rows per group. include_normalized Whether to calculate and include the normalized summary in the output. rank_weights A named vector with weights for averaging the rank columns when calculating the `SD_rank` column. The name is one of the balancing columns and the number is its weight. Non-specified columns are given the weight 1. The weights are automatically scaled to sum to 1. When summarizing size (see `summarize_size`), name its weight "size". E.g. c("size" = 1, "a_cat_col" = 2, "a_num_col" = 4, "an_id_col" = 2). cat_levels_rank_weights Weights for averaging ranks of the categorical levels in `cat_cols`. Given as a named list with a named vector for each column in `cat_cols`. Non-specified levels are given the weight 1. The weights are automatically scaled to sum to 1. E.g. list("a_cat_col" = c("a" = 3, "b" = 5), "b_cat_col" = c("1" = 3, "2" = 9)) num_normalize_fn Function for normalizing the `num_cols` columns before calculating normalized group summaries. Only used when `include_normalized` is enabled. Value list with two/three data.frames: Groups: A summary per group. `cat_cols`: Each level has its own column with the count of the level per group. `num_cols`: The mean and sum per group. `id_cols`: The count of unique IDs per group. Summary: Statistical descriptors of the columns in `Groups`. Contains the mean, median, standard deviation (SD), interquartile range (IQR), min, and max measures. Especially the standard deviations and IQR measures can tell us about how balanced the groups are.
When comparing multiple `group_cols`, the group column with the lowest SD and IQR can be considered the most balanced. Normalized Summary: (Disabled by default) Same statistical descriptors as in `Summary` but for a "normalized" version of the group summaries. The motivation is that these normalized measures can more easily be compared or combined into a single "balance score". First, we normalize each balance column: `cat_cols`: The level counts in the original group summaries are normalized with log(1 + count). This eases comparison of the statistical descriptors (especially standard deviations) of levels with very different count scales. `num_cols`: The numeric columns are normalized prior to summarization by group, using the `num_normalize_fn` function. By default this applies MinMax scaling to columns such that ~95% of the values are expected to be in the [0, 1] range. `id_cols`: The counts of unique IDs in the original group summaries are normalized with log(1 + count). Contains the mean, median, standard deviation (SD), interquartile range (IQR), min, and max measures.
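The two normalizations above can be sketched briefly. The following Python sketch illustrates the log(1 + count) count normalization and a quantile-based MinMax scaling like the default `num_normalize_fn` (which uses rearrr::min_max_scale with the 2.5% and 97.5% quantiles); it is an illustration under the assumption of R's default type-7 quantiles, not the rearrr/groupdata2 source.

```python
import math

def normalize_count(count):
    """Count normalization for `cat_cols`/`id_cols`: log(1 + count)."""
    return math.log(1 + count)

def min_max_scale(xs, lower_q=0.025, upper_q=0.975):
    """Map the [2.5%, 97.5%] quantile range of `xs` onto [0, 1], so roughly
    95% of values land in [0, 1]; values outside the range fall outside it."""
    ordered = sorted(xs)

    def quantile(q):
        # Linear-interpolation quantile (R's default "type 7").
        pos = q * (len(ordered) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(ordered) - 1)
        return ordered[lo] + (pos - lo) * (ordered[hi] - ordered[lo])

    old_min, old_max = quantile(lower_q), quantile(upper_q)
    return [(x - old_min) / (old_max - old_min) for x in xs]
```

Because the scaling anchors on quantiles rather than the raw min/max, a few extreme values do not compress the rest of the column, which keeps the normalized summaries comparable across columns.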
Author(s) <NAME>, <<EMAIL>> See Also Other summarization functions: ranked_balances(), summarize_group_cols() Examples # Attach packages library(groupdata2) library(dplyr) set.seed(1) # Create data frame df <- data.frame( "participant" = factor(rep(c("1", "2", "3", "4", "5", "6"), 3)), "age" = rep(sample(c(1:100), 6), 3), "diagnosis" = factor(rep(c("a", "b", "a", "a", "b", "b"), 3)), "score" = sample(c(1:100), 3 * 6) ) df <- df %>% arrange(participant) df$session <- rep(c("1", "2", "3"), 6) # Using fold() ## Without balancing set.seed(1) df_folded <- fold(data = df, k = 3) # Check the balances of the various columns # As we have not used balancing in `fold()` # we should not expect it to be amazingly balanced df_folded %>% dplyr::ungroup() %>% summarize_balances( group_cols = ".folds", num_cols = c("score", "age"), cat_cols = "diagnosis", id_cols = "participant" ) ## With balancing set.seed(1) df_folded <- fold( data = df, k = 3, cat_col = "diagnosis", num_col = 'score', id_col = 'participant' ) # Now the balance should be better # although it may be difficult to get a good balance # in the 'score' column when also balancing on 'diagnosis' # and keeping all rows per participant in the same fold df_folded %>% dplyr::ungroup() %>% summarize_balances( group_cols = ".folds", num_cols = c("score", "age"), cat_cols = "diagnosis", id_cols = "participant" ) # Comparing multiple grouping columns # Create 3 fold columns that only balance "score" set.seed(1) df_folded <- fold( data = df, k = 3, num_fold_cols = 3, num_col = 'score' ) # Summarize all three grouping cols at once (summ <- df_folded %>% dplyr::ungroup() %>% summarize_balances( group_cols = paste0(".folds_", 1:3), num_cols = c("score") ) ) # Extract the across-group standard deviations # The group column with the lowest standard deviation(s) # is the most balanced group column summ %>% ranked_balances() summarize_group_cols Summarize group columns Description [Experimental] Get the following summary statistics for each
group column: 1. Number of groups 2. Mean, median, std., IQR, min, and max number of rows per group. The output can be given in either long (default) or wide format.

Usage

summarize_group_cols(data, group_cols, long = TRUE)

Arguments

data        data.frame with one or more group columns (factors) to summarize.
group_cols  Names of columns to summarize. These columns must be factors in `data`.
long        Whether the output should be in long or wide format.

Value

Data frame (tibble) with summary statistics for each column in `group_cols`.

Author(s)

<NAME>, <<EMAIL>>

See Also

Other summarization functions: ranked_balances(), summarize_balances()

Examples

# Attach packages
library(groupdata2)

# Create data frame
df <- data.frame(
  "some_var" = runif(25),
  "grp_1" = factor(sample(1:5, size = 25, replace = TRUE)),
  "grp_2" = factor(sample(1:8, size = 25, replace = TRUE)),
  "grp_3" = factor(sample(LETTERS[1:3], size = 25, replace = TRUE)),
  "grp_4" = factor(sample(LETTERS[1:12], size = 25, replace = TRUE))
)

# Summarize the group columns (long format)
summarize_group_cols(
  data = df,
  group_cols = paste0("grp_", 1:4),
  long = TRUE
)

# Summarize the group columns (wide format)
summarize_group_cols(
  data = df,
  group_cols = paste0("grp_", 1:4),
  long = FALSE
)

upsample Upsampling of rows in a data frame

Description

[Maturing] Uses random upsampling to fix the group sizes to the largest group in the data frame. Wraps balance().

Usage

upsample(
  data,
  cat_col,
  id_col = NULL,
  id_method = "n_ids",
  mark_new_rows = FALSE,
  new_rows_col_name = ".new_row"
)

Arguments

data     data.frame. Can be grouped, in which case the function is applied group-wise.
cat_col  Name of categorical variable to balance by. (Character)
id_col   Name of factor with IDs. (Character) IDs are considered entities, e.g. allowing us to add or remove all rows for an ID. How this is used is up to the `id_method`. E.g. if we have measured a participant multiple times and want to make sure that we keep all these measurements.
Then we would either remove/add all measurements for the participant or leave in all measurements for the participant. N.B. When `data` is a grouped data.frame (see dplyr::group_by()), IDs that appear in multiple groupings are considered separate entities within those groupings.

id_method  Method for balancing the IDs. (Character) "n_ids", "n_rows_c", "distributed", or "nested".

n_ids (default): Balances on ID level only. It makes sure there are the same number of IDs for each category. This might lead to a different number of rows between categories.

n_rows_c: Attempts to level the number of rows per category, while only removing/adding entire IDs. This is done in 2 steps: 1. If a category needs to add all its rows one or more times, the data is repeated. 2. Iteratively, the ID with the number of rows closest to the lacking/excessive number of rows is added/removed. This happens until adding/removing the closest ID would lead to a size further from the target size than the current size. If multiple IDs are closest, one is randomly sampled.

distributed: Distributes the lacking/excess rows equally between the IDs. If the number to distribute cannot be divided equally, some IDs will have 1 row more/less than the others.

nested: Calls balance() on each category with IDs as cat_col. I.e. if size is "min", IDs will have the size of the smallest ID in their category.

mark_new_rows      Add column with 1s for added rows, and 0s for original rows. (Logical)
new_rows_col_name  Name of column marking new rows. Defaults to ".new_row".

Details

Without `id_col`: Upsampling is done with replacement for added rows, while the original data remains intact.

With `id_col`: See the `id_method` description.

Value

data.frame with added rows. Ordered by potential grouping variables, `cat_col` and (potentially) `id_col`.
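For illustration only, the row allocation behind the "distributed" id_method can be sketched outside R. This is a JavaScript sketch of the even-split rule described above, not the package's implementation, and the function name is hypothetical:

```javascript
// Sketch of the "distributed" id_method allocation rule: divide the
// lacking/excess number of rows as evenly as possible across the IDs.
// When the count does not divide evenly, some IDs get one extra row.
function distributeRows(lacking, numIds) {
  const base = Math.floor(lacking / numIds);
  const extra = lacking % numIds;
  // the first `extra` IDs receive one additional row each
  return Array.from({ length: numIds }, (_, i) => base + (i < extra ? 1 : 0));
}

// e.g. 7 rows over 3 IDs -> [3, 2, 2]
```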
Author(s) <NAME>, <<EMAIL>> See Also Other sampling functions: balance(), downsample() Examples # Attach packages library(groupdata2) # Create data frame df <- data.frame( "participant" = factor(c(1, 1, 2, 3, 3, 3, 3, 4, 4, 5, 5, 5, 5)), "diagnosis" = factor(c(0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0)), "trial" = c(1, 2, 1, 1, 2, 3, 4, 1, 2, 1, 2, 3, 4), "score" = sample(c(1:100), 13) ) # Using upsample() upsample(df, cat_col = "diagnosis") # Using upsample() with id_method "n_ids" # With column specifying added rows upsample(df, cat_col = "diagnosis", id_col = "participant", id_method = "n_ids", mark_new_rows = TRUE ) # Using upsample() with id_method "n_rows_c" # With column specifying added rows upsample(df, cat_col = "diagnosis", id_col = "participant", id_method = "n_rows_c", mark_new_rows = TRUE ) # Using upsample() with id_method "distributed" # With column specifying added rows upsample(df, cat_col = "diagnosis", id_col = "participant", id_method = "distributed", mark_new_rows = TRUE ) # Using upsample() with id_method "nested" # With column specifying added rows upsample(df, cat_col = "diagnosis", id_col = "participant", id_method = "nested", mark_new_rows = TRUE ) %primes% Find remainder from ’primes’ method Description [Stable] When using the "primes" method, the last group might not have the size of the associated prime number if there are not enough elements left. Use %primes% to find this remainder. Usage size %primes% start_at Arguments size Size to group (Integer) start_at Prime to start at (Integer) Value Remainder (Integer). Returns 0 if the last group has the size of the associated prime number. 
Author(s)

<NAME>, <<EMAIL>>

See Also

Other staircase tools: %staircase%(), group_factor(), group()

Other remainder tools: %staircase%()

Examples

# Attach packages
library(groupdata2)

100 %primes% 2

%staircase% Find remainder from 'staircase' method

Description

[Stable] When using the "staircase" method, the last group might not have the size of the second last group + step size. Use %staircase% to find this remainder.

Usage

size %staircase% step_size

Arguments

size       Size to staircase (Integer)
step_size  Step size (Integer)

Value

Remainder (Integer). Returns 0 if the last group has the size of the second last group + step size.

Author(s)

<NAME>, <<EMAIL>>

See Also

Other staircase tools: %primes%(), group_factor(), group()

Other remainder tools: %primes%()

Examples

# Attach packages
library(groupdata2)

100 %staircase% 2

# Finding remainder with value 0
size = 150
for (step_size in c(1:30)){
  if (size %staircase% step_size == 0){
    print(step_size)
  }
}
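The remainder arithmetic behind `%primes%` and `%staircase%` can be sketched as follows. This is a JavaScript illustration of the group-size rules described above (group sizes follow consecutive primes from `start_at` for "primes", and grow by `step_size` per group for "staircase"), not the package's code; the function names are hypothetical:

```javascript
// Trial-division primality check, sufficient for small group sizes.
function isPrime(n) {
  if (n < 2) return false;
  for (let d = 2; d * d <= n; d++) if (n % d === 0) return false;
  return true;
}

// size %primes% startAt: subtract consecutive prime-sized groups
// until the next prime no longer fits; what's left is the remainder.
function primesRemainder(size, startAt) {
  let remaining = size;
  let p = startAt; // assumed to be a prime, as in the R docs
  while (remaining >= p) {
    remaining -= p;
    do { p++; } while (!isPrime(p));
  }
  return remaining;
}

// size %staircase% stepSize: group sizes grow by stepSize each group.
function staircaseRemainder(size, stepSize) {
  let remaining = size;
  let groupSize = stepSize;
  while (remaining >= groupSize) {
    remaining -= groupSize;
    groupSize += stepSize;
  }
  return remaining;
}

// primesRemainder(100, 2)    -> 0  (2+3+5+7+11+13+17+19+23 = 100)
// staircaseRemainder(100, 2) -> 10 (2+4+...+18 = 90, next group of 20 is short)
```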
[flow](#flow-circleci-cljdoc-badge) [CircleCI](https://circleci.com/gh/fmnoise/flow/tree/master) [cljdoc badge](/d/dawcs/flow/CURRENT)
===

[Usage](#usage)
---

[![Current Version](https://clojars.org/dawcs/flow/latest-version.svg)](https://clojars.org/dawcs/flow)

### [Motivation](#motivation)

Consider a trivial example:

```
(defn update-handler [req db]
  (if-let [user (:user req)]
    (if-let [id (:id req)]
      (if-let [entity (fetch-entity db id)]
        (if (accessible? entity user)
          (update-entity! entity (:params req))
          {:error "Access denied" :code 403})
        {:error "Entity not found" :code 404})
      {:error "Missing entity id" :code 400})
    {:error "Login required" :code 401}))
```

Looks ugly, doesn't it? Let's add some readability. First, require flow:

```
(require '[dawcs.flow :refer [then else]])
```

Then let's extract each check into a function to make the code clearer and more testable (note the use of `ex-info` as an error container, able to store a map of data in addition to the message):

```
(defn check-user [req]
  (or (:user req) (ex-info "Login required" {:code 401})))

(defn check-entity-id [req]
  (or (:id req) (ex-info "Missing entity id" {:code 400})))

(defn check-entity-exists [db id]
  (or (fetch-entity db id) (ex-info "Entity not found" {:code 404})))

(defn check-entity-access [entity user]
  (if (accessible? entity user) entity (ex-info "Access denied" {:code 403})))
```

Then let's add an error-formatting helper to turn the ex-info data into the desired format:

```
(defn format-error [^Throwable err]
  (assoc (ex-data err) :error (.getMessage err)))
;; ex-message in clojure 1.10 can be used instead
```

And finally we can write a pretty readable pipeline (note the thread-last macro):

```
(defn update-handler [req db]
  (->> (check-user req)
       (then (fn [_] (check-entity-id req)))
       (then #(check-entity-exists db %))
       (then #(check-entity-access % (:user req)))
       (then #(update-entity!
% (:params req)))) (else format-error)))
```

### [Basic blocks](#basic-blocks)

Let's see what's going on here:

**then** accepts a value and a function; if the value is not an exception instance, it calls the function on it and returns the result, otherwise it returns the given exception instance.

**else** works as the opposite, simply returning non-exception values and applying the given function to exception-instance values.

There's also a syntax-sugar version, **else-if**. It accepts an exception class as its first argument, making it pretty useful as a functional replacement for `catch` branches:

```
(->> (call / 1 0)
     (then inc) ;; bypassed
     (else-if ArithmeticException (constantly :bad-math))
     (else-if Throwable (constantly :unknown-error))) ;; also bypassed, because the previous function returns a normal value
```

**call** is a functional `try/catch` replacement designed to catch all exceptions (starting from `Throwable`, but that can be changed, more details soon) and return their instances, so any thrown exception will be caught and passed through the chain. `call` accepts a function and its arguments, wraps the function call in a `try/catch` block, and returns either the caught exception instance or the result of the call, for example:

```
(->> (call / 1 0) (then inc)) ;; => #error {:cause "Divide by zero" :via ...}
(->> (call / 0 1) (then inc)) ;; => 1
```

Using `call` inside `then` may look verbose:

```
(->> (rand-int 10) ;; some calculation which may return 0
     (then (fn [v] (call #(/ 10 v))))) ;; can cause "Divide by zero" so it should be inside call
```

so there's **then-call** for it (and **else-call** also exists for consistency):

```
(->> (rand-int 10)
     (then-call #(/ 10 %)))
```

If we need to pass both cases (exception instances and normal values) through some function, **thru** is the right tool. It works similarly to `doto`, but accepts a function as its first argument.
It always returns the given value, so the supplied function is called only for side effects (like error logging or cleanup):

```
(->> (call / 1 0) (thru println)) ;; => #error {:cause "Divide by zero" :via ...}
(->> (call / 0 1) (thru println)) ;; => 0
```

`thru` may be used similarly to `finally`, though it's not exactly the same.

And a small cheatsheet to summarize the basic blocks:

![cheatsheet](https://raw.githubusercontent.com/dawcs/flow/master/doc/flow.png)

### [Early return](#early-return)

Keeping in mind that `call` will catch exceptions and return them immediately, throwing an exception may be used as a replacement for `return`:

```
(->> (call get-objects)
     (then-call (partial map (fn [obj]
                               (if (unprocessable? obj)
                                 (throw (ex-info "Unprocessable object" {:object obj}))
                                 (calculate-result obj))))))
```

Another case where early return may be useful is `let`:

```
(defn assign-manager [report-id manager-id]
  (->> (call (fn []
               (let [report (or (db-find report-id)
                                (throw (ex-info "Report not found" {:id report-id})))
                     manager (or (db-find manager-id)
                                 (throw (ex-info "Manager not found" {:id manager-id})))]
                 {:manager manager :report report})))
       (then db-persist)
       (else log-error)))
```

Wrapping a function in `call` and throwing inside `let` to achieve early return looks ugly and verbose, so `flow` has its own version of `let`, called `flet`, which wraps all evaluations in `call`. If an exception instance is returned during binding or body evaluation, it is returned immediately; otherwise `flet` works as a normal `let`:

```
(flet [a 1 b 2] (+ a b)) ;; => 3
(flet [a 1 b (ex-info "oops" {:reason "something went wrong"})] (+ a b)) ;; => #error { :cause "oops" ... }
(flet [a 1 b 2] (Exception. "oops")) ;; => #error { :cause "oops" ... }
(flet [a 1 b (throw (Exception. "boom"))] (+ a b)) ;; => #error { :cause "boom" ... }
(flet [a 1 b 2] (throw (Exception. "boom"))) ;; => #error { :cause "boom" ...
}
```

So the previous example can be simplified:

```
(defn assign-manager [report-id manager-id]
  (->> (flet [report (or (db-find report-id) (ex-info "Report not found" {:id report-id}))
              manager (or (db-find manager-id) (ex-info "Manager not found" {:id manager-id}))]
         {:manager manager :report report})
       (then db-persist)
       (else log-error)))
```

### [Tuning exceptions catching](#tuning-exceptions-catching)

`call` catches `java.lang.Throwable` by default, which may not be what you need, so this behavior can be changed:

```
(catch-from! java.lang.Exception)
```

Some exceptions (like `clojure.lang.ArityException`) signal bad code or a typo, and throwing them helps find the problem as early as possible, while catching them may lead to obscurity and hidden problems. To prevent `call` from catching them, certain exception classes may be added to the ignored-exceptions list:

```
(ignore-exceptions! #{IllegalArgumentException ClassCastException})

;; add without overwriting previous values
(add-ignored-exceptions! #{NullPointerException})
```

These functions mutate dynamic variables and can be used during system startup to perform a global change, but if you need to change behavior in a certain block of code (or you simply want a more functional approach without involving global mutable state) there's **call-with**, which works like `call` but takes a handler as its first argument: a function which is called on the caught exception:

```
(defn handler [e] (if (instance?
clojure.lang.ArityException e) (throw e) e))

(call-with handler inc) ;; throws ArityException, as `inc` is called here with no arguments
```

Using multimethods/protocols we can achieve the full power of fine-tuning what to catch and return as an exception instance and what to throw:

```
(defprotocol ErrorHandling
  (handle [e]))

;; let's say we want to catch everything starting from Exception but throw NullPointerException
(extend-protocol ErrorHandling
  Throwable
  (handle [e] (throw e))
  Exception
  (handle [e] e)
  NullPointerException
  (handle [e] (throw e)))

(call-with handle + 1 nil) ;; throws NullPointerException
```

A custom handler may also be passed to `flet` in the first pair of the binding vector:

```
;; this flet works the same as let if an exception occurs
(flet [:handler #(throw %)
       a 1
       b (/ a 0)]
  (+ a b))
;; throws ArithmeticException

;; but it can do early return if an exception is returned as a value
(flet [:handler #(throw %)
       a 1
       b (ex-info "Something went wrong" {:because "Monday"})]
  (/ a b))
;; => #error {:cause "Something went wrong" :data {:because "Monday"} ... }
```

[How it's different from Either?](#how-its-different-from-either)
---

The core idea of `flow` is a clear separation between normal values (everything which is not an exception instance) and values which indicate an error (exception instances), without involving additional containers. This makes it possible to get rid of redundant abstractions like `Either`, and also prevents a mess with value containers (if you've ever seen `Either.Left` inside `Either.Right` you probably know what I'm talking about).
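The then/else separation described above is not Clojure-specific. As a rough illustration, here is a minimal sketch of the same railway idea in JavaScript; the names and the use of `Error` instances as failure values are this sketch's own conventions, not flow's API:

```javascript
// then: apply f only to non-error values; errors pass through untouched.
const then = (f) => (value) => (value instanceof Error ? value : f(value));

// else: apply f only to error values; normal values pass through untouched.
// (named elseFn because `else` is a JS keyword)
const elseFn = (f) => (value) => (value instanceof Error ? f(value) : value);

// call: functional try/catch, returning the caught error instead of throwing.
const call = (f, ...args) => {
  try { return f(...args); } catch (e) { return e; }
};

// A small pipeline: divide, then increment; errors skip `then` and reach `elseFn`.
const run = (a, b) =>
  elseFn((e) => `error: ${e.message}`)(
    then((x) => x + 1)(
      call((x, y) => {
        if (y === 0) throw new Error("Divide by zero");
        return x / y;
      }, a, b)
    )
  );

// run(4, 2) -> 3
// run(1, 0) -> "error: Divide by zero"
```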
Exceptions are already first-class citizens in the Java world, but they are usually combined with a side effect (throwing) for propagation purposes, while `flow` actively promotes a more functional usage: returning the exception instance:

```
;; construction
(ex-info "User not found" {:id 123})

;; catching and returning instance
(try (/ 1 0) (catch Exception e e))
```

In both examples above we clearly understand that the returned value is an error, so there's no need to wrap it in any other container like `Either` (also, Clojure's core function `ex-info` is a perfect tool for storing additional data in an exception instance, and it's already available out of the box). That means no or minimal rework of existing code in order to get started with `flow`, while `Either` would need wrapping of both normal and error values into its corresponding `Right` and `Left` containers. These features make `flow` much easier to introduce into an existing project than `Either`.

### [But isn't using exceptions costly?](#but-isnt-using-exceptions-costly)

In some of the examples above an exception instance is constructed and passed through the chain without being thrown. That's the main use case and ideology of flow: using an exception instance as an error value. But we know that constructing an exception is costly due to stacktrace creation. Since Java 7 it has been possible to omit stacktrace creation, but that change to ExceptionInfo was not accepted by the core team (more details [here](https://clojure.atlassian.net/browse/CLJ-2423)), so we ended up creating a custom exception class which implements `IExceptionInfo` but can skip stacktrace creation. It's called `Fail` and there's a handy constructor for it:

```
(fail-with {:msg "User not found" :data {:id 1}})
;; => #error {:cause "User not found" :data {:id 1} :via [...] :trace []}

;; it behaves the same as ExceptionInfo
(ex-data *1) ;; => {:id 1}

;; map may be empty or nil
(fail-with nil) ;; => #error {:cause nil :data {} :via [...]
:trace []}

;; stacktrace is disabled by default but can be turned on
(fail-with {:msg "User not found" :data {:id 1} :trace? true})

;; there's also a throwing constructor (stacktrace is enabled by default)
(fail-with! {:msg "User not found" :data {:id 1}})
```

[Status](#status)
---

The API is considered stable since version `1.0.0`. See the changelog for the list of breaking changes.

[Who's using Flow?](#whos-using-flow)
---

* [Eventum](https://eventum.no) - connects event organizers with their dream venue
* [Yellowsack](https://yellowsack.com) - dumpster bag & pick up service

[Acknowledgements](#acknowledgements)
---

Thanks to <NAME> for his inspiring talk about Railway Oriented Programming: <https://fsharpforfunandprofit.com/rop/>

[License](#license)
---

Copyright © 2018 fmnoise

Distributed under the Eclipse Public License either version 1.0 or (at your option) any later version.

[2.0.0](#200)
---

* **BREAKING!** `then` doesn't wrap to `call` anymore, use `then-call` to achieve that
* Added call-wrapping `then-call`, `else-call` and `thru-call`
* Added Fail - custom container for failure representation with ability to skip stacktrace
* Added `fail-with` and `fail-with!` - map-oriented Fail construction helpers
* Added `*default-handler*` and `call-with` for more functional and thread-safe exceptions handling
* Added ability to pass exceptions handler to `flet`
* Marked `fail`, `fail!`, `catching` and `ignoring` deprecated

[1.0.0](#100)
---

* Added `fail!` - fail throwing shortcut
* **BREAKING!** Removed `fail-data`, `fail-cause` and `fail-trace`
* **BREAKING!** `ignored?` now accepts an instance of `Throwable` instead of a class
* **BREAKING!** `*exception-base-class*` is now `*catch-from*`

[0.5.0](#050)
---

* Fix `fail-data` implementation
* Marked `fail-data`, `fail-cause` and `fail-trace` deprecated

[0.4.0](#040)
---

* **BREAKING!** `nil` is passed to `ExceptionInfo` if no message was passed to `fail`

dawcs.flow
===

---

#### *catch-from* clj

Base exception class which will be
caught by `call`. Dynamic, defaults to `Throwable`. Use `catch-from!` or `catching` to modify.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L6)

---

#### *default-handler* clj

Default handler for processing caught exceptions. When the caught exception's class is `*catch-from*` or a subclass of it, and it is not listed in `*ignored-exceptions*` (and is not a subclass of any class listed there), returns the exception instance, otherwise throws it.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L88)

---

#### *ignored-exceptions* clj

Exception classes which will be ignored by `call`. Dynamic, defaults to an empty set. Use `ignore-exceptions!`, `add-ignored-exceptions!` or `ignoring` to modify.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L10)

---

#### add-ignored-exceptions! clj

```
(add-ignored-exceptions! ex-class-set)
```

Adds the given set of classes to `*ignored-exceptions*`.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L22)

---

#### call clj

```
(call f & args)
```

Calls the given function with the supplied args in a `try/catch` block, then calls `*default-handler*` on any caught exception. If no exception was caught during the function call, returns its result.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L94)

---

#### call-with clj

```
(call-with handler f & args)
```

Calls the given function with the supplied args in a `try/catch` block, then calls the handler on any caught exception. If no exception was caught during the function call, returns its result.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L101)

---

#### catch-from! clj

```
(catch-from! ex-class)
```

Sets `*exception-base-class*` to the specified class.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L28)

---

#### catching cljmacro (deprecated)

```
(catching exception-base-class & body)
```

Executes body with `*exception-base-class*` bound to the given class. Deprecated due to possible problems with multi-threaded code. Use `call-with` to achieve the same behavior with thread-safety.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L36)

---

#### else clj

```
(else f value)
```

If value is a `fail?`, applies f to it, otherwise returns value.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L120)

---

#### else-call clj

```
(else-call f value)
```

If value is a `fail?`, applies f to it wrapped in `call`, otherwise returns value.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L125)

---

#### else-if clj

```
(else-if ex-class f value)
```

If value is an exception of ex-class, applies f to it, otherwise returns value.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L143)

---

#### fail clj (deprecated)

```
(fail)
(fail msg-or-data)
(fail msg data)
(fail msg data cause)
```

Calls `ex-info` with the given msg (optional, defaults to nil), data (optional, defaults to {}) and cause (optional, defaults to nil). Deprecated, use `ex-info` instead.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L170)

---

#### fail! clj (deprecated)

```
(fail! & args)
```

Constructs a `fail` with the given args and throws it. Deprecated, use `ex-info` with `throw` instead.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L186)

---

#### fail-with clj

```
(fail-with {:keys [msg data cause suppress? trace?] :or {data {} suppress? false trace? false} :as options})
```

Constructs a `Fail` with the given options. Stacktrace is disabled by default.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L73)

---

#### fail-with! clj

```
(fail-with! {:keys [trace?] :or {trace? true} :as options})
```

Constructs a `Fail` with the given options and throws it. Stacktrace is enabled by default.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L80)

---

#### fail? clj

```
(fail? t)
(fail? ex-class t)
```

Checks if the value is an exception of the given class (optional, defaults to Throwable).

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L56)

---

#### flet cljmacro

```
(flet bindings & body)
```

Flow adaptation of Clojure's `let`. Wraps evaluation of each binding in `call-with` with `*default-handler*`. If a `fail?` value is returned from a binding evaluation, it is returned immediately and all remaining bindings and the body are skipped. A custom exception handler may be passed as the first binding, with the name :handler.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L159)

---

#### ignore-exceptions! clj

```
(ignore-exceptions! ex-class-set)
```

Sets `*ignored-exceptions*` to the given set of classes.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L16)

---

#### ignored? clj

```
(ignored? t)
```

Checks if the exception should be ignored.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L63)

---

#### ignoring cljmacro (deprecated)

```
(ignoring ignored-exceptions & body)
```

Executes body with `*ignored-exceptions*` bound to the given value. Deprecated due to possible problems with multi-threaded code. Use `call-with` to achieve the same behavior with thread-safety.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L45)

---

#### then clj

```
(then f value)
```

If value is not a `fail?`, applies f to it, otherwise returns value.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L109)

---

#### then-call clj

```
(then-call f value)
```

If value is not a `fail?`, applies f to it wrapped in `call`, otherwise returns value.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L114)

---

#### thru clj

```
(thru f value)
```

Applies f to value (for side effects). Returns value. Works similarly to `doto`, but accepts a function as the first argument.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L131)

---

#### thru-call clj

```
(thru-call f value)
```

Applies f to value wrapped in `call` (for side effects). Returns value. Works similarly to `doto`, but accepts a function as the first argument. Please note that an exception thrown inside the function will be silently ignored by default.

[source](https://github.com/dawcs/flow/blob/2.0.0/src/dawcs/flow.clj#L137)
AWS SDK for JavaScript
===

The official AWS SDK for JavaScript, available for browsers and mobile devices, or Node.js backends.

For release notes, see the [CHANGELOG](https://github.com/aws/aws-sdk-js/blob/master/CHANGELOG.md). Prior to v2.4.8, release notes can be found at <https://aws.amazon.com/releasenotes/?tag=releasenotes%23keywords%23javascript>.

If you are upgrading from 1.x to 2.0 of the SDK, please see the [upgrading notes](https://github.com/aws/aws-sdk-js/blob/master/UPGRADING.md) for information on how to migrate existing code to work with the new major version.

Installing
---

### In the Browser

To use the SDK in the browser, simply add the following script tag to your HTML pages:

```
<script src="https://sdk.amazonaws.com/js/aws-sdk-2.555.0.min.js"></script>
```

You can also build a custom browser SDK with your specified set of AWS services. This can allow you to reduce the SDK's size, specify different API versions of services, or use AWS services that don't currently support CORS if you are working in an environment that does not enforce CORS. To get started: <http://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/building-sdk-for-browsers.html>

The AWS SDK is also compatible with [browserify](http://browserify.org).

For browser-based web, mobile and hybrid apps, you can use [AWS Amplify Library](https://aws.github.io/aws-amplify/?utm_source=aws-js-sdk&utm_campaign=browser) which extends the AWS SDK and provides an easier and declarative interface.

### In Node.js

The preferred way to install the AWS SDK for Node.js is to use the [npm](http://npmjs.org) package manager for Node.js.
Simply type the following into a terminal window:

```
npm install aws-sdk
```

### In React Native

To use the SDK in a React Native project, first install the SDK using npm:

```
npm install aws-sdk
```

Then within your application, you can reference the React Native compatible version of the SDK with the following:

```
var AWS = require('aws-sdk/dist/aws-sdk-react-native');
```

Alternatively, you can use [AWS Amplify Library](https://aws.github.io/aws-amplify/media/react_native_guide?utm_source=aws-js-sdk&utm_campaign=react-native) which extends the AWS SDK and provides React Native UI components and CLI support to work with AWS services.

### Using Bower

You can also use [Bower](http://bower.io) to install the SDK by typing the following into a terminal window:

```
bower install aws-sdk-js
```

Usage and Getting Started
---

You can find a getting started guide at: <http://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide>

The API reference is at: <https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/>

Usage with TypeScript
---

The AWS SDK for JavaScript bundles TypeScript definition files for use in TypeScript projects and to support tools that can read `.d.ts` files. Our goal is to keep these TypeScript definition files updated with each release for any public API.

### Pre-requisites

Before you can begin using these TypeScript definitions with your project, you need to make sure your project meets these requirements:

* Use TypeScript v2.x
* Include the TypeScript definitions for node. You can use npm to install them by typing the following into a terminal window:

```
npm install --save-dev @types/node
```

* If you are targeting es5 or older ECMA standards, your `tsconfig.json` has to include `'es5'` and `'es2015.promise'` under `compilerOptions.lib`. See [tsconfig.json](https://github.com/aws/aws-sdk-js/blob/master/ts/tsconfig.json) for an example.
### In the Browser

To use the TypeScript definition files with the global `AWS` object in a front-end project, add the following line to the top of your JavaScript file:

```
/// <reference types="aws-sdk" />
```

This will provide support for the global `AWS` object.

### In Node.js

To use the TypeScript definition files within a Node.js project, simply import `aws-sdk` as you normally would.

In a TypeScript file:

```
// import entire SDK
import AWS from 'aws-sdk';
// import AWS object without services
import AWS from 'aws-sdk/global';
// import individual service
import S3 from 'aws-sdk/clients/s3';
```

In a JavaScript file:

```
// import entire SDK
var AWS = require('aws-sdk');
// import AWS object without services
var AWS = require('aws-sdk/global');
// import individual service
var S3 = require('aws-sdk/clients/s3');
```

### With React

To create React applications with the AWS SDK, you can use the [AWS Amplify Library](https://aws.github.io/aws-amplify/media/react_guide?utm_source=aws-js-sdk&utm_campaign=react), which provides React components and CLI support to work with AWS services.

### With Angular

Due to the SDK's reliance on node.js typings, you may encounter compilation [issues](https://github.com/aws/aws-sdk-js/issues/1271) when using the typings provided by the SDK in an Angular project created using the Angular CLI. To resolve these issues, either add `"types": ["node"]` to the project's `tsconfig.app.json` file, or remove the `"types"` field entirely.

The [AWS Amplify Library](https://aws.github.io/aws-amplify/media/angular_guide?utm_source=aws-js-sdk&utm_campaign=angular) provides Angular components and CLI support to work with AWS services.

### Known Limitations

There are a few known limitations with the bundled TypeScript definitions at this time:

* Service client typings reflect the latest `apiVersion`, regardless of which `apiVersion` is specified when creating a client.
* Service-bound parameters use the `any` type.
Getting Help
---

Please use these community resources for getting help. We use the GitHub issues for tracking bugs and feature requests and have limited bandwidth to address them.

* Ask a question on [StackOverflow](https://stackoverflow.com/) and tag it with `aws-sdk-js`
* Come join the AWS JavaScript community on [gitter](https://gitter.im/aws/aws-sdk-js?source=orgpage)
* Open a support ticket with [AWS Support](https://console.aws.amazon.com/support/home#/)
* If it turns out that you may have found a bug, please [open an issue](https://github.com/aws/aws-sdk-js/issues/new)

Opening Issues
---

If you encounter a bug with the AWS SDK for JavaScript, we would like to hear about it. Search the [existing issues](https://github.com/aws/aws-sdk-js/issues) and try to make sure your problem doesn't already exist before opening a new issue. It's helpful if you include the version of the SDK, the Node.js or browser environment, and the OS you're using. Please include a stack trace and a reduced repro case when appropriate, too.

The GitHub issues are intended for bug reports and feature requests. For help and questions with using the AWS SDK for JavaScript, please make use of the resources listed in the [Getting Help](https://github.com/aws/aws-sdk-js#getting-help) section. There are limited resources available for handling issues, and by keeping the list of open issues lean we can respond in a timely manner.

Supported Services
---

Please see [SERVICES.md](https://github.com/aws/aws-sdk-js/blob/master/SERVICES.md) for a list of supported services.

License
---

This SDK is distributed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0); see LICENSE.txt and NOTICE.txt for more information.
Package ‘xpectr’

November 17, 2022

Title: Generates Expectations for 'testthat' Unit Testing
Version: 0.4.3
Description: Helps systematize and ease the process of building unit tests with the 'testthat' package by providing tools for generating expectations.
License: MIT + file LICENSE
URL: https://github.com/ludvigolsen/xpectr
BugReports: https://github.com/ludvigolsen/xpectr/issues
Depends: R (>= 3.5.0)
Imports: clipr (>= 0.7.0), checkmate (>= 2.0.0), dplyr, fansi (>= 0.4.1), lifecycle, plyr, rlang, rstudioapi (>= 0.10), stats, testthat (>= 2.3.1), tibble, utils, withr (>= 2.0.0)
Suggests: data.table, knitr, rmarkdown
RdMacros: lifecycle
Encoding: UTF-8
Roxygen: list(markdown = TRUE)
RoxygenNote: 7.2.1
VignetteBuilder: knitr

R topics documented: assertCollectionAddin, capture_parse_eval_side_effects, capture_side_effects, dputSelectedAddin, element_classes, element_lengths, element_types, gxs_function, gxs_selection, initializeGXSFunctionAddin, initializeTestthatAddin, insertExpectationsAddin, navigateTestFileAddin, num_total_elements, prepare_insertion, set_test_seed, simplified_formals, smpl, stop_if, strip, strip_msg, suppress_mw, wrapStringAddin, xpectr

assertCollectionAddin    Inserts code for a checkmate assert collection

Description

[Experimental] RStudio Addin: Inserts code for initializing and reporting a checkmate assert collection. See `Details` for how to set a key command.

Usage

assertCollectionAddin(add_comments = TRUE, insert = TRUE, indentation = NULL)

Arguments

add_comments  Whether to add comments around. (Logical) This makes it easy for a user to create their own addin without the comments.
insert  Whether to insert the code via rstudioapi::insertText() or return it. (Logical) N.B. Mainly intended for testing the addin programmatically.
indentation  Indentation of the code. (Numeric) N.B. Mainly intended for testing the addin programmatically.
Details

How to set up a key command in RStudio: After installing the package, go to: Tools >> Addins >> Browse Addins >> Keyboard Shortcuts. Find "Insert checkmate AssertCollection Code" and press its field under Shortcut. Press the desired key command, e.g. Alt+C. Press Apply. Press Execute.

Value

Inserts the following (excluding the ----):

----
# Check arguments ####
assert_collection <- checkmate::makeAssertCollection()
# checkmate::assert_ , add = assert_collection)
checkmate::reportAssertions(assert_collection)
# End of argument checks ####
----

Returns NULL invisibly.

Author(s)

<NAME>, <<EMAIL>>

See Also

Other addins: dputSelectedAddin(), initializeGXSFunctionAddin(), initializeTestthatAddin(), insertExpectationsAddin(), navigateTestFileAddin(), wrapStringAddin()

capture_parse_eval_side_effects    Capture side effects from parse eval

Description

Wraps string in capture_side_effects() before parsing and evaluating it. The side effects (error, warnings, messages) are returned in a list. When capturing an error, no other side effects are captured.

Usage

capture_parse_eval_side_effects(
  string,
  envir = NULL,
  copy_env = FALSE,
  reset_seed = FALSE,
  disable_crayon = TRUE
)

Arguments

string  String of code that can be parsed and evaluated in envir.
envir  Environment to evaluate in. Defaults to parent.frame().
copy_env  Whether to use deep copies of the environment when capturing side effects. (Logical) Disabled by default to save memory but is often preferable to enable, e.g. when the function alters non-local variables before throwing its error/warning/message.
reset_seed  Whether to reset the random state on exit. (Logical)
disable_crayon  Whether to disable crayon formatting. This can remove ANSI characters from the messages. (Logical)

Value

Named list with the side effects.
Author(s)

<NAME>, <<EMAIL>>

See Also

Other capturers: capture_side_effects()

Examples

# Attach package
library(xpectr)
capture_parse_eval_side_effects("stop('hi!')")
capture_parse_eval_side_effects("warning('hi!')")
capture_parse_eval_side_effects("message('hi!')")

capture_side_effects    Capture side effects

Description

Captures errors, warnings, and messages from an expression. In case of an error, no other side effects are captured. Simple wrapper for testthat's capture_error(), capture_warnings() and capture_messages(). Note: Evaluates expr up to three times.

Usage

capture_side_effects(
  expr,
  envir = NULL,
  copy_env = FALSE,
  reset_seed = FALSE,
  disable_crayon = TRUE
)

Arguments

expr  Expression.
envir  Environment to evaluate in. Defaults to parent.frame().
copy_env  Whether to use deep copies of the environment when capturing side effects. (Logical) Disabled by default to save memory but is often preferable to enable, e.g. when the function alters non-local variables before throwing its error/warning/message.
reset_seed  Whether to reset the random state on exit. (Logical)
disable_crayon  Whether to disable crayon formatting. This can remove ANSI characters from the messages. (Logical)

Value

Named list with the side effects.

Author(s)

<NAME>, <<EMAIL>>

See Also

Other capturers: capture_parse_eval_side_effects()

Examples

# Attach packages
library(xpectr)

fn <- function(raise = FALSE){
  message("Hi! I'm Kevin, your favorite message!")
  warning("G'Day Mam! I'm a warning to the world!")
  message("Kevin is ma name! Yesss!")
  warning("Hopefully the whole world will see me :o")
  if (isTRUE(raise)){
    stop("Lord Evil Error has arrived! Yeehaaa")
  }
  "the output"
}

capture_side_effects(fn())
capture_side_effects(fn(raise = TRUE))
capture_side_effects(fn(raise = TRUE), copy_env = TRUE)

dputSelectedAddin    Replaces selected code with its dput() output

Description

[Experimental] RStudio Addin: Runs dput() on the selected code and inserts it instead of the selection.
See `Details` for how to set a key command.

Usage

dputSelectedAddin(selection = NULL, insert = TRUE, indentation = 0)

Arguments

selection  String of code. (Character) E.g. "stop('This gives an expect_error test')". N.B. Mainly intended for testing the addin programmatically.
insert  Whether to insert the expectations via rstudioapi::insertText() or return them. (Logical) N.B. Mainly intended for testing the addin programmatically.
indentation  Indentation of the selection. (Numeric) N.B. Mainly intended for testing the addin programmatically.

Details

How: Parses and evaluates the selected code string, applies dput(), and inserts the output instead of the selection.

How to set up a key command in RStudio: After installing the package, go to: Tools >> Addins >> Browse Addins >> Keyboard Shortcuts. Find "dput() Selected" and press its field under Shortcut. Press the desired key command, e.g. Alt+D. Press Apply. Press Execute.

Value

Inserts the output of running dput() on the selected code. Does not return anything.

Author(s)

<NAME>, <<EMAIL>>

See Also

Other addins: assertCollectionAddin(), initializeGXSFunctionAddin(), initializeTestthatAddin(), insertExpectationsAddin(), navigateTestFileAddin(), wrapStringAddin()

element_classes    Gets the class of each element

Description

[Experimental] Applies class() to each element of `x` (without recursion). When class() returns multiple strings, the first class string is returned.

Usage

element_classes(x, keep_names = FALSE)

Arguments

x  List with elements.
keep_names  Whether to keep existing names. (Logical)

Details

Gets first string in class() for all elements.

Value

The main class of each element.
Author(s)

<NAME>, <<EMAIL>>

See Also

Other element descriptors: element_lengths(), element_types(), num_total_elements()

Examples

# Attach packages
library(xpectr)
l <- list("a" = c(1,2,3), "b" = "a", "c" = NULL)
element_classes(l)
element_classes(l, keep_names = TRUE)

element_lengths    Gets the length of each element

Description

[Experimental] Applies length() to each element of `x` (without recursion).

Usage

element_lengths(x, keep_names = FALSE)

Arguments

x  List with elements.
keep_names  Whether to keep existing names. (Logical)

Details

Simple wrapper for unlist(lapply(x, length)).

Value

The length of each element.

Author(s)

<NAME>, <<EMAIL>>

See Also

Other element descriptors: element_classes(), element_types(), num_total_elements()

Examples

# Attach packages
library(xpectr)
l <- list("a" = c(1,2,3), "b" = 1, "c" = NULL)
element_lengths(l)
element_lengths(l, keep_names = TRUE)

element_types    Gets the type of each element

Description

[Experimental] Applies typeof() to each element of `x` (without recursion).

Usage

element_types(x, keep_names = FALSE)

Arguments

x  List with elements.
keep_names  Whether to keep existing names. (Logical)

Details

Simple wrapper for unlist(lapply(x, typeof)).

Value

The type of each element.

Author(s)

<NAME>, <<EMAIL>>

See Also

Other element descriptors: element_classes(), element_lengths(), num_total_elements()

Examples

# Attach packages
library(xpectr)
l <- list("a" = c(1,2,3), "b" = "a", "c" = NULL)
element_types(l)
element_types(l, keep_names = TRUE)

gxs_function    Generate testthat expectations for argument values in a function

Description

[Experimental] Based on a set of supplied values for each function argument, a set of testthat expect_* statements are generated.

Included tests: The first value supplied for an argument is considered the valid baseline value. For each argument, we create tests for each of the supplied values, where the other arguments have their baseline value.
When testing a function that alters non-local variables, consider enabling `copy_env`.

See supported objects in details.

Usage

gxs_function(
  fn,
  args_values,
  extra_combinations = NULL,
  check_nulls = TRUE,
  indentation = 0,
  tolerance = "1e-4",
  round_to_tolerance = TRUE,
  strip = TRUE,
  sample_n = 30,
  envir = NULL,
  copy_env = FALSE,
  assign_output = TRUE,
  seed = 42,
  add_wrapper_comments = TRUE,
  add_test_comments = TRUE,
  start_with_newline = TRUE,
  end_with_newline = TRUE,
  out = "insert",
  parallel = FALSE
)

Arguments

fn  Function to create tests for.
args_values  The arguments and the values to create tests for. Should be supplied as a named list of lists, like the following:
  args_values = list(
    "x1" = list(1, 2, 3),
    "x2" = list("a", "b", "c")
  )
  The first value for each argument (referred to as the 'baseline' value) should be valid (not throw an error/message/warning). N.B. This is not checked but should lead to more meaningful tests. N.B. Please define the list directly in the function call. This is currently necessary.
extra_combinations  Additional combinations to test. List of lists, where each combination is a named sublist. E.g. the following two combinations:
  extra_combinations = list(
    list("x1" = 4, "x2" = "b"),
    list("x1" = 7, "x2" = "c")
  )
  N.B. Unspecified arguments get the baseline value. If you find yourself adding many combinations, an additional gxs_function() call with different baseline values might be preferable.
check_nulls  Whether to try all arguments with NULL. (Logical) When enabled, you don't need to add NULL to your `args_values`, unless it should be the baseline value.
indentation  Indentation of the selection. (Numeric)
tolerance  The tolerance for numeric tests as a string, like "1e-4". (Character)
round_to_tolerance  Whether to round numeric elements to the specified tolerance. (Logical) This is currently applied to numeric columns and vectors (excluding some lists).
strip  Whether to insert strip_msg() and strip() in tests of side effects.
(Logical) Sometimes testthat tests have differences in punctuation and newlines on different systems. By stripping both the error message and the expected message of non-alphanumeric symbols, we can avoid such failed tests.
sample_n  The number of elements/rows to sample. Set to NULL to avoid sampling. Inserts smpl() in the generated tests when sampling was used. A seed is set internally, setting sample.kind as "Rounding" to ensure compatibility with R versions < 3.6.0. The order of the elements/rows is kept intact. No replacement is used, so no oversampling will take place. When testing a big data.frame, sampling the rows can help keep the test files somewhat readable.
envir  Environment to evaluate in. Defaults to parent.frame().
copy_env  Whether each combination should be tested in a deep copy of the environment. (Logical) Side effects will be captured in copies of the copy, so two copies of the environment will exist at the same time. Disabled by default to save memory but is often preferable to enable, e.g. when the function changes non-local variables.
assign_output  Whether to assign the output of a function call or long selection to a variable. This will avoid recalling the function and decrease cluttering. (Logical) Heuristic: when the `selection` isn't a string and contains a parenthesis, it is considered a function call. A selection with more than 30 characters will be assigned as well. The tests themselves can be more difficult to interpret, as you will have to look at the assignment to see the object that is being tested.
seed  seed to set. (Whole number)
add_wrapper_comments  Whether to add intro and outro comments. (Logical)
add_test_comments  Whether to add comments for each test. (Logical)
start_with_newline, end_with_newline  Whether to have a newline in the beginning/end. (Logical)
out  Either "insert" or "return". "insert" (Default): Inserts the expectations via rstudioapi::insertText(). "return": Returns the expectations in a list.
These can be prepared for insertion with prepare_insertion().
parallel  Whether to parallelize the generation of expectations. (Logical) Requires a registered parallel backend, like with doParallel::registerDoParallel.

Details

The following "types" are currently supported or intended to be supported in the future. Please suggest more types and tests in a GitHub issue!

Note: A set of fallback tests will be generated for unsupported objects.

Type          Supported   Notes
Side effects  Yes         Errors, warnings, and messages.
Vector        Yes         Lists are treated differently, depending on their structure.
Factor        Yes
Data Frame    Yes         List columns (like nested tibbles) are currently skipped.
Matrix        Yes         Supported but could be improved.
Formula       Yes
Function      Yes
NULL          Yes
Array         No
Dates         No          Base and lubridate.
ggplot2       No          This may be a challenge, but would be cool!

Value

Either NULL or the unprepared expectations as a character vector.

Author(s)

<NAME>, <<EMAIL>>

See Also

Other expectation generators: gxs_selection(), initializeGXSFunctionAddin(), insertExpectationsAddin()

Examples

# Attach packages
library(xpectr)

## Not run:
fn <- function(x, y, z){
  if (x > 3) stop("'x' > 3")
  if (y < 0) warning("'y'<0")
  if (z == 10) message("'z' was 10!")
  x + y + z
}

# Create expectations
# Note: define the list in the call
gxs_function(fn,
             args_values = list(
               "x" = list(2, 4, NA),
               "y" = list(0, -1),
               "z" = list(5, 10))
)

# Add additional combinations
gxs_function(fn,
             args_values = list(
               "x" = list(2, 4, NA),
               "y" = list(0, -1),
               "z" = list(5, 10)),
             extra_combinations = list(
               list("x" = 4, "z" = 10),
               list("y" = 1, "z" = 10))
)

## End(Not run)

gxs_selection    Generate testthat expectations from selection

Description

[Experimental] Based on the selection (string of code), a set of testthat expect_* statements are generated.

Example: If the selected code is the name of a data.frame object, it will create an expect_equal test for each column, along with a test of the column names, types and classes, dimensions, grouping keys, etc.
See supported objects in details.

When testing a function that alters non-local variables, consider enabling `copy_env`.

Feel free to suggest useful tests etc. in a GitHub issue!

Addin: insertExpectationsAddin()

Usage

gxs_selection(
  selection,
  indentation = 0,
  tolerance = "1e-4",
  round_to_tolerance = TRUE,
  strip = TRUE,
  sample_n = 30,
  envir = NULL,
  copy_env = FALSE,
  assign_output = TRUE,
  seed = 42,
  test_id = NULL,
  add_wrapper_comments = TRUE,
  add_test_comments = TRUE,
  start_with_newline = TRUE,
  end_with_newline = TRUE,
  out = "insert"
)

Arguments

selection  String of code. (Character) E.g. "stop('This gives an expect_error test')".
indentation  Indentation of the selection. (Numeric)
tolerance  The tolerance for numeric tests as a string, like "1e-4". (Character)
round_to_tolerance  Whether to round numeric elements to the specified tolerance. (Logical) This is currently applied to numeric columns and vectors (excluding some lists).
strip  Whether to insert strip_msg() and strip() in tests of side effects. (Logical) Sometimes testthat tests have differences in punctuation and newlines on different systems. By stripping both the error message and the expected message of non-alphanumeric symbols, we can avoid such failed tests.
sample_n  The number of elements/rows to sample. Set to NULL to avoid sampling. Inserts smpl() in the generated tests when sampling was used. A seed is set internally, setting sample.kind as "Rounding" to ensure compatibility with R versions < 3.6.0. The order of the elements/rows is kept intact. No replacement is used, so no oversampling will take place. When testing a big data.frame, sampling the rows can help keep the test files somewhat readable.
envir  Environment to evaluate in. Defaults to parent.frame().
copy_env  Whether to work in a deep copy of the environment. (Logical) Side effects will be captured in copies of the copy, so two copies of the environment will exist at the same time.
Disabled by default to save memory but is often preferable to enable, e.g. when the function changes non-local variables.
assign_output  Whether to assign the output of a function call or long selection to a variable. This will avoid recalling the function and decrease cluttering. (Logical) Heuristic: when the `selection` isn't a string and contains a parenthesis, it is considered a function call. A selection with more than 30 characters will be assigned as well. The tests themselves can be more difficult to interpret, as you will have to look at the assignment to see the object that is being tested.
seed  seed to set. (Whole number)
test_id  Number to append to assignment names. (Whole number) For instance used to create the "output_" name: output_<test_id>.
add_wrapper_comments  Whether to add intro and outro comments. (Logical)
add_test_comments  Whether to add comments for each test. (Logical)
start_with_newline, end_with_newline  Whether to have a newline in the beginning/end. (Logical)
out  Either "insert" or "return". "insert" (Default): Inserts the expectations via rstudioapi::insertText(). "return": Returns the expectations in a list. These can be prepared for insertion with prepare_insertion().

Details

The following "types" are currently supported or intended to be supported in the future. Please suggest more types and tests in a GitHub issue!

Note: A set of fallback tests will be generated for unsupported objects.

Type          Supported   Notes
Side effects  Yes         Errors, warnings, and messages.
Vector        Yes         Lists are treated differently, depending on their structure.
Factor        Yes
Data Frame    Yes         List columns (like nested tibbles) are currently skipped.
Matrix        Yes         Supported but could be improved.
Formula       Yes
Function      Yes
NULL          Yes
Array         No
Dates         No          Base and lubridate.
ggplot2       No          This may be a challenge, but would be cool!

Value

Either NULL or the unprepared expectations as a character vector.
Author(s)

<NAME>, <<EMAIL>>

See Also

Other expectation generators: gxs_function(), initializeGXSFunctionAddin(), insertExpectationsAddin()

Examples

# Attach packages
library(xpectr)

## Not run:
df <- data.frame('a' = c(1, 2, 3), 'b' = c('t', 'y', 'u'), stringsAsFactors = FALSE)

gxs_selection("stop('This gives an expect_error test!')")
gxs_selection("warning('This gives a set of side effect tests!')")
gxs_selection("message('This also gives a set of side effect tests!')")
gxs_selection("stop('This: tests the -> punctuation!')", strip = FALSE)
gxs_selection("sum(1, 2, 3, 4)")
gxs_selection("df")

tests <- gxs_selection("df", out = "return")
for_insertion <- prepare_insertion(tests)
rstudioapi::insertText(for_insertion)

## End(Not run)

initializeGXSFunctionAddin    Initialize gxs_function() call

Description

[Experimental] Initializes the gxs_function() call with the arguments and default values of the selected function. See `Details` for how to set a key command.

Usage

initializeGXSFunctionAddin(selection = NULL, insert = TRUE, indentation = 0)

Arguments

selection  Name of function to test with gxs_function(). (Character) N.B. Mainly intended for testing the addin programmatically.
insert  Whether to insert the code via rstudioapi::insertText() or return it. (Logical) N.B. Mainly intended for testing the addin programmatically.
indentation  Indentation of the selection. (Numeric) N.B. Mainly intended for testing the addin programmatically.

Details

How: Parses and evaluates the selected code string within the parent environment. When the output is a function, it extracts the formals (arguments and default values) and creates the initial `args_values` for gxs_function(). When the output is not a function, it throws an error.

How to set up a key command in RStudio: After installing the package, go to: Tools >> Addins >> Browse Addins >> Keyboard Shortcuts. Find "Initialize gxs_function()" and press its field under Shortcut. Press the desired key command, e.g. Alt+F. Press Apply.
Press Execute.

Value

Inserts gxs_function() call for the selected function. Returns NULL invisibly.

Author(s)

<NAME>, <<EMAIL>>

See Also

Other expectation generators: gxs_function(), gxs_selection(), insertExpectationsAddin()

Other addins: assertCollectionAddin(), dputSelectedAddin(), initializeTestthatAddin(), insertExpectationsAddin(), navigateTestFileAddin(), wrapStringAddin()

initializeTestthatAddin    Initializes test_that() call

Description

[Experimental] Inserts code for calling testthat::test_that(). See `Details` for how to set a key command.

Usage

initializeTestthatAddin(insert = TRUE, indentation = NULL)

Arguments

insert  Whether to insert the code via rstudioapi::insertText() or return it. (Logical) N.B. Mainly intended for testing the addin programmatically.
indentation  Indentation of the code. (Numeric) N.B. Mainly intended for testing the addin programmatically.

Details

How to set up a key command in RStudio: After installing the package, go to: Tools >> Addins >> Browse Addins >> Keyboard Shortcuts. Find "Initialize test_that()" and press its field under Shortcut. Press the desired key command, e.g. Alt+T. Press Apply. Press Execute.

Value

Inserts code for calling testthat::test_that(). Returns NULL invisibly.

Author(s)

<NAME>, <<EMAIL>>

See Also

Other addins: assertCollectionAddin(), dputSelectedAddin(), initializeGXSFunctionAddin(), insertExpectationsAddin(), navigateTestFileAddin(), wrapStringAddin()

insertExpectationsAddin    Creates testthat tests for selected code

Description

[Experimental] Inserts relevant expect_* tests based on the evaluation of the selected code.

Example: If the selected code is the name of a data.frame object, it will create an expect_equal test for each column, along with a test of the column names.

Currently supports side effects (error, warnings, messages), data.frames, and vectors. List columns in data.frames (like nested tibbles) are currently skipped.

See `Details` for how to set a key command.
Usage

insertExpectationsAddin(
  selection = NULL,
  insert = TRUE,
  indentation = 0,
  copy_env = FALSE
)

insertExpectationsCopyEnvAddin(
  selection = NULL,
  insert = TRUE,
  indentation = 0,
  copy_env = TRUE
)

Arguments

selection  String of code. (Character) E.g. "stop('This gives an expect_error test')". N.B. Mainly intended for testing the addin programmatically.
insert  Whether to insert the expectations via rstudioapi::insertText() or return them. (Logical) N.B. Mainly intended for testing the addin programmatically.
indentation  Indentation of the selection. (Numeric) N.B. Mainly intended for testing the addin programmatically.
copy_env  Whether to work in a deep copy of the environment. (Logical) Side effects will be captured in copies of the copy, so two copies of the environment will exist at the same time. Disabled by default to save memory but is often preferable to enable, e.g. when the function changes non-local variables.

Details

How: Parses and evaluates the selected code string within the parent environment (or a deep copy thereof). Depending on the output, it creates a set of unit tests (like expect_equal(data[["column"]], c(1,2,3))), and inserts them instead of the selection.

How to set up a key command in RStudio: After installing the package, go to: Tools >> Addins >> Browse Addins >> Keyboard Shortcuts. Find "Insert Expectations" and press its field under Shortcut. Press the desired key command, e.g. Alt+E. Press Apply. Press Execute.

Value

Inserts testthat::expect_* unit tests for the selected code. Returns NULL invisibly.
Author(s)

<NAME>, <<EMAIL>>

See Also

Other expectation generators: gxs_function(), gxs_selection(), initializeGXSFunctionAddin()

Other addins: assertCollectionAddin(), dputSelectedAddin(), initializeGXSFunctionAddin(), initializeTestthatAddin(), navigateTestFileAddin(), wrapStringAddin()

navigateTestFileAddin    Navigates to test file

Description

[Experimental] RStudio Addin: Extracts file name and (possibly) line number of a test file from a selection or from clipboard content. Navigates to the file and places the cursor at the line number.

Supported types of strings: "test_x.R:3", "test_x.R#3", "test_x.R". The string must start with "test_" and contain ".R". It is split at either ":" or "#", with the second element (here "3") being interpreted as the line number.

See `Details` for how to set a key command.

Usage

navigateTestFileAddin(selection = NULL, navigate = TRUE, abs_path = TRUE)

Arguments

selection  String with file name and line number. (Character) E.g. "test_x.R:3:", which navigates to the third line of "/tests/testthat/test_x.R". N.B. Mainly intended for testing the addin programmatically.
navigate  Whether to navigate to the file or return the extracted file name and line number. (Logical) N.B. Mainly intended for testing the addin programmatically.
abs_path  Whether to return the full path or only the file name when `navigate` is FALSE. N.B. Mainly intended for testing the addin programmatically.

Details

How to set up a key command in RStudio: After installing the package, go to: Tools >> Addins >> Browse Addins >> Keyboard Shortcuts. Find "Go To Test File" and press its field under Shortcut. Press the desired key command, e.g. Alt+N. Press Apply. Press Execute.

Value

Navigates to file and line number. Does not return anything.
Author(s)

<NAME>, <<EMAIL>>

See Also

Other addins: assertCollectionAddin(), dputSelectedAddin(), initializeGXSFunctionAddin(), initializeTestthatAddin(), insertExpectationsAddin(), wrapStringAddin()

num_total_elements    Total number of elements

Description

[Experimental] Unlists `x` recursively and finds the total number of elements.

Usage

num_total_elements(x, deduplicated = FALSE)

Arguments

x  List with elements.
deduplicated  Whether to only count the unique elements. (Logical)

Details

Simple wrapper for length(unlist(x, recursive = TRUE, use.names = FALSE)).

Value

The total number of elements in `x`.

Author(s)

<NAME>, <<EMAIL>>

See Also

Other element descriptors: element_classes(), element_lengths(), element_types()

Examples

# Attach packages
library(xpectr)

l <- list(list(list(1, 2, 3), list(2, list(3, 2))),
          list(1, list(list(2, 4), list(7, 1, list(3, 8)))),
          list(list(2, 7, 8), list(10, 2, list(18, 1, 4))))

num_total_elements(l)
num_total_elements(l, deduplicated = TRUE)

prepare_insertion    Prepare expectations for insertion

Description

[Experimental] Collapses a list/vector of expectation strings and adds the specified indentation.

Usage

prepare_insertion(
  strings,
  indentation = 0,
  trim_left = FALSE,
  trim_right = FALSE
)

Arguments

strings  Expectation strings. (List or Character) As returned with gxs_* functions with out = "return".
indentation  Indentation to add. (Numeric)
trim_left  Whether to trim whitespaces from the beginning of the collapsed string. (Logical)
trim_right  Whether to trim whitespaces from the end of the collapsed string. (Logical)

Value

A string for insertion with rstudioapi::insertText().
Author(s) <NAME>, <<EMAIL>> Examples # Attach packages library(xpectr) ## Not run: df <- data.frame('a' = c(1, 2, 3), 'b' = c('t', 'y', 'u'), stringsAsFactors = FALSE) tests <- gxs_selection("df", out = "return") for_insertion <- prepare_insertion(tests) for_insertion rstudioapi::insertText(for_insertion) ## End(Not run) set_test_seed Set random seed for unit tests Description [Experimental] In order for tests to be compatible with R versions < 3.6.0, we set the sample.kind argument in set.seed() to "Rounding" when using R versions >= 3.6.0. Usage set_test_seed(seed = 42, ...) Arguments seed Random seed. ... Named arguments to set.seed(). Details Initially contributed by <NAME> (github: @rmsharp). Value NULL. Author(s) <NAME>, <<EMAIL>> <NAME> simplified_formals Extract and simplify a function’s formal arguments Description [Experimental] Extracts formals and formats them as an easily testable character vector. Usage simplified_formals(fn) Arguments fn Function. Value A character vector with the simplified formals. Author(s) <NAME>, <<EMAIL>> Examples # Attach packages library(xpectr) fn1 <- function(a = "x", b = NULL, c = NA, d){ paste0(a, b, c, d) } simplified_formals(fn1) smpl Random sampling Description [Experimental] Samples a vector, factor or data.frame. Useful to reduce size of testthat expect_* tests. Not intended for other purposes. Wraps sample.int(). data.frames are sampled row-wise. The seed is set within the function with sample.kind as "Rounding" for compatibility with R versions < 3.6.0. On exit, the random state is restored. Usage smpl(data, n, keep_order = TRUE, seed = 42) Arguments data Vector or data.frame. n Number of elements/rows to sample. N.B. No replacement is used, so n greater than the number of elements/rows in `data` will not perform oversampling. keep_order Whether to keep the order of the elements. (Logical) seed Seed to use. The seed is set with sample.kind = "Rounding" for compatibility with R versions < 3.6.0. 
Value When `data` has <=`n` elements, `data` is returned. Otherwise, `data` is sampled and returned. Author(s) <NAME>, <<EMAIL>> Examples # Attach packages library(xpectr) smpl(c(1,2,3,4,5), n = 3) smpl(data.frame("a" = c(1,2,3,4,5), "b" = c(2,3,4,5,6), stringsAsFactors = FALSE), n = 3) stop_if Simple side effect functions Description [Experimental] If the `condition` is TRUE, generate error/warning/message with the supplied message. Usage stop_if(condition, message = NULL, sys.parent.n = 0L) warn_if(condition, message = NULL, sys.parent.n = 0L) message_if(condition, message = NULL, sys.parent.n = 0L) Arguments condition The condition to check. (Logical) message Message. (Character) Note: If NULL, the `condition` will be used as message. sys.parent.n The number of generations to go back when calling the message function. Details When `condition` is FALSE, they return NULL invisibly. When `condition` is TRUE: stop_if(): Throws error with the supplied message. warn_if(): Throws warning with the supplied message. message_if(): Generates message with the supplied message. Value Returns NULL invisibly. Author(s) <NAME>, <<EMAIL>> Examples # Attach packages library(xpectr) ## Not run: a <- 0 stop_if(a == 0, "'a' cannot be 0.") warn_if(a == 0, "'a' was 0.") message_if(a == 0, "'a' was so kind to be 0.") ## End(Not run) strip Strip strings of non-alphanumeric characters Description [Experimental] 1. Removes any character that is not alphanumeric or a space. 2. (Disabled by default): Remove numbers. 3. Reduces multiple consecutive whitespaces to a single whitespace and trims ends. Can for instance be used to simplify error messages before checking them. Usage strip( strings, replacement = "", remove_spaces = FALSE, remove_numbers = FALSE, remove_ansi = TRUE, lowercase = FALSE, allow_na = TRUE ) Arguments strings vector of strings. (Character) replacement What to replace blocks of punctuation with. (Character) remove_spaces Whether to remove all whitespaces. 
(Logical) remove_numbers Whether to remove all numbers. (Logical) remove_ansi Whether to remove ANSI control sequences. (Logical) lowercase Whether to make the strings lowercase. (Logical) allow_na Whether to allow strings to contain NAs. (Logical) Details 1. ANSI control sequences are removed with fansi::strip_ctl(). 2. gsub("[^[:alnum:][:blank:]]", replacement, strings) 3. gsub('[0-9]+', '', strings) (Note: only if specified!) 4. trimws( gsub("[[:blank:]]+", " ", strings) ) (Or "" if remove_spaces is TRUE) Value The stripped strings. Author(s) <NAME>, <<EMAIL>> See Also Other strippers: strip_msg() Examples # Attach packages library(xpectr) strings <- c( "Hello! I am George. \n\rDon't call me Frank! 123", " \tAs that, is, not, my, name!" ) strip(strings) strip(strings, remove_spaces = TRUE) strip(strings, remove_numbers = TRUE) strip_msg Strip side-effect messages of non-alphanumeric characters and rethrow them Description [Experimental] Catches side effects (error, warnings, messages), strips the message strings of non-alphanumeric characters with strip() and regenerates them. When numbers in error messages vary slightly between systems (and this variation isn’t important to catch), we can strip the numbers as well. Use case: Sometimes testthat tests have differences in punctuation and newlines on different systems. By stripping both the error message and the expected message (with strip()), we can avoid such failed tests. Usage strip_msg( x, remove_spaces = FALSE, remove_numbers = FALSE, remove_ansi = TRUE, lowercase = FALSE ) Arguments x Code that potentially throws warnings, messages, or an error. remove_spaces Whether to remove all whitespaces. (Logical) remove_numbers Whether to remove all numbers. (Logical) remove_ansi Whether to remove ANSI control sequences. (Logical) lowercase Whether to make the strings lowercase. (Logical) Value Returns NULL invisibly. 
Author(s) <NAME>, <<EMAIL>> See Also Other strippers: strip() Examples # Attach packages library(xpectr) library(testthat) ## Not run: strip_msg(stop("this 'dot' .\n is removed! 123")) strip_msg(warning("this 'dot' .\n is removed! 123")) strip_msg(message("this 'dot' .\n is removed! 123")) strip_msg(message("this 'dot' .\n is removed! 123"), remove_numbers = TRUE) error_fn <- function(){stop("this 'dot' .\n is removed! 123")} strip_msg(error_fn()) # With testthat tests expect_error(strip_msg(error_fn()), strip("this 'dot' .\n is removed! 123")) expect_error(strip_msg(error_fn(), remove_numbers = TRUE), strip("this 'dot' .\n is removed! 123", remove_numbers = TRUE)) ## End(Not run) suppress_mw Suppress warnings and messages Description [Experimental] Run expression wrapped in both suppressMessages() and suppressWarnings(). Usage suppress_mw(expr) Arguments expr Any expression to run within suppressMessages() and suppressWarnings(). Details suppressWarnings(suppressMessages(expr)) Value The output of expr. Author(s) <NAME>, <<EMAIL>> Examples # Attach packages library(xpectr) fn <- function(a, b){ warning("a warning") message("a message") a + b } suppress_mw(fn(1, 5)) wrapStringAddin Wraps the selection with paste0 Description [Experimental] Splits the selection every n characters and inserts it in a paste0() call. See `Details` for how to set a key command. Usage wrapStringAddin( selection = NULL, indentation = 0, every_n = NULL, tolerance = 10, insert = TRUE ) Arguments selection String of code. (Character) N.B. Mainly intended for testing the addin programmatically. indentation Indentation of the selection. (Numeric) N.B. Mainly intended for testing the addin programmatically. every_n Number of characters per split. If NULL, the following is used to calculate the string width: max(min(80 - indentation, 70), 50) N.B. Strings shorter than every_n + tolerance will not be wrapped. tolerance Tolerance. Number of characters. 
We may prefer not to split a string that’s only a few characters too long. Strings shorter than every_n + tolerance will not be wrapped. insert Whether to insert the wrapped text via rstudioapi::insertText() or return it. (Logical) N.B. Mainly intended for testing the addin programmatically. Details How to set up a key command in RStudio: After installing the package, go to: Tools >> Addins >> Browse Addins >> Keyboard Shortcuts. Find "Wrap String with paste0" and press its field under Shortcut. Press the desired key command, e.g. Alt+P. Press Apply. Press Execute. Value Inserts the following (with newlines and correct indentation): paste0("first n chars", "next n chars") Returns NULL invisibly. Author(s) <NAME>, <<EMAIL>> See Also Other addins: assertCollectionAddin(), dputSelectedAddin(), initializeGXSFunctionAddin(), initializeTestthatAddin(), insertExpectationsAddin(), navigateTestFileAddin() xpectr xpectr: A package for generating tests for testthat unit testing Description A set of utilities and RStudio addins for generating tests. Author(s) <NAME>, <<EMAIL>>
@bem-react/eslint-plugin === An ESLint plugin with rules for projects based on [BEM React](https://github.com/bem/bem-react). Usage --- Add `@bem-react` to the plugins section of your `.eslintrc` configuration file: ``` { "plugins": ["@bem-react"] } ``` Then configure the rules you want to use under the rules section. ``` { "rules": { "@bem-react/no-classname-runtime": "warn", "@bem-react/whitelist-levels-imports": [ "error", { "defaultLevel": "common", "whiteList": { "common": ["common"], "desktop": ["common", "desktop"], "mobile": ["common", "mobile"] } } ] } } ``` Supported Rules --- Currently supported: * [whitelist-levels-imports](https://github.com/bem/bem-react/blob/HEAD/docs/rules/whitelist-levels-imports.md) * [no-classname-runtime](https://github.com/bem/bem-react/blob/HEAD/docs/rules/no-classname-runtime.md)
README [¶](#section-readme) --- ### goldmark [![](https://pkg.go.dev/badge/github.com/yuin/goldmark.svg)](https://pkg.go.dev/github.com/yuin/goldmark) [![](https://github.com/yuin/goldmark/workflows/test/badge.svg?branch=master&event=push)](https://github.com/yuin/goldmark/actions?query=workflow:test) [![](https://coveralls.io/repos/github/yuin/goldmark/badge.svg?branch=master)](https://coveralls.io/github/yuin/goldmark) [![](https://goreportcard.com/badge/github.com/yuin/goldmark)](https://goreportcard.com/report/github.com/yuin/goldmark) > A Markdown parser written in Go. Easy to extend, standards-compliant, well-structured. goldmark is compliant with CommonMark 0.30. #### Motivation I needed a Markdown parser for Go that satisfies the following requirements: * Easy to extend. + Markdown is poor in document expressions compared to other light markup languages such as reStructuredText. + We have extensions to the Markdown syntax, e.g. PHP Markdown Extra, GitHub Flavored Markdown. * Standards-compliant. + Markdown has many dialects. + GitHub-Flavored Markdown is widely used and is based upon CommonMark, effectively mooting the question of whether or not CommonMark is an ideal specification. - CommonMark is complicated and hard to implement. * Well-structured. + AST-based; preserves source position of nodes. * Written in pure Go. [golang-commonmark](https://gitlab.com/golang-commonmark/markdown) may be a good choice, but it seems to be a copy of [markdown-it](https://github.com/markdown-it). [blackfriday.v2](https://github.com/russross/blackfriday/tree/v2) is a fast and widely-used implementation, but is not CommonMark-compliant and cannot be extended from outside of the package, since its AST uses structs instead of interfaces. 
Furthermore, its behavior differs from other implementations in some cases, especially regarding lists: [Deep nested lists don't output correctly #329](https://github.com/russross/blackfriday/issues/329), [List block cannot have a second line #244](https://github.com/russross/blackfriday/issues/244), etc. This behavior sometimes causes problems. If you migrate your Markdown text from GitHub to blackfriday-based wikis, many lists will immediately be broken. As mentioned above, CommonMark is complicated and hard to implement, so Markdown parsers based on CommonMark are few and far between. #### Features * **Standards-compliant.** goldmark is fully compliant with the latest [CommonMark](https://commonmark.org/) specification. * **Extensible.** Do you want to add a `@username` mention syntax to Markdown? You can easily do so in goldmark. You can add your AST nodes, parsers for block-level elements, parsers for inline-level elements, transformers for paragraphs, transformers for the whole AST structure, and renderers. * **Performance.** goldmark's performance is on par with that of cmark, the CommonMark reference implementation written in C. * **Robust.** goldmark is tested with `go test --fuzz`. * **Built-in extensions.** goldmark ships with common extensions like tables, strikethrough, task lists, and definition lists. * **Depends only on standard libraries.** #### Installation ``` $ go get github.com/yuin/goldmark ``` #### Usage Import packages: ``` import ( "bytes" "github.com/yuin/goldmark" ) ``` Convert Markdown documents with the CommonMark-compliant mode: ``` var buf bytes.Buffer if err := goldmark.Convert(source, &buf); err != nil { panic(err) } ``` #### With options ``` var buf bytes.Buffer if err := goldmark.Convert(source, &buf, parser.WithContext(ctx)); err != nil { panic(err) } ``` | Functional option | Type | Description | | --- | --- | --- | | `parser.WithContext` | A `parser.Context` | Context for the parsing phase. 
| #### Context options | Functional option | Type | Description | | --- | --- | --- | | `parser.WithIDs` | A `parser.IDs` | `IDs` allows you to change the logic related to element ids (e.g. auto heading id generation). | #### Custom parser and renderer ``` import ( "bytes" "github.com/yuin/goldmark" "github.com/yuin/goldmark/extension" "github.com/yuin/goldmark/parser" "github.com/yuin/goldmark/renderer/html" ) md := goldmark.New( goldmark.WithExtensions(extension.GFM), goldmark.WithParserOptions( parser.WithAutoHeadingID(), ), goldmark.WithRendererOptions( html.WithHardWraps(), html.WithXHTML(), ), ) var buf bytes.Buffer if err := md.Convert(source, &buf); err != nil { panic(err) } ``` | Functional option | Type | Description | | --- | --- | --- | | `goldmark.WithParser` | `parser.Parser` | This option must be passed before `goldmark.WithParserOptions` and `goldmark.WithExtensions` | | `goldmark.WithRenderer` | `renderer.Renderer` | This option must be passed before `goldmark.WithRendererOptions` and `goldmark.WithExtensions` | | `goldmark.WithParserOptions` | `...parser.Option` | | | `goldmark.WithRendererOptions` | `...renderer.Option` | | | `goldmark.WithExtensions` | `...goldmark.Extender` | | #### Parser and Renderer options ##### Parser options | Functional option | Type | Description | | --- | --- | --- | | `parser.WithBlockParsers` | A `util.PrioritizedSlice` whose elements are `parser.BlockParser` | Parsers for parsing block level elements. | | `parser.WithInlineParsers` | A `util.PrioritizedSlice` whose elements are `parser.InlineParser` | Parsers for parsing inline level elements. | | `parser.WithParagraphTransformers` | A `util.PrioritizedSlice` whose elements are `parser.ParagraphTransformer` | Transformers for transforming paragraph nodes. | | `parser.WithASTTransformers` | A `util.PrioritizedSlice` whose elements are `parser.ASTTransformer` | Transformers for transforming an AST. | | `parser.WithAutoHeadingID` | `-` | Enables auto heading ids. 
| | `parser.WithAttribute` | `-` | Enables custom attributes. Currently only headings support attributes. | ##### HTML Renderer options | Functional option | Type | Description | | --- | --- | --- | | `html.WithWriter` | `html.Writer` | `html.Writer` for writing contents to an `io.Writer`. | | `html.WithHardWraps` | `-` | Render newlines as `<br>`. | | `html.WithXHTML` | `-` | Render as XHTML. | | `html.WithUnsafe` | `-` | By default, goldmark does not render raw HTML or potentially dangerous links. With this option, goldmark renders such content as written. | ##### Built-in extensions * `extension.Table` + [GitHub Flavored Markdown: Tables](https://github.github.com/gfm/#tables-extension-) * `extension.Strikethrough` + [GitHub Flavored Markdown: Strikethrough](https://github.github.com/gfm/#strikethrough-extension-) * `extension.Linkify` + [GitHub Flavored Markdown: Autolinks](https://github.github.com/gfm/#autolinks-extension-) * `extension.TaskList` + [GitHub Flavored Markdown: Task list items](https://github.github.com/gfm/#task-list-items-extension-) * `extension.GFM` + This extension enables Table, Strikethrough, Linkify and TaskList. + This extension does not filter tags defined in [6.11: Disallowed Raw HTML (extension)](https://github.github.com/gfm/#disallowed-raw-html-extension-). If you need to filter HTML tags, see [Security](#readme-security). + If you need to parse github emojis, you can use the [goldmark-emoji](https://github.com/yuin/goldmark-emoji) extension. * `extension.DefinitionList` + [PHP Markdown Extra: Definition lists](https://michelf.ca/projects/php-markdown/extra/#def-list) * `extension.Footnote` + [PHP Markdown Extra: Footnotes](https://michelf.ca/projects/php-markdown/extra/#footnotes) * `extension.Typographer` + This extension substitutes punctuation with typographic entities like [smartypants](https://daringfireball.net/projects/smartypants/). * `extension.CJK` + This extension is a shortcut for CJK related functionalities. 
##### Attributes The `parser.WithAttribute` option allows you to define attributes on some elements. Currently only headings support attributes. **Attributes are being discussed in the [CommonMark forum](https://talk.commonmark.org/t/consistent-attribute-syntax/272). This syntax may possibly change in the future.** ###### Headings ``` ## heading ## {#id .className attrName=attrValue class="class1 class2"} ## heading {#id .className attrName=attrValue class="class1 class2"} ``` ``` heading {#id .className attrName=attrValue} === ``` ##### Table extension The Table extension implements [Table(extension)](https://github.github.com/gfm/#tables-extension-), as defined in [GitHub Flavored Markdown Spec](https://github.github.com/gfm/). The spec is defined for XHTML, so it uses some attributes that are deprecated in HTML5. You can override the alignment rendering method via options. | Functional option | Type | Description | | --- | --- | --- | | `extension.WithTableCellAlignMethod` | `extension.TableCellAlignMethod` | Indicates how table cells are aligned. | ##### Typographer extension The Typographer extension translates plain ASCII punctuation characters into typographic-punctuation HTML entities. 
Default substitutions are: | Punctuation | Default entity | | --- | --- | | `'` | `&lsquo;`, `&rsquo;` | | `"` | `&ldquo;`, `&rdquo;` | | `--` | `&ndash;` | | `---` | `&mdash;` | | `...` | `&hellip;` | | `<<` | `&laquo;` | | `>>` | `&raquo;` | You can override the default substitutions via `extensions.WithTypographicSubstitutions`: ``` markdown := goldmark.New( goldmark.WithExtensions( extension.NewTypographer( extension.WithTypographicSubstitutions(extension.TypographicSubstitutions{ extension.LeftSingleQuote: []byte("&sbquo;"), extension.RightSingleQuote: nil, // nil disables a substitution }), ), ), ) ``` ##### Linkify extension The Linkify extension implements [Autolinks(extension)](https://github.github.com/gfm/#autolinks-extension-), as defined in [GitHub Flavored Markdown Spec](https://github.github.com/gfm/). Since the spec does not define details about URLs, there are numerous ambiguous cases. You can override autolinking patterns via options. | Functional option | Type | Description | | --- | --- | --- | | `extension.WithLinkifyAllowedProtocols` | `[][]byte` | List of allowed protocols such as `[][]byte{ []byte("http:") }` | | `extension.WithLinkifyURLRegexp` | `*regexp.Regexp` | Regexp that defines URLs, including protocols | | `extension.WithLinkifyWWWRegexp` | `*regexp.Regexp` | Regexp that defines URL starting with `www.`. 
This pattern corresponds to [the extended www autolink](https://github.github.com/gfm/#extended-www-autolink) | | `extension.WithLinkifyEmailRegexp` | `*regexp.Regexp` | Regexp that defines email addresses | Example, using [xurls](https://github.com/mvdan/xurls): ``` import "mvdan.cc/xurls/v2" markdown := goldmark.New( goldmark.WithRendererOptions( html.WithXHTML(), html.WithUnsafe(), ), goldmark.WithExtensions( extension.NewLinkify( extension.WithLinkifyAllowedProtocols([][]byte{ []byte("http:"), []byte("https:"), }), extension.WithLinkifyURLRegexp( xurls.Strict, ), ), ), ) ``` ##### Footnotes extension The Footnote extension implements [PHP Markdown Extra: Footnotes](https://michelf.ca/projects/php-markdown/extra/#footnotes). This extension has some options: | Functional option | Type | Description | | --- | --- | --- | | `extension.WithFootnoteIDPrefix` | `[]byte` | a prefix for the id attributes. | | `extension.WithFootnoteIDPrefixFunction` | `func(gast.Node) []byte` | a function that determines the id attribute for a given Node. | | `extension.WithFootnoteLinkTitle` | `[]byte` | an optional title attribute for footnote links. | | `extension.WithFootnoteBacklinkTitle` | `[]byte` | an optional title attribute for footnote backlinks. | | `extension.WithFootnoteLinkClass` | `[]byte` | a class for footnote links. This defaults to `footnote-ref`. | | `extension.WithFootnoteBacklinkClass` | `[]byte` | a class for footnote backlinks. This defaults to `footnote-backref`. | | `extension.WithFootnoteBacklinkHTML` | `[]byte` | the HTML content for footnote backlinks. This defaults to `&#x21a9;&#xfe0e;`. | Some options can have special substitutions. Occurrences of “^^” in the string will be replaced by the corresponding footnote number in the HTML output. Occurrences of “%%” will be replaced by a number for the reference (footnotes can have multiple references). 
`extension.WithFootnoteIDPrefix` and `extension.WithFootnoteIDPrefixFunction` are useful if you have multiple Markdown documents displayed inside one HTML document, to avoid footnote ids clashing with each other. `extension.WithFootnoteIDPrefix` sets a fixed id prefix, so you may write code like the following: ``` for _, path := range files { source := readAll(path) prefix := getPrefix(path) markdown := goldmark.New( goldmark.WithExtensions( NewFootnote( WithFootnoteIDPrefix([]byte(path)), ), ), ) var b bytes.Buffer err := markdown.Convert(source, &b) if err != nil { t.Error(err.Error()) } } ``` `extension.WithFootnoteIDPrefixFunction` determines an id prefix by calling the given function, so you may write code like the following: ``` markdown := goldmark.New( goldmark.WithExtensions( NewFootnote( WithFootnoteIDPrefixFunction(func(n gast.Node) []byte { v, ok := n.OwnerDocument().Meta()["footnote-prefix"] if ok { return util.StringToReadOnlyBytes(v.(string)) } return nil }), ), ), ) for _, path := range files { source := readAll(path) var b bytes.Buffer doc := markdown.Parser().Parse(text.NewReader(source)) doc.Meta()["footnote-prefix"] = getPrefix(path) err := markdown.Renderer().Render(&b, source, doc) } ``` You can use [goldmark-meta](https://github.com/yuin/goldmark-meta) to define an id prefix in the markdown document: ``` --- title: document title slug: article1 footnote-prefix: article1 --- # My article ``` ##### CJK extension CommonMark gives a high priority to compatibility with the original Markdown, which was designed by Westerners, so it lacks considerations for languages like CJK. This extension provides additional options for CJK users. | Functional option | Type | Description | | --- | --- | --- | | `extension.WithEastAsianLineBreaks` | `-` | Soft line breaks are rendered as a newline. Some East Asian users will see it as an unnecessary space. With this option, soft line breaks between East Asian wide characters will be ignored. 
| | `extension.WithEscapedSpace` | `-` | Without spaces around an emphasis started with East Asian punctuation, it is not interpreted as an emphasis (as defined in the CommonMark spec). With this option, you can avoid this inconvenient behavior by putting 'not rendered' spaces around an emphasis like `太郎は\ **「こんにちわ」**\ といった`. | #### Security By default, goldmark does not render raw HTML or potentially-dangerous URLs. If you need to gain more control over untrusted contents, it is recommended that you use an HTML sanitizer such as [bluemonday](https://github.com/microcosm-cc/bluemonday). #### Benchmark You can run this benchmark in the `_benchmark` directory. ##### against other golang libraries blackfriday v2 seems to be the fastest, but as it is not CommonMark compliant, its performance cannot be directly compared to that of the CommonMark-compliant libraries. goldmark, meanwhile, builds a clean, extensible AST structure, achieves full compliance with CommonMark, and consumes less memory, all while being reasonably fast. * MBP 2019 13″(i5, 16GB), Go1.17 ``` BenchmarkMarkdown/Blackfriday-v2-8 302 3743747 ns/op 3290445 B/op 20050 allocs/op BenchmarkMarkdown/GoldMark-8 280 4200974 ns/op 2559738 B/op 13435 allocs/op BenchmarkMarkdown/CommonMark-8 226 5283686 ns/op 2702490 B/op 20792 allocs/op BenchmarkMarkdown/Lute-8 12 92652857 ns/op 10602649 B/op 40555 allocs/op BenchmarkMarkdown/GoMarkdown-8 13 81380167 ns/op 2245002 B/op 22889 allocs/op ``` ##### against cmark (CommonMark reference implementation written in C) * MBP 2019 13″(i5, 16GB), Go1.17 ``` --- cmark --- file: _data.md iteration: 50 average: 0.0044073057 sec --- goldmark --- file: _data.md iteration: 50 average: 0.0041611990 sec ``` As you can see, goldmark's performance is on par with cmark's. #### Extensions * [goldmark-meta](https://github.com/yuin/goldmark-meta): A YAML metadata extension for the goldmark Markdown parser. 
* [goldmark-highlighting](https://github.com/yuin/goldmark-highlighting): A syntax-highlighting extension for the goldmark markdown parser. * [goldmark-emoji](https://github.com/yuin/goldmark-emoji): An emoji extension for the goldmark Markdown parser. * [goldmark-mathjax](https://github.com/litao91/goldmark-mathjax): Mathjax support for the goldmark markdown parser * [goldmark-pdf](https://github.com/stephenafamo/goldmark-pdf): A PDF renderer that can be passed to `goldmark.WithRenderer()`. * [goldmark-hashtag](https://github.com/abhinav/goldmark-hashtag): Adds support for `#hashtag`-based tagging to goldmark. * [goldmark-wikilink](https://github.com/abhinav/goldmark-wikilink): Adds support for `[[wiki]]`-style links to goldmark. * [goldmark-anchor](https://github.com/abhinav/goldmark-anchor): Adds anchors (permalinks) next to all headers in a document. * [goldmark-figure](https://github.com/mangoumbrella/goldmark-figure): Adds support for rendering paragraphs starting with an image to `<figure>` elements. * [goldmark-frontmatter](https://github.com/abhinav/goldmark-frontmatter): Adds support for YAML, TOML, and custom front matter to documents. * [goldmark-toc](https://github.com/abhinav/goldmark-toc): Adds support for generating tables-of-contents for goldmark documents. * [goldmark-mermaid](https://github.com/abhinav/goldmark-mermaid): Adds support for rendering [Mermaid](https://mermaid-js.github.io/mermaid/) diagrams in goldmark documents. * [goldmark-pikchr](https://github.com/jchenry/goldmark-pikchr): Adds support for rendering [Pikchr](https://pikchr.org/home/doc/trunk/homepage.md) diagrams in goldmark documents. * [goldmark-embed](https://github.com/13rac1/goldmark-embed): Adds support for rendering embeds from YouTube links. * [goldmark-latex](https://github.com/soypat/goldmark-latex): A $\LaTeX$ renderer that can be passed to `goldmark.WithRenderer()`. 
* [goldmark-fences](https://github.com/stefanfritsch/goldmark-fences): Support for pandoc-style [fenced divs](https://pandoc.org/MANUAL.html#divs-and-spans) in goldmark. * [goldmark-d2](https://github.com/FurqanSoftware/goldmark-d2): Adds support for [D2](https://d2lang.com/) diagrams. * [goldmark-katex](https://github.com/FurqanSoftware/goldmark-katex): Adds support for [KaTeX](https://katex.org/) math and equations. * [goldmark-img64](https://github.com/tenkoh/goldmark-img64): Adds support for embedding images into the document as DataURL (base64 encoded). #### goldmark internal (for extension developers) ##### Overview goldmark's Markdown processing is outlined in the diagram below. ``` <Markdown in []byte, parser.Context> | V +--- parser.Parser --- | 1. Parse block elements into AST | 1. If a parsed block is a paragraph, apply | ast.ParagraphTransformer | 2. Traverse AST and parse blocks. | 1. Process delimiters(emphasis) at the end of | block parsing | 3. Apply parser.ASTTransformers to AST | V <ast.Node> | V +--- renderer.Renderer --- | 1. Traverse AST and apply the renderer.NodeRenderer | corresponding to the node type | V <Output> ``` ##### Parsing Markdown documents are read through the `text.Reader` interface. AST nodes do not have concrete text. AST nodes have segment information of the documents, represented by `text.Segment` . `text.Segment` has 3 attributes: `Start`, `End`, `Padding` . (TBC) **TODO** See the `extension` directory for examples of extensions. Summary: 1. Define an AST Node as a struct in which `ast.BaseBlock` or `ast.BaseInline` is embedded. 2. Write a parser that implements `parser.BlockParser` or `parser.InlineParser`. 3. Write a renderer that implements `renderer.NodeRenderer`. 4. Define your goldmark extension that implements `goldmark.Extender`. 
#### Donation BTC: 1NEDSyUmo4SMTDP83JJQSWi1MvQUGGNMZB #### License MIT #### Author <NAME> Documentation [¶](#section-documentation) --- ### Overview [¶](#pkg-overview) Package goldmark implements functions to convert markdown text to a desired format. ### Index [¶](#pkg-index) * [func Convert(source []byte, w io.Writer, opts ...parser.ParseOption) error](#Convert) * [func DefaultParser() parser.Parser](#DefaultParser) * [func DefaultRenderer() renderer.Renderer](#DefaultRenderer) * [type Extender](#Extender) * [type Markdown](#Markdown) * + [func New(options ...Option) Markdown](#New) * [type Option](#Option) * + [func WithExtensions(ext ...Extender) Option](#WithExtensions) + [func WithParser(p parser.Parser) Option](#WithParser) + [func WithParserOptions(opts ...parser.Option) Option](#WithParserOptions) + [func WithRenderer(r renderer.Renderer) Option](#WithRenderer) + [func WithRendererOptions(opts ...renderer.Option) Option](#WithRendererOptions) ### Constants [¶](#pkg-constants) This section is empty. ### Variables [¶](#pkg-variables) This section is empty. ### Functions [¶](#pkg-functions) #### func [Convert](https://github.com/yuin/goldmark/blob/v1.5.6/markdown.go#L30) [¶](#Convert) ``` func Convert(source []byte, w io.Writer, opts ...parser.ParseOption) error ``` Convert interprets a UTF-8 bytes source in Markdown and writes rendered contents to a writer w. #### func [DefaultParser](https://github.com/yuin/goldmark/blob/v1.5.6/markdown.go#L14) [¶](#DefaultParser) ``` func DefaultParser() parser.Parser ``` DefaultParser returns a new Parser that is configured by default values. 
#### func [DefaultRenderer](https://github.com/yuin/goldmark/blob/v1.5.6/markdown.go#L22) [¶](#DefaultRenderer) ``` func DefaultRenderer() renderer.Renderer ``` DefaultRenderer returns a new Renderer that is configured by default values. ### Types [¶](#pkg-types) #### type [Extender](https://github.com/yuin/goldmark/blob/v1.5.6/markdown.go#L137) [¶](#Extender) ``` type Extender interface { // Extend extends the Markdown. Extend(Markdown) } ``` An Extender interface is used for extending Markdown. #### type [Markdown](https://github.com/yuin/goldmark/blob/v1.5.6/markdown.go#L36) [¶](#Markdown) ``` type Markdown interface { // Convert interprets a UTF-8 bytes source in Markdown and write rendered // contents to a writer w. Convert(source []byte, writer io.Writer, opts ...parser.ParseOption) error // Parser returns a Parser that will be used for conversion. Parser() parser.Parser // SetParser sets a Parser to this object. SetParser(parser.Parser) // Renderer returns a Renderer that will be used for conversion. Renderer() renderer.Renderer // SetRenderer sets a Renderer to this object. SetRenderer(renderer.Renderer) } ``` A Markdown interface offers functions to convert Markdown text to a desired format. 
#### func [New](https://github.com/yuin/goldmark/blob/v1.5.6/markdown.go#L99) [¶](#New) ``` func New(options ...[Option](#Option)) [Markdown](#Markdown) ``` New returns a new Markdown with given options. #### type [Option](https://github.com/yuin/goldmark/blob/v1.5.6/markdown.go#L55) [¶](#Option) ``` type Option func(*markdown) ``` Option is a functional option type for Markdown objects. #### func [WithExtensions](https://github.com/yuin/goldmark/blob/v1.5.6/markdown.go#L58) [¶](#WithExtensions) ``` func WithExtensions(ext ...[Extender](#Extender)) [Option](#Option) ``` WithExtensions adds extensions. #### func [WithParser](https://github.com/yuin/goldmark/blob/v1.5.6/markdown.go#L65) [¶](#WithParser) ``` func WithParser(p [parser](/github.com/yuin/[email protected]/parser).[Parser](/github.com/yuin/[email protected]/parser#Parser)) [Option](#Option) ``` WithParser allows you to override the default parser. #### func [WithParserOptions](https://github.com/yuin/goldmark/blob/v1.5.6/markdown.go#L72) [¶](#WithParserOptions) ``` func WithParserOptions(opts ...[parser](/github.com/yuin/[email protected]/parser).[Option](/github.com/yuin/[email protected]/parser#Option)) [Option](#Option) ``` WithParserOptions applies options for the parser. #### func [WithRenderer](https://github.com/yuin/goldmark/blob/v1.5.6/markdown.go#L79) [¶](#WithRenderer) ``` func WithRenderer(r [renderer](/github.com/yuin/[email protected]/renderer).[Renderer](/github.com/yuin/[email protected]/renderer#Renderer)) [Option](#Option) ``` WithRenderer allows you to override the default renderer. #### func [WithRendererOptions](https://github.com/yuin/goldmark/blob/v1.5.6/markdown.go#L86) [¶](#WithRendererOptions) ``` func WithRendererOptions(opts ...[renderer](/github.com/yuin/[email protected]/renderer).[Option](/github.com/yuin/[email protected]/renderer#Option)) [Option](#Option) ``` WithRendererOptions applies options for the renderer.
# The Series ## TypeScript Accelerated Like something traditional? A concise, clear introduction to TypeScript. An innovative approach to learning programming. The App has over 50 flash cards and tracks your fluency at each individually. When you have a few minutes spare the App allows you to quickly drill flash cards, focusing on those you are struggling with. ## Elm Accelerated Elm is an extraordinary language and ecosystem for building complex front end web applications. While Elm takes ideas from the academic world of functional programming, it applies them in a pragmatic way to allow you to create web apps which not only never crash but are also amazingly maintainable. Out of the box you get an amazing compiler (which is more like a hyper intelligent pair programmer), a React-like virtual DOM, a Redux-like way of modeling state (but with so much less boilerplate and much more help from tooling). While Elm is incredibly solid and ready for production use (I've used it on two substantial, shipped projects) it is a bit lacking in learning resources. Elm Accelerated is a set of resources including an App, Book and more. The App, via flash cards, tracks your fluency on various topics and offers quick access to explanations for anything you are struggling with. Like something traditional? A concise, clear introduction to Elm and its core libraries. An innovative approach to learning programming. The App has over one hundred flash cards and tracks your fluency at each. It includes the full book text so you can look up anything you are struggling with. Don't just learn Elm, get fluent at it. ## The Audio Book A 10 or so minute summary of the key points from Elm Accelerated. Set to repeat? ## Even More Do a quick online Elm quiz or see sample chapters and resources. Quiz, sample chapters and resources on Elm The primary form of TypeScript Accelerated is the App. Out now for iPhone and iPad. It is an almost impossible task to write a great book on programming. 
It requires not just skill and effort on the part of the author, but a rare alignment of experience, aptitude and interest with the reader. A dry reference is possible but of limited value. A tutorial is unlikely to be pitched at the right level. This book tries something different in its goal of teaching TypeScript, in that it is not primarily a book. It is primarily a Flash Card App. It focuses on the most useful details. You absolutely should work through real examples, but these should come from the demands of your own project(s) or, failing that, in a structured, challenge-based way (I recommend sources for these). It targets fluency not superficial knowledge, but includes the hard parts and tools to make learning them reasonable. I want to make it possible for you to learn TypeScript more quickly and more thoroughly than has otherwise been possible. Ideally you would already know some Javascript. The focus will be on the features TypeScript adds to Javascript. However if you know languages like Java or C# you shouldn't have too much trouble. ### TypeScript Javascript is the most popular programming language in the world by most measures. It has incredible reach: from webpages to servers, to native mobile apps and IoT devices. It is easy (at least to get started). It is surprisingly performant. It is flexible. And it is also (in some respects) terrible, constrained by its legacy and superficially worse than most programming languages (apart from PHP). But it is getting better. The language has rapidly evolved. It has incredible frameworks for user interfaces, testing, web servers and data visualisation. It has an incredible range of open source libraries and frameworks (all easily accessible via NPM). And on top of its flawed foundations tooling has addressed many of the worst issues. This is where TypeScript comes in.
TypeScript provides additional syntax on top of Javascript, allowing us to declare types and use features like generics and annotations that you may have seen in other languages. But it does this in a very Javascripty way; indeed working with TypeScript is likely to give you a clearer understanding of Javascript's type system. With this extra information development becomes easier and safer as our editor or IDE is more able to give us information or suggest completions. It also makes it easier to work with other people (or to work with our own code from a few months ago). And it works with 'legacy' Javascript code, as we can use type definitions to help (even where the code is vanilla Javascript). Indeed TypeScript's approach is by far the most popular and type definitions are available for pretty much all popular Javascript libraries. Technically TypeScript is an optional typing system, but in practice it encourages us to statically type the great majority of our code and allows us the escape hatch of dynamic typing when it is needed. This book consists of lots of concise chapters, each with a small number of core ideas. Most focus on a topic and give a concise but clear description. Ideally you would mainly use the Flash Card App to drive learning from this book, looking up things you are struggling with. On the Accelerated website you will find more resources on TypeScript and other technologies. For more comprehensive documentation see The Official TypeScript Documentation. This book/App focuses on the core things you need to know to be productive, but we leave out some minor or less used parts of the language. In the next chapter we look at how to set up TypeScript. You might actually want to skip ahead and come back to this later, but I think it is important to see how easily TypeScript can be added to a wide variety of projects.
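As a small taste of that extra syntax before we dive in, here is a minimal sketch (the function and its names are illustrative, not taken from the book):

```typescript
// A plain Javascript function with TypeScript annotations added.
// Calls like greet(42) or greet("TS", "twice") are now rejected at compile time.
function greet(name: string, times: number = 1): string {
  return Array(times).fill(`Hello, ${name}!`).join(" ");
}

greet("TypeScript"); // "Hello, TypeScript!"
greet("TypeScript", 2); // "Hello, TypeScript! Hello, TypeScript!"
```

The annotations compile away entirely: the emitted Javascript is just the function without the types.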
## Functions ## Function Types Functions, again, build on modern Javascript, offering support for typing of regular and arrow functions and default arguments. For example ``` function reverse(s: string): string { return s.split("").reverse().join(""); } reverse("java"); // "avaj" const f = (x: number, y: number) => x + y; f(4,5); // 9 ``` It is typically reasonable to define types for all the input parameters to a function but let TypeScript infer the output type(s), at least in simple cases like the above. ``` function reverse(s: string) { return s.split("").reverse().join(""); } ``` ## Advanced Function Typing For default arguments we just add `= value` . ``` const f = (x: number, y: number = 1) => x + y; f(2); // 3 ``` We can also have an arbitrary number of same-typed parameters (rest parameters) with a `...` before the final declared parameter. Like: ``` const g = (...ns: number[]) => ns.sort(); ``` Notice how the type is an array type. This allows you to do things that you might do with the horrendous-yet-flexible `arguments` pseudo-keyword in Javascript. ## Types We have seen basic types for primitives, collections and functions. And this is what you should try to use as much as possible. But TypeScript has some fancier types and ways of combining types that allow us to be more specific, more flexible or (if necessary) effectively ignore typing. ## Empty and Nullable Types `null` was called a billion dollar mistake by its creator (almost certainly a massive underestimate). Javascript unusually has (effectively) two different `null` values: `null` and `undefined` . TypeScript allows you to use both of these as types or use `void` to mean either. If a function doesn't return a value it returns `undefined` (this is a Javascript thing). These types aren't very useful on their own (at least to declare variables). They may be the return type of a function, although there we are likely just to leave off the type.
Where it gets more interesting is via nullable types, when a value may also be `null` or `undefined` . For example a declaration like `let s: string | null` . Now TypeScript will try to ensure we handle the case where `s` is a string or `null` . This is an example of a union type, which we will return to later. ## any Type TypeScript offers `any` as an escape hatch. If a variable is of type `any` then TypeScript will assume you know what you are doing. Generally you should use these as little as possible. It will make your code much harder to reason about and very difficult to maintain or change. Your colleagues or future self might not forgive you for it. ## Type Assertions The other common way to tell TypeScript that you know better than it does is with a type assertion. There are two syntaxes for this, using angle brackets and `as` . ``` let letters = (<string>something).split("") let letters = (something as string).split("") ``` ## never Type You can explicitly declare a `never` return type for functions that never return (e.g. throw exceptions or have infinite loops). ## Type Aliases We can give a name to a type by using `type` : ``` type Label = string; type Result = [ boolean, string ]; ``` Right now this isn't so useful but it is standard for both object types and for union types, the two things we are about to look at. ## Object Types We can define object types by declaring each property wrapped in curly brackets (so it looks like the object itself). For example: ``` type Name = { first: string, last: string }; type Person = { name: Name, age: number, partnerName?: Name }; let me: Name = { first: "James", last: "Porter" }; me.last; // "Porter" ``` As you can see it is possible to nest these. We can also have nullable fields by using a `?` . ## Union Types When a variable might be of multiple types we can use union types with a `|` to separate options. We have already seen `string | null` . But we can actually be more specific.
For example we might model state of part of an app with: ``` type State = "loading" | "loaded" | "error"; ``` Now Typescript will check that the value is actually one of these three specific strings (this is a string literal type, but you can also have literals of the other primitive types). This can be used within more complex types, like Objects or Tuples. Union types become really useful when the objects can be distinguished, for example if each type has a fixed value for one property. Then we can do `switch` statements over the object allowing for very clear, concise and safe code (it is close to pattern matching in functional programming languages). This is called a discriminated union. For example: ``` type AppState = { state: "loading" } | { state: "loaded", data: string } | { state: "error", error: string } const printState = (s: AppState) => { switch(s.state) { case "loading": return "Loading"; case "error": return `Error: ${s.error}`; case "loaded": return `Loaded this data: ${s.data}`; } } ``` ## Generics Javascript will happily work with very different kinds of variables, often converting them to what it thinks you mean (sometimes with almost pathologically terrible outcomes). In TypeScript we try to be more specific so as to avoid bugs and make refactoring more straightforward. But we would still like to be able to write general purpose code. Generics is a key tool for accomplishing this. Let's consider how we might create a `plus` function. We can do ``` function plus(x: any, y:any):any { return x + y } ``` Indeed this is effectively the 'Javascript' approach. But we would typically want to only add two things together if they are of the same type. The syntax is simple (indeed the same as many other languages such as Java). ``` function plus<T>(x: T, y:T):T { return x + y } ``` Now we can use plus to add any two things of the same kind together. While this might still not work, it is a lot better than vanilla Javascript and will catch many errors. 
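The same mechanism works for any helper, not just arithmetic; a hypothetical sketch (names not from the book) shows the type parameter flowing from argument to result:

```typescript
// first returns the head of an array; T is inferred from the argument,
// so the result of first([1, 2, 3]) is typed as number, not any.
// The | undefined covers the empty-array case.
function first<T>(items: T[]): T | undefined {
  return items[0];
}

const n = first([1, 2, 3]); // inferred as number | undefined
const s = first(["a", "b"]); // inferred as string | undefined
```

Contrast this with an `any`-based version, which would happily let you treat the result of `first([1, 2, 3])` as a string.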
Indeed TypeScript actually treats the regular `+` operation in this way. Note that you can't (without weird hacks) have generic arrow functions. (A potential point of confusion: Flow does allow this.) The syntax is also the same as we saw before for asserting types. ``` let letters = (<string>something).split("") ``` Okay so where might we actually create Generics? Actually, unless you are creating a library you may find that you are mostly a consumer of generic code. Using it for operations on collections or, what we see in the next chapter, when extending classes. For example one way of declaring arrays is: `Array<T>` . # Accelerated website # The Official TypeScript Documentation # Guide to TypeScript Declaration Files # ts-node # TypeScript Playground # Guide to differences between Flow and TypeScript Date: 2018-04-10 The primary version of Elm Accelerated is the App. Currently the App is out on iOS. # Quiz Let's rate your Elm knowledge. Once you have used the Elm Accelerated App for a while you should be able to answer all of these (and similar) questions with relative ease. It is an almost impossible task to write a great book on programming. It requires not just skill and effort on the part of the author, but a rare alignment of experience, aptitude and interest with the reader. A dry reference is possible but of limited value. A tutorial is unlikely to be pitched at the right level. This book tries something different in its goal of teaching the core of the Elm programming language and ecosystem. It is not primarily a book. It is (available as) a Flash Card App and has associated resources, quizzes, audio content and more online. It focuses on the most useful details. You absolutely should work through real examples, but these should come from the demands of your own project or, failing that, in a structured, challenge-based way (I recommend sources for these).
It targets fluency not superficial knowledge, but includes the hard parts and tools to make learning them reasonable. I want to make it possible for you to learn Elm more quickly and more thoroughly than is otherwise possible. You should already know at least one mainstream programming language. But you may not be familiar with functional programming or have studied computer science formally. If you have never programmed before this is a terrible place to start. If you are familiar with functional programming, particularly ML languages, you can probably skip or skim the more conceptual parts. ### Elm Often programmers will talk of trade offs between languages. That one 'Serious' language is hard, verbose but performant and that other little language is easy, quick, dirty but slow. (I have examples in mind, but sharing them seems ill advised!) Don't believe them. It is 2018 and you should demand all the things. Languages like Elm, Kotlin and Swift reject these kinds of false choices and give us concise, clear code, performance, safety and clarity. They are simply better than many common alternatives. Elm is the friendliest language I've ever encountered and has great tools today. It combines theory with pragmatism, looks beautiful and is remarkably useful in the right context. As long as you want to create applications in a web browser it is likely to be a great choice, particularly if it is somewhat complex with many different interactions or updates. But even if you don't end up using it in production, just learning will give you a much better feel for modern Javascript approaches to state like Redux. This book consists of lots of concise chapters. A small number target core ideas or examples. Most focus on a topic and give a concise but clear description. Ideally you would mainly use the Flash Card App to drive learning from this book, looking up things you are struggling with. 
On the Elm Accelerated website you will find more resources like quizzes or audio summaries. These provide additional tools to accelerate your learning. ### Install Elm Before continuing you should probably install Elm (if you haven't already). Go to elm-lang.org and follow the instructions there for your operating system. You will also want to get the Elm plugins for your text editor of choice (Visual Studio Code support is great, but similar tools exist for Atom and Sublime). It is also worth getting the `elm-format` tool and configuring your editor to format your code on save. ### Try Elm Once you've done that why not try out a few simple commands in the `elm-repl` . Don't worry if you don't know exactly what is going on (this book will teach you), but notice how you can evaluate simple expressions and create functions with very little noise. Notice also how unusually friendly and helpful any error messages are. This is one of the many areas where Elm is way ahead of the curve. ``` > elm-repl > 1 + 1 2 : number > 4 > 5 False : Bool > if True then "hi" else "bye" "hi" : String > f x = x * x <function> : number -> number > f 4 16 : number > f "hi" -- TYPE MISMATCH ------------------------ The argument to function `f` is causing a mismatch. 3| f "hi" ^^^^ Function `f` is expecting the argument to be: number But it is: String ``` ## Lists We create lists in Elm with square brackets. All items must be of the same type. ``` > [1,2,3] [1,2,3] : List number > ["a", "b", "c"] ["a","b","c"] : List String ``` A list actually consists of a first element (head) and rest of list (tail). We use `::` to 'glue' these together. ``` > 1 :: [] [1] : List number > 1 :: [2] [1,2] : List number > 1 :: 2 :: 3 :: [] [1,2,3] : List number ``` In most functional programming languages lists are the most common data structure. We can naturally write recursive algorithms which operate on the head and tail. 
For example: ``` > f x = case x of h :: t -> h + (f t) [] -> 0 <function> : List number -> number > f [1,2,3] 6 : number ``` However, in Elm we will more commonly use standard list functions. There are a few basic functions. The easy ones are `isEmpty` and `length` which do the obvious things. ``` isEmpty : List a -> Bool length : List a -> Int ``` We can also reverse a list or find out if a particular item is in a list. ``` reverse : List a -> List a member : a -> List a -> Bool ``` We can extract the head or tail (a list can be thought of as `head :: tail` ) with ``` head : List a -> Maybe a tail : List a -> Maybe (List a) ``` Notice how these are `Maybe a` as an empty list has neither head nor tail. However a one item list does have a tail, the empty list! The simplest ways to extract items from a list are: ``` take : Int -> List a -> List a drop : Int -> List a -> List a filter : (a -> Bool) -> List a -> List a ``` `take` takes up to the number of items supplied from a list. `drop` drops the supplied number of items (where possible). `filter` returns a list with only items which return `True` from the supplied function. ``` > List.take 2 [1,2,3] [1,2] : List number > List.take 4 [1,2,3] [1,2,3] : List number > List.drop 2 [1,2,3] [3] : List number > List.filter (\n -> n > 1) [1,2,3] [2,3] : List number ``` A very common list function (for example in generating labels for a data visualisation or indices for some kind of processing) is `range` . It generates a list of integers between and including the two values supplied. For example: ``` > List.range 1 5 [1,2,3,4,5] : List Int > List.range 1 10 [1,2,3,4,5,6,7,8,9,10] : List Int ``` But if that range is empty an empty list is returned. ``` > List.range 1 -4 [] : List Int ``` We can compose lists with `append` and `concat` . `append` joins two lists together, whereas `concat` takes a list of lists and appends them all together into one list.
``` > List.append [1,2] [3,4] [1,2,3,4] : List number > List.concat [[1,2], [3,4], [5,6]] [1,2,3,4,5,6] : List number ``` To do simple processing of a list we use `map` which applies a function to each element. ``` map : (a -> b) -> List a -> List b > List.map (\n -> n * n) [1,2,3] [1,4,9] : List number > List.map String.toUpper ["hi", "world"] ["HI","WORLD"] : List String ``` There are actually versions of map for functions operating on multiple lists. For example `map2` . ``` > List.map2 (*) [1,2,3] [4,5,6] [4,10,18] : List number ``` Here we wrap the infix operator in `(*)` to pass as a regular function. There are some standard functions which work on entire lists. ``` sum : List number -> number product : List number -> number maximum : List comparable -> Maybe comparable minimum : List comparable -> Maybe comparable ``` These all do the obvious thing (though it is not obvious that these functions should be in the `List` module). There are also two functions for looking at logical queries on a list `all` and `any` : each takes a function that maps each element to `True` or `False` . Then if all (any) hold true for `all` ( `any` ) it returns `True` , otherwise `False` . ``` all : (a -> Bool) -> List a -> Bool any : (a -> Bool) -> List a -> Bool ``` We can do custom operations on entire lists with `foldl` , `foldr` and `scanl` . `foldl` is commonly called `reduce` in other languages. These take a function which takes a list item, the accumulated value and returns a new accumulated value; an initial accumulated value; and a list. The function then returns the final accumulated value having 'folded' across the entire list. That sounds really complex but let's look at some simple examples. ``` > List.foldl (+) 0 [1,2,3,4] 10 : number > List.foldl (*) 1 [2,3,4] 24 : number ``` `foldr` just does the same but 'folds' from the right (i.e. starts from the end of the list). 
`scanl` instead of returning the final accumulated value, returns all the values as we pass through the list. ``` > List.foldr (+) 0 [1,2,3,4] 10 : number > List.scanl (*) 1 [2,3,4] [1,2,6,24] : List number ``` There are several advanced maps. We look at two here: `filterMap` and `indexedMap` . The first applies a function to a list which returns a `Maybe` . It then drops any `Nothing` s (perhaps resulting in an empty list). This can make data processing pipelines a lot more concise. ``` filterMap : (a -> Maybe b) -> List a -> List b queries |> filterMap getResultIfAny |> map convertResult ``` `indexedMap` allows you to write functions to apply to list elements along with their index (for example when drawing a data visualisation we may want to offset each item by their index (place in list) scaled). ``` indexedMap : (Int -> a -> b) -> List a -> List b indexedMap (\idx d -> { x = (toFloat idx) * dx , y = y * dy } ) data ``` We have three standard ways to sort a list. The first works in the 'standard' way with standard Elm primitives which are comparable (numbers, characters, strings, lists of comparables and tuples of comparables). ``` sort : List comparable -> List comparable > List.sort [2,4,1,2,6,9] [1,2,2,4,6,9] : List number ``` We can also sort by something extracted from each list item with `sortBy` . If these items are records we can use `.key` to pull out the `key` item from each. ``` sortBy : (a -> comparable) -> List a -> List a > sortBy .key listOfRecordsWithKey ``` If we want full control of sort we do this with `sortWith` and the helper `Order` type. Here we write a function that takes two list elements and then returns an `Order` , either: `LT` , `EQ` and `GT` (less than, equal, greater than). This is similar to javascript where you might pass a sort function which returns a negative number, zero or a positive number to indicate how two elements are related. 
``` sortWith : (a -> a -> Order) -> List a -> List a ``` There are a handful of other `List` functions. Two of note are `partition` which generates two lists, one which satisfies a condition, the other which fails, and `unzip` which takes a list of a `Tuple` and converts it into a `Tuple` of lists of each item. ``` partition : (a -> Bool) -> List a -> (List a, List a) unzip : List (a, b) -> (List a, List b) ``` ## Regular Expressions For more elaborate String validations and matching we can use regular expressions. Elm provides a nice set of functions for working with regular expressions in its core library. We construct regular expressions (type `Regex` ) with `regex` which takes a string. Notice the double `\\` : this is so that a backslash is included, rather than the next letter being escaped. ``` > regex "0123" > regex "Elm" > regex "\\d{1,3}" ``` There are four basic functions to work with `Regex` : `contains` , `find` , `replace` and `split` . `contains` does what you might expect ``` contains : Regex -> String -> Bool > contains (regex "\\d") "1234" True > contains (regex "\\D") "1234" False > contains (regex "[a-c]") "1234" False > contains (regex "[a-c]") "12a23" True ``` Before we look at the other main functions we need to learn about the data structures Elm provides to make working with them pleasant. To specify the number of (potential) matches we have the elegant and precise type: ``` type HowMany = All | AtMost Int ``` And when we get matches the type used is: ``` type alias Match = { match : String , submatches : List (Maybe String) , index : Int , number : Int } ``` Here `match` is the matched text, `submatches` are the parenthetical capture groups (i.e. parts of the match grouped in brackets), `index` is the original location and `number` is the index of the match itself.
Actually once you understand these types there is very little left to worry about (unlike other languages where you often seem to have to look up weird details of what regular expression methods return). ``` find : HowMany -> Regex -> String -> List Match findThreeNumbers = find (AtMost 3) (regex "\\d") ``` As `find` returns a list of `Match` you can pull out the matches or submatches or perform further processing very straightforwardly. Let's consider `replace` . Again the function signature tells you pretty much all you need to know. ``` replace : HowMany -> Regex -> (Match -> String) -> String -> String ``` The only non-obvious part is the `Match -> String` i.e. you can actually write a function from the match to the replacement. This gives you access to all the `Match` fields, so you can do whatever you want. In many cases this function might ignore the match and produce the same output. To signal to the Elm compiler this is intentional use an `_` . ``` removeUpToTwoNumbers = replace (AtMost 2) (regex "\\d") (\_ -> "") shortenWords = replace All (regex "\\w+") (\{match} -> String.left 3 match) ``` The final function is `split` ``` split : HowMany -> Regex -> String -> List String > split All (regex ",") "1,2,3" ["1","2","3"] ``` You might want to put these back together again (after some processing or filtering). For that use ``` join : String -> List String -> String ``` # Full comment documentation From the Comments chapter of Elm Accelerated # Video course on Elm # A few free video lessons # Exercism # Slideshow example in Elm # A simple Elm game # Medium Clone (in Elm) # Example Elm Game From the The Elm Architecture Part 2 chapter of Elm Accelerated # react-elm-components # Elm Webpack Loader # json-to-elm Currently the only set of learning resources (Apps, book, quiz, resources) is Elm Accelerated. But I have ideas for other topics such as Data in/for React and GLSL.
Elm Accelerated: Quiz, sample chapters and resources on Elm. TypeScript Accelerated: sample chapters and resources on TypeScript.
Crate rocket_launch_live === Rocket Launch Live --- A type safe and asynchronous wrapper around the RocketLaunch.Live API. `rocket_launch_live` allows you to easily integrate your code asynchronously with the RocketLaunch.Live API. Instead of dealing with low level details, the user can instantiate a client with a valid API key and use a high level interface to this service. Design --- `RocketLaunchLive` is the main struct, containing methods for each endpoint. The JSON data is deserialised into meaningful model types defined in the `api_models` module. Each call to an endpoint method returns a `Response<T>` which is generic over T, allowing tailored responses. Depending on which method you call, the response contains a result field of type `Vec<T>` where T can be of the type `api_models::Company`, `api_models::Launch`, `api_models::Location`, `api_models::Mission`, `api_models::Pad`, `api_models::Tag` or `api_models::Vehicle`. This REST API provides access to a growing database of curated rocket launch data through the following endpoints: * Companies * Launches * Locations * Missions * Pads * Tags * Vehicles Examples --- ``` use rocket_launch_live::api_models::{Launch, Response}; use rocket_launch_live::{Direction, LaunchParamsBuilder, NaiveDate, RocketLaunchLive}; use std::{env, error::Error}; #[tokio::main] async fn main() -> Result<(), Box<dyn Error>> { // Read the API key from an environment variable. let api_key = env::var("RLL_API_KEY")?; // Create an instance of RocketLaunchLive to access the API. let client = RocketLaunchLive::new(&api_key); // Create an instance of LaunchParamsBuilder. // Set some parameters to filter out the launches we're interested in. let params = LaunchParamsBuilder::new() .country_code("US") .after_date(NaiveDate::from_ymd_opt(2023, 9, 1))? .search("ISS") .direction(Direction::Descending) .limit(10) .build(); // Call the launches endpoint method with the parameters set above.
// This returns a Response from the API server asynchronously. // Generic type annotations since each endpoint has a specific response. let resp: Response<Launch> = client.launches(Some(params)).await?; // Iterate over the result field of the Response. for launch in resp.result { println!( "{} | {} | {}", launch.date_str, launch.vehicle.name, launch.name ); } Ok(()) } ``` Modules --- * api_models Macros --- * add_paramSimplify conditional concatenation of API parameters. Structs --- * CommonParamsParameters used by multiple builders by composition. * CompanyParamsBuilderBuilder to generate the API parameters to filter calls to the companies endpoint. * LaunchParamsBuilderBuilder to generate the API parameters to filter calls to the launches endpoint. * LocationParamsBuilderBuilder to generate the API parameters to filter calls to the locations endpoint. * MissionParamsBuilderBuilder to generate the API parameters to filter calls to the missions endpoint. * NaiveDateISO 8601 calendar date without timezone. Allows for every proleptic Gregorian date from Jan 1, 262145 BCE to Dec 31, 262143 CE. Also supports the conversion from ISO 8601 ordinal and week date. * NaiveDateTimeISO 8601 combined date and time without timezone. * NaiveTimeISO 8601 time without timezone. Allows for the nanosecond precision and optional leap second representation. * PadParamsBuilderBuilder to generate the API parameters to filter calls to the pads endpoint. * ParamsLow level text representation of the API parameters sent to the server. * RocketLaunchLiveAPI client containing all the public endpoint methods. * TagParamsBuilderBuilder to generate the API parameters to filter calls to the tags endpoint. * VehicleParamsBuilderBuilder to generate the API parameters to filter calls to the vehicles endpoint. Enums --- * DirectionRepresents the sorting order of results (ascending or descending).
Struct rocket_launch_live::RocketLaunchLive === ``` pub struct RocketLaunchLive<'a> { /* private fields */ } ``` API client containing all the public endpoint methods. Implementations --- ### impl<'a> RocketLaunchLive<'a> #### pub fn new(key: &'a str) -> Self Create a new API client with an API key. Examples found in repository: examples/launches.rs (line 11); see the crate-level example. #### pub async fn companies<T: DeserializeOwned>( &self, params: Option<Params> ) -> Result<Response<T>, Box<dyn Error>> Retrieve all companies in the database (optionally filtered by params) or an error. #### pub async fn launches<T: DeserializeOwned>( &self, params: Option<Params> ) -> Result<Response<T>, Box<dyn Error>> Retrieve all launches in the database (optionally filtered by params) or an error. Examples found in repository: examples/launches.rs (line 26); see the crate-level example. #### pub async fn locations<T: DeserializeOwned>( &self, params: Option<Params> ) -> Result<Response<T>, Box<dyn Error>> Retrieve all locations in the database (optionally filtered by params) or an error. #### pub async fn missions<T: DeserializeOwned>( &self, params: Option<Params> ) -> Result<Response<T>, Box<dyn Error>> Retrieve all missions in the database (optionally filtered by params) or an error. #### pub async fn pads<T: DeserializeOwned>( &self, params: Option<Params> ) -> Result<Response<T>, Box<dyn Error>> Retrieve all pads in the database (optionally filtered by params) or an error. #### pub async fn tags<T: DeserializeOwned>( &self, params: Option<Params> ) -> Result<Response<T>, Box<dyn Error>> Retrieve all tags in the database (optionally filtered by params) or an error.
#### pub async fn vehicles<T: DeserializeOwned>( &self, params: Option<Params> ) -> Result<Response<T>, Box<dyn Error>> Retrieve all vehicles in the database (optionally filtered by params) or an error. Auto Trait Implementations --- ### impl<'a> RefUnwindSafe for RocketLaunchLive<'a> ### impl<'a> Send for RocketLaunchLive<'a> ### impl<'a> Sync for RocketLaunchLive<'a> ### impl<'a> Unpin for RocketLaunchLive<'a> ### impl<'a> UnwindSafe for RocketLaunchLive<'a> Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. ### impl<T> Borrow<T> for T where T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. ### impl<T> BorrowMut<T> for T where T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<Self> Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. ### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for T where U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. ### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
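Every endpoint method above is generic over the deserialized type, so the caller must pin down `T` with a type annotation or turbofish. A stdlib-only analogue (using `FromStr` in place of `DeserializeOwned`, with a hypothetical `fetch` function rather than the crate's methods) shows why:

```rust
use std::str::FromStr;

// Stand-in for an endpoint method: generic over the result type, so
// the caller chooses T. This mirrors the shape of
// `let resp: Response<Launch> = client.launches(...)` without the
// crate or any network I/O.
fn fetch<T: FromStr>(raw: &str) -> Result<T, T::Err> {
    raw.parse::<T>()
}

fn main() {
    // Either an explicit annotation on the binding...
    let n: i64 = fetch("42").unwrap();
    // ...or turbofish syntax resolves the generic parameter.
    let f = fetch::<f64>("2.5").unwrap();
    println!("{n} {f}");
}
```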
Struct rocket_launch_live::api_models::Response === ``` pub struct Response<T> { pub errors: Option<Vec<String>>, pub valid_auth: bool, pub count: Option<i64>, pub limit: Option<i64>, pub total: Option<i64>, pub last_page: Option<i64>, pub result: Vec<T>, } ``` Fields --- `errors: Option<Vec<String>>`, `valid_auth: bool`, `count: Option<i64>`, `limit: Option<i64>`, `total: Option<i64>`, `last_page: Option<i64>`, `result: Vec<T>` Trait Implementations --- ### impl<T: Clone> Clone for Response<T> #### fn clone(&self) -> Response<T> Returns a copy of the value. #### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. ### impl<T: Debug> Debug for Response<T> #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. ### impl<'de, T> Deserialize<'de> for Response<T> where T: Deserialize<'de>, #### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. ### impl<T: PartialEq> PartialEq for Response<T> #### fn eq(&self, other: &Response<T>) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. #### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason. ### impl<T> Serialize for Response<T> where T: Serialize, #### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer, Serialize this value into the given Serde serializer. Auto Trait Implementations --- ### impl<T> RefUnwindSafe for Response<T> where T: RefUnwindSafe, ### impl<T> Send for Response<T> where T: Send, ### impl<T> Sync for Response<T> where T: Sync, ### impl<T> Unpin for Response<T> where T: Unpin, ### impl<T> UnwindSafe for Response<T> where T: UnwindSafe, Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. ### impl<T> Borrow<T> for T where T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. ### impl<T> BorrowMut<T> for T where T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value.
### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<Self> Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. ### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for T where T: Clone, #### type Owned = T The resulting type after obtaining ownership. #### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. ### impl<T, U> TryFrom<U> for T where U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. ### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. ### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>, Struct rocket_launch_live::api_models::Company === ``` pub struct Company { pub id: Option<i64>, pub name: String, pub inactive: bool, pub country: Country, } ``` Fields --- `id: Option<i64>`, `name: String`, `inactive: bool`, `country: Country` Trait Implementations --- ### impl Clone for Company #### fn clone(&self) -> Company Returns a copy of the value. #### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. ### impl Debug for Company #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. ### impl Default for Company #### fn default() -> Company Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for Company #### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. ### impl PartialEq for Company #### fn eq(&self, other: &Company) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. #### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason. ### impl Serialize for Company #### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer, Serialize this value into the given Serde serializer. Auto Trait Implementations --- ### impl RefUnwindSafe for Company ### impl Send for Company ### impl Sync for Company ### impl Unpin for Company ### impl UnwindSafe for Company Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. ### impl<T> Borrow<T> for T where T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. ### impl<T> BorrowMut<T> for T where T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<Self> Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. ### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for T where T: Clone, #### type Owned = T The resulting type after obtaining ownership. #### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. ### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. ### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>, Struct rocket_launch_live::api_models::Launch === ``` pub struct Launch { pub id: Option<i64>, pub cospar_id: Option<String>, pub sort_date: String, pub name: String, pub provider: Provider, pub vehicle: Vehicle, pub pad: Pad, pub missions: Vec<Mission>, pub mission_description: Option<String>, pub launch_description: String, pub win_open: Value, pub t0: Option<String>, pub win_close: Value, pub est_date: EstDate, pub date_str: String, pub tags: Vec<Tag>, pub slug: String, pub weather_summary: Value, pub weather_temp: Value, pub weather_condition: Value, pub weather_wind_mph: Value, pub weather_icon: Value, pub weather_updated: Value, pub quicktext: String, pub media: Vec<Medum>, pub result: Option<i64>, pub suborbital: bool, pub modified: String, } ``` Fields --- `id: Option<i64>`, `cospar_id: Option<String>`, `sort_date: String`, `name: String`, `provider: Provider`, `vehicle: Vehicle`, `pad: Pad`, `missions: Vec<Mission>`, `mission_description: Option<String>`, `launch_description: String`, `win_open: Value`, `t0: Option<String>`, `win_close: Value`, `est_date: EstDate`, `date_str: String`, `tags: Vec<Tag>`, `slug: String`, `weather_summary: Value`, `weather_temp: Value`, `weather_condition: Value`, `weather_wind_mph: Value`, `weather_icon: Value`, `weather_updated: Value`, `quicktext: String`, `media: Vec<Medum>`, `result: Option<i64>`, `suborbital: bool`, `modified: String` Trait Implementations --- ### impl Clone for Launch #### fn clone(&self) -> Launch Returns a copy of the value. #### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. ### impl Debug for Launch #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. ### impl Default for Launch #### fn default() -> Launch Returns the “default value” for a type. ### impl<'de> Deserialize<'de> for Launch #### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. ### impl PartialEq for Launch #### fn eq(&self, other: &Launch) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. #### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason. ### impl Serialize for Launch #### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer, Serialize this value into the given Serde serializer. Auto Trait Implementations --- ### impl RefUnwindSafe for Launch ### impl Send for Launch ### impl Sync for Launch ### impl Unpin for Launch ### impl UnwindSafe for Launch Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. ### impl<T> Borrow<T> for T where T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. ### impl<T> BorrowMut<T> for T where T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<Self> Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. ### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> ToOwned for T where T: Clone, #### type Owned = T The resulting type after obtaining ownership. #### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. ### impl<T, U> TryFrom<U> for T where U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. ### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. ### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>, Struct rocket_launch_live::api_models::Location === ``` pub struct Location { pub id: Option<i64>, pub name: String, pub state: Option<String>, pub statename: Option<String>, pub country: String, pub slug: String, } ``` Fields --- `id: Option<i64>`, `name: String`, `state: Option<String>`, `statename: Option<String>`, `country: String`, `slug: String` Trait Implementations --- ### impl Clone for Location #### fn clone(&self) -> Location Returns a copy of the value. #### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. ### impl Debug for Location #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. ### impl Default for Location #### fn default() -> Location Returns the “default value” for a type. ### impl<'de> Deserialize<'de> for Location #### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer.
### impl PartialEq for Location #### fn eq(&self, other: &Location) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. #### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason. ### impl Serialize for Location #### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer, Serialize this value into the given Serde serializer. Auto Trait Implementations --- ### impl RefUnwindSafe for Location ### impl Send for Location ### impl Sync for Location ### impl Unpin for Location ### impl UnwindSafe for Location Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. ### impl<T> Borrow<T> for T where T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. ### impl<T> BorrowMut<T> for T where T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<Self> Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. ### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for T where T: Clone, #### type Owned = T The resulting type after obtaining ownership. #### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. ### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. ### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>, Struct rocket_launch_live::api_models::Mission === ``` pub struct Mission { pub id: Option<i64>, pub name: String, pub description: Option<String>, } ``` Fields --- `id: Option<i64>`, `name: String`, `description: Option<String>` Trait Implementations --- ### impl Clone for Mission #### fn clone(&self) -> Mission Returns a copy of the value. #### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. ### impl Debug for Mission #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. ### impl Default for Mission #### fn default() -> Mission Returns the “default value” for a type. ### impl<'de> Deserialize<'de> for Mission #### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. ### impl PartialEq for Mission #### fn eq(&self, other: &Mission) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. #### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason. ### impl Serialize for Mission #### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer, Serialize this value into the given Serde serializer.
Auto Trait Implementations --- ### impl RefUnwindSafe for Mission ### impl Send for Mission ### impl Sync for Mission ### impl Unpin for Mission ### impl UnwindSafe for Mission Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. ### impl<T> Borrow<T> for T where T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. ### impl<T> BorrowMut<T> for T where T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<Self> Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. ### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for T where T: Clone, #### type Owned = T The resulting type after obtaining ownership. #### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. ### impl<T, U> TryFrom<U> for T where U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. ### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>, Struct rocket_launch_live::api_models::Pad === ``` pub struct Pad { pub id: Option<i64>, pub name: String, pub location: Location, } ``` Fields --- `id: Option<i64>`, `name: String`, `location: Location` Trait Implementations --- ### impl Clone for Pad #### fn clone(&self) -> Pad Returns a copy of the value. #### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. ### impl Debug for Pad #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. ### impl Default for Pad #### fn default() -> Pad Returns the “default value” for a type. ### impl<'de> Deserialize<'de> for Pad #### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. ### impl PartialEq for Pad #### fn eq(&self, other: &Pad) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. #### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason. ### impl Serialize for Pad #### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer, Serialize this value into the given Serde serializer. Auto Trait Implementations --- ### impl RefUnwindSafe for Pad ### impl Send for Pad ### impl Sync for Pad ### impl Unpin for Pad ### impl UnwindSafe for Pad Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. ### impl<T> Borrow<T> for T where T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. ### impl<T> BorrowMut<T> for T where T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<Self> Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for T where T: Clone, #### type Owned = T The resulting type after obtaining ownership. #### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. ### impl<T, U> TryFrom<U> for T where U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. ### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. ### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>, Struct rocket_launch_live::api_models::Tag === ``` pub struct Tag { pub id: Option<i64>, pub text: String, } ``` Fields --- `id: Option<i64>`, `text: String` Trait Implementations --- ### impl Clone for Tag #### fn clone(&self) -> Tag Returns a copy of the value. #### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. ### impl Debug for Tag #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. ### impl Default for Tag #### fn default() -> Tag Returns the “default value” for a type. ### impl<'de> Deserialize<'de> for Tag #### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. ### impl PartialEq for Tag #### fn eq(&self, other: &Tag) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. #### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`.
The default implementation is almost always sufficient, and should not be overridden without very good reason. ### impl Serialize for Tag #### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer, Serialize this value into the given Serde serializer. Auto Trait Implementations --- ### impl RefUnwindSafe for Tag ### impl Send for Tag ### impl Sync for Tag ### impl Unpin for Tag ### impl UnwindSafe for Tag Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. ### impl<T> Borrow<T> for T where T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. ### impl<T> BorrowMut<T> for T where T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<Self> Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. ### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for T where T: Clone, #### type Owned = T The resulting type after obtaining ownership. #### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. ### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. ### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>, Struct rocket_launch_live::api_models::Vehicle === ``` pub struct Vehicle { pub id: Option<i64>, pub name: String, pub company_id: Option<i64>, pub slug: String, } ``` Fields --- `id: Option<i64>`, `name: String`, `company_id: Option<i64>`, `slug: String` Trait Implementations --- ### impl Clone for Vehicle #### fn clone(&self) -> Vehicle Returns a copy of the value. #### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. ### impl Debug for Vehicle #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. ### impl Default for Vehicle #### fn default() -> Vehicle Returns the “default value” for a type. ### impl<'de> Deserialize<'de> for Vehicle #### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. ### impl PartialEq for Vehicle #### fn eq(&self, other: &Vehicle) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. #### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason. ### impl Serialize for Vehicle #### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer, Serialize this value into the given Serde serializer.
Auto Trait Implementations --- ### impl RefUnwindSafe for Vehicle ### impl Send for Vehicle ### impl Sync for Vehicle ### impl Unpin for Vehicle ### impl UnwindSafe for Vehicle Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. T: for<'de> Deserialize<'de>, Macro rocket_launch_live::add_param === ``` macro_rules! add_param { ($vec:expr, $val:expr, $name:expr) => { ... }; } ``` Simplify conditional concatenation of API parameters. 
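The crate keeps the body of `add_param!` private (the docs show only `=> { ... }`), but a macro of this shape typically appends a formatted `name=value` entry only when the value is present. A hypothetical, self-contained sketch of what such an expansion might look like (not the crate's actual implementation):

```rust
// Hypothetical sketch: push "name=value" onto a Vec only when the value is Some.
macro_rules! add_param {
    ($vec:expr, $val:expr, $name:expr) => {
        if let Some(v) = $val {
            $vec.push(format!("{}={}", $name, v));
        }
    };
}

fn main() {
    let mut params: Vec<String> = Vec::new();
    add_param!(params, Some("US"), "country_code"); // appended
    add_param!(params, None::<i64>, "page");        // skipped: value is None
    assert_eq!(params, vec!["country_code=US".to_string()]);
    println!("{}", params.join("&")); // country_code=US
}
```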
Struct rocket_launch_live::CommonParams === ``` pub struct CommonParams<'a> { /* private fields */ } ``` Parameters used by multiple builders by composition. Trait Implementations --- ### impl<'a> Default for CommonParams<'a#### fn default() -> CommonParams<'aReturns the “default value” for a type. Read moreAuto Trait Implementations --- ### impl<'a> RefUnwindSafe for CommonParams<'a### impl<'a> Send for CommonParams<'a### impl<'a> Sync for CommonParams<'a### impl<'a> Unpin for CommonParams<'a### impl<'a> UnwindSafe for CommonParams<'aBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. 
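The parameter builders that follow all share the same chaining shape: `new()` creates an empty builder, each setter takes and returns `&mut Self`, and `build(&self)` assembles the low-level parameters without consuming the builder. A minimal self-contained sketch of that pattern using only the standard library (`Params` here is a hypothetical stand-in for the crate's type, and the field names are invented):

```rust
// Hypothetical stand-in for the crate's low-level Params type.
type Params = Vec<(String, String)>;

#[derive(Default)]
struct ParamsBuilder {
    name: Option<String>,
    page: Option<i64>,
}

impl ParamsBuilder {
    fn new() -> Self {
        Self::default()
    }

    // Setters take and return &mut Self so calls can be chained.
    fn name(&mut self, name: &str) -> &mut Self {
        self.name = Some(name.to_string());
        self
    }

    fn page(&mut self, page: i64) -> &mut Self {
        self.page = Some(page);
        self
    }

    // build borrows self, so the builder can be reused afterwards;
    // only the parameters that were actually set end up in the output.
    fn build(&self) -> Params {
        let mut params = Params::new();
        if let Some(n) = &self.name {
            params.push(("name".into(), n.clone()));
        }
        if let Some(p) = self.page {
            params.push(("page".into(), p.to_string()));
        }
        params
    }
}

fn main() {
    let params = ParamsBuilder::new().name("SpaceX").page(2).build();
    assert_eq!(params.len(), 2);
    println!("{:?}", params);
}
```

This is the same `&mut Self` chaining style used by `std::process::Command`, which is why the repository example can string `.country_code("US")...build()` into one expression.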
Struct rocket_launch_live::CompanyParamsBuilder
===

```
pub struct CompanyParamsBuilder<'a> { /* private fields */ }
```

Builder to generate the API parameters to filter calls to the companies endpoint.

Implementations
---

### impl<'a> CompanyParamsBuilder<'a>

#### pub fn new() -> Self

Create a new builder for the company parameters.

#### pub fn id(&mut self, id: i64) -> &mut Self

Set the company id parameter.

#### pub fn name(&mut self, name: &'a str) -> &mut Self

Set the company name parameter.

#### pub fn country_code(&mut self, country_code: &'a str) -> &mut Self

Set the company country_code parameter.

#### pub fn slug(&mut self, slug: &'a str) -> &mut Self

Set the company slug parameter.

#### pub fn inactive(&mut self, inactive: bool) -> &mut Self

Set the company inactive parameter.

#### pub fn page(&mut self, page: i64) -> &mut Self

Set the company page parameter.

#### pub fn build(&self) -> Params

Build the low level company parameters from all the set parameters.

Trait Implementations
---

### impl<'a> Default for CompanyParamsBuilder<'a>

#### fn default() -> CompanyParamsBuilder<'a>

Returns the “default value” for a type.

Auto Trait Implementations
---

### impl<'a> RefUnwindSafe for CompanyParamsBuilder<'a>
### impl<'a> Send for CompanyParamsBuilder<'a>
### impl<'a> Sync for CompanyParamsBuilder<'a>
### impl<'a> Unpin for CompanyParamsBuilder<'a>
### impl<'a> UnwindSafe for CompanyParamsBuilder<'a>

Blanket Implementations
---

### impl<T> Any for T where T: 'static + ?Sized,

#### fn type_id(&self) -> TypeId

Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for T where T: ?Sized,

#### fn borrow(&self) -> &T

Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for T where T: ?Sized,

#### fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

### impl<T> From<T> for T

#### fn from(t: T) -> T

Returns the argument unchanged.

### impl<T> Instrument for T

#### fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>,

#### fn into(self) -> U

Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T, U> TryFrom<U> for T where U: Into<T>,

#### type Error = Infallible

The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>,

#### type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

### impl<T> WithSubscriber for T

#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,

Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.

Struct rocket_launch_live::LaunchParamsBuilder
===

```
pub struct LaunchParamsBuilder<'a> { /* private fields */ }
```

Builder to generate the API parameters to filter calls to the launches endpoint.

Implementations
---

### impl<'a> LaunchParamsBuilder<'a>

#### pub fn new() -> Self

Create a new builder for the launch parameters.

##### Examples found in repository: examples/launches.rs (line 15)

```
async fn main() -> Result<(), Box<dyn Error>> {
    // Read the API key from an environment variable.
    let api_key = env::var("RLL_API_KEY")?;

    // Create an instance of RocketLaunchLive to access the API.
    let client = RocketLaunchLive::new(&api_key);

    // Create an instance of LaunchParamsBuilder.
    // Set some parameters to filter out the launches we're interested in.
    let params = LaunchParamsBuilder::new()
        .country_code("US")
        .after_date(NaiveDate::from_ymd_opt(2023, 9, 1))?
        .search("ISS")
        .direction(Direction::Descending)
        .limit(10)
        .build();

    // Call the launches endpoint method with the parameters set above.
    // This returns a Response from the API server asynchronously.
    // Generic type annotations since each endpoint has a specific response.
    let resp: Response<Launch> = client.launches(Some(params)).await?;

    // Iterate over the result field of the Response.
    for launch in resp.result {
        println!(
            "{} | {} | {}",
            launch.date_str, launch.vehicle.name, launch.name
        );
    }

    Ok(())
}
```

#### pub fn id(&mut self, id: i64) -> &mut Self

Set the launch id parameter.

#### pub fn cospar_id(&mut self, cospar_id: &'a str) -> &mut Self

Set the launch cospar_id parameter.

#### pub fn after_date(&mut self, after_date: Option<NaiveDate>) -> Result<&mut Self, &'static str>

Set the launch after_date parameter.

##### Examples found in repository: examples/launches.rs (line 17)

#### pub fn before_date(&mut self, before_date: Option<NaiveDate>) -> Result<&mut Self, &'static str>

Set the launch before_date parameter.
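Unlike the other setters, `after_date` and `before_date` return `Result<&mut Self, &'static str>` rather than plain `&mut Self`, which is why the repository example chains them with `?`. A self-contained sketch of that fallible-setter shape (the field, the error text, and the `Option` handling here are assumptions for illustration, not the crate's actual implementation):

```rust
#[derive(Default)]
struct Builder {
    after_day: Option<i64>, // stand-in for an Option<NaiveDate> field
}

impl Builder {
    fn new() -> Self {
        Self::default()
    }

    // Fallible setter: a None date (e.g. what NaiveDate::from_ymd_opt yields
    // for an invalid date) becomes an Err instead of being silently ignored.
    fn after_date(&mut self, day: Option<i64>) -> Result<&mut Self, &'static str> {
        match day {
            Some(d) => {
                self.after_day = Some(d);
                Ok(self)
            }
            None => Err("invalid date"),
        }
    }
}

fn main() -> Result<(), &'static str> {
    let mut b = Builder::new();
    b.after_date(Some(20230901))?; // chains with ? like the launches example
    assert!(b.after_date(None).is_err());
    Ok(())
}
```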
#### pub fn modified_since(&mut self, date: Option<NaiveDate>, time: Option<NaiveTime>) -> Result<&mut Self, &'static str>

Set the launch modified_since parameter.

#### pub fn location_id(&mut self, location_id: i64) -> &mut Self

Set the launch location_id parameter.

#### pub fn pad_id(&mut self, pad_id: i64) -> &mut Self

Set the launch pad_id parameter.

#### pub fn provider_id(&mut self, provider_id: i64) -> &mut Self

Set the launch provider_id parameter.

#### pub fn tag_id(&mut self, tag_id: i64) -> &mut Self

Set the launch tag_id parameter.

#### pub fn vehicle_id(&mut self, vehicle_id: i64) -> &mut Self

Set the launch vehicle_id parameter.

#### pub fn state_abbr(&mut self, state_abbr: &'a str) -> &mut Self

Set the launch state_abbr parameter.

#### pub fn country_code(&mut self, country_code: &'a str) -> &mut Self

Set the launch country_code parameter.

##### Examples found in repository: examples/launches.rs (line 16)

#### pub fn search(&mut self, search: &'a str) -> &mut Self

Set the launch search parameter.

##### Examples found in repository: examples/launches.rs (line 18)

#### pub fn slug(&mut self, slug: &'a str) -> &mut Self

Set the launch slug parameter.

#### pub fn limit(&mut self, limit: i64) -> &mut Self

Set the launch limit parameter.

##### Examples found in repository: examples/launches.rs (line 20)

#### pub fn direction(&mut self, direction: Direction) -> &mut Self

Set the launch direction parameter.

##### Examples found in repository: examples/launches.rs (line 19)

#### pub fn page(&mut self, page: i64) -> &mut Self

Set the launch page parameter.

#### pub fn build(&self) -> Params

Build the low level launch parameters from all the set parameters.

##### Examples found in repository: examples/launches.rs (line 21)

Trait Implementations
---

### impl<'a> Default for LaunchParamsBuilder<'a>

#### fn default() -> LaunchParamsBuilder<'a>

Returns the “default value” for a type.
Read moreAuto Trait Implementations --- ### impl<'a> RefUnwindSafe for LaunchParamsBuilder<'a### impl<'a> Send for LaunchParamsBuilder<'a### impl<'a> Sync for LaunchParamsBuilder<'a### impl<'a> Unpin for LaunchParamsBuilder<'a### impl<'a> UnwindSafe for LaunchParamsBuilder<'aBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. Read more Struct rocket_launch_live::LocationParamsBuilder === ``` pub struct LocationParamsBuilder<'a> { /* private fields */ } ``` Builder to generate the API parameters to filter calls to the locations endpoint. 
Implementations --- ### impl<'a> LocationParamsBuilder<'a#### pub fn new() -> Self Create a new builder for the location parameters. #### pub fn id(&mut self, id: i64) -> &mut Self Set the location id parameter. #### pub fn name(&mut self, name: &'a str) -> &mut Self Set the location name parameter. #### pub fn state_abbr(&mut self, state_abbr: &'a str) -> &mut Self Set the location state_abbr parameter. #### pub fn country_code(&mut self, country_code: &'a str) -> &mut Self Set the location country_code parameter. #### pub fn page(&mut self, page: i64) -> &mut Self Set the location page parameter. #### pub fn build(&self) -> Params Build the low level location parameters from all the set parameters. Trait Implementations --- ### impl<'a> Default for LocationParamsBuilder<'a#### fn default() -> LocationParamsBuilder<'aReturns the “default value” for a type. Read moreAuto Trait Implementations --- ### impl<'a> RefUnwindSafe for LocationParamsBuilder<'a### impl<'a> Send for LocationParamsBuilder<'a### impl<'a> Sync for LocationParamsBuilder<'a### impl<'a> Unpin for LocationParamsBuilder<'a### impl<'a> UnwindSafe for LocationParamsBuilder<'aBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. 
### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. Read more Struct rocket_launch_live::MissionParamsBuilder === ``` pub struct MissionParamsBuilder<'a> { /* private fields */ } ``` Builder to generate the API parameters to filter calls to the missions endpoint. Implementations --- ### impl<'a> MissionParamsBuilder<'a#### pub fn new() -> Self Create a new builder for the mission parameters. #### pub fn id(&mut self, id: i64) -> &mut Self Set the mission id parameter. #### pub fn name(&mut self, name: &'a str) -> &mut Self Set the mission name parameter. #### pub fn page(&mut self, page: i64) -> &mut Self Set the mission page parameter. #### pub fn build(&self) -> Params Build the low level mission parameters from all the set parameters. Trait Implementations --- ### impl<'a> Default for MissionParamsBuilder<'a#### fn default() -> MissionParamsBuilder<'aReturns the “default value” for a type. Read moreAuto Trait Implementations --- ### impl<'a> RefUnwindSafe for MissionParamsBuilder<'a### impl<'a> Send for MissionParamsBuilder<'a### impl<'a> Sync for MissionParamsBuilder<'a### impl<'a> Unpin for MissionParamsBuilder<'a### impl<'a> UnwindSafe for MissionParamsBuilder<'aBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. 
T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. Read more Struct rocket_launch_live::NaiveDate === ``` pub struct NaiveDate { /* private fields */ } ``` ISO 8601 calendar date without timezone. Allows for every proleptic Gregorian date from Jan 1, 262145 BCE to Dec 31, 262143 CE. Also supports the conversion from ISO 8601 ordinal and week date. Calendar Date --- The ISO 8601 **calendar date** follows the proleptic Gregorian calendar. It is like a normal civil calendar but note some slight differences: * Dates before the Gregorian calendar’s inception in 1582 are defined via the extrapolation. Be careful, as historical dates are often noted in the Julian calendar and others and the transition to Gregorian may differ across countries (as late as early 20C). 
(For example: both Shakespeare in Britain and Cervantes in Spain seemingly died on the same calendar date, April 23, 1616, but in different calendars; Britain still used the Julian calendar at that time, so Shakespeare's death actually came later.)
* The ISO 8601 calendar has a year 0, which is 1 BCE (the year before 1 CE). If you need the typical BCE/BC and CE/AD notation for year numbers, use the `Datelike::year_ce` method.

Week Date
---

The ISO 8601 **week date** is a triple of year number, week number and day of the week, with the following rules:

* A week consists of Monday through Sunday, and is always numbered within some year. The week number ranges from 1 to 52 or 53 depending on the year.
* Week 1 of a given year is defined as the first week containing January 4 of that year, or equivalently, the first week containing four or more days in that year.
* The year number in the week date may *not* correspond to the actual Gregorian year. For example, January 3, 2016 (a Sunday) was in the last (53rd) week of 2015.

Chrono's date types default to the ISO 8601 calendar date, but the `Datelike::iso_week` and `Datelike::weekday` methods can be used to get the corresponding week date.

Ordinal Date
---

The ISO 8601 **ordinal date** is a pair of year number and day of the year (“ordinal”). The ordinal number ranges from 1 to 365 or 366 depending on the year. The year number is the same as that of the calendar date. This is currently the internal format of Chrono's date types.

Implementations
---

### impl NaiveDate

#### pub const fn from_ymd(year: i32, month: u32, day: u32) -> NaiveDate

👎Deprecated since 0.4.23: use `from_ymd_opt()` instead

Makes a new `NaiveDate` from the calendar date (year, month and day).

##### Panics

Panics if the specified calendar day does not exist, on invalid values for `month` or `day`, or if `year` is out of range for `NaiveDate`.
#### pub const fn from_ymd_opt(year: i32, month: u32, day: u32) -> Option<NaiveDate>

Makes a new `NaiveDate` from the calendar date (year, month and day).

##### Errors

Returns `None` if:

* The specified calendar day does not exist (for example 2023-04-31).
* The value for `month` or `day` is invalid.
* `year` is out of range for `NaiveDate`.

##### Example

```
use chrono::NaiveDate;

let from_ymd_opt = NaiveDate::from_ymd_opt;

assert!(from_ymd_opt(2015, 3, 14).is_some());
assert!(from_ymd_opt(2015, 0, 14).is_none());
assert!(from_ymd_opt(2015, 2, 29).is_none());
assert!(from_ymd_opt(-4, 2, 29).is_some()); // 5 BCE is a leap year
assert!(from_ymd_opt(400000, 1, 1).is_none());
assert!(from_ymd_opt(-400000, 1, 1).is_none());
```

#### pub const fn from_yo(year: i32, ordinal: u32) -> NaiveDate

👎Deprecated since 0.4.23: use `from_yo_opt()` instead

Makes a new `NaiveDate` from the ordinal date (year and day of the year).

##### Panics

Panics if the specified ordinal day does not exist, on invalid values for `ordinal`, or if `year` is out of range for `NaiveDate`.

#### pub const fn from_yo_opt(year: i32, ordinal: u32) -> Option<NaiveDate>

Makes a new `NaiveDate` from the ordinal date (year and day of the year).

##### Errors

Returns `None` if:

* The specified ordinal day does not exist (for example 2023-366).
* The value for `ordinal` is invalid (for example: `0`, `400`).
* `year` is out of range for `NaiveDate`.
##### Example ``` use chrono::NaiveDate; let from_yo_opt = NaiveDate::from_yo_opt; assert!(from_yo_opt(2015, 100).is_some()); assert!(from_yo_opt(2015, 0).is_none()); assert!(from_yo_opt(2015, 365).is_some()); assert!(from_yo_opt(2015, 366).is_none()); assert!(from_yo_opt(-4, 366).is_some()); // 5 BCE is a leap year assert!(from_yo_opt(400000, 1).is_none()); assert!(from_yo_opt(-400000, 1).is_none()); ``` #### pub const fn from_isoywd(year: i32, week: u32, weekday: Weekday) -> NaiveDate 👎Deprecated since 0.4.23: use `from_isoywd_opt()` insteadMakes a new `NaiveDate` from the ISO week date (year, week number and day of the week). The resulting `NaiveDate` may have a different year from the input year. ##### Panics Panics if the specified week does not exist in that year, on invalid values for `week`, or if the resulting date is out of range for `NaiveDate`. #### pub const fn from_isoywd_opt( year: i32, week: u32, weekday: Weekday ) -> Option<NaiveDateMakes a new `NaiveDate` from the ISO week date (year, week number and day of the week). The resulting `NaiveDate` may have a different year from the input year. ##### Errors Returns `None` if: * The specified week does not exist in that year (for example 2023 week 53). * The value for `week` is invalid (for example: `0`, `60`). * If the resulting date is out of range for `NaiveDate`. 
##### Example ``` use chrono::{NaiveDate, Weekday}; let from_ymd = |y, m, d| NaiveDate::from_ymd_opt(y, m, d).unwrap(); let from_isoywd_opt = NaiveDate::from_isoywd_opt; assert_eq!(from_isoywd_opt(2015, 0, Weekday::Sun), None); assert_eq!(from_isoywd_opt(2015, 10, Weekday::Sun), Some(from_ymd(2015, 3, 8))); assert_eq!(from_isoywd_opt(2015, 30, Weekday::Mon), Some(from_ymd(2015, 7, 20))); assert_eq!(from_isoywd_opt(2015, 60, Weekday::Mon), None); assert_eq!(from_isoywd_opt(400000, 10, Weekday::Fri), None); assert_eq!(from_isoywd_opt(-400000, 10, Weekday::Sat), None); ``` The year number of ISO week date may differ from that of the calendar date. ``` // Mo Tu We Th Fr Sa Su // 2014-W52 22 23 24 25 26 27 28 has 4+ days of new year, // 2015-W01 29 30 31 1 2 3 4 <- so this is the first week assert_eq!(from_isoywd_opt(2014, 52, Weekday::Sun), Some(from_ymd(2014, 12, 28))); assert_eq!(from_isoywd_opt(2014, 53, Weekday::Mon), None); assert_eq!(from_isoywd_opt(2015, 1, Weekday::Mon), Some(from_ymd(2014, 12, 29))); // 2015-W52 21 22 23 24 25 26 27 has 4+ days of old year, // 2015-W53 28 29 30 31 1 2 3 <- so this is the last week // 2016-W01 4 5 6 7 8 9 10 assert_eq!(from_isoywd_opt(2015, 52, Weekday::Sun), Some(from_ymd(2015, 12, 27))); assert_eq!(from_isoywd_opt(2015, 53, Weekday::Sun), Some(from_ymd(2016, 1, 3))); assert_eq!(from_isoywd_opt(2015, 54, Weekday::Mon), None); assert_eq!(from_isoywd_opt(2016, 1, Weekday::Mon), Some(from_ymd(2016, 1, 4))); ``` #### pub const fn from_num_days_from_ce(days: i32) -> NaiveDate 👎Deprecated since 0.4.23: use `from_num_days_from_ce_opt()` insteadMakes a new `NaiveDate` from a day’s number in the proleptic Gregorian calendar, with January 1, 1 being day 1. ##### Panics Panics if the date is out of range. #### pub const fn from_num_days_from_ce_opt(days: i32) -> Option<NaiveDateMakes a new `NaiveDate` from a day’s number in the proleptic Gregorian calendar, with January 1, 1 being day 1. 
##### Errors Returns `None` if the date is out of range. ##### Example ``` use chrono::NaiveDate; let from_ndays_opt = NaiveDate::from_num_days_from_ce_opt; let from_ymd = |y, m, d| NaiveDate::from_ymd_opt(y, m, d).unwrap(); assert_eq!(from_ndays_opt(730_000), Some(from_ymd(1999, 9, 3))); assert_eq!(from_ndays_opt(1), Some(from_ymd(1, 1, 1))); assert_eq!(from_ndays_opt(0), Some(from_ymd(0, 12, 31))); assert_eq!(from_ndays_opt(-1), Some(from_ymd(0, 12, 30))); assert_eq!(from_ndays_opt(100_000_000), None); assert_eq!(from_ndays_opt(-100_000_000), None); ``` #### pub const fn from_weekday_of_month( year: i32, month: u32, weekday: Weekday, n: u8 ) -> NaiveDate 👎Deprecated since 0.4.23: use `from_weekday_of_month_opt()` insteadMakes a new `NaiveDate` by counting the number of occurrences of a particular day-of-week since the beginning of the given month. For instance, if you want the 2nd Friday of March 2017, you would use `NaiveDate::from_weekday_of_month(2017, 3, Weekday::Fri, 2)`. `n` is 1-indexed. ##### Panics Panics if the specified day does not exist in that month, on invalid values for `month` or `n`, or if `year` is out of range for `NaiveDate`. #### pub const fn from_weekday_of_month_opt( year: i32, month: u32, weekday: Weekday, n: u8 ) -> Option<NaiveDateMakes a new `NaiveDate` by counting the number of occurrences of a particular day-of-week since the beginning of the given month. For instance, if you want the 2nd Friday of March 2017, you would use `NaiveDate::from_weekday_of_month(2017, 3, Weekday::Fri, 2)`. `n` is 1-indexed. ##### Errors Returns `None` if: * The specified day does not exist in that month (for example the 5th Monday of Apr. 2023). * The value for `month` or `n` is invalid. * `year` is out of range for `NaiveDate`. 
##### Example

```
use chrono::{NaiveDate, Weekday};

assert_eq!(NaiveDate::from_weekday_of_month_opt(2017, 3, Weekday::Fri, 2),
           NaiveDate::from_ymd_opt(2017, 3, 10))
```

#### pub fn parse_from_str(s: &str, fmt: &str) -> Result<NaiveDate, ParseError>

Parses a string with the specified format string and returns a new `NaiveDate`. See the `format::strftime` module on the supported escape sequences.

##### Example

```
use chrono::NaiveDate;

let parse_from_str = NaiveDate::parse_from_str;

assert_eq!(parse_from_str("2015-09-05", "%Y-%m-%d"),
           Ok(NaiveDate::from_ymd_opt(2015, 9, 5).unwrap()));
assert_eq!(parse_from_str("5sep2015", "%d%b%Y"),
           Ok(NaiveDate::from_ymd_opt(2015, 9, 5).unwrap()));
```

Time and offset are ignored for the purpose of parsing.

```
assert_eq!(parse_from_str("2014-5-17T12:34:56+09:30", "%Y-%m-%dT%H:%M:%S%z"),
           Ok(NaiveDate::from_ymd_opt(2014, 5, 17).unwrap()));
```

Out-of-bound dates or insufficient fields are errors.

```
assert!(parse_from_str("2015/9", "%Y/%m").is_err());
assert!(parse_from_str("2015/9/31", "%Y/%m/%d").is_err());
```

All parsed fields should be consistent with each other; otherwise it's an error.

```
assert!(parse_from_str("Sat, 09 Aug 2013", "%a, %d %b %Y").is_err());
```

#### pub fn parse_and_remainder<'a>( s: &'a str, fmt: &str ) -> Result<(NaiveDate, &'a str), ParseError>

Parses a string from a user-specified format into a new `NaiveDate` value, and a slice with the remaining portion of the string. See the `format::strftime` module on the supported escape sequences. Similar to `parse_from_str`.

##### Example

```
let (date, remainder) = NaiveDate::parse_and_remainder(
    "2015-02-18 trailing text", "%Y-%m-%d").unwrap();
assert_eq!(date, NaiveDate::from_ymd_opt(2015, 2, 18).unwrap());
assert_eq!(remainder, " trailing text");
```

#### pub const fn checked_add_months(self, months: Months) -> Option<NaiveDate>

Add a duration in `Months` to the date

Uses the last day of the month if the day does not exist in the resulting month.
##### Errors

Returns `None` if the resulting date would be out of range.

##### Example

```
assert_eq!(
    NaiveDate::from_ymd_opt(2022, 2, 20).unwrap().checked_add_months(Months::new(6)),
    Some(NaiveDate::from_ymd_opt(2022, 8, 20).unwrap())
);
assert_eq!(
    NaiveDate::from_ymd_opt(2022, 7, 31).unwrap().checked_add_months(Months::new(2)),
    Some(NaiveDate::from_ymd_opt(2022, 9, 30).unwrap())
);
```

#### pub const fn checked_sub_months(self, months: Months) -> Option<NaiveDate>

Subtract a duration in `Months` from the date

Uses the last day of the month if the day does not exist in the resulting month.

##### Errors

Returns `None` if the resulting date would be out of range.

##### Example

```
assert_eq!(
    NaiveDate::from_ymd_opt(2022, 2, 20).unwrap().checked_sub_months(Months::new(6)),
    Some(NaiveDate::from_ymd_opt(2021, 8, 20).unwrap())
);
assert_eq!(
    NaiveDate::from_ymd_opt(2014, 1, 1).unwrap()
        .checked_sub_months(Months::new(core::i32::MAX as u32 + 1)),
    None
);
```

#### pub const fn checked_add_days(self, days: Days) -> Option<NaiveDate>

Add a duration in `Days` to the date

##### Errors

Returns `None` if the resulting date would be out of range.

##### Example

```
assert_eq!(
    NaiveDate::from_ymd_opt(2022, 2, 20).unwrap().checked_add_days(Days::new(9)),
    Some(NaiveDate::from_ymd_opt(2022, 3, 1).unwrap())
);
assert_eq!(
    NaiveDate::from_ymd_opt(2022, 7, 31).unwrap().checked_add_days(Days::new(2)),
    Some(NaiveDate::from_ymd_opt(2022, 8, 2).unwrap())
);
assert_eq!(
    NaiveDate::from_ymd_opt(2022, 7, 31).unwrap().checked_add_days(Days::new(1000000000000)),
    None
);
```

#### pub const fn checked_sub_days(self, days: Days) -> Option<NaiveDate>

Subtract a duration in `Days` from the date

##### Errors

Returns `None` if the resulting date would be out of range.
##### Example

```
assert_eq!(
    NaiveDate::from_ymd_opt(2022, 2, 20).unwrap().checked_sub_days(Days::new(6)),
    Some(NaiveDate::from_ymd_opt(2022, 2, 14).unwrap())
);
assert_eq!(
    NaiveDate::from_ymd_opt(2022, 2, 20).unwrap().checked_sub_days(Days::new(1000000000000)),
    None
);
```

#### pub const fn and_time(&self, time: NaiveTime) -> NaiveDateTime

Makes a new `NaiveDateTime` from the current date and given `NaiveTime`.

##### Example

```
use chrono::{NaiveDate, NaiveTime, NaiveDateTime};

let d = NaiveDate::from_ymd_opt(2015, 6, 3).unwrap();
let t = NaiveTime::from_hms_milli_opt(12, 34, 56, 789).unwrap();

let dt: NaiveDateTime = d.and_time(t);
assert_eq!(dt.date(), d);
assert_eq!(dt.time(), t);
```

#### pub const fn and_hms(&self, hour: u32, min: u32, sec: u32) -> NaiveDateTime

Deprecated since 0.4.23: use `and_hms_opt()` instead

Makes a new `NaiveDateTime` from the current date, hour, minute and second.

No leap second is allowed here; use `NaiveDate::and_hms_*` methods with a subsecond parameter instead.

##### Panics

Panics on invalid hour, minute and/or second.

#### pub const fn and_hms_opt( &self, hour: u32, min: u32, sec: u32 ) -> Option<NaiveDateTime>

Makes a new `NaiveDateTime` from the current date, hour, minute and second.

No leap second is allowed here; use `NaiveDate::and_hms_*_opt` methods with a subsecond parameter instead.

##### Errors

Returns `None` on invalid hour, minute and/or second.

##### Example

```
use chrono::NaiveDate;

let d = NaiveDate::from_ymd_opt(2015, 6, 3).unwrap();
assert!(d.and_hms_opt(12, 34, 56).is_some());
assert!(d.and_hms_opt(12, 34, 60).is_none()); // use `and_hms_milli_opt` instead
assert!(d.and_hms_opt(12, 60, 56).is_none());
assert!(d.and_hms_opt(24, 34, 56).is_none());
```

#### pub const fn and_hms_milli( &self, hour: u32, min: u32, sec: u32, milli: u32 ) -> NaiveDateTime

Deprecated since 0.4.23: use `and_hms_milli_opt()` instead

Makes a new `NaiveDateTime` from the current date, hour, minute, second and millisecond.
The millisecond part is allowed to exceed 1,000 in order to represent a leap second, but only when `sec == 59`.

##### Panics

Panics on invalid hour, minute, second and/or millisecond.

#### pub const fn and_hms_milli_opt( &self, hour: u32, min: u32, sec: u32, milli: u32 ) -> Option<NaiveDateTime>

Makes a new `NaiveDateTime` from the current date, hour, minute, second and millisecond.

The millisecond part is allowed to exceed 1,000 in order to represent a leap second, but only when `sec == 59`.

##### Errors

Returns `None` on invalid hour, minute, second and/or millisecond.

##### Example

```
use chrono::NaiveDate;

let d = NaiveDate::from_ymd_opt(2015, 6, 3).unwrap();
assert!(d.and_hms_milli_opt(12, 34, 56, 789).is_some());
assert!(d.and_hms_milli_opt(12, 34, 59, 1_789).is_some()); // leap second
assert!(d.and_hms_milli_opt(12, 34, 59, 2_789).is_none());
assert!(d.and_hms_milli_opt(12, 34, 60, 789).is_none());
assert!(d.and_hms_milli_opt(12, 60, 56, 789).is_none());
assert!(d.and_hms_milli_opt(24, 34, 56, 789).is_none());
```

#### pub const fn and_hms_micro( &self, hour: u32, min: u32, sec: u32, micro: u32 ) -> NaiveDateTime

Deprecated since 0.4.23: use `and_hms_micro_opt()` instead

Makes a new `NaiveDateTime` from the current date, hour, minute, second and microsecond.

The microsecond part is allowed to exceed 1,000,000 in order to represent a leap second, but only when `sec == 59`.

##### Panics

Panics on invalid hour, minute, second and/or microsecond.
##### Example

```
use chrono::{NaiveDate, NaiveDateTime, Datelike, Timelike, Weekday};

let d = NaiveDate::from_ymd_opt(2015, 6, 3).unwrap();

let dt: NaiveDateTime = d.and_hms_micro_opt(12, 34, 56, 789_012).unwrap();
assert_eq!(dt.year(), 2015);
assert_eq!(dt.weekday(), Weekday::Wed);
assert_eq!(dt.second(), 56);
assert_eq!(dt.nanosecond(), 789_012_000);
```

#### pub const fn and_hms_micro_opt( &self, hour: u32, min: u32, sec: u32, micro: u32 ) -> Option<NaiveDateTime>

Makes a new `NaiveDateTime` from the current date, hour, minute, second and microsecond.

The microsecond part is allowed to exceed 1,000,000 in order to represent a leap second, but only when `sec == 59`.

##### Errors

Returns `None` on invalid hour, minute, second and/or microsecond.

##### Example

```
use chrono::NaiveDate;

let d = NaiveDate::from_ymd_opt(2015, 6, 3).unwrap();
assert!(d.and_hms_micro_opt(12, 34, 56, 789_012).is_some());
assert!(d.and_hms_micro_opt(12, 34, 59, 1_789_012).is_some()); // leap second
assert!(d.and_hms_micro_opt(12, 34, 59, 2_789_012).is_none());
assert!(d.and_hms_micro_opt(12, 34, 60, 789_012).is_none());
assert!(d.and_hms_micro_opt(12, 60, 56, 789_012).is_none());
assert!(d.and_hms_micro_opt(24, 34, 56, 789_012).is_none());
```

#### pub const fn and_hms_nano( &self, hour: u32, min: u32, sec: u32, nano: u32 ) -> NaiveDateTime

Deprecated since 0.4.23: use `and_hms_nano_opt()` instead

Makes a new `NaiveDateTime` from the current date, hour, minute, second and nanosecond.

The nanosecond part is allowed to exceed 1,000,000,000 in order to represent a leap second, but only when `sec == 59`.

##### Panics

Panics on invalid hour, minute, second and/or nanosecond.

#### pub const fn and_hms_nano_opt( &self, hour: u32, min: u32, sec: u32, nano: u32 ) -> Option<NaiveDateTime>

Makes a new `NaiveDateTime` from the current date, hour, minute, second and nanosecond.

The nanosecond part is allowed to exceed 1,000,000,000 in order to represent a leap second, but only when `sec == 59`.
##### Errors

Returns `None` on invalid hour, minute, second and/or nanosecond.

##### Example

```
use chrono::NaiveDate;

let d = NaiveDate::from_ymd_opt(2015, 6, 3).unwrap();
assert!(d.and_hms_nano_opt(12, 34, 56, 789_012_345).is_some());
assert!(d.and_hms_nano_opt(12, 34, 59, 1_789_012_345).is_some()); // leap second
assert!(d.and_hms_nano_opt(12, 34, 59, 2_789_012_345).is_none());
assert!(d.and_hms_nano_opt(12, 34, 60, 789_012_345).is_none());
assert!(d.and_hms_nano_opt(12, 60, 56, 789_012_345).is_none());
assert!(d.and_hms_nano_opt(24, 34, 56, 789_012_345).is_none());
```

#### pub const fn succ(&self) -> NaiveDate

Deprecated since 0.4.23: use `succ_opt()` instead

Makes a new `NaiveDate` for the next calendar date.

##### Panics

Panics when `self` is the last representable date.

#### pub const fn succ_opt(&self) -> Option<NaiveDate>

Makes a new `NaiveDate` for the next calendar date.

##### Errors

Returns `None` when `self` is the last representable date.

##### Example

```
use chrono::NaiveDate;

assert_eq!(NaiveDate::from_ymd_opt(2015, 6, 3).unwrap().succ_opt(),
           Some(NaiveDate::from_ymd_opt(2015, 6, 4).unwrap()));
assert_eq!(NaiveDate::MAX.succ_opt(), None);
```

#### pub const fn pred(&self) -> NaiveDate

Deprecated since 0.4.23: use `pred_opt()` instead

Makes a new `NaiveDate` for the previous calendar date.

##### Panics

Panics when `self` is the first representable date.

#### pub const fn pred_opt(&self) -> Option<NaiveDate>

Makes a new `NaiveDate` for the previous calendar date.

##### Errors

Returns `None` when `self` is the first representable date.

##### Example

```
use chrono::NaiveDate;

assert_eq!(NaiveDate::from_ymd_opt(2015, 6, 3).unwrap().pred_opt(),
           Some(NaiveDate::from_ymd_opt(2015, 6, 2).unwrap()));
assert_eq!(NaiveDate::MIN.pred_opt(), None);
```

#### pub fn checked_add_signed(self, rhs: Duration) -> Option<NaiveDate>

Adds the number of whole days in the given `Duration` to the current date.
##### Errors

Returns `None` if the resulting date would be out of range.

##### Example

```
use chrono::{Duration, NaiveDate};

let d = NaiveDate::from_ymd_opt(2015, 9, 5).unwrap();
assert_eq!(d.checked_add_signed(Duration::days(40)),
           Some(NaiveDate::from_ymd_opt(2015, 10, 15).unwrap()));
assert_eq!(d.checked_add_signed(Duration::days(-40)),
           Some(NaiveDate::from_ymd_opt(2015, 7, 27).unwrap()));
assert_eq!(d.checked_add_signed(Duration::days(1_000_000_000)), None);
assert_eq!(d.checked_add_signed(Duration::days(-1_000_000_000)), None);
assert_eq!(NaiveDate::MAX.checked_add_signed(Duration::days(1)), None);
```

#### pub fn checked_sub_signed(self, rhs: Duration) -> Option<NaiveDate>

Subtracts the number of whole days in the given `Duration` from the current date.

##### Errors

Returns `None` if the resulting date would be out of range.

##### Example

```
use chrono::{Duration, NaiveDate};

let d = NaiveDate::from_ymd_opt(2015, 9, 5).unwrap();
assert_eq!(d.checked_sub_signed(Duration::days(40)),
           Some(NaiveDate::from_ymd_opt(2015, 7, 27).unwrap()));
assert_eq!(d.checked_sub_signed(Duration::days(-40)),
           Some(NaiveDate::from_ymd_opt(2015, 10, 15).unwrap()));
assert_eq!(d.checked_sub_signed(Duration::days(1_000_000_000)), None);
assert_eq!(d.checked_sub_signed(Duration::days(-1_000_000_000)), None);
assert_eq!(NaiveDate::MIN.checked_sub_signed(Duration::days(1)), None);
```

#### pub fn signed_duration_since(self, rhs: NaiveDate) -> Duration

Subtracts another `NaiveDate` from the current date. Returns a `Duration` of integral numbers.

This does not overflow or underflow at all, as all possible output fits in the range of `Duration`.
##### Example

```
use chrono::{Duration, NaiveDate};

let from_ymd = |y, m, d| NaiveDate::from_ymd_opt(y, m, d).unwrap();
let since = NaiveDate::signed_duration_since;

assert_eq!(since(from_ymd(2014, 1, 1), from_ymd(2014, 1, 1)), Duration::zero());
assert_eq!(since(from_ymd(2014, 1, 1), from_ymd(2013, 12, 31)), Duration::days(1));
assert_eq!(since(from_ymd(2014, 1, 1), from_ymd(2014, 1, 2)), Duration::days(-1));
assert_eq!(since(from_ymd(2014, 1, 1), from_ymd(2013, 9, 23)), Duration::days(100));
assert_eq!(since(from_ymd(2014, 1, 1), from_ymd(2013, 1, 1)), Duration::days(365));
assert_eq!(since(from_ymd(2014, 1, 1), from_ymd(2010, 1, 1)), Duration::days(365*4 + 1));
assert_eq!(since(from_ymd(2014, 1, 1), from_ymd(1614, 1, 1)), Duration::days(365*400 + 97));
```

#### pub const fn years_since(&self, base: NaiveDate) -> Option<u32>

Returns the number of whole years from the given `base` until `self`.

##### Errors

Returns `None` if `base > self`.

#### pub fn format_with_items<'a, I, B>(&self, items: I) -> DelayedFormat<I> where I: Iterator<Item = B> + Clone, B: Borrow<Item<'a>>,

Formats the date with the specified formatting items. Otherwise it is the same as the ordinary `format` method.

The `Iterator` of items should be `Clone`able, since the resulting `DelayedFormat` value may be formatted multiple times.

##### Example

```
use chrono::NaiveDate;
use chrono::format::strftime::StrftimeItems;

let fmt = StrftimeItems::new("%Y-%m-%d");
let d = NaiveDate::from_ymd_opt(2015, 9, 5).unwrap();
assert_eq!(d.format_with_items(fmt.clone()).to_string(), "2015-09-05");
assert_eq!(d.format("%Y-%m-%d").to_string(), "2015-09-05");
```

The resulting `DelayedFormat` can be formatted directly via the `Display` trait.

```
assert_eq!(format!("{}", d.format_with_items(fmt)), "2015-09-05");
```

#### pub fn format<'a>(&self, fmt: &'a str) -> DelayedFormat<StrftimeItems<'a>>

Formats the date with the specified format string. See the `format::strftime` module on the supported escape sequences.
This returns a `DelayedFormat`, which gets converted to a string only when actual formatting happens. You may use the `to_string` method to get a `String`, or just feed it into `print!` and other formatting macros. (In this way it avoids the redundant memory allocation.)

A wrong format string does *not* issue an error immediately. Rather, converting or formatting the `DelayedFormat` fails. You are recommended to immediately use `DelayedFormat` for this reason.

##### Example

```
use chrono::NaiveDate;

let d = NaiveDate::from_ymd_opt(2015, 9, 5).unwrap();
assert_eq!(d.format("%Y-%m-%d").to_string(), "2015-09-05");
assert_eq!(d.format("%A, %-d %B, %C%y").to_string(), "Saturday, 5 September, 2015");
```

The resulting `DelayedFormat` can be formatted directly via the `Display` trait.

```
assert_eq!(format!("{}", d.format("%Y-%m-%d")), "2015-09-05");
assert_eq!(format!("{}", d.format("%A, %-d %B, %C%y")), "Saturday, 5 September, 2015");
```

#### pub const fn iter_days(&self) -> NaiveDateDaysIterator

Returns an iterator that steps by days across all representable dates.

##### Example

```
let expected = [
    NaiveDate::from_ymd_opt(2016, 2, 27).unwrap(),
    NaiveDate::from_ymd_opt(2016, 2, 28).unwrap(),
    NaiveDate::from_ymd_opt(2016, 2, 29).unwrap(),
    NaiveDate::from_ymd_opt(2016, 3, 1).unwrap(),
];

let mut count = 0;
for (idx, d) in NaiveDate::from_ymd_opt(2016, 2, 27).unwrap().iter_days().take(4).enumerate() {
    assert_eq!(d, expected[idx]);
    count += 1;
}
assert_eq!(count, 4);

for d in NaiveDate::from_ymd_opt(2016, 3, 1).unwrap().iter_days().rev().take(4) {
    count -= 1;
    assert_eq!(d, expected[count]);
}
```

#### pub const fn iter_weeks(&self) -> NaiveDateWeeksIterator

Returns an iterator that steps by weeks across all representable dates.
##### Example

```
let expected = [
    NaiveDate::from_ymd_opt(2016, 2, 27).unwrap(),
    NaiveDate::from_ymd_opt(2016, 3, 5).unwrap(),
    NaiveDate::from_ymd_opt(2016, 3, 12).unwrap(),
    NaiveDate::from_ymd_opt(2016, 3, 19).unwrap(),
];

let mut count = 0;
for (idx, d) in NaiveDate::from_ymd_opt(2016, 2, 27).unwrap().iter_weeks().take(4).enumerate() {
    assert_eq!(d, expected[idx]);
    count += 1;
}
assert_eq!(count, 4);

for d in NaiveDate::from_ymd_opt(2016, 3, 19).unwrap().iter_weeks().rev().take(4) {
    count -= 1;
    assert_eq!(d, expected[count]);
}
```

#### pub const fn week(&self, start: Weekday) -> NaiveWeek

Returns the `NaiveWeek` that the date belongs to, starting with the `Weekday` specified.

#### pub const fn leap_year(&self) -> bool

Returns `true` if this is a leap year.

```
assert_eq!(NaiveDate::from_ymd_opt(2000, 1, 1).unwrap().leap_year(), true);
assert_eq!(NaiveDate::from_ymd_opt(2001, 1, 1).unwrap().leap_year(), false);
assert_eq!(NaiveDate::from_ymd_opt(2002, 1, 1).unwrap().leap_year(), false);
assert_eq!(NaiveDate::from_ymd_opt(2003, 1, 1).unwrap().leap_year(), false);
assert_eq!(NaiveDate::from_ymd_opt(2004, 1, 1).unwrap().leap_year(), true);
assert_eq!(NaiveDate::from_ymd_opt(2100, 1, 1).unwrap().leap_year(), false);
```

#### pub const MIN: NaiveDate = _

The minimum possible `NaiveDate` (January 1, 262145 BCE).

#### pub const MAX: NaiveDate = _

The maximum possible `NaiveDate` (December 31, 262143 CE).

Trait Implementations
---

### impl Add<Days> for NaiveDate

#### type Output = NaiveDate

The resulting type after applying the `+` operator.

#### fn add(self, days: Days) -> <NaiveDate as Add<Days>>::Output

Performs the `+` operation.

### impl Add<Duration> for NaiveDate

An addition of `Duration` to `NaiveDate` discards the fractional days, rounding to the closest integral number of days towards `Duration::zero()`.

Panics on underflow or overflow. Use `NaiveDate::checked_add_signed` to detect that.
#### Example

```
use chrono::{Duration, NaiveDate};

let from_ymd = |y, m, d| NaiveDate::from_ymd_opt(y, m, d).unwrap();

assert_eq!(from_ymd(2014, 1, 1) + Duration::zero(), from_ymd(2014, 1, 1));
assert_eq!(from_ymd(2014, 1, 1) + Duration::seconds(86399), from_ymd(2014, 1, 1));
assert_eq!(from_ymd(2014, 1, 1) + Duration::seconds(-86399), from_ymd(2014, 1, 1));
assert_eq!(from_ymd(2014, 1, 1) + Duration::days(1), from_ymd(2014, 1, 2));
assert_eq!(from_ymd(2014, 1, 1) + Duration::days(-1), from_ymd(2013, 12, 31));
assert_eq!(from_ymd(2014, 1, 1) + Duration::days(364), from_ymd(2014, 12, 31));
assert_eq!(from_ymd(2014, 1, 1) + Duration::days(365*4 + 1), from_ymd(2018, 1, 1));
assert_eq!(from_ymd(2014, 1, 1) + Duration::days(365*400 + 97), from_ymd(2414, 1, 1));
```

#### type Output = NaiveDate

The resulting type after applying the `+` operator.

#### fn add(self, rhs: Duration) -> NaiveDate

Performs the `+` operation.

### impl Add<Months> for NaiveDate

#### fn add(self, months: Months) -> <NaiveDate as Add<Months>>::Output

An addition of months to `NaiveDate` clamped to valid days in resulting month.

##### Panics

Panics if the resulting date would be out of range.

##### Example

```
use chrono::{NaiveDate, Months};

let from_ymd = |y, m, d| NaiveDate::from_ymd_opt(y, m, d).unwrap();

assert_eq!(from_ymd(2014, 1, 1) + Months::new(1), from_ymd(2014, 2, 1));
assert_eq!(from_ymd(2014, 1, 1) + Months::new(11), from_ymd(2014, 12, 1));
assert_eq!(from_ymd(2014, 1, 1) + Months::new(12), from_ymd(2015, 1, 1));
assert_eq!(from_ymd(2014, 1, 1) + Months::new(13), from_ymd(2015, 2, 1));
assert_eq!(from_ymd(2014, 1, 31) + Months::new(1), from_ymd(2014, 2, 28));
assert_eq!(from_ymd(2020, 1, 31) + Months::new(1), from_ymd(2020, 2, 29));
```

#### type Output = NaiveDate

The resulting type after applying the `+` operator.

### impl AddAssign<Duration> for NaiveDate

#### fn add_assign(&mut self, rhs: Duration)

Performs the `+=` operation.

### impl Clone for NaiveDate

#### fn clone(&self) -> NaiveDate

Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

### impl Datelike for NaiveDate

#### fn year(&self) -> i32

Returns the year number in the calendar date.

##### Example

```
use chrono::{NaiveDate, Datelike};

assert_eq!(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().year(), 2015);
assert_eq!(NaiveDate::from_ymd_opt(-308, 3, 14).unwrap().year(), -308); // 309 BCE
```

#### fn month(&self) -> u32

Returns the month number starting from 1.

The return value ranges from 1 to 12.

##### Example

```
use chrono::{NaiveDate, Datelike};

assert_eq!(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().month(), 9);
assert_eq!(NaiveDate::from_ymd_opt(-308, 3, 14).unwrap().month(), 3);
```

#### fn month0(&self) -> u32

Returns the month number starting from 0.

The return value ranges from 0 to 11.

##### Example

```
use chrono::{NaiveDate, Datelike};

assert_eq!(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().month0(), 8);
assert_eq!(NaiveDate::from_ymd_opt(-308, 3, 14).unwrap().month0(), 2);
```

#### fn day(&self) -> u32

Returns the day of month starting from 1.

The return value ranges from 1 to 31. (The last day of month differs by months.)

##### Example

```
use chrono::{NaiveDate, Datelike};

assert_eq!(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().day(), 8);
assert_eq!(NaiveDate::from_ymd_opt(-308, 3, 14).unwrap().day(), 14);
```

Combined with `NaiveDate::pred`, one can determine the number of days in a particular month. (Note that this panics when `year` is out of range.)

```
use chrono::{NaiveDate, Datelike};

fn ndays_in_month(year: i32, month: u32) -> u32 {
    // the first day of the next month...
    let (y, m) = if month == 12 { (year + 1, 1) } else { (year, month + 1) };
    let d = NaiveDate::from_ymd_opt(y, m, 1).unwrap();

    // ...is preceded by the last day of the original month
    d.pred_opt().unwrap().day()
}

assert_eq!(ndays_in_month(2015, 8), 31);
assert_eq!(ndays_in_month(2015, 9), 30);
assert_eq!(ndays_in_month(2015, 12), 31);
assert_eq!(ndays_in_month(2016, 2), 29);
assert_eq!(ndays_in_month(2017, 2), 28);
```

#### fn day0(&self) -> u32

Returns the day of month starting from 0.

The return value ranges from 0 to 30. (The last day of month differs by months.)

##### Example

```
use chrono::{NaiveDate, Datelike};

assert_eq!(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().day0(), 7);
assert_eq!(NaiveDate::from_ymd_opt(-308, 3, 14).unwrap().day0(), 13);
```

#### fn ordinal(&self) -> u32

Returns the day of year starting from 1.

The return value ranges from 1 to 366. (The last day of year differs by years.)

##### Example

```
use chrono::{NaiveDate, Datelike};

assert_eq!(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().ordinal(), 251);
assert_eq!(NaiveDate::from_ymd_opt(-308, 3, 14).unwrap().ordinal(), 74);
```

Combined with `NaiveDate::pred`, one can determine the number of days in a particular year. (Note that this panics when `year` is out of range.)

```
use chrono::{NaiveDate, Datelike};

fn ndays_in_year(year: i32) -> u32 {
    // the first day of the next year...
    let d = NaiveDate::from_ymd_opt(year + 1, 1, 1).unwrap();

    // ...is preceded by the last day of the original year
    d.pred_opt().unwrap().ordinal()
}

assert_eq!(ndays_in_year(2015), 365);
assert_eq!(ndays_in_year(2016), 366);
assert_eq!(ndays_in_year(2017), 365);
assert_eq!(ndays_in_year(2000), 366);
assert_eq!(ndays_in_year(2100), 365);
```

#### fn ordinal0(&self) -> u32

Returns the day of year starting from 0.

The return value ranges from 0 to 365. (The last day of year differs by years.)
##### Example

```
use chrono::{NaiveDate, Datelike};

assert_eq!(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().ordinal0(), 250);
assert_eq!(NaiveDate::from_ymd_opt(-308, 3, 14).unwrap().ordinal0(), 73);
```

#### fn weekday(&self) -> Weekday

Returns the day of week.

##### Example

```
use chrono::{NaiveDate, Datelike, Weekday};

assert_eq!(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().weekday(), Weekday::Tue);
assert_eq!(NaiveDate::from_ymd_opt(-308, 3, 14).unwrap().weekday(), Weekday::Fri);
```

#### fn with_year(&self, year: i32) -> Option<NaiveDate>

Makes a new `NaiveDate` with the year number changed, while keeping the same month and day.

##### Errors

Returns `None` if the resulting date does not exist, or when the `NaiveDate` would be out of range.

##### Example

```
use chrono::{NaiveDate, Datelike};

assert_eq!(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().with_year(2016),
           Some(NaiveDate::from_ymd_opt(2016, 9, 8).unwrap()));
assert_eq!(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().with_year(-308),
           Some(NaiveDate::from_ymd_opt(-308, 9, 8).unwrap()));
```

A leap day (February 29) is a good example that this method can return `None`.

```
assert!(NaiveDate::from_ymd_opt(2016, 2, 29).unwrap().with_year(2015).is_none());
assert!(NaiveDate::from_ymd_opt(2016, 2, 29).unwrap().with_year(2020).is_some());
```

#### fn with_month(&self, month: u32) -> Option<NaiveDate>

Makes a new `NaiveDate` with the month number (starting from 1) changed.

##### Errors

Returns `None` if the resulting date does not exist, or if the value for `month` is invalid.
##### Example

```
use chrono::{NaiveDate, Datelike};

assert_eq!(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().with_month(10),
           Some(NaiveDate::from_ymd_opt(2015, 10, 8).unwrap()));
assert_eq!(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().with_month(13), None); // no month 13
assert_eq!(NaiveDate::from_ymd_opt(2015, 9, 30).unwrap().with_month(2), None); // no February 30
```

#### fn with_month0(&self, month0: u32) -> Option<NaiveDate>

Makes a new `NaiveDate` with the month number (starting from 0) changed.

##### Errors

Returns `None` if the resulting date does not exist, or if the value for `month0` is invalid.

##### Example

```
use chrono::{NaiveDate, Datelike};

assert_eq!(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().with_month0(9),
           Some(NaiveDate::from_ymd_opt(2015, 10, 8).unwrap()));
assert_eq!(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().with_month0(12), None); // no month 13
assert_eq!(NaiveDate::from_ymd_opt(2015, 9, 30).unwrap().with_month0(1), None); // no February 30
```

#### fn with_day(&self, day: u32) -> Option<NaiveDate>

Makes a new `NaiveDate` with the day of month (starting from 1) changed.

##### Errors

Returns `None` if the resulting date does not exist, or if the value for `day` is invalid.

##### Example

```
use chrono::{NaiveDate, Datelike};

assert_eq!(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().with_day(30),
           Some(NaiveDate::from_ymd_opt(2015, 9, 30).unwrap()));
assert_eq!(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().with_day(31), None); // no September 31
```

#### fn with_day0(&self, day0: u32) -> Option<NaiveDate>

Makes a new `NaiveDate` with the day of month (starting from 0) changed.

##### Errors

Returns `None` if the resulting date does not exist, or if the value for `day0` is invalid.
##### Example

```
use chrono::{NaiveDate, Datelike};

assert_eq!(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().with_day0(29),
           Some(NaiveDate::from_ymd_opt(2015, 9, 30).unwrap()));
assert_eq!(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().with_day0(30), None); // no September 31
```

#### fn with_ordinal(&self, ordinal: u32) -> Option<NaiveDate>

Makes a new `NaiveDate` with the day of year (starting from 1) changed.

##### Errors

Returns `None` if the resulting date does not exist, or if the value for `ordinal` is invalid.

##### Example

```
use chrono::{NaiveDate, Datelike};

assert_eq!(NaiveDate::from_ymd_opt(2015, 1, 1).unwrap().with_ordinal(60),
           Some(NaiveDate::from_ymd_opt(2015, 3, 1).unwrap()));
assert_eq!(NaiveDate::from_ymd_opt(2015, 1, 1).unwrap().with_ordinal(366),
           None); // 2015 had only 365 days
assert_eq!(NaiveDate::from_ymd_opt(2016, 1, 1).unwrap().with_ordinal(60),
           Some(NaiveDate::from_ymd_opt(2016, 2, 29).unwrap()));
assert_eq!(NaiveDate::from_ymd_opt(2016, 1, 1).unwrap().with_ordinal(366),
           Some(NaiveDate::from_ymd_opt(2016, 12, 31).unwrap()));
```

#### fn with_ordinal0(&self, ordinal0: u32) -> Option<NaiveDate>

Makes a new `NaiveDate` with the day of year (starting from 0) changed.

##### Errors

Returns `None` if the resulting date does not exist, or if the value for `ordinal0` is invalid.
##### Example

```
use chrono::{NaiveDate, Datelike};

assert_eq!(NaiveDate::from_ymd_opt(2015, 1, 1).unwrap().with_ordinal0(59),
           Some(NaiveDate::from_ymd_opt(2015, 3, 1).unwrap()));
assert_eq!(NaiveDate::from_ymd_opt(2015, 1, 1).unwrap().with_ordinal0(365),
           None); // 2015 had only 365 days
assert_eq!(NaiveDate::from_ymd_opt(2016, 1, 1).unwrap().with_ordinal0(59),
           Some(NaiveDate::from_ymd_opt(2016, 2, 29).unwrap()));
assert_eq!(NaiveDate::from_ymd_opt(2016, 1, 1).unwrap().with_ordinal0(365),
           Some(NaiveDate::from_ymd_opt(2016, 12, 31).unwrap()));
```

#### fn iso_week(&self) -> IsoWeek

Returns the ISO week.

#### fn year_ce(&self) -> (bool, u32)

Returns the absolute year number starting from 1 with a boolean flag, which is false when the year predates the epoch (BCE/BC) and true otherwise (CE/AD).

#### fn num_days_from_ce(&self) -> i32

Counts the days in the proleptic Gregorian calendar, with January 1, Year 1 (CE) as day 1.

### impl Debug for NaiveDate

The `Debug` output of the naive date `d` is the same as `d.format("%Y-%m-%d")`.

The string printed can be readily parsed via the `parse` method on `str`.

#### Example

```
use chrono::NaiveDate;

assert_eq!(format!("{:?}", NaiveDate::from_ymd_opt(2015, 9, 5).unwrap()), "2015-09-05");
assert_eq!(format!("{:?}", NaiveDate::from_ymd_opt( 0, 1, 1).unwrap()), "0000-01-01");
assert_eq!(format!("{:?}", NaiveDate::from_ymd_opt(9999, 12, 31).unwrap()), "9999-12-31");
```

ISO 8601 requires an explicit sign for years before 1 BCE or after 9999 CE.

```
assert_eq!(format!("{:?}", NaiveDate::from_ymd_opt( -1, 1, 1).unwrap()), "-0001-01-01");
assert_eq!(format!("{:?}", NaiveDate::from_ymd_opt(10000, 12, 31).unwrap()), "+10000-12-31");
```

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>

Formats the value using the given formatter.

### impl Default for NaiveDate

The default value for a NaiveDate is 1st of January 1970.
#### Example

```
use chrono::NaiveDate;

let default_date = NaiveDate::default();
assert_eq!(default_date, NaiveDate::from_ymd_opt(1970, 1, 1).unwrap());
```

#### fn default() -> NaiveDate

Returns the "default value" for a type.

### impl Display for NaiveDate

The `Display` output of the naive date `d` is the same as `d.format("%Y-%m-%d")`.

The string printed can be readily parsed via the `parse` method on `str`.

#### Example

```
use chrono::NaiveDate;

assert_eq!(format!("{}", NaiveDate::from_ymd_opt(2015, 9, 5).unwrap()), "2015-09-05");
assert_eq!(format!("{}", NaiveDate::from_ymd_opt( 0, 1, 1).unwrap()), "0000-01-01");
assert_eq!(format!("{}", NaiveDate::from_ymd_opt(9999, 12, 31).unwrap()), "9999-12-31");
```

ISO 8601 requires an explicit sign for years before 1 BCE or after 9999 CE.

```
assert_eq!(format!("{}", NaiveDate::from_ymd_opt( -1, 1, 1).unwrap()), "-0001-01-01");
assert_eq!(format!("{}", NaiveDate::from_ymd_opt(10000, 12, 31).unwrap()), "+10000-12-31");
```

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>

Formats the value using the given formatter.

### impl From<NaiveDateTime> for NaiveDate

#### fn from(naive_datetime: NaiveDateTime) -> NaiveDate

Converts to this type from the input type.

### impl FromStr for NaiveDate

Parsing a `str` into a `NaiveDate` uses the same format, `%Y-%m-%d`, as in `Debug` and `Display`.

#### Example

```
use chrono::NaiveDate;

let d = NaiveDate::from_ymd_opt(2015, 9, 18).unwrap();
assert_eq!("2015-09-18".parse::<NaiveDate>(), Ok(d));

let d = NaiveDate::from_ymd_opt(12345, 6, 7).unwrap();
assert_eq!("+12345-6-7".parse::<NaiveDate>(), Ok(d));

assert!("foo".parse::<NaiveDate>().is_err());
```

#### type Err = ParseError

The associated error which can be returned from parsing.

#### fn from_str(s: &str) -> Result<NaiveDate, ParseError>

Parses a string `s` to return a value of this type.

### impl Hash for NaiveDate

#### fn hash<__H>(&self, state: &mut __H) where __H: Hasher,

Feeds this value into the given `Hasher`.
#### fn hash_slice<H>(data: &[Self], state: &mut H)where H: Hasher, Self: Sized, Feeds a slice of this type into the given `Hasher`. #### fn cmp(&self, other: &NaiveDate) -> Ordering This method returns an `Ordering` between `self` and `other`. #### fn max(self, other: Self) -> Selfwhere Self: Sized, Compares and returns the maximum of two values. #### fn min(self, other: Self) -> Selfwhere Self: Sized, Compares and returns the minimum of two values. #### fn clamp(self, min: Self, max: Self) -> Selfwhere Self: Sized + PartialOrd<Self>, Restrict a value to a certain interval. #### fn eq(&self, other: &NaiveDate) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. #### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialOrd<NaiveDate> for NaiveDate #### fn partial_cmp(&self, other: &NaiveDate) -> Option<OrderingThis method returns an ordering between `self` and `other` values if one exists. #### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. #### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. #### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. #### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### type Output = NaiveDate The resulting type after applying the `-` operator.#### fn sub(self, days: Days) -> <NaiveDate as Sub<Days>>::Output Performs the `-` operation.
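The `Ord` and `PartialOrd` implementations listed above give plain chronological ordering, which behaves exactly like lexicographic comparison of `(year, month, day)` tuples; a std-only illustration:

```rust
fn main() {
    // NaiveDate ordering is chronological, i.e. lexicographic on (y, m, d).
    let earlier = (2015, 9, 5);
    let later = (2015, 10, 1);
    assert!(earlier < later); // the `<` operator, i.e. PartialOrd::lt
    assert_eq!(earlier.max(later), later); // Ord::max picks the later date
    // Ord::clamp restricts a date to an interval.
    assert_eq!(later.clamp((2015, 1, 1), (2015, 9, 30)), (2015, 9, 30));
}
```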
A subtraction of `Duration` from `NaiveDate` discards the fractional days, rounding to the closest integral number of days towards `Duration::zero()`. It is the same as the addition with a negated `Duration`. Panics on underflow or overflow. Use `NaiveDate::checked_sub_signed` to detect that. #### Example ``` use chrono::{Duration, NaiveDate}; let from_ymd = |y, m, d| NaiveDate::from_ymd_opt(y, m, d).unwrap(); assert_eq!(from_ymd(2014, 1, 1) - Duration::zero(), from_ymd(2014, 1, 1)); assert_eq!(from_ymd(2014, 1, 1) - Duration::seconds(86399), from_ymd(2014, 1, 1)); assert_eq!(from_ymd(2014, 1, 1) - Duration::seconds(-86399), from_ymd(2014, 1, 1)); assert_eq!(from_ymd(2014, 1, 1) - Duration::days(1), from_ymd(2013, 12, 31)); assert_eq!(from_ymd(2014, 1, 1) - Duration::days(-1), from_ymd(2014, 1, 2)); assert_eq!(from_ymd(2014, 1, 1) - Duration::days(364), from_ymd(2013, 1, 2)); assert_eq!(from_ymd(2014, 1, 1) - Duration::days(365*4 + 1), from_ymd(2010, 1, 1)); assert_eq!(from_ymd(2014, 1, 1) - Duration::days(365*400 + 97), from_ymd(1614, 1, 1)); ``` #### type Output = NaiveDate The resulting type after applying the `-` operator.#### fn sub(self, rhs: Duration) -> NaiveDate Performs the `-` operation. #### fn sub(self, months: Months) -> <NaiveDate as Sub<Months>>::Output A subtraction of Months from `NaiveDate` clamped to valid days in resulting month. ##### Panics Panics if the resulting date would be out of range. ##### Example ``` use chrono::{NaiveDate, Months}; let from_ymd = |y, m, d| NaiveDate::from_ymd_opt(y, m, d).unwrap(); assert_eq!(from_ymd(2014, 1, 1) - Months::new(11), from_ymd(2013, 2, 1)); assert_eq!(from_ymd(2014, 1, 1) - Months::new(12), from_ymd(2013, 1, 1)); assert_eq!(from_ymd(2014, 1, 1) - Months::new(13), from_ymd(2012, 12, 1)); ``` #### type Output = NaiveDate The resulting type after applying the `-` operator.### impl Sub<NaiveDate> for NaiveDate Subtracts another `NaiveDate` from the current date. Returns a `Duration` of integral numbers. 
This does not overflow or underflow at all, as all possible output fits in the range of `Duration`. The implementation is a wrapper around `NaiveDate::signed_duration_since`. #### Example ``` use chrono::{Duration, NaiveDate}; let from_ymd = |y, m, d| NaiveDate::from_ymd_opt(y, m, d).unwrap(); assert_eq!(from_ymd(2014, 1, 1) - from_ymd(2014, 1, 1), Duration::zero()); assert_eq!(from_ymd(2014, 1, 1) - from_ymd(2013, 12, 31), Duration::days(1)); assert_eq!(from_ymd(2014, 1, 1) - from_ymd(2014, 1, 2), Duration::days(-1)); assert_eq!(from_ymd(2014, 1, 1) - from_ymd(2013, 9, 23), Duration::days(100)); assert_eq!(from_ymd(2014, 1, 1) - from_ymd(2013, 1, 1), Duration::days(365)); assert_eq!(from_ymd(2014, 1, 1) - from_ymd(2010, 1, 1), Duration::days(365*4 + 1)); assert_eq!(from_ymd(2014, 1, 1) - from_ymd(1614, 1, 1), Duration::days(365*400 + 97)); ``` #### type Output = Duration The resulting type after applying the `-` operator.#### fn sub(self, rhs: NaiveDate) -> Duration Performs the `-` operation. #### fn sub_assign(&mut self, rhs: Duration) Performs the `-=` operation. ### impl Eq for NaiveDate ### impl StructuralEq for NaiveDate ### impl StructuralPartialEq for NaiveDate Auto Trait Implementations --- ### impl RefUnwindSafe for NaiveDate ### impl Send for NaiveDate ### impl Sync for NaiveDate ### impl Unpin for NaiveDate ### impl UnwindSafe for NaiveDate Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. Q: Eq + ?Sized, K: Borrow<Q> + ?Sized, #### fn equivalent(&self, key: &K) -> bool Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. 
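The whole-day `Duration` that `Sub<NaiveDate>` returns can be reproduced without chrono by differencing civil day numbers; a minimal std-only sketch using Howard Hinnant's `days_from_civil` algorithm (an independent re-derivation, not chrono's internal code):

```rust
// Days since 1970-01-01 in the proleptic Gregorian calendar
// (Howard Hinnant's days_from_civil algorithm).
fn days_from_civil(y: i32, m: u32, d: u32) -> i64 {
    let y = if m <= 2 { y - 1 } else { y };
    let era = (if y >= 0 { y } else { y - 399 }) / 400;
    let yoe = (y - era * 400) as i64; // year of era, in [0, 399]
    let mp = (if m > 2 { m - 3 } else { m + 9 }) as i64; // March-based month
    let doy = (153 * mp + 2) / 5 + d as i64 - 1; // day of year, in [0, 365]
    let doe = yoe * 365 + yoe / 4 - yoe / 100 + doy; // day of era
    era as i64 * 146_097 + doe - 719_468
}

// The date difference is then just a subtraction of day numbers,
// matching the assertions in the example above.
fn day_diff(a: (i32, u32, u32), b: (i32, u32, u32)) -> i64 {
    days_from_civil(a.0, a.1, a.2) - days_from_civil(b.0, b.1, b.2)
}

fn main() {
    assert_eq!(day_diff((2014, 1, 1), (2013, 12, 31)), 1);
    assert_eq!(day_diff((2014, 1, 1), (2013, 9, 23)), 100);
    assert_eq!(day_diff((2014, 1, 1), (1614, 1, 1)), 365 * 400 + 97);
}
```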
### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Struct rocket_launch_live::NaiveDateTime === ``` pub struct NaiveDateTime { /* private fields */ } ``` ISO 8601 combined date and time without timezone. Example --- `NaiveDateTime` is commonly created from `NaiveDate`. ``` use chrono::{NaiveDate, NaiveDateTime}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2016, 7, 8).unwrap().and_hms_opt(9, 10, 11).unwrap(); ``` You can use typical date-like and time-like methods, provided that relevant traits are in the scope.
``` use chrono::{Datelike, Timelike, Weekday}; assert_eq!(dt.weekday(), Weekday::Fri); assert_eq!(dt.num_seconds_from_midnight(), 33011); ``` Implementations --- ### impl NaiveDateTime #### pub const fn new(date: NaiveDate, time: NaiveTime) -> NaiveDateTime Makes a new `NaiveDateTime` from date and time components. Equivalent to `date.and_time(time)` and many other helper constructors on `NaiveDate`. ##### Example ``` use chrono::{NaiveDate, NaiveTime, NaiveDateTime}; let d = NaiveDate::from_ymd_opt(2015, 6, 3).unwrap(); let t = NaiveTime::from_hms_milli_opt(12, 34, 56, 789).unwrap(); let dt = NaiveDateTime::new(d, t); assert_eq!(dt.date(), d); assert_eq!(dt.time(), t); ``` #### pub fn from_timestamp(secs: i64, nsecs: u32) -> NaiveDateTime 👎Deprecated since 0.4.23: use `from_timestamp_opt()` insteadMakes a new `NaiveDateTime` corresponding to a UTC date and time, from the number of non-leap seconds since the midnight UTC on January 1, 1970 (aka “UNIX timestamp”) and the number of nanoseconds since the last whole non-leap second. For a non-naive version of this function see `TimeZone::timestamp`. The nanosecond part can exceed 1,000,000,000 in order to represent a leap second, but only when `secs % 60 == 59`. (The true “UNIX timestamp” cannot represent a leap second unambiguously.) ##### Panics Panics if the number of seconds would be out of range for a `NaiveDateTime` (more than ca. 262,000 years away from common era), and panics on an invalid nanosecond (2 seconds or more). #### pub fn from_timestamp_millis(millis: i64) -> Option<NaiveDateTimeCreates a new NaiveDateTime from milliseconds since the UNIX epoch. The UNIX epoch starts on midnight, January 1, 1970, UTC. ##### Errors Returns `None` if the number of milliseconds would be out of range for a `NaiveDateTime` (more than ca. 
262,000 years away from common era) ##### Example ``` use chrono::NaiveDateTime; let timestamp_millis: i64 = 1662921288000; //Sunday, September 11, 2022 6:34:48 PM let naive_datetime = NaiveDateTime::from_timestamp_millis(timestamp_millis); assert!(naive_datetime.is_some()); assert_eq!(timestamp_millis, naive_datetime.unwrap().timestamp_millis()); // Negative timestamps (before the UNIX epoch) are supported as well. let timestamp_millis: i64 = -2208936075000; //Mon Jan 01 1900 14:38:45 GMT+0000 let naive_datetime = NaiveDateTime::from_timestamp_millis(timestamp_millis); assert!(naive_datetime.is_some()); assert_eq!(timestamp_millis, naive_datetime.unwrap().timestamp_millis()); ``` #### pub fn from_timestamp_micros(micros: i64) -> Option<NaiveDateTimeCreates a new NaiveDateTime from microseconds since the UNIX epoch. The UNIX epoch starts on midnight, January 1, 1970, UTC. ##### Errors Returns `None` if the number of microseconds would be out of range for a `NaiveDateTime` (more than ca. 262,000 years away from common era) ##### Example ``` use chrono::NaiveDateTime; let timestamp_micros: i64 = 1662921288000000; //Sunday, September 11, 2022 6:34:48 PM let naive_datetime = NaiveDateTime::from_timestamp_micros(timestamp_micros); assert!(naive_datetime.is_some()); assert_eq!(timestamp_micros, naive_datetime.unwrap().timestamp_micros()); // Negative timestamps (before the UNIX epoch) are supported as well. 
let timestamp_micros: i64 = -2208936075000000; //Mon Jan 01 1900 14:38:45 GMT+0000 let naive_datetime = NaiveDateTime::from_timestamp_micros(timestamp_micros); assert!(naive_datetime.is_some()); assert_eq!(timestamp_micros, naive_datetime.unwrap().timestamp_micros()); ``` #### pub fn from_timestamp_opt(secs: i64, nsecs: u32) -> Option<NaiveDateTimeMakes a new `NaiveDateTime` corresponding to a UTC date and time, from the number of non-leap seconds since the midnight UTC on January 1, 1970 (aka “UNIX timestamp”) and the number of nanoseconds since the last whole non-leap second. The nanosecond part can exceed 1,000,000,000 in order to represent a leap second, but only when `secs % 60 == 59`. (The true “UNIX timestamp” cannot represent a leap second unambiguously.) ##### Errors Returns `None` if the number of seconds would be out of range for a `NaiveDateTime` (more than ca. 262,000 years away from common era), or on an invalid nanosecond (2 seconds or more). ##### Example ``` use chrono::NaiveDateTime; use std::i64; let from_timestamp_opt = NaiveDateTime::from_timestamp_opt; assert!(from_timestamp_opt(0, 0).is_some()); assert!(from_timestamp_opt(0, 999_999_999).is_some()); assert!(from_timestamp_opt(0, 1_500_000_000).is_none()); // invalid leap second assert!(from_timestamp_opt(59, 1_500_000_000).is_some()); // leap second assert!(from_timestamp_opt(59, 2_000_000_000).is_none()); assert!(from_timestamp_opt(i64::MAX, 0).is_none()); ``` #### pub fn parse_from_str(s: &str, fmt: &str) -> Result<NaiveDateTime, ParseErrorParses a string with the specified format string and returns a new `NaiveDateTime`. See the `format::strftime` module on the supported escape sequences.
##### Example ``` use chrono::{NaiveDateTime, NaiveDate}; let parse_from_str = NaiveDateTime::parse_from_str; assert_eq!(parse_from_str("2015-09-05 23:56:04", "%Y-%m-%d %H:%M:%S"), Ok(NaiveDate::from_ymd_opt(2015, 9, 5).unwrap().and_hms_opt(23, 56, 4).unwrap())); assert_eq!(parse_from_str("5sep2015pm012345.6789", "%d%b%Y%p%I%M%S%.f"), Ok(NaiveDate::from_ymd_opt(2015, 9, 5).unwrap().and_hms_micro_opt(13, 23, 45, 678_900).unwrap())); ``` Offset is ignored for the purpose of parsing. ``` assert_eq!(parse_from_str("2014-5-17T12:34:56+09:30", "%Y-%m-%dT%H:%M:%S%z"), Ok(NaiveDate::from_ymd_opt(2014, 5, 17).unwrap().and_hms_opt(12, 34, 56).unwrap())); ``` Leap seconds are correctly handled by treating any time of the form `hh:mm:60` as a leap second. (This equally applies to the formatting, so the round trip is possible.) ``` assert_eq!(parse_from_str("2015-07-01 08:59:60.123", "%Y-%m-%d %H:%M:%S%.f"), Ok(NaiveDate::from_ymd_opt(2015, 7, 1).unwrap().and_hms_milli_opt(8, 59, 59, 1_123).unwrap())); ``` Missing seconds are assumed to be zero, but out-of-bound times or insufficient fields are errors otherwise. ``` assert_eq!(parse_from_str("94/9/4 7:15", "%y/%m/%d %H:%M"), Ok(NaiveDate::from_ymd_opt(1994, 9, 4).unwrap().and_hms_opt(7, 15, 0).unwrap())); assert!(parse_from_str("04m33s", "%Mm%Ss").is_err()); assert!(parse_from_str("94/9/4 12", "%y/%m/%d %H").is_err()); assert!(parse_from_str("94/9/4 17:60", "%y/%m/%d %H:%M").is_err()); assert!(parse_from_str("94/9/4 24:00:00", "%y/%m/%d %H:%M:%S").is_err()); ``` All parsed fields should be consistent to each other, otherwise it’s an error. 
``` let fmt = "%Y-%m-%d %H:%M:%S = UNIX timestamp %s"; assert!(parse_from_str("2001-09-09 01:46:39 = UNIX timestamp 999999999", fmt).is_ok()); assert!(parse_from_str("1970-01-01 00:00:00 = UNIX timestamp 1", fmt).is_err()); ``` Years before 1 BCE or after 9999 CE require an initial sign. ``` let fmt = "%Y-%m-%d %H:%M:%S"; assert!(parse_from_str("10000-09-09 01:46:39", fmt).is_err()); assert!(parse_from_str("+10000-09-09 01:46:39", fmt).is_ok()); ``` #### pub fn parse_and_remainder<'a>( s: &'a str, fmt: &str ) -> Result<(NaiveDateTime, &'a str), ParseErrorParses a string with the specified format string and returns a new `NaiveDateTime`, and a slice with the remaining portion of the string. See the `format::strftime` module on the supported escape sequences. Similar to `parse_from_str`. ##### Example ``` let (datetime, remainder) = NaiveDateTime::parse_and_remainder( "2015-02-18 23:16:09 trailing text", "%Y-%m-%d %H:%M:%S").unwrap(); assert_eq!( datetime, NaiveDate::from_ymd_opt(2015, 2, 18).unwrap().and_hms_opt(23, 16, 9).unwrap() ); assert_eq!(remainder, " trailing text"); ``` #### pub const fn date(&self) -> NaiveDate Retrieves a date component. ##### Example ``` use chrono::NaiveDate; let dt = NaiveDate::from_ymd_opt(2016, 7, 8).unwrap().and_hms_opt(9, 10, 11).unwrap(); assert_eq!(dt.date(), NaiveDate::from_ymd_opt(2016, 7, 8).unwrap()); ``` #### pub const fn time(&self) -> NaiveTime Retrieves a time component. ##### Example ``` use chrono::{NaiveDate, NaiveTime}; let dt = NaiveDate::from_ymd_opt(2016, 7, 8).unwrap().and_hms_opt(9, 10, 11).unwrap(); assert_eq!(dt.time(), NaiveTime::from_hms_opt(9, 10, 11).unwrap()); ``` #### pub fn timestamp(&self) -> i64 Returns the number of non-leap seconds since the midnight on January 1, 1970. Note that this does *not* account for the timezone! The true “UNIX timestamp” would count seconds since the midnight *UTC* on the epoch.
##### Example ``` use chrono::NaiveDate; let dt = NaiveDate::from_ymd_opt(1970, 1, 1).unwrap().and_hms_milli_opt(0, 0, 1, 980).unwrap(); assert_eq!(dt.timestamp(), 1); let dt = NaiveDate::from_ymd_opt(2001, 9, 9).unwrap().and_hms_opt(1, 46, 40).unwrap(); assert_eq!(dt.timestamp(), 1_000_000_000); let dt = NaiveDate::from_ymd_opt(1969, 12, 31).unwrap().and_hms_opt(23, 59, 59).unwrap(); assert_eq!(dt.timestamp(), -1); let dt = NaiveDate::from_ymd_opt(-1, 1, 1).unwrap().and_hms_opt(0, 0, 0).unwrap(); assert_eq!(dt.timestamp(), -62198755200); ``` #### pub fn timestamp_millis(&self) -> i64 Returns the number of non-leap *milliseconds* since midnight on January 1, 1970. Note that this does *not* account for the timezone! The true “UNIX timestamp” would count seconds since the midnight *UTC* on the epoch. ##### Example ``` use chrono::NaiveDate; let dt = NaiveDate::from_ymd_opt(1970, 1, 1).unwrap().and_hms_milli_opt(0, 0, 1, 444).unwrap(); assert_eq!(dt.timestamp_millis(), 1_444); let dt = NaiveDate::from_ymd_opt(2001, 9, 9).unwrap().and_hms_milli_opt(1, 46, 40, 555).unwrap(); assert_eq!(dt.timestamp_millis(), 1_000_000_000_555); let dt = NaiveDate::from_ymd_opt(1969, 12, 31).unwrap().and_hms_milli_opt(23, 59, 59, 100).unwrap(); assert_eq!(dt.timestamp_millis(), -900); ``` #### pub fn timestamp_micros(&self) -> i64 Returns the number of non-leap *microseconds* since midnight on January 1, 1970. Note that this does *not* account for the timezone! The true “UNIX timestamp” would count seconds since the midnight *UTC* on the epoch. 
##### Example ``` use chrono::NaiveDate; let dt = NaiveDate::from_ymd_opt(1970, 1, 1).unwrap().and_hms_micro_opt(0, 0, 1, 444).unwrap(); assert_eq!(dt.timestamp_micros(), 1_000_444); let dt = NaiveDate::from_ymd_opt(2001, 9, 9).unwrap().and_hms_micro_opt(1, 46, 40, 555).unwrap(); assert_eq!(dt.timestamp_micros(), 1_000_000_000_000_555); ``` #### pub fn timestamp_nanos(&self) -> i64 👎Deprecated since 0.4.31: use `timestamp_nanos_opt()` insteadReturns the number of non-leap *nanoseconds* since midnight on January 1, 1970. Note that this does *not* account for the timezone! The true “UNIX timestamp” would count seconds since the midnight *UTC* on the epoch. ##### Panics An `i64` with nanosecond precision can span a range of ~584 years. This function panics on an out of range `NaiveDateTime`. The dates that can be represented as nanoseconds are between 1677-09-21T00:12:44.0 and 2262-04-11T23:47:16.854775804. #### pub fn timestamp_nanos_opt(&self) -> Option<i64Returns the number of non-leap *nanoseconds* since midnight on January 1, 1970. Note that this does *not* account for the timezone! The true “UNIX timestamp” would count seconds since the midnight *UTC* on the epoch. ##### Errors An `i64` with nanosecond precision can span a range of ~584 years. This function returns `None` on an out of range `NaiveDateTime`. The dates that can be represented as nanoseconds are between 1677-09-21T00:12:44.0 and 2262-04-11T23:47:16.854775804. 
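The roughly 584-year window quoted above follows directly from `i64` arithmetic: `i64::MAX` nanoseconds is about 292 years on each side of the epoch. A quick std-only check:

```rust
fn main() {
    // i64::MAX nanoseconds, expressed in seconds and then Julian years.
    let max_secs = i64::MAX / 1_000_000_000; // ~9.2e9 seconds
    let years = max_secs as f64 / (365.25 * 86_400.0);
    // ~292 years forward, and symmetrically backward: ~584 years total.
    assert!((years - 292.3).abs() < 0.1);
}
```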
##### Example ``` use chrono::{NaiveDate, NaiveDateTime}; let dt = NaiveDate::from_ymd_opt(1970, 1, 1).unwrap().and_hms_nano_opt(0, 0, 1, 444).unwrap(); assert_eq!(dt.timestamp_nanos_opt(), Some(1_000_000_444)); let dt = NaiveDate::from_ymd_opt(2001, 9, 9).unwrap().and_hms_nano_opt(1, 46, 40, 555).unwrap(); const A_BILLION: i64 = 1_000_000_000; let nanos = dt.timestamp_nanos_opt().unwrap(); assert_eq!(nanos, 1_000_000_000_000_000_555); assert_eq!( Some(dt), NaiveDateTime::from_timestamp_opt(nanos / A_BILLION, (nanos % A_BILLION) as u32) ); ``` #### pub fn timestamp_subsec_millis(&self) -> u32 Returns the number of milliseconds since the last whole non-leap second. The return value ranges from 0 to 999, or for leap seconds, to 1,999. ##### Example ``` use chrono::NaiveDate; let dt = NaiveDate::from_ymd_opt(2016, 7, 8).unwrap().and_hms_nano_opt(9, 10, 11, 123_456_789).unwrap(); assert_eq!(dt.timestamp_subsec_millis(), 123); let dt = NaiveDate::from_ymd_opt(2015, 7, 1).unwrap().and_hms_nano_opt(8, 59, 59, 1_234_567_890).unwrap(); assert_eq!(dt.timestamp_subsec_millis(), 1_234); ``` #### pub fn timestamp_subsec_micros(&self) -> u32 Returns the number of microseconds since the last whole non-leap second. The return value ranges from 0 to 999,999, or for leap seconds, to 1,999,999. ##### Example ``` use chrono::NaiveDate; let dt = NaiveDate::from_ymd_opt(2016, 7, 8).unwrap().and_hms_nano_opt(9, 10, 11, 123_456_789).unwrap(); assert_eq!(dt.timestamp_subsec_micros(), 123_456); let dt = NaiveDate::from_ymd_opt(2015, 7, 1).unwrap().and_hms_nano_opt(8, 59, 59, 1_234_567_890).unwrap(); assert_eq!(dt.timestamp_subsec_micros(), 1_234_567); ``` #### pub fn timestamp_subsec_nanos(&self) -> u32 Returns the number of nanoseconds since the last whole non-leap second. The return value ranges from 0 to 999,999,999, or for leap seconds, to 1,999,999,999. 
##### Example ``` use chrono::NaiveDate; let dt = NaiveDate::from_ymd_opt(2016, 7, 8).unwrap().and_hms_nano_opt(9, 10, 11, 123_456_789).unwrap(); assert_eq!(dt.timestamp_subsec_nanos(), 123_456_789); let dt = NaiveDate::from_ymd_opt(2015, 7, 1).unwrap().and_hms_nano_opt(8, 59, 59, 1_234_567_890).unwrap(); assert_eq!(dt.timestamp_subsec_nanos(), 1_234_567_890); ``` #### pub fn checked_add_signed(self, rhs: Duration) -> Option<NaiveDateTimeAdds given `Duration` to the current date and time. As a part of Chrono’s leap second handling, the addition assumes that **there is no leap second ever**, except when the `NaiveDateTime` itself represents a leap second in which case the assumption becomes that **there is exactly a single leap second ever**. ##### Errors Returns `None` if the resulting date would be out of range. ##### Example ``` use chrono::{Duration, NaiveDate}; let from_ymd = |y, m, d| NaiveDate::from_ymd_opt(y, m, d).unwrap(); let d = from_ymd(2016, 7, 8); let hms = |h, m, s| d.and_hms_opt(h, m, s).unwrap(); assert_eq!(hms(3, 5, 7).checked_add_signed(Duration::zero()), Some(hms(3, 5, 7))); assert_eq!(hms(3, 5, 7).checked_add_signed(Duration::seconds(1)), Some(hms(3, 5, 8))); assert_eq!(hms(3, 5, 7).checked_add_signed(Duration::seconds(-1)), Some(hms(3, 5, 6))); assert_eq!(hms(3, 5, 7).checked_add_signed(Duration::seconds(3600 + 60)), Some(hms(4, 6, 7))); assert_eq!(hms(3, 5, 7).checked_add_signed(Duration::seconds(86_400)), Some(from_ymd(2016, 7, 9).and_hms_opt(3, 5, 7).unwrap())); let hmsm = |h, m, s, milli| d.and_hms_milli_opt(h, m, s, milli).unwrap(); assert_eq!(hmsm(3, 5, 7, 980).checked_add_signed(Duration::milliseconds(450)), Some(hmsm(3, 5, 8, 430))); ``` Overflow returns `None`. ``` assert_eq!(hms(3, 5, 7).checked_add_signed(Duration::days(1_000_000_000)), None); ``` Leap seconds are handled, but the addition assumes that it is the only leap second that ever happened.
``` let leap = hmsm(3, 5, 59, 1_300); assert_eq!(leap.checked_add_signed(Duration::zero()), Some(hmsm(3, 5, 59, 1_300))); assert_eq!(leap.checked_add_signed(Duration::milliseconds(-500)), Some(hmsm(3, 5, 59, 800))); assert_eq!(leap.checked_add_signed(Duration::milliseconds(500)), Some(hmsm(3, 5, 59, 1_800))); assert_eq!(leap.checked_add_signed(Duration::milliseconds(800)), Some(hmsm(3, 6, 0, 100))); assert_eq!(leap.checked_add_signed(Duration::seconds(10)), Some(hmsm(3, 6, 9, 300))); assert_eq!(leap.checked_add_signed(Duration::seconds(-10)), Some(hmsm(3, 5, 50, 300))); assert_eq!(leap.checked_add_signed(Duration::days(1)), Some(from_ymd(2016, 7, 9).and_hms_milli_opt(3, 5, 59, 300).unwrap())); ``` #### pub fn checked_add_months(self, rhs: Months) -> Option<NaiveDateTimeAdds given `Months` to the current date and time. Uses the last day of the month if the day does not exist in the resulting month. ##### Errors Returns `None` if the resulting date would be out of range. ##### Example ``` use chrono::{Months, NaiveDate}; assert_eq!( NaiveDate::from_ymd_opt(2014, 1, 1).unwrap().and_hms_opt(1, 0, 0).unwrap() .checked_add_months(Months::new(1)), Some(NaiveDate::from_ymd_opt(2014, 2, 1).unwrap().and_hms_opt(1, 0, 0).unwrap()) ); assert_eq!( NaiveDate::from_ymd_opt(2014, 1, 1).unwrap().and_hms_opt(1, 0, 0).unwrap() .checked_add_months(Months::new(core::i32::MAX as u32 + 1)), None ); ``` #### pub fn checked_sub_signed(self, rhs: Duration) -> Option<NaiveDateTimeSubtracts given `Duration` from the current date and time. As a part of Chrono’s leap second handling, the subtraction assumes that **there is no leap second ever**, except when the `NaiveDateTime` itself represents a leap second in which case the assumption becomes that **there is exactly a single leap second ever**. ##### Errors Returns `None` if the resulting date would be out of range. 
##### Example ``` use chrono::{Duration, NaiveDate}; let from_ymd = |y, m, d| NaiveDate::from_ymd_opt(y, m, d).unwrap(); let d = from_ymd(2016, 7, 8); let hms = |h, m, s| d.and_hms_opt(h, m, s).unwrap(); assert_eq!(hms(3, 5, 7).checked_sub_signed(Duration::zero()), Some(hms(3, 5, 7))); assert_eq!(hms(3, 5, 7).checked_sub_signed(Duration::seconds(1)), Some(hms(3, 5, 6))); assert_eq!(hms(3, 5, 7).checked_sub_signed(Duration::seconds(-1)), Some(hms(3, 5, 8))); assert_eq!(hms(3, 5, 7).checked_sub_signed(Duration::seconds(3600 + 60)), Some(hms(2, 4, 7))); assert_eq!(hms(3, 5, 7).checked_sub_signed(Duration::seconds(86_400)), Some(from_ymd(2016, 7, 7).and_hms_opt(3, 5, 7).unwrap())); let hmsm = |h, m, s, milli| d.and_hms_milli_opt(h, m, s, milli).unwrap(); assert_eq!(hmsm(3, 5, 7, 450).checked_sub_signed(Duration::milliseconds(670)), Some(hmsm(3, 5, 6, 780))); ``` Overflow returns `None`. ``` assert_eq!(hms(3, 5, 7).checked_sub_signed(Duration::days(1_000_000_000)), None); ``` Leap seconds are handled, but the subtraction assumes that it is the only leap second that ever happened. ``` let leap = hmsm(3, 5, 59, 1_300); assert_eq!(leap.checked_sub_signed(Duration::zero()), Some(hmsm(3, 5, 59, 1_300))); assert_eq!(leap.checked_sub_signed(Duration::milliseconds(200)), Some(hmsm(3, 5, 59, 1_100))); assert_eq!(leap.checked_sub_signed(Duration::milliseconds(500)), Some(hmsm(3, 5, 59, 800))); assert_eq!(leap.checked_sub_signed(Duration::seconds(60)), Some(hmsm(3, 5, 0, 300))); assert_eq!(leap.checked_sub_signed(Duration::days(1)), Some(from_ymd(2016, 7, 7).and_hms_milli_opt(3, 6, 0, 300).unwrap())); ``` #### pub fn checked_sub_months(self, rhs: Months) -> Option<NaiveDateTimeSubtracts given `Months` from the current date and time. Uses the last day of the month if the day does not exist in the resulting month. ##### Errors Returns `None` if the resulting date would be out of range.
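The `checked_*` operations above (and `checked_add_days`/`checked_sub_days` below) all share one idiom: `Option`-returning arithmetic where `None` signals an out-of-range result. A std-only sketch of the pattern on a raw day count (the bounds here are illustrative, not chrono's actual limits):

```rust
// Hypothetical supported range, in days; chrono's real limits differ.
const MAX_DAY: i64 = 262_000 * 366;

// Option-returning day arithmetic in the style of checked_add_days:
// None signals that the result would leave the supported range.
fn checked_add_days(day: i64, days: i64) -> Option<i64> {
    let out = day.checked_add(days)?;
    (-MAX_DAY..=MAX_DAY).contains(&out).then_some(out)
}

fn main() {
    assert_eq!(checked_add_days(0, 10), Some(10));
    assert_eq!(checked_add_days(0, -10), Some(-10)); // subtraction is negative addition
    assert_eq!(checked_add_days(MAX_DAY, 1), None); // out of range
}
```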
##### Example ``` use chrono::{Months, NaiveDate}; assert_eq!( NaiveDate::from_ymd_opt(2014, 1, 1).unwrap().and_hms_opt(1, 0, 0).unwrap() .checked_sub_months(Months::new(1)), Some(NaiveDate::from_ymd_opt(2013, 12, 1).unwrap().and_hms_opt(1, 0, 0).unwrap()) ); assert_eq!( NaiveDate::from_ymd_opt(2014, 1, 1).unwrap().and_hms_opt(1, 0, 0).unwrap() .checked_sub_months(Months::new(core::i32::MAX as u32 + 1)), None ); ``` #### pub fn checked_add_days(self, days: Days) -> Option<NaiveDateTimeAdd a duration in `Days` to the date part of the `NaiveDateTime` Returns `None` if the resulting date would be out of range. #### pub fn checked_sub_days(self, days: Days) -> Option<NaiveDateTimeSubtract a duration in `Days` from the date part of the `NaiveDateTime` Returns `None` if the resulting date would be out of range. #### pub fn signed_duration_since(self, rhs: NaiveDateTime) -> Duration Subtracts another `NaiveDateTime` from the current date and time. This does not overflow or underflow at all. As a part of Chrono’s leap second handling, the subtraction assumes that **there is no leap second ever**, except when any of the `NaiveDateTime`s themselves represents a leap second in which case the assumption becomes that **there are exactly one (or two) leap second(s) ever**. ##### Example ``` use chrono::{Duration, NaiveDate}; let from_ymd = |y, m, d| NaiveDate::from_ymd_opt(y, m, d).unwrap(); let d = from_ymd(2016, 7, 8); assert_eq!(d.and_hms_opt(3, 5, 7).unwrap().signed_duration_since(d.and_hms_opt(2, 4, 6).unwrap()), Duration::seconds(3600 + 60 + 1)); // July 8 is 190th day in the year 2016 let d0 = from_ymd(2016, 1, 1); assert_eq!(d.and_hms_milli_opt(0, 7, 6, 500).unwrap().signed_duration_since(d0.and_hms_opt(0, 0, 0).unwrap()), Duration::seconds(189 * 86_400 + 7 * 60 + 6) + Duration::milliseconds(500)); ``` Leap seconds are handled, but the subtraction assumes that no other leap seconds happened.
``` let leap = from_ymd(2015, 6, 30).and_hms_milli_opt(23, 59, 59, 1_500).unwrap(); assert_eq!(leap.signed_duration_since(from_ymd(2015, 6, 30).and_hms_opt(23, 0, 0).unwrap()), Duration::seconds(3600) + Duration::milliseconds(500)); assert_eq!(from_ymd(2015, 7, 1).and_hms_opt(1, 0, 0).unwrap().signed_duration_since(leap), Duration::seconds(3600) - Duration::milliseconds(500)); ``` #### pub fn format_with_items<'a, I, B>(&self, items: I) -> DelayedFormat<I>where I: Iterator<Item = B> + Clone, B: Borrow<Item<'a>>, Formats the combined date and time with the specified formatting items. Otherwise it is the same as the ordinary `format` method. The `Iterator` of items should be `Clone`able, since the resulting `DelayedFormat` value may be formatted multiple times. ##### Example ``` use chrono::NaiveDate; use chrono::format::strftime::StrftimeItems; let fmt = StrftimeItems::new("%Y-%m-%d %H:%M:%S"); let dt = NaiveDate::from_ymd_opt(2015, 9, 5).unwrap().and_hms_opt(23, 56, 4).unwrap(); assert_eq!(dt.format_with_items(fmt.clone()).to_string(), "2015-09-05 23:56:04"); assert_eq!(dt.format("%Y-%m-%d %H:%M:%S").to_string(), "2015-09-05 23:56:04"); ``` The resulting `DelayedFormat` can be formatted directly via the `Display` trait. ``` assert_eq!(format!("{}", dt.format_with_items(fmt)), "2015-09-05 23:56:04"); ``` #### pub fn format<'a>(&self, fmt: &'a str) -> DelayedFormat<StrftimeItems<'a>Formats the combined date and time with the specified format string. See the `format::strftime` module on the supported escape sequences. This returns a `DelayedFormat`, which gets converted to a string only when actual formatting happens. You may use the `to_string` method to get a `String`, or just feed it into `print!` and other formatting macros. (In this way it avoids the redundant memory allocation.) A wrong format string does *not* issue an error immediately. Rather, converting or formatting the `DelayedFormat` fails. 
You are recommended to immediately use `DelayedFormat` for this reason. ##### Example ``` use chrono::NaiveDate; let dt = NaiveDate::from_ymd_opt(2015, 9, 5).unwrap().and_hms_opt(23, 56, 4).unwrap(); assert_eq!(dt.format("%Y-%m-%d %H:%M:%S").to_string(), "2015-09-05 23:56:04"); assert_eq!(dt.format("around %l %p on %b %-d").to_string(), "around 11 PM on Sep 5"); ``` The resulting `DelayedFormat` can be formatted directly via the `Display` trait. ``` assert_eq!(format!("{}", dt.format("%Y-%m-%d %H:%M:%S")), "2015-09-05 23:56:04"); assert_eq!(format!("{}", dt.format("around %l %p on %b %-d")), "around 11 PM on Sep 5"); ``` #### pub fn and_local_timezone<Tz>(&self, tz: Tz) -> LocalResult<DateTime<Tz>>where Tz: TimeZone, Converts the `NaiveDateTime` into the timezone-aware `DateTime<Tz>` with the provided timezone, if possible. This can fail in cases where the local time represented by the `NaiveDateTime` is not a valid local timestamp in the target timezone due to an offset transition. For example, if the target timezone had a change from +00:00 to +01:00 occurring at 2015-09-05 22:59:59, then a local time of 2015-09-05 23:56:04 could never occur. Similarly, if the offset transitioned in the opposite direction then there would be two local times of 2015-09-05 23:56:04, one at +00:00 and one at +01:00. ##### Example ``` use chrono::{NaiveDate, FixedOffset}; let hour = 3600; let tz = FixedOffset::east_opt(5 * hour).unwrap(); let dt = NaiveDate::from_ymd_opt(2015, 9, 5).unwrap().and_hms_opt(23, 56, 4).unwrap().and_local_timezone(tz).unwrap(); assert_eq!(dt.timezone(), tz); ``` #### pub fn and_utc(&self) -> DateTime<UtcConverts the `NaiveDateTime` into the timezone-aware `DateTime<Utc>`. ##### Example ``` use chrono::{NaiveDate, Utc}; let dt = NaiveDate::from_ymd_opt(2023, 1, 30).unwrap().and_hms_opt(19, 32, 33).unwrap().and_utc(); assert_eq!(dt.timezone(), Utc); ``` #### pub const MIN: NaiveDateTime = _ The minimum possible `NaiveDateTime`.
#### pub const MAX: NaiveDateTime = _ The maximum possible `NaiveDateTime`. #### pub const UNIX_EPOCH: NaiveDateTime = _ The Unix Epoch, 1970-01-01 00:00:00. Trait Implementations --- ### impl Add<Days> for NaiveDateTime #### type Output = NaiveDateTime The resulting type after applying the `+` operator.#### fn add(self, days: Days) -> <NaiveDateTime as Add<Days>>::Output Performs the `+` operation. An addition of `Duration` to `NaiveDateTime` yields another `NaiveDateTime`. As a part of Chrono’s leap second handling, the addition assumes that **there is no leap second ever**, except when the `NaiveDateTime` itself represents a leap second in which case the assumption becomes that **there is exactly a single leap second ever**. #### Panics Panics if the resulting date would be out of range. Use `NaiveDateTime::checked_add_signed` to detect that. #### Example ``` use chrono::{Duration, NaiveDate}; let from_ymd = |y, m, d| NaiveDate::from_ymd_opt(y, m, d).unwrap(); let d = from_ymd(2016, 7, 8); let hms = |h, m, s| d.and_hms_opt(h, m, s).unwrap(); assert_eq!(hms(3, 5, 7) + Duration::zero(), hms(3, 5, 7)); assert_eq!(hms(3, 5, 7) + Duration::seconds(1), hms(3, 5, 8)); assert_eq!(hms(3, 5, 7) + Duration::seconds(-1), hms(3, 5, 6)); assert_eq!(hms(3, 5, 7) + Duration::seconds(3600 + 60), hms(4, 6, 7)); assert_eq!(hms(3, 5, 7) + Duration::seconds(86_400), from_ymd(2016, 7, 9).and_hms_opt(3, 5, 7).unwrap()); assert_eq!(hms(3, 5, 7) + Duration::days(365), from_ymd(2017, 7, 8).and_hms_opt(3, 5, 7).unwrap()); let hmsm = |h, m, s, milli| d.and_hms_milli_opt(h, m, s, milli).unwrap(); assert_eq!(hmsm(3, 5, 7, 980) + Duration::milliseconds(450), hmsm(3, 5, 8, 430)); ``` Leap seconds are handled, but the addition assumes that it is the only leap second happened. 
``` let leap = hmsm(3, 5, 59, 1_300); assert_eq!(leap + Duration::zero(), hmsm(3, 5, 59, 1_300)); assert_eq!(leap + Duration::milliseconds(-500), hmsm(3, 5, 59, 800)); assert_eq!(leap + Duration::milliseconds(500), hmsm(3, 5, 59, 1_800)); assert_eq!(leap + Duration::milliseconds(800), hmsm(3, 6, 0, 100)); assert_eq!(leap + Duration::seconds(10), hmsm(3, 6, 9, 300)); assert_eq!(leap + Duration::seconds(-10), hmsm(3, 5, 50, 300)); assert_eq!(leap + Duration::days(1), from_ymd(2016, 7, 9).and_hms_milli_opt(3, 5, 59, 300).unwrap()); ``` #### type Output = NaiveDateTime The resulting type after applying the `+` operator.#### fn add(self, rhs: Duration) -> NaiveDateTime Performs the `+` operation. #### type Output = NaiveDateTime The resulting type after applying the `+` operator.#### fn add(self, rhs: Duration) -> NaiveDateTime Performs the `+` operation. #### type Output = NaiveDateTime The resulting type after applying the `+` operator.#### fn add(self, rhs: FixedOffset) -> NaiveDateTime Performs the `+` operation. #### fn add(self, rhs: Months) -> <NaiveDateTime as Add<Months>>::Output An addition of months to `NaiveDateTime` clamped to valid days in resulting month. ##### Panics Panics if the resulting date would be out of range. 
##### Example ``` use chrono::{Months, NaiveDate}; assert_eq!( NaiveDate::from_ymd_opt(2014, 1, 1).unwrap().and_hms_opt(1, 0, 0).unwrap() + Months::new(1), NaiveDate::from_ymd_opt(2014, 2, 1).unwrap().and_hms_opt(1, 0, 0).unwrap() ); assert_eq!( NaiveDate::from_ymd_opt(2014, 1, 1).unwrap().and_hms_opt(0, 2, 0).unwrap() + Months::new(11), NaiveDate::from_ymd_opt(2014, 12, 1).unwrap().and_hms_opt(0, 2, 0).unwrap() ); assert_eq!( NaiveDate::from_ymd_opt(2014, 1, 1).unwrap().and_hms_opt(0, 0, 3).unwrap() + Months::new(12), NaiveDate::from_ymd_opt(2015, 1, 1).unwrap().and_hms_opt(0, 0, 3).unwrap() ); assert_eq!( NaiveDate::from_ymd_opt(2014, 1, 1).unwrap().and_hms_opt(0, 0, 4).unwrap() + Months::new(13), NaiveDate::from_ymd_opt(2015, 2, 1).unwrap().and_hms_opt(0, 0, 4).unwrap() ); assert_eq!( NaiveDate::from_ymd_opt(2014, 1, 31).unwrap().and_hms_opt(0, 5, 0).unwrap() + Months::new(1), NaiveDate::from_ymd_opt(2014, 2, 28).unwrap().and_hms_opt(0, 5, 0).unwrap() ); assert_eq!( NaiveDate::from_ymd_opt(2020, 1, 31).unwrap().and_hms_opt(6, 0, 0).unwrap() + Months::new(1), NaiveDate::from_ymd_opt(2020, 2, 29).unwrap().and_hms_opt(6, 0, 0).unwrap() ); ``` #### type Output = NaiveDateTime The resulting type after applying the `+` operator.### impl AddAssign<Duration> for NaiveDateTime #### fn add_assign(&mut self, rhs: Duration) Performs the `+=` operation. #### fn add_assign(&mut self, rhs: Duration) Performs the `+=` operation. #### fn clone(&self) -> NaiveDateTime Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn year(&self) -> i32 Returns the year number in the calendar date. See also the `NaiveDate::year` method. 
##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Datelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 25).unwrap().and_hms_opt(12, 34, 56).unwrap(); assert_eq!(dt.year(), 2015); ``` #### fn month(&self) -> u32 Returns the month number starting from 1. The return value ranges from 1 to 12. See also the `NaiveDate::month` method. ##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Datelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 25).unwrap().and_hms_opt(12, 34, 56).unwrap(); assert_eq!(dt.month(), 9); ``` #### fn month0(&self) -> u32 Returns the month number starting from 0. The return value ranges from 0 to 11. See also the `NaiveDate::month0` method. ##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Datelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 25).unwrap().and_hms_opt(12, 34, 56).unwrap(); assert_eq!(dt.month0(), 8); ``` #### fn day(&self) -> u32 Returns the day of month starting from 1. The return value ranges from 1 to 31. (The last day of month differs by months.) See also the `NaiveDate::day` method. ##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Datelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 25).unwrap().and_hms_opt(12, 34, 56).unwrap(); assert_eq!(dt.day(), 25); ``` #### fn day0(&self) -> u32 Returns the day of month starting from 0. The return value ranges from 0 to 30. (The last day of month differs by months.) See also the `NaiveDate::day0` method. ##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Datelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 25).unwrap().and_hms_opt(12, 34, 56).unwrap(); assert_eq!(dt.day0(), 24); ``` #### fn ordinal(&self) -> u32 Returns the day of year starting from 1. The return value ranges from 1 to 366. (The last day of year differs by years.) See also the `NaiveDate::ordinal` method. 
##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Datelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 25).unwrap().and_hms_opt(12, 34, 56).unwrap(); assert_eq!(dt.ordinal(), 268); ``` #### fn ordinal0(&self) -> u32 Returns the day of year starting from 0. The return value ranges from 0 to 365. (The last day of year differs by years.) See also the `NaiveDate::ordinal0` method. ##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Datelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 25).unwrap().and_hms_opt(12, 34, 56).unwrap(); assert_eq!(dt.ordinal0(), 267); ``` #### fn weekday(&self) -> Weekday Returns the day of week. See also the `NaiveDate::weekday` method. ##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Datelike, Weekday}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 25).unwrap().and_hms_opt(12, 34, 56).unwrap(); assert_eq!(dt.weekday(), Weekday::Fri); ``` #### fn with_year(&self, year: i32) -> Option<NaiveDateTimeMakes a new `NaiveDateTime` with the year number changed, while keeping the same month and day. See also the `NaiveDate::with_year` method. ##### Errors Returns `None` if the resulting date does not exist, or when the `NaiveDateTime` would be out of range. ##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Datelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 25).unwrap().and_hms_opt(12, 34, 56).unwrap(); assert_eq!(dt.with_year(2016), Some(NaiveDate::from_ymd_opt(2016, 9, 25).unwrap().and_hms_opt(12, 34, 56).unwrap())); assert_eq!(dt.with_year(-308), Some(NaiveDate::from_ymd_opt(-308, 9, 25).unwrap().and_hms_opt(12, 34, 56).unwrap())); ``` #### fn with_month(&self, month: u32) -> Option<NaiveDateTimeMakes a new `NaiveDateTime` with the month number (starting from 1) changed. See also the `NaiveDate::with_month` method. ##### Errors Returns `None` if the resulting date does not exist, or if the value for `month` is invalid. 
##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Datelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 30).unwrap().and_hms_opt(12, 34, 56).unwrap(); assert_eq!(dt.with_month(10), Some(NaiveDate::from_ymd_opt(2015, 10, 30).unwrap().and_hms_opt(12, 34, 56).unwrap())); assert_eq!(dt.with_month(13), None); // no month 13 assert_eq!(dt.with_month(2), None); // no February 30 ``` #### fn with_month0(&self, month0: u32) -> Option<NaiveDateTimeMakes a new `NaiveDateTime` with the month number (starting from 0) changed. See also the `NaiveDate::with_month0` method. ##### Errors Returns `None` if the resulting date does not exist, or if the value for `month0` is invalid. ##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Datelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 30).unwrap().and_hms_opt(12, 34, 56).unwrap(); assert_eq!(dt.with_month0(9), Some(NaiveDate::from_ymd_opt(2015, 10, 30).unwrap().and_hms_opt(12, 34, 56).unwrap())); assert_eq!(dt.with_month0(12), None); // no month 13 assert_eq!(dt.with_month0(1), None); // no February 30 ``` #### fn with_day(&self, day: u32) -> Option<NaiveDateTimeMakes a new `NaiveDateTime` with the day of month (starting from 1) changed. See also the `NaiveDate::with_day` method. ##### Errors Returns `None` if the resulting date does not exist, or if the value for `day` is invalid. ##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Datelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().and_hms_opt(12, 34, 56).unwrap(); assert_eq!(dt.with_day(30), Some(NaiveDate::from_ymd_opt(2015, 9, 30).unwrap().and_hms_opt(12, 34, 56).unwrap())); assert_eq!(dt.with_day(31), None); // no September 31 ``` #### fn with_day0(&self, day0: u32) -> Option<NaiveDateTimeMakes a new `NaiveDateTime` with the day of month (starting from 0) changed. See also the `NaiveDate::with_day0` method. 
##### Errors Returns `None` if the resulting date does not exist, or if the value for `day0` is invalid. ##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Datelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().and_hms_opt(12, 34, 56).unwrap(); assert_eq!(dt.with_day0(29), Some(NaiveDate::from_ymd_opt(2015, 9, 30).unwrap().and_hms_opt(12, 34, 56).unwrap())); assert_eq!(dt.with_day0(30), None); // no September 31 ``` #### fn with_ordinal(&self, ordinal: u32) -> Option<NaiveDateTimeMakes a new `NaiveDateTime` with the day of year (starting from 1) changed. See also the `NaiveDate::with_ordinal` method. ##### Errors Returns `None` if the resulting date does not exist, or if the value for `ordinal` is invalid. ##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Datelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().and_hms_opt(12, 34, 56).unwrap(); assert_eq!(dt.with_ordinal(60), Some(NaiveDate::from_ymd_opt(2015, 3, 1).unwrap().and_hms_opt(12, 34, 56).unwrap())); assert_eq!(dt.with_ordinal(366), None); // 2015 had only 365 days let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2016, 9, 8).unwrap().and_hms_opt(12, 34, 56).unwrap(); assert_eq!(dt.with_ordinal(60), Some(NaiveDate::from_ymd_opt(2016, 2, 29).unwrap().and_hms_opt(12, 34, 56).unwrap())); assert_eq!(dt.with_ordinal(366), Some(NaiveDate::from_ymd_opt(2016, 12, 31).unwrap().and_hms_opt(12, 34, 56).unwrap())); ``` #### fn with_ordinal0(&self, ordinal0: u32) -> Option<NaiveDateTimeMakes a new `NaiveDateTime` with the day of year (starting from 0) changed. See also the `NaiveDate::with_ordinal0` method. ##### Errors Returns `None` if the resulting date does not exist, or if the value for `ordinal0` is invalid. 
##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Datelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().and_hms_opt(12, 34, 56).unwrap(); assert_eq!(dt.with_ordinal0(59), Some(NaiveDate::from_ymd_opt(2015, 3, 1).unwrap().and_hms_opt(12, 34, 56).unwrap())); assert_eq!(dt.with_ordinal0(365), None); // 2015 had only 365 days let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2016, 9, 8).unwrap().and_hms_opt(12, 34, 56).unwrap(); assert_eq!(dt.with_ordinal0(59), Some(NaiveDate::from_ymd_opt(2016, 2, 29).unwrap().and_hms_opt(12, 34, 56).unwrap())); assert_eq!(dt.with_ordinal0(365), Some(NaiveDate::from_ymd_opt(2016, 12, 31).unwrap().and_hms_opt(12, 34, 56).unwrap())); ``` #### fn iso_week(&self) -> IsoWeek Returns the ISO week.#### fn year_ce(&self) -> (bool, u32) Returns the absolute year number starting from 1 with a boolean flag, which is false when the year predates the epoch (BCE/BC) and true otherwise (CE/AD).#### fn num_days_from_ce(&self) -> i32 Counts the days in the proleptic Gregorian calendar, with January 1, Year 1 (CE) as day 1. The `Debug` output of the naive date and time `dt` is the same as `dt.format("%Y-%m-%dT%H:%M:%S%.f")`. The string printed can be readily parsed via the `parse` method on `str`. It should be noted that, for leap seconds not on the minute boundary, it may print a representation not distinguishable from non-leap seconds. This doesn’t matter in practice, since such leap seconds never happened. (By the time of the first leap second on 1972-06-30, every time zone offset around the world has standardized to the 5-minute alignment.) #### Example ``` use chrono::NaiveDate; let dt = NaiveDate::from_ymd_opt(2016, 11, 15).unwrap().and_hms_opt(7, 39, 24).unwrap(); assert_eq!(format!("{:?}", dt), "2016-11-15T07:39:24"); ``` Leap seconds may also be used. 
``` let dt = NaiveDate::from_ymd_opt(2015, 6, 30).unwrap().and_hms_milli_opt(23, 59, 59, 1_500).unwrap(); assert_eq!(format!("{:?}", dt), "2015-06-30T23:59:60.500"); ``` #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> Formats the value using the given formatter. The default value for a `NaiveDateTime` is one with an epoch of 0, that is, 1 January 1970 at 00:00:00. #### Example ``` use chrono::NaiveDateTime; let default_date = NaiveDateTime::default(); assert_eq!(Some(default_date), NaiveDateTime::from_timestamp_opt(0, 0)); ``` #### fn default() -> NaiveDateTime Returns the “default value” for a type. The `Display` output of the naive date and time `dt` is the same as `dt.format("%Y-%m-%d %H:%M:%S%.f")`. It should be noted that, for leap seconds not on the minute boundary, it may print a representation not distinguishable from non-leap seconds. This doesn’t matter in practice, since such leap seconds never happened. (By the time of the first leap second on 1972-06-30, every time zone offset around the world had standardized to the 5-minute alignment.) #### Example ``` use chrono::NaiveDate; let dt = NaiveDate::from_ymd_opt(2016, 11, 15).unwrap().and_hms_opt(7, 39, 24).unwrap(); assert_eq!(format!("{}", dt), "2016-11-15 07:39:24"); ``` Leap seconds may also be used. ``` let dt = NaiveDate::from_ymd_opt(2015, 6, 30).unwrap().and_hms_milli_opt(23, 59, 59, 1_500).unwrap(); assert_eq!(format!("{}", dt), "2015-06-30 23:59:60.500"); ``` #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> Formats the value using the given formatter. #### type Err = RoundingError Error that can occur in rounding or truncating#### fn duration_round( self, duration: Duration ) -> Result<NaiveDateTime, <NaiveDateTime as DurationRound>::Err> Return a copy rounded by `Duration`. #### fn duration_trunc( self, duration: Duration ) -> Result<NaiveDateTime, <NaiveDateTime as DurationRound>::Err> Return a copy truncated by `Duration`.
#### fn from(naive_datetime: NaiveDateTime) -> NaiveDate Converts to this type from the input type.### impl FromStr for NaiveDateTime Parsing a `str` into a `NaiveDateTime` uses the same format, `%Y-%m-%dT%H:%M:%S%.f`, as in `Debug`. #### Example ``` use chrono::{NaiveDateTime, NaiveDate}; let dt = NaiveDate::from_ymd_opt(2015, 9, 18).unwrap().and_hms_opt(23, 56, 4).unwrap(); assert_eq!("2015-09-18T23:56:04".parse::<NaiveDateTime>(), Ok(dt)); let dt = NaiveDate::from_ymd_opt(12345, 6, 7).unwrap().and_hms_milli_opt(7, 59, 59, 1_500).unwrap(); // leap second assert_eq!("+12345-6-7T7:59:60.5".parse::<NaiveDateTime>(), Ok(dt)); assert!("foo".parse::<NaiveDateTime>().is_err()); ``` #### type Err = ParseError The associated error which can be returned from parsing.#### fn from_str(s: &str) -> Result<NaiveDateTime, ParseError> Parses a string `s` to return a value of this type. #### fn hash<__H>(&self, state: &mut __H)where __H: Hasher, Feeds this value into the given `Hasher`. #### fn hash_slice<H>(data: &[Self], state: &mut H)where H: Hasher, Self: Sized, Feeds a slice of this type into the given `Hasher`. #### fn cmp(&self, other: &NaiveDateTime) -> Ordering This method returns an `Ordering` between `self` and `other`. #### fn max(self, other: Self) -> Selfwhere Self: Sized, Compares and returns the maximum of two values. #### fn min(self, other: Self) -> Selfwhere Self: Sized, Compares and returns the minimum of two values. #### fn clamp(self, min: Self, max: Self) -> Selfwhere Self: Sized + PartialOrd<Self>, Restrict a value to a certain interval. #### fn eq(&self, other: &NaiveDateTime) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`.#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`.
The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl PartialOrd<NaiveDateTime> for NaiveDateTime #### fn partial_cmp(&self, other: &NaiveDateTime) -> Option<Ordering> This method returns an ordering between `self` and `other` values if one exists. #### fn lt(&self, other: &Rhs) -> bool This method tests less than (for `self` and `other`) and is used by the `<` operator. #### fn le(&self, other: &Rhs) -> bool This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator. #### fn gt(&self, other: &Rhs) -> bool This method tests greater than (for `self` and `other`) and is used by the `>` operator. #### fn ge(&self, other: &Rhs) -> bool This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator. #### type Output = NaiveDateTime The resulting type after applying the `-` operator.#### fn sub(self, days: Days) -> <NaiveDateTime as Sub<Days>>::Output Performs the `-` operation. #### type Output = NaiveDateTime The resulting type after applying the `-` operator.#### fn sub(self, rhs: Duration) -> NaiveDateTime Performs the `-` operation. A subtraction of `Duration` from `NaiveDateTime` yields another `NaiveDateTime`. It is the same as addition with a negated `Duration`. As a part of Chrono’s leap second handling, the subtraction assumes that **there is no leap second ever**, except when the `NaiveDateTime` itself represents a leap second, in which case the assumption becomes that **there is exactly a single leap second ever**. Panics on underflow or overflow. Use `NaiveDateTime::checked_sub_signed` to detect that.
#### Example ``` use chrono::{Duration, NaiveDate}; let from_ymd = |y, m, d| NaiveDate::from_ymd_opt(y, m, d).unwrap(); let d = from_ymd(2016, 7, 8); let hms = |h, m, s| d.and_hms_opt(h, m, s).unwrap(); assert_eq!(hms(3, 5, 7) - Duration::zero(), hms(3, 5, 7)); assert_eq!(hms(3, 5, 7) - Duration::seconds(1), hms(3, 5, 6)); assert_eq!(hms(3, 5, 7) - Duration::seconds(-1), hms(3, 5, 8)); assert_eq!(hms(3, 5, 7) - Duration::seconds(3600 + 60), hms(2, 4, 7)); assert_eq!(hms(3, 5, 7) - Duration::seconds(86_400), from_ymd(2016, 7, 7).and_hms_opt(3, 5, 7).unwrap()); assert_eq!(hms(3, 5, 7) - Duration::days(365), from_ymd(2015, 7, 9).and_hms_opt(3, 5, 7).unwrap()); let hmsm = |h, m, s, milli| d.and_hms_milli_opt(h, m, s, milli).unwrap(); assert_eq!(hmsm(3, 5, 7, 450) - Duration::milliseconds(670), hmsm(3, 5, 6, 780)); ``` Leap seconds are handled, but the subtraction assumes that it is the only leap second happened. ``` let leap = hmsm(3, 5, 59, 1_300); assert_eq!(leap - Duration::zero(), hmsm(3, 5, 59, 1_300)); assert_eq!(leap - Duration::milliseconds(200), hmsm(3, 5, 59, 1_100)); assert_eq!(leap - Duration::milliseconds(500), hmsm(3, 5, 59, 800)); assert_eq!(leap - Duration::seconds(60), hmsm(3, 5, 0, 300)); assert_eq!(leap - Duration::days(1), from_ymd(2016, 7, 7).and_hms_milli_opt(3, 6, 0, 300).unwrap()); ``` #### type Output = NaiveDateTime The resulting type after applying the `-` operator.#### fn sub(self, rhs: Duration) -> NaiveDateTime Performs the `-` operation. #### type Output = NaiveDateTime The resulting type after applying the `-` operator.#### fn sub(self, rhs: FixedOffset) -> NaiveDateTime Performs the `-` operation. A subtraction of Months from `NaiveDateTime` clamped to valid days in resulting month. #### Panics Panics if the resulting date would be out of range. 
#### Example ``` use chrono::{Months, NaiveDate}; assert_eq!( NaiveDate::from_ymd_opt(2014, 01, 01).unwrap().and_hms_opt(01, 00, 00).unwrap() - Months::new(11), NaiveDate::from_ymd_opt(2013, 02, 01).unwrap().and_hms_opt(01, 00, 00).unwrap() ); assert_eq!( NaiveDate::from_ymd_opt(2014, 01, 01).unwrap().and_hms_opt(00, 02, 00).unwrap() - Months::new(12), NaiveDate::from_ymd_opt(2013, 01, 01).unwrap().and_hms_opt(00, 02, 00).unwrap() ); assert_eq!( NaiveDate::from_ymd_opt(2014, 01, 01).unwrap().and_hms_opt(00, 00, 03).unwrap() - Months::new(13), NaiveDate::from_ymd_opt(2012, 12, 01).unwrap().and_hms_opt(00, 00, 03).unwrap() ); ``` #### type Output = NaiveDateTime The resulting type after applying the `-` operator.#### fn sub(self, rhs: Months) -> <NaiveDateTime as Sub<Months>>::Output Performs the `-` operation. Subtracts another `NaiveDateTime` from the current date and time. This does not overflow or underflow at all. As a part of Chrono’s leap second handling, the subtraction assumes that **there is no leap second ever**, except when any of the `NaiveDateTime`s themselves represents a leap second in which case the assumption becomes that **there are exactly one (or two) leap second(s) ever**. The implementation is a wrapper around `NaiveDateTime::signed_duration_since`. #### Example ``` use chrono::{Duration, NaiveDate}; let from_ymd = |y, m, d| NaiveDate::from_ymd_opt(y, m, d).unwrap(); let d = from_ymd(2016, 7, 8); assert_eq!(d.and_hms_opt(3, 5, 7).unwrap() - d.and_hms_opt(2, 4, 6).unwrap(), Duration::seconds(3600 + 60 + 1)); // July 8 is 190th day in the year 2016 let d0 = from_ymd(2016, 1, 1); assert_eq!(d.and_hms_milli_opt(0, 7, 6, 500).unwrap() - d0.and_hms_opt(0, 0, 0).unwrap(), Duration::seconds(189 * 86_400 + 7 * 60 + 6) + Duration::milliseconds(500)); ``` Leap seconds are handled, but the subtraction assumes that no other leap seconds happened. 
``` let leap = from_ymd(2015, 6, 30).and_hms_milli_opt(23, 59, 59, 1_500).unwrap(); assert_eq!(leap - from_ymd(2015, 6, 30).and_hms_opt(23, 0, 0).unwrap(), Duration::seconds(3600) + Duration::milliseconds(500)); assert_eq!(from_ymd(2015, 7, 1).and_hms_opt(1, 0, 0).unwrap() - leap, Duration::seconds(3600) - Duration::milliseconds(500)); ``` #### type Output = Duration The resulting type after applying the `-` operator.#### fn sub(self, rhs: NaiveDateTime) -> Duration Performs the `-` operation. #### fn sub_assign(&mut self, rhs: Duration) Performs the `-=` operation. #### fn sub_assign(&mut self, rhs: Duration) Performs the `-=` operation. #### fn hour(&self) -> u32 Returns the hour number from 0 to 23. See also the `NaiveTime::hour` method. ##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Timelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().and_hms_milli_opt(12, 34, 56, 789).unwrap(); assert_eq!(dt.hour(), 12); ``` #### fn minute(&self) -> u32 Returns the minute number from 0 to 59. See also the `NaiveTime::minute` method. ##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Timelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().and_hms_milli_opt(12, 34, 56, 789).unwrap(); assert_eq!(dt.minute(), 34); ``` #### fn second(&self) -> u32 Returns the second number from 0 to 59. See also the `NaiveTime::second` method. ##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Timelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().and_hms_milli_opt(12, 34, 56, 789).unwrap(); assert_eq!(dt.second(), 56); ``` #### fn nanosecond(&self) -> u32 Returns the number of nanoseconds since the whole non-leap second. The range from 1,000,000,000 to 1,999,999,999 represents the leap second. See also the `NaiveTime::nanosecond` method. 
##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Timelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().and_hms_milli_opt(12, 34, 56, 789).unwrap(); assert_eq!(dt.nanosecond(), 789_000_000); ``` #### fn with_hour(&self, hour: u32) -> Option<NaiveDateTimeMakes a new `NaiveDateTime` with the hour number changed. See also the `NaiveTime::with_hour` method. ##### Errors Returns `None` if the value for `hour` is invalid. ##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Timelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().and_hms_milli_opt(12, 34, 56, 789).unwrap(); assert_eq!(dt.with_hour(7), Some(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().and_hms_milli_opt(7, 34, 56, 789).unwrap())); assert_eq!(dt.with_hour(24), None); ``` #### fn with_minute(&self, min: u32) -> Option<NaiveDateTimeMakes a new `NaiveDateTime` with the minute number changed. See also the `NaiveTime::with_minute` method. ##### Errors Returns `None` if the value for `minute` is invalid. ##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Timelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().and_hms_milli_opt(12, 34, 56, 789).unwrap(); assert_eq!(dt.with_minute(45), Some(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().and_hms_milli_opt(12, 45, 56, 789).unwrap())); assert_eq!(dt.with_minute(60), None); ``` #### fn with_second(&self, sec: u32) -> Option<NaiveDateTimeMakes a new `NaiveDateTime` with the second number changed. As with the `second` method, the input range is restricted to 0 through 59. See also the `NaiveTime::with_second` method. ##### Errors Returns `None` if the value for `second` is invalid. 
##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Timelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().and_hms_milli_opt(12, 34, 56, 789).unwrap(); assert_eq!(dt.with_second(17), Some(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().and_hms_milli_opt(12, 34, 17, 789).unwrap())); assert_eq!(dt.with_second(60), None); ``` #### fn with_nanosecond(&self, nano: u32) -> Option<NaiveDateTimeMakes a new `NaiveDateTime` with nanoseconds since the whole non-leap second changed. Returns `None` when the resulting `NaiveDateTime` would be invalid. As with the `NaiveDateTime::nanosecond` method, the input range can exceed 1,000,000,000 for leap seconds. See also the `NaiveTime::with_nanosecond` method. ##### Errors Returns `None` if `nanosecond >= 2,000,000,000`. ##### Example ``` use chrono::{NaiveDate, NaiveDateTime, Timelike}; let dt: NaiveDateTime = NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().and_hms_milli_opt(12, 34, 59, 789).unwrap(); assert_eq!(dt.with_nanosecond(333_333_333), Some(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().and_hms_nano_opt(12, 34, 59, 333_333_333).unwrap())); assert_eq!(dt.with_nanosecond(1_333_333_333), // leap second Some(NaiveDate::from_ymd_opt(2015, 9, 8).unwrap().and_hms_nano_opt(12, 34, 59, 1_333_333_333).unwrap())); assert_eq!(dt.with_nanosecond(2_000_000_000), None); ``` #### fn hour12(&self) -> (bool, u32) Returns the hour number from 1 to 12 with a boolean flag, which is false for AM and true for PM.#### fn num_seconds_from_midnight(&self) -> u32 Returns the number of non-leap seconds past the last midnight. 
### impl Eq for NaiveDateTime ### impl StructuralEq for NaiveDateTime ### impl StructuralPartialEq for NaiveDateTime Auto Trait Implementations --- ### impl RefUnwindSafe for NaiveDateTime ### impl Send for NaiveDateTime ### impl Sync for NaiveDateTime ### impl Unpin for NaiveDateTime ### impl UnwindSafe for NaiveDateTime Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. Q: Eq + ?Sized, K: Borrow<Q> + ?Sized, #### fn equivalent(&self, key: &K) -> bool Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<Self> Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> SubsecRound for Twhere T: Add<Duration, Output = T> + Sub<Duration, Output = T> + Timelike, #### fn round_subsecs(self, digits: u16) -> T Return a copy rounded to the specified number of subsecond digits. With 9 or more digits, self is returned unmodified. Halfway values are rounded up (away from zero). #### fn trunc_subsecs(self, digits: u16) -> T Return a copy truncated to the specified number of subsecond digits. With 9 or more digits, self is returned unmodified. T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. #### fn clone_into(&self, target: &mut T) Uses borrowed data to replace owned data, usually by cloning. T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`.
U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion.### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Struct rocket_launch_live::NaiveTime === ``` pub struct NaiveTime { /* private fields */ } ``` ISO 8601 time without timezone. Allows for nanosecond precision and an optional leap second representation. Leap Second Handling --- Since the 1960s, man-made atomic clocks have been so accurate that they are far more regular than the Earth’s own motion. It became desirable to define civil time in terms of the atomic clock, but that risks desynchronizing civil time from the Earth. To account for this, the designers of Coordinated Universal Time (UTC) specified that UTC should be kept within 0.9 seconds of the observed Earth-bound time. When the mean solar day is longer than the ideal (86,400 seconds), the error slowly accumulates and it becomes necessary to add a **leap second** to slow UTC down a bit. (A second could also be removed to speed UTC up a bit, but that has never happened.) The leap second, if any, follows 23:59:59 of June 30 or December 31 in UTC. Fast forward to the 21st century: there were 26 leap seconds between January 1972 and December 2015. Yes, 26 seconds; you can probably read this paragraph within 26 seconds. But those 26 seconds, and possibly more in the future, are never predictable, and whether a leap second will be added is known only about six months in advance.
Internet-based clocks (via NTP) do account for known leap seconds, but the system API normally doesn’t (and often can’t, with no network connection), and there is no reliable way to retrieve leap second information.

Chrono does not try to accurately implement leap seconds; doing so is impossible. Rather, **it allows for leap seconds but behaves as if there are *no other* leap seconds.** Various operations will ignore any possible leap second(s) except when any of the operands were actually leap seconds.

If you cannot tolerate this behavior, you must use a separate `TimeZone` for the International Atomic Time (TAI). TAI is like UTC but has no leap seconds, and thus slightly differs from UTC. Chrono does not yet provide such an implementation, but it is planned.

### Representing Leap Seconds

The leap second is indicated via fractional seconds more than 1 second. This makes it possible to treat a leap second as the prior non-leap second if you don’t care about sub-second accuracy. You should use the proper formatting to get the raw leap second.

All methods accepting fractional seconds will accept such values.

```
use chrono::{NaiveDate, NaiveTime, Utc};

let t = NaiveTime::from_hms_milli_opt(8, 59, 59, 1_000).unwrap();

let dt1 = NaiveDate::from_ymd_opt(2015, 7, 1).unwrap().and_hms_micro_opt(8, 59, 59, 1_000_000).unwrap();

let dt2 = NaiveDate::from_ymd_opt(2015, 6, 30).unwrap().and_hms_nano_opt(23, 59, 59, 1_000_000_000).unwrap().and_local_timezone(Utc).unwrap();
```

Note that the leap second can happen anytime given an appropriate time zone; 2015-07-01 01:23:60 would be a proper leap second if UTC+01:24 had existed. Practically speaking, though, by the time of the first leap second on 1972-06-30, every time zone offset around the world had standardized to the 5-minute alignment.

### Date And Time Arithmetics

As a concrete example, let’s assume that `03:00:60` and `04:00:60` are leap seconds. In reality, of course, leap seconds are separated by at least 6 months.
We will also use some intuitive concise notations for the explanation.

`Time + Duration` (short for `NaiveTime::overflowing_add_signed`):

* `03:00:00 + 1s = 03:00:01`.
* `03:00:59 + 60s = 03:01:59`.
* `03:00:59 + 61s = 03:02:00`.
* `03:00:59 + 1s = 03:01:00`.
* `03:00:60 + 1s = 03:01:00`. Note that the sum is identical to the previous.
* `03:00:60 + 60s = 03:01:59`.
* `03:00:60 + 61s = 03:02:00`.
* `03:00:60.1 + 0.8s = 03:00:60.9`.

`Time - Duration` (short for `NaiveTime::overflowing_sub_signed`):

* `03:00:00 - 1s = 02:59:59`.
* `03:01:00 - 1s = 03:00:59`.
* `03:01:00 - 60s = 03:00:00`.
* `03:00:60 - 60s = 03:00:00`. Note that the result is identical to the previous.
* `03:00:60.7 - 0.4s = 03:00:60.3`.
* `03:00:60.7 - 0.9s = 03:00:59.8`.

`Time - Time` (short for `NaiveTime::signed_duration_since`):

* `04:00:00 - 03:00:00 = 3600s`.
* `03:01:00 - 03:00:00 = 60s`.
* `03:00:60 - 03:00:00 = 60s`. Note that the difference is identical to the previous.
* `03:00:60.6 - 03:00:59.4 = 1.2s`.
* `03:01:00 - 03:00:59.8 = 0.2s`.
* `03:01:00 - 03:00:60.5 = 0.5s`. Note that the difference is larger than the previous, even though the leap second clearly follows the previous whole second.
* `04:00:60.9 - 03:00:60.1 = (04:00:60.9 - 04:00:00) + (04:00:00 - 03:01:00) + (03:01:00 - 03:00:60.1) = 60.9s + 3540s + 0.9s = 3601.8s`.

In general,

* `Time + Duration` unconditionally equals `Duration + Time`.
* `Time - Duration` unconditionally equals `Time + (-Duration)`.
* `Time1 - Time2` unconditionally equals `-(Time2 - Time1)`.
* Associativity does not generally hold, because `(Time + Duration1) - Duration2` no longer equals `Time + (Duration1 - Duration2)` for two positive durations.
  + As a special case, `(Time + Duration) - Duration` also does not equal `Time`.
  + If you can assume that all durations have the same sign, however, then the associativity holds: `(Time + Duration1) + Duration2` equals `Time + (Duration1 + Duration2)` for two positive durations.
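The `Time - Time` rules above can be modeled with plain integers. The following is a minimal, std-only sketch, not chrono's actual code: a time is assumed to be a `(secs, frac)` pair of seconds-from-midnight plus nanoseconds, where `frac >= 1_000_000_000` marks a leap second, and the `adjust` step accounts for a leap second hiding inside the interval.

```rust
use std::cmp::Ordering;

/// Assumed model of a time: whole seconds from midnight plus nanoseconds;
/// `frac >= 1_000_000_000` means the time sits inside a leap second.
/// Returns the signed difference in nanoseconds.
fn duration_since_ns(lhs: (u32, u32), rhs: (u32, u32)) -> i64 {
    let secs = i64::from(lhs.0) - i64::from(rhs.0);
    let frac = i64::from(lhs.1) - i64::from(rhs.1);
    // A leap second on one side contributes one extra whole second that the
    // `secs` fields alone do not account for.
    let adjust = match lhs.0.cmp(&rhs.0) {
        Ordering::Greater => i64::from(rhs.1 >= 1_000_000_000),
        Ordering::Equal => 0,
        Ordering::Less => -i64::from(lhs.1 >= 1_000_000_000),
    };
    (secs + adjust) * 1_000_000_000 + frac
}

fn main() {
    let t = |h: u32, m: u32, s: u32, ns: u32| (h * 3600 + m * 60 + s, ns);
    // 03:00:60 - 03:00:00 = 60s (03:00:60 is stored as 03:00:59 + 1.0s).
    assert_eq!(duration_since_ns(t(3, 0, 59, 1_000_000_000), t(3, 0, 0, 0)), 60_000_000_000);
    // 03:01:00 - 03:00:60.5 = 0.5s.
    assert_eq!(duration_since_ns(t(3, 1, 0, 0), t(3, 0, 59, 1_500_000_000)), 500_000_000);
}
```

This sketch reproduces the worked examples in the list above, including `04:00:60.9 - 03:00:60.1 = 3601.8s`.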
### Reading And Writing Leap Seconds

The “typical” leap seconds on the minute boundary are correctly handled both in the formatting and parsing. The leap second in the human-readable representation will be represented as the second part being 60, as required by ISO 8601.

```
use chrono::{Utc, NaiveDate};

let dt = NaiveDate::from_ymd_opt(2015, 6, 30).unwrap().and_hms_milli_opt(23, 59, 59, 1_000).unwrap().and_local_timezone(Utc).unwrap();
assert_eq!(format!("{:?}", dt), "2015-06-30T23:59:60Z");
```

Chrono nevertheless supports hypothetical leap seconds that are not on the minute boundary. They are allowed for the sake of completeness and consistency; after all, there were several “exotic” time zone offsets with fractional minutes prior to UTC. For such cases the human-readable representation is ambiguous and would be read back as the next non-leap second.

A `NaiveTime` with a leap second that is not on a minute boundary can only be created from a `DateTime` with fractional minutes as offset, or using `Timelike::with_nanosecond()`.

```
use chrono::{FixedOffset, NaiveDate, TimeZone};

let paramaribo_pre1945 = FixedOffset::east_opt(-13236).unwrap(); // -03:40:36
let leap_sec_2015 = NaiveDate::from_ymd_opt(2015, 6, 30).unwrap().and_hms_milli_opt(23, 59, 59, 1_000).unwrap();
let dt1 = paramaribo_pre1945.from_utc_datetime(&leap_sec_2015);
assert_eq!(format!("{:?}", dt1), "2015-06-30T20:19:24-03:40:36");
assert_eq!(format!("{:?}", dt1.time()), "20:19:24");

let next_sec = NaiveDate::from_ymd_opt(2015, 7, 1).unwrap().and_hms_opt(0, 0, 0).unwrap();
let dt2 = paramaribo_pre1945.from_utc_datetime(&next_sec);
assert_eq!(format!("{:?}", dt2), "2015-06-30T20:19:24-03:40:36");
assert_eq!(format!("{:?}", dt2.time()), "20:19:24");

assert!(dt1.time() != dt2.time());
assert!(dt1.time().to_string() == dt2.time().to_string());
```

Since Chrono alone cannot determine any existence of leap seconds, **there is absolutely no guarantee that the leap second read has actually happened**.
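The reading rule above (a `60` in the parsed seconds field folds into second 59 carrying an extra whole second of fraction) can be sketched with a hypothetical helper; this illustrates the representation only, not chrono's parser.

```rust
/// Hypothetical sketch: fold a parsed `hh:mm:60`-style seconds field into the
/// leap-second representation described above (second 59 plus >= 1s of fraction).
fn fold_leap(sec: u32, nanos: u32) -> Option<(u32, u32)> {
    match sec {
        0..=59 => Some((sec, nanos)),
        // `hh:mm:60` is stored as second 59 with one extra whole second of fraction.
        60 => Some((59, nanos + 1_000_000_000)),
        _ => None,
    }
}

fn main() {
    // "08:59:60.123" -> second 59 with 1.123s of fraction.
    assert_eq!(fold_leap(60, 123_000_000), Some((59, 1_123_000_000)));
    assert_eq!(fold_leap(59, 0), Some((59, 0)));
    assert_eq!(fold_leap(61, 0), None);
}
```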
Implementations
---

### impl NaiveTime

#### pub const fn from_hms(hour: u32, min: u32, sec: u32) -> NaiveTime

👎Deprecated since 0.4.23: use `from_hms_opt()` instead

Makes a new `NaiveTime` from hour, minute and second.

No leap second is allowed here; use `NaiveTime::from_hms_*` methods with a subsecond parameter instead.

##### Panics

Panics on invalid hour, minute and/or second.

#### pub const fn from_hms_opt(hour: u32, min: u32, sec: u32) -> Option<NaiveTime>

Makes a new `NaiveTime` from hour, minute and second.

No leap second is allowed here; use `NaiveTime::from_hms_*_opt` methods with a subsecond parameter instead.

##### Errors

Returns `None` on invalid hour, minute and/or second.

##### Example

```
use chrono::NaiveTime;

let from_hms_opt = NaiveTime::from_hms_opt;

assert!(from_hms_opt(0, 0, 0).is_some());
assert!(from_hms_opt(23, 59, 59).is_some());
assert!(from_hms_opt(24, 0, 0).is_none());
assert!(from_hms_opt(23, 60, 0).is_none());
assert!(from_hms_opt(23, 59, 60).is_none());
```

#### pub const fn from_hms_milli( hour: u32, min: u32, sec: u32, milli: u32 ) -> NaiveTime

👎Deprecated since 0.4.23: use `from_hms_milli_opt()` instead

Makes a new `NaiveTime` from hour, minute, second and millisecond.

The millisecond part can exceed 1,000 in order to represent the leap second.

##### Panics

Panics on invalid hour, minute, second and/or millisecond.

#### pub const fn from_hms_milli_opt( hour: u32, min: u32, sec: u32, milli: u32 ) -> Option<NaiveTime>

Makes a new `NaiveTime` from hour, minute, second and millisecond.

The millisecond part is allowed to exceed 1,000 in order to represent a leap second, but only when `sec == 59`.

##### Errors

Returns `None` on invalid hour, minute, second and/or millisecond.
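The range checks behind the `_opt` constructors above can be sketched as a plain predicate (a hypothetical helper, not chrono's implementation); note how the leap-second allowance is tied to `sec == 59`.

```rust
/// Sketch of the `from_hms_milli_opt`-style validation rules described above:
/// every field is range-checked, and `milli` may reach 1_000..=1_999 (a leap
/// second) only on the 59th second.
fn hms_milli_valid(hour: u32, min: u32, sec: u32, milli: u32) -> bool {
    let leap_ok = milli < 1_000 || (sec == 59 && milli < 2_000);
    hour < 24 && min < 60 && sec < 60 && leap_ok
}

fn main() {
    assert!(hms_milli_valid(23, 59, 59, 1_999)); // a leap second after 23:59:59
    assert!(!hms_milli_valid(23, 59, 58, 1_000)); // leap only allowed when sec == 59
    assert!(!hms_milli_valid(24, 0, 0, 0));
}
```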
##### Example

```
use chrono::NaiveTime;

let from_hmsm_opt = NaiveTime::from_hms_milli_opt;

assert!(from_hmsm_opt(0, 0, 0, 0).is_some());
assert!(from_hmsm_opt(23, 59, 59, 999).is_some());
assert!(from_hmsm_opt(23, 59, 59, 1_999).is_some()); // a leap second after 23:59:59
assert!(from_hmsm_opt(24, 0, 0, 0).is_none());
assert!(from_hmsm_opt(23, 60, 0, 0).is_none());
assert!(from_hmsm_opt(23, 59, 60, 0).is_none());
assert!(from_hmsm_opt(23, 59, 59, 2_000).is_none());
```

#### pub const fn from_hms_micro( hour: u32, min: u32, sec: u32, micro: u32 ) -> NaiveTime

👎Deprecated since 0.4.23: use `from_hms_micro_opt()` instead

Makes a new `NaiveTime` from hour, minute, second and microsecond.

The microsecond part is allowed to exceed 1,000,000 in order to represent a leap second, but only when `sec == 59`.

##### Panics

Panics on invalid hour, minute, second and/or microsecond.

#### pub const fn from_hms_micro_opt( hour: u32, min: u32, sec: u32, micro: u32 ) -> Option<NaiveTime>

Makes a new `NaiveTime` from hour, minute, second and microsecond.

The microsecond part is allowed to exceed 1,000,000 in order to represent a leap second, but only when `sec == 59`.

##### Errors

Returns `None` on invalid hour, minute, second and/or microsecond.

##### Example

```
use chrono::NaiveTime;

let from_hmsu_opt = NaiveTime::from_hms_micro_opt;

assert!(from_hmsu_opt(0, 0, 0, 0).is_some());
assert!(from_hmsu_opt(23, 59, 59, 999_999).is_some());
assert!(from_hmsu_opt(23, 59, 59, 1_999_999).is_some()); // a leap second after 23:59:59
assert!(from_hmsu_opt(24, 0, 0, 0).is_none());
assert!(from_hmsu_opt(23, 60, 0, 0).is_none());
assert!(from_hmsu_opt(23, 59, 60, 0).is_none());
assert!(from_hmsu_opt(23, 59, 59, 2_000_000).is_none());
```

#### pub const fn from_hms_nano( hour: u32, min: u32, sec: u32, nano: u32 ) -> NaiveTime

👎Deprecated since 0.4.23: use `from_hms_nano_opt()` instead

Makes a new `NaiveTime` from hour, minute, second and nanosecond.
The nanosecond part is allowed to exceed 1,000,000,000 in order to represent a leap second, but only when `sec == 59`.

##### Panics

Panics on invalid hour, minute, second and/or nanosecond.

#### pub const fn from_hms_nano_opt( hour: u32, min: u32, sec: u32, nano: u32 ) -> Option<NaiveTime>

Makes a new `NaiveTime` from hour, minute, second and nanosecond.

The nanosecond part is allowed to exceed 1,000,000,000 in order to represent a leap second, but only when `sec == 59`.

##### Errors

Returns `None` on invalid hour, minute, second and/or nanosecond.

##### Example

```
use chrono::NaiveTime;

let from_hmsn_opt = NaiveTime::from_hms_nano_opt;

assert!(from_hmsn_opt(0, 0, 0, 0).is_some());
assert!(from_hmsn_opt(23, 59, 59, 999_999_999).is_some());
assert!(from_hmsn_opt(23, 59, 59, 1_999_999_999).is_some()); // a leap second after 23:59:59
assert!(from_hmsn_opt(24, 0, 0, 0).is_none());
assert!(from_hmsn_opt(23, 60, 0, 0).is_none());
assert!(from_hmsn_opt(23, 59, 60, 0).is_none());
assert!(from_hmsn_opt(23, 59, 59, 2_000_000_000).is_none());
```

#### pub const fn from_num_seconds_from_midnight(secs: u32, nano: u32) -> NaiveTime

👎Deprecated since 0.4.23: use `from_num_seconds_from_midnight_opt()` instead

Makes a new `NaiveTime` from the number of seconds since midnight and nanosecond.

The nanosecond part is allowed to exceed 1,000,000,000 in order to represent a leap second, but only when `secs % 60 == 59`.

##### Panics

Panics on invalid number of seconds and/or nanosecond.

#### pub const fn from_num_seconds_from_midnight_opt( secs: u32, nano: u32 ) -> Option<NaiveTime>

Makes a new `NaiveTime` from the number of seconds since midnight and nanosecond.

The nanosecond part is allowed to exceed 1,000,000,000 in order to represent a leap second, but only when `secs % 60 == 59`.

##### Errors

Returns `None` on invalid number of seconds and/or nanosecond.
##### Example

```
use chrono::NaiveTime;

let from_nsecs_opt = NaiveTime::from_num_seconds_from_midnight_opt;

assert!(from_nsecs_opt(0, 0).is_some());
assert!(from_nsecs_opt(86399, 999_999_999).is_some());
assert!(from_nsecs_opt(86399, 1_999_999_999).is_some()); // a leap second after 23:59:59
assert!(from_nsecs_opt(86_400, 0).is_none());
assert!(from_nsecs_opt(86399, 2_000_000_000).is_none());
```

#### pub fn parse_from_str(s: &str, fmt: &str) -> Result<NaiveTime, ParseError>

Parses a string with the specified format string and returns a new `NaiveTime`. See the `format::strftime` module on the supported escape sequences.

##### Example

```
use chrono::NaiveTime;

let parse_from_str = NaiveTime::parse_from_str;

assert_eq!(parse_from_str("23:56:04", "%H:%M:%S"),
           Ok(NaiveTime::from_hms_opt(23, 56, 4).unwrap()));
assert_eq!(parse_from_str("pm012345.6789", "%p%I%M%S%.f"),
           Ok(NaiveTime::from_hms_micro_opt(13, 23, 45, 678_900).unwrap()));
```

The date and offset are ignored for the purpose of parsing.

```
assert_eq!(parse_from_str("2014-5-17T12:34:56+09:30", "%Y-%m-%dT%H:%M:%S%z"),
           Ok(NaiveTime::from_hms_opt(12, 34, 56).unwrap()));
```

Leap seconds are correctly handled by treating any time of the form `hh:mm:60` as a leap second. (This equally applies to the formatting, so the round trip is possible.)

```
assert_eq!(parse_from_str("08:59:60.123", "%H:%M:%S%.f"),
           Ok(NaiveTime::from_hms_milli_opt(8, 59, 59, 1_123).unwrap()));
```

Missing seconds are assumed to be zero, but out-of-bound times or insufficient fields are errors otherwise.

```
assert_eq!(parse_from_str("7:15", "%H:%M"),
           Ok(NaiveTime::from_hms_opt(7, 15, 0).unwrap()));

assert!(parse_from_str("04m33s", "%Mm%Ss").is_err());
assert!(parse_from_str("12", "%H").is_err());
assert!(parse_from_str("17:60", "%H:%M").is_err());
assert!(parse_from_str("24:00:00", "%H:%M:%S").is_err());
```

All parsed fields should be consistent with each other; otherwise it is an error.
Here `%H` is for 24-hour clocks, unlike `%I`, and thus can be independently determined without AM/PM.

```
assert!(parse_from_str("13:07 AM", "%H:%M %p").is_err());
```

#### pub fn parse_and_remainder<'a>( s: &'a str, fmt: &str ) -> Result<(NaiveTime, &'a str), ParseError>

Parses a string from a user-specified format into a new `NaiveTime` value, and a slice with the remaining portion of the string. See the `format::strftime` module on the supported escape sequences.

Similar to `parse_from_str`.

##### Example

```
let (time, remainder) = NaiveTime::parse_and_remainder(
    "3h4m33s trailing text", "%-Hh%-Mm%-Ss").unwrap();
assert_eq!(time, NaiveTime::from_hms_opt(3, 4, 33).unwrap());
assert_eq!(remainder, " trailing text");
```

#### pub fn overflowing_add_signed(&self, rhs: Duration) -> (NaiveTime, i64)

Adds the given `Duration` to the current time, and also returns the number of *seconds* in the integral number of days ignored from the addition.

##### Example

```
use chrono::{Duration, NaiveTime};

let from_hms = |h, m, s| { NaiveTime::from_hms_opt(h, m, s).unwrap() };

assert_eq!(from_hms(3, 4, 5).overflowing_add_signed(Duration::hours(11)),
           (from_hms(14, 4, 5), 0));
assert_eq!(from_hms(3, 4, 5).overflowing_add_signed(Duration::hours(23)),
           (from_hms(2, 4, 5), 86_400));
assert_eq!(from_hms(3, 4, 5).overflowing_add_signed(Duration::hours(-7)),
           (from_hms(20, 4, 5), -86_400));
```

#### pub fn overflowing_sub_signed(&self, rhs: Duration) -> (NaiveTime, i64)

Subtracts the given `Duration` from the current time, and also returns the number of *seconds* in the integral number of days ignored from the subtraction.
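The day-overflow bookkeeping that `overflowing_add_signed` and `overflowing_sub_signed` describe can be sketched with plain integers, ignoring leap seconds (an assumed model, not chrono's code):

```rust
/// Sketch of the wrap-around bookkeeping (leap seconds ignored): returns the
/// wrapped seconds-from-midnight plus the whole days discarded, in seconds.
fn overflowing_add(secs: i64, delta: i64) -> (i64, i64) {
    let total = secs + delta;
    let wrapped = total.rem_euclid(86_400); // always in 0..86_400
    (wrapped, total - wrapped)
}

fn main() {
    let s = |h: i64, m: i64, sec: i64| h * 3600 + m * 60 + sec;
    // 03:04:05 + 23h wraps to 02:04:05, discarding one day (86_400 s).
    assert_eq!(overflowing_add(s(3, 4, 5), 23 * 3600), (s(2, 4, 5), 86_400));
    // 03:04:05 - 7h wraps back to 20:04:05, borrowing one day.
    assert_eq!(overflowing_add(s(3, 4, 5), -7 * 3600), (s(20, 4, 5), -86_400));
}
```

Subtraction is the same routine with a negated delta, matching the `Time - Duration = Time + (-Duration)` identity stated earlier.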
##### Example

```
use chrono::{Duration, NaiveTime};

let from_hms = |h, m, s| { NaiveTime::from_hms_opt(h, m, s).unwrap() };

assert_eq!(from_hms(3, 4, 5).overflowing_sub_signed(Duration::hours(2)),
           (from_hms(1, 4, 5), 0));
assert_eq!(from_hms(3, 4, 5).overflowing_sub_signed(Duration::hours(17)),
           (from_hms(10, 4, 5), 86_400));
assert_eq!(from_hms(3, 4, 5).overflowing_sub_signed(Duration::hours(-22)),
           (from_hms(1, 4, 5), -86_400));
```

#### pub fn signed_duration_since(self, rhs: NaiveTime) -> Duration

Subtracts another `NaiveTime` from the current time. Returns a `Duration` within +/- 1 day. This does not overflow or underflow at all.

As a part of Chrono’s leap second handling, the subtraction assumes that **there is no leap second ever**, except when any of the `NaiveTime`s themselves represents a leap second in which case the assumption becomes that **there are exactly one (or two) leap second(s) ever**.

##### Example

```
use chrono::{Duration, NaiveTime};

let from_hmsm = |h, m, s, milli| { NaiveTime::from_hms_milli_opt(h, m, s, milli).unwrap() };
let since = NaiveTime::signed_duration_since;

assert_eq!(since(from_hmsm(3, 5, 7, 900), from_hmsm(3, 5, 7, 900)), Duration::zero());
assert_eq!(since(from_hmsm(3, 5, 7, 900), from_hmsm(3, 5, 7, 875)), Duration::milliseconds(25));
assert_eq!(since(from_hmsm(3, 5, 7, 900), from_hmsm(3, 5, 6, 925)), Duration::milliseconds(975));
assert_eq!(since(from_hmsm(3, 5, 7, 900), from_hmsm(3, 5, 0, 900)), Duration::seconds(7));
assert_eq!(since(from_hmsm(3, 5, 7, 900), from_hmsm(3, 0, 7, 900)), Duration::seconds(5 * 60));
assert_eq!(since(from_hmsm(3, 5, 7, 900), from_hmsm(0, 5, 7, 900)), Duration::seconds(3 * 3600));
assert_eq!(since(from_hmsm(3, 5, 7, 900), from_hmsm(4, 5, 7, 900)), Duration::seconds(-3600));
assert_eq!(since(from_hmsm(3, 5, 7, 900), from_hmsm(2, 4, 6, 800)),
           Duration::seconds(3600 + 60 + 1) + Duration::milliseconds(100));
```

Leap seconds are handled, but the subtraction assumes that no other leap
seconds happened.

```
assert_eq!(since(from_hmsm(3, 0, 59, 1_000), from_hmsm(3, 0, 59, 0)), Duration::seconds(1));
assert_eq!(since(from_hmsm(3, 0, 59, 1_500), from_hmsm(3, 0, 59, 0)), Duration::milliseconds(1500));
assert_eq!(since(from_hmsm(3, 0, 59, 1_000), from_hmsm(3, 0, 0, 0)), Duration::seconds(60));
assert_eq!(since(from_hmsm(3, 0, 0, 0), from_hmsm(2, 59, 59, 1_000)), Duration::seconds(1));
assert_eq!(since(from_hmsm(3, 0, 59, 1_000), from_hmsm(2, 59, 59, 1_000)), Duration::seconds(61));
```

#### pub fn format_with_items<'a, I, B>(&self, items: I) -> DelayedFormat<I> where I: Iterator<Item = B> + Clone, B: Borrow<Item<'a>>

Formats the time with the specified formatting items. Otherwise it is the same as the ordinary `format` method.

The `Iterator` of items should be `Clone`able, since the resulting `DelayedFormat` value may be formatted multiple times.

##### Example

```
use chrono::NaiveTime;
use chrono::format::strftime::StrftimeItems;

let fmt = StrftimeItems::new("%H:%M:%S");
let t = NaiveTime::from_hms_opt(23, 56, 4).unwrap();
assert_eq!(t.format_with_items(fmt.clone()).to_string(), "23:56:04");
assert_eq!(t.format("%H:%M:%S").to_string(), "23:56:04");
```

The resulting `DelayedFormat` can be formatted directly via the `Display` trait.

```
assert_eq!(format!("{}", t.format_with_items(fmt)), "23:56:04");
```

#### pub fn format<'a>(&self, fmt: &'a str) -> DelayedFormat<StrftimeItems<'a>>

Formats the time with the specified format string. See the `format::strftime` module on the supported escape sequences.

This returns a `DelayedFormat`, which gets converted to a string only when actual formatting happens. You may use the `to_string` method to get a `String`, or just feed it into `print!` and other formatting macros. (In this way it avoids the redundant memory allocation.)

A wrong format string does *not* issue an error immediately. Rather, converting or formatting the `DelayedFormat` fails.
For this reason, you should use the resulting `DelayedFormat` immediately.

##### Example

```
use chrono::NaiveTime;

let t = NaiveTime::from_hms_nano_opt(23, 56, 4, 12_345_678).unwrap();
assert_eq!(t.format("%H:%M:%S").to_string(), "23:56:04");
assert_eq!(t.format("%H:%M:%S%.6f").to_string(), "23:56:04.012345");
assert_eq!(t.format("%-I:%M %p").to_string(), "11:56 PM");
```

The resulting `DelayedFormat` can be formatted directly via the `Display` trait.

```
assert_eq!(format!("{}", t.format("%H:%M:%S")), "23:56:04");
assert_eq!(format!("{}", t.format("%H:%M:%S%.6f")), "23:56:04.012345");
assert_eq!(format!("{}", t.format("%-I:%M %p")), "11:56 PM");
```

#### pub const MIN: NaiveTime = _

The earliest possible `NaiveTime`.

Trait Implementations
---

### impl Add<Duration> for NaiveTime

#### type Output = NaiveTime

The resulting type after applying the `+` operator.

#### fn add(self, rhs: Duration) -> NaiveTime

Performs the `+` operation.

An addition of `Duration` to `NaiveTime` wraps around and never overflows or underflows. In particular the addition ignores integral numbers of days.

As a part of Chrono’s leap second handling, the addition assumes that **there is no leap second ever**, except when the `NaiveTime` itself represents a leap second in which case the assumption becomes that **there is exactly a single leap second ever**.
#### Example

```
use chrono::{Duration, NaiveTime};

let from_hmsm = |h, m, s, milli| { NaiveTime::from_hms_milli_opt(h, m, s, milli).unwrap() };

assert_eq!(from_hmsm(3, 5, 7, 0) + Duration::zero(), from_hmsm(3, 5, 7, 0));
assert_eq!(from_hmsm(3, 5, 7, 0) + Duration::seconds(1), from_hmsm(3, 5, 8, 0));
assert_eq!(from_hmsm(3, 5, 7, 0) + Duration::seconds(-1), from_hmsm(3, 5, 6, 0));
assert_eq!(from_hmsm(3, 5, 7, 0) + Duration::seconds(60 + 4), from_hmsm(3, 6, 11, 0));
assert_eq!(from_hmsm(3, 5, 7, 0) + Duration::seconds(7*60*60 - 6*60), from_hmsm(9, 59, 7, 0));
assert_eq!(from_hmsm(3, 5, 7, 0) + Duration::milliseconds(80), from_hmsm(3, 5, 7, 80));
assert_eq!(from_hmsm(3, 5, 7, 950) + Duration::milliseconds(280), from_hmsm(3, 5, 8, 230));
assert_eq!(from_hmsm(3, 5, 7, 950) + Duration::milliseconds(-980), from_hmsm(3, 5, 6, 970));
```

The addition wraps around.

```
assert_eq!(from_hmsm(3, 5, 7, 0) + Duration::seconds(22*60*60), from_hmsm(1, 5, 7, 0));
assert_eq!(from_hmsm(3, 5, 7, 0) + Duration::seconds(-8*60*60), from_hmsm(19, 5, 7, 0));
assert_eq!(from_hmsm(3, 5, 7, 0) + Duration::days(800), from_hmsm(3, 5, 7, 0));
```

Leap seconds are handled, but the addition assumes that it is the only leap second that ever happened.

```
let leap = from_hmsm(3, 5, 59, 1_300);
assert_eq!(leap + Duration::zero(), from_hmsm(3, 5, 59, 1_300));
assert_eq!(leap + Duration::milliseconds(-500), from_hmsm(3, 5, 59, 800));
assert_eq!(leap + Duration::milliseconds(500), from_hmsm(3, 5, 59, 1_800));
assert_eq!(leap + Duration::milliseconds(800), from_hmsm(3, 6, 0, 100));
assert_eq!(leap + Duration::seconds(10), from_hmsm(3, 6, 9, 300));
assert_eq!(leap + Duration::seconds(-10), from_hmsm(3, 5, 50, 300));
assert_eq!(leap + Duration::days(1), from_hmsm(3, 5, 59, 300));
```

#### type Output = NaiveTime

The resulting type after applying the `+` operator.

#### fn add(self, rhs: Duration) -> NaiveTime

Performs the `+` operation.
#### type Output = NaiveTime

The resulting type after applying the `+` operator.

#### fn add(self, rhs: FixedOffset) -> NaiveTime

Performs the `+` operation.

#### fn add_assign(&mut self, rhs: Duration)

Performs the `+=` operation.

#### fn add_assign(&mut self, rhs: Duration)

Performs the `+=` operation.

#### fn clone(&self) -> NaiveTime

Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

The `Debug` output of the naive time `t` is the same as `t.format("%H:%M:%S%.f")`.

The string printed can be readily parsed via the `parse` method on `str`.

It should be noted that, for leap seconds not on the minute boundary, it may print a representation not distinguishable from non-leap seconds. This doesn’t matter in practice, since such leap seconds never happened. (By the time of the first leap second on 1972-06-30, every time zone offset around the world had standardized to the 5-minute alignment.)

#### Example

```
use chrono::NaiveTime;

assert_eq!(format!("{:?}", NaiveTime::from_hms_opt(23, 56, 4).unwrap()), "23:56:04");
assert_eq!(format!("{:?}", NaiveTime::from_hms_milli_opt(23, 56, 4, 12).unwrap()), "23:56:04.012");
assert_eq!(format!("{:?}", NaiveTime::from_hms_micro_opt(23, 56, 4, 1234).unwrap()), "23:56:04.001234");
assert_eq!(format!("{:?}", NaiveTime::from_hms_nano_opt(23, 56, 4, 123456).unwrap()), "23:56:04.000123456");
```

Leap seconds may also be used.

```
assert_eq!(format!("{:?}", NaiveTime::from_hms_milli_opt(6, 59, 59, 1_500).unwrap()), "06:59:60.500");
```

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>

Formats the value using the given formatter.

The default value for a `NaiveTime` is midnight, 00:00:00 exactly.

#### Example

```
use chrono::NaiveTime;

let default_time = NaiveTime::default();
assert_eq!(default_time, NaiveTime::from_hms_opt(0, 0, 0).unwrap());
```

#### fn default() -> NaiveTime

Returns the “default value” for a type.
The `Display` output of the naive time `t` is the same as `t.format("%H:%M:%S%.f")`.

The string printed can be readily parsed via the `parse` method on `str`.

It should be noted that, for leap seconds not on the minute boundary, it may print a representation not distinguishable from non-leap seconds. This doesn’t matter in practice, since such leap seconds never happened. (By the time of the first leap second on 1972-06-30, every time zone offset around the world had standardized to the 5-minute alignment.)

#### Example

```
use chrono::NaiveTime;

assert_eq!(format!("{}", NaiveTime::from_hms_opt(23, 56, 4).unwrap()), "23:56:04");
assert_eq!(format!("{}", NaiveTime::from_hms_milli_opt(23, 56, 4, 12).unwrap()), "23:56:04.012");
assert_eq!(format!("{}", NaiveTime::from_hms_micro_opt(23, 56, 4, 1234).unwrap()), "23:56:04.001234");
assert_eq!(format!("{}", NaiveTime::from_hms_nano_opt(23, 56, 4, 123456).unwrap()), "23:56:04.000123456");
```

Leap seconds may also be used.

```
assert_eq!(format!("{}", NaiveTime::from_hms_milli_opt(6, 59, 59, 1_500).unwrap()), "06:59:60.500");
```

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>

Formats the value using the given formatter.

Parsing a `str` into a `NaiveTime` uses the same format, `%H:%M:%S%.f`, as in `Debug` and `Display`.
#### Example

```
use chrono::NaiveTime;

let t = NaiveTime::from_hms_opt(23, 56, 4).unwrap();
assert_eq!("23:56:04".parse::<NaiveTime>(), Ok(t));

let t = NaiveTime::from_hms_nano_opt(23, 56, 4, 12_345_678).unwrap();
assert_eq!("23:56:4.012345678".parse::<NaiveTime>(), Ok(t));

let t = NaiveTime::from_hms_nano_opt(23, 59, 59, 1_234_567_890).unwrap(); // leap second
assert_eq!("23:59:60.23456789".parse::<NaiveTime>(), Ok(t));

// Seconds are optional
let t = NaiveTime::from_hms_opt(23, 56, 0).unwrap();
assert_eq!("23:56".parse::<NaiveTime>(), Ok(t));

assert!("foo".parse::<NaiveTime>().is_err());
```

#### type Err = ParseError

The associated error which can be returned from parsing.

#### fn from_str(s: &str) -> Result<NaiveTime, ParseError>

Parses a string `s` to return a value of this type.

#### fn hash<__H>(&self, state: &mut __H) where __H: Hasher

Feeds this value into the given `Hasher`.

#### fn hash_slice<H>(data: &[Self], state: &mut H) where H: Hasher, Self: Sized

Feeds a slice of this type into the given `Hasher`.

#### fn cmp(&self, other: &NaiveTime) -> Ordering

This method returns an `Ordering` between `self` and `other`.

#### fn max(self, other: Self) -> Self where Self: Sized

Compares and returns the maximum of two values.

#### fn min(self, other: Self) -> Self where Self: Sized

Compares and returns the minimum of two values.

#### fn clamp(self, min: Self, max: Self) -> Self where Self: Sized + PartialOrd<Self>

Restrict a value to a certain interval.

#### fn eq(&self, other: &NaiveTime) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &Rhs) -> bool

This method tests for `!=`.
The default implementation is almost always sufficient, and should not be overridden without very good reason.

### impl PartialOrd<NaiveTime> for NaiveTime

#### fn partial_cmp(&self, other: &NaiveTime) -> Option<Ordering>

This method returns an ordering between `self` and `other` values if one exists.

#### fn lt(&self, other: &Rhs) -> bool

This method tests less than (for `self` and `other`) and is used by the `<` operator.

#### fn le(&self, other: &Rhs) -> bool

This method tests less than or equal to (for `self` and `other`) and is used by the `<=` operator.

#### fn gt(&self, other: &Rhs) -> bool

This method tests greater than (for `self` and `other`) and is used by the `>` operator.

#### fn ge(&self, other: &Rhs) -> bool

This method tests greater than or equal to (for `self` and `other`) and is used by the `>=` operator.

A subtraction of `Duration` from `NaiveTime` wraps around and never overflows or underflows. In particular the subtraction ignores integral numbers of days. It is the same as the addition with a negated `Duration`.

As a part of Chrono’s leap second handling, the subtraction assumes that **there is no leap second ever**, except when the `NaiveTime` itself represents a leap second in which case the assumption becomes that **there is exactly a single leap second ever**.
#### Example

```
use chrono::{Duration, NaiveTime};

let from_hmsm = |h, m, s, milli| { NaiveTime::from_hms_milli_opt(h, m, s, milli).unwrap() };

assert_eq!(from_hmsm(3, 5, 7, 0) - Duration::zero(), from_hmsm(3, 5, 7, 0));
assert_eq!(from_hmsm(3, 5, 7, 0) - Duration::seconds(1), from_hmsm(3, 5, 6, 0));
assert_eq!(from_hmsm(3, 5, 7, 0) - Duration::seconds(60 + 5), from_hmsm(3, 4, 2, 0));
assert_eq!(from_hmsm(3, 5, 7, 0) - Duration::seconds(2*60*60 + 6*60), from_hmsm(0, 59, 7, 0));
assert_eq!(from_hmsm(3, 5, 7, 0) - Duration::milliseconds(80), from_hmsm(3, 5, 6, 920));
assert_eq!(from_hmsm(3, 5, 7, 950) - Duration::milliseconds(280), from_hmsm(3, 5, 7, 670));
```

The subtraction wraps around.

```
assert_eq!(from_hmsm(3, 5, 7, 0) - Duration::seconds(8*60*60), from_hmsm(19, 5, 7, 0));
assert_eq!(from_hmsm(3, 5, 7, 0) - Duration::days(800), from_hmsm(3, 5, 7, 0));
```

Leap seconds are handled, but the subtraction assumes that it is the only leap second that ever happened.

```
let leap = from_hmsm(3, 5, 59, 1_300);
assert_eq!(leap - Duration::zero(), from_hmsm(3, 5, 59, 1_300));
assert_eq!(leap - Duration::milliseconds(200), from_hmsm(3, 5, 59, 1_100));
assert_eq!(leap - Duration::milliseconds(500), from_hmsm(3, 5, 59, 800));
assert_eq!(leap - Duration::seconds(60), from_hmsm(3, 5, 0, 300));
assert_eq!(leap - Duration::days(1), from_hmsm(3, 6, 0, 300));
```

#### type Output = NaiveTime

The resulting type after applying the `-` operator.

#### fn sub(self, rhs: Duration) -> NaiveTime

Performs the `-` operation.

#### type Output = NaiveTime

The resulting type after applying the `-` operator.

#### fn sub(self, rhs: Duration) -> NaiveTime

Performs the `-` operation.

#### type Output = NaiveTime

The resulting type after applying the `-` operator.

#### fn sub(self, rhs: FixedOffset) -> NaiveTime

Performs the `-` operation.

Subtracts another `NaiveTime` from the current time. Returns a `Duration` within +/- 1 day. This does not overflow or underflow at all.
As a part of Chrono’s leap second handling, the subtraction assumes that **there is no leap second ever**, except when any of the `NaiveTime`s themselves represents a leap second in which case the assumption becomes that **there are exactly one (or two) leap second(s) ever**.

The implementation is a wrapper around `NaiveTime::signed_duration_since`.

#### Example

```
use chrono::{Duration, NaiveTime};

let from_hmsm = |h, m, s, milli| { NaiveTime::from_hms_milli_opt(h, m, s, milli).unwrap() };

assert_eq!(from_hmsm(3, 5, 7, 900) - from_hmsm(3, 5, 7, 900), Duration::zero());
assert_eq!(from_hmsm(3, 5, 7, 900) - from_hmsm(3, 5, 7, 875), Duration::milliseconds(25));
assert_eq!(from_hmsm(3, 5, 7, 900) - from_hmsm(3, 5, 6, 925), Duration::milliseconds(975));
assert_eq!(from_hmsm(3, 5, 7, 900) - from_hmsm(3, 5, 0, 900), Duration::seconds(7));
assert_eq!(from_hmsm(3, 5, 7, 900) - from_hmsm(3, 0, 7, 900), Duration::seconds(5 * 60));
assert_eq!(from_hmsm(3, 5, 7, 900) - from_hmsm(0, 5, 7, 900), Duration::seconds(3 * 3600));
assert_eq!(from_hmsm(3, 5, 7, 900) - from_hmsm(4, 5, 7, 900), Duration::seconds(-3600));
assert_eq!(from_hmsm(3, 5, 7, 900) - from_hmsm(2, 4, 6, 800),
           Duration::seconds(3600 + 60 + 1) + Duration::milliseconds(100));
```

Leap seconds are handled, but the subtraction assumes that no other leap seconds happened.

```
assert_eq!(from_hmsm(3, 0, 59, 1_000) - from_hmsm(3, 0, 59, 0), Duration::seconds(1));
assert_eq!(from_hmsm(3, 0, 59, 1_500) - from_hmsm(3, 0, 59, 0), Duration::milliseconds(1500));
assert_eq!(from_hmsm(3, 0, 59, 1_000) - from_hmsm(3, 0, 0, 0), Duration::seconds(60));
assert_eq!(from_hmsm(3, 0, 0, 0) - from_hmsm(2, 59, 59, 1_000), Duration::seconds(1));
assert_eq!(from_hmsm(3, 0, 59, 1_000) - from_hmsm(2, 59, 59, 1_000), Duration::seconds(61));
```

#### type Output = Duration

The resulting type after applying the `-` operator.

#### fn sub(self, rhs: NaiveTime) -> Duration

Performs the `-` operation.
#### fn sub_assign(&mut self, rhs: Duration)

Performs the `-=` operation.

#### fn sub_assign(&mut self, rhs: Duration)

Performs the `-=` operation.

#### fn hour(&self) -> u32

Returns the hour number from 0 to 23.

##### Example

```
use chrono::{NaiveTime, Timelike};

assert_eq!(NaiveTime::from_hms_opt(0, 0, 0).unwrap().hour(), 0);
assert_eq!(NaiveTime::from_hms_nano_opt(23, 56, 4, 12_345_678).unwrap().hour(), 23);
```

#### fn minute(&self) -> u32

Returns the minute number from 0 to 59.

##### Example

```
use chrono::{NaiveTime, Timelike};

assert_eq!(NaiveTime::from_hms_opt(0, 0, 0).unwrap().minute(), 0);
assert_eq!(NaiveTime::from_hms_nano_opt(23, 56, 4, 12_345_678).unwrap().minute(), 56);
```

#### fn second(&self) -> u32

Returns the second number from 0 to 59.

##### Example

```
use chrono::{NaiveTime, Timelike};

assert_eq!(NaiveTime::from_hms_opt(0, 0, 0).unwrap().second(), 0);
assert_eq!(NaiveTime::from_hms_nano_opt(23, 56, 4, 12_345_678).unwrap().second(), 4);
```

This method never returns 60, even during a leap second. Use the proper formatting method to get a human-readable representation.

```
let leap = NaiveTime::from_hms_milli_opt(23, 59, 59, 1_000).unwrap();
assert_eq!(leap.second(), 59);
assert_eq!(leap.format("%H:%M:%S").to_string(), "23:59:60");
```

#### fn nanosecond(&self) -> u32

Returns the number of nanoseconds since the whole non-leap second. The range from 1,000,000,000 to 1,999,999,999 represents the leap second.

##### Example

```
use chrono::{NaiveTime, Timelike};

assert_eq!(NaiveTime::from_hms_opt(0, 0, 0).unwrap().nanosecond(), 0);
assert_eq!(NaiveTime::from_hms_nano_opt(23, 56, 4, 12_345_678).unwrap().nanosecond(), 12_345_678);
```

Leap seconds may have seemingly out-of-range return values. You can reduce the range with `time.nanosecond() % 1_000_000_000`, or use the proper formatting method to get a human-readable representation.
``` let leap = NaiveTime::from_hms_milli_opt(23, 59, 59, 1_000).unwrap(); assert_eq!(leap.nanosecond(), 1_000_000_000); assert_eq!(leap.format("%H:%M:%S%.9f").to_string(), "23:59:60.000000000"); ``` #### fn with_hour(&self, hour: u32) -> Option<NaiveTimeMakes a new `NaiveTime` with the hour number changed. ##### Errors Returns `None` if the value for `hour` is invalid. ##### Example ``` use chrono::{NaiveTime, Timelike}; let dt = NaiveTime::from_hms_nano_opt(23, 56, 4, 12_345_678).unwrap(); assert_eq!(dt.with_hour(7), Some(NaiveTime::from_hms_nano_opt(7, 56, 4, 12_345_678).unwrap())); assert_eq!(dt.with_hour(24), None); ``` #### fn with_minute(&self, min: u32) -> Option<NaiveTimeMakes a new `NaiveTime` with the minute number changed. ##### Errors Returns `None` if the value for `minute` is invalid. ##### Example ``` use chrono::{NaiveTime, Timelike}; let dt = NaiveTime::from_hms_nano_opt(23, 56, 4, 12_345_678).unwrap(); assert_eq!(dt.with_minute(45), Some(NaiveTime::from_hms_nano_opt(23, 45, 4, 12_345_678).unwrap())); assert_eq!(dt.with_minute(60), None); ``` #### fn with_second(&self, sec: u32) -> Option<NaiveTimeMakes a new `NaiveTime` with the second number changed. As with the `second` method, the input range is restricted to 0 through 59. ##### Errors Returns `None` if the value for `second` is invalid. ##### Example ``` use chrono::{NaiveTime, Timelike}; let dt = NaiveTime::from_hms_nano_opt(23, 56, 4, 12_345_678).unwrap(); assert_eq!(dt.with_second(17), Some(NaiveTime::from_hms_nano_opt(23, 56, 17, 12_345_678).unwrap())); assert_eq!(dt.with_second(60), None); ``` #### fn with_nanosecond(&self, nano: u32) -> Option<NaiveTimeMakes a new `NaiveTime` with nanoseconds since the whole non-leap second changed. As with the `nanosecond` method, the input range can exceed 1,000,000,000 for leap seconds. ##### Errors Returns `None` if `nanosecond >= 2,000,000,000`. 
##### Example ``` use chrono::{NaiveTime, Timelike}; let dt = NaiveTime::from_hms_nano_opt(23, 56, 4, 12_345_678).unwrap(); assert_eq!(dt.with_nanosecond(333_333_333), Some(NaiveTime::from_hms_nano_opt(23, 56, 4, 333_333_333).unwrap())); assert_eq!(dt.with_nanosecond(2_000_000_000), None); ``` Leap seconds can theoretically follow *any* whole second. The following would be a proper leap second at the time zone offset of UTC-00:03:57 (there are several historical examples comparable to this “non-sense” offset), and therefore is allowed. ``` let dt = NaiveTime::from_hms_nano_opt(23, 56, 4, 12_345_678).unwrap(); let strange_leap_second = dt.with_nanosecond(1_333_333_333).unwrap(); assert_eq!(strange_leap_second.nanosecond(), 1_333_333_333); ``` #### fn num_seconds_from_midnight(&self) -> u32 Returns the number of non-leap seconds past the last midnight. ##### Example ``` use chrono::{NaiveTime, Timelike}; assert_eq!(NaiveTime::from_hms_opt(1, 2, 3).unwrap().num_seconds_from_midnight(), 3723); assert_eq!(NaiveTime::from_hms_nano_opt(23, 56, 4, 12_345_678).unwrap().num_seconds_from_midnight(), 86164); assert_eq!(NaiveTime::from_hms_milli_opt(23, 59, 59, 1_000).unwrap().num_seconds_from_midnight(), 86399); ``` #### fn hour12(&self) -> (bool, u32) Returns the hour number from 1 to 12 with a boolean flag, which is false for AM and true for PM.### impl Copy for NaiveTime ### impl Eq for NaiveTime ### impl StructuralEq for NaiveTime ### impl StructuralPartialEq for NaiveTime Auto Trait Implementations --- ### impl RefUnwindSafe for NaiveTime ### impl Send for NaiveTime ### impl Sync for NaiveTime ### impl Unpin for NaiveTime ### impl UnwindSafe for NaiveTime Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. 
Q: Eq + ?Sized, K: Borrow<Q> + ?Sized, #### fn equivalent(&self, key: &K) -> bool Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> SubsecRound for Twhere T: Add<Duration, Output = T> + Sub<Duration, Output = T> + Timelike, #### fn round_subsecs(self, digits: u16) -> T Return a copy rounded to the specified number of subsecond digits. With 9 or more digits, self is returned unmodified. Halfway values are rounded up (away from zero). Return a copy truncated to the specified number of subsecond digits. With 9 or more digits, self is returned unmodified. T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. 
Read more Struct rocket_launch_live::PadParamsBuilder === ``` pub struct PadParamsBuilder<'a> { /* private fields */ } ``` Builder to generate the API parameters to filter calls to the pads endpoint. Implementations --- ### impl<'a> PadParamsBuilder<'a#### pub fn new() -> Self Create a new builder for the pad parameters. #### pub fn id(&mut self, id: i64) -> &mut Self Set the pad id parameter. #### pub fn name(&mut self, name: &'a str) -> &mut Self Set the pad name parameter. #### pub fn state_abbr(&mut self, state_abbr: &'a str) -> &mut Self Set the pad state_abbr parameter. #### pub fn country_code(&mut self, country_code: &'a str) -> &mut Self Set the pad country_code parameter. #### pub fn page(&mut self, page: i64) -> &mut Self Set the pad page parameter. #### pub fn build(&self) -> Params Build the low level pad parameters from all the set parameters. Trait Implementations --- ### impl<'a> Default for PadParamsBuilder<'a#### fn default() -> PadParamsBuilder<'aReturns the “default value” for a type. Read moreAuto Trait Implementations --- ### impl<'a> RefUnwindSafe for PadParamsBuilder<'a### impl<'a> Send for PadParamsBuilder<'a### impl<'a> Sync for PadParamsBuilder<'a### impl<'a> Unpin for PadParamsBuilder<'a### impl<'a> UnwindSafe for PadParamsBuilder<'aBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. 
### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. Read more Struct rocket_launch_live::Params === ``` pub struct Params(/* private fields */); ``` Low level text representation of the API parameters sent to the server. Trait Implementations --- ### impl Debug for Params #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn default() -> Params Returns the “default value” for a type. Read moreAuto Trait Implementations --- ### impl RefUnwindSafe for Params ### impl Send for Params ### impl Sync for Params ### impl Unpin for Params ### impl UnwindSafe for Params Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. 
### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. Read more Struct rocket_launch_live::TagParamsBuilder === ``` pub struct TagParamsBuilder<'a> { /* private fields */ } ``` Builder to generate the API parameters to filter calls to the tags endpoint. Implementations --- ### impl<'a> TagParamsBuilder<'a#### pub fn new() -> Self Create a new builder for the tag parameters. #### pub fn id(&mut self, id: i64) -> &mut Self Set the tag id parameter. #### pub fn text(&mut self, text: &'a str) -> &mut Self Set the tag text parameter. #### pub fn page(&mut self, page: i64) -> &mut Self Set the tag page parameter. #### pub fn build(&self) -> Params Build the low level tag parameters from all the set parameters. Trait Implementations --- ### impl<'a> Default for TagParamsBuilder<'a#### fn default() -> TagParamsBuilder<'aReturns the “default value” for a type. Read moreAuto Trait Implementations --- ### impl<'a> RefUnwindSafe for TagParamsBuilder<'a### impl<'a> Send for TagParamsBuilder<'a### impl<'a> Sync for TagParamsBuilder<'a### impl<'a> Unpin for TagParamsBuilder<'a### impl<'a> UnwindSafe for TagParamsBuilder<'aBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. 
T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. Read more Struct rocket_launch_live::VehicleParamsBuilder === ``` pub struct VehicleParamsBuilder<'a> { /* private fields */ } ``` Builder to generate the API parameters to filter calls to the vehicles endpoint. Implementations --- ### impl<'a> VehicleParamsBuilder<'a#### pub fn new() -> Self Create a new builder for the vehicle parameters. #### pub fn id(&mut self, id: i64) -> &mut Self Set the vehicle id parameter. #### pub fn name(&mut self, name: &'a str) -> &mut Self Set the vehicle name parameter. #### pub fn page(&mut self, page: i64) -> &mut Self Set the vehicle page parameter. #### pub fn build(&self) -> Params Build the low level vehicle parameters from all the set parameters. 
Trait Implementations --- ### impl<'a> Default for VehicleParamsBuilder<'a#### fn default() -> VehicleParamsBuilder<'aReturns the “default value” for a type. Read moreAuto Trait Implementations --- ### impl<'a> RefUnwindSafe for VehicleParamsBuilder<'a### impl<'a> Send for VehicleParamsBuilder<'a### impl<'a> Sync for VehicleParamsBuilder<'a### impl<'a> Unpin for VehicleParamsBuilder<'a### impl<'a> UnwindSafe for VehicleParamsBuilder<'aBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. Read more Enum rocket_launch_live::Direction === ``` pub enum Direction { Ascending, Descending, } ``` Represents the sorting order of results (ascending or descending). 
Variants --- ### Ascending ### Descending Auto Trait Implementations --- ### impl RefUnwindSafe for Direction ### impl Send for Direction ### impl Sync for Direction ### impl Unpin for Direction ### impl UnwindSafe for Direction Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. `WithDispatch` wrapper. Read more
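The `*ParamsBuilder` types above all follow the same chainable builder pattern: each setter records a value and returns the builder so calls can be strung together, and `build` renders the accumulated settings into a low-level `Params` value. The crate itself is Rust; the following is only a language-agnostic sketch of that pattern in Python, with hypothetical names and a simple query-string `build`, not the crate's actual API.

```python
class ParamsBuilder:
    """Illustrative chainable builder in the spirit of the *ParamsBuilder
    types (hypothetical names; the real crate is Rust and typed per endpoint)."""

    def __init__(self):
        self._params = {}

    def id(self, id_):
        self._params["id"] = id_
        return self  # returning self is what makes calls chainable

    def name(self, name):
        self._params["name"] = name
        return self

    def page(self, page):
        self._params["page"] = page
        return self

    def build(self):
        """Render the collected settings as a query string, like Params."""
        return "&".join(f"{k}={v}" for k, v in sorted(self._params.items()))
```

Usage mirrors the Rust builders: `ParamsBuilder().name("LC-39A").page(2).build()` produces `"name=LC-39A&page=2"`.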
Package ‘MultiRNG’ October 12, 2022

Type Package
Title Multivariate Pseudo-Random Number Generation
Version 1.2.4
Date 2021-03-05
Author <NAME>, <NAME>, <NAME>
Maintainer <NAME> <<EMAIL>>
Description Pseudo-random number generation for 11 multivariate distributions: Normal, t, Uniform, Bernoulli, Hypergeometric, Beta (Dirichlet), Multinomial, Dirichlet-Multinomial, Laplace, Wishart, and Inverted Wishart. The details of the method are explained in Demirtas (2004) <DOI:10.22237/jmasm/1099268340>.
License GPL-2 | GPL-3
NeedsCompilation no
Repository CRAN
Date/Publication 2021-03-05 18:10:05 UTC

R topics documented: MultiRNG-package, draw.correlated.binary, draw.d.variate.normal, draw.d.variate.t, draw.d.variate.uniform, draw.dirichlet, draw.dirichlet.multinomial, draw.inv.wishart, draw.multinomial, draw.multivariate.hypergeometric, draw.multivariate.laplace, draw.wishart, generate.point.in.sphere, loc.min

MultiRNG-package Multivariate Pseudo-Random Number Generation

Description

This package implements the algorithms described in Demirtas (2004) for pseudo-random number generation of 11 multivariate distributions. The following multivariate distributions are available: Normal, t, Uniform, Bernoulli, Hypergeometric, Beta (Dirichlet), Multinomial, Dirichlet-Multinomial, Laplace, Wishart, and Inverted Wishart.

This package contains 11 main functions and 2 auxiliary functions. The methodology for each random-number generation procedure varies and each distribution has its own function. For multivariate normal, draw.d.variate.normal employs the Cholesky decomposition and a vector of univariate normal draws, and for multivariate t, draw.d.variate.t employs the Cholesky decomposition and a vector of univariate normal and chi-squared draws.
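The Cholesky construction just described (a lower-triangular factor L of the covariance applied to iid standard normal draws) is easy to sketch outside R. The following is an illustrative stdlib-only Python sketch of that idea, not the package's R code; the function names are my own.

```python
import math
import random

def cholesky(a):
    """Lower-triangular Cholesky factor L of a symmetric positive definite
    matrix a (given as a list of lists), so that L L^T = a."""
    d = len(a)
    L = [[0.0] * d for _ in range(d)]
    for i in range(d):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def draw_mvn(n, mean, cov, rng=random):
    """Draw n rows from N(mean, cov) as mean + L z, z ~ iid standard normal."""
    d = len(mean)
    L = cholesky(cov)
    rows = []
    for _ in range(n):
        z = [rng.gauss(0.0, 1.0) for _ in range(d)]
        rows.append([mean[i] + sum(L[i][k] * z[k] for k in range(i + 1))
                     for i in range(d)])
    return rows
```

With a large n, column means of the output approach the requested mean vector, mirroring the `apply(mydata,2,mean)-meanvec` checks in the package's examples.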
draw.d.variate.uniform is based on the cdf of multivariate normal deviates (Falk, 1999), and draw.correlated.binary generates correlated binary variables using an algorithm developed by Park, Park and Shin (1996) and makes use of the auxiliary function loc.min. draw.multivariate.hypergeometric employs sequential generation of succeeding conditionals, which are univariate hypergeometric. Furthermore, draw.dirichlet uses the ratios of gamma variates with a common scale parameter, and draw.multinomial generates data via sequential generation of marginals, which are binomials. draw.dirichlet.multinomial is a mixture distribution of a multinomial that is a realization of a random variable having a Dirichlet distribution. draw.multivariate.laplace is based on generation of a point s on the d-dimensional sphere and utilizes the auxiliary function generate.point.in.sphere. draw.wishart and draw.inv.wishart employ Wishart variates built from d-variate normal draws.

Details

Package: MultiRNG
Type: Package
Version: 1.2.4
Date: 2021-03-05
License: GPL-2 | GPL-3

Author(s)

<NAME>, <NAME>, <NAME>

Maintainer: <NAME> <<EMAIL>>

References

Demirtas, H. (2004). Pseudo-random number generation in R for commonly used multivariate distributions. Journal of Modern Applied Statistical Methods, 3(2), 485-497.

<NAME>. (1999). A simple approach to the generation of uniformly distributed random variables with prescribed correlations. Communications in Statistics, Simulation and Computation, 28(3), 785-791.

<NAME>., <NAME>., & <NAME>. (1996). A simple method for generating correlated binary variates. The American Statistician, 50(4), 306-310.

draw.correlated.binary Generation of Correlated Binary Data

Description

This function implements pseudo-random number generation for a multivariate Bernoulli distribution (correlated binary data).

Usage

draw.correlated.binary(no.row,d,prop.vec,corr.mat)

Arguments

no.row Number of rows to generate.
d Number of variables to generate.
prop.vec Vector of means.
corr.mat Correlation matrix.

Value

A no.row × d matrix of generated data.

References

<NAME>., <NAME>., & <NAME>. (1996). A simple method for generating correlated binary variates. The American Statistician, 50(4), 306-310.

See Also

loc.min

Examples

cmat<-matrix(c(1,0.2,0.3,0.2,1,0.2,0.3,0.2,1), nrow=3, ncol=3)
propvec=c(0.3,0.5,0.7)
mydata=draw.correlated.binary(no.row=1e5,d=3,prop.vec=propvec,corr.mat=cmat)
apply(mydata,2,mean)-propvec
cor(mydata)-cmat

draw.d.variate.normal Pseudo-Random Number Generation under Multivariate Normal Distribution

Description

This function implements pseudo-random number generation for a multivariate normal distribution with pdf

f(x|\mu, \Sigma) = c \exp\left(-\tfrac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu)\right)

for -\infty < x < \infty and c = (2\pi)^{-d/2} |\Sigma|^{-1/2}, where \Sigma is symmetric and positive definite, and \mu and \Sigma are the mean vector and the variance-covariance matrix, respectively.

Usage

draw.d.variate.normal(no.row,d,mean.vec,cov.mat)

Arguments

no.row Number of rows to generate.
d Number of variables to generate.
mean.vec Vector of means.
cov.mat Variance-covariance matrix.

Value

A no.row × d matrix of generated data.

Examples

cmat<-matrix(c(1,0.2,0.3,0.2,1,0.2,0.3,0.2,1), nrow=3, ncol=3)
meanvec=c(0,3,7)
mydata=draw.d.variate.normal(no.row=1e5,d=3,mean.vec=meanvec,cov.mat=cmat)
apply(mydata,2,mean)-meanvec
cor(mydata)-cmat

draw.d.variate.t Pseudo-Random Number Generation under Multivariate t Distribution

Description

This function implements pseudo-random number generation for a multivariate t distribution with pdf

f(x|\mu, \Sigma, \nu) = c \left[1 + \frac{(x-\mu)^T \Sigma^{-1} (x-\mu)}{\nu}\right]^{-(\nu+d)/2}

for -\infty < x < \infty and c = \frac{\Gamma((\nu+d)/2)}{\Gamma(\nu/2)(\nu\pi)^{d/2}} |\Sigma|^{-1/2}, where \Sigma is symmetric and positive definite and \nu > 0; \mu, \Sigma, and \nu are the mean vector, the variance-covariance matrix, and the degrees of freedom, respectively.

Usage

draw.d.variate.t(dof,no.row,d,mean.vec,cov.mat)

Arguments

dof Degrees of freedom.
no.row Number of rows to generate.
d Number of variables to generate.
mean.vec Vector of means.
cov.mat Variance-covariance matrix.

Value

A no.row × d matrix of generated data.

Examples

cmat<-matrix(c(1,0.2,0.3,0.2,1,0.2,0.3,0.2,1), nrow=3, ncol=3)
meanvec=c(0,3,7)
mydata=draw.d.variate.t(dof=5,no.row=1e5,d=3,mean.vec=meanvec,cov.mat=cmat)
apply(mydata,2,mean)-meanvec
cor(mydata)-cmat

draw.d.variate.uniform Pseudo-Random Number Generation under Multivariate Uniform Distribution

Description

This function implements pseudo-random number generation for a multivariate uniform distribution with specified mean vector and covariance matrix.

Usage

draw.d.variate.uniform(no.row,d,cov.mat)

Arguments

no.row Number of rows to generate.
d Number of variables to generate.
cov.mat Variance-covariance matrix.

Value

A no.row × d matrix of generated data.

References

<NAME>. (1999). A simple approach to the generation of uniformly distributed random variables with prescribed correlations. Communications in Statistics, Simulation and Computation, 28(3), 785-791.

Examples

cmat<-matrix(c(1,0.2,0.3,0.2,1,0.2,0.3,0.2,1), nrow=3, ncol=3)
mydata=draw.d.variate.uniform(no.row=1e5,d=3,cov.mat=cmat)
apply(mydata,2,mean)-rep(0.5,3)
cor(mydata)-cmat

draw.dirichlet Pseudo-Random Number Generation under Multivariate Beta (Dirichlet) Distribution

Description

This function implements pseudo-random number generation for a multivariate beta (Dirichlet) distribution with pdf

f(x|\alpha_1, \dots, \alpha_d) = \frac{\Gamma(\sum_{j=1}^{d} \alpha_j)}{\prod_{j=1}^{d} \Gamma(\alpha_j)} \prod_{j=1}^{d} x_j^{\alpha_j - 1}

for \alpha_j > 0, x_j \ge 0, and \sum_{j=1}^{d} x_j = 1, where \alpha_1, \dots, \alpha_d are the shape parameters and \beta is a common scale parameter.

Usage

draw.dirichlet(no.row,d,alpha,beta)

Arguments

no.row Number of rows to generate.
d Number of variables to generate.
alpha Vector of shape parameters.
beta Scale parameter common to d variables.

Value

A no.row × d matrix of generated data.
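The package overview says draw.dirichlet uses ratios of gamma variates with a common scale parameter: draw y_j ~ Gamma(α_j, β) and normalize by the sum, so the scale β cancels. A stdlib-only Python sketch of that construction (illustrative, not the package's R code):

```python
import random

def draw_dirichlet(n, alpha, beta=1.0, rng=random):
    """Dirichlet draws via y_j ~ Gamma(alpha_j, scale=beta),
    x_j = y_j / sum(y); the common scale beta cancels in the ratio."""
    rows = []
    for _ in range(n):
        y = [rng.gammavariate(a, beta) for a in alpha]
        total = sum(y)
        rows.append([v / total for v in y])
    return rows
```

Each row sums to 1 and, over many draws, column means approach α_j / Σα, matching the `apply(mydata,2,mean)-alpha.vec/sum(alpha.vec)` check in the package's example.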
Examples

alpha.vec=c(1,3,4,4)
mydata=draw.dirichlet(no.row=1e5,d=4,alpha=alpha.vec,beta=2)
apply(mydata,2,mean)-alpha.vec/sum(alpha.vec)

draw.dirichlet.multinomial Pseudo-Random Number Generation under Dirichlet-Multinomial Distribution

Description

This function implements pseudo-random number generation for a Dirichlet-multinomial distribution. This is a mixture distribution that is multinomial with parameter θ that is a realization of a random variable having a Dirichlet distribution with shape vector α. N is the sample size and β is a common scale parameter.

Usage

draw.dirichlet.multinomial(no.row,d,alpha,beta,N)

Arguments

no.row Number of rows to generate.
d Number of variables to generate.
alpha Vector of shape parameters.
beta Scale parameter common to d variables.
N Sample size.

Value

A no.row × d matrix of generated data.

See Also

draw.dirichlet, draw.multinomial

Examples

alpha.vec=c(1,3,4,4) ; N=3
mydata=draw.dirichlet.multinomial(no.row=1e5,d=4,alpha=alpha.vec,beta=2, N=3)
apply(mydata,2,mean)-N*alpha.vec/sum(alpha.vec)

draw.inv.wishart Pseudo-Random Number Generation under Inverted Wishart Distribution

Description

This function implements pseudo-random number generation for an inverted Wishart distribution with pdf

f(x|\nu, \Sigma) = \left(2^{\nu d/2} \pi^{d(d-1)/4} \prod_{i=1}^{d} \Gamma((\nu+1-i)/2)\right)^{-1} |\Sigma|^{\nu/2} |x|^{-(\nu+d+1)/2} \exp\left(-\tfrac{1}{2}\,\mathrm{tr}(\Sigma x^{-1})\right)

where x is positive definite, \nu \ge d, and \Sigma^{-1} is symmetric and positive definite; \nu and \Sigma^{-1} are the degrees of freedom and the inverse scale matrix, respectively.

Usage

draw.inv.wishart(no.row,d,nu,inv.sigma)

Arguments

no.row Number of rows to generate.
d Number of variables to generate.
nu Degrees of freedom.
inv.sigma Inverse scale matrix.

Value

A no.row × d² matrix containing Wishart deviates in the form of rows. To obtain the inverted Wishart matrix, convert each row to a matrix where rows are filled first.
See Also

draw.wishart

Examples

mymat<-matrix(c(1,0.2,0.3,0.2,1,0.2,0.3,0.2,1), nrow=3, ncol=3)
draw.inv.wishart(no.row=1e5,d=3,nu=5,inv.sigma=mymat)

draw.multinomial Pseudo-Random Number Generation under Multivariate Multinomial Distribution

Description

This function implements pseudo-random number generation for a multivariate multinomial distribution with pdf

f(x|\theta_1, \dots, \theta_d) = \frac{N!}{\prod_{j=1}^{d} x_j!} \prod_{j=1}^{d} \theta_j^{x_j}

for 0 < \theta_j < 1, x_j \ge 0, and \sum_{j=1}^{d} x_j = N, where \theta_1, \dots, \theta_d are cell probabilities and N is the size.

Usage

draw.multinomial(no.row,d,theta,N)

Arguments

no.row Number of rows to generate.
d Number of variables to generate.
theta Vector of cell probabilities.
N Sample size. Must be at least 2.

Value

A no.row × d matrix of generated data.

Examples

theta.vec=c(0.3,0.3,0.25,0.15) ; N=4
mydata=draw.multinomial(no.row=1e5,d=4,theta=c(0.3,0.3,0.25,0.15),N=4)
apply(mydata,2,mean)-N*theta.vec

draw.multivariate.hypergeometric Pseudo-Random Number Generation under Multivariate Hypergeometric Distribution

Description

This function implements pseudo-random number generation for a multivariate hypergeometric distribution.

Usage

draw.multivariate.hypergeometric(no.row,d,mean.vec,k)

Arguments

no.row Number of rows to generate.
d Number of variables to generate.
mean.vec Number of items in each category.
k Number of items to be sampled. Must be a positive integer.

Value

A no.row × d matrix of generated data.

References

<NAME>. (2004). Pseudo-random number generation in R for commonly used multivariate distributions. Journal of Modern Applied Statistical Methods, 3(2), 485-497.
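The package overview describes draw.multinomial as sequential generation of binomial marginals: X_1 ~ Bin(N, θ_1), then each later cell is binomial on the remaining count with a renormalized probability. A stdlib-only Python sketch of that scheme (illustrative, not the package's R code; the small helper builds a binomial from Bernoulli trials since the stdlib has none):

```python
import random

def binomial(n, p, rng=random):
    """Binomial(n, p) by summing n Bernoulli trials (fine for small n)."""
    return sum(rng.random() < p for _ in range(n))

def draw_multinomial(no_row, theta, N, rng=random):
    """Multinomial(N, theta) via X_1 ~ Bin(N, theta_1), then
    X_j | X_1..X_{j-1} ~ Bin(N - sum so far, theta_j / remaining mass)."""
    rows = []
    for _ in range(no_row):
        remaining_n, remaining_p = N, 1.0
        x = []
        for t in theta[:-1]:
            k = binomial(remaining_n, t / remaining_p, rng) if remaining_n > 0 else 0
            x.append(k)
            remaining_n -= k
            remaining_p -= t
        x.append(remaining_n)  # last cell takes whatever count is left
        rows.append(x)
    return rows
```

Every row sums to N, and column means approach N·θ_j, which is exactly the `apply(mydata,2,mean)-N*theta.vec` check in the package's example.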
Examples

meanvec=c(10,10,12) ; myk=5
mydata=draw.multivariate.hypergeometric(no.row=1e5,d=3,mean.vec=meanvec,k=myk)
apply(mydata,2,mean)-myk*meanvec/sum(meanvec)

draw.multivariate.laplace Pseudo-Random Number Generation under Multivariate Laplace Distribution

Description

This function implements pseudo-random number generation for a multivariate Laplace (double exponential) distribution with pdf

f(x|\mu, \Sigma, \gamma) = c \exp\left(-\left[(x-\mu)^T \Sigma^{-1} (x-\mu)\right]^{\gamma/2}\right)

for -\infty < x < \infty and c = \frac{\gamma\,\Gamma(d/2)}{2\pi^{d/2}\,\Gamma(d/\gamma)} |\Sigma|^{-1/2}, where \Sigma is symmetric and positive definite; \mu, \Sigma, and \gamma are the mean vector, the variance-covariance matrix, and the shape parameter, respectively.

Usage

draw.multivariate.laplace(no.row,d,gamma,mu,Sigma)

Arguments

no.row Number of rows to generate.
d Number of variables to generate.
gamma Shape parameter.
mu Vector of means.
Sigma Variance-covariance matrix.

Value

A no.row × d matrix of generated data.

References

<NAME>. (1998). A multivariate generalized Laplace distribution. Computational Statistics, 13, 227-232.

See Also

generate.point.in.sphere

Examples

cmat<-matrix(c(1,0.2,0.3,0.2,1,0.2,0.3,0.2,1), nrow=3, ncol=3)
mu.vec=c(0,3,7)
mydata=draw.multivariate.laplace(no.row=1e5,d=3,gamma=2,mu=mu.vec,Sigma=cmat)
apply(mydata,2,mean)-mu.vec
cor(mydata)-cmat

draw.wishart Pseudo-Random Number Generation under Wishart Distribution

Description

This function implements pseudo-random number generation for a Wishart distribution with pdf

f(x|\nu, \Sigma) = \left(2^{\nu d/2} \pi^{d(d-1)/4} \prod_{i=1}^{d} \Gamma((\nu+1-i)/2)\right)^{-1} |\Sigma|^{-\nu/2} |x|^{(\nu-d-1)/2} \exp\left(-\tfrac{1}{2}\,\mathrm{tr}(\Sigma^{-1} x)\right)

where x is positive definite, \nu \ge d, and \Sigma is symmetric and positive definite; \nu and \Sigma are the degrees of freedom and the scale matrix, respectively.

Usage

draw.wishart(no.row,d,nu,sigma)

Arguments

no.row Number of rows to generate.
d Number of variables to generate.
nu Degrees of freedom.
sigma Scale matrix.
Value
A no.row × d² matrix of Wishart deviates, one deviate per row. To obtain the Wishart matrix, convert each row to a d × d matrix, filling rows first.

See Also
draw.d.variate.normal

Examples
mymat<-matrix(c(1,0.2,0.3,0.2,1,0.2,0.3,0.2,1), nrow=3, ncol=3)
draw.wishart(no.row=1e5,d=3,nu=5,sigma=mymat)

generate.point.in.sphere        Point Generation for a Sphere

Description
This function generates no.row points on a d-dimensional sphere.

Usage
generate.point.in.sphere(no.row,d)

Arguments
no.row   Number of rows to generate.
d        Number of variables to generate.

Value
A no.row × d matrix of coordinates of points on the sphere.

References
<NAME>. (1972). Choosing a point from the surface of a sphere. Annals of Mathematical Statistics, 43, 645-646.

Examples
generate.point.in.sphere(no.row=1e5,d=3)

loc.min        Minimum Location Finder

Description
This function identifies the location of the minimum value in a square matrix.

Usage
loc.min(my.mat,d)

Arguments
my.mat   A square matrix.
d        Dimension of the matrix.

Value
A vector containing the row and column number of the minimum value.

Examples
cmat<-matrix(c(1,0.2,0.3,0.2,1,0.2,0.3,0.2,1), nrow=3, ncol=3)
loc.min(my.mat=cmat, d=3)
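The row-to-matrix conversion described under draw.wishart's Value ("rows are filled first") can be sketched in base R with matrix(..., byrow = TRUE). The row1 vector below is a hypothetical stand-in for one row of draw.wishart output, so the sketch runs without the package:

```r
d <- 3
# Hypothetical stand-in for one row of draw.wishart output (d^2 values,
# rows filled first); with the package it would be draw.wishart(...)[1, ]
row1 <- c(5.1, 0.8, 1.2,
          0.8, 4.7, 0.9,
          1.2, 0.9, 5.3)
W <- matrix(row1, nrow = d, ncol = d, byrow = TRUE)  # fill rows first
isSymmetric(W)  # TRUE: a valid Wishart deviate is symmetric
```

Forgetting byrow = TRUE would transpose the matrix; that happens to be harmless here only because Wishart deviates are symmetric.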
Package ‘rt3’        October 14, 2022

Type Package
Title Tic-Tac-Toe Package for R
Version 0.1.2
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Play the classic game of tic-tac-toe (noughts and crosses).
License MIT + file LICENSE
Encoding UTF-8
LazyData true
RoxygenNote 5.0.1
Suggests testthat
NeedsCompilation no
Repository CRAN
Date/Publication 2016-12-05 23:43:08

R topics documented:
EMPTY
firstAvailableMovePlayer
gameState
getMoves
makeMove
NONE
O
playGame
randomMovePlayer
rt3
startGame
X

EMPTY        Constant for the empty square.

Description
Its value is the character "_".

Usage
EMPTY

Format
An object of class character of length 1.

firstAvailableMovePlayer        Player that always takes the first move in the list of valid moves.

Description
Internally this player calls getMoves and then picks the first entry in the list of moves. A player is a function that takes a game state as input and returns a valid move index.

Usage
firstAvailableMovePlayer(gameState)

Arguments
gameState   The gameState that the player should act on.

Value
moveIndex   Index to a valid move as returned by the getMoves function.

Examples
gameState <- startGame()
move <- firstAvailableMovePlayer(gameState)

gameState        The game state is represented by a list of 8 values.

Description
board   The board's state, represented by a list of X's, O's and EMPTY's. It is initially filled with EMPTY's.
currentPlayer   The player who needs to make the next move. This is either X or O.
startingPlayer   The player who made the first move in this game state. This is either X or O.
moves   The list of moves made by players to get to this game state. It is initially filled with 0's.
movesP   The player turn list. It contains a list of alternating X's and O's.
numMoves   Number of moves made to get to this game state.
isDone   This indicates whether this is a final game state. It is final if either X or O has won, or if no moves remain (a draw, in which case the winner is NONE).
winner   If there is a winner in this game state, the value is either X or O. If the game state is a draw or the game is not finished, the value is NONE.

Usage
gameState

Format
An object of class list of length 8.

getMoves        Get the list of valid moves from the game state.

Description
Get the list of valid moves from the game state.

Usage
getMoves(gameState)

Arguments
gameState   The gameState for which moves must be calculated.

Value
validMoves   An array (["integer"]) of valid moves based on the provided game state.

Examples
gameState <- startGame()
validMoves <- getMoves(gameState)

makeMove        Apply the move to the current game state and produce a new game state.

Description
Apply the move to the current game state and produce a new game state.

Usage
makeMove(gameState, move)

Arguments
gameState   The gameState to apply the move to.
move        The move to be applied to the game state.

Value
gameState   The game state after applying the move.

Examples
gameState <- startGame()
gameState <- makeMove(gameState,1)

NONE        Constant for no winner.

Description
Its value is the character "_".

Usage
NONE

Format
An object of class character of length 1.

O        Constant for the O player.

Description
Its value is the character "O".

Usage
O

Format
An object of class character of length 1.

playGame        Play a game of Tic-Tac-Toe using the two provided strategies.

Description
Play a game of Tic-Tac-Toe using the two provided strategies.

Usage
playGame(px, po)

Arguments
px   The X player strategy.
po   The O player strategy.

Value
gameState   The final gameState after playing a full game.

Examples
px <- firstAvailableMovePlayer
po <- randomMovePlayer
finalGameState <- playGame(px,po)

randomMovePlayer        Player that picks a random move

Description
Internally this player calls getMoves and then picks an entry in the list of moves at random. A player is a function that takes a game state as input and returns a valid move index.
Usage
randomMovePlayer(gameState)

Arguments
gameState   The gameState that the player should act on.

Value
moveIndex   Index to a valid move as returned by the getMoves function.

Examples
gameState <- startGame()
move <- randomMovePlayer(gameState)

rt3        rt3: A Package for Playing Tic-Tac-Toe in R.

Description
The rt3 package provides functions to allow a user to simulate tic-tac-toe games. It provides a convenient gameState object as well as a simple interface for developing new types of players.

Main Function
playGame   Play a game of tic-tac-toe.

Structures
gameState   A tic-tac-toe game state.

Constants
X       The X player.
O       The O player.
EMPTY   The EMPTY constant. Used to indicate an empty board position.
NONE    The NONE constant. Used to indicate a draw.

Support Functions
These functions are used by the playGame function. They will also be useful in building game decision trees for more complex players.
startGame   Create a new tic-tac-toe game state.
getMoves    Get the current set of valid moves for a given game state.
makeMove    Apply a move to the given game state and return the resulting game state.

Built-In Player Functions
randomMovePlayer           A player that plays random valid moves.
firstAvailableMovePlayer   A player that always plays the first move available.

References
https://en.wikipedia.org/wiki/Tic-tac-toe

startGame        Start a new game

Description
This function starts a new game. It randomly assigns a starting player and returns a new game state object.

Usage
startGame()

Value
gameState   A new gameState.

Examples
gameState <- startGame()

X        Constant for the X player.

Description
Its value is the character "X".

Usage
X

Format
An object of class character of length 1.
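Putting the pieces together, here is a sketch of how the built-in players and playGame might be combined to tally outcomes over many simulated games (this assumes the rt3 package is installed; the winner element and the "_" draw marker are as documented above):

```r
library(rt3)
set.seed(42)
# Play 200 games of the random player (as X) against the
# first-available-move player (as O) and tally the winners;
# "_" entries are draws (the NONE constant).
winners <- replicate(200, playGame(randomMovePlayer,
                                   firstAvailableMovePlayer)$winner)
table(winners)
```

Because startGame randomly assigns the starting player, neither strategy gets a systematic first-move advantage across the replicates.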
Package ‘titeIR’        October 14, 2022

Type Package
Title Isotonic Designs for Phase 1 Trials with Late-Onset Toxicities
Version 0.1.0
Maintainer <NAME> <<EMAIL>>
Date 2018-09-17
Description Functions to design phase 1 trials using an isotonic regression based design incorporating time-to-event information. Simulation and design functions are available, which incorporate information about followup and DLTs, and apply isotonic regression to devise estimates of DLT probability.
License GPL-3
Imports Iso
Encoding UTF-8
LazyData true
RoxygenNote 6.1.0
NeedsCompilation no
Author <NAME> [aut, cre]
Repository CRAN
Date/Publication 2018-09-28 18:10:03 UTC

R topics documented:
isotitedose
isotitesim

isotitedose        Dose assignment for TITE-IR designs

Description
Calculate the next dose assignment for a TITE-IR design.

Usage
isotitedose(followup, DLT, assignment, obswin, doses, target = 1/3, safety = 0.05)

Arguments
followup     A vector of followup times.
DLT          A vector of DLT results. FALSE or 0 is interpreted as no observed DLT and TRUE or 1 is interpreted as observed DLT.
assignment   A vector of dose assignments. Doses should be labeled in consecutive integers from 1 to the number of dose levels.
obswin       The observation window with respect to which the MTD is defined.
doses        An integer providing the number of doses.
target       Target DLT rate.
safety       The safety factor to prevent overly aggressive escalation.

Value
An integer specifying the recommended dose level.

See Also
isotitesim for simulations

Examples
isotitedose(followup = c(6, 5, 4, 3, 2, 1), DLT = c(0, 0, 0, 0, 0, 0),
            assignment = c(1, 1, 1, 2, 2, 2), obswin = 6, doses = 6)

isotitesim        Simulate TITE-IR designs

Description
Simulates trials based on the TITE-IR design.
Usage
isotitesim(PI, target, n, nsim, obswin = 1, rate = 1, safety = 0.05,
           accrual = "poisson", restrict = TRUE)

Arguments
PI         A vector of true toxicity probabilities at each dose.
target     Target DLT rate.
n          Sample size of the trial.
nsim       Number of trial replicates.
obswin     The observation window with respect to which the MTD is defined.
rate       Patient arrival rate: expected number of arrivals per observation window.
safety     The safety factor to prevent overly aggressive escalation.
accrual    Specify the accrual distribution. Can be either "poisson" or "fixed". Partial strings are also acceptable.
restrict   If TRUE, do not allow escalation immediately after a toxic outcome (require coherent escalation).

Value
Object of type isotite which provides results from TITE-IR simulations.

See Also
isotitedose for dose recommendation

Examples
isotitesim(PI = c(0.05, 0.10, 0.20, 0.30, 0.50, 0.70),
           target = 1/3, n = 24, nsim = 10, obswin = 6, rate = 12)
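As a usage sketch of isotitedose at an interim look (this assumes the titeIR package is installed; the arguments are exactly those documented above, but the specific followup, DLT, and assignment values are invented for illustration):

```r
library(titeIR)
# Interim dose recommendation: the first cohort has completed the 6-unit
# observation window; the second cohort is still in follow-up and one
# patient at dose 2 has experienced a DLT.
next_dose <- isotitedose(followup   = c(6, 6, 6, 4, 3, 1),
                         DLT        = c(0, 0, 0, 1, 0, 0),
                         assignment = c(1, 1, 1, 2, 2, 2),
                         obswin = 6, doses = 6)
next_dose  # an integer dose level between 1 and 6
```

The design weights each patient's contribution by observed follow-up, so the partially followed cohort still informs the isotonic DLT-probability estimates.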
Package ‘SIMPLE.REGRESSION’        February 24, 2023

Type Package
Title Multiple Regression and Moderated Regression Made Simple
Version 0.1.6
Date 2023-02-24
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Provides SPSS- and SAS-like output for least squares multiple regression and moderated regression, as well as interaction plots and Johnson-Neyman regions of significance for interactions. The output includes standardized coefficients, partial and semi-partial correlations, collinearity diagnostics, plots of residuals, and detailed information about simple slopes for interactions. There are numerous options for designing interaction plots, including plots of interactions for both lm and lme models.
Imports graphics, stats, utils, nlme
LazyLoad yes
LazyData yes
License GPL (>= 2)
NeedsCompilation no
Repository CRAN
Date/Publication 2023-02-24 10:00:02 UTC

R topics documented:
SIMPLE.REGRESSION-package
data_Bauer_Curran_2005
data_Bodner_2016
data_Chapman_Little_2016
data_Cohen_Aiken_West_2003_7
data_Cohen_Aiken_West_2003_9
data_Green_Salkind_2014
data_Huitema_2011
data_Lorah_Wong_2018
data_OConnor_Dvorak_2001
data_Pedhazur_1997
PARTIAL_COEFS
REGIONS_OF_SIGNIFICANCE
SIMPLE.REGRESSION

SIMPLE.REGRESSION-package        SIMPLE.REGRESSION

Description
Provides SPSS- and SAS-like output for least squares multiple regression and moderated regression, as well as interaction plots and Johnson-Neyman regions of significance for interactions. The output includes standardized coefficients, partial and semi-partial correlations, collinearity diagnostics, plots of residuals, and detailed information about simple slopes for interactions. There are numerous options for designing interaction plots.
The REGIONS_OF_SIGNIFICANCE function also provides Johnson-Neyman regions of significance and plots of interactions for both lm and lme models (lme models are from the nlme package).

References
<NAME>., & <NAME>. (2005).
Probing interactions in fixed and multilevel regression: Inferential and graphical techniques. Multivariate Behavioral Research, 40(3), 373-400.
<NAME>., <NAME>., <NAME>., & <NAME>. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Lawrence Erlbaum Associates Publishers.
<NAME>., & <NAME>. (2017). Regression analysis and linear models: Concepts, applications, and implementation. New York: The Guilford Press.
<NAME>. (2018a). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (2nd ed.). New York, NY: Guilford Press.
Huitema, B. (2011). The analysis of covariance and alternatives: Statistical methods for experiments, quasi-experiments, and single-case studies. Hoboken, NJ: Wiley.
<NAME>., & <NAME>. (1950). The Johnson-Neyman technique, its theory, and application. Psychometrika, 15, 349-367.
<NAME>. & <NAME>. (2018). Contemporary applications of moderation analysis in counseling psychology. Journal of Counseling Psychology, 65(5), 629-640.
<NAME>. (1998). All-in-one programs for exploring interactions in moderated multiple regression. Educational and Psychological Measurement, 58, 833-837.
<NAME>. (1997). Multiple regression in behavioral research: Explanation and prediction (3rd ed.). Fort Worth, Texas: Wadsworth Thomson Learning.

data_Bauer_Curran_2005        data_Bauer_Curran_2005

Description
Multilevel moderated regression data from Bauer and Curran (2005).

Usage
data(data_Bauer_Curran_2005)

Source
<NAME>., & <NAME>. (2005). Probing interactions in fixed and multilevel regression: Inferential and graphical techniques. Multivariate Behavioral Research, 40(3), 373-400.
Examples
head(data_Bauer_Curran_2005)
HSBmod <-nlme::lme(MathAch ~ Sector + CSES + CSES:Sector,
                   data = data_Bauer_Curran_2005,
                   random = ~1 + CSES|School, method = "ML")
summary(HSBmod)
REGIONS_OF_SIGNIFICANCE(model=HSBmod, IV_range=NULL, MOD_range=NULL,
                  PLOT_title=NULL, Xaxis_label=NULL, Yaxis_label=NULL,
                  LEGEND_label=NULL, namesIVMOD_raw=NULL, namesIVMOD_model=NULL)

data_Bodner_2016        data_Bodner_2016

Description
Moderated regression data used by Bodner (2016) to illustrate the tumble graphs method of plotting interactions. The data were also used by Bauer and Curran (2005).

Usage
data(data_Bodner_2016)

Source
<NAME>. (2016). Tumble Graphs: Avoiding misleading end point extrapolation when graphing interactions from a moderated multiple regression analysis. Journal of Educational and Behavioral Statistics, 41(6), 593-604.
<NAME>., & <NAME>. (2005). Probing interactions in fixed and multilevel regression: Inferential and graphical techniques. Multivariate Behavioral Research, 40(3), 373-400.

Examples
head(data_Bodner_2016)
# replicates p 599 of Bodner_2016
SIMPLE.REGRESSION(data=data_Bodner_2016, DV='math90',
                  IV='Anti90', IV_type = 'numeric', IV_range='tumble',
                  MOD='Hyper90', MOD_type = 'numeric', MOD_levels='quantiles',
                  quantiles_IV=c(.1, .9), quantiles_MOD=c(.25, .5, .75),
                  CENTER = FALSE, COVARS=c('age90month','female','grade90','minority'),
                  PLOT_type = 'interaction', PLOT_title=NULL,
                  Xaxis_label=NULL, Yaxis_label=NULL, JN_type = 'Huitema', verbose=TRUE )

data_Chapman_Little_2016        data_Chapman_Little_2016

Description
Moderated regression data from Chapman and Little (2016).

Usage
data(data_Chapman_Little_2016)

Source
<NAME>., & <NAME>. (2016). Climate change and disasters: How framing affects justifications for giving or withholding aid to disaster victims. Social Psychological and Personality Science, 7, 13-20.
Examples
head(data_Chapman_Little_2016)
# the data used by Hayes (2018, Introduction to Mediation, Moderation, and
# Conditional Process Analysis: A Regression-Based Approach), replicating p. 239
SIMPLE.REGRESSION(data=data_Chapman_Little_2016, DV='justify',
                  IV='frame', IV_type = 'numeric', IV_range='tumble',
                  MOD='skeptic', MOD_type = 'numeric', MOD_levels='AikenWest', MOD_range=NULL,
                  quantiles_IV=c(.1, .9), quantiles_MOD=c(.25, .5, .75),
                  CENTER = FALSE, COVARS=NULL,
                  PLOT_type = 'regions', PLOT_title=NULL,
                  Xaxis_label=NULL, Yaxis_label=NULL, LEGEND_label=NULL,
                  JN_type = 'Huitema', verbose=TRUE )

data_Cohen_Aiken_West_2003_7        data_Cohen_Aiken_West_2003_7

Description
Moderated regression data for a continuous predictor and a continuous moderator from Cohen, Cohen, West, & Aiken (2003, Chapter 7).

Usage
data(data_Cohen_Aiken_West_2003_7)

Source
<NAME>., <NAME>., <NAME>., & <NAME>. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Lawrence Erlbaum Associates Publishers.

Examples
head(data_Cohen_Aiken_West_2003_7)
# replicates p 276 of Chapter 7 of Cohen, Cohen, West, & Aiken (2003)
SIMPLE.REGRESSION(data=data_Cohen_Aiken_West_2003_7, DV='yendu',
                  IV='xage', IV_type = 'numeric', IV_range='tumble',
                  MOD='zexer', MOD_type = 'numeric', MOD_levels='AikenWest', MOD_range=NULL,
                  quantiles_IV=c(.1, .9), quantiles_MOD=c(.25, .5, .75),
                  CENTER = TRUE, COVARS=NULL,
                  PLOT_type = 'regions', PLOT_title=NULL,
                  Xaxis_label=NULL, Yaxis_label=NULL, LEGEND_label=NULL,
                  JN_type = 'Huitema', verbose=TRUE )

data_Cohen_Aiken_West_2003_9        data_Cohen_Aiken_West_2003_9

Description
Moderated regression data for a continuous predictor and a categorical moderator from Cohen, Cohen, West, & Aiken (2003, Chapter 9).

Usage
data(data_Cohen_Aiken_West_2003_9)

Source
<NAME>., <NAME>., <NAME>., & <NAME>. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Lawrence Erlbaum Associates Publishers.
Examples
head(data_Cohen_Aiken_West_2003_9)
SIMPLE.REGRESSION(data=data_Cohen_Aiken_West_2003_9, DV='SALARY',
                  forced=c('PUB','DEPART_f'))
# replicates p 376 of Chapter 9 of Cohen, Cohen, West, & Aiken (2003)
SIMPLE.REGRESSION(data=data_Cohen_Aiken_West_2003_9, DV='SALARY', forced=NULL,
                  IV='PUB', IV_type = 'numeric', IV_range='tumble',
                  MOD='DEPART_f', MOD_type = 'factor', MOD_levels='AikenWest', MOD_range=NULL,
                  quantiles_IV=c(.1, .9), quantiles_MOD=c(.25, .5, .75),
                  CENTER = TRUE, COVARS=NULL,
                  PLOT_type = 'regions', PLOT_title=NULL,
                  Xaxis_label=NULL, Yaxis_label=NULL, LEGEND_label=NULL,
                  JN_type = 'Huitema', verbose=TRUE )

data_Green_Salkind_2014        data_Green_Salkind_2014

Description
Multiple regression data from Green and Salkind (2014).

Usage
data(data_Green_Salkind_2014)

Source
<NAME>., & <NAME>. (2014). Lesson 34: Multiple linear regression (pp. 257-269). In, Using SPSS for Windows and Macintosh: Analyzing and understanding data. New York, NY: Pearson.

Examples
head(data_Green_Salkind_2014)
# forced (simultaneous) entry; replicating the output on p. 263
SIMPLE.REGRESSION(data=data_Green_Salkind_2014, DV='injury',
                  forced=c('quads','gluts','abdoms','arms','grip'))
# hierarchical entry; replicating the output on p. 265-266
SIMPLE.REGRESSION(data=data_Green_Salkind_2014, DV='injury',
                  hierarchical = list( step1=c('quads','gluts','abdoms'),
                                       step2=c('arms','grip')) )

data_Huitema_2011        data_Huitema_2011

Description
Moderated regression data for a continuous predictor and a dichotomous moderator from Huitema (2011, p. 253).

Usage
data(data_Huitema_2011)

Source
<NAME>. (2011). The analysis of covariance and alternatives: Statistical methods for experiments, quasi-experiments, and single-case studies. Hoboken, NJ: Wiley.

Examples
head(data_Huitema_2011)
# replicating results on p.
255 for the Johnson-Neyman technique for a categorical moderator
SIMPLE.REGRESSION(data=data_Huitema_2011, DV='Y',
                  IV='X', IV_type = 'numeric', IV_range='tumble',
                  MOD='D', MOD_type = 'factor',
                  CENTER = FALSE, COVARS=NULL,
                  PLOT_type = 'interaction', PLOT_title=NULL,
                  Xaxis_label=NULL, Yaxis_label=NULL, LEGEND_label=NULL,
                  JN_type = 'Huitema', verbose=TRUE )

data_Lorah_Wong_2018        data_Lorah_Wong_2018

Description
Moderated regression data from Lorah and Wong (2018).

Usage
data(data_Lorah_Wong_2018)

Source
<NAME>. & <NAME>. (2018). Contemporary applications of moderation analysis in counseling psychology. Journal of Counseling Psychology, 65(5), 629-640.

Examples
head(data_Lorah_Wong_2018)
SIMPLE.REGRESSION(data=data_Lorah_Wong_2018, DV='sis',
                  IV='burden', IV_type = 'numeric', IV_range='tumble',
                  MOD='belong', MOD_type = 'numeric', MOD_levels='quantiles', MOD_range=NULL,
                  quantiles_IV=c(.1, .9), quantiles_MOD=c(.25, .5, .75),
                  CENTER = TRUE, COVARS='dep',
                  PLOT_type = 'regions', PLOT_title=NULL,
                  Xaxis_label=NULL, Yaxis_label=NULL, LEGEND_label=NULL,
                  JN_type = 'Huitema', verbose=TRUE )

data_OConnor_Dvorak_2001        data_OConnor_Dvorak_2001

Description
Moderated regression data from O'Connor and Dvorak (2001).

Details
A data frame with scores for 131 male adolescents on resiliency, maternal harshness, and aggressive behavior. The data are from O'Connor and Dvorak (2001, p. 17) and are provided as trial moderated regression data for the SIMPLE.REGRESSION and REGIONS_OF_SIGNIFICANCE functions.

References
<NAME>., & <NAME>. (2001). Conditional associations between parental behavior and adolescent problems: A search for personality-environment interactions. Journal of Research in Personality, 35, 1-26.
Examples
head(data_OConnor_Dvorak_2001)
mharsh_agg <- SIMPLE.REGRESSION(data=data_OConnor_Dvorak_2001, DV='Aggressive_Behavior',
                  IV='Maternal_Harshness', IV_type = 'numeric', IV_range=c(1,7.7),
                  MOD='Resiliency', MOD_type = 'numeric', MOD_levels='AikenWest', MOD_range=NULL,
                  quantiles_IV=c(.1, .9), quantiles_MOD=c(.25, .5, .75),
                  CENTER = FALSE, COVARS=NULL,
                  PLOT_type = 'interaction', PLOT_title=NULL, DV_range = c(1,6),
                  Xaxis_label='Maternal Harshness',
                  Yaxis_label='Adolescent Aggressive Behavior',
                  LEGEND_label='Resiliency',
                  JN_type = 'Huitema', verbose=TRUE )
REGIONS_OF_SIGNIFICANCE(model=mharsh_agg, IV_range=NULL, MOD_range='minmax',
                  PLOT_title='Slopes of Maternal Harshness on Aggression by Resiliency',
                  Xaxis_label='Resiliency',
                  Yaxis_label='Slopes of Maternal Harshness on Aggressive Behavior ',
                  LEGEND_label=NULL, namesIVMOD_raw=NULL, namesIVMOD_model=NULL)

data_Pedhazur_1997        data_Pedhazur_1997

Description
Moderated regression data for a continuous predictor and a dichotomous moderator from Pedhazur (1997, p. 588).

Usage
data(data_Pedhazur_1997)

Source
<NAME>. (1997). Multiple regression in behavioral research: Explanation and prediction (3rd ed.). Fort Worth, Texas: Wadsworth Thomson Learning.

Examples
head(data_Pedhazur_1997)
# replicating results on p.
594 for the Johnson-Neyman technique for a categorical moderator
SIMPLE.REGRESSION(data=data_Pedhazur_1997, DV='Y',
                  IV='X', IV_type = 'numeric', IV_range='tumble',
                  MOD='Directive', MOD_type = 'factor', MOD_levels='quantiles', MOD_range=NULL,
                  quantiles_IV=c(.1, .9), quantiles_MOD=c(.25, .5, .75),
                  CENTER = FALSE, COVARS=NULL,
                  PLOT_type = 'interaction', PLOT_title=NULL,
                  Xaxis_label=NULL, Yaxis_label=NULL, LEGEND_label=NULL,
                  JN_type = 'Pedhazur', verbose=TRUE )

PARTIAL_COEFS        Standardized coefficients and partial correlations for multiple regression

Description
Produces standardized regression coefficients, partial correlations, and semi-partial correlations for a correlation matrix in which one variable is a dependent or outcome variable and the other variables are independent or predictor variables.

Usage
PARTIAL_COEFS(cormat, modelRsq=NULL, verbose=TRUE)

Arguments
cormat     A correlation matrix. The DV (the dependent or outcome variable) must be in the first row/column of cormat.
modelRsq   optional. The model Rsquared, which makes the computations slightly faster when it is available.
verbose    Should detailed results be displayed in console? The options are: TRUE (default) or FALSE.

Value
An object of class "data.frame" containing the standardized regression coefficients (betas), the Pearson correlations, the partial correlations, and the semi-partial correlations for each variable with the DV.

Author(s)
<NAME>

References
<NAME>., <NAME>., <NAME>., & <NAME>. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Lawrence Erlbaum Associates Publishers.

Examples
PARTIAL_COEFS(cormat = cor(data_Green_Salkind_2014))

REGIONS_OF_SIGNIFICANCE        Plots of Johnson-Neyman regions of significance for interactions

Description
Plots of Johnson-Neyman regions of significance for interactions in moderated multiple regression, for both SIMPLE.REGRESSION models (objects) and for lme models from the nlme package.
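The core quantity PARTIAL_COEFS reports (standardized betas from a correlation matrix with the DV in the first row/column) can be sketched in base R via the textbook identity beta = Rxx⁻¹ rxy. This is an illustrative reimplementation on toy data, not the package's own code:

```r
set.seed(3)
# Toy data: DV in the first column, as PARTIAL_COEFS expects
dat <- data.frame(y = rnorm(100))
dat$x1 <- dat$y * 0.5 + rnorm(100)
dat$x2 <- rnorm(100)
R   <- cor(dat)
Rxx <- R[-1, -1]          # predictor intercorrelations
rxy <- R[-1, 1]           # predictor-DV correlations
betas <- solve(Rxx, rxy)  # standardized regression coefficients
# Same values as lm() fitted to standardized (scaled) variables:
betas_lm <- coef(lm(y ~ x1 + x2, data = as.data.frame(scale(dat))))[-1]
all.equal(unname(betas), unname(betas_lm))
```

With standardized variables the cross-product matrices reduce to correlations, which is why the matrix solve and lm() agree.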
Usage
REGIONS_OF_SIGNIFICANCE(model, IV_range=NULL, MOD_range=NULL,
                  PLOT_title=NULL, Xaxis_label=NULL, Yaxis_label=NULL,
                  LEGEND_label=NULL, namesIVMOD_raw=NULL, namesIVMOD_model=NULL)

Arguments
model          The name of a SIMPLE.REGRESSION model, or of an lme model from the nlme package.
IV_range       (optional) The range of the IV to be used in the plot.
MOD_range      (optional) The range of the MOD values to be used in the plot.
PLOT_title     (optional) The plot title.
Xaxis_label    (optional) A label for the X axis to be used in the requested plot.
Yaxis_label    (optional) A label for the Y axis to be used in the requested plot.
LEGEND_label   (optional) The legend label for a moderated regression.
namesIVMOD_raw     optional, and for lme/nlme models only. If model is an lme object and IV is a two-level factor, then namesIVMOD_model must be specified (because lme alters the variable names).
namesIVMOD_model   optional, and for lme/nlme models only. The namesIVMOD_model argument can be used to identify the key terms from an lme model that involves more than the IV, MOD, and interaction terms. The argument is used only to create the key B and S objects for the J-N analyses. Other terms in the model are ignored.

Value
An object of class "SIMPLE.REGRESSION". The object is a list containing the following possible components:
JN.data   The Johnson-Neyman results for a moderated regression.
ros       The Johnson-Neyman regions of significance for a moderated regression.

Author(s)
<NAME>

References
<NAME>., & <NAME>. (2005). Probing interactions in fixed and multilevel regression: Inferential and graphical techniques. Multivariate Behavioral Research, 40(3), 373-400.
<NAME>. (2011). The analysis of covariance and alternatives: Statistical methods for experiments, quasi-experiments, and single-case studies. Hoboken, NJ: Wiley.
<NAME>., & <NAME>. (1936). Tests of certain linear hypotheses and their application to some educational problems. Statistical Research Memoirs, 1, 57-93.
<NAME>., & <NAME>. (1950).
The Johnson-Neyman technique, its theory, and application. Psychometrika, 15, 349-367.
<NAME>. (1997). Multiple regression in behavioral research: Explanation and prediction (3rd ed.). Fort Worth, Texas: Wadsworth Thomson Learning.
<NAME>., <NAME>., <NAME>., & <NAME>. (2014). The identification of regions of significance in the effect of multimorbidity on depressive symptoms using longitudinal data: An application of the Johnson-Neyman technique. Gerontology, 60, 274-281.

Examples
head(data_Cohen_Aiken_West_2003_7)
CAW_7 <- SIMPLE.REGRESSION(data=data_Cohen_Aiken_West_2003_7, DV='yendu',
                  IV='xage', IV_type = 'numeric', IV_range='tumble',
                  MOD='zexer', MOD_type = 'numeric', MOD_levels='quantiles', MOD_range=NULL,
                  quantiles_IV=c(.1, .9), quantiles_MOD=c(.25, .5, .75),
                  CENTER = TRUE, COVARS=NULL,
                  PLOT_type = 'interaction', PLOT_title=NULL,
                  Xaxis_label=NULL, Yaxis_label=NULL, LEGEND_label=NULL,
                  JN_type = 'Huitema', verbose=TRUE )
REGIONS_OF_SIGNIFICANCE(model=CAW_7, IV_range=NULL, MOD_range='minmax',
                  PLOT_title=NULL, Xaxis_label=NULL, Yaxis_label=NULL,
                  LEGEND_label=NULL, namesIVMOD_raw=NULL, namesIVMOD_model=NULL)
head(data_Bauer_Curran_2005)
HSBmod <- nlme::lme(MathAch ~ Sector + CSES + CSES:Sector,
                    data = data_Bauer_Curran_2005,
                    random = ~1 + CSES|School, method = "ML")
summary(HSBmod)
REGIONS_OF_SIGNIFICANCE(model=HSBmod, IV_range=NULL, MOD_range=NULL,
                  PLOT_title=NULL, Xaxis_label=NULL, Yaxis_label=NULL,
                  LEGEND_label=NULL, namesIVMOD_raw=NULL, namesIVMOD_model=NULL)
# moderated regression -- with numeric values for IV_range & MOD_levels='AikenWest'
mharsh_agg <- SIMPLE.REGRESSION(data=data_OConnor_Dvorak_2001, DV='Aggressive_Behavior',
                  IV='Maternal_Harshness', IV_type = 'numeric', IV_range=c(1,7.7),
                  MOD='Resiliency', MOD_type = 'numeric', MOD_levels='AikenWest', MOD_range=NULL,
                  quantiles_IV=c(.1, .9), quantiles_MOD=c(.25, .5, .75),
                  CENTER = FALSE, COVARS=NULL,
                  PLOT_type = 'interaction', PLOT_title=NULL, DV_range = c(1,6),
                  Xaxis_label='Maternal Harshness',
                  Yaxis_label='Adolescent Aggressive Behavior',
                  LEGEND_label='Resiliency',
                  JN_type = 'Huitema', verbose=TRUE )
REGIONS_OF_SIGNIFICANCE(model=mharsh_agg, IV_range=NULL, MOD_range='minmax',
                  PLOT_title='Slopes of Maternal Harshness on Aggression by Resiliency',
                  Xaxis_label='Resiliency',
                  Yaxis_label='Slopes of Maternal Harshness on Aggressive Behavior ',
                  LEGEND_label=NULL, namesIVMOD_raw=NULL, namesIVMOD_model=NULL)

SIMPLE.REGRESSION        Multiple regression and moderated multiple regression

Description
Provides SPSS- and SAS-like output for least squares simultaneous entry regression, hierarchical entry regression, and moderated regression, as well as interaction plots and Johnson-Neyman regions of significance for interactions. The output includes standardized coefficients, partial and semi-partial correlations, collinearity diagnostics, plots of residuals, and detailed information about simple slopes for interactions.

Usage
SIMPLE.REGRESSION(data, DV, forced=NULL, hierarchical=NULL,
                  IV=NULL, IV_type = 'numeric', IV_range='tumble',
                  MOD=NULL, MOD_type = 'numeric', MOD_levels='quantiles', MOD_range=NULL,
                  quantiles_IV=c(.1, .9), quantiles_MOD=c(.25, .5, .75),
                  CENTER = TRUE, COVARS=NULL,
                  PLOT_type = NULL, PLOT_title=NULL, DV_range=NULL,
                  Xaxis_label=NULL, Yaxis_label=NULL, LEGEND_label=NULL,
                  JN_type = 'Huitema', verbose=TRUE )

Arguments
data     A dataframe where the rows are cases and the columns are the variables.
DV       The name of the dependent variable, e.g., DV = 'outcomeVar'.
forced   (optional) A vector of the names of the predictor variables for a forced/simultaneous entry regression, e.g., forced = c('VarA', 'VarB', 'VarC'). The variables can be numeric or factors.
hierarchical   (optional) A list with the names of the predictor variables for each step of a hierarchical regression, e.g., hierarchical = list(step1=c('VarA', 'VarB'), step2=c('VarC', 'VarD'), step3=c('VarE', 'VarF')). The variables can be numeric or factors.
IV        (optional) The name of the independent variable for a moderated regression. Not required for forced or hierarchical regression.
IV_type   (optional) The type of independent variable for a moderated regression. The options are 'numeric' (the default) or 'factor'. Not required for forced or hierarchical regression.
IV_range  (optional) The independent variable range for a moderated regression plot. The options are: 'tumble' (the default), for tumble graphs following Bodner (2016); 'quantiles', in which case the 10th and 90th quantiles of the IV will be used (alternative values can be specified using the quantiles_IV argument); NULL, in which case the minimum and maximum IV values will be used; 'AikenWest', in which case the IV mean - one SD, and the IV mean + one SD, will be used; and a vector of two user-provided values (e.g., c(1, 10)).
MOD       (optional) The name of the moderator variable for a moderated regression. Not required for a regular (non moderated) multiple regression.
MOD_type  (optional) The type of moderator variable for a moderated regression. The options are 'numeric' (the default) or 'factor'. Not required for forced or hierarchical regression.
MOD_levels   (optional) The levels of the moderator variable to be used in a moderated regression, if MOD is continuous. Not required for a regular (non moderated) multiple regression. The options are: 'quantiles', in which case the .25, .5, and .75 quantiles of the MOD variable will be used (alternative values can be specified using the quantiles_MOD argument); 'AikenWest', in which case the mean of MOD, the mean of MOD - one SD, and the mean of MOD + one SD, will be used; and a vector of two user-provided values (e.g., c(1, 10)).
MOD_range    (optional) The range of the MOD values to be used in the Johnson-Neyman regions of significance analyses. The options are: NULL (the default), in which case the minimum and maximum MOD values will be used; and a vector of two user-provided values (e.g., c(1, 10)).
quantiles_IV    (optional) The quantiles of the independent variable to be used as the IV range for a moderated regression plot.
quantiles_MOD   (optional) The quantiles of the moderator variable to be used as the MOD simple slope values in the moderated regression analyses.
CENTER       (optional) Logical indicating whether the IV and MOD variables should be centered in a moderated regression analysis (default = TRUE).
COVARS       (optional) The name(s) of possible covariate variables for a moderated regression analysis, e.g., COVARS = c('CovarA', 'CovarB', 'CovarC').
PLOT_type    (optional) The kind of plot. The options are 'residuals' (the default), 'interaction' (for a traditional moderated regression interaction plot), and 'regions' (for a moderated regression Johnson-Neyman regions of significance plot).
PLOT_title   (optional) The plot title for a moderated regression.
DV_range     (optional) The range of Y-axis values for a moderated regression interaction plot, e.g., DV_range = c(1,10).
Xaxis_label  (optional) A label for the X axis to be used in the requested plot.
Yaxis_label  (optional) A label for the Y axis to be used in the requested plot.
LEGEND_label (optional) The legend label for a moderated regression.
JN_type      (optional) The formula to be used in computing the critical F value for the Johnson-Neyman regions of significance analyses. The options are 'Huitema' (the default), or 'Pedhazur'.
verbose      Should detailed results be displayed in console? The options are: TRUE (default) or FALSE. If TRUE, plots of residuals are also produced.

Details
This function relies heavily on the lm function from the stats package. It supplements the lm function output with additional statistics and it formats the output so that it resembles SPSS and SAS regression output. The predictor variables can be numeric or factors. Only least squares regressions are performed.

Value
An object of class "SIMPLE.REGRESSION".
The object is a list containing the following possible components:

modelMAIN All of the lm function output for the regression model without interaction terms.
modelMAINsum All of the summary.lm function output for the regression model without interaction terms.
mainRcoefs Predictor coefficients for the model without interaction terms.
modeldata All of the predictor and outcome raw data that were used in the model, along with regression diagnostic statistics for each case.
collin_diags Collinearity diagnostic coefficients for models without interaction terms.
modelXNsum Regression model statistics with interaction terms.
RsqchXn Rsquared change for the interaction.
fsquaredXN fsquared change for the interaction.
xnRcoefs Predictor coefficients for the model with interaction terms.
simslop The simple slopes.
simslopZ The standardized simple slopes.
plotdon The plot data for a moderated regression.
JN.data The Johnson-Neyman results for a moderated regression.
ros The Johnson-Neyman regions of significance for a moderated regression.

Author(s)

<NAME>

References

<NAME>. (2016). Tumble graphs: Avoiding misleading end point extrapolation when graphing interactions from a moderated multiple regression analysis. Journal of Educational and Behavioral Statistics, 41, 593-604.

<NAME>., <NAME>., <NAME>., & <NAME>. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Lawrence Erlbaum Associates Publishers.

<NAME>., & <NAME>. (2017). Regression analysis and linear models: Concepts, applications, and implementation. New York: The Guilford Press.

<NAME>. (2018a). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (2nd ed.). New York, NY: Guilford Press.

<NAME>., & <NAME>. (2016). A tutorial on testing, visualizing, and probing an interaction involving a multicategorical variable in linear regression analysis. Communication Methods and Measures, 11, 1-30.

<NAME>. (1997).
Multiple regression in behavioral research: Explanation and prediction (3rd ed.). Fort Worth, Texas: Wadsworth Thomson Learning.

Examples

# forced (simultaneous) entry
head(data_Green_Salkind_2014)
SIMPLE.REGRESSION(data=data_Green_Salkind_2014, DV='injury',
                  forced = c('quads','gluts','abdoms','arms','grip'))

# hierarchical entry
SIMPLE.REGRESSION(data=data_Green_Salkind_2014, DV='injury',
                  hierarchical = list( step1=c('quads','gluts','abdoms'),
                                       step2=c('arms','grip')) )

# moderated regression -- with IV_range = 'AikenWest'
head(data_Lorah_Wong_2018)
SIMPLE.REGRESSION(data=data_Lorah_Wong_2018, DV='sis',
                  IV='burden', IV_type = 'numeric', IV_range='AikenWest',
                  MOD='belong', MOD_type = 'numeric', MOD_levels='quantiles', MOD_range=NULL,
                  quantiles_IV=c(.1, .9), quantiles_MOD=c(.25, .5, .75),
                  CENTER = TRUE, COVARS='dep',
                  PLOT_type = 'interaction', PLOT_title=NULL,
                  DV_range = c(1,1.25),  # 'regions'
                  Xaxis_label=NULL, Yaxis_label=NULL, LEGEND_label=NULL,
                  JN_type = 'Huitema', verbose=TRUE )

# moderated regression -- with IV_range = 'tumble'
head(data_Lorah_Wong_2018)
SIMPLE.REGRESSION(data=data_Lorah_Wong_2018, DV='sis',
                  IV='burden', IV_type = 'numeric', IV_range='tumble',
                  MOD='belong', MOD_type = 'numeric', MOD_levels='quantiles', MOD_range=NULL,
                  quantiles_IV=c(.1, .9), quantiles_MOD=c(.25, .5, .75),
                  CENTER = TRUE, COVARS='dep',
                  PLOT_type = 'interaction', PLOT_title=NULL,
                  DV_range = c(1,1.25),  # 'regions'
                  Xaxis_label=NULL, Yaxis_label=NULL, LEGEND_label=NULL,
                  JN_type = 'Huitema', verbose=TRUE )

# moderated regression -- with numeric values for IV_range & MOD_levels='AikenWest'
SIMPLE.REGRESSION(data=data_OConnor_Dvorak_2001, DV='Aggressive_Behavior',
                  IV='Maternal_Harshness', IV_type = 'numeric', IV_range=c(1,7.7),
                  MOD='Resiliency', MOD_type = 'numeric', MOD_levels='AikenWest', MOD_range=NULL,
                  quantiles_IV=c(.1, .9), quantiles_MOD=c(.25, .5, .75),
                  CENTER = FALSE, COVARS=NULL,
                  PLOT_type = 'interaction', PLOT_title=NULL, DV_range = c(1,6),
                  Xaxis_label='Maternal Harshness',
                  Yaxis_label='Adolescent Aggressive Behavior', LEGEND_label='Resiliency',
                  JN_type = 'Huitema', verbose=TRUE )
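As a language-agnostic illustration of the ’AikenWest’ convention used in the examples above (simple-slope levels for a continuous moderator at the mean, one SD below the mean, and one SD above the mean), here is a minimal sketch in Rust. The helper names are hypothetical and this is not code from the package:

```rust
// Illustrative only: how 'AikenWest' moderator levels (mean - SD, mean,
// mean + SD) are computed. Helper names are hypothetical, not package code.

fn mean(xs: &[f64]) -> f64 {
    xs.iter().sum::<f64>() / xs.len() as f64
}

// Sample standard deviation (n - 1 denominator), as in R's sd().
fn sd(xs: &[f64]) -> f64 {
    let m = mean(xs);
    (xs.iter().map(|x| (x - m).powi(2)).sum::<f64>() / (xs.len() as f64 - 1.0)).sqrt()
}

// Aiken & West-style simple-slope levels for a continuous moderator.
fn aiken_west_levels(moderator: &[f64]) -> [f64; 3] {
    let (m, s) = (mean(moderator), sd(moderator));
    [m - s, m, m + s]
}

fn main() {
    let levels = aiken_west_levels(&[1.0, 2.0, 3.0, 4.0, 5.0]);
    println!("moderator levels: {:?}", levels); // mean 3.0, SD ~1.58
}
```

The three returned values are the moderator levels at which simple slopes would be estimated and plotted.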
Crate rusoto_cloudwatch
===

Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are the variables you want to measure for your resources and applications.

CloudWatch alarms send notifications or automatically change the resources you are monitoring based on rules that you define. For example, you can monitor the CPU usage and disk reads and writes of your Amazon EC2 instances. Then, use this data to determine whether you should launch additional instances to handle increased load. You can also use this data to stop under-used instances to save money.

In addition to monitoring the built-in metrics that come with AWS, you can monitor your own custom metrics. With CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health.

If you’re using the service, you’re probably looking for CloudWatchClient and CloudWatch.

Structs
---

AlarmHistoryItem: Represents the history of a specific alarm.
AnomalyDetector: An anomaly detection model associated with a particular CloudWatch metric and statistic. You can use the model to display a band of expected normal values when the metric is graphed.
AnomalyDetectorConfiguration: The configuration specifies details about how the anomaly detection model is to be trained, including time ranges to exclude from use for training the model and the time zone to use for the metric.
CloudWatchClient: A client for the CloudWatch API.
CompositeAlarm: The details about a composite alarm.
DashboardEntry: Represents a specific dashboard.
DashboardValidationMessage: An error or warning for the operation.
Datapoint: Encapsulates the statistical data that CloudWatch computes from metric data.
DeleteAlarmsInput, DeleteAnomalyDetectorInput, DeleteAnomalyDetectorOutput, DeleteDashboardsInput, DeleteDashboardsOutput, DeleteInsightRulesInput, DeleteInsightRulesOutput, DeleteMetricStreamInput, DeleteMetricStreamOutput, DescribeAlarmHistoryInput, DescribeAlarmHistoryOutput, DescribeAlarmsForMetricInput, DescribeAlarmsForMetricOutput, DescribeAlarmsInput, DescribeAlarmsOutput, DescribeAnomalyDetectorsInput, DescribeAnomalyDetectorsOutput, DescribeInsightRulesInput, DescribeInsightRulesOutput
Dimension: A dimension is a name/value pair that is part of the identity of a metric. You can assign up to 10 dimensions to a metric. Because dimensions are part of the unique identifier for a metric, whenever you add a unique name/value pair to one of your metrics, you are creating a new variation of that metric.
DimensionFilter: Represents filters for a dimension.
DisableAlarmActionsInput, DisableInsightRulesInput, DisableInsightRulesOutput, EnableAlarmActionsInput, EnableInsightRulesInput, EnableInsightRulesOutput, GetDashboardInput, GetDashboardOutput, GetInsightRuleReportInput, GetInsightRuleReportOutput, GetMetricDataInput, GetMetricDataOutput, GetMetricStatisticsInput, GetMetricStatisticsOutput, GetMetricStreamInput, GetMetricStreamOutput, GetMetricWidgetImageInput, GetMetricWidgetImageOutput
InsightRule: This structure contains the definition for a Contributor Insights rule.
InsightRuleContributor: One of the unique contributors found by a Contributor Insights rule. If the rule contains multiple keys, then a unique contributor is a unique combination of values from all the keys in the rule. If the rule contains a single key, then each unique contributor is each unique value for this key. For more information, see GetInsightRuleReport.
InsightRuleContributorDatapoint: One data point related to one contributor. For more information, see GetInsightRuleReport and InsightRuleContributor.
InsightRuleMetricDatapoint: One data point from the metric time series returned in a Contributor Insights rule report. For more information, see GetInsightRuleReport.
LabelOptions: This structure includes the `Timezone` parameter, which you can use to specify your time zone so that the labels that are associated with returned metrics display the correct time for your time zone. The `Timezone` value affects a label only if you have a time-based dynamic expression in the label. For more information about dynamic expressions in labels, see Using Dynamic Labels.
ListDashboardsInput, ListDashboardsOutput, ListMetricStreamsInput, ListMetricStreamsOutput, ListMetricsInput, ListMetricsOutput, ListTagsForResourceInput, ListTagsForResourceOutput
MessageData: A message returned by the `GetMetricData` API, including a code and a description.
Metric: Represents a specific metric.
MetricAlarm: The details about a metric alarm.
MetricDataQuery: This structure is used in both `GetMetricData` and `PutMetricAlarm`. The supported use of this structure is different for those two operations. When used in `GetMetricData`, it indicates the metric data to return, and whether this call is just retrieving a batch set of data for one metric, or is performing a math expression on metric data. A single `GetMetricData` call can include up to 500 `MetricDataQuery` structures. When used in `PutMetricAlarm`, it enables you to create an alarm based on a metric math expression. Each `MetricDataQuery` in the array specifies either a metric to retrieve, or a math expression to be performed on retrieved metrics. A single `PutMetricAlarm` call can include up to 20 `MetricDataQuery` structures in the array. The 20 structures can include as many as 10 structures that contain a `MetricStat` parameter to retrieve a metric, and as many as 10 structures that contain the `Expression` parameter to perform a math expression. Of those `Expression` structures, one must have `True` as the value for `ReturnData`. The result of this expression is the value the alarm watches. Any expression used in a `PutMetricAlarm` operation must return a single time series.
For more information, see Metric Math Syntax and Functions in the *Amazon CloudWatch User Guide*. Some of the parameters of this structure also have different uses whether you are using this structure in a `GetMetricData` operation or a `PutMetricAlarm` operation. These differences are explained in the following parameter list.
MetricDataResult: A `GetMetricData` call returns an array of `MetricDataResult` structures. Each of these structures includes the data points for that metric, along with the timestamps of those data points and other identifying information.
MetricDatum: Encapsulates the information sent to either create a metric or add new values to be aggregated into an existing metric.
MetricStat: This structure defines the metric to be returned, along with the statistics, period, and units.
MetricStreamEntry: This structure contains the configuration information about one metric stream.
MetricStreamFilter: This structure contains the name of one of the metric namespaces that is listed in a filter of a metric stream.
PartialFailure: This array is empty if the API operation was successful for all the rules specified in the request. If the operation could not process one of the rules, the following data is returned for each of those rules.
PutAnomalyDetectorInput, PutAnomalyDetectorOutput, PutCompositeAlarmInput, PutDashboardInput, PutDashboardOutput, PutInsightRuleInput, PutInsightRuleOutput, PutMetricAlarmInput, PutMetricDataInput, PutMetricStreamInput, PutMetricStreamOutput
Range: Specifies one range of days or times to exclude from use for training an anomaly detection model.
SetAlarmStateInput, StartMetricStreamsInput, StartMetricStreamsOutput
StatisticSet: Represents a set of statistics that describes a specific metric.
StopMetricStreamsInput, StopMetricStreamsOutput
Tag: A key-value pair associated with a CloudWatch resource.
TagResourceInput, TagResourceOutput, UntagResourceInput, UntagResourceOutput

Enums
---

DeleteAlarmsError: Errors returned by DeleteAlarms
DeleteAnomalyDetectorError: Errors returned by DeleteAnomalyDetector
DeleteDashboardsError: Errors returned by DeleteDashboards
DeleteInsightRulesError: Errors returned by DeleteInsightRules
DeleteMetricStreamError: Errors returned by DeleteMetricStream
DescribeAlarmHistoryError: Errors returned by DescribeAlarmHistory
DescribeAlarmsError: Errors returned by DescribeAlarms
DescribeAlarmsForMetricError: Errors returned by DescribeAlarmsForMetric
DescribeAnomalyDetectorsError: Errors returned by DescribeAnomalyDetectors
DescribeInsightRulesError: Errors returned by DescribeInsightRules
DisableAlarmActionsError: Errors returned by DisableAlarmActions
DisableInsightRulesError: Errors returned by DisableInsightRules
EnableAlarmActionsError: Errors returned by EnableAlarmActions
EnableInsightRulesError: Errors returned by EnableInsightRules
GetDashboardError: Errors returned by GetDashboard
GetInsightRuleReportError: Errors returned by GetInsightRuleReport
GetMetricDataError: Errors returned by GetMetricData
GetMetricStatisticsError: Errors returned by GetMetricStatistics
GetMetricStreamError: Errors returned by GetMetricStream
GetMetricWidgetImageError: Errors returned by GetMetricWidgetImage
ListDashboardsError: Errors returned by ListDashboards
ListMetricStreamsError: Errors returned by ListMetricStreams
ListMetricsError: Errors returned by ListMetrics
ListTagsForResourceError: Errors returned by ListTagsForResource
PutAnomalyDetectorError: Errors returned by PutAnomalyDetector
PutCompositeAlarmError: Errors returned by PutCompositeAlarm
PutDashboardError: Errors returned by PutDashboard
PutInsightRuleError: Errors returned by PutInsightRule
PutMetricAlarmError: Errors returned by PutMetricAlarm
PutMetricDataError: Errors returned by PutMetricData
PutMetricStreamError: Errors returned by PutMetricStream
SetAlarmStateError: Errors returned by SetAlarmState
StartMetricStreamsError: Errors returned by
StartMetricStreams
StopMetricStreamsError: Errors returned by StopMetricStreams
TagResourceError: Errors returned by TagResource
UntagResourceError: Errors returned by UntagResource

Traits
---

CloudWatch: Trait representing the capabilities of the CloudWatch API. CloudWatch clients implement this trait.

Struct rusoto_cloudwatch::CloudWatchClient
===

```
pub struct CloudWatchClient { /* private fields */ }
```

A client for the CloudWatch API.

Implementations
---

impl CloudWatchClient

pub fn new(region: Region) -> CloudWatchClient
Creates a client backed by the default tokio event loop. The client will use the default credentials provider and TLS client.

pub fn new_with<P, D>(request_dispatcher: D, credentials_provider: P, region: Region) -> CloudWatchClient where P: ProvideAwsCredentials + Send + Sync + 'static, D: DispatchSignedRequest + Send + Sync + 'static

pub fn new_with_client(client: Client, region: Region) -> CloudWatchClient

Trait Implementations
---

impl Clone for CloudWatchClient

fn clone(&self) -> CloudWatchClient
Returns a copy of the value.

fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.

impl CloudWatch for CloudWatchClient

fn delete_alarms(&self, input: DeleteAlarmsInput) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteAlarmsError>>> + Send>>
Deletes the specified alarms. You can delete up to 100 alarms in one operation. However, this total can include no more than one composite alarm. For example, you could delete 99 metric alarms and one composite alarm with one operation, but you can't delete two composite alarms with one operation. In the event of an error, no alarms are deleted.
It is possible to create a loop or cycle of composite alarms, where composite alarm A depends on composite alarm B, and composite alarm B also depends on composite alarm A. In this scenario, you can't delete any composite alarm that is part of the cycle because there is always still a composite alarm that depends on that alarm that you want to delete. To get out of such a situation, you must break the cycle by changing the rule of one of the composite alarms in the cycle to remove a dependency that creates the cycle. The simplest change to make to break a cycle is to change the `AlarmRule` of one of the alarms to `False`. Additionally, the evaluation of composite alarms stops if CloudWatch detects a cycle in the evaluation path.

fn delete_anomaly_detector(&self, input: DeleteAnomalyDetectorInput) -> Pin<Box<dyn Future<Output = Result<DeleteAnomalyDetectorOutput, RusotoError<DeleteAnomalyDetectorError>>> + Send>>
Deletes the specified anomaly detection model from your account.

fn delete_dashboards(&self, input: DeleteDashboardsInput) -> Pin<Box<dyn Future<Output = Result<DeleteDashboardsOutput, RusotoError<DeleteDashboardsError>>> + Send>>
Deletes all dashboards that you specify. You can specify up to 100 dashboards to delete. If there is an error during this call, no dashboards are deleted.

fn delete_insight_rules(&self, input: DeleteInsightRulesInput) -> Pin<Box<dyn Future<Output = Result<DeleteInsightRulesOutput, RusotoError<DeleteInsightRulesError>>> + Send>>
Permanently deletes the specified Contributor Insights rules.
If you create a rule, delete it, and then re-create it with the same name, historical data from the first time the rule was created might not be available.

fn delete_metric_stream(&self, input: DeleteMetricStreamInput) -> Pin<Box<dyn Future<Output = Result<DeleteMetricStreamOutput, RusotoError<DeleteMetricStreamError>>> + Send>>
Permanently deletes the metric stream that you specify.

fn describe_alarm_history(&self, input: DescribeAlarmHistoryInput) -> Pin<Box<dyn Future<Output = Result<DescribeAlarmHistoryOutput, RusotoError<DescribeAlarmHistoryError>>> + Send>>
Retrieves the history for the specified alarm. You can filter the results by date range or item type. If an alarm name is not specified, the histories for either all metric alarms or all composite alarms are returned. CloudWatch retains the history of an alarm even if you delete the alarm.

fn describe_alarms(&self, input: DescribeAlarmsInput) -> Pin<Box<dyn Future<Output = Result<DescribeAlarmsOutput, RusotoError<DescribeAlarmsError>>> + Send>>
Retrieves the specified alarms. You can filter the results by specifying a prefix for the alarm name, the alarm state, or a prefix for any action.

fn describe_alarms_for_metric(&self, input: DescribeAlarmsForMetricInput) -> Pin<Box<dyn Future<Output = Result<DescribeAlarmsForMetricOutput, RusotoError<DescribeAlarmsForMetricError>>> + Send>>
Retrieves the alarms for the specified metric. To filter the results, specify a statistic, period, or unit.
This operation retrieves only standard alarms that are based on the specified metric. It does not return alarms based on math expressions that use the specified metric, or composite alarms that use the specified metric.

fn describe_anomaly_detectors(&self, input: DescribeAnomalyDetectorsInput) -> Pin<Box<dyn Future<Output = Result<DescribeAnomalyDetectorsOutput, RusotoError<DescribeAnomalyDetectorsError>>> + Send>>
Lists the anomaly detection models that you have created in your account. You can list all models in your account or filter the results to only the models that are related to a certain namespace, metric name, or metric dimension.

fn describe_insight_rules(&self, input: DescribeInsightRulesInput) -> Pin<Box<dyn Future<Output = Result<DescribeInsightRulesOutput, RusotoError<DescribeInsightRulesError>>> + Send>>
Returns a list of all the Contributor Insights rules in your account. For more information about Contributor Insights, see Using Contributor Insights to Analyze High-Cardinality Data.

fn disable_alarm_actions(&self, input: DisableAlarmActionsInput) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DisableAlarmActionsError>>> + Send>>
Disables the actions for the specified alarms. When an alarm's actions are disabled, the alarm actions do not execute when the alarm state changes.
fn disable_insight_rules(&self, input: DisableInsightRulesInput) -> Pin<Box<dyn Future<Output = Result<DisableInsightRulesOutput, RusotoError<DisableInsightRulesError>>> + Send>>
Disables the specified Contributor Insights rules. When rules are disabled, they do not analyze log groups and do not incur costs.

fn enable_alarm_actions(&self, input: EnableAlarmActionsInput) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<EnableAlarmActionsError>>> + Send>>
Enables the actions for the specified alarms.

fn enable_insight_rules(&self, input: EnableInsightRulesInput) -> Pin<Box<dyn Future<Output = Result<EnableInsightRulesOutput, RusotoError<EnableInsightRulesError>>> + Send>>
Enables the specified Contributor Insights rules. When rules are enabled, they immediately begin analyzing log data.

fn get_dashboard(&self, input: GetDashboardInput) -> Pin<Box<dyn Future<Output = Result<GetDashboardOutput, RusotoError<GetDashboardError>>> + Send>>
Displays the details of the dashboard that you specify. To copy an existing dashboard, use `GetDashboard`, and then use the data returned within `DashboardBody` as the template for the new dashboard when you call `PutDashboard` to create the copy.
source#### fn get_insight_rule_report<'life0, 'async_trait>(    &'life0 self,     input: GetInsightRuleReportInput) -> Pin<Box<dyn Future<Output = Result<GetInsightRuleReportOutput, RusotoError<GetInsightRuleReportError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, This operation returns the time series data collected by a Contributor Insights rule. The data includes the identity and number of contributors to the log group. You can also optionally return one or more statistics about each data point in the time series. These statistics can include the following: * `UniqueContributors` -- the number of unique contributors for each data point. * `MaxContributorValue` -- the value of the top contributor for each data point. The identity of the contributor might change for each data point in the graph. If this rule aggregates by COUNT, the top contributor for each data point is the contributor with the most occurrences in that period. If the rule aggregates by SUM, the top contributor is the contributor with the highest sum in the log field specified by the rule's `Value`, during that period. * `SampleCount` -- the number of data points matched by the rule. * `Sum` -- the sum of the values from all contributors during the time period represented by that data point. * `Minimum` -- the minimum value from a single observation during the time period represented by that data point. * `Maximum` -- the maximum value from a single observation during the time period represented by that data point. * `Average` -- the average value from all contributors during the time period represented by that data point. 
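The per-data-point statistics listed above can be reproduced locally from raw observations. The sketch below is plain Rust with no rusoto types; the function name and input shape are invented for illustration, and it assumes a rule that aggregates by SUM (so `MaxContributorValue` is the largest per-contributor sum in the period):

```rust
use std::collections::HashMap;

/// Hypothetical helper: compute the statistics `GetInsightRuleReport`
/// describes for one data point, from (contributor, value) observations
/// collected within a single period. Not a rusoto API.
fn report_statistics(observations: &[(&str, f64)]) -> HashMap<&'static str, f64> {
    // Per-contributor totals (SUM aggregation).
    let mut per_contributor: HashMap<&str, f64> = HashMap::new();
    for &(who, v) in observations {
        *per_contributor.entry(who).or_insert(0.0) += v;
    }
    let sum: f64 = observations.iter().map(|&(_, v)| v).sum();
    let min = observations.iter().map(|&(_, v)| v).fold(f64::INFINITY, f64::min);
    let max = observations.iter().map(|&(_, v)| v).fold(f64::NEG_INFINITY, f64::max);

    let mut stats = HashMap::new();
    stats.insert("UniqueContributors", per_contributor.len() as f64);
    stats.insert(
        "MaxContributorValue",
        per_contributor.values().cloned().fold(f64::NEG_INFINITY, f64::max),
    );
    stats.insert("SampleCount", observations.len() as f64); // data points matched
    stats.insert("Sum", sum);
    stats.insert("Minimum", min);
    stats.insert("Maximum", max);
    stats.insert("Average", sum / observations.len() as f64); // NaN if empty
    stats
}

fn main() {
    let obs = [("ip-10-0-0-1", 2.0), ("ip-10-0-0-2", 5.0), ("ip-10-0-0-1", 3.0)];
    println!("{:?}", report_statistics(&obs));
}
```

Here "ip-10-0-0-1" contributes 2.0 + 3.0 = 5.0 in total, so `MaxContributorValue` is 5.0 even though its largest single observation is 3.0.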
source#### fn get_metric_data<'life0, 'async_trait>(    &'life0 self,     input: GetMetricDataInput) -> Pin<Box<dyn Future<Output = Result<GetMetricDataOutput, RusotoError<GetMetricDataError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, You can use the `GetMetricData` API to retrieve as many as 500 different metrics in a single request, with a total of as many as 100,800 data points. You can also optionally perform math expressions on the values of the returned statistics, to create new time series that represent new insights into your data. For example, using Lambda metrics, you could divide the Errors metric by the Invocations metric to get an error rate time series. For more information about metric math expressions, see Metric Math Syntax and Functions in the *Amazon CloudWatch User Guide*. Calls to the `GetMetricData` API have a different pricing structure than calls to `GetMetricStatistics`. For more information about pricing, see Amazon CloudWatch Pricing. Amazon CloudWatch retains metric data as follows: * Data points with a period of less than 60 seconds are available for 3 hours. These data points are high-resolution metrics and are available only for custom metrics that have been defined with a `StorageResolution` of 1. * Data points with a period of 60 seconds (1-minute) are available for 15 days. * Data points with a period of 300 seconds (5-minute) are available for 63 days. * Data points with a period of 3600 seconds (1 hour) are available for 455 days (15 months). Data points that are initially published with a shorter period are aggregated together for long-term storage. For example, if you collect data using a period of 1 minute, the data remains available for 15 days with 1-minute resolution. After 15 days, this data is still available, but is aggregated and retrievable only with a resolution of 5 minutes. After 63 days, the data is further aggregated and is available with a resolution of 1 hour. 
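The retention tiers above amount to a simple lookup from a metric's period to how long its data points stay retrievable. A minimal sketch of that schedule (the helper is hypothetical, not a rusoto API; the thresholds are the documented CloudWatch tiers):

```rust
/// Illustrative lookup of the documented CloudWatch retention schedule:
/// given a metric period in seconds, return how many days the data
/// points remain available at that resolution.
fn retention_days(period_seconds: u32) -> f64 {
    match period_seconds {
        s if s < 60 => 3.0 / 24.0, // high-resolution (<60s): 3 hours
        s if s < 300 => 15.0,      // 1-minute: 15 days
        s if s < 3600 => 63.0,     // 5-minute: 63 days
        _ => 455.0,                // 1-hour and coarser: 455 days (15 months)
    }
}

fn main() {
    for p in [1u32, 60, 300, 3600] {
        println!("period {:>4}s -> retained {} days", p, retention_days(p));
    }
}
```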
If you omit `Unit` in your request, all data that was collected with any unit is returned, along with the corresponding units that were specified when the data was reported to CloudWatch. If you specify a unit, the operation returns only data that was collected with that unit specified. If you specify a unit that does not match the data collected, the results of the operation are null. CloudWatch does not perform unit conversions. source#### fn get_metric_statistics<'life0, 'async_trait>(    &'life0 self,     input: GetMetricStatisticsInput) -> Pin<Box<dyn Future<Output = Result<GetMetricStatisticsOutput, RusotoError<GetMetricStatisticsError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Gets statistics for the specified metric. The maximum number of data points returned from a single call is 1,440. If you request more than 1,440 data points, CloudWatch returns an error. To reduce the number of data points, you can narrow the specified time range and make multiple requests across adjacent time ranges, or you can increase the specified period. Data points are not returned in chronological order. CloudWatch aggregates data points based on the length of the period that you specify. For example, if you request statistics with a one-hour period, CloudWatch aggregates all data points with time stamps that fall within each one-hour period. Therefore, the number of values aggregated by CloudWatch is larger than the number of data points returned. CloudWatch needs raw data points to calculate percentile statistics. If you publish data using a statistic set instead, you can only retrieve percentile statistics for this data if one of the following conditions is true: * The SampleCount value of the statistic set is 1. * The Min and the Max values of the statistic set are equal. Percentile statistics are not available for metrics when any of the metric values are negative numbers. 
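The conditions above for retrieving percentile statistics from data published as a statistic set can be written as a small predicate. This is an illustrative helper, not part of rusoto; it takes the statistic set's `SampleCount`, `Min`, and `Max` fields as plain numbers:

```rust
/// Sketch of the documented rules: percentiles are retrievable from a
/// statistic set only when it represents a single sample or when every
/// sample shares one value (Min == Max), and never when any of the
/// metric values are negative.
fn percentiles_available(sample_count: f64, min: f64, max: f64) -> bool {
    if min < 0.0 || max < 0.0 {
        return false; // negative values rule out percentile statistics
    }
    sample_count == 1.0 || min == max
}

fn main() {
    // A 10-sample set where every observation was 3.0 still qualifies.
    println!("{}", percentiles_available(10.0, 3.0, 3.0));
}
```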
Amazon CloudWatch retains metric data as follows: * Data points with a period of less than 60 seconds are available for 3 hours. These data points are high-resolution metrics and are available only for custom metrics that have been defined with a `StorageResolution` of 1. * Data points with a period of 60 seconds (1-minute) are available for 15 days. * Data points with a period of 300 seconds (5-minute) are available for 63 days. * Data points with a period of 3600 seconds (1 hour) are available for 455 days (15 months). Data points that are initially published with a shorter period are aggregated together for long-term storage. For example, if you collect data using a period of 1 minute, the data remains available for 15 days with 1-minute resolution. After 15 days, this data is still available, but is aggregated and retrievable only with a resolution of 5 minutes. After 63 days, the data is further aggregated and is available with a resolution of 1 hour. CloudWatch started retaining 5-minute and 1-hour metric data as of July 9, 2016. For information about metrics and dimensions supported by AWS services, see the Amazon CloudWatch Metrics and Dimensions Reference in the *Amazon CloudWatch User Guide*. source#### fn get_metric_stream<'life0, 'async_trait>(    &'life0 self,     input: GetMetricStreamInput) -> Pin<Box<dyn Future<Output = Result<GetMetricStreamOutput, RusotoError<GetMetricStreamError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Returns information about the metric stream that you specify. source#### fn get_metric_widget_image<'life0, 'async_trait>(    &'life0 self,     input: GetMetricWidgetImageInput) -> Pin<Box<dyn Future<Output = Result<GetMetricWidgetImageOutput, RusotoError<GetMetricWidgetImageError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, You can use the `GetMetricWidgetImage` API to retrieve a snapshot graph of one or more Amazon CloudWatch metrics as a bitmap image. 
You can then embed this image into your services and products, such as wiki pages, reports, and documents. You could also retrieve images regularly, such as every minute, and create your own custom live dashboard. The graph you retrieve can include all CloudWatch metric graph features, including metric math and horizontal and vertical annotations. There is a limit of 20 transactions per second for this API. Each `GetMetricWidgetImage` action has the following limits: * As many as 100 metrics in the graph. * Up to 100 KB uncompressed payload. source#### fn list_dashboards<'life0, 'async_trait>(    &'life0 self,     input: ListDashboardsInput) -> Pin<Box<dyn Future<Output = Result<ListDashboardsOutput, RusotoError<ListDashboardsError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Returns a list of the dashboards for your account. If you include `DashboardNamePrefix`, only those dashboards with names starting with the prefix are listed. Otherwise, all dashboards in your account are listed. `ListDashboards` returns up to 1000 results on one page. If there are more than 1000 dashboards, you can call `ListDashboards` again and include the value you received for `NextToken` in the first call, to receive the next 1000 results. source#### fn list_metric_streams<'life0, 'async_trait>(    &'life0 self,     input: ListMetricStreamsInput) -> Pin<Box<dyn Future<Output = Result<ListMetricStreamsOutput, RusotoError<ListMetricStreamsError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Returns a list of metric streams in this account. source#### fn list_metrics<'life0, 'async_trait>(    &'life0 self,     input: ListMetricsInput) -> Pin<Box<dyn Future<Output = Result<ListMetricsOutput, RusotoError<ListMetricsError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, List the specified metrics. 
You can use the returned metrics with GetMetricData or GetMetricStatistics to obtain statistical data. Up to 500 results are returned for any one call. To retrieve additional results, use the returned token with subsequent calls. After you create a metric, allow up to 15 minutes before the metric appears. You can see statistics about the metric sooner by using GetMetricData or GetMetricStatistics. `ListMetrics` doesn't return information about metrics if those metrics haven't reported data in the past two weeks. To retrieve those metrics, use GetMetricData or GetMetricStatistics. source#### fn list_tags_for_resource<'life0, 'async_trait>(    &'life0 self,     input: ListTagsForResourceInput) -> Pin<Box<dyn Future<Output = Result<ListTagsForResourceOutput, RusotoError<ListTagsForResourceError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Displays the tags associated with a CloudWatch resource. Currently, alarms and Contributor Insights rules support tagging. source#### fn put_anomaly_detector<'life0, 'async_trait>(    &'life0 self,     input: PutAnomalyDetectorInput) -> Pin<Box<dyn Future<Output = Result<PutAnomalyDetectorOutput, RusotoError<PutAnomalyDetectorError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Creates an anomaly detection model for a CloudWatch metric. You can use the model to display a band of expected normal values when the metric is graphed. For more information, see CloudWatch Anomaly Detection. source#### fn put_composite_alarm<'life0, 'async_trait>(    &'life0 self,     input: PutCompositeAlarmInput) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<PutCompositeAlarmError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Creates or updates a *composite alarm*. When you create a composite alarm, you specify a rule expression for the alarm that takes into account the alarm states of other alarms that you have created. 
The composite alarm goes into ALARM state only if all conditions of the rule are met. The alarms specified in a composite alarm's rule expression can include metric alarms and other composite alarms. Using composite alarms can reduce alarm noise. You can create multiple metric alarms, and also create a composite alarm and set up alerts only for the composite alarm. For example, you could create a composite alarm that goes into ALARM state only when more than one of the underlying metric alarms are in ALARM state. Currently, the only alarm actions that can be taken by composite alarms are notifying SNS topics. It is possible to create a loop or cycle of composite alarms, where composite alarm A depends on composite alarm B, and composite alarm B also depends on composite alarm A. In this scenario, you can't delete any composite alarm that is part of the cycle because there is always still a composite alarm that depends on that alarm that you want to delete. To get out of such a situation, you must break the cycle by changing the rule of one of the composite alarms in the cycle to remove a dependency that creates the cycle. The simplest change to make to break a cycle is to change the `AlarmRule` of one of the alarms to `False`. Additionally, the evaluation of composite alarms stops if CloudWatch detects a cycle in the evaluation path. When this operation creates an alarm, the alarm state is immediately set to `INSUFFICIENT_DATA`. The alarm is then evaluated and its state is set appropriately. Any actions associated with the new state are then executed. For a composite alarm, this initial time after creation is the only time that the alarm can be in `INSUFFICIENT_DATA` state. When you update an existing alarm, its state is left unchanged, but the update completely overwrites the previous configuration of the alarm. If you are an IAM user, you must have `iam:CreateServiceLinkedRole` to create a composite alarm that has Systems Manager OpsItem actions. 
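The cycle situation described above can be modeled as a directed graph in which each composite alarm's `AlarmRule` references other alarms. A sketch of detecting such a cycle with a depth-first search (plain Rust; the names are illustrative and nothing here calls CloudWatch):

```rust
use std::collections::HashMap;

/// Returns true if the dependency map (alarm name -> alarms referenced
/// by its `AlarmRule`) contains a cycle. Gray/black DFS: `visiting`
/// holds the current path, `done` holds fully explored alarms.
fn has_cycle<'a>(deps: &HashMap<&'a str, Vec<&'a str>>) -> bool {
    fn visit<'a>(
        node: &'a str,
        deps: &HashMap<&'a str, Vec<&'a str>>,
        visiting: &mut Vec<&'a str>,
        done: &mut Vec<&'a str>,
    ) -> bool {
        if done.contains(&node) {
            return false;
        }
        if visiting.contains(&node) {
            return true; // back-edge onto the current path: cycle found
        }
        visiting.push(node);
        for &next in deps.get(node).into_iter().flatten() {
            if visit(next, deps, visiting, done) {
                return true;
            }
        }
        visiting.pop();
        done.push(node);
        false
    }
    let mut visiting = Vec::new();
    let mut done = Vec::new();
    deps.keys().any(|&n| visit(n, deps, &mut visiting, &mut done))
}

fn main() {
    let mut deps: HashMap<&str, Vec<&str>> = HashMap::new();
    deps.insert("CompositeA", vec!["CompositeB"]);
    deps.insert("CompositeB", vec!["CompositeA"]); // A <-> B cycle
    println!("cycle detected: {}", has_cycle(&deps));
}
```

This mirrors why CloudWatch stops evaluation when it detects a cycle: once an alarm already on the evaluation path is reached again, the rule can never be resolved.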
source#### fn put_dashboard<'life0, 'async_trait>(    &'life0 self,     input: PutDashboardInput) -> Pin<Box<dyn Future<Output = Result<PutDashboardOutput, RusotoError<PutDashboardError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Creates a dashboard if it does not already exist, or updates an existing dashboard. If you update a dashboard, the entire contents are replaced with what you specify here. All dashboards in your account are global, not region-specific. A simple way to create a dashboard using `PutDashboard` is to copy an existing dashboard. To copy an existing dashboard using the console, you can load the dashboard and then use the View/edit source command in the Actions menu to display the JSON block for that dashboard. Another way to copy a dashboard is to use `GetDashboard`, and then use the data returned within `DashboardBody` as the template for the new dashboard when you call `PutDashboard`. When you create a dashboard with `PutDashboard`, a good practice is to add a text widget at the top of the dashboard with a message that the dashboard was created by script and should not be changed in the console. This message could also point console users to the location of the `DashboardBody` script or the CloudFormation template used to create the dashboard. source#### fn put_insight_rule<'life0, 'async_trait>(    &'life0 self,     input: PutInsightRuleInput) -> Pin<Box<dyn Future<Output = Result<PutInsightRuleOutput, RusotoError<PutInsightRuleError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Creates a Contributor Insights rule. Rules evaluate log events in a CloudWatch Logs log group, enabling you to find contributor data for the log events in that log group. For more information, see Using Contributor Insights to Analyze High-Cardinality Data. 
If you create a rule, delete it, and then re-create it with the same name, historical data from the first time the rule was created might not be available. source#### fn put_metric_alarm<'life0, 'async_trait>(    &'life0 self,     input: PutMetricAlarmInput) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<PutMetricAlarmError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Creates or updates an alarm and associates it with the specified metric, metric math expression, or anomaly detection model. Alarms based on anomaly detection models cannot have Auto Scaling actions. When this operation creates an alarm, the alarm state is immediately set to `INSUFFICIENT_DATA`. The alarm is then evaluated and its state is set appropriately. Any actions associated with the new state are then executed. When you update an existing alarm, its state is left unchanged, but the update completely overwrites the previous configuration of the alarm. If you are an IAM user, you must have Amazon EC2 permissions for some alarm operations: * The `iam:CreateServiceLinkedRole` for all alarms with EC2 actions * The `iam:CreateServiceLinkedRole` to create an alarm with Systems Manager OpsItem actions. The first time you create an alarm in the AWS Management Console, the CLI, or by using the PutMetricAlarm API, CloudWatch creates the necessary service-linked role for you. The service-linked roles are called `AWSServiceRoleForCloudWatchEvents` and `AWSServiceRoleForCloudWatchAlarms_ActionSSM`. For more information, see AWS service-linked role. source#### fn put_metric_data<'life0, 'async_trait>(    &'life0 self,     input: PutMetricDataInput) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<PutMetricDataError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Publishes metric data points to Amazon CloudWatch. CloudWatch associates the data points with the specified metric. 
If the specified metric does not exist, CloudWatch creates the metric. When CloudWatch creates a metric, it can take up to fifteen minutes for the metric to appear in calls to ListMetrics. You can publish either individual data points in the `Value` field, or arrays of values and the number of times each value occurred during the period by using the `Values` and `Counts` fields in the `MetricDatum` structure. Using the `Values` and `Counts` method enables you to publish up to 150 values per metric with one `PutMetricData` request, and supports retrieving percentile statistics on this data. Each `PutMetricData` request is limited to 40 KB in size for HTTP POST requests. You can send a payload compressed by gzip. Each request is also limited to no more than 20 different metrics. Although the `Value` parameter accepts numbers of type `Double`, CloudWatch rejects values that are either too small or too large. Values must be in the range of -2^360 to 2^360. In addition, special values (for example, NaN, +Infinity, -Infinity) are not supported. You can use up to 10 dimensions per metric to further clarify what data the metric collects. Each dimension consists of a Name and Value pair. For more information about specifying dimensions, see Publishing Metrics in the *Amazon CloudWatch User Guide*. You specify the time stamp to be associated with each data point. You can specify time stamps that are as much as two weeks before the current date, and as much as 2 hours after the current day and time. Data points with time stamps from 24 hours ago or longer can take at least 48 hours to become available for GetMetricData or GetMetricStatistics from the time they are submitted. Data points with time stamps between 3 and 24 hours ago can take as much as 2 hours to become available for GetMetricData or GetMetricStatistics. CloudWatch needs raw data points to calculate percentile statistics. 
If you publish data using a statistic set instead, you can only retrieve percentile statistics for this data if one of the following conditions is true: * The `SampleCount` value of the statistic set is 1 and `Min`, `Max`, and `Sum` are all equal. * The `Min` and `Max` are equal, and `Sum` is equal to `Min` multiplied by `SampleCount`. source#### fn put_metric_stream<'life0, 'async_trait>(    &'life0 self,     input: PutMetricStreamInput) -> Pin<Box<dyn Future<Output = Result<PutMetricStreamOutput, RusotoError<PutMetricStreamError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Creates or updates a metric stream. Metric streams can automatically stream CloudWatch metrics to AWS destinations including Amazon S3 and to many third-party solutions. For more information, see Using Metric Streams. To create a metric stream, you must be logged on to an account that has the `iam:PassRole` permission and either the `CloudWatchFullAccess` policy or the `cloudwatch:PutMetricStream` permission. When you create or update a metric stream, you choose one of the following: * Stream metrics from all metric namespaces in the account. * Stream metrics from all metric namespaces in the account, except for the namespaces that you list in `ExcludeFilters`. * Stream metrics from only the metric namespaces that you list in `IncludeFilters`. When you use `PutMetricStream` to create a new metric stream, the stream is created in the `running` state. If you use it to update an existing stream, the state of the stream is not changed. source#### fn set_alarm_state<'life0, 'async_trait>(    &'life0 self,     input: SetAlarmStateInput) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<SetAlarmStateError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Temporarily sets the state of an alarm for testing purposes. When the updated state differs from the previous value, the action configured for the appropriate state is invoked. 
For example, if your alarm is configured to send an Amazon SNS message when an alarm is triggered, temporarily changing the alarm state to `ALARM` sends an SNS message. Metric alarms return to their actual state quickly, often within seconds. Because the metric alarm state change happens quickly, it is typically only visible in the alarm's **History** tab in the Amazon CloudWatch console or through DescribeAlarmHistory. If you use `SetAlarmState` on a composite alarm, the composite alarm is not guaranteed to return to its actual state. It returns to its actual state only once any of its child alarms change state. It is also reevaluated if you update its configuration. If an alarm triggers EC2 Auto Scaling policies or application Auto Scaling policies, you must include information in the `StateReasonData` parameter to enable the policy to take the correct action. source#### fn start_metric_streams<'life0, 'async_trait>(    &'life0 self,     input: StartMetricStreamsInput) -> Pin<Box<dyn Future<Output = Result<StartMetricStreamsOutput, RusotoError<StartMetricStreamsError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Starts the streaming of metrics for one or more of your metric streams. source#### fn stop_metric_streams<'life0, 'async_trait>(    &'life0 self,     input: StopMetricStreamsInput) -> Pin<Box<dyn Future<Output = Result<StopMetricStreamsOutput, RusotoError<StopMetricStreamsError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Stops the streaming of metrics for one or more of your metric streams. source#### fn tag_resource<'life0, 'async_trait>(    &'life0 self,     input: TagResourceInput) -> Pin<Box<dyn Future<Output = Result<TagResourceOutput, RusotoError<TagResourceError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Assigns one or more tags (key-value pairs) to the specified CloudWatch resource. 
Currently, the only CloudWatch resources that can be tagged are alarms and Contributor Insights rules. Tags can help you organize and categorize your resources. You can also use them to scope user permissions by granting a user permission to access or change only resources with certain tag values. Tags don't have any semantic meaning to AWS and are interpreted strictly as strings of characters. You can use the `TagResource` action with an alarm that already has tags. If you specify a new tag key for the alarm, this tag is appended to the list of tags associated with the alarm. If you specify a tag key that is already associated with the alarm, the new tag value that you specify replaces the previous value for that tag. You can associate as many as 50 tags with a CloudWatch resource. source#### fn untag_resource<'life0, 'async_trait>(    &'life0 self,     input: UntagResourceInput) -> Pin<Box<dyn Future<Output = Result<UntagResourceOutput, RusotoError<UntagResourceError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Removes one or more tags from the specified resource. Auto Trait Implementations --- ### impl !RefUnwindSafe for CloudWatchClient ### impl Send for CloudWatchClient ### impl Sync for CloudWatchClient ### impl Unpin for CloudWatchClient ### impl !UnwindSafe for CloudWatchClient Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. 
source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<Self> Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<Self> Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mut T) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<Self> Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. 
Read more Trait rusoto_cloudwatch::CloudWatch === ``` pub trait CloudWatch { fn delete_alarms<'life0, 'async_trait>(         &'life0 self,         input: DeleteAlarmsInput     ) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteAlarmsError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn delete_anomaly_detector<'life0, 'async_trait>(         &'life0 self,         input: DeleteAnomalyDetectorInput     ) -> Pin<Box<dyn Future<Output = Result<DeleteAnomalyDetectorOutput, RusotoError<DeleteAnomalyDetectorError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn delete_dashboards<'life0, 'async_trait>(         &'life0 self,         input: DeleteDashboardsInput     ) -> Pin<Box<dyn Future<Output = Result<DeleteDashboardsOutput, RusotoError<DeleteDashboardsError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn delete_insight_rules<'life0, 'async_trait>(         &'life0 self,         input: DeleteInsightRulesInput     ) -> Pin<Box<dyn Future<Output = Result<DeleteInsightRulesOutput, RusotoError<DeleteInsightRulesError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn delete_metric_stream<'life0, 'async_trait>(         &'life0 self,         input: DeleteMetricStreamInput     ) -> Pin<Box<dyn Future<Output = Result<DeleteMetricStreamOutput, RusotoError<DeleteMetricStreamError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn describe_alarm_history<'life0, 'async_trait>(         &'life0 self,         input: DescribeAlarmHistoryInput     ) -> Pin<Box<dyn Future<Output = Result<DescribeAlarmHistoryOutput, RusotoError<DescribeAlarmHistoryError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn describe_alarms<'life0, 'async_trait>(         &'life0 self,         input: DescribeAlarmsInput     ) -> 
Pin<Box<dyn Future<Output = Result<DescribeAlarmsOutput, RusotoError<DescribeAlarmsError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn describe_alarms_for_metric<'life0, 'async_trait>(         &'life0 self,         input: DescribeAlarmsForMetricInput     ) -> Pin<Box<dyn Future<Output = Result<DescribeAlarmsForMetricOutput, RusotoError<DescribeAlarmsForMetricError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn describe_anomaly_detectors<'life0, 'async_trait>(         &'life0 self,         input: DescribeAnomalyDetectorsInput     ) -> Pin<Box<dyn Future<Output = Result<DescribeAnomalyDetectorsOutput, RusotoError<DescribeAnomalyDetectorsError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn describe_insight_rules<'life0, 'async_trait>(         &'life0 self,         input: DescribeInsightRulesInput     ) -> Pin<Box<dyn Future<Output = Result<DescribeInsightRulesOutput, RusotoError<DescribeInsightRulesError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn disable_alarm_actions<'life0, 'async_trait>(         &'life0 self,         input: DisableAlarmActionsInput     ) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DisableAlarmActionsError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn disable_insight_rules<'life0, 'async_trait>(         &'life0 self,         input: DisableInsightRulesInput     ) -> Pin<Box<dyn Future<Output = Result<DisableInsightRulesOutput, RusotoError<DisableInsightRulesError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn enable_alarm_actions<'life0, 'async_trait>(         &'life0 self,         input: EnableAlarmActionsInput     ) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<EnableAlarmActionsError>>> + Send + 'async_trait>    where         'life0: 
'async_trait,         Self: 'async_trait; fn enable_insight_rules<'life0, 'async_trait>(         &'life0 self,         input: EnableInsightRulesInput     ) -> Pin<Box<dyn Future<Output = Result<EnableInsightRulesOutput, RusotoError<EnableInsightRulesError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn get_dashboard<'life0, 'async_trait>(         &'life0 self,         input: GetDashboardInput     ) -> Pin<Box<dyn Future<Output = Result<GetDashboardOutput, RusotoError<GetDashboardError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn get_insight_rule_report<'life0, 'async_trait>(         &'life0 self,         input: GetInsightRuleReportInput     ) -> Pin<Box<dyn Future<Output = Result<GetInsightRuleReportOutput, RusotoError<GetInsightRuleReportError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn get_metric_data<'life0, 'async_trait>(         &'life0 self,         input: GetMetricDataInput     ) -> Pin<Box<dyn Future<Output = Result<GetMetricDataOutput, RusotoError<GetMetricDataError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn get_metric_statistics<'life0, 'async_trait>(         &'life0 self,         input: GetMetricStatisticsInput     ) -> Pin<Box<dyn Future<Output = Result<GetMetricStatisticsOutput, RusotoError<GetMetricStatisticsError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn get_metric_stream<'life0, 'async_trait>(         &'life0 self,         input: GetMetricStreamInput     ) -> Pin<Box<dyn Future<Output = Result<GetMetricStreamOutput, RusotoError<GetMetricStreamError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn get_metric_widget_image<'life0, 'async_trait>(         &'life0 self,         input: GetMetricWidgetImageInput     ) -> Pin<Box<dyn Future<Output = 
Result<GetMetricWidgetImageOutput, RusotoError<GetMetricWidgetImageError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn list_dashboards<'life0, 'async_trait>(         &'life0 self,         input: ListDashboardsInput     ) -> Pin<Box<dyn Future<Output = Result<ListDashboardsOutput, RusotoError<ListDashboardsError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn list_metric_streams<'life0, 'async_trait>(         &'life0 self,         input: ListMetricStreamsInput     ) -> Pin<Box<dyn Future<Output = Result<ListMetricStreamsOutput, RusotoError<ListMetricStreamsError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn list_metrics<'life0, 'async_trait>(         &'life0 self,         input: ListMetricsInput     ) -> Pin<Box<dyn Future<Output = Result<ListMetricsOutput, RusotoError<ListMetricsError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn list_tags_for_resource<'life0, 'async_trait>(         &'life0 self,         input: ListTagsForResourceInput     ) -> Pin<Box<dyn Future<Output = Result<ListTagsForResourceOutput, RusotoError<ListTagsForResourceError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn put_anomaly_detector<'life0, 'async_trait>(         &'life0 self,         input: PutAnomalyDetectorInput     ) -> Pin<Box<dyn Future<Output = Result<PutAnomalyDetectorOutput, RusotoError<PutAnomalyDetectorError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn put_composite_alarm<'life0, 'async_trait>(         &'life0 self,         input: PutCompositeAlarmInput     ) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<PutCompositeAlarmError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn put_dashboard<'life0, 'async_trait>(         &'life0 self, 
        input: PutDashboardInput     ) -> Pin<Box<dyn Future<Output = Result<PutDashboardOutput, RusotoError<PutDashboardError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn put_insight_rule<'life0, 'async_trait>(         &'life0 self,         input: PutInsightRuleInput     ) -> Pin<Box<dyn Future<Output = Result<PutInsightRuleOutput, RusotoError<PutInsightRuleError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn put_metric_alarm<'life0, 'async_trait>(         &'life0 self,         input: PutMetricAlarmInput     ) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<PutMetricAlarmError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn put_metric_data<'life0, 'async_trait>(         &'life0 self,         input: PutMetricDataInput     ) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<PutMetricDataError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn put_metric_stream<'life0, 'async_trait>(         &'life0 self,         input: PutMetricStreamInput     ) -> Pin<Box<dyn Future<Output = Result<PutMetricStreamOutput, RusotoError<PutMetricStreamError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn set_alarm_state<'life0, 'async_trait>(         &'life0 self,         input: SetAlarmStateInput     ) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<SetAlarmStateError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn start_metric_streams<'life0, 'async_trait>(         &'life0 self,         input: StartMetricStreamsInput     ) -> Pin<Box<dyn Future<Output = Result<StartMetricStreamsOutput, RusotoError<StartMetricStreamsError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn stop_metric_streams<'life0, 'async_trait>(         &'life0 self,       
  input: StopMetricStreamsInput     ) -> Pin<Box<dyn Future<Output = Result<StopMetricStreamsOutput, RusotoError<StopMetricStreamsError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn tag_resource<'life0, 'async_trait>(         &'life0 self,         input: TagResourceInput     ) -> Pin<Box<dyn Future<Output = Result<TagResourceOutput, RusotoError<TagResourceError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn untag_resource<'life0, 'async_trait>(         &'life0 self,         input: UntagResourceInput     ) -> Pin<Box<dyn Future<Output = Result<UntagResourceOutput, RusotoError<UntagResourceError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; } ``` Trait representing the capabilities of the CloudWatch API. CloudWatch clients implement this trait. Required Methods --- source#### fn delete_alarms<'life0, 'async_trait>(    &'life0 self,     input: DeleteAlarmsInput) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteAlarmsError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Deletes the specified alarms. You can delete up to 100 alarms in one operation. However, this total can include no more than one composite alarm. For example, you could delete 99 metric alarms and one composite alarm with one operation, but you can't delete two composite alarms with one operation. In the event of an error, no alarms are deleted. It is possible to create a loop or cycle of composite alarms, where composite alarm A depends on composite alarm B, and composite alarm B also depends on composite alarm A. In this scenario, you can't delete any composite alarm that is part of the cycle because there is always still a composite alarm that depends on that alarm that you want to delete.
To get out of such a situation, you must break the cycle by changing the rule of one of the composite alarms in the cycle to remove a dependency that creates the cycle. The simplest change to make to break a cycle is to change the `AlarmRule` of one of the alarms to `False`. Additionally, the evaluation of composite alarms stops if CloudWatch detects a cycle in the evaluation path. source#### fn delete_anomaly_detector<'life0, 'async_trait>(    &'life0 self,     input: DeleteAnomalyDetectorInput) -> Pin<Box<dyn Future<Output = Result<DeleteAnomalyDetectorOutput, RusotoError<DeleteAnomalyDetectorError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Deletes the specified anomaly detection model from your account. source#### fn delete_dashboards<'life0, 'async_trait>(    &'life0 self,     input: DeleteDashboardsInput) -> Pin<Box<dyn Future<Output = Result<DeleteDashboardsOutput, RusotoError<DeleteDashboardsError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Deletes all dashboards that you specify. You can specify up to 100 dashboards to delete. If there is an error during this call, no dashboards are deleted. source#### fn delete_insight_rules<'life0, 'async_trait>(    &'life0 self,     input: DeleteInsightRulesInput) -> Pin<Box<dyn Future<Output = Result<DeleteInsightRulesOutput, RusotoError<DeleteInsightRulesError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Permanently deletes the specified Contributor Insights rules. If you create a rule, delete it, and then re-create it with the same name, historical data from the first time the rule was created might not be available. 
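The composite-alarm cycle described under `delete_alarms` (A depends on B, B depends on A) can be checked locally before issuing deletes. Below is a minimal, self-contained sketch of such a dependency-cycle check; the alarm names and the graph representation are hypothetical and no Rusoto calls are involved:

```rust
use std::collections::{HashMap, HashSet};

/// Returns true if the composite-alarm dependency graph contains a cycle.
/// `deps` maps an alarm name to the alarms its rule references.
fn has_cycle<'a>(deps: &HashMap<&'a str, Vec<&'a str>>) -> bool {
    fn visit<'a>(
        node: &'a str,
        deps: &HashMap<&'a str, Vec<&'a str>>,
        visiting: &mut HashSet<&'a str>,
        done: &mut HashSet<&'a str>,
    ) -> bool {
        if done.contains(node) {
            return false; // already fully explored, no cycle through here
        }
        if !visiting.insert(node) {
            return true; // re-entered a node on the current path: cycle
        }
        for &child in deps.get(node).into_iter().flatten() {
            if visit(child, deps, visiting, done) {
                return true;
            }
        }
        visiting.remove(node);
        done.insert(node);
        false
    }
    let mut visiting = HashSet::new();
    let mut done = HashSet::new();
    deps.keys().any(|&n| visit(n, deps, &mut visiting, &mut done))
}

fn main() {
    // A depends on B and B depends on A: neither can be deleted.
    let cyclic: HashMap<&str, Vec<&str>> =
        [("A", vec!["B"]), ("B", vec!["A"])].into_iter().collect();
    assert!(has_cycle(&cyclic));

    // Changing B's AlarmRule to a constant (no dependencies) breaks the cycle.
    let broken: HashMap<&str, Vec<&str>> =
        [("A", vec!["B"]), ("B", vec![])].into_iter().collect();
    assert!(!has_cycle(&broken));
    println!("cycle checks passed");
}
```

This mirrors the remedy the documentation suggests: removing one dependency (for example, setting one alarm's `AlarmRule` to `False`) is enough to make the deletion order well-defined.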
source#### fn delete_metric_stream<'life0, 'async_trait>(    &'life0 self,     input: DeleteMetricStreamInput) -> Pin<Box<dyn Future<Output = Result<DeleteMetricStreamOutput, RusotoError<DeleteMetricStreamError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Permanently deletes the metric stream that you specify. source#### fn describe_alarm_history<'life0, 'async_trait>(    &'life0 self,     input: DescribeAlarmHistoryInput) -> Pin<Box<dyn Future<Output = Result<DescribeAlarmHistoryOutput, RusotoError<DescribeAlarmHistoryError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Retrieves the history for the specified alarm. You can filter the results by date range or item type. If an alarm name is not specified, the histories for either all metric alarms or all composite alarms are returned. CloudWatch retains the history of an alarm even if you delete the alarm. source#### fn describe_alarms<'life0, 'async_trait>(    &'life0 self,     input: DescribeAlarmsInput) -> Pin<Box<dyn Future<Output = Result<DescribeAlarmsOutput, RusotoError<DescribeAlarmsError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Retrieves the specified alarms. You can filter the results by specifying a prefix for the alarm name, the alarm state, or a prefix for any action. source#### fn describe_alarms_for_metric<'life0, 'async_trait>(    &'life0 self,     input: DescribeAlarmsForMetricInput) -> Pin<Box<dyn Future<Output = Result<DescribeAlarmsForMetricOutput, RusotoError<DescribeAlarmsForMetricError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Retrieves the alarms for the specified metric. To filter the results, specify a statistic, period, or unit. This operation retrieves only standard alarms that are based on the specified metric. 
It does not return alarms based on math expressions that use the specified metric, or composite alarms that use the specified metric. source#### fn describe_anomaly_detectors<'life0, 'async_trait>(    &'life0 self,     input: DescribeAnomalyDetectorsInput) -> Pin<Box<dyn Future<Output = Result<DescribeAnomalyDetectorsOutput, RusotoError<DescribeAnomalyDetectorsError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Lists the anomaly detection models that you have created in your account. You can list all models in your account or filter the results to only the models that are related to a certain namespace, metric name, or metric dimension. source#### fn describe_insight_rules<'life0, 'async_trait>(    &'life0 self,     input: DescribeInsightRulesInput) -> Pin<Box<dyn Future<Output = Result<DescribeInsightRulesOutput, RusotoError<DescribeInsightRulesError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Returns a list of all the Contributor Insights rules in your account. For more information about Contributor Insights, see Using Contributor Insights to Analyze High-Cardinality Data. source#### fn disable_alarm_actions<'life0, 'async_trait>(    &'life0 self,     input: DisableAlarmActionsInput) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DisableAlarmActionsError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Disables the actions for the specified alarms. When an alarm's actions are disabled, the alarm actions do not execute when the alarm state changes. source#### fn disable_insight_rules<'life0, 'async_trait>(    &'life0 self,     input: DisableInsightRulesInput) -> Pin<Box<dyn Future<Output = Result<DisableInsightRulesOutput, RusotoError<DisableInsightRulesError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Disables the specified Contributor Insights rules. 
When rules are disabled, they do not analyze log groups and do not incur costs. source#### fn enable_alarm_actions<'life0, 'async_trait>(    &'life0 self,     input: EnableAlarmActionsInput) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<EnableAlarmActionsError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Enables the actions for the specified alarms. source#### fn enable_insight_rules<'life0, 'async_trait>(    &'life0 self,     input: EnableInsightRulesInput) -> Pin<Box<dyn Future<Output = Result<EnableInsightRulesOutput, RusotoError<EnableInsightRulesError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Enables the specified Contributor Insights rules. When rules are enabled, they immediately begin analyzing log data. source#### fn get_dashboard<'life0, 'async_trait>(    &'life0 self,     input: GetDashboardInput) -> Pin<Box<dyn Future<Output = Result<GetDashboardOutput, RusotoError<GetDashboardError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Displays the details of the dashboard that you specify. To copy an existing dashboard, use `GetDashboard`, and then use the data returned within `DashboardBody` as the template for the new dashboard when you call `PutDashboard` to create the copy. source#### fn get_insight_rule_report<'life0, 'async_trait>(    &'life0 self,     input: GetInsightRuleReportInput) -> Pin<Box<dyn Future<Output = Result<GetInsightRuleReportOutput, RusotoError<GetInsightRuleReportError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, This operation returns the time series data collected by a Contributor Insights rule. The data includes the identity and number of contributors to the log group. You can also optionally return one or more statistics about each data point in the time series. 
These statistics can include the following: * `UniqueContributors` -- the number of unique contributors for each data point. * `MaxContributorValue` -- the value of the top contributor for each data point. The identity of the contributor might change for each data point in the graph. If this rule aggregates by COUNT, the top contributor for each data point is the contributor with the most occurrences in that period. If the rule aggregates by SUM, the top contributor is the contributor with the highest sum in the log field specified by the rule's `Value`, during that period. * `SampleCount` -- the number of data points matched by the rule. * `Sum` -- the sum of the values from all contributors during the time period represented by that data point. * `Minimum` -- the minimum value from a single observation during the time period represented by that data point. * `Maximum` -- the maximum value from a single observation during the time period represented by that data point. * `Average` -- the average value from all contributors during the time period represented by that data point. source#### fn get_metric_data<'life0, 'async_trait>(    &'life0 self,     input: GetMetricDataInput) -> Pin<Box<dyn Future<Output = Result<GetMetricDataOutput, RusotoError<GetMetricDataError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, You can use the `GetMetricData` API to retrieve as many as 500 different metrics in a single request, with a total of as many as 100,800 data points. You can also optionally perform math expressions on the values of the returned statistics, to create new time series that represent new insights into your data. For example, using Lambda metrics, you could divide the Errors metric by the Invocations metric to get an error rate time series. For more information about metric math expressions, see Metric Math Syntax and Functions in the *Amazon CloudWatch User Guide*. 
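The Errors-divided-by-Invocations example above can be mirrored locally: a metric math division combines two aligned time series point by point. A plain-Rust sketch (the series values are made up, and treating a zero divisor as a missing data point is an assumption of this sketch, not a statement about CloudWatch's exact behavior):

```rust
/// Point-by-point Errors / Invocations, mirroring a metric math
/// expression such as `errors / invocations`. A data point with a
/// zero divisor is modeled here as None (a gap in the series).
fn error_rate(errors: &[f64], invocations: &[f64]) -> Vec<Option<f64>> {
    errors
        .iter()
        .zip(invocations)
        .map(|(&e, &i)| if i == 0.0 { None } else { Some(e / i) })
        .collect()
}

fn main() {
    let rates = error_rate(&[1.0, 0.0, 5.0], &[100.0, 0.0, 50.0]);
    assert_eq!(rates, vec![Some(0.01), None, Some(0.1)]);
    println!("{:?}", rates);
}
```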
Calls to the `GetMetricData` API have a different pricing structure than calls to `GetMetricStatistics`. For more information about pricing, see Amazon CloudWatch Pricing. Amazon CloudWatch retains metric data as follows: * Data points with a period of less than 60 seconds are available for 3 hours. These data points are high-resolution metrics and are available only for custom metrics that have been defined with a `StorageResolution` of 1. * Data points with a period of 60 seconds (1-minute) are available for 15 days. * Data points with a period of 300 seconds (5-minute) are available for 63 days. * Data points with a period of 3600 seconds (1 hour) are available for 455 days (15 months). Data points that are initially published with a shorter period are aggregated together for long-term storage. For example, if you collect data using a period of 1 minute, the data remains available for 15 days with 1-minute resolution. After 15 days, this data is still available, but is aggregated and retrievable only with a resolution of 5 minutes. After 63 days, the data is further aggregated and is available with a resolution of 1 hour. If you omit `Unit` in your request, all data that was collected with any unit is returned, along with the corresponding units that were specified when the data was reported to CloudWatch. If you specify a unit, the operation returns only data that was collected with that unit specified. If you specify a unit that does not match the data collected, the results of the operation are null. CloudWatch does not perform unit conversions. source#### fn get_metric_statistics<'life0, 'async_trait>(    &'life0 self,     input: GetMetricStatisticsInput) -> Pin<Box<dyn Future<Output = Result<GetMetricStatisticsOutput, RusotoError<GetMetricStatisticsError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Gets statistics for the specified metric. The maximum number of data points returned from a single call is 1,440. 
If you request more than 1,440 data points, CloudWatch returns an error. To reduce the number of data points, you can narrow the specified time range and make multiple requests across adjacent time ranges, or you can increase the specified period. Data points are not returned in chronological order. CloudWatch aggregates data points based on the length of the period that you specify. For example, if you request statistics with a one-hour period, CloudWatch aggregates all data points with time stamps that fall within each one-hour period. Therefore, the number of values aggregated by CloudWatch is larger than the number of data points returned. CloudWatch needs raw data points to calculate percentile statistics. If you publish data using a statistic set instead, you can only retrieve percentile statistics for this data if one of the following conditions is true: * The SampleCount value of the statistic set is 1. * The Min and the Max values of the statistic set are equal. Percentile statistics are not available for metrics when any of the metric values are negative numbers. Amazon CloudWatch retains metric data as follows: * Data points with a period of less than 60 seconds are available for 3 hours. These data points are high-resolution metrics and are available only for custom metrics that have been defined with a `StorageResolution` of 1. * Data points with a period of 60 seconds (1-minute) are available for 15 days. * Data points with a period of 300 seconds (5-minute) are available for 63 days. * Data points with a period of 3600 seconds (1 hour) are available for 455 days (15 months). Data points that are initially published with a shorter period are aggregated together for long-term storage. For example, if you collect data using a period of 1 minute, the data remains available for 15 days with 1-minute resolution. After 15 days, this data is still available, but is aggregated and retrievable only with a resolution of 5 minutes. 
After 63 days, the data is further aggregated and is available with a resolution of 1 hour. CloudWatch started retaining 5-minute and 1-hour metric data as of July 9, 2016. For information about metrics and dimensions supported by AWS services, see the Amazon CloudWatch Metrics and Dimensions Reference in the *Amazon CloudWatch User Guide*. source#### fn get_metric_stream<'life0, 'async_trait>(    &'life0 self,     input: GetMetricStreamInput) -> Pin<Box<dyn Future<Output = Result<GetMetricStreamOutput, RusotoError<GetMetricStreamError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Returns information about the metric stream that you specify. source#### fn get_metric_widget_image<'life0, 'async_trait>(    &'life0 self,     input: GetMetricWidgetImageInput) -> Pin<Box<dyn Future<Output = Result<GetMetricWidgetImageOutput, RusotoError<GetMetricWidgetImageError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, You can use the `GetMetricWidgetImage` API to retrieve a snapshot graph of one or more Amazon CloudWatch metrics as a bitmap image. You can then embed this image into your services and products, such as wiki pages, reports, and documents. You could also retrieve images regularly, such as every minute, and create your own custom live dashboard. The graph you retrieve can include all CloudWatch metric graph features, including metric math and horizontal and vertical annotations. There is a limit of 20 transactions per second for this API. Each `GetMetricWidgetImage` action has the following limits: * As many as 100 metrics in the graph. * Up to 100 KB uncompressed payload. source#### fn list_dashboards<'life0, 'async_trait>(    &'life0 self,     input: ListDashboardsInput) -> Pin<Box<dyn Future<Output = Result<ListDashboardsOutput, RusotoError<ListDashboardsError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Returns a list of the dashboards for your account. 
If you include `DashboardNamePrefix`, only those dashboards with names starting with the prefix are listed. Otherwise, all dashboards in your account are listed. `ListDashboards` returns up to 1000 results on one page. If there are more than 1000 dashboards, you can call `ListDashboards` again and include the value you received for `NextToken` in the first call, to receive the next 1000 results. source#### fn list_metric_streams<'life0, 'async_trait>(    &'life0 self,     input: ListMetricStreamsInput) -> Pin<Box<dyn Future<Output = Result<ListMetricStreamsOutput, RusotoError<ListMetricStreamsError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Returns a list of metric streams in this account. source#### fn list_metrics<'life0, 'async_trait>(    &'life0 self,     input: ListMetricsInput) -> Pin<Box<dyn Future<Output = Result<ListMetricsOutput, RusotoError<ListMetricsError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, List the specified metrics. You can use the returned metrics with GetMetricData or GetMetricStatistics to obtain statistical data. Up to 500 results are returned for any one call. To retrieve additional results, use the returned token with subsequent calls. After you create a metric, allow up to 15 minutes before the metric appears. You can see statistics about the metric sooner by using GetMetricData or GetMetricStatistics. `ListMetrics` doesn't return information about metrics if those metrics haven't reported data in the past two weeks. To retrieve those metrics, use GetMetricData or GetMetricStatistics. source#### fn list_tags_for_resource<'life0, 'async_trait>(    &'life0 self,     input: ListTagsForResourceInput) -> Pin<Box<dyn Future<Output = Result<ListTagsForResourceOutput, RusotoError<ListTagsForResourceError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Displays the tags associated with a CloudWatch resource. 
Currently, alarms and Contributor Insights rules support tagging. source#### fn put_anomaly_detector<'life0, 'async_trait>(    &'life0 self,     input: PutAnomalyDetectorInput) -> Pin<Box<dyn Future<Output = Result<PutAnomalyDetectorOutput, RusotoError<PutAnomalyDetectorError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Creates an anomaly detection model for a CloudWatch metric. You can use the model to display a band of expected normal values when the metric is graphed. For more information, see CloudWatch Anomaly Detection. source#### fn put_composite_alarm<'life0, 'async_trait>(    &'life0 self,     input: PutCompositeAlarmInput) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<PutCompositeAlarmError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Creates or updates a *composite alarm*. When you create a composite alarm, you specify a rule expression for the alarm that takes into account the alarm states of other alarms that you have created. The composite alarm goes into ALARM state only if all conditions of the rule are met. The alarms specified in a composite alarm's rule expression can include metric alarms and other composite alarms. Using composite alarms can reduce alarm noise. You can create multiple metric alarms, and also create a composite alarm and set up alerts only for the composite alarm. For example, you could create a composite alarm that goes into ALARM state only when more than one of the underlying metric alarms are in ALARM state. Currently, the only alarm actions that can be taken by composite alarms are notifying SNS topics. It is possible to create a loop or cycle of composite alarms, where composite alarm A depends on composite alarm B, and composite alarm B also depends on composite alarm A. 
In this scenario, you can't delete any composite alarm that is part of the cycle because there is always still a composite alarm that depends on that alarm that you want to delete. To get out of such a situation, you must break the cycle by changing the rule of one of the composite alarms in the cycle to remove a dependency that creates the cycle. The simplest change to make to break a cycle is to change the `AlarmRule` of one of the alarms to `False`. Additionally, the evaluation of composite alarms stops if CloudWatch detects a cycle in the evaluation path. When this operation creates an alarm, the alarm state is immediately set to `INSUFFICIENT_DATA`. The alarm is then evaluated and its state is set appropriately. Any actions associated with the new state are then executed. For a composite alarm, this initial time after creation is the only time that the alarm can be in `INSUFFICIENT_DATA` state. When you update an existing alarm, its state is left unchanged, but the update completely overwrites the previous configuration of the alarm. If you are an IAM user, you must have `iam:CreateServiceLinkedRole` to create a composite alarm that has Systems Manager OpsItem actions. source#### fn put_dashboard<'life0, 'async_trait>(    &'life0 self,     input: PutDashboardInput) -> Pin<Box<dyn Future<Output = Result<PutDashboardOutput, RusotoError<PutDashboardError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Creates a dashboard if it does not already exist, or updates an existing dashboard. If you update a dashboard, the entire contents are replaced with what you specify here. All dashboards in your account are global, not region-specific. A simple way to create a dashboard using `PutDashboard` is to copy an existing dashboard. To copy an existing dashboard using the console, you can load the dashboard and then use the View/edit source command in the Actions menu to display the JSON block for that dashboard. 
Another way to copy a dashboard is to use `GetDashboard`, and then use the data returned within `DashboardBody` as the template for the new dashboard when you call `PutDashboard`. When you create a dashboard with `PutDashboard`, a good practice is to add a text widget at the top of the dashboard with a message that the dashboard was created by script and should not be changed in the console. This message could also point console users to the location of the `DashboardBody` script or the CloudFormation template used to create the dashboard. source#### fn put_insight_rule<'life0, 'async_trait>(    &'life0 self,     input: PutInsightRuleInput) -> Pin<Box<dyn Future<Output = Result<PutInsightRuleOutput, RusotoError<PutInsightRuleError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Creates a Contributor Insights rule. Rules evaluate log events in a CloudWatch Logs log group, enabling you to find contributor data for the log events in that log group. For more information, see Using Contributor Insights to Analyze High-Cardinality Data. If you create a rule, delete it, and then re-create it with the same name, historical data from the first time the rule was created might not be available. source#### fn put_metric_alarm<'life0, 'async_trait>(    &'life0 self,     input: PutMetricAlarmInput) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<PutMetricAlarmError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Creates or updates an alarm and associates it with the specified metric, metric math expression, or anomaly detection model. Alarms based on anomaly detection models cannot have Auto Scaling actions. When this operation creates an alarm, the alarm state is immediately set to `INSUFFICIENT_DATA`. The alarm is then evaluated and its state is set appropriately. Any actions associated with the new state are then executed. 
When you update an existing alarm, its state is left unchanged, but the update completely overwrites the previous configuration of the alarm. If you are an IAM user, you must have Amazon EC2 permissions for some alarm operations: * The `iam:CreateServiceLinkedRole` for all alarms with EC2 actions * The `iam:CreateServiceLinkedRole` to create an alarm with Systems Manager OpsItem actions. The first time you create an alarm in the AWS Management Console, the CLI, or by using the PutMetricAlarm API, CloudWatch creates the necessary service-linked role for you. The service-linked roles are called `AWSServiceRoleForCloudWatchEvents` and `AWSServiceRoleForCloudWatchAlarms_ActionSSM`. For more information, see AWS service-linked role. source#### fn put_metric_data<'life0, 'async_trait>(    &'life0 self,     input: PutMetricDataInput) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<PutMetricDataError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Publishes metric data points to Amazon CloudWatch. CloudWatch associates the data points with the specified metric. If the specified metric does not exist, CloudWatch creates the metric. When CloudWatch creates a metric, it can take up to fifteen minutes for the metric to appear in calls to ListMetrics. You can publish either individual data points in the `Value` field, or arrays of values and the number of times each value occurred during the period by using the `Values` and `Counts` fields in the `MetricDatum` structure. Using the `Values` and `Counts` method enables you to publish up to 150 values per metric with one `PutMetricData` request, and supports retrieving percentile statistics on this data. Each `PutMetricData` request is limited to 40 KB in size for HTTP POST requests. You can send a payload compressed by gzip. Each request is also limited to no more than 20 different metrics. 
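The request limits quoted above (at most 20 different metrics per request, and up to 150 values per metric via the `Values`/`Counts` method) can be pre-checked client-side before calling `put_metric_data`. A minimal sketch; the `Datum` struct is a hypothetical local stand-in, not a Rusoto type, and the payload-size limit (40 KB) is noted but not modeled:

```rust
use std::collections::HashSet;

/// Hypothetical local stand-in for one datum in a PutMetricData batch;
/// `values` models the `Values` array. Not a Rusoto type.
struct Datum {
    metric_name: String,
    values: Vec<f64>,
}

/// Checks a batch against the documented limits: no more than 20
/// different metrics, and up to 150 values per metric. (The 40 KB
/// HTTP POST payload limit would additionally apply.)
fn batch_within_limits(batch: &[Datum]) -> bool {
    let distinct: HashSet<&str> =
        batch.iter().map(|d| d.metric_name.as_str()).collect();
    distinct.len() <= 20 && batch.iter().all(|d| d.values.len() <= 150)
}

fn main() {
    let ok = vec![Datum { metric_name: "Latency".into(), values: vec![1.0; 150] }];
    assert!(batch_within_limits(&ok));

    let too_many_values =
        vec![Datum { metric_name: "Latency".into(), values: vec![1.0; 151] }];
    assert!(!batch_within_limits(&too_many_values));
    println!("limit checks passed");
}
```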
Although the `Value` parameter accepts numbers of type `Double`, CloudWatch rejects values that are either too small or too large. Values must be in the range of -2^360 to 2^360. In addition, special values (for example, NaN, +Infinity, -Infinity) are not supported. You can use up to 10 dimensions per metric to further clarify what data the metric collects. Each dimension consists of a Name and Value pair. For more information about specifying dimensions, see Publishing Metrics in the *Amazon CloudWatch User Guide*. You specify the time stamp to be associated with each data point. You can specify time stamps that are as much as two weeks before the current date, and as much as 2 hours after the current day and time. Data points with time stamps from 24 hours ago or longer can take at least 48 hours to become available for GetMetricData or GetMetricStatistics from the time they are submitted. Data points with time stamps between 3 and 24 hours ago can take as much as 2 hours to become available for GetMetricData or GetMetricStatistics. CloudWatch needs raw data points to calculate percentile statistics. If you publish data using a statistic set instead, you can only retrieve percentile statistics for this data if one of the following conditions is true: * The `SampleCount` value of the statistic set is 1 and `Min`, `Max`, and `Sum` are all equal. * The `Min` and `Max` are equal, and `Sum` is equal to `Min` multiplied by `SampleCount`. source#### fn put_metric_stream<'life0, 'async_trait>(    &'life0 self,     input: PutMetricStreamInput) -> Pin<Box<dyn Future<Output = Result<PutMetricStreamOutput, RusotoError<PutMetricStreamError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Creates or updates a metric stream. Metric streams can automatically stream CloudWatch metrics to AWS destinations including Amazon S3 and to many third-party solutions. For more information, see Using Metric Streams.
To create a metric stream, you must be logged on to an account that has the `iam:PassRole` permission and either the `CloudWatchFullAccess` policy or the `cloudwatch:PutMetricStream` permission. When you create or update a metric stream, you choose one of the following: * Stream metrics from all metric namespaces in the account. * Stream metrics from all metric namespaces in the account, except for the namespaces that you list in `ExcludeFilters`. * Stream metrics from only the metric namespaces that you list in `IncludeFilters`. When you use `PutMetricStream` to create a new metric stream, the stream is created in the `running` state. If you use it to update an existing stream, the state of the stream is not changed. source#### fn set_alarm_state<'life0, 'async_trait>(    &'life0 self,     input: SetAlarmStateInput) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<SetAlarmStateError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Temporarily sets the state of an alarm for testing purposes. When the updated state differs from the previous value, the action configured for the appropriate state is invoked. For example, if your alarm is configured to send an Amazon SNS message when an alarm is triggered, temporarily changing the alarm state to `ALARM` sends an SNS message. Metric alarms return to their actual state quickly, often within seconds. Because the metric alarm state change happens quickly, it is typically only visible in the alarm's **History** tab in the Amazon CloudWatch console or through DescribeAlarmHistory. If you use `SetAlarmState` on a composite alarm, the composite alarm is not guaranteed to return to its actual state. It returns to its actual state only once any of its children alarms change state. It is also reevaluated if you update its configuration.
If an alarm triggers EC2 Auto Scaling policies or application Auto Scaling policies, you must include information in the `StateReasonData` parameter to enable the policy to take the correct action. source#### fn start_metric_streams<'life0, 'async_trait>(    &'life0 self,     input: StartMetricStreamsInput) -> Pin<Box<dyn Future<Output = Result<StartMetricStreamsOutput, RusotoError<StartMetricStreamsError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Starts the streaming of metrics for one or more of your metric streams. source#### fn stop_metric_streams<'life0, 'async_trait>(    &'life0 self,     input: StopMetricStreamsInput) -> Pin<Box<dyn Future<Output = Result<StopMetricStreamsOutput, RusotoError<StopMetricStreamsError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Stops the streaming of metrics for one or more of your metric streams. source#### fn tag_resource<'life0, 'async_trait>(    &'life0 self,     input: TagResourceInput) -> Pin<Box<dyn Future<Output = Result<TagResourceOutput, RusotoError<TagResourceError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Assigns one or more tags (key-value pairs) to the specified CloudWatch resource. Currently, the only CloudWatch resources that can be tagged are alarms and Contributor Insights rules. Tags can help you organize and categorize your resources. You can also use them to scope user permissions by granting a user permission to access or change only resources with certain tag values. Tags don't have any semantic meaning to AWS and are interpreted strictly as strings of characters. You can use the `TagResource` action with an alarm that already has tags. If you specify a new tag key for the alarm, this tag is appended to the list of tags associated with the alarm. If you specify a tag key that is already associated with the alarm, the new tag value that you specify replaces the previous value for that tag. 
You can associate as many as 50 tags with a CloudWatch resource. source#### fn untag_resource<'life0, 'async_trait>(    &'life0 self,     input: UntagResourceInput) -> Pin<Box<dyn Future<Output = Result<UntagResourceOutput, RusotoError<UntagResourceError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Removes one or more tags from the specified resource. Implementors --- source### impl CloudWatch for CloudWatchClient Struct rusoto_cloudwatch::AlarmHistoryItem === ``` pub struct AlarmHistoryItem { pub alarm_name: Option<String>, pub alarm_type: Option<String>, pub history_data: Option<String>, pub history_item_type: Option<String>, pub history_summary: Option<String>, pub timestamp: Option<String>, } ``` Represents the history of a specific alarm. Fields --- `alarm_name: Option<String>`The descriptive name for the alarm. `alarm_type: Option<String>`The type of alarm, either metric alarm or composite alarm. `history_data: Option<String>`Data about the alarm, in JSON format. `history_item_type: Option<String>`The type of alarm history item. `history_summary: Option<String>`A summary of the alarm history, in text format. `timestamp: Option<String>`The time stamp for the alarm history item. Trait Implementations --- source### impl Clone for AlarmHistoryItem source#### fn clone(&self) -> AlarmHistoryItem Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for AlarmHistoryItem source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for AlarmHistoryItem source#### fn default() -> AlarmHistoryItem Returns the “default value” for a type. Read more source### impl PartialEq<AlarmHistoryItem> for AlarmHistoryItem source#### fn eq(&self, other: &AlarmHistoryItem) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. 
Read more source#### fn ne(&self, other: &AlarmHistoryItem) -> bool This method tests for `!=`. source### impl StructuralPartialEq for AlarmHistoryItem Auto Trait Implementations --- ### impl RefUnwindSafe for AlarmHistoryItem ### impl Send for AlarmHistoryItem ### impl Sync for AlarmHistoryItem ### impl Unpin for AlarmHistoryItem ### impl UnwindSafe for AlarmHistoryItem Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<Self> Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<Self> Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mut T) 🔬 This is a nightly-only experimental API. 
(`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<Self> Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more Struct rusoto_cloudwatch::AnomalyDetector === ``` pub struct AnomalyDetector { pub configuration: Option<AnomalyDetectorConfiguration>, pub dimensions: Option<Vec<Dimension>>, pub metric_name: Option<String>, pub namespace: Option<String>, pub stat: Option<String>, pub state_value: Option<String>, } ``` An anomaly detection model associated with a particular CloudWatch metric and statistic. You can use the model to display a band of expected normal values when the metric is graphed. Fields --- `configuration: Option<AnomalyDetectorConfiguration>`The configuration specifies details about how the anomaly detection model is to be trained, including time ranges to exclude from use for training the model, and the time zone to use for the metric. `dimensions: Option<Vec<Dimension>>`The metric dimensions associated with the anomaly detection model. `metric_name: Option<String>`The name of the metric associated with the anomaly detection model. 
`namespace: Option<String>`The namespace of the metric associated with the anomaly detection model. `stat: Option<String>`The statistic associated with the anomaly detection model. `state_value: Option<String>`The current status of the anomaly detector's training. The possible values are `TRAINED | PENDING_TRAINING | TRAINED_INSUFFICIENT_DATA` Trait Implementations --- source### impl Clone for AnomalyDetector source#### fn clone(&self) -> AnomalyDetector Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for AnomalyDetector source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for AnomalyDetector source#### fn default() -> AnomalyDetector Returns the “default value” for a type. Read more source### impl PartialEq<AnomalyDetector> for AnomalyDetector source#### fn eq(&self, other: &AnomalyDetector) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &AnomalyDetector) -> bool This method tests for `!=`. source### impl StructuralPartialEq for AnomalyDetector Auto Trait Implementations --- ### impl RefUnwindSafe for AnomalyDetector ### impl Send for AnomalyDetector ### impl Sync for AnomalyDetector ### impl Unpin for AnomalyDetector ### impl UnwindSafe for AnomalyDetector Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. 
Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. 
Read more source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more Struct rusoto_cloudwatch::AnomalyDetectorConfiguration === ``` pub struct AnomalyDetectorConfiguration { pub excluded_time_ranges: Option<Vec<Range>>, pub metric_timezone: Option<String>, } ``` The configuration specifies details about how the anomaly detection model is to be trained, including time ranges to exclude from use for training the model and the time zone to use for the metric. Fields --- `excluded_time_ranges: Option<Vec<Range>>`An array of time ranges to exclude from use when the anomaly detection model is trained. Use this to make sure that events that could cause unusual values for the metric, such as deployments, aren't used when CloudWatch creates the model. `metric_timezone: Option<String>`The time zone to use for the metric. This is useful to enable the model to automatically account for daylight savings time changes if the metric is sensitive to such time changes. To specify a time zone, use the name of the time zone as specified in the standard tz database. For more information, see tz database. Trait Implementations --- source### impl Clone for AnomalyDetectorConfiguration source#### fn clone(&self) -> AnomalyDetectorConfiguration Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for AnomalyDetectorConfiguration source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for AnomalyDetectorConfiguration source#### fn default() -> AnomalyDetectorConfiguration Returns the “default value” for a type. 
Read more source### impl PartialEq<AnomalyDetectorConfiguration> for AnomalyDetectorConfiguration source#### fn eq(&self, other: &AnomalyDetectorConfiguration) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &AnomalyDetectorConfiguration) -> bool This method tests for `!=`. source### impl StructuralPartialEq for AnomalyDetectorConfiguration Auto Trait Implementations --- ### impl RefUnwindSafe for AnomalyDetectorConfiguration ### impl Send for AnomalyDetectorConfiguration ### impl Sync for AnomalyDetectorConfiguration ### impl Unpin for AnomalyDetectorConfiguration ### impl UnwindSafe for AnomalyDetectorConfiguration Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. 
source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. 
Read more Struct rusoto_cloudwatch::CompositeAlarm === ``` pub struct CompositeAlarm { pub actions_enabled: Option<bool>, pub alarm_actions: Option<Vec<String>>, pub alarm_arn: Option<String>, pub alarm_configuration_updated_timestamp: Option<String>, pub alarm_description: Option<String>, pub alarm_name: Option<String>, pub alarm_rule: Option<String>, pub insufficient_data_actions: Option<Vec<String>>, pub ok_actions: Option<Vec<String>>, pub state_reason: Option<String>, pub state_reason_data: Option<String>, pub state_updated_timestamp: Option<String>, pub state_value: Option<String>, } ``` The details about a composite alarm. Fields --- `actions_enabled: Option<bool>`Indicates whether actions should be executed during any changes to the alarm state. `alarm_actions: Option<Vec<String>>`The actions to execute when this alarm transitions to the ALARM state from any other state. Each action is specified as an Amazon Resource Name (ARN). `alarm_arn: Option<String>`The Amazon Resource Name (ARN) of the alarm. `alarm_configuration_updated_timestamp: Option<String>`The time stamp of the last update to the alarm configuration. `alarm_description: Option<String>`The description of the alarm. `alarm_name: Option<String>`The name of the alarm. `alarm_rule: Option<String>`The rule that this alarm uses to evaluate its alarm state. `insufficient_data_actions: Option<Vec<String>>`The actions to execute when this alarm transitions to the INSUFFICIENT_DATA state from any other state. Each action is specified as an Amazon Resource Name (ARN). `ok_actions: Option<Vec<String>>`The actions to execute when this alarm transitions to the OK state from any other state. Each action is specified as an Amazon Resource Name (ARN). `state_reason: Option<String>`An explanation for the alarm state, in text format. `state_reason_data: Option<String>`An explanation for the alarm state, in JSON format. `state_updated_timestamp: Option<String>`The time stamp of the last update to the alarm state. 
`state_value: Option<String>`The state value for the alarm. Trait Implementations --- source### impl Clone for CompositeAlarm source#### fn clone(&self) -> CompositeAlarm Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for CompositeAlarm source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for CompositeAlarm source#### fn default() -> CompositeAlarm Returns the “default value” for a type. Read more source### impl PartialEq<CompositeAlarm> for CompositeAlarm source#### fn eq(&self, other: &CompositeAlarm) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &CompositeAlarm) -> bool This method tests for `!=`. source### impl StructuralPartialEq for CompositeAlarm Auto Trait Implementations --- ### impl RefUnwindSafe for CompositeAlarm ### impl Send for CompositeAlarm ### impl Sync for CompositeAlarm ### impl Unpin for CompositeAlarm ### impl UnwindSafe for CompositeAlarm Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. 
Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. 
Read more Struct rusoto_cloudwatch::DashboardEntry === ``` pub struct DashboardEntry { pub dashboard_arn: Option<String>, pub dashboard_name: Option<String>, pub last_modified: Option<String>, pub size: Option<i64>, } ``` Represents a specific dashboard. Fields --- `dashboard_arn: Option<String>`The Amazon Resource Name (ARN) of the dashboard. `dashboard_name: Option<String>`The name of the dashboard. `last_modified: Option<String>`The time stamp of when the dashboard was last modified, either by an API call or through the console. This number is expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC. `size: Option<i64>`The size of the dashboard, in bytes. Trait Implementations --- source### impl Clone for DashboardEntry source#### fn clone(&self) -> DashboardEntry Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for DashboardEntry source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for DashboardEntry source#### fn default() -> DashboardEntry Returns the “default value” for a type. Read more source### impl PartialEq<DashboardEntry> for DashboardEntry source#### fn eq(&self, other: &DashboardEntry) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &DashboardEntry) -> bool This method tests for `!=`. source### impl StructuralPartialEq for DashboardEntry Auto Trait Implementations --- ### impl RefUnwindSafe for DashboardEntry ### impl Send for DashboardEntry ### impl Sync for DashboardEntry ### impl Unpin for DashboardEntry ### impl UnwindSafe for DashboardEntry Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. 
Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. 
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more Struct rusoto_cloudwatch::DashboardValidationMessage === ``` pub struct DashboardValidationMessage { pub data_path: Option<String>, pub message: Option<String>, } ``` An error or warning for the operation. Fields --- `data_path: Option<String>`The data path related to the message. `message: Option<String>`A message describing the error or warning. Trait Implementations --- source### impl Clone for DashboardValidationMessage source#### fn clone(&self) -> DashboardValidationMessage Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for DashboardValidationMessage source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for DashboardValidationMessage source#### fn default() -> DashboardValidationMessage Returns the “default value” for a type. Read more source### impl PartialEq<DashboardValidationMessage> for DashboardValidationMessage source#### fn eq(&self, other: &DashboardValidationMessage) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &DashboardValidationMessage) -> bool This method tests for `!=`. 
### impl StructuralPartialEq for DashboardValidationMessage

Auto Trait Implementations
---

### impl RefUnwindSafe for DashboardValidationMessage
### impl Send for DashboardValidationMessage
### impl Sync for DashboardValidationMessage
### impl Unpin for DashboardValidationMessage
### impl UnwindSafe for DashboardValidationMessage

Blanket Implementations
---

### impl<T> Any for T where T: 'static + ?Sized

#### fn type_id(&self) -> TypeId

Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for T where T: ?Sized

#### fn borrow(&self) -> &T

Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for T where T: ?Sized

#### fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

### impl<T> From<T> for T

#### fn from(t: T) -> T

Returns the argument unchanged.

### impl<T> Instrument for T

#### fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.

#### fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current `Span`, returning an `Instrumented` wrapper.

### impl<T, U> Into<U> for T where U: From<T>

#### fn into(self) -> U

Calls `U::from(self)`. That is, the conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> Same<T> for T

type Output = T. Should always be `Self`.

### impl<T> ToOwned for T where T: Clone

type Owned = T. The resulting type after obtaining ownership.

#### fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

#### fn clone_into(&self, target: &mut T)

🔬 Nightly-only experimental API (`toowned_clone_into`). Uses borrowed data to replace owned data, usually by cloning.

### impl<T, U> TryFrom<U> for T where U: Into<T>

type Error = Infallible. The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>

type Error = <U as TryFrom<T>>::Error. The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

### impl<T> WithSubscriber for T

#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>

Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.

#### fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.

Struct rusoto_cloudwatch::Datapoint
===

```
pub struct Datapoint {
    pub average: Option<f64>,
    pub extended_statistics: Option<HashMap<String, f64>>,
    pub maximum: Option<f64>,
    pub minimum: Option<f64>,
    pub sample_count: Option<f64>,
    pub sum: Option<f64>,
    pub timestamp: Option<String>,
    pub unit: Option<String>,
}
```

Encapsulates the statistical data that CloudWatch computes from metric data.

Fields
---

`average: Option<f64>`: The average of the metric values that correspond to the data point.

`extended_statistics: Option<HashMap<String, f64>>`: The percentile statistic for the data point.

`maximum: Option<f64>`: The maximum metric value for the data point.

`minimum: Option<f64>`: The minimum metric value for the data point.

`sample_count: Option<f64>`: The number of metric values that contributed to the aggregate value of this data point.

`sum: Option<f64>`: The sum of the metric values for the data point.

`timestamp: Option<String>`: The time stamp used for the data point.

`unit: Option<String>`: The standard unit for the data point.
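The `average`, `minimum`, `maximum`, `sample_count`, and `sum` fields hold aggregates that CloudWatch computes over the raw metric values. A toy stdlib-only illustration of how those aggregates relate to each other (illustrative values, not an API call):

```rust
fn main() {
    // Pretend these are the raw metric values behind one data point.
    let values = [2.0_f64, 4.0, 6.0, 8.0];

    // The aggregates a `Datapoint` would carry:
    let sum: f64 = values.iter().sum();
    let sample_count = values.len() as f64;
    let average = sum / sample_count;
    let minimum = values.iter().cloned().fold(f64::INFINITY, f64::min);
    let maximum = values.iter().cloned().fold(f64::NEG_INFINITY, f64::max);

    assert_eq!(sum, 20.0);
    assert_eq!(sample_count, 4.0);
    assert_eq!(average, 5.0);
    assert_eq!(minimum, 2.0);
    assert_eq!(maximum, 8.0);
    println!("ok");
}
```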
Trait Implementations
---

`Datapoint` implements `Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`; the method semantics are the same as described for `DashboardValidationMessage` above.

Auto Trait Implementations
---

### impl RefUnwindSafe for Datapoint
### impl Send for Datapoint
### impl Sync for Datapoint
### impl Unpin for Datapoint
### impl UnwindSafe for Datapoint

Blanket Implementations
---

The standard blanket set listed under `DashboardValidationMessage` (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`, `Instrument`, `Same`, `ToOwned`, `WithSubscriber`) applies here as well.

Struct rusoto_cloudwatch::DeleteAlarmsInput
===

```
pub struct DeleteAlarmsInput {
    pub alarm_names: Vec<String>,
}
```

Fields
---

`alarm_names: Vec<String>`: The alarms to be deleted.

Trait Implementations
---

`DeleteAlarmsInput` implements `Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`, plus the auto traits (`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`) and the standard blanket set listed under `DashboardValidationMessage`.

Struct rusoto_cloudwatch::DeleteAnomalyDetectorInput
===

```
pub struct DeleteAnomalyDetectorInput {
    pub dimensions: Option<Vec<Dimension>>,
    pub metric_name: String,
    pub namespace: String,
    pub stat: String,
}
```

Fields
---

`dimensions: Option<Vec<Dimension>>`: The metric dimensions associated with the anomaly detection model to delete.
`metric_name: String`: The metric name associated with the anomaly detection model to delete.

`namespace: String`: The namespace associated with the anomaly detection model to delete.

`stat: String`: The statistic associated with the anomaly detection model to delete.

Trait Implementations
---

`DeleteAnomalyDetectorInput` implements `Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`, plus the auto traits (`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`) and the standard blanket set listed under `DashboardValidationMessage`.

Struct rusoto_cloudwatch::DeleteAnomalyDetectorOutput
===

```
pub struct DeleteAnomalyDetectorOutput {}
```

Trait Implementations
---

`DeleteAnomalyDetectorOutput` implements `Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`, plus the auto traits and the standard blanket set listed under `DashboardValidationMessage`.

Struct rusoto_cloudwatch::DeleteDashboardsInput
===

```
pub struct DeleteDashboardsInput {
    pub dashboard_names: Vec<String>,
}
```

Fields
---

`dashboard_names: Vec<String>`: The dashboards to be deleted. This parameter is required.

Trait Implementations
---

`DeleteDashboardsInput` implements `Clone`, `Debug`, `Default`, and `PartialEq`.
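Because these input types implement `Default`, a common construction pattern is to set the required fields and fill the rest with `..Default::default()`. A sketch using a hypothetical local mirror of `DeleteDashboardsInput` (the real type lives in the rusoto_cloudwatch crate and would be sent via a CloudWatch client):

```rust
// Hypothetical stand-in for rusoto_cloudwatch::DeleteDashboardsInput,
// used here only to show the construction pattern.
#[derive(Debug, Default, PartialEq)]
struct DeleteDashboardsInputLike {
    dashboard_names: Vec<String>,
}

fn main() {
    let input = DeleteDashboardsInputLike {
        dashboard_names: vec!["prod-overview".to_string()],
        ..Default::default()
    };
    assert_eq!(input.dashboard_names, vec!["prod-overview".to_string()]);
    println!("ok");
}
```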
`DeleteDashboardsInput` also implements `StructuralPartialEq`, the auto traits (`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`), and the standard blanket set listed under `DashboardValidationMessage`.

Struct rusoto_cloudwatch::DeleteDashboardsOutput
===

```
pub struct DeleteDashboardsOutput {}
```

Trait Implementations
---

`DeleteDashboardsOutput` implements `Clone`, `Debug`, `Default`, and `PartialEq`.
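The blanket `From`/`Into`/`TryFrom` implementations that appear under every type on this page come from the standard library, not from rusoto. A small sketch with a toy type showing that a single `From` impl yields `Into` for free, and that the blanket `TryFrom` uses `Error = Infallible`:

```rust
use std::convert::TryFrom;

// Toy type standing in for any of the structs on this page.
#[derive(Debug, PartialEq)]
struct Meters(f64);

impl From<f64> for Meters {
    fn from(v: f64) -> Meters {
        Meters(v)
    }
}

fn main() {
    // `Into<Meters>` comes from the blanket
    // `impl<T, U> Into<U> for T where U: From<T>`.
    let m: Meters = 3.0_f64.into();
    assert_eq!(m, Meters(3.0));

    // `TryFrom` is likewise blanket-implemented with `Error = Infallible`,
    // so the conversion can never fail.
    let m2 = Meters::try_from(2.0_f64).unwrap();
    assert_eq!(m2, Meters(2.0));
    println!("ok");
}
```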
`DeleteDashboardsOutput` also implements `StructuralPartialEq`, the auto traits (`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`), and the standard blanket set listed under `DashboardValidationMessage`.

Struct rusoto_cloudwatch::DeleteInsightRulesInput
===

```
pub struct DeleteInsightRulesInput {
    pub rule_names: Vec<String>,
}
```

Fields
---

`rule_names: Vec<String>`: An array of the rule names to delete. If you need to find out the names of your rules, use DescribeInsightRules.

Trait Implementations
---

`DeleteInsightRulesInput` implements `Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`, plus the auto traits and the standard blanket set listed under `DashboardValidationMessage`.
Struct rusoto_cloudwatch::DeleteInsightRulesOutput
===

```
pub struct DeleteInsightRulesOutput {
    pub failures: Option<Vec<PartialFailure>>,
}
```

Fields
---

`failures: Option<Vec<PartialFailure>>`: An array listing the rules that could not be deleted. You cannot delete built-in rules.

Trait Implementations
---

`DeleteInsightRulesOutput` implements `Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`, plus the auto traits and the standard blanket set listed under `DashboardValidationMessage`.
### impl<T, U> Into<U> for T where U: From<T>

#### fn into(self) -> U

Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> Same<T> for T

#### type Output = T

Should always be `Self`.

### impl<T> ToOwned for T where T: Clone

#### type Owned = T

The resulting type after obtaining ownership.

#### fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

#### fn clone_into(&self, target: &mut T)

🔬 This is a nightly-only experimental API (`toowned_clone_into`). Uses borrowed data to replace owned data, usually by cloning.

### impl<T, U> TryFrom<U> for T where U: Into<T>

#### type Error = Infallible

The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>

#### type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

### impl<T> WithSubscriber for T

#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>

Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.

#### fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.

Struct rusoto_cloudwatch::DeleteMetricStreamInput
===

```
pub struct DeleteMetricStreamInput {
    pub name: String,
}
```

Fields
---

`name: String`: The name of the metric stream to delete.

Trait Implementations
---

### impl Clone for DeleteMetricStreamInput

#### fn clone(&self) -> DeleteMetricStreamInput

Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

### impl Debug for DeleteMetricStreamInput

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Default for DeleteMetricStreamInput

#### fn default() -> DeleteMetricStreamInput

Returns the “default value” for a type.

### impl PartialEq<DeleteMetricStreamInput> for DeleteMetricStreamInput

#### fn eq(&self, other: &DeleteMetricStreamInput) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &DeleteMetricStreamInput) -> bool

This method tests for `!=`.

### impl StructuralPartialEq for DeleteMetricStreamInput

Auto Trait Implementations
---

### impl RefUnwindSafe for DeleteMetricStreamInput
### impl Send for DeleteMetricStreamInput
### impl Sync for DeleteMetricStreamInput
### impl Unpin for DeleteMetricStreamInput
### impl UnwindSafe for DeleteMetricStreamInput

Blanket Implementations
---

The standard blanket impls apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`. They are identical for every type in this crate; see the full listing under `DeleteInsightRulesOutput`.
Struct rusoto_cloudwatch::DeleteMetricStreamOutput
===

```
pub struct DeleteMetricStreamOutput {}
```

Trait Implementations
---

### impl Clone for DeleteMetricStreamOutput

#### fn clone(&self) -> DeleteMetricStreamOutput

Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

### impl Debug for DeleteMetricStreamOutput

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Default for DeleteMetricStreamOutput

#### fn default() -> DeleteMetricStreamOutput

Returns the “default value” for a type.

### impl PartialEq<DeleteMetricStreamOutput> for DeleteMetricStreamOutput

#### fn eq(&self, other: &DeleteMetricStreamOutput) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &Rhs) -> bool

This method tests for `!=`.

### impl StructuralPartialEq for DeleteMetricStreamOutput

Auto Trait Implementations
---

### impl RefUnwindSafe for DeleteMetricStreamOutput
### impl Send for DeleteMetricStreamOutput
### impl Sync for DeleteMetricStreamOutput
### impl Unpin for DeleteMetricStreamOutput
### impl UnwindSafe for DeleteMetricStreamOutput

Blanket Implementations
---

The standard blanket impls apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`. They are identical for every type in this crate; see the full listing under `DeleteInsightRulesOutput`.
Struct rusoto_cloudwatch::DescribeAlarmHistoryInput
===

```
pub struct DescribeAlarmHistoryInput {
    pub alarm_name: Option<String>,
    pub alarm_types: Option<Vec<String>>,
    pub end_date: Option<String>,
    pub history_item_type: Option<String>,
    pub max_records: Option<i64>,
    pub next_token: Option<String>,
    pub scan_by: Option<String>,
    pub start_date: Option<String>,
}
```

Fields
---

`alarm_name: Option<String>`: The name of the alarm.

`alarm_types: Option<Vec<String>>`: Use this parameter to specify whether you want the operation to return metric alarms or composite alarms. If you omit this parameter, only metric alarms are returned.

`end_date: Option<String>`: The ending date to retrieve alarm history.

`history_item_type: Option<String>`: The type of alarm histories to retrieve.

`max_records: Option<i64>`: The maximum number of alarm history records to retrieve.

`next_token: Option<String>`: The token returned by a previous call to indicate that there is more data available.

`scan_by: Option<String>`: Specifies whether to return the newest or oldest alarm history first. Specify `TimestampDescending` to have the newest event history returned first, and specify `TimestampAscending` to have the oldest history returned first.

`start_date: Option<String>`: The starting date to retrieve alarm history.

Trait Implementations
---

### impl Clone for DescribeAlarmHistoryInput

#### fn clone(&self) -> DescribeAlarmHistoryInput

Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

### impl Debug for DescribeAlarmHistoryInput

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Default for DescribeAlarmHistoryInput

#### fn default() -> DescribeAlarmHistoryInput

Returns the “default value” for a type.
### impl PartialEq<DescribeAlarmHistoryInput> for DescribeAlarmHistoryInput

#### fn eq(&self, other: &DescribeAlarmHistoryInput) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &DescribeAlarmHistoryInput) -> bool

This method tests for `!=`.

### impl StructuralPartialEq for DescribeAlarmHistoryInput

Auto Trait Implementations
---

### impl RefUnwindSafe for DescribeAlarmHistoryInput
### impl Send for DescribeAlarmHistoryInput
### impl Sync for DescribeAlarmHistoryInput
### impl Unpin for DescribeAlarmHistoryInput
### impl UnwindSafe for DescribeAlarmHistoryInput

Blanket Implementations
---

The standard blanket impls apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`. They are identical for every type in this crate; see the full listing under `DeleteInsightRulesOutput`.

Struct rusoto_cloudwatch::DescribeAlarmHistoryOutput
===

```
pub struct DescribeAlarmHistoryOutput {
    pub alarm_history_items: Option<Vec<AlarmHistoryItem>>,
    pub next_token: Option<String>,
}
```

Fields
---

`alarm_history_items: Option<Vec<AlarmHistoryItem>>`: The alarm histories, in JSON format.

`next_token: Option<String>`: The token that marks the start of the next batch of returned results.

Trait Implementations
---

### impl Clone for DescribeAlarmHistoryOutput

#### fn clone(&self) -> DescribeAlarmHistoryOutput

Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

### impl Debug for DescribeAlarmHistoryOutput

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Default for DescribeAlarmHistoryOutput

#### fn default() -> DescribeAlarmHistoryOutput

Returns the “default value” for a type.

### impl PartialEq<DescribeAlarmHistoryOutput> for DescribeAlarmHistoryOutput

#### fn eq(&self, other: &DescribeAlarmHistoryOutput) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &DescribeAlarmHistoryOutput) -> bool

This method tests for `!=`.

### impl StructuralPartialEq for DescribeAlarmHistoryOutput

Auto Trait Implementations
---

### impl RefUnwindSafe for DescribeAlarmHistoryOutput
### impl Send for DescribeAlarmHistoryOutput
### impl Sync for DescribeAlarmHistoryOutput
### impl Unpin for DescribeAlarmHistoryOutput
### impl UnwindSafe for DescribeAlarmHistoryOutput

Blanket Implementations
---

The standard blanket impls apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`. They are identical for every type in this crate; see the full listing under `DeleteInsightRulesOutput`.
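The `next_token` fields on `DescribeAlarmHistoryInput` and `DescribeAlarmHistoryOutput` implement the usual token-based pagination loop: pass the token from the previous response until the service stops returning one. A minimal sketch of that loop, using local stand-in structs that mirror the shapes above (`fetch_page` is a hypothetical stand-in for one service call, not part of the crate):

```rust
// Stand-ins mirroring the rusoto_cloudwatch response shapes (not the real types).
#[derive(Clone, Debug, Default)]
struct AlarmHistoryItem {
    alarm_name: Option<String>,
}

#[derive(Clone, Debug, Default)]
struct DescribeAlarmHistoryOutput {
    alarm_history_items: Option<Vec<AlarmHistoryItem>>,
    next_token: Option<String>,
}

// Simulated service call: returns one page, plus a token while more remain.
fn fetch_page(token: Option<&str>) -> DescribeAlarmHistoryOutput {
    match token {
        None => DescribeAlarmHistoryOutput {
            alarm_history_items: Some(vec![AlarmHistoryItem {
                alarm_name: Some("high-cpu".to_string()),
            }]),
            next_token: Some("page-2".to_string()),
        },
        Some(_) => DescribeAlarmHistoryOutput {
            alarm_history_items: Some(vec![AlarmHistoryItem {
                alarm_name: Some("low-disk".to_string()),
            }]),
            next_token: None,
        },
    }
}

// Follow next_token until the service stops returning one.
fn collect_all_history() -> Vec<AlarmHistoryItem> {
    let mut items = Vec::new();
    let mut token: Option<String> = None;
    loop {
        let page = fetch_page(token.as_deref());
        items.extend(page.alarm_history_items.unwrap_or_default());
        match page.next_token {
            Some(t) => token = Some(t),
            None => break,
        }
    }
    items
}

fn main() {
    let items = collect_all_history();
    assert_eq!(items.len(), 2);
    println!("collected {} history items", items.len());
}
```

The same pattern applies to every paginated operation in this module that carries a `next_token` pair.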
Struct rusoto_cloudwatch::DescribeAlarmsForMetricInput
===

```
pub struct DescribeAlarmsForMetricInput {
    pub dimensions: Option<Vec<Dimension>>,
    pub extended_statistic: Option<String>,
    pub metric_name: String,
    pub namespace: String,
    pub period: Option<i64>,
    pub statistic: Option<String>,
    pub unit: Option<String>,
}
```

Fields
---

`dimensions: Option<Vec<Dimension>>`: The dimensions associated with the metric. If the metric has any associated dimensions, you must specify them in order for the call to succeed.

`extended_statistic: Option<String>`: The percentile statistic for the metric. Specify a value between p0.0 and p100.

`metric_name: String`: The name of the metric.

`namespace: String`: The namespace of the metric.

`period: Option<i64>`: The period, in seconds, over which the statistic is applied.

`statistic: Option<String>`: The statistic for the metric, other than percentiles. For percentile statistics, use `ExtendedStatistics`.

`unit: Option<String>`: The unit for the metric.

Trait Implementations
---

### impl Clone for DescribeAlarmsForMetricInput

#### fn clone(&self) -> DescribeAlarmsForMetricInput

Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

### impl Debug for DescribeAlarmsForMetricInput

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Default for DescribeAlarmsForMetricInput

#### fn default() -> DescribeAlarmsForMetricInput

Returns the “default value” for a type.

### impl PartialEq<DescribeAlarmsForMetricInput> for DescribeAlarmsForMetricInput

#### fn eq(&self, other: &DescribeAlarmsForMetricInput) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &DescribeAlarmsForMetricInput) -> bool

This method tests for `!=`.
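Because `metric_name` and `namespace` are plain `String`s while every other field is an `Option`, the idiomatic way to build this input is to set the required fields and default the rest with struct-update syntax (the `Default` impl listed above makes this possible). A sketch using a local stand-in struct that mirrors the field shapes, with illustrative values:

```rust
// Local stand-in mirroring DescribeAlarmsForMetricInput's field shapes
// (not the real rusoto_cloudwatch type).
#[derive(Clone, Debug, Default, PartialEq)]
struct DescribeAlarmsForMetricInput {
    metric_name: String,                // required
    namespace: String,                  // required
    statistic: Option<String>,          // non-percentile statistic, e.g. "Average"
    extended_statistic: Option<String>, // percentile, between p0.0 and p100
    period: Option<i64>,                // seconds
    unit: Option<String>,
}

fn main() {
    // Fill the required fields plus what you need; default everything else.
    let input = DescribeAlarmsForMetricInput {
        metric_name: "CPUUtilization".to_string(),
        namespace: "AWS/EC2".to_string(),
        statistic: Some("Average".to_string()),
        period: Some(300),
        ..Default::default()
    };
    assert_eq!(input.namespace, "AWS/EC2");
    assert!(input.extended_statistic.is_none());
}
```

Per the field docs, `statistic` and `extended_statistic` cover disjoint cases: use `extended_statistic` only for percentile statistics.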
### impl StructuralPartialEq for DescribeAlarmsForMetricInput

Auto Trait Implementations
---

### impl RefUnwindSafe for DescribeAlarmsForMetricInput
### impl Send for DescribeAlarmsForMetricInput
### impl Sync for DescribeAlarmsForMetricInput
### impl Unpin for DescribeAlarmsForMetricInput
### impl UnwindSafe for DescribeAlarmsForMetricInput

Blanket Implementations
---

The standard blanket impls apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`. They are identical for every type in this crate; see the full listing under `DeleteInsightRulesOutput`.

Struct rusoto_cloudwatch::DescribeAlarmsForMetricOutput
===

```
pub struct DescribeAlarmsForMetricOutput {
    pub metric_alarms: Option<Vec<MetricAlarm>>,
}
```

Fields
---

`metric_alarms: Option<Vec<MetricAlarm>>`: The information for each alarm with the specified metric.

Trait Implementations
---

### impl Clone for DescribeAlarmsForMetricOutput

#### fn clone(&self) -> DescribeAlarmsForMetricOutput

Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

### impl Debug for DescribeAlarmsForMetricOutput

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Default for DescribeAlarmsForMetricOutput

#### fn default() -> DescribeAlarmsForMetricOutput

Returns the “default value” for a type.
### impl PartialEq<DescribeAlarmsForMetricOutput> for DescribeAlarmsForMetricOutput

#### fn eq(&self, other: &DescribeAlarmsForMetricOutput) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &DescribeAlarmsForMetricOutput) -> bool

This method tests for `!=`.

### impl StructuralPartialEq for DescribeAlarmsForMetricOutput

Auto Trait Implementations
---

### impl RefUnwindSafe for DescribeAlarmsForMetricOutput
### impl Send for DescribeAlarmsForMetricOutput
### impl Sync for DescribeAlarmsForMetricOutput
### impl Unpin for DescribeAlarmsForMetricOutput
### impl UnwindSafe for DescribeAlarmsForMetricOutput

Blanket Implementations
---

The standard blanket impls apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`. They are identical for every type in this crate; see the full listing under `DeleteInsightRulesOutput`.

Struct rusoto_cloudwatch::DescribeAlarmsInput
===

```
pub struct DescribeAlarmsInput {
    pub action_prefix: Option<String>,
    pub alarm_name_prefix: Option<String>,
    pub alarm_names: Option<Vec<String>>,
    pub alarm_types: Option<Vec<String>>,
    pub children_of_alarm_name: Option<String>,
    pub max_records: Option<i64>,
    pub next_token: Option<String>,
    pub parents_of_alarm_name: Option<String>,
    pub state_value: Option<String>,
}
```

Fields
---

`action_prefix: Option<String>`: Use this parameter to filter the results of the operation to only those alarms that use a certain alarm action.
For example, you could specify the ARN of an SNS topic to find all alarms that send notifications to that topic.

`alarm_name_prefix: Option<String>`: An alarm name prefix. If you specify this parameter, you receive information about all alarms that have names that start with this prefix. If this parameter is specified, you cannot specify `AlarmNames`.

`alarm_names: Option<Vec<String>>`: The names of the alarms to retrieve information about.

`alarm_types: Option<Vec<String>>`: Use this parameter to specify whether you want the operation to return metric alarms or composite alarms. If you omit this parameter, only metric alarms are returned.

`children_of_alarm_name: Option<String>`: If you use this parameter and specify the name of a composite alarm, the operation returns information about the "children" alarms of the alarm you specify. These are the metric alarms and composite alarms referenced in the `AlarmRule` field of the composite alarm that you specify in `ChildrenOfAlarmName`. Information about the composite alarm that you name in `ChildrenOfAlarmName` is not returned. If you specify `ChildrenOfAlarmName`, you cannot specify any other parameters in the request except for `MaxRecords` and `NextToken`. If you do so, you receive a validation error. Only the `Alarm Name`, `ARN`, `StateValue` (OK/ALARM/INSUFFICIENT_DATA), and `StateUpdatedTimestamp` information are returned by this operation when you use this parameter. To get complete information about these alarms, perform another `DescribeAlarms` operation and specify the parent alarm names in the `AlarmNames` parameter.

`max_records: Option<i64>`: The maximum number of alarm descriptions to retrieve.

`next_token: Option<String>`: The token returned by a previous call to indicate that there is more data available.

`parents_of_alarm_name: Option<String>`: If you use this parameter and specify the name of a metric or composite alarm, the operation returns information about the "parent" alarms of the alarm you specify.
These are the composite alarms that have `AlarmRule` parameters that reference the alarm named in `ParentsOfAlarmName`. Information about the alarm that you specify in `ParentsOfAlarmName` is not returned. If you specify `ParentsOfAlarmName`, you cannot specify any other parameters in the request except for `MaxRecords` and `NextToken`. If you do so, you receive a validation error. Only the Alarm Name and ARN are returned by this operation when you use this parameter. To get complete information about these alarms, perform another `DescribeAlarms` operation and specify the parent alarm names in the `AlarmNames` parameter.

`state_value: Option<String>`: Specify this parameter to receive information only about alarms that are currently in the state that you specify.

Trait Implementations
---

### impl Clone for DescribeAlarmsInput

#### fn clone(&self) -> DescribeAlarmsInput

Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

### impl Debug for DescribeAlarmsInput

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Default for DescribeAlarmsInput

#### fn default() -> DescribeAlarmsInput

Returns the “default value” for a type.

### impl PartialEq<DescribeAlarmsInput> for DescribeAlarmsInput

#### fn eq(&self, other: &DescribeAlarmsInput) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &DescribeAlarmsInput) -> bool

This method tests for `!=`.
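The field docs above carry a mutual-exclusion rule: `AlarmNamePrefix` cannot be combined with `AlarmNames` (the service returns a validation error). A caller-side sketch of that check, using a local stand-in struct that mirrors a few of `DescribeAlarmsInput`'s fields (not the real type, and `validate` is a hypothetical helper, not part of the crate):

```rust
// Local stand-in mirroring a subset of DescribeAlarmsInput's fields.
#[derive(Clone, Debug, Default)]
struct DescribeAlarmsInput {
    alarm_name_prefix: Option<String>,
    alarm_names: Option<Vec<String>>,
    max_records: Option<i64>,
    state_value: Option<String>,
}

// Enforce the documented rule before sending the request.
fn validate(input: &DescribeAlarmsInput) -> Result<(), String> {
    if input.alarm_name_prefix.is_some() && input.alarm_names.is_some() {
        return Err("cannot specify both AlarmNamePrefix and AlarmNames".to_string());
    }
    Ok(())
}

fn main() {
    // Filter to alarms currently in ALARM state, at most 50 per page.
    let ok = DescribeAlarmsInput {
        state_value: Some("ALARM".to_string()),
        max_records: Some(50),
        ..Default::default()
    };
    assert!(validate(&ok).is_ok());

    // Prefix and explicit names together violate the documented constraint.
    let bad = DescribeAlarmsInput {
        alarm_name_prefix: Some("prod-".to_string()),
        alarm_names: Some(vec!["prod-cpu".to_string()]),
        ..Default::default()
    };
    assert!(validate(&bad).is_err());
}
```

`ChildrenOfAlarmName` and `ParentsOfAlarmName` carry similar constraints (only `MaxRecords` and `NextToken` may accompany them) and could be checked the same way.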
source### impl StructuralPartialEq for DescribeAlarmsInput Auto Trait Implementations --- ### impl RefUnwindSafe for DescribeAlarmsInput ### impl Send for DescribeAlarmsInput ### impl Sync for DescribeAlarmsInput ### impl Unpin for DescribeAlarmsInput ### impl UnwindSafe for DescribeAlarmsInput Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<Self> Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<Self> Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mut T) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning.
Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<Self> Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more Struct rusoto_cloudwatch::DescribeAlarmsOutput === ``` pub struct DescribeAlarmsOutput { pub composite_alarms: Option<Vec<CompositeAlarm>>, pub metric_alarms: Option<Vec<MetricAlarm>>, pub next_token: Option<String>, } ``` Fields --- `composite_alarms: Option<Vec<CompositeAlarm>>`The information about any composite alarms returned by the operation. `metric_alarms: Option<Vec<MetricAlarm>>`The information about any metric alarms returned by the operation. `next_token: Option<String>`The token that marks the start of the next batch of returned results. Trait Implementations --- source### impl Clone for DescribeAlarmsOutput source#### fn clone(&self) -> DescribeAlarmsOutput Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for DescribeAlarmsOutput source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter.
Read more source### impl Default for DescribeAlarmsOutput source#### fn default() -> DescribeAlarmsOutput Returns the “default value” for a type. Read more source### impl PartialEq<DescribeAlarmsOutput> for DescribeAlarmsOutput source#### fn eq(&self, other: &DescribeAlarmsOutput) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &DescribeAlarmsOutput) -> bool This method tests for `!=`. source### impl StructuralPartialEq for DescribeAlarmsOutput Auto Trait Implementations --- ### impl RefUnwindSafe for DescribeAlarmsOutput ### impl Send for DescribeAlarmsOutput ### impl Sync for DescribeAlarmsOutput ### impl Unpin for DescribeAlarmsOutput ### impl UnwindSafe for DescribeAlarmsOutput Blanket Implementations --- Identical to the blanket implementations listed for `DescribeAlarmsInput`.
Struct rusoto_cloudwatch::DescribeAnomalyDetectorsInput === ``` pub struct DescribeAnomalyDetectorsInput { pub dimensions: Option<Vec<Dimension>>, pub max_results: Option<i64>, pub metric_name: Option<String>, pub namespace: Option<String>, pub next_token: Option<String>, } ``` Fields --- `dimensions: Option<Vec<Dimension>>`Limits the results to only the anomaly detection models that are associated with the specified metric dimensions. If there are multiple metrics that have these dimensions and have anomaly detection models associated, they're all returned.
`max_results: Option<i64>`The maximum number of results to return in one operation. The maximum value that you can specify is 100. To retrieve the remaining results, make another call with the returned `NextToken` value. `metric_name: Option<String>`Limits the results to only the anomaly detection models that are associated with the specified metric name. If there are multiple metrics with this name in different namespaces that have anomaly detection models, they're all returned. `namespace: Option<String>`Limits the results to only the anomaly detection models that are associated with the specified namespace. `next_token: Option<String>`Use the token returned by the previous operation to request the next page of results. Trait Implementations --- source### impl Clone for DescribeAnomalyDetectorsInput source#### fn clone(&self) -> DescribeAnomalyDetectorsInput Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for DescribeAnomalyDetectorsInput source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for DescribeAnomalyDetectorsInput source#### fn default() -> DescribeAnomalyDetectorsInput Returns the “default value” for a type. Read more source### impl PartialEq<DescribeAnomalyDetectorsInput> for DescribeAnomalyDetectorsInput source#### fn eq(&self, other: &DescribeAnomalyDetectorsInput) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &DescribeAnomalyDetectorsInput) -> bool This method tests for `!=`. 
source### impl StructuralPartialEq for DescribeAnomalyDetectorsInput Auto Trait Implementations --- ### impl RefUnwindSafe for DescribeAnomalyDetectorsInput ### impl Send for DescribeAnomalyDetectorsInput ### impl Sync for DescribeAnomalyDetectorsInput ### impl Unpin for DescribeAnomalyDetectorsInput ### impl UnwindSafe for DescribeAnomalyDetectorsInput Blanket Implementations --- Identical to the blanket implementations listed for `DescribeAlarmsInput`.
Struct rusoto_cloudwatch::DescribeAnomalyDetectorsOutput === ``` pub struct DescribeAnomalyDetectorsOutput { pub anomaly_detectors: Option<Vec<AnomalyDetector>>, pub next_token: Option<String>, } ``` Fields --- `anomaly_detectors: Option<Vec<AnomalyDetector>>`The list of anomaly detection models returned by the operation. `next_token: Option<String>`A token that you can use in a subsequent operation to retrieve the next set of results. Trait Implementations --- source### impl Clone for DescribeAnomalyDetectorsOutput source#### fn clone(&self) -> DescribeAnomalyDetectorsOutput Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for DescribeAnomalyDetectorsOutput source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for DescribeAnomalyDetectorsOutput source#### fn default() -> DescribeAnomalyDetectorsOutput Returns the “default value” for a type.
Read more source### impl PartialEq<DescribeAnomalyDetectorsOutput> for DescribeAnomalyDetectorsOutput source#### fn eq(&self, other: &DescribeAnomalyDetectorsOutput) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &DescribeAnomalyDetectorsOutput) -> bool This method tests for `!=`. source### impl StructuralPartialEq for DescribeAnomalyDetectorsOutput Auto Trait Implementations --- ### impl RefUnwindSafe for DescribeAnomalyDetectorsOutput ### impl Send for DescribeAnomalyDetectorsOutput ### impl Sync for DescribeAnomalyDetectorsOutput ### impl Unpin for DescribeAnomalyDetectorsOutput ### impl UnwindSafe for DescribeAnomalyDetectorsOutput Blanket Implementations --- Identical to the blanket implementations listed for `DescribeAlarmsInput`.
Struct rusoto_cloudwatch::DescribeInsightRulesInput === ``` pub struct DescribeInsightRulesInput { pub max_results: Option<i64>, pub next_token: Option<String>, } ``` Fields --- `max_results: Option<i64>`The maximum number of results to return in one operation. If you omit this parameter, the default of 500 is used. `next_token: Option<String>`Include this value, if it was returned by the previous operation, to get the next set of rules.
Trait Implementations --- source### impl Clone for DescribeInsightRulesInput source#### fn clone(&self) -> DescribeInsightRulesInput Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for DescribeInsightRulesInput source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for DescribeInsightRulesInput source#### fn default() -> DescribeInsightRulesInput Returns the “default value” for a type. Read more source### impl PartialEq<DescribeInsightRulesInput> for DescribeInsightRulesInput source#### fn eq(&self, other: &DescribeInsightRulesInput) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &DescribeInsightRulesInput) -> bool This method tests for `!=`. source### impl StructuralPartialEq for DescribeInsightRulesInput Auto Trait Implementations --- ### impl RefUnwindSafe for DescribeInsightRulesInput ### impl Send for DescribeInsightRulesInput ### impl Sync for DescribeInsightRulesInput ### impl Unpin for DescribeInsightRulesInput ### impl UnwindSafe for DescribeInsightRulesInput Blanket Implementations --- Identical to the blanket implementations listed for `DescribeAlarmsInput`.
Struct rusoto_cloudwatch::DescribeInsightRulesOutput === ``` pub struct DescribeInsightRulesOutput { pub insight_rules: Option<Vec<InsightRule>>, pub next_token: Option<String>, } ``` Fields --- `insight_rules: Option<Vec<InsightRule>>`The rules returned by the operation. `next_token: Option<String>`If this parameter is present, it is a token that marks the start of the next batch of returned results. Trait Implementations --- source### impl Clone for DescribeInsightRulesOutput source#### fn clone(&self) -> DescribeInsightRulesOutput Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for DescribeInsightRulesOutput source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for DescribeInsightRulesOutput source#### fn default() -> DescribeInsightRulesOutput Returns the “default value” for a type. Read more source### impl PartialEq<DescribeInsightRulesOutput> for DescribeInsightRulesOutput source#### fn eq(&self, other: &DescribeInsightRulesOutput) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &DescribeInsightRulesOutput) -> bool This method tests for `!=`. source### impl StructuralPartialEq for DescribeInsightRulesOutput Auto Trait Implementations --- ### impl RefUnwindSafe for DescribeInsightRulesOutput ### impl Send for DescribeInsightRulesOutput ### impl Sync for DescribeInsightRulesOutput ### impl Unpin for DescribeInsightRulesOutput ### impl UnwindSafe for DescribeInsightRulesOutput Blanket Implementations --- Identical to the blanket implementations listed for `DescribeAlarmsInput`.
Struct rusoto_cloudwatch::Dimension === ``` pub struct Dimension { pub name: String, pub value: String, } ``` A dimension is a name/value pair that is part of the identity of a metric. You can assign up to 10 dimensions to a metric. Because dimensions are part of the unique identifier for a metric, whenever you add a unique name/value pair to one of your metrics, you are creating a new variation of that metric. Fields --- `name: String`The name of the dimension. Dimension names cannot contain blank spaces or non-ASCII characters. `value: String`The value of the dimension. Dimension values cannot contain blank spaces or non-ASCII characters. Trait Implementations --- source### impl Clone for Dimension source#### fn clone(&self) -> Dimension Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for Dimension source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for Dimension source#### fn default() -> Dimension Returns the “default value” for a type. Read more source### impl PartialEq<Dimension> for Dimension source#### fn eq(&self, other: &Dimension) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &Dimension) -> bool This method tests for `!=`.
source### impl StructuralPartialEq for Dimension Auto Trait Implementations --- ### impl RefUnwindSafe for Dimension ### impl Send for Dimension ### impl Sync for Dimension ### impl Unpin for Dimension ### impl UnwindSafe for Dimension Blanket Implementations --- Identical to the blanket implementations listed for `DescribeAlarmsInput`.
Struct rusoto_cloudwatch::DimensionFilter === ``` pub struct DimensionFilter { pub name: String, pub value: Option<String>, } ``` Represents filters for a dimension. Fields --- `name: String`The dimension name to be matched. `value: Option<String>`The value of the dimension to be matched. Trait Implementations --- source### impl Clone for DimensionFilter source#### fn clone(&self) -> DimensionFilter Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for DimensionFilter source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for DimensionFilter source#### fn default() -> DimensionFilter Returns the “default value” for a type. Read more source### impl PartialEq<DimensionFilter> for DimensionFilter source#### fn eq(&self, other: &DimensionFilter) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`.
Read more source#### fn ne(&self, other: &DimensionFilter) -> bool This method tests for `!=`. source### impl StructuralPartialEq for DimensionFilter Auto Trait Implementations --- ### impl RefUnwindSafe for DimensionFilter ### impl Send for DimensionFilter ### impl Sync for DimensionFilter ### impl Unpin for DimensionFilter ### impl UnwindSafe for DimensionFilter Blanket Implementations --- Identical to the blanket implementations listed for `DescribeAlarmsInput`.
Struct rusoto_cloudwatch::DisableAlarmActionsInput
===

```
pub struct DisableAlarmActionsInput {
    pub alarm_names: Vec<String>,
}
```

Fields
---

`alarm_names: Vec<String>`: The names of the alarms.

Trait Implementations
---

`Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq` are implemented, plus the same auto and blanket implementations as `DimensionFilter`.
Struct rusoto_cloudwatch::DisableInsightRulesInput
===

```
pub struct DisableInsightRulesInput {
    pub rule_names: Vec<String>,
}
```

Fields
---

`rule_names: Vec<String>`: An array of the rule names to disable. If you need to find out the names of your rules, use DescribeInsightRules.

Trait Implementations
---

`Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq` are implemented, plus the same auto and blanket implementations as `DimensionFilter`.
Struct rusoto_cloudwatch::DisableInsightRulesOutput
===

```
pub struct DisableInsightRulesOutput {
    pub failures: Option<Vec<PartialFailure>>,
}
```

Fields
---

`failures: Option<Vec<PartialFailure>>`: An array listing the rules that could not be disabled. You cannot disable built-in rules.

Trait Implementations
---

`Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq` are implemented, plus the same auto and blanket implementations as `DimensionFilter`.
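`failures` is an `Option<Vec<...>>`, so "no failures" can arrive as either `None` or an empty vector. A hedged sketch of treating both the same way (types are mirrored locally; `PartialFailure`'s real fields are documented elsewhere in the crate, so a hypothetical `failure_description` field stands in for them here):

```rust
// Hypothetical stand-in: `PartialFailure`'s actual definition lives elsewhere
// in the crate, so only a placeholder field is sketched here.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct PartialFailure {
    pub failure_description: Option<String>,
}

// Mirrored from the documented definition above.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct DisableInsightRulesOutput {
    pub failures: Option<Vec<PartialFailure>>,
}

// `as_deref` turns Option<Vec<T>> into Option<&[T]>, so `None` and
// `Some(vec![])` both report zero failures.
pub fn failure_count(output: &DisableInsightRulesOutput) -> usize {
    output.failures.as_deref().map_or(0, |f| f.len())
}

fn main() {
    let ok = DisableInsightRulesOutput::default();
    assert_eq!(failure_count(&ok), 0);
}
```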
Struct rusoto_cloudwatch::EnableAlarmActionsInput
===

```
pub struct EnableAlarmActionsInput {
    pub alarm_names: Vec<String>,
}
```

Fields
---

`alarm_names: Vec<String>`: The names of the alarms.

Trait Implementations
---

`Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq` are implemented, plus the same auto and blanket implementations as `DimensionFilter`.
Struct rusoto_cloudwatch::EnableInsightRulesInput
===

```
pub struct EnableInsightRulesInput {
    pub rule_names: Vec<String>,
}
```

Fields
---

`rule_names: Vec<String>`: An array of the rule names to enable. If you need to find out the names of your rules, use DescribeInsightRules.

Trait Implementations
---

`Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq` are implemented, plus the same auto and blanket implementations as `DimensionFilter`.
Struct rusoto_cloudwatch::EnableInsightRulesOutput
===

```
pub struct EnableInsightRulesOutput {
    pub failures: Option<Vec<PartialFailure>>,
}
```

Fields
---

`failures: Option<Vec<PartialFailure>>`: An array listing the rules that could not be enabled. You cannot disable or enable built-in rules.

Trait Implementations
---

`Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq` are implemented, plus the same auto and blanket implementations as `DimensionFilter`.
Struct rusoto_cloudwatch::GetDashboardInput
===

```
pub struct GetDashboardInput {
    pub dashboard_name: String,
}
```

Fields
---

`dashboard_name: String`: The name of the dashboard to be described.

Trait Implementations
---

`Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq` are implemented, plus the same auto and blanket implementations as `DimensionFilter`.
Struct rusoto_cloudwatch::GetDashboardOutput
===

```
pub struct GetDashboardOutput {
    pub dashboard_arn: Option<String>,
    pub dashboard_body: Option<String>,
    pub dashboard_name: Option<String>,
}
```

Fields
---

`dashboard_arn: Option<String>`: The Amazon Resource Name (ARN) of the dashboard.

`dashboard_body: Option<String>`: The detailed information about the dashboard, including what widgets are included and their location on the dashboard. For more information about the `DashboardBody` syntax, see Dashboard Body Structure and Syntax.

`dashboard_name: Option<String>`: The name of the dashboard.

Trait Implementations
---

`Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq` are implemented, plus the same auto and blanket implementations as `DimensionFilter`.
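Because every field of `GetDashboardOutput` is optional, callers usually want to borrow the body without cloning. A small sketch (struct mirrored locally so it compiles without the crate; the `"{}"` fallback is an assumption, chosen because `DashboardBody` is a JSON document per the syntax reference above):

```rust
// Mirrored from the documented definition so this example is self-contained.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct GetDashboardOutput {
    pub dashboard_arn: Option<String>,
    pub dashboard_body: Option<String>,
    pub dashboard_name: Option<String>,
}

// Borrow the dashboard body as &str without cloning; fall back to an empty
// JSON object when the field is absent (assumed fallback, not API behavior).
pub fn body_or_empty(output: &GetDashboardOutput) -> &str {
    output.dashboard_body.as_deref().unwrap_or("{}")
}

fn main() {
    let out = GetDashboardOutput::default();
    assert_eq!(body_or_empty(&out), "{}");
}
```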
### impl<T, U> TryFrom<U> for T where U: Into<T>

#### type Error = Infallible
The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>

#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.

### impl<T> WithSubscriber for T

#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.

#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.

Struct rusoto_cloudwatch::GetInsightRuleReportInput
===

```
pub struct GetInsightRuleReportInput {
    pub end_time: String,
    pub max_contributor_count: Option<i64>,
    pub metrics: Option<Vec<String>>,
    pub order_by: Option<String>,
    pub period: i64,
    pub rule_name: String,
    pub start_time: String,
}
```

Fields
---

`end_time: String`
The end time of the data to use in the report. When used in a raw HTTP Query API, it is formatted as `yyyy-MM-dd'T'HH:mm:ss`. For example, `2019-07-01T23:59:59`.

`max_contributor_count: Option<i64>`
The maximum number of contributors to include in the report. The range is 1 to 100. If you omit this, the default of 10 is used.

`metrics: Option<Vec<String>>`
Specifies which metrics to use for aggregation of contributor values for the report. You can specify one or more of the following metrics:

* `UniqueContributors` -- the number of unique contributors for each data point.
* `MaxContributorValue` -- the value of the top contributor for each data point. The identity of the contributor might change for each data point in the graph.
  If this rule aggregates by COUNT, the top contributor for each data point is the contributor with the most occurrences in that period. If the rule aggregates by SUM, the top contributor is the contributor with the highest sum in the log field specified by the rule's `Value`, during that period.
* `SampleCount` -- the number of data points matched by the rule.
* `Sum` -- the sum of the values from all contributors during the time period represented by that data point.
* `Minimum` -- the minimum value from a single observation during the time period represented by that data point.
* `Maximum` -- the maximum value from a single observation during the time period represented by that data point.
* `Average` -- the average value from all contributors during the time period represented by that data point.

`order_by: Option<String>`
Determines what statistic to use to rank the contributors. Valid values are SUM and MAXIMUM.

`period: i64`
The period, in seconds, to use for the statistics in the `InsightRuleMetricDatapoint` results.

`rule_name: String`
The name of the rule that you want to see data from.

`start_time: String`
The start time of the data to use in the report. When used in a raw HTTP Query API, it is formatted as `yyyy-MM-dd'T'HH:mm:ss`. For example, `2019-07-01T23:59:59`.

Trait Implementations
---

`GetInsightRuleReportInput` implements `Clone`, `Debug`, and `Default`, with the same semantics as for `GetDashboardOutput` above.
It also implements `PartialEq` (and `StructuralPartialEq`), tested by `==` and `!=`.

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`.

Blanket Implementations
---

The same blanket implementations apply as listed for `GetDashboardOutput` (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`, `ToOwned`, `Same`, `Instrument`, `WithSubscriber`).
Struct rusoto_cloudwatch::GetInsightRuleReportOutput
===

```
pub struct GetInsightRuleReportOutput {
    pub aggregate_value: Option<f64>,
    pub aggregation_statistic: Option<String>,
    pub approximate_unique_count: Option<i64>,
    pub contributors: Option<Vec<InsightRuleContributor>>,
    pub key_labels: Option<Vec<String>>,
    pub metric_datapoints: Option<Vec<InsightRuleMetricDatapoint>>,
}
```

Fields
---

`aggregate_value: Option<f64>`
The sum of the values from all individual contributors that match the rule.
`aggregation_statistic: Option<String>`
Specifies whether this rule aggregates contributor data by COUNT or SUM.

`approximate_unique_count: Option<i64>`
An approximate count of the unique contributors found by this rule in this time period.

`contributors: Option<Vec<InsightRuleContributor>>`
An array of the unique contributors found by this rule in this time period. If the rule contains multiple keys, each combination of values for the keys counts as a unique contributor.

`key_labels: Option<Vec<String>>`
An array of the strings used as the keys for this rule. The keys are the dimensions used to classify contributors. If the rule contains more than one key, then each unique combination of values for the keys is counted as a unique contributor.

`metric_datapoints: Option<Vec<InsightRuleMetricDatapoint>>`
A time series of metric data points that matches the time period in the rule request.

Trait Implementations
---

`GetInsightRuleReportOutput` implements `Clone`, `Debug`, `Default`, and `PartialEq` (`==`/`!=`), with the same semantics as for `GetDashboardOutput` above.
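The relationship between `contributors`, `key_labels`, and the aggregate value can be sketched with plain data structures. The `Contributor` struct below is a hypothetical stand-in for `InsightRuleContributor`, reduced to the fields needed here; it is not the real rusoto type:

```rust
// Hypothetical stand-in for rusoto_cloudwatch::InsightRuleContributor,
// reduced to the fields used in this sketch.
#[derive(Debug, Clone)]
struct Contributor {
    keys: Vec<String>, // one value per entry in the report's key_labels
    approximate_aggregate_value: f64,
}

// Find the top contributor by aggregate value (the contributor with the
// highest sum or count, depending on the rule's aggregation statistic).
fn top_contributor(contributors: &[Contributor]) -> Option<&Contributor> {
    contributors.iter().max_by(|a, b| {
        a.approximate_aggregate_value
            .partial_cmp(&b.approximate_aggregate_value)
            .unwrap()
    })
}

fn main() {
    let key_labels = vec!["Service".to_string(), "Operation".to_string()];
    // Each unique combination of key values counts as one contributor.
    let contributors = vec![
        Contributor { keys: vec!["api".into(), "GetItem".into()], approximate_aggregate_value: 120.0 },
        Contributor { keys: vec!["api".into(), "PutItem".into()], approximate_aggregate_value: 300.0 },
    ];
    let top = top_contributor(&contributors).unwrap();
    // Pair each key label with the top contributor's corresponding key value.
    let labeled: Vec<(&str, &str)> = key_labels
        .iter()
        .map(String::as_str)
        .zip(top.keys.iter().map(String::as_str))
        .collect();
    println!("top contributor: {:?} -> {}", labeled, top.approximate_aggregate_value);
}
```

In the real response, `keys` on each contributor lines up positionally with `key_labels` on the report, which is what the `zip` above relies on.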
It also implements `StructuralPartialEq`. Auto traits: `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`. The standard blanket implementations apply as listed for `GetDashboardOutput`.
Struct rusoto_cloudwatch::GetMetricDataInput
===

```
pub struct GetMetricDataInput {
    pub end_time: String,
    pub label_options: Option<LabelOptions>,
    pub max_datapoints: Option<i64>,
    pub metric_data_queries: Vec<MetricDataQuery>,
    pub next_token: Option<String>,
    pub scan_by: Option<String>,
    pub start_time: String,
}
```

Fields
---

`end_time: String`
The time stamp indicating the latest data to be returned. The value specified is exclusive; results include data points up to the specified time stamp.

For better performance, specify `StartTime` and `EndTime` values that align with the value of the metric's `Period` and sync up with the beginning and end of an hour. For example, if the `Period` of a metric is 5 minutes, specifying 12:05 or 12:30 as `EndTime` can get a faster response from CloudWatch than setting 12:07 or 12:29 as the `EndTime`.

`label_options: Option<LabelOptions>`
This structure includes the `Timezone` parameter, which you can use to specify your time zone so that the labels of returned data display the correct time for your time zone.
`max_datapoints: Option<i64>`
The maximum number of data points the request should return before paginating. If you omit this, the default of 100,800 is used.

`metric_data_queries: Vec<MetricDataQuery>`
The metric queries to be returned. A single `GetMetricData` call can include as many as 500 `MetricDataQuery` structures. Each of these structures can specify either a metric to retrieve, or a math expression to perform on retrieved data.

`next_token: Option<String>`
Include this value, if it was returned by the previous `GetMetricData` operation, to get the next set of data points.

`scan_by: Option<String>`
The order in which data points should be returned. `TimestampDescending` returns the newest data first and paginates when the `MaxDatapoints` limit is reached. `TimestampAscending` returns the oldest data first and paginates when the `MaxDatapoints` limit is reached.

`start_time: String`
The time stamp indicating the earliest data to be returned. The value specified is inclusive; results include data points with the specified time stamp.

CloudWatch rounds the specified time stamp as follows:

* Start time less than 15 days ago - Round down to the nearest whole minute. For example, 12:32:34 is rounded down to 12:32:00.
* Start time between 15 and 63 days ago - Round down to the nearest 5-minute clock interval. For example, 12:32:34 is rounded down to 12:30:00.
* Start time greater than 63 days ago - Round down to the nearest 1-hour clock interval. For example, 12:32:34 is rounded down to 12:00:00.

If you set `Period` to 5, 10, or 30, the start time of your request is rounded down to the nearest time that corresponds to even 5-, 10-, or 30-second divisions of a minute. For example, if you make a query at (HH:mm:ss) 01:05:23 for the previous 10-second period, the start time of your request is rounded down and you receive data from 01:05:10 to 01:05:20.
If you make a query at 15:07:17 for the previous 5 minutes of data, using a period of 5 seconds, you receive data timestamped between 15:02:15 and 15:07:15.

For better performance, specify `StartTime` and `EndTime` values that align with the value of the metric's `Period` and sync up with the beginning and end of an hour. For example, if the `Period` of a metric is 5 minutes, specifying 12:05 or 12:30 as `StartTime` can get a faster response from CloudWatch than setting 12:07 or 12:29 as the `StartTime`.

Trait Implementations
---

`GetMetricDataInput` implements `Clone`, `Debug`, `Default`, and `PartialEq` (`StructuralPartialEq`). Auto traits: `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`. The standard blanket implementations apply as listed for `GetDashboardOutput`.
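The `next_token` round trip described for `GetMetricDataInput` follows the usual CloudWatch pagination pattern: resend the request with the token from the previous response until no token is returned. A minimal stdlib-only sketch, with a mock `fetch` closure standing in for the real (asynchronous, client-backed) `get_metric_data` call:

```rust
// A mock page: a batch of values plus an optional continuation token,
// standing in for GetMetricDataOutput's metric_data_results / next_token.
struct Page {
    values: Vec<f64>,
    next_token: Option<String>,
}

// Loop until the service stops returning a token, feeding each token
// back into the next request -- the same shape as GetMetricData paging.
fn collect_all(mut fetch: impl FnMut(Option<&str>) -> Page) -> Vec<f64> {
    let mut all = Vec::new();
    let mut token: Option<String> = None;
    loop {
        let page = fetch(token.as_deref());
        all.extend(page.values);
        match page.next_token {
            Some(t) => token = Some(t),
            None => break,
        }
    }
    all
}

fn main() {
    // Mock service returning two pages.
    let fetch = |token: Option<&str>| match token {
        None => Page { values: vec![1.0, 2.0], next_token: Some("p2".into()) },
        Some("p2") => Page { values: vec![3.0], next_token: None },
        Some(other) => panic!("unknown token {other}"),
    };
    let data = collect_all(fetch);
    println!("{data:?}"); // [1.0, 2.0, 3.0]
}
```

With the real client, `fetch` would clone the `GetMetricDataInput`, set its `next_token` field, and await `client.get_metric_data(input)`.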
Struct rusoto_cloudwatch::GetMetricDataOutput
===

```
pub struct GetMetricDataOutput {
    pub messages: Option<Vec<MessageData>>,
    pub metric_data_results: Option<Vec<MetricDataResult>>,
    pub next_token: Option<String>,
}
```

Fields
---

`messages: Option<Vec<MessageData>>`
Contains a message about this `GetMetricData` operation, if the operation results in such a message. An example of a message that might be returned is `Maximum number of allowed metrics exceeded`. If there is a message, as much of the operation as possible is still executed.

A message appears here only if it is related to the global `GetMetricData` operation. Any message about a specific metric returned by the operation appears in the `MetricDataResult` object returned for that metric.

`metric_data_results: Option<Vec<MetricDataResult>>`
The metrics that are returned, including the metric name, namespace, and dimensions.

`next_token: Option<String>`
A token that marks the next batch of returned results.

Trait Implementations
---

`GetMetricDataOutput` implements `Clone`, `Debug`, and `Default`, with the same semantics as for `GetDashboardOutput` above.
It also implements `PartialEq` (`StructuralPartialEq`). Auto traits: `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`. The standard blanket implementations apply as listed for `GetDashboardOutput`.
Struct rusoto_cloudwatch::GetMetricStatisticsInput
===

```
pub struct GetMetricStatisticsInput {
    pub dimensions: Option<Vec<Dimension>>,
    pub end_time: String,
    pub extended_statistics: Option<Vec<String>>,
    pub metric_name: String,
    pub namespace: String,
    pub period: i64,
    pub start_time: String,
    pub statistics: Option<Vec<String>>,
    pub unit: Option<String>,
}
```

Fields
---

`dimensions: Option<Vec<Dimension>>`
The dimensions. If the metric contains multiple dimensions, you must include a value for each dimension. CloudWatch treats each unique combination of dimensions as a separate metric. If a specific combination of dimensions was not published, you can't retrieve statistics for it. You must specify the same dimensions that were used when the metrics were created.
For an example, see Dimension Combinations in the *Amazon CloudWatch User Guide*. For more information about specifying dimensions, see Publishing Metrics in the *Amazon CloudWatch User Guide*.

`end_time: String`
The time stamp that determines the last data point to return. The value specified is exclusive; results include data points up to the specified time stamp. In a raw HTTP query, the time stamp must be in ISO 8601 UTC format (for example, 2016-10-10T23:00:00Z).

`extended_statistics: Option<Vec<String>>`
The percentile statistics. Specify values between p0.0 and p100. When calling `GetMetricStatistics`, you must specify either `Statistics` or `ExtendedStatistics`, but not both. Percentile statistics are not available for metrics when any of the metric values are negative numbers.

`metric_name: String`
The name of the metric, with or without spaces.

`namespace: String`
The namespace of the metric, with or without spaces.

`period: i64`
The granularity, in seconds, of the returned data points. For metrics with regular resolution, a period can be as short as one minute (60 seconds) and must be a multiple of 60. For high-resolution metrics that are collected at intervals of less than one minute, the period can be 1, 5, 10, 30, 60, or any multiple of 60. High-resolution metrics are those metrics stored by a `PutMetricData` call that includes a `StorageResolution` of 1 second.

If the `StartTime` parameter specifies a time stamp that is greater than 3 hours ago, you must specify the period as follows or no data points in that time range are returned:

* Start time between 3 hours and 15 days ago - Use a multiple of 60 seconds (1 minute).
* Start time between 15 and 63 days ago - Use a multiple of 300 seconds (5 minutes).
* Start time greater than 63 days ago - Use a multiple of 3600 seconds (1 hour).

`start_time: String`
The time stamp that determines the first data point to return. Start times are evaluated relative to the time that CloudWatch receives the request.
The value specified is inclusive; results include data points with the specified time stamp. In a raw HTTP query, the time stamp must be in ISO 8601 UTC format (for example, 2016-10-03T23:00:00Z).

CloudWatch rounds the specified time stamp as follows:

* Start time less than 15 days ago - Round down to the nearest whole minute. For example, 12:32:34 is rounded down to 12:32:00.
* Start time between 15 and 63 days ago - Round down to the nearest 5-minute clock interval. For example, 12:32:34 is rounded down to 12:30:00.
* Start time greater than 63 days ago - Round down to the nearest 1-hour clock interval. For example, 12:32:34 is rounded down to 12:00:00.

If you set `Period` to 5, 10, or 30, the start time of your request is rounded down to the nearest time that corresponds to even 5-, 10-, or 30-second divisions of a minute. For example, if you make a query at (HH:mm:ss) 01:05:23 for the previous 10-second period, the start time of your request is rounded down and you receive data from 01:05:10 to 01:05:20. If you make a query at 15:07:17 for the previous 5 minutes of data, using a period of 5 seconds, you receive data timestamped between 15:02:15 and 15:07:15.

`statistics: Option<Vec<String>>`
The metric statistics, other than percentile. For percentile statistics, use `ExtendedStatistics`. When calling `GetMetricStatistics`, you must specify either `Statistics` or `ExtendedStatistics`, but not both.

`unit: Option<String>`
The unit for a given metric. If you omit `Unit`, all data that was collected with any unit is returned, along with the corresponding units that were specified when the data was reported to CloudWatch. If you specify a unit, the operation returns only data that was collected with that unit specified. If you specify a unit that does not match the data collected, the results of the operation are null. CloudWatch does not perform unit conversions.
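The start-time rounding rules documented for `StartTime` can be expressed directly. This is a stdlib-only sketch of the described behavior on Unix timestamps, not code from rusoto; the handling of the exact 15-day and 63-day boundaries is an assumption, since the documentation only gives the three ranges:

```rust
const MINUTE: u64 = 60;
const DAY: u64 = 86_400;

// Round `start` down the way CloudWatch describes, based on how far in
// the past it is relative to `now`. Both are Unix timestamps in seconds.
fn round_start_time(start: u64, now: u64) -> u64 {
    let age = now.saturating_sub(start);
    let interval = if age < 15 * DAY {
        MINUTE // less than 15 days ago: nearest whole minute
    } else if age <= 63 * DAY {
        5 * MINUTE // 15 to 63 days ago: nearest 5-minute interval
    } else {
        60 * MINUTE // more than 63 days ago: nearest 1-hour interval
    };
    start - start % interval
}

fn main() {
    let now = 100 * DAY; // arbitrary "current" timestamp on a day boundary
    // 1 day old, 34 seconds past the minute: the 34 seconds are dropped.
    assert_eq!(round_start_time(now - DAY + 34, now), now - DAY);
    // 30 days old, 7m10s past a 5-minute boundary: rounds to the boundary.
    assert_eq!(round_start_time(now - 30 * DAY + 430, now), now - 30 * DAY + 300);
    // 70 days old, 59 minutes past the hour: rounds to the hour.
    assert_eq!(round_start_time(now - 70 * DAY + 59 * MINUTE, now), now - 70 * DAY);
    println!("rounding rules hold");
}
```

The same three age brackets drive the `Period` requirements listed above (multiples of 60, 300, and 3600 seconds respectively), so a request validator could reuse the same `interval` value.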
Trait Implementations
---

`GetMetricStatisticsInput` implements `Clone`, `Debug`, `Default`, and `PartialEq` (`StructuralPartialEq`). Auto traits: `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`. The standard blanket implementations apply as listed for `GetDashboardOutput`.
Struct rusoto_cloudwatch::GetMetricStatisticsOutput
===
```
pub struct GetMetricStatisticsOutput {
    pub datapoints: Option<Vec<Datapoint>>,
    pub label: Option<String>,
}
```
Fields
---
`datapoints: Option<Vec<Datapoint>>` The data points for the specified metric.
`label: Option<String>` A label for the specified metric.
Trait Implementations
---
### impl Clone for GetMetricStatisticsOutput
#### fn clone(&self) -> GetMetricStatisticsOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GetMetricStatisticsOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GetMetricStatisticsOutput
#### fn default() -> GetMetricStatisticsOutput
Returns the “default value” for a type.
### impl PartialEq<GetMetricStatisticsOutput> for GetMetricStatisticsOutput
#### fn eq(&self, other: &GetMetricStatisticsOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GetMetricStatisticsOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for GetMetricStatisticsOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for GetMetricStatisticsOutput
### impl Send for GetMetricStatisticsOutput
### impl Sync for GetMetricStatisticsOutput
### impl Unpin for GetMetricStatisticsOutput
### impl UnwindSafe for GetMetricStatisticsOutput
Blanket Implementations
---
Identical to the blanket implementations listed for `GetMetricStatisticsInput` above.
Struct rusoto_cloudwatch::GetMetricStreamInput
===
```
pub struct GetMetricStreamInput {
    pub name: String,
}
```
Fields
---
`name: String` The name of the metric stream to retrieve information about.
Trait Implementations
---
### impl Clone for GetMetricStreamInput
#### fn clone(&self) -> GetMetricStreamInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GetMetricStreamInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GetMetricStreamInput
#### fn default() -> GetMetricStreamInput
Returns the “default value” for a type.
### impl PartialEq<GetMetricStreamInput> for GetMetricStreamInput
#### fn eq(&self, other: &GetMetricStreamInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GetMetricStreamInput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for GetMetricStreamInput
Auto Trait Implementations
---
### impl RefUnwindSafe for GetMetricStreamInput
### impl Send for GetMetricStreamInput
### impl Sync for GetMetricStreamInput
### impl Unpin for GetMetricStreamInput
### impl UnwindSafe for GetMetricStreamInput
Blanket Implementations
---
Identical to the blanket implementations listed for `GetMetricStatisticsInput` above.
Struct rusoto_cloudwatch::GetMetricStreamOutput
===
```
pub struct GetMetricStreamOutput {
    pub arn: Option<String>,
    pub creation_date: Option<String>,
    pub exclude_filters: Option<Vec<MetricStreamFilter>>,
    pub firehose_arn: Option<String>,
    pub include_filters: Option<Vec<MetricStreamFilter>>,
    pub last_update_date: Option<String>,
    pub name: Option<String>,
    pub output_format: Option<String>,
    pub role_arn: Option<String>,
    pub state: Option<String>,
}
```
Fields
---
`arn: Option<String>` The ARN of the metric stream.
`creation_date: Option<String>` The date that the metric stream was created.
`exclude_filters: Option<Vec<MetricStreamFilter>>` If this array of metric namespaces is present, then these namespaces are the only metric namespaces that are not streamed by this metric stream. In this case, all other metric namespaces in the account are streamed by this metric stream.
`firehose_arn: Option<String>` The ARN of the Amazon Kinesis Firehose delivery stream that is used by this metric stream.
`include_filters: Option<Vec<MetricStreamFilter>>` If this array of metric namespaces is present, then these namespaces are the only metric namespaces that are streamed by this metric stream.
`last_update_date: Option<String>` The date of the most recent update to the metric stream's configuration.
`name: Option<String>` The name of the metric stream.
`output_format: Option<String>`
`role_arn: Option<String>` The ARN of the IAM role that is used by this metric stream.
`state: Option<String>` The state of the metric stream. The possible values are `running` and `stopped`.
Trait Implementations
---
### impl Clone for GetMetricStreamOutput
#### fn clone(&self) -> GetMetricStreamOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GetMetricStreamOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GetMetricStreamOutput
#### fn default() -> GetMetricStreamOutput
Returns the “default value” for a type.
### impl PartialEq<GetMetricStreamOutput> for GetMetricStreamOutput
#### fn eq(&self, other: &GetMetricStreamOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GetMetricStreamOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for GetMetricStreamOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for GetMetricStreamOutput
### impl Send for GetMetricStreamOutput
### impl Sync for GetMetricStreamOutput
### impl Unpin for GetMetricStreamOutput
### impl UnwindSafe for GetMetricStreamOutput
Blanket Implementations
---
Identical to the blanket implementations listed for `GetMetricStatisticsInput` above.
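Since every field of `GetMetricStreamOutput` is optional, callers typically unwrap them defensively. A small sketch, using a hypothetical mirror of two of the fields (the real struct lives in `rusoto_cloudwatch`):

```rust
// Hypothetical mirror of a slice of GetMetricStreamOutput, showing a common
// way to consume the optional `state` field, whose documented values are
// "running" and "stopped".
#[derive(Debug, Default)]
pub struct GetMetricStreamOutput {
    pub name: Option<String>,
    pub state: Option<String>,
}

// Treat a missing state as "not running".
pub fn is_running(out: &GetMetricStreamOutput) -> bool {
    out.state.as_deref() == Some("running")
}

fn main() {
    let out = GetMetricStreamOutput {
        name: Some("my-stream".to_string()),
        state: Some("running".to_string()),
    };
    assert!(is_running(&out));
    assert!(!is_running(&GetMetricStreamOutput::default()));
}
```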
Struct rusoto_cloudwatch::GetMetricWidgetImageInput
===
```
pub struct GetMetricWidgetImageInput {
    pub metric_widget: String,
    pub output_format: Option<String>,
}
```
Fields
---
`metric_widget: String` A JSON string that defines the bitmap graph to be retrieved. The string includes the metrics to include in the graph, statistics, annotations, title, axis limits, and so on. You can include only one `MetricWidget` parameter in each `GetMetricWidgetImage` call. For more information about the syntax of `MetricWidget`, see GetMetricWidgetImage: Metric Widget Structure and Syntax.
If any metric on the graph could not load all the requested data points, an orange triangle with an exclamation point appears next to the graph legend.
`output_format: Option<String>` The format of the resulting image. Only PNG images are supported. The default is `png`. If you specify `png`, the API returns an HTTP response with the content-type set to `text/xml`. The image data is in a `MetricWidgetImage` field. For example:
`<GetMetricWidgetImageResponse xmlns=<URLstring>>`
`<GetMetricWidgetImageResult>`
`<MetricWidgetImage>`
`iVBORw0KGgoAAAANSUhEUgAAAlgAAAGQEAYAAAAip...`
`</MetricWidgetImage>`
`</GetMetricWidgetImageResult>`
`<ResponseMetadata>`
`<RequestId>6f0d4192-4d42-11e8-82c1-f539a07e0e3b</RequestId>`
`</ResponseMetadata>`
`</GetMetricWidgetImageResponse>`
The `image/png` setting is intended only for custom HTTP requests. For most use cases, and all actions using an AWS SDK, you should use `png`. If you specify `image/png`, the HTTP response has a content-type set to `image/png`, and the body of the response is a PNG image.
Trait Implementations
---
### impl Clone for GetMetricWidgetImageInput
#### fn clone(&self) -> GetMetricWidgetImageInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GetMetricWidgetImageInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GetMetricWidgetImageInput
#### fn default() -> GetMetricWidgetImageInput
Returns the “default value” for a type.
### impl PartialEq<GetMetricWidgetImageInput> for GetMetricWidgetImageInput
#### fn eq(&self, other: &GetMetricWidgetImageInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GetMetricWidgetImageInput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for GetMetricWidgetImageInput
Auto Trait Implementations
---
### impl RefUnwindSafe for GetMetricWidgetImageInput
### impl Send for GetMetricWidgetImageInput
### impl Sync for GetMetricWidgetImageInput
### impl Unpin for GetMetricWidgetImageInput
### impl UnwindSafe for GetMetricWidgetImageInput
Blanket Implementations
---
Identical to the blanket implementations listed for `GetMetricStatisticsInput` above.
Struct rusoto_cloudwatch::GetMetricWidgetImageOutput
===
```
pub struct GetMetricWidgetImageOutput {
    pub metric_widget_image: Option<Bytes>,
}
```
Fields
---
`metric_widget_image: Option<Bytes>` The image of the graph, in the output format specified. The output is base64-encoded.
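Building a `GetMetricWidgetImageInput` usually means supplying the widget definition as a JSON string and leaving `output_format` unset, which falls back to the documented default of `png`. A sketch on a hypothetical mirror of the struct (`png_widget_request` is an illustrative helper, not part of rusoto):

```rust
// Hypothetical mirror of GetMetricWidgetImageInput. `metric_widget` carries
// the widget definition as a JSON string; leaving `output_format` as None
// falls back to the documented default of "png".
#[derive(Debug, Default)]
pub struct GetMetricWidgetImageInput {
    pub metric_widget: String,
    pub output_format: Option<String>,
}

pub fn png_widget_request(widget_json: &str) -> GetMetricWidgetImageInput {
    GetMetricWidgetImageInput {
        metric_widget: widget_json.to_string(),
        output_format: None, // server applies the default, "png"
    }
}

fn main() {
    let widget = r#"{"metrics":[["AWS/EC2","CPUUtilization"]],"title":"CPU"}"#;
    let req = png_widget_request(widget);
    assert!(req.metric_widget.contains("CPUUtilization"));
    assert!(req.output_format.is_none());
}
```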
Trait Implementations
---
### impl Clone for GetMetricWidgetImageOutput
#### fn clone(&self) -> GetMetricWidgetImageOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for GetMetricWidgetImageOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for GetMetricWidgetImageOutput
#### fn default() -> GetMetricWidgetImageOutput
Returns the “default value” for a type.
### impl PartialEq<GetMetricWidgetImageOutput> for GetMetricWidgetImageOutput
#### fn eq(&self, other: &GetMetricWidgetImageOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GetMetricWidgetImageOutput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for GetMetricWidgetImageOutput
Auto Trait Implementations
---
### impl RefUnwindSafe for GetMetricWidgetImageOutput
### impl Send for GetMetricWidgetImageOutput
### impl Sync for GetMetricWidgetImageOutput
### impl Unpin for GetMetricWidgetImageOutput
### impl UnwindSafe for GetMetricWidgetImageOutput
Blanket Implementations
---
Identical to the blanket implementations listed for `GetMetricStatisticsInput` above.
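A typical consumer of `GetMetricWidgetImageOutput` simply writes the returned image bytes to disk. A sketch with a hypothetical helper (`save_widget_image` is illustrative and not part of rusoto; the real field is an `Option<Bytes>`, modelled here as an optional byte slice):

```rust
use std::fs;
use std::io;
use std::path::Path;

// GetMetricWidgetImageOutput carries the graph as raw bytes; a typical
// consumer writes them straight to disk. `save_widget_image` is a
// hypothetical helper, not part of rusoto_cloudwatch.
pub fn save_widget_image(image: Option<&[u8]>, path: &Path) -> io::Result<()> {
    match image {
        Some(bytes) => fs::write(path, bytes),
        None => Err(io::Error::new(io::ErrorKind::NotFound, "no image in response")),
    }
}

fn main() -> io::Result<()> {
    let fake_png = b"\x89PNG\r\n\x1a\nplaceholder".to_vec(); // placeholder bytes
    let path = std::env::temp_dir().join("widget.png");
    save_widget_image(Some(&fake_png[..]), &path)?;
    assert_eq!(fs::read(&path)?, fake_png);
    Ok(())
}
```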
Struct rusoto_cloudwatch::InsightRule
===
```
pub struct InsightRule {
    pub definition: String,
    pub name: String,
    pub schema: String,
    pub state: String,
}
```
This structure contains the definition for a Contributor Insights rule.
Fields
---
`definition: String` The definition of the rule, as a JSON object. The definition contains the keywords used to define contributors, the value to aggregate on if this rule returns a sum instead of a count, and the filters. For details on the valid syntax, see Contributor Insights Rule Syntax.
`name: String` The name of the rule.
`schema: String` For rules that you create, this is always `{"Name": "CloudWatchLogRule", "Version": 1}`. For built-in rules, this is `{"Name": "ServiceLogRule", "Version": 1}`.
`state: String` Indicates whether the rule is enabled or disabled.
Trait Implementations
---
### impl Clone for InsightRule
#### fn clone(&self) -> InsightRule
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for InsightRule
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for InsightRule
#### fn default() -> InsightRule
Returns the “default value” for a type.
### impl PartialEq<InsightRule> for InsightRule
#### fn eq(&self, other: &InsightRule) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &InsightRule) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for InsightRule
Auto Trait Implementations
---
### impl RefUnwindSafe for InsightRule
### impl Send for InsightRule
### impl Sync for InsightRule
### impl Unpin for InsightRule
### impl UnwindSafe for InsightRule
Blanket Implementations
---
Identical to the blanket implementations listed for `GetMetricStatisticsInput` above.
Struct rusoto_cloudwatch::InsightRuleContributor
===
```
pub struct InsightRuleContributor {
    pub approximate_aggregate_value: f64,
    pub datapoints: Vec<InsightRuleContributorDatapoint>,
    pub keys: Vec<String>,
}
```
One of the unique contributors found by a Contributor Insights rule. If the rule contains multiple keys, then a unique contributor is a unique combination of values from all the keys in the rule.
If the rule contains a single key, then each unique contributor is each unique value for this key.
For more information, see GetInsightRuleReport.
Fields
---
`approximate_aggregate_value: f64` An approximation of the aggregate value that comes from this contributor.
`datapoints: Vec<InsightRuleContributorDatapoint>` An array of the data points where this contributor is present. Only the data points when this contributor appeared are included in the array.
`keys: Vec<String>` One of the log entry field keywords that is used to define contributors for this rule.
Trait Implementations
---
### impl Clone for InsightRuleContributor
#### fn clone(&self) -> InsightRuleContributor
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for InsightRuleContributor
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for InsightRuleContributor
#### fn default() -> InsightRuleContributor
Returns the “default value” for a type.
### impl PartialEq<InsightRuleContributor> for InsightRuleContributor
#### fn eq(&self, other: &InsightRuleContributor) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &InsightRuleContributor) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for InsightRuleContributor
Auto Trait Implementations
---
### impl RefUnwindSafe for InsightRuleContributor
### impl Send for InsightRuleContributor
### impl Sync for InsightRuleContributor
### impl Unpin for InsightRuleContributor
### impl UnwindSafe for InsightRuleContributor
Blanket Implementations
---
Identical to the blanket implementations listed for `GetMetricStatisticsInput` above.
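The relationship between a contributor's `datapoints` and its `approximate_aggregate_value` can be sketched with hypothetical mirrors of the two structs (`datapoint_sum` is an illustrative helper, not part of rusoto):

```rust
// Hypothetical mirrors of InsightRuleContributor and its datapoints, showing
// how the per-timestamp approximate values relate to the contributor-level
// aggregate reported by GetInsightRuleReport.
#[derive(Debug)]
pub struct InsightRuleContributorDatapoint {
    pub approximate_value: f64,
    pub timestamp: String,
}

#[derive(Debug)]
pub struct InsightRuleContributor {
    pub approximate_aggregate_value: f64,
    pub datapoints: Vec<InsightRuleContributorDatapoint>,
    pub keys: Vec<String>,
}

// Sum the approximate values over the data points where this contributor
// appeared.
pub fn datapoint_sum(c: &InsightRuleContributor) -> f64 {
    c.datapoints.iter().map(|d| d.approximate_value).sum()
}

fn main() {
    let c = InsightRuleContributor {
        approximate_aggregate_value: 30.0,
        datapoints: vec![
            InsightRuleContributorDatapoint {
                approximate_value: 10.0,
                timestamp: "2021-01-01T00:00:00Z".to_string(),
            },
            InsightRuleContributorDatapoint {
                approximate_value: 20.0,
                timestamp: "2021-01-01T00:05:00Z".to_string(),
            },
        ],
        keys: vec!["requestPath".to_string()],
    };
    assert_eq!(datapoint_sum(&c), 30.0);
}
```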
Struct rusoto_cloudwatch::InsightRuleContributorDatapoint
===

```
pub struct InsightRuleContributorDatapoint {
    pub approximate_value: f64,
    pub timestamp: String,
}
```

One data point related to one contributor. For more information, see GetInsightRuleReport and InsightRuleContributor.

Fields --- `approximate_value: f64` The approximate value that this contributor added during this timestamp. `timestamp: String` The timestamp of the data point.

Trait Implementations --- source### impl Clone for InsightRuleContributorDatapoint source#### fn clone(&self) -> InsightRuleContributorDatapoint Returns a copy of the value. 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. source### impl Debug for InsightRuleContributorDatapoint source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. source### impl Default for InsightRuleContributorDatapoint source#### fn default() -> InsightRuleContributorDatapoint Returns the “default value” for a type. source### impl PartialEq<InsightRuleContributorDatapoint> for InsightRuleContributorDatapoint source#### fn eq(&self, other: &InsightRuleContributorDatapoint) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. source#### fn ne(&self, other: &InsightRuleContributorDatapoint) -> bool This method tests for `!=`. source### impl StructuralPartialEq for InsightRuleContributorDatapoint

Auto Trait Implementations --- ### impl RefUnwindSafe for InsightRuleContributorDatapoint ### impl Send for InsightRuleContributorDatapoint ### impl Sync for InsightRuleContributorDatapoint ### impl Unpin for InsightRuleContributorDatapoint ### impl UnwindSafe for InsightRuleContributorDatapoint

Blanket Implementations --- Identical to the blanket implementations listed under `InsightRuleContributor` above.

Struct rusoto_cloudwatch::InsightRuleMetricDatapoint
===

```
pub struct InsightRuleMetricDatapoint {
    pub average: Option<f64>,
    pub max_contributor_value: Option<f64>,
    pub maximum: Option<f64>,
    pub minimum: Option<f64>,
    pub sample_count: Option<f64>,
    pub sum: Option<f64>,
    pub timestamp: String,
    pub unique_contributors: Option<f64>,
}
```

One data point from the metric time series returned in a Contributor Insights rule report. For more information, see GetInsightRuleReport.

Fields --- `average: Option<f64>` The average value from all contributors during the time period represented by that data point. This statistic is returned only if you included it in the `Metrics` array in your request. `max_contributor_value: Option<f64>` The maximum value provided by one contributor during this timestamp. Each timestamp is evaluated separately, so the identity of the max contributor could be different for each timestamp. This statistic is returned only if you included it in the `Metrics` array in your request. `maximum: Option<f64>` The maximum value from a single occurrence from a single contributor during the time period represented by that data point. This statistic is returned only if you included it in the `Metrics` array in your request. `minimum: Option<f64>` The minimum value from a single contributor during the time period represented by that data point. This statistic is returned only if you included it in the `Metrics` array in your request. `sample_count: Option<f64>` The number of occurrences that matched the rule during this data point. This statistic is returned only if you included it in the `Metrics` array in your request. `sum: Option<f64>` The sum of the values from all contributors during the time period represented by that data point. This statistic is returned only if you included it in the `Metrics` array in your request. `timestamp: String` The timestamp of the data point. `unique_contributors: Option<f64>` The number of unique contributors who published data during this timestamp. This statistic is returned only if you included it in the `Metrics` array in your request.

Trait Implementations --- source### impl Clone for InsightRuleMetricDatapoint source#### fn clone(&self) -> InsightRuleMetricDatapoint Returns a copy of the value. 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. source### impl Debug for InsightRuleMetricDatapoint source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. source### impl Default for InsightRuleMetricDatapoint source#### fn default() -> InsightRuleMetricDatapoint Returns the “default value” for a type. source### impl PartialEq<InsightRuleMetricDatapoint> for InsightRuleMetricDatapoint source#### fn eq(&self, other: &InsightRuleMetricDatapoint) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. source#### fn ne(&self, other: &InsightRuleMetricDatapoint) -> bool This method tests for `!=`.
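All of the statistics on `InsightRuleMetricDatapoint` are `Option<f64>` because each one is returned only if it was requested in the `Metrics` array. A sketch of handling that when re-deriving the mean from `sum` and `sample_count` (using a local stand-in struct and a hypothetical `mean` helper, not the real rusoto type):

```rust
// Local stand-in mirroring three of the documented fields; the real type
// is rusoto_cloudwatch::InsightRuleMetricDatapoint.
#[derive(Default, Debug)]
struct Datapoint {
    sum: Option<f64>,
    sample_count: Option<f64>,
    timestamp: String,
}

// Hypothetical helper: the mean is only defined when both statistics were
// requested and the sample count is non-zero.
fn mean(dp: &Datapoint) -> Option<f64> {
    match (dp.sum, dp.sample_count) {
        (Some(sum), Some(n)) if n > 0.0 => Some(sum / n),
        _ => None,
    }
}

fn main() {
    let dp = Datapoint {
        sum: Some(10.0),
        sample_count: Some(4.0),
        timestamp: "2021-06-01T00:00:00Z".to_string(),
    };
    assert_eq!(mean(&dp), Some(2.5));
    // Statistics that were not requested simply come back as None.
    assert_eq!(mean(&Datapoint::default()), None);
}
```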
source### impl StructuralPartialEq for InsightRuleMetricDatapoint

Auto Trait Implementations --- ### impl RefUnwindSafe for InsightRuleMetricDatapoint ### impl Send for InsightRuleMetricDatapoint ### impl Sync for InsightRuleMetricDatapoint ### impl Unpin for InsightRuleMetricDatapoint ### impl UnwindSafe for InsightRuleMetricDatapoint

Blanket Implementations --- Identical to the blanket implementations listed under `InsightRuleContributor` above.

Struct rusoto_cloudwatch::LabelOptions
===

```
pub struct LabelOptions {
    pub timezone: Option<String>,
}
```

This structure includes the `Timezone` parameter, which you can use to specify your time zone so that the labels associated with returned metrics display the correct time for your time zone. The `Timezone` value affects a label only if you have a time-based dynamic expression in the label. For more information about dynamic expressions in labels, see Using Dynamic Labels.

Fields --- `timezone: Option<String>` The time zone to use for the metric data returned in this operation. The format is `+` or `-` followed by four digits. The first two digits indicate the number of hours ahead of or behind UTC, and the final two digits are the number of minutes. For example, +0130 indicates a time zone that is 1 hour and 30 minutes ahead of UTC. The default is +0000.

Trait Implementations --- source### impl Clone for LabelOptions source#### fn clone(&self) -> LabelOptions Returns a copy of the value.
1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. source### impl Debug for LabelOptions source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. source### impl Default for LabelOptions source#### fn default() -> LabelOptions Returns the “default value” for a type. source### impl PartialEq<LabelOptions> for LabelOptions source#### fn eq(&self, other: &LabelOptions) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. source#### fn ne(&self, other: &LabelOptions) -> bool This method tests for `!=`. source### impl StructuralPartialEq for LabelOptions

Auto Trait Implementations --- ### impl RefUnwindSafe for LabelOptions ### impl Send for LabelOptions ### impl Sync for LabelOptions ### impl Unpin for LabelOptions ### impl UnwindSafe for LabelOptions

Blanket Implementations --- Identical to the blanket implementations listed under `InsightRuleContributor` above.

Struct rusoto_cloudwatch::ListDashboardsInput
===

```
pub struct ListDashboardsInput {
    pub dashboard_name_prefix: Option<String>,
    pub next_token: Option<String>,
}
```

Fields --- `dashboard_name_prefix: Option<String>` If you specify this parameter, only the dashboards with names starting with the specified string are listed.
The maximum length is 255, and valid characters are A-Z, a-z, 0-9, ".", "-", and "_". `next_token: Option<String>` The token returned by a previous call to indicate that there is more data available.

Trait Implementations --- source### impl Clone for ListDashboardsInput source#### fn clone(&self) -> ListDashboardsInput Returns a copy of the value. 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. source### impl Debug for ListDashboardsInput source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. source### impl Default for ListDashboardsInput source#### fn default() -> ListDashboardsInput Returns the “default value” for a type. source### impl PartialEq<ListDashboardsInput> for ListDashboardsInput source#### fn eq(&self, other: &ListDashboardsInput) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. source#### fn ne(&self, other: &ListDashboardsInput) -> bool This method tests for `!=`. source### impl StructuralPartialEq for ListDashboardsInput

Auto Trait Implementations --- ### impl RefUnwindSafe for ListDashboardsInput ### impl Send for ListDashboardsInput ### impl Sync for ListDashboardsInput ### impl Unpin for ListDashboardsInput ### impl UnwindSafe for ListDashboardsInput

Blanket Implementations --- Identical to the blanket implementations listed under `InsightRuleContributor` above.
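Because every field of `ListDashboardsInput` is an `Option` and the struct implements `Default`, a request is conveniently built with struct-update syntax, setting only the fields you need. A sketch using a local stand-in with the documented fields (the real type lives in `rusoto_cloudwatch`):

```rust
// Stand-in mirroring the documented fields of ListDashboardsInput.
#[derive(Default, Debug)]
struct ListDashboardsInput {
    dashboard_name_prefix: Option<String>,
    next_token: Option<String>,
}

fn main() {
    // Set only the prefix filter; leave pagination at its default (None).
    let input = ListDashboardsInput {
        dashboard_name_prefix: Some("prod-".to_string()),
        ..Default::default()
    };
    assert_eq!(input.dashboard_name_prefix.as_deref(), Some("prod-"));
    assert!(input.next_token.is_none());
}
```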
Struct rusoto_cloudwatch::ListDashboardsOutput
===

```
pub struct ListDashboardsOutput {
    pub dashboard_entries: Option<Vec<DashboardEntry>>,
    pub next_token: Option<String>,
}
```

Fields --- `dashboard_entries: Option<Vec<DashboardEntry>>` The list of matching dashboards. `next_token: Option<String>` The token that marks the start of the next batch of returned results.

Trait Implementations --- source### impl Clone for ListDashboardsOutput source#### fn clone(&self) -> ListDashboardsOutput Returns a copy of the value. 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. source### impl Debug for ListDashboardsOutput source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. source### impl Default for ListDashboardsOutput source#### fn default() -> ListDashboardsOutput Returns the “default value” for a type. source### impl PartialEq<ListDashboardsOutput> for ListDashboardsOutput source#### fn eq(&self, other: &ListDashboardsOutput) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. source#### fn ne(&self, other: &ListDashboardsOutput) -> bool This method tests for `!=`. source### impl StructuralPartialEq for ListDashboardsOutput

Auto Trait Implementations --- ### impl RefUnwindSafe for ListDashboardsOutput ### impl Send for ListDashboardsOutput ### impl Sync for ListDashboardsOutput ### impl Unpin for ListDashboardsOutput ### impl UnwindSafe for ListDashboardsOutput

Blanket Implementations --- Identical to the blanket implementations listed under `InsightRuleContributor` above.
Struct rusoto_cloudwatch::ListMetricStreamsInput
===

```
pub struct ListMetricStreamsInput {
    pub max_results: Option<i64>,
    pub next_token: Option<String>,
}
```

Fields --- `max_results: Option<i64>` The maximum number of results to return in one operation. `next_token: Option<String>` Include this value, if it was returned by the previous call, to get the next set of metric streams.

Trait Implementations --- source### impl Clone for ListMetricStreamsInput source#### fn clone(&self) -> ListMetricStreamsInput Returns a copy of the value. 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. source### impl Debug for ListMetricStreamsInput source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. source### impl Default for ListMetricStreamsInput source#### fn default() -> ListMetricStreamsInput Returns the “default value” for a type. source### impl PartialEq<ListMetricStreamsInput> for ListMetricStreamsInput source#### fn eq(&self, other: &ListMetricStreamsInput) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. source#### fn ne(&self, other: &ListMetricStreamsInput) -> bool This method tests for `!=`.
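The `max_results`/`next_token` pair on `ListMetricStreamsInput` follows the usual CloudWatch pagination contract: resend the token from the previous response until no token comes back. A crate-independent sketch of that loop, with `fetch_page` standing in for the actual ListMetricStreams call:

```rust
// fetch_page is a stand-in for one ListMetricStreams call: it returns a
// batch of entries plus an optional continuation token.
fn fetch_page(token: Option<u32>) -> (Vec<String>, Option<u32>) {
    match token {
        None => (vec!["stream-a".to_string()], Some(1)),
        Some(1) => (vec!["stream-b".to_string()], None),
        _ => (Vec::new(), None),
    }
}

fn main() {
    let mut all = Vec::new();
    let mut token = None;
    loop {
        let (entries, next) = fetch_page(token);
        all.extend(entries);
        token = next;
        // A missing token means the service has no more batches to return.
        if token.is_none() {
            break;
        }
    }
    assert_eq!(all, vec!["stream-a".to_string(), "stream-b".to_string()]);
}
```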
source### impl StructuralPartialEq for ListMetricStreamsInput Auto Trait Implementations --- ### impl RefUnwindSafe for ListMetricStreamsInput ### impl Send for ListMetricStreamsInput ### impl Sync for ListMetricStreamsInput ### impl Unpin for ListMetricStreamsInput ### impl UnwindSafe for ListMetricStreamsInput Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. 
Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more Struct rusoto_cloudwatch::ListMetricStreamsOutput === ``` pub struct ListMetricStreamsOutput { pub entries: Option<Vec<MetricStreamEntry>>, pub next_token: Option<String>, } ``` Fields --- `entries: Option<Vec<MetricStreamEntry>>`The array of metric stream information. `next_token: Option<String>`The token that marks the start of the next batch of returned results. You can use this token in a subsequent operation to get the next batch of results. Trait Implementations --- source### impl Clone for ListMetricStreamsOutput source#### fn clone(&self) -> ListMetricStreamsOutput Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for ListMetricStreamsOutput source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for ListMetricStreamsOutput source#### fn default() -> ListMetricStreamsOutput Returns the “default value” for a type. 
Read more source### impl PartialEq<ListMetricStreamsOutput> for ListMetricStreamsOutput source#### fn eq(&self, other: &ListMetricStreamsOutput) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &ListMetricStreamsOutput) -> bool This method tests for `!=`. source### impl StructuralPartialEq for ListMetricStreamsOutput Auto Trait Implementations --- ### impl RefUnwindSafe for ListMetricStreamsOutput ### impl Send for ListMetricStreamsOutput ### impl Sync for ListMetricStreamsOutput ### impl Unpin for ListMetricStreamsOutput ### impl UnwindSafe for ListMetricStreamsOutput Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. 
source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more Struct rusoto_cloudwatch::ListMetricsInput === ``` pub struct ListMetricsInput { pub dimensions: Option<Vec<DimensionFilter>>, pub metric_name: Option<String>, pub namespace: Option<String>, pub next_token: Option<String>, pub recently_active: Option<String>, } ``` Fields --- `dimensions: Option<Vec<DimensionFilter>>`The dimensions to filter against. Only the dimensions that match exactly will be returned. `metric_name: Option<String>`The name of the metric to filter against. Only the metrics with names that match exactly will be returned. 
`namespace: Option<String>`
The metric namespace to filter against. Only the namespace that matches exactly will be returned.

`next_token: Option<String>`
The token returned by a previous call to indicate that there is more data available.

`recently_active: Option<String>`
To filter the results to show only metrics that have had data points published in the past three hours, specify this parameter with a value of `PT3H`. This is the only valid value for this parameter. The results that are returned are an approximation of the value you specify. There is a low probability that the returned results include metrics with last published data as much as 40 minutes more than the specified time interval.

Trait Implementations
---
`ListMetricsInput` implements `Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`, plus the auto traits `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`. The blanket implementations are identical to those listed above.

Struct rusoto_cloudwatch::ListMetricsOutput
===

```
pub struct ListMetricsOutput {
    pub metrics: Option<Vec<Metric>>,
    pub next_token: Option<String>,
}
```

Fields
---
`metrics: Option<Vec<Metric>>`
The metrics that match your request.

`next_token: Option<String>`
The token that marks the start of the next batch of returned results.

Trait Implementations
---
`ListMetricsOutput` implements `Clone`, `Debug`, `Default`, and `PartialEq`.
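The `next_token` fields of `ListMetricsInput` and `ListMetricsOutput` pair up into the usual pagination loop. A minimal sketch, using locally defined stand-ins for the two rusoto types and a fake two-page service in place of the real (async) `CloudWatchClient::list_metrics` call:

```rust
// Stand-ins mirroring the shapes of rusoto_cloudwatch::{ListMetricsInput, ListMetricsOutput}.
// The real types live in the rusoto_cloudwatch crate.
#[derive(Clone, Debug, Default)]
struct ListMetricsInput {
    namespace: Option<String>,
    next_token: Option<String>,
}

#[derive(Clone, Debug, Default)]
struct ListMetricsOutput {
    metrics: Option<Vec<String>>, // Vec<Metric> in the real crate
    next_token: Option<String>,
}

// Fake service returning two pages; the real call is async and hits AWS.
fn list_metrics(input: &ListMetricsInput) -> ListMetricsOutput {
    match input.next_token.as_deref() {
        None => ListMetricsOutput {
            metrics: Some(vec!["CPUUtilization".to_string()]),
            next_token: Some("page-2".to_string()),
        },
        Some("page-2") => ListMetricsOutput {
            metrics: Some(vec!["NetworkIn".to_string()]),
            next_token: None, // no more data available
        },
        _ => ListMetricsOutput::default(),
    }
}

fn main() {
    // Only set the fields you need; the rest default to None.
    let mut input = ListMetricsInput {
        namespace: Some("AWS/EC2".to_string()),
        ..Default::default()
    };
    let mut all = Vec::new();
    loop {
        let page = list_metrics(&input);
        all.extend(page.metrics.unwrap_or_default());
        match page.next_token {
            // Feed the token back in to request the next batch.
            Some(token) => input.next_token = Some(token),
            None => break,
        }
    }
    assert_eq!(all, vec!["CPUUtilization".to_string(), "NetworkIn".to_string()]);
}
```

The struct-update syntax over `Default` shown here is the idiomatic way to build rusoto inputs, since every filter field is an `Option`.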
`ListMetricsOutput` also implements `StructuralPartialEq` and the auto traits `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`; the blanket implementations are identical to those listed above.

Struct rusoto_cloudwatch::ListTagsForResourceInput
===

```
pub struct ListTagsForResourceInput {
    pub resource_arn: String,
}
```

Fields
---
`resource_arn: String`
The ARN of the CloudWatch resource that you want to view tags for.

The ARN format of an alarm is `arn:aws:cloudwatch:*Region*:*account-id*:alarm:*alarm-name*`

The ARN format of a Contributor Insights rule is `arn:aws:cloudwatch:*Region*:*account-id*:insight-rule:*insight-rule-name*`

For more information about ARN format, see Resource Types Defined by Amazon CloudWatch in the *Amazon Web Services General Reference*.

Trait Implementations
---
`ListTagsForResourceInput` implements `Clone` and `Debug`.
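The two ARN shapes that `resource_arn` accepts can be assembled with ordinary string formatting; a sketch (region, account ID, and resource names below are placeholder values):

```rust
// Build the two CloudWatch ARN forms accepted by ListTagsForResource.
// All argument values in main() are illustrative placeholders.
fn alarm_arn(region: &str, account_id: &str, alarm_name: &str) -> String {
    format!("arn:aws:cloudwatch:{region}:{account_id}:alarm:{alarm_name}")
}

fn insight_rule_arn(region: &str, account_id: &str, rule_name: &str) -> String {
    format!("arn:aws:cloudwatch:{region}:{account_id}:insight-rule:{rule_name}")
}

fn main() {
    let arn = alarm_arn("us-east-1", "123456789012", "HighCPU");
    assert_eq!(arn, "arn:aws:cloudwatch:us-east-1:123456789012:alarm:HighCPU");
    let rule = insight_rule_arn("us-east-1", "123456789012", "TopContributors");
    assert_eq!(rule, "arn:aws:cloudwatch:us-east-1:123456789012:insight-rule:TopContributors");
}
```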
`ListTagsForResourceInput` also implements `Default`, `PartialEq`, and `StructuralPartialEq`, plus the auto traits `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`; the blanket implementations are identical to those listed above.

Struct rusoto_cloudwatch::ListTagsForResourceOutput
===

```
pub struct ListTagsForResourceOutput {
    pub tags: Option<Vec<Tag>>,
}
```

Fields
---
`tags: Option<Vec<Tag>>`
The list of tag keys and values associated with the resource you specified.

Trait Implementations
---
`ListTagsForResourceOutput` implements `Clone`.
`ListTagsForResourceOutput` also implements `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`, plus the auto traits `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`; the blanket implementations are identical to those listed above.

Struct rusoto_cloudwatch::MessageData
===

```
pub struct MessageData {
    pub code: Option<String>,
    pub value: Option<String>,
}
```

A message returned by the `GetMetricData` API, including a code and a description.

Fields
---
`code: Option<String>`
The error code or status code associated with the message.

`value: Option<String>`
The message text.
Trait Implementations
---
`MessageData` implements `Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`, plus the auto traits `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`; the blanket implementations are identical to those listed above.

Struct rusoto_cloudwatch::Metric
===

```
pub struct Metric {
    pub dimensions: Option<Vec<Dimension>>,
    pub metric_name: Option<String>,
    pub namespace: Option<String>,
}
```

Represents a specific metric.

Fields
---
`dimensions: Option<Vec<Dimension>>`
The dimensions for the metric.

`metric_name: Option<String>`
The name of the metric. This is a required field.
`namespace: Option<String>`
The namespace of the metric.

Trait Implementations
---
`Metric` implements `Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`, plus the auto traits `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`; the blanket implementations are identical to those listed above.
Struct rusoto_cloudwatch::MetricAlarm
===

```
pub struct MetricAlarm {
    pub actions_enabled: Option<bool>,
    pub alarm_actions: Option<Vec<String>>,
    pub alarm_arn: Option<String>,
    pub alarm_configuration_updated_timestamp: Option<String>,
    pub alarm_description: Option<String>,
    pub alarm_name: Option<String>,
    pub comparison_operator: Option<String>,
    pub datapoints_to_alarm: Option<i64>,
    pub dimensions: Option<Vec<Dimension>>,
    pub evaluate_low_sample_count_percentile: Option<String>,
    pub evaluation_periods: Option<i64>,
    pub extended_statistic: Option<String>,
    pub insufficient_data_actions: Option<Vec<String>>,
    pub metric_name: Option<String>,
    pub metrics: Option<Vec<MetricDataQuery>>,
    pub namespace: Option<String>,
    pub ok_actions: Option<Vec<String>>,
    pub period: Option<i64>,
    pub state_reason: Option<String>,
    pub state_reason_data: Option<String>,
    pub state_updated_timestamp: Option<String>,
    pub state_value: Option<String>,
    pub statistic: Option<String>,
    pub threshold: Option<f64>,
    pub threshold_metric_id: Option<String>,
    pub treat_missing_data: Option<String>,
    pub unit: Option<String>,
}
```

The details about a metric alarm.

Fields
---
`actions_enabled: Option<bool>`
Indicates whether actions should be executed during any changes to the alarm state.

`alarm_actions: Option<Vec<String>>`
The actions to execute when this alarm transitions to the `ALARM` state from any other state. Each action is specified as an Amazon Resource Name (ARN).

`alarm_arn: Option<String>`
The Amazon Resource Name (ARN) of the alarm.

`alarm_configuration_updated_timestamp: Option<String>`
The time stamp of the last update to the alarm configuration.

`alarm_description: Option<String>`
The description of the alarm.

`alarm_name: Option<String>`
The name of the alarm.

`comparison_operator: Option<String>`
The arithmetic operation to use when comparing the specified statistic and threshold. The specified statistic value is used as the first operand.
`datapoints_to_alarm: Option<i64>`
The number of data points that must be breaching to trigger the alarm.

`dimensions: Option<Vec<Dimension>>`
The dimensions for the metric associated with the alarm.

`evaluate_low_sample_count_percentile: Option<String>`
Used only for alarms based on percentiles. If `ignore`, the alarm state does not change during periods with too few data points to be statistically significant. If `evaluate` or this parameter is not used, the alarm is always evaluated and possibly changes state no matter how many data points are available.

`evaluation_periods: Option<i64>`
The number of periods over which data is compared to the specified threshold.

`extended_statistic: Option<String>`
The percentile statistic for the metric associated with the alarm. Specify a value between p0.0 and p100.

`insufficient_data_actions: Option<Vec<String>>`
The actions to execute when this alarm transitions to the `INSUFFICIENT_DATA` state from any other state. Each action is specified as an Amazon Resource Name (ARN).

`metric_name: Option<String>`
The name of the metric associated with the alarm, if this is an alarm based on a single metric.

`metrics: Option<Vec<MetricDataQuery>>`
An array of MetricDataQuery structures, used in an alarm based on a metric math expression. Each structure either retrieves a metric or performs a math expression. One item in the Metrics array is the math expression that the alarm watches. This expression is designated by having `ReturnData` set to true.

`namespace: Option<String>`
The namespace of the metric associated with the alarm.

`ok_actions: Option<Vec<String>>`
The actions to execute when this alarm transitions to the `OK` state from any other state. Each action is specified as an Amazon Resource Name (ARN).

`period: Option<i64>`
The period, in seconds, over which the statistic is applied.

`state_reason: Option<String>`
An explanation for the alarm state, in text format.
`state_reason_data: Option<String>`
An explanation for the alarm state, in JSON format.

`state_updated_timestamp: Option<String>`
The time stamp of the last update to the alarm state.

`state_value: Option<String>`
The state value for the alarm.

`statistic: Option<String>`
The statistic for the metric associated with the alarm, other than percentile. For percentile statistics, use `ExtendedStatistic`.

`threshold: Option<f64>`
The value to compare with the specified statistic.

`threshold_metric_id: Option<String>`
In an alarm based on an anomaly detection model, this is the ID of the `ANOMALY_DETECTION_BAND` function used as the threshold for the alarm.

`treat_missing_data: Option<String>`
Sets how this alarm is to handle missing data points. If this parameter is omitted, the default behavior of `missing` is used.

`unit: Option<String>`
The unit of the metric associated with the alarm.

Trait Implementations
---
`MetricAlarm` implements `Clone`, `Debug`, `Default`, and `PartialEq`.
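`datapoints_to_alarm` and `evaluation_periods` together express an "M out of N" rule: the alarm transitions to `ALARM` only when at least M of the last N evaluated data points breach the threshold. A minimal local sketch of that rule, assuming a `GreaterThanThreshold` comparison (the real evaluation, including `treat_missing_data` handling, happens inside CloudWatch):

```rust
// "M out of N" check: at least `datapoints_to_alarm` of the last
// `evaluation_periods` data points must breach the threshold.
// GreaterThanThreshold is assumed as the comparison operator here.
fn breaches(values: &[f64], threshold: f64, evaluation_periods: usize, datapoints_to_alarm: usize) -> bool {
    // Look only at the most recent N evaluation periods.
    let recent = &values[values.len().saturating_sub(evaluation_periods)..];
    recent.iter().filter(|v| **v > threshold).count() >= datapoints_to_alarm
}

fn main() {
    // 3 of the last 5 points are above 80.0, so M=3/N=5 fires but M=4/N=5 does not.
    let cpu = [10.0, 85.0, 90.0, 20.0, 95.0];
    assert!(breaches(&cpu, 80.0, 5, 3));
    assert!(!breaches(&cpu, 80.0, 5, 4));
}
```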
source### impl StructuralPartialEq for MetricAlarm Auto Trait Implementations --- ### impl RefUnwindSafe for MetricAlarm ### impl Send for MetricAlarm ### impl Sync for MetricAlarm ### impl Unpin for MetricAlarm ### impl UnwindSafe for MetricAlarm Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. 
Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<Self> Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more Struct rusoto_cloudwatch::MetricDataQuery === ``` pub struct MetricDataQuery { pub expression: Option<String>, pub id: String, pub label: Option<String>, pub metric_stat: Option<MetricStat>, pub period: Option<i64>, pub return_data: Option<bool>, } ``` This structure is used in both `GetMetricData` and `PutMetricAlarm`. The supported use of this structure is different for those two operations. When used in `GetMetricData`, it indicates the metric data to return, and whether this call is just retrieving a batch set of data for one metric, or is performing a math expression on metric data. A single `GetMetricData` call can include up to 500 `MetricDataQuery` structures. When used in `PutMetricAlarm`, it enables you to create an alarm based on a metric math expression. Each `MetricDataQuery` in the array specifies either a metric to retrieve, or a math expression to be performed on retrieved metrics. A single `PutMetricAlarm` call can include up to 20 `MetricDataQuery` structures in the array.
The 20 structures can include as many as 10 structures that contain a `MetricStat` parameter to retrieve a metric, and as many as 10 structures that contain the `Expression` parameter to perform a math expression. Of those `Expression` structures, one must have `True` as the value for `ReturnData`. The result of this expression is the value the alarm watches. Any expression used in a `PutMetricAlarm` operation must return a single time series. For more information, see Metric Math Syntax and Functions in the *Amazon CloudWatch User Guide*. Some of the parameters of this structure also have different uses depending on whether you are using this structure in a `GetMetricData` operation or a `PutMetricAlarm` operation. These differences are explained in the following parameter list. Fields --- `expression: Option<String>`The math expression to be performed on the returned data, if this object is performing a math expression. This expression can use the `Id` of the other metrics to refer to those metrics, and can also use the `Id` of other expressions to use the result of those expressions. For more information about metric math expressions, see Metric Math Syntax and Functions in the *Amazon CloudWatch User Guide*. Within each MetricDataQuery object, you must specify either `Expression` or `MetricStat` but not both. `id: String`A short name used to tie this object to the results in the response. This name must be unique within a single call to `GetMetricData`. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscore. The first character must be a lowercase letter. `label: Option<String>`A human-readable label for this metric or expression. This is especially useful if this is an expression, so that you know what the value represents. If the metric or expression is shown in a CloudWatch dashboard widget, the label is shown.
If Label is omitted, CloudWatch generates a default. You can put dynamic expressions into a label, so that it is more descriptive. For more information, see Using Dynamic Labels. `metric_stat: Option<MetricStat>`The metric to be returned, along with statistics, period, and units. Use this parameter only if this object is retrieving a metric and not performing a math expression on returned data. Within one MetricDataQuery object, you must specify either `Expression` or `MetricStat` but not both. `period: Option<i64>`The granularity, in seconds, of the returned data points. For metrics with regular resolution, a period can be as short as one minute (60 seconds) and must be a multiple of 60. For high-resolution metrics that are collected at intervals of less than one minute, the period can be 1, 5, 10, 30, 60, or any multiple of 60. High-resolution metrics are those metrics stored by a `PutMetricData` operation that includes a `StorageResolution of 1 second`. `return_data: Option<bool>`When used in `GetMetricData`, this option indicates whether to return the timestamps and raw data values of this metric. If you are performing this call just to do math expressions and do not also need the raw data returned, you can specify `False`. If you omit this, the default of `True` is used. When used in `PutMetricAlarm`, specify `True` for the one expression result to use as the alarm. For all other metrics and expressions in the same `PutMetricAlarm` operation, specify `ReturnData` as False. Trait Implementations --- source### impl Clone for MetricDataQuery source#### fn clone(&self) -> MetricDataQuery Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for MetricDataQuery source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. 
Read more source### impl Default for MetricDataQuery source#### fn default() -> MetricDataQuery Returns the “default value” for a type. Read more source### impl PartialEq<MetricDataQuery> for MetricDataQuery source#### fn eq(&self, other: &MetricDataQuery) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &MetricDataQuery) -> bool This method tests for `!=`. source### impl StructuralPartialEq for MetricDataQuery Auto Trait Implementations --- ### impl RefUnwindSafe for MetricDataQuery ### impl Send for MetricDataQuery ### impl Sync for MetricDataQuery ### impl Unpin for MetricDataQuery ### impl UnwindSafe for MetricDataQuery Blanket Implementations --- *Identical to the blanket implementations listed under `MetricAlarm` above.*
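The `Expression`/`MetricStat` exclusivity and the `Id` naming rule documented for `MetricDataQuery` above can be sketched with local stand-in types. This is an illustration under stated assumptions: the structs below mirror only the documented fields, and `is_valid` is a hypothetical helper, not a rusoto API.

```rust
// Placeholder for the real rusoto_cloudwatch::MetricStat (illustration only).
#[derive(Default)]
struct MetricStat;

// Local mirror of the documented MetricDataQuery fields.
#[derive(Default)]
struct MetricDataQuery {
    expression: Option<String>,
    id: String,
    metric_stat: Option<MetricStat>,
    return_data: Option<bool>,
}

// Per the field docs: each query must set exactly one of `Expression`
// or `MetricStat`, and `Id` must begin with a lowercase letter.
fn is_valid(q: &MetricDataQuery) -> bool {
    let exactly_one = q.expression.is_some() ^ q.metric_stat.is_some();
    let id_ok = q
        .id
        .chars()
        .next()
        .map_or(false, |c| c.is_ascii_lowercase());
    exactly_one && id_ok
}

fn main() {
    let expr_query = MetricDataQuery {
        id: "e1".to_string(),
        expression: Some("m1 / m2".to_string()),
        return_data: Some(true),
        ..Default::default()
    };
    println!("expression query valid: {}", is_valid(&expr_query));
}
```

A query that sets both `expression` and `metric_stat`, or whose `id` starts with an uppercase letter, fails this check, mirroring the constraints in the field docs.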
Struct rusoto_cloudwatch::MetricDataResult === ``` pub struct MetricDataResult { pub id: Option<String>, pub label: Option<String>, pub messages: Option<Vec<MessageData>>, pub status_code: Option<String>, pub timestamps: Option<Vec<String>>, pub values: Option<Vec<f64>>, } ``` A `GetMetricData` call returns an array of `MetricDataResult` structures. Each of these structures includes the data points for that metric, along with the timestamps of those data points and other identifying information. Fields --- `id: Option<String>`The short name you specified to represent this metric.
`label: Option<String>`The human-readable label associated with the data. `messages: Option<Vec<MessageData>>`A list of messages with additional information about the data returned. `status_code: Option<String>`The status of the returned data. `Complete` indicates that all data points in the requested time range were returned. `PartialData` means that an incomplete set of data points was returned. You can use the `NextToken` value that was returned and repeat your request to get more data points. `NextToken` is not returned if you are performing a math expression. `InternalError` indicates that an error occurred. Retry your request using `NextToken`, if present. `timestamps: Option<Vec<String>>`The timestamps for the data points, formatted in Unix timestamp format. The number of timestamps always matches the number of values and the value for Timestamps[x] is Values[x]. `values: Option<Vec<f64>>`The data points for the metric corresponding to `Timestamps`. The number of values always matches the number of timestamps and the timestamp for Values[x] is Timestamps[x]. Trait Implementations --- source### impl Clone for MetricDataResult source#### fn clone(&self) -> MetricDataResult Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for MetricDataResult source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for MetricDataResult source#### fn default() -> MetricDataResult Returns the “default value” for a type. Read more source### impl PartialEq<MetricDataResult> for MetricDataResult source#### fn eq(&self, other: &MetricDataResult) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &MetricDataResult) -> bool This method tests for `!=`.
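Because `Timestamps[x]` corresponds to `Values[x]`, pairing the two parallel arrays is a simple zip. A minimal sketch with a local stand-in type (the real `rusoto_cloudwatch::MetricDataResult` wraps these fields in `Option`; the `DataSeries` name and `paired` helper are assumptions for illustration):

```rust
// Local stand-in for the two parallel arrays (illustration only;
// the rusoto fields are Option<Vec<String>> and Option<Vec<f64>>).
struct DataSeries {
    timestamps: Vec<String>,
    values: Vec<f64>,
}

// Zip the parallel arrays into (timestamp, value) pairs, matching the
// documented guarantee that Timestamps[x] corresponds to Values[x].
fn paired(series: &DataSeries) -> Vec<(&str, f64)> {
    series
        .timestamps
        .iter()
        .map(String::as_str)
        .zip(series.values.iter().copied())
        .collect()
}

fn main() {
    let series = DataSeries {
        timestamps: vec!["1609459200".to_string(), "1609459260".to_string()],
        values: vec![1.5, 2.5],
    };
    for (ts, v) in paired(&series) {
        println!("{ts} -> {v}");
    }
}
```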
source### impl StructuralPartialEq for MetricDataResult Auto Trait Implementations --- ### impl RefUnwindSafe for MetricDataResult ### impl Send for MetricDataResult ### impl Sync for MetricDataResult ### impl Unpin for MetricDataResult ### impl UnwindSafe for MetricDataResult Blanket Implementations --- *Identical to the blanket implementations listed under `MetricAlarm` above.*
Struct rusoto_cloudwatch::MetricDatum === ``` pub struct MetricDatum { pub counts: Option<Vec<f64>>, pub dimensions: Option<Vec<Dimension>>, pub metric_name: String, pub statistic_values: Option<StatisticSet>, pub storage_resolution: Option<i64>, pub timestamp: Option<String>, pub unit: Option<String>, pub value: Option<f64>, pub values: Option<Vec<f64>>, } ``` Encapsulates the information sent to either create a metric or add new values to be aggregated into an existing metric. Fields --- `counts: Option<Vec<f64>>`Array of numbers that is used along with the `Values` array. Each number in the `Counts` array is the number of times the corresponding value in the `Values` array occurred during the period. If you omit the `Counts` array, the default of 1 is used as the value for each count. If you include a `Counts` array, it must include the same number of values as the `Values` array. `dimensions: Option<Vec<Dimension>>`The dimensions associated with the metric. `metric_name: String`The name of the metric.
`statistic_values: Option<StatisticSet>`The statistical values for the metric. `storage_resolution: Option<i64>`Valid values are 1 and 60. Setting this to 1 specifies this metric as a high-resolution metric, so that CloudWatch stores the metric with sub-minute resolution down to one second. Setting this to 60 specifies this metric as a regular-resolution metric, which CloudWatch stores at 1-minute resolution. Currently, high resolution is available only for custom metrics. For more information about high-resolution metrics, see High-Resolution Metrics in the *Amazon CloudWatch User Guide*. This field is optional; if you do not specify it, the default of 60 is used. `timestamp: Option<String>`The time the metric data was received, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC. `unit: Option<String>`When you are using a `Put` operation, this defines what unit you want to use when storing the metric. In a `Get` operation, this displays the unit that is used for the metric. `value: Option<f64>`The value for the metric. Although the parameter accepts numbers of type Double, CloudWatch rejects values that are either too small or too large. Values must be in the range of -2^360 to 2^360. In addition, special values (for example, NaN, +Infinity, -Infinity) are not supported. `values: Option<Vec<f64>>`Array of numbers representing the values for the metric during the period. Each unique value is listed just once in this array, and the corresponding number in the `Counts` array specifies the number of times that value occurred during the period. You can include up to 150 unique values in each `PutMetricData` action that specifies a `Values` array. Although the `Values` array accepts numbers of type `Double`, CloudWatch rejects values that are either too small or too large. Values must be in the range of -2^360 to 2^360. In addition, special values (for example, NaN, +Infinity, -Infinity) are not supported.
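The `Values`/`Counts` relationship above amounts to a weighted histogram: `Counts[i]` is how many times `Values[i]` occurred, and an omitted `Counts` array means a count of 1 per value. A small sketch (the helper names and free-function form are assumptions for illustration, not rusoto APIs):

```rust
// Per the field docs: Counts[i] is how many times Values[i] occurred
// during the period; an omitted Counts array means a count of 1 each.
fn total_sample_count(values: &[f64], counts: Option<&[f64]>) -> f64 {
    match counts {
        Some(c) => c.iter().sum(),
        None => values.len() as f64,
    }
}

// Weighted sum of all samples, useful for sanity-checking a datum
// before calling PutMetricData.
fn weighted_sum(values: &[f64], counts: Option<&[f64]>) -> f64 {
    match counts {
        Some(c) => values.iter().zip(c).map(|(v, n)| v * n).sum(),
        None => values.iter().sum(),
    }
}

fn main() {
    let values = [1.0, 5.0];
    let counts = [3.0, 2.0];
    println!("samples = {}", total_sample_count(&values, Some(&counts)));
    println!("sum = {}", weighted_sum(&values, Some(&counts)));
}
```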
Trait Implementations --- source### impl Clone for MetricDatum source#### fn clone(&self) -> MetricDatum Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for MetricDatum source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for MetricDatum source#### fn default() -> MetricDatum Returns the “default value” for a type. Read more source### impl PartialEq<MetricDatum> for MetricDatum source#### fn eq(&self, other: &MetricDatum) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &MetricDatum) -> bool This method tests for `!=`. source### impl StructuralPartialEq for MetricDatum Auto Trait Implementations --- ### impl RefUnwindSafe for MetricDatum ### impl Send for MetricDatum ### impl Sync for MetricDatum ### impl Unpin for MetricDatum ### impl UnwindSafe for MetricDatum Blanket Implementations --- *Identical to the blanket implementations listed under `MetricAlarm` above.*
Struct rusoto_cloudwatch::MetricStat === ``` pub struct MetricStat { pub metric: Metric, pub period: i64, pub stat: String, pub unit: Option<String>, } ``` This structure defines the metric to be returned, along with the statistics, period, and units. Fields --- `metric: Metric`The metric to return, including the metric name, namespace, and dimensions.
`period: i64`The granularity, in seconds, of the returned data points. For metrics with regular resolution, a period can be as short as one minute (60 seconds) and must be a multiple of 60. For high-resolution metrics that are collected at intervals of less than one minute, the period can be 1, 5, 10, 30, 60, or any multiple of 60. High-resolution metrics are those metrics stored by a `PutMetricData` call that includes a `StorageResolution` of 1 second. If the `StartTime` parameter specifies a time stamp that is more than 3 hours ago, you must specify the period as follows or no data points in that time range are returned: * Start time between 3 hours and 15 days ago - Use a multiple of 60 seconds (1 minute). * Start time between 15 and 63 days ago - Use a multiple of 300 seconds (5 minutes). * Start time greater than 63 days ago - Use a multiple of 3600 seconds (1 hour). `stat: String`The statistic to return. It can include any CloudWatch statistic or extended statistic. `unit: Option<String>`When you are using a `Put` operation, this defines what unit you want to use when storing the metric. In a `Get` operation, if you omit `Unit` then all data that was collected with any unit is returned, along with the corresponding units that were specified when the data was reported to CloudWatch. If you specify a unit, the operation returns only data that was collected with that unit specified. If you specify a unit that does not match the data collected, the results of the operation are null. CloudWatch does not perform unit conversions. Trait Implementations --- source### impl Clone for MetricStat source#### fn clone(&self) -> MetricStat Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for MetricStat source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter.
Read more source### impl Default for MetricStat source#### fn default() -> MetricStat Returns the “default value” for a type. Read more source### impl PartialEq<MetricStat> for MetricStat source#### fn eq(&self, other: &MetricStat) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &MetricStat) -> bool This method tests for `!=`. source### impl StructuralPartialEq for MetricStat Auto Trait Implementations --- ### impl RefUnwindSafe for MetricStat ### impl Send for MetricStat ### impl Sync for MetricStat ### impl Unpin for MetricStat ### impl UnwindSafe for MetricStat Blanket Implementations --- *Identical to the blanket implementations listed under `MetricAlarm` above.*
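The `StartTime`-dependent period constraints documented under `period` above can be captured in a small helper. The function name and the day-based signature are assumptions for illustration; the thresholds come straight from the field docs:

```rust
// Per the `period` docs: data older than 15 days requires coarser
// periods. Returns the smallest valid period granularity, in seconds,
// for a query whose StartTime lies `start_age_days` in the past.
fn minimum_period_seconds(start_age_days: f64) -> i64 {
    if start_age_days > 63.0 {
        3600 // start time more than 63 days ago: multiples of 1 hour
    } else if start_age_days > 15.0 {
        300 // start time between 15 and 63 days ago: multiples of 5 minutes
    } else {
        60 // start time up to 15 days ago: multiples of 1 minute
    }
}

fn main() {
    for days in [1.0, 20.0, 90.0] {
        println!("{days} days -> {} s", minimum_period_seconds(days));
    }
}
```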
Struct rusoto_cloudwatch::MetricStreamEntry === ``` pub struct MetricStreamEntry { pub arn: Option<String>, pub creation_date: Option<String>, pub firehose_arn: Option<String>, pub last_update_date: Option<String>, pub name: Option<String>, pub output_format: Option<String>, pub state: Option<String>, } ``` This structure contains the configuration information about one metric stream. Fields --- `arn: Option<String>`The ARN of the metric stream. `creation_date: Option<String>`The date that the metric stream was originally created.
`firehose_arn: Option<String>`The ARN of the Kinesis Firehose delivery stream that is used for this metric stream. `last_update_date: Option<String>`The date that the configuration of this metric stream was most recently updated. `name: Option<String>`The name of the metric stream. `output_format: Option<String>`The output format of this metric stream. Valid values are `json` and `opentelemetry0.7`. `state: Option<String>`The current state of this stream. Valid values are `running` and `stopped`. Trait Implementations --- source### impl Clone for MetricStreamEntry source#### fn clone(&self) -> MetricStreamEntry Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for MetricStreamEntry source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for MetricStreamEntry source#### fn default() -> MetricStreamEntry Returns the “default value” for a type. Read more source### impl PartialEq<MetricStreamEntry> for MetricStreamEntry source#### fn eq(&self, other: &MetricStreamEntry) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &MetricStreamEntry) -> bool This method tests for `!=`. source### impl StructuralPartialEq for MetricStreamEntry Auto Trait Implementations --- ### impl RefUnwindSafe for MetricStreamEntry ### impl Send for MetricStreamEntry ### impl Sync for MetricStreamEntry ### impl Unpin for MetricStreamEntry ### impl UnwindSafe for MetricStreamEntry Blanket Implementations --- *Identical to the blanket implementations listed under `MetricAlarm` above.*
Struct rusoto_cloudwatch::MetricStreamFilter === ``` pub struct MetricStreamFilter { pub namespace: Option<String>, } ``` This structure contains the name of one of the metric namespaces that is listed in a filter of a metric stream. Fields --- `namespace: Option<String>`The name of the metric namespace in the filter. Trait Implementations --- source### impl Clone for MetricStreamFilter source#### fn clone(&self) -> MetricStreamFilter Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for MetricStreamFilter source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for MetricStreamFilter source#### fn default() -> MetricStreamFilter Returns the “default value” for a type. Read more source### impl PartialEq<MetricStreamFilter> for MetricStreamFilter source#### fn eq(&self, other: &MetricStreamFilter) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &MetricStreamFilter) -> bool This method tests for `!=`.
`MetricStreamFilter` also implements `StructuralPartialEq`, the auto traits `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`, and the blanket implementations listed above.

Struct rusoto_cloudwatch::PartialFailure
===

```
pub struct PartialFailure {
    pub exception_type: Option<String>,
    pub failure_code: Option<String>,
    pub failure_description: Option<String>,
    pub failure_resource: Option<String>,
}
```

This array is empty if the API operation was successful for all the rules specified in the request. If the operation could not process one of the rules, the following data is returned for each of those rules.

Fields
---

`exception_type: Option<String>`
The type of error.

`failure_code: Option<String>`
The code of the error.

`failure_description: Option<String>`
A description of the error.

`failure_resource: Option<String>`
The specified rule that could not be deleted.

Trait Implementations
---

`PartialFailure` implements `Clone`.
`PartialFailure` also implements `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`, the auto traits `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`, and the blanket implementations listed above.

Struct rusoto_cloudwatch::PutAnomalyDetectorInput
===

```
pub struct PutAnomalyDetectorInput {
    pub configuration: Option<AnomalyDetectorConfiguration>,
    pub dimensions: Option<Vec<Dimension>>,
    pub metric_name: String,
    pub namespace: String,
    pub stat: String,
}
```

Fields
---

`configuration: Option<AnomalyDetectorConfiguration>`
The configuration specifies details about how the anomaly detection model is to be trained, including time ranges to exclude when training and updating the model.
You can specify as many as 10 time ranges. The configuration can also include the time zone to use for the metric.

`dimensions: Option<Vec<Dimension>>`
The metric dimensions to create the anomaly detection model for.

`metric_name: String`
The name of the metric to create the anomaly detection model for.

`namespace: String`
The namespace of the metric to create the anomaly detection model for.

`stat: String`
The statistic to use for the metric and the anomaly detection model.

Trait Implementations
---

`PutAnomalyDetectorInput` implements `Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`, the auto traits `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`, and the blanket implementations listed above.
Struct rusoto_cloudwatch::PutAnomalyDetectorOutput
===

```
pub struct PutAnomalyDetectorOutput {}
```

Trait Implementations
---

`PutAnomalyDetectorOutput` implements `Clone`, `Debug`, `Default`, and `PartialEq`.
`PutAnomalyDetectorOutput` also implements `StructuralPartialEq`, the auto traits `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`, and the blanket implementations listed above.

Struct rusoto_cloudwatch::PutCompositeAlarmInput
===

```
pub struct PutCompositeAlarmInput {
    pub actions_enabled: Option<bool>,
    pub alarm_actions: Option<Vec<String>>,
    pub alarm_description: Option<String>,
    pub alarm_name: String,
    pub alarm_rule: String,
    pub insufficient_data_actions: Option<Vec<String>>,
    pub ok_actions: Option<Vec<String>>,
    pub tags: Option<Vec<Tag>>,
}
```

Fields
---

`actions_enabled: Option<bool>`
Indicates whether actions should be executed during any changes to the alarm state of the composite alarm. The default is `TRUE`.

`alarm_actions: Option<Vec<String>>`
The actions to execute when this alarm transitions to the `ALARM` state from any other state. Each action is specified as an Amazon Resource Name (ARN).

Valid Values: `arn:aws:sns:*region*:*account-id*:*sns-topic-name*` | `arn:aws:ssm:*region*:*account-id*:opsitem:*severity*`

`alarm_description: Option<String>`
The description for the composite alarm.

`alarm_name: String`
The name for the composite alarm. This name must be unique within the Region.
`alarm_rule: String`
An expression that specifies which other alarms are to be evaluated to determine this composite alarm's state. For each alarm that you reference, you designate a function that specifies whether that alarm needs to be in ALARM state, OK state, or INSUFFICIENT_DATA state. You can use operators (AND, OR, and NOT) to combine multiple functions in a single expression, and you can use parentheses to logically group the functions in your expression. You can use either alarm names or ARNs to reference the other alarms that are to be evaluated.

Functions can include the following:

* `ALARM("*alarm-name* or *alarm-ARN*")` is TRUE if the named alarm is in ALARM state.
* `OK("*alarm-name* or *alarm-ARN*")` is TRUE if the named alarm is in OK state.
* `INSUFFICIENT_DATA("*alarm-name* or *alarm-ARN*")` is TRUE if the named alarm is in INSUFFICIENT_DATA state.
* `TRUE` always evaluates to TRUE.
* `FALSE` always evaluates to FALSE.

TRUE and FALSE are useful for testing a complex `AlarmRule` structure, and for testing your alarm actions. Alarm names specified in `AlarmRule` can be surrounded with double quotes ("), but do not have to be.

The following are some examples of `AlarmRule`:

* `ALARM(CPUUtilizationTooHigh) AND ALARM(DiskReadOpsTooHigh)` specifies that the composite alarm goes into ALARM state only if both CPUUtilizationTooHigh and DiskReadOpsTooHigh alarms are in ALARM state.
* `ALARM(CPUUtilizationTooHigh) AND NOT ALARM(DeploymentInProgress)` specifies that the alarm goes to ALARM state if CPUUtilizationTooHigh is in ALARM state and DeploymentInProgress is not in ALARM state. This example reduces alarm noise during a known deployment window.
* `(ALARM(CPUUtilizationTooHigh) OR ALARM(DiskReadOpsTooHigh)) AND OK(NetworkOutTooHigh)` goes into ALARM state if CPUUtilizationTooHigh OR DiskReadOpsTooHigh is in ALARM state, and if NetworkOutTooHigh is in OK state. This provides another example of using a composite alarm to prevent noise. This rule ensures that you are not notified with an alarm action on high CPU or disk usage if a known network problem is also occurring.

The `AlarmRule` can specify as many as 100 "children" alarms. The `AlarmRule` expression can have as many as 500 elements. Elements are child alarms, TRUE or FALSE statements, and parentheses.

`insufficient_data_actions: Option<Vec<String>>`
The actions to execute when this alarm transitions to the `INSUFFICIENT_DATA` state from any other state. Each action is specified as an Amazon Resource Name (ARN).

Valid Values: `arn:aws:sns:*region*:*account-id*:*sns-topic-name*`

`ok_actions: Option<Vec<String>>`
The actions to execute when this alarm transitions to an `OK` state from any other state. Each action is specified as an Amazon Resource Name (ARN).

Valid Values: `arn:aws:sns:*region*:*account-id*:*sns-topic-name*`

`tags: Option<Vec<Tag>>`
A list of key-value pairs to associate with the composite alarm. You can associate as many as 50 tags with an alarm.

Tags can help you organize and categorize your resources. You can also use them to scope user permissions, by granting a user permission to access or change only resources with certain tag values.

Trait Implementations
---

`PutCompositeAlarmInput` implements `Clone`, `Debug`, and `Default`.
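An `AlarmRule` is ultimately just a `String` assigned to the `alarm_rule` field, so the grammar above is plain string composition. A minimal sketch of building such an expression; the helper functions are illustrative and not part of `rusoto_cloudwatch`:

```rust
// Illustrative helpers for composing an AlarmRule expression string.
// These are NOT provided by rusoto_cloudwatch; the rule is simply the
// String value of PutCompositeAlarmInput::alarm_rule.
fn alarm(name: &str) -> String {
    format!("ALARM({})", name)
}

fn ok(name: &str) -> String {
    format!("OK({})", name)
}

fn and(lhs: &str, rhs: &str) -> String {
    // Parentheses logically group the sub-expressions, as the syntax allows.
    format!("({} AND {})", lhs, rhs)
}

fn main() {
    let rule = and(&alarm("CPUUtilizationTooHigh"), &ok("NetworkOutTooHigh"));
    // This string would be assigned to the alarm_rule field of the request.
    println!("{}", rule);
}
```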
`PutCompositeAlarmInput` also implements `PartialEq` and `StructuralPartialEq`, the auto traits `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`, and the blanket implementations listed above.

Struct rusoto_cloudwatch::PutDashboardInput
===

```
pub struct PutDashboardInput {
    pub dashboard_body: String,
    pub dashboard_name: String,
}
```

Fields
---

`dashboard_body: String`
The detailed information about the dashboard in JSON format, including the widgets to include and their location on the dashboard. This parameter is required. For more information about the syntax, see Dashboard Body Structure and Syntax.

`dashboard_name: String`
The name of the dashboard. If a dashboard with this name already exists, this call modifies that dashboard, replacing its current contents. Otherwise, a new dashboard is created. The maximum length is 255, and valid characters are A-Z, a-z, 0-9, "-", and "_". This parameter is required.
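The documented `dashboard_name` constraints can be checked client-side before calling the API. A minimal sketch; the helper function is hypothetical and not provided by `rusoto_cloudwatch`:

```rust
// Hypothetical pre-flight check for the documented dashboard_name rules:
// at most 255 characters, drawn from A-Z, a-z, 0-9, "-", and "_".
fn is_valid_dashboard_name(name: &str) -> bool {
    name.len() <= 255
        && name
            .chars()
            .all(|c| c.is_ascii_alphanumeric() || c == '-' || c == '_')
}

fn main() {
    println!("{}", is_valid_dashboard_name("prod-metrics_1")); // within the documented rules
    println!("{}", is_valid_dashboard_name("bad name!"));      // space and '!' are not allowed
}
```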
Trait Implementations
---

`PutDashboardInput` implements `Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`, the auto traits `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`, and the blanket implementations listed above.
Struct rusoto_cloudwatch::PutDashboardOutput
===

```
pub struct PutDashboardOutput {
    pub dashboard_validation_messages: Option<Vec<DashboardValidationMessage>>,
}
```

Fields
---

`dashboard_validation_messages: Option<Vec<DashboardValidationMessage>>`
If the input for `PutDashboard` was correct and the dashboard was successfully created or modified, this result is empty.

If this result includes only warning messages, then the input was valid enough for the dashboard to be created or modified, but some elements of the dashboard might not render.

If this result includes error messages, the input was not valid and the operation failed.

Trait Implementations
---

`PutDashboardOutput` implements `Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`, the auto traits `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`, and the blanket implementations listed above.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more Struct rusoto_cloudwatch::PutInsightRuleInput === ``` pub struct PutInsightRuleInput { pub rule_definition: String, pub rule_name: String, pub rule_state: Option<String>, pub tags: Option<Vec<Tag>>, } ``` Fields --- `rule_definition: String`The definition of the rule, as a JSON object. For details on the valid syntax, see Contributor Insights Rule Syntax. `rule_name: String`A unique name for the rule. `rule_state: Option<String>`The state of the rule. Valid values are ENABLED and DISABLED. `tags: Option<Vec<Tag>>`A list of key-value pairs to associate with the Contributor Insights rule. You can associate as many as 50 tags with a rule. Tags can help you organize and categorize your resources. You can also use them to scope user permissions, by granting a user permission to access or change only the resources that have certain tag values. To be able to associate tags with a rule, you must have the `cloudwatch:TagResource` permission in addition to the `cloudwatch:PutInsightRule` permission. If you are using this operation to update an existing Contributor Insights rule, any tags you specify in this parameter are ignored. To change the tags of an existing rule, use TagResource. Trait Implementations --- source### impl Clone for PutInsightRuleInput source#### fn clone(&self) -> PutInsightRuleInput Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. 
### impl Debug for PutInsightRuleInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for PutInsightRuleInput
#### fn default() -> PutInsightRuleInput
Returns the “default value” for a type.
### impl PartialEq<PutInsightRuleInput> for PutInsightRuleInput
#### fn eq(&self, other: &PutInsightRuleInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &PutInsightRuleInput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for PutInsightRuleInput

Auto Trait Implementations
---
### impl RefUnwindSafe for PutInsightRuleInput
### impl Send for PutInsightRuleInput
### impl Sync for PutInsightRuleInput
### impl Unpin for PutInsightRuleInput
### impl UnwindSafe for PutInsightRuleInput

Struct rusoto_cloudwatch::PutInsightRuleOutput
===

```
pub struct PutInsightRuleOutput {}
```

Trait Implementations
---
### impl Clone for PutInsightRuleOutput
#### fn clone(&self) -> PutInsightRuleOutput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for PutInsightRuleOutput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for PutInsightRuleOutput
#### fn default() -> PutInsightRuleOutput
Returns the “default value” for a type.
### impl PartialEq<PutInsightRuleOutput> for PutInsightRuleOutput
#### fn eq(&self, other: &PutInsightRuleOutput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for PutInsightRuleOutput

Auto Trait Implementations
---
### impl RefUnwindSafe for PutInsightRuleOutput
### impl Send for PutInsightRuleOutput
### impl Sync for PutInsightRuleOutput
### impl Unpin for PutInsightRuleOutput
### impl UnwindSafe for PutInsightRuleOutput
Struct rusoto_cloudwatch::PutMetricAlarmInput
===

```
pub struct PutMetricAlarmInput {
    pub actions_enabled: Option<bool>,
    pub alarm_actions: Option<Vec<String>>,
    pub alarm_description: Option<String>,
    pub alarm_name: String,
    pub comparison_operator: String,
    pub datapoints_to_alarm: Option<i64>,
    pub dimensions: Option<Vec<Dimension>>,
    pub evaluate_low_sample_count_percentile: Option<String>,
    pub evaluation_periods: i64,
    pub extended_statistic: Option<String>,
    pub insufficient_data_actions: Option<Vec<String>>,
    pub metric_name: Option<String>,
    pub metrics: Option<Vec<MetricDataQuery>>,
    pub namespace: Option<String>,
    pub ok_actions: Option<Vec<String>>,
    pub period: Option<i64>,
    pub statistic: Option<String>,
    pub tags: Option<Vec<Tag>>,
    pub threshold: Option<f64>,
    pub threshold_metric_id: Option<String>,
    pub treat_missing_data: Option<String>,
    pub unit: Option<String>,
}
```

Fields
---
`actions_enabled: Option<bool>` Indicates whether actions should be executed during any changes to the alarm state. The default is `TRUE`.
`alarm_actions: Option<Vec<String>>` The actions to execute when this alarm transitions to the `ALARM` state from any other state. Each action is specified as an Amazon Resource Name (ARN).
Valid Values: `arn:aws:automate:*region*:ec2:stop` | `arn:aws:automate:*region*:ec2:terminate` | `arn:aws:automate:*region*:ec2:recover` | `arn:aws:automate:*region*:ec2:reboot` | `arn:aws:sns:*region*:*account-id*:*sns-topic-name*` | `arn:aws:autoscaling:*region*:*account-id*:scalingPolicy:*policy-id*:autoScalingGroupName/*group-friendly-name*:policyName/*policy-friendly-name*` | `arn:aws:ssm:*region*:*account-id*:opsitem:*severity*`

Valid Values (for use with IAM roles): `arn:aws:swf:*region*:*account-id*:action/actions/AWS_EC2.InstanceId.Stop/1.0` | `arn:aws:swf:*region*:*account-id*:action/actions/AWS_EC2.InstanceId.Terminate/1.0` | `arn:aws:swf:*region*:*account-id*:action/actions/AWS_EC2.InstanceId.Reboot/1.0`

`alarm_description: Option<String>` The description for the alarm.
`alarm_name: String` The name for the alarm. This name must be unique within the Region.
`comparison_operator: String` The arithmetic operation to use when comparing the specified statistic and threshold. The specified statistic value is used as the first operand. The values `LessThanLowerOrGreaterThanUpperThreshold`, `LessThanLowerThreshold`, and `GreaterThanUpperThreshold` are used only for alarms based on anomaly detection models.
`datapoints_to_alarm: Option<i64>` The number of data points that must be breaching to trigger the alarm. This is used only if you are setting an "M out of N" alarm. In that case, this value is the M. For more information, see Evaluating an Alarm in the *Amazon CloudWatch User Guide*.
`dimensions: Option<Vec<Dimension>>` The dimensions for the metric specified in `MetricName`.
`evaluate_low_sample_count_percentile: Option<String>` Used only for alarms based on percentiles. If you specify `ignore`, the alarm state does not change during periods with too few data points to be statistically significant. If you specify `evaluate` or omit this parameter, the alarm is always evaluated and possibly changes state no matter how many data points are available. For more information, see Percentile-Based CloudWatch Alarms and Low Data Samples.

Valid Values: `evaluate | ignore`

`evaluation_periods: i64` The number of periods over which data is compared to the specified threshold. If you are setting an alarm that requires that a number of consecutive data points be breaching to trigger the alarm, this value specifies that number. If you are setting an "M out of N" alarm, this value is the N. An alarm's total current evaluation period can be no longer than one day, so this number multiplied by `Period` cannot be more than 86,400 seconds.
`extended_statistic: Option<String>` The percentile statistic for the metric specified in `MetricName`. Specify a value between p0.0 and p100. When you call `PutMetricAlarm` and specify a `MetricName`, you must specify either `Statistic` or `ExtendedStatistic`, but not both.
`insufficient_data_actions: Option<Vec<String>>` The actions to execute when this alarm transitions to the `INSUFFICIENT_DATA` state from any other state. Each action is specified as an Amazon Resource Name (ARN).

Valid Values: `arn:aws:automate:*region*:ec2:stop` | `arn:aws:automate:*region*:ec2:terminate` | `arn:aws:automate:*region*:ec2:recover` | `arn:aws:automate:*region*:ec2:reboot` | `arn:aws:sns:*region*:*account-id*:*sns-topic-name*` | `arn:aws:autoscaling:*region*:*account-id*:scalingPolicy:*policy-id*:autoScalingGroupName/*group-friendly-name*:policyName/*policy-friendly-name*`

Valid Values (for use with IAM roles): `arn:aws:swf:*region*:*account-id*:action/actions/AWS_EC2.InstanceId.Stop/1.0` | `arn:aws:swf:*region*:*account-id*:action/actions/AWS_EC2.InstanceId.Terminate/1.0` | `arn:aws:swf:*region*:*account-id*:action/actions/AWS_EC2.InstanceId.Reboot/1.0`

`metric_name: Option<String>` The name for the metric associated with the alarm. For each `PutMetricAlarm` operation, you must specify either `MetricName` or a `Metrics` array. If you are creating an alarm based on a math expression, you cannot specify this parameter, or any of the `Dimensions`, `Period`, `Namespace`, `Statistic`, or `ExtendedStatistic` parameters. Instead, you specify all this information in the `Metrics` array.
`metrics: Option<Vec<MetricDataQuery>>` An array of `MetricDataQuery` structures that enable you to create an alarm based on the result of a metric math expression. For each `PutMetricAlarm` operation, you must specify either `MetricName` or a `Metrics` array. Each item in the `Metrics` array either retrieves a metric or performs a math expression. One item in the `Metrics` array is the expression that the alarm watches. You designate this expression by setting `ReturnData` to true for this object in the array. For more information, see MetricDataQuery. If you use the `Metrics` parameter, you cannot include the `MetricName`, `Dimensions`, `Period`, `Namespace`, `Statistic`, or `ExtendedStatistic` parameters of `PutMetricAlarm` in the same operation. Instead, you retrieve the metrics you are using in your math expression as part of the `Metrics` array.
`namespace: Option<String>` The namespace for the metric specified in `MetricName`.
`ok_actions: Option<Vec<String>>` The actions to execute when this alarm transitions to an `OK` state from any other state. Each action is specified as an Amazon Resource Name (ARN).

Valid Values: `arn:aws:automate:*region*:ec2:stop` | `arn:aws:automate:*region*:ec2:terminate` | `arn:aws:automate:*region*:ec2:recover` | `arn:aws:automate:*region*:ec2:reboot` | `arn:aws:sns:*region*:*account-id*:*sns-topic-name*` | `arn:aws:autoscaling:*region*:*account-id*:scalingPolicy:*policy-id*:autoScalingGroupName/*group-friendly-name*:policyName/*policy-friendly-name*`

Valid Values (for use with IAM roles): `arn:aws:swf:*region*:*account-id*:action/actions/AWS_EC2.InstanceId.Stop/1.0` | `arn:aws:swf:*region*:*account-id*:action/actions/AWS_EC2.InstanceId.Terminate/1.0` | `arn:aws:swf:*region*:*account-id*:action/actions/AWS_EC2.InstanceId.Reboot/1.0` | `arn:aws:swf:*region*:*account-id*:action/actions/AWS_EC2.InstanceId.Recover/1.0`

`period: Option<i64>` The length, in seconds, used each time the metric specified in `MetricName` is evaluated. Valid values are 10, 30, and any multiple of 60. `Period` is required for alarms based on static thresholds. If you are creating an alarm based on a metric math expression, you specify the period for each metric within the objects in the `Metrics` array. Be sure to specify 10 or 30 only for metrics that are stored by a `PutMetricData` call with a `StorageResolution` of 1. If you specify a period of 10 or 30 for a metric that does not have sub-minute resolution, the alarm still attempts to gather data at the period rate that you specify. In this case, it does not receive data for the attempts that do not correspond to a one-minute data resolution, and the alarm might often lapse into INSUFFICIENT_DATA status. Specifying 10 or 30 also sets this alarm as a high-resolution alarm, which has a higher charge than other alarms. For more information about pricing, see Amazon CloudWatch Pricing. An alarm's total current evaluation period can be no longer than one day, so `Period` multiplied by `EvaluationPeriods` cannot be more than 86,400 seconds.
`statistic: Option<String>` The statistic for the metric specified in `MetricName`, other than percentile. For percentile statistics, use `ExtendedStatistic`. When you call `PutMetricAlarm` and specify a `MetricName`, you must specify either `Statistic` or `ExtendedStatistic`, but not both.
`tags: Option<Vec<Tag>>` A list of key-value pairs to associate with the alarm. You can associate as many as 50 tags with an alarm. Tags can help you organize and categorize your resources. You can also use them to scope user permissions by granting a user permission to access or change only resources with certain tag values. If you are using this operation to update an existing alarm, any tags you specify in this parameter are ignored. To change the tags of an existing alarm, use TagResource or UntagResource.
`threshold: Option<f64>` The value against which the specified statistic is compared. This parameter is required for alarms based on static thresholds, but should not be used for alarms based on anomaly detection models.
`threshold_metric_id: Option<String>` If this is an alarm based on an anomaly detection model, make this value match the ID of the `ANOMALY_DETECTION_BAND` function. For an example of how to use this parameter, see the **Anomaly Detection Model Alarm** example on this page. If your alarm uses this parameter, it cannot have Auto Scaling actions.
`treat_missing_data: Option<String>` Sets how this alarm is to handle missing data points. If `TreatMissingData` is omitted, the default behavior of `missing` is used. For more information, see Configuring How CloudWatch Alarms Treats Missing Data.

Valid Values: `breaching | notBreaching | ignore | missing`

`unit: Option<String>` The unit of measure for the statistic. For example, the units for the Amazon EC2 NetworkIn metric are Bytes because NetworkIn tracks the number of bytes that an instance receives on all network interfaces. You can also specify a unit when you create a custom metric. Units help provide conceptual meaning to your data. Metric data points that specify a unit of measure, such as Percent, are aggregated separately. If you don't specify `Unit`, CloudWatch retrieves all unit types that have been published for the metric and attempts to evaluate the alarm. Usually, metrics are published with only one unit, so the alarm works as intended. However, if the metric is published with multiple types of units and you don't specify a unit, the alarm's behavior is not defined and it behaves unpredictably. We recommend omitting `Unit` so that you don't inadvertently specify an incorrect unit that is not published for this metric. Doing so causes the alarm to be stuck in the `INSUFFICIENT DATA` state.

Trait Implementations
---
### impl Clone for PutMetricAlarmInput
#### fn clone(&self) -> PutMetricAlarmInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for PutMetricAlarmInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for PutMetricAlarmInput
#### fn default() -> PutMetricAlarmInput
Returns the “default value” for a type.
### impl PartialEq<PutMetricAlarmInput> for PutMetricAlarmInput
#### fn eq(&self, other: &PutMetricAlarmInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &PutMetricAlarmInput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for PutMetricAlarmInput

Auto Trait Implementations
---
### impl RefUnwindSafe for PutMetricAlarmInput
### impl Send for PutMetricAlarmInput
### impl Sync for PutMetricAlarmInput
### impl Unpin for PutMetricAlarmInput
### impl UnwindSafe for PutMetricAlarmInput

Struct rusoto_cloudwatch::PutMetricDataInput
===

```
pub struct PutMetricDataInput {
    pub metric_data: Vec<MetricDatum>,
    pub namespace: String,
}
```

Fields
---
`metric_data: Vec<MetricDatum>` The data for the metric. The array can include no more than 20 metrics per call.
`namespace: String` The namespace for the metric data. To avoid conflicts with AWS service namespaces, you should not specify a namespace that begins with `AWS/`.

Trait Implementations
---
### impl Clone for PutMetricDataInput
#### fn clone(&self) -> PutMetricDataInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for PutMetricDataInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for PutMetricDataInput
#### fn default() -> PutMetricDataInput
Returns the “default value” for a type.
### impl PartialEq<PutMetricDataInput> for PutMetricDataInput
#### fn eq(&self, other: &PutMetricDataInput) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &PutMetricDataInput) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for PutMetricDataInput

Auto Trait Implementations
---
### impl RefUnwindSafe for PutMetricDataInput
### impl Send for PutMetricDataInput
### impl Sync for PutMetricDataInput
### impl Unpin for PutMetricDataInput
### impl UnwindSafe for PutMetricDataInput

Struct rusoto_cloudwatch::PutMetricStreamInput
===

```
pub struct PutMetricStreamInput {
    pub exclude_filters: Option<Vec<MetricStreamFilter>>,
    pub firehose_arn: String,
    pub include_filters: Option<Vec<MetricStreamFilter>>,
    pub name: String,
    pub output_format: String,
    pub role_arn: String,
    pub tags: Option<Vec<Tag>>,
}
```

Fields
---
`exclude_filters: Option<Vec<MetricStreamFilter>>` If you specify this parameter, the stream sends metrics from all metric namespaces except for the namespaces that you specify here. You cannot include `ExcludeFilters` and `IncludeFilters` in the same operation.
`firehose_arn: String` The ARN of the Amazon Kinesis Firehose delivery stream to use for this metric stream.
This Amazon Kinesis Firehose delivery stream must already exist and must be in the same account as the metric stream.
`include_filters: Option<Vec<MetricStreamFilter>>` If you specify this parameter, the stream sends only the metrics from the metric namespaces that you specify here. You cannot include `IncludeFilters` and `ExcludeFilters` in the same operation.
`name: String` If you are creating a new metric stream, this is the name for the new stream. The name must be different from the names of other metric streams in this account and Region. If you are updating a metric stream, specify the name of that stream here. Valid characters are A-Z, a-z, 0-9, "-" and "_".
`output_format: String` The output format for the stream. Valid values are `json` and `opentelemetry0.7`. For more information about metric stream output formats, see Metric streams output formats.
`role_arn: String` The ARN of an IAM role that this metric stream will use to access Amazon Kinesis Firehose resources. This IAM role must already exist and must be in the same account as the metric stream. This IAM role must include the following permissions:

* firehose:PutRecord
* firehose:PutRecordBatch

`tags: Option<Vec<Tag>>` A list of key-value pairs to associate with the metric stream. You can associate as many as 50 tags with a metric stream. Tags can help you organize and categorize your resources. You can also use them to scope user permissions by granting a user permission to access or change only resources with certain tag values.

Trait Implementations
---
### impl Clone for PutMetricStreamInput
#### fn clone(&self) -> PutMetricStreamInput
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for PutMetricStreamInput
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Read more source### impl Default for PutMetricStreamInput source#### fn default() -> PutMetricStreamInput Returns the “default value” for a type. Read more source### impl PartialEq<PutMetricStreamInput> for PutMetricStreamInput source#### fn eq(&self, other: &PutMetricStreamInput) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &PutMetricStreamInput) -> bool This method tests for `!=`. source### impl StructuralPartialEq for PutMetricStreamInput Auto Trait Implementations --- ### impl RefUnwindSafe for PutMetricStreamInput ### impl Send for PutMetricStreamInput ### impl Sync for PutMetricStreamInput ### impl Unpin for PutMetricStreamInput ### impl UnwindSafe for PutMetricStreamInput Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. 
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API (`toowned_clone_into`). Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct rusoto_cloudwatch::PutMetricStreamOutput
===
```
pub struct PutMetricStreamOutput {
    pub arn: Option<String>,
}
```
Fields
---
`arn: Option<String>`
The ARN of the metric stream.
Trait Implementations
---
`PutMetricStreamOutput` implements `Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`, with the same method semantics as documented for `PutMetricStreamInput` above.
Auto Trait Implementations
---
`PutMetricStreamOutput` is `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`.
Blanket Implementations
---
Identical to the blanket implementations listed for `PutMetricStreamInput` above.
Struct rusoto_cloudwatch::Range
===
```
pub struct Range {
    pub end_time: String,
    pub start_time: String,
}
```
Specifies one range of days or times to exclude from use for training an anomaly detection model.
Fields
---
`end_time: String`
The end time of the range to exclude. The format is `yyyy-MM-dd'T'HH:mm:ss`. For example, `2019-07-01T23:59:59`.
`start_time: String`
The start time of the range to exclude. The format is `yyyy-MM-dd'T'HH:mm:ss`. For example, `2019-07-01T23:59:59`.
Trait Implementations
---
`Range` implements `Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`, with the same method semantics as documented for `PutMetricStreamInput` above.
Auto Trait Implementations
---
`Range` is `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`.
Blanket Implementations
---
Identical to the blanket implementations listed for `PutMetricStreamInput` above.
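Both `Range` fields must use the `yyyy-MM-dd'T'HH:mm:ss` format shown above. A cheap shape check before building a `Range` can catch malformed timestamps locally; this helper is illustrative (the service performs the authoritative validation), and it assumes no date/time crate is available.

```rust
// Returns true if `s` matches the documented yyyy-MM-dd'T'HH:mm:ss shape,
// e.g. "2019-07-01T23:59:59". This checks layout only, not calendar validity.
fn looks_like_range_timestamp(s: &str) -> bool {
    let bytes = s.as_bytes();
    if bytes.len() != 19 {
        return false;
    }
    for (i, &b) in bytes.iter().enumerate() {
        let ok = match i {
            4 | 7 => b == b'-',        // date separators
            10 => b == b'T',           // literal 'T' between date and time
            13 | 16 => b == b':',      // time separators
            _ => b.is_ascii_digit(),   // everything else must be a digit
        };
        if !ok {
            return false;
        }
    }
    true
}

fn main() {
    assert!(looks_like_range_timestamp("2019-07-01T23:59:59"));
    assert!(!looks_like_range_timestamp("2019-07-01 23:59:59")); // space, not 'T'
}
```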
Struct rusoto_cloudwatch::SetAlarmStateInput
===
```
pub struct SetAlarmStateInput {
    pub alarm_name: String,
    pub state_reason: String,
    pub state_reason_data: Option<String>,
    pub state_value: String,
}
```
Fields
---
`alarm_name: String`
The name of the alarm.
`state_reason: String`
The reason that this alarm is set to this specific state, in text format.
`state_reason_data: Option<String>`
The reason that this alarm is set to this specific state, in JSON format. For SNS or EC2 alarm actions, this is just informational. But for EC2 Auto Scaling or application Auto Scaling alarm actions, the Auto Scaling policy uses the information in this field to take the correct action.
`state_value: String`
The value of the state.
Trait Implementations
---
`SetAlarmStateInput` implements `Clone`, `Debug`, `Default`, and `PartialEq`, with the same method semantics as documented for `PutMetricStreamInput` above.
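A construction sketch for `SetAlarmStateInput`, using a local mirror struct so it stands alone. CloudWatch alarm states are `OK`, `ALARM`, and `INSUFFICIENT_DATA`; the `is_valid_state` helper encoding that list is illustrative, not part of the crate.

```rust
// Local mirror of the documented fields (illustrative, not the crate's type).
#[derive(Default)]
struct SetAlarmStateInput {
    alarm_name: String,
    state_reason: String,
    state_reason_data: Option<String>,
    state_value: String,
}

// The three CloudWatch alarm states.
fn is_valid_state(state: &str) -> bool {
    matches!(state, "OK" | "ALARM" | "INSUFFICIENT_DATA")
}

fn main() {
    let input = SetAlarmStateInput {
        alarm_name: "high-cpu".to_string(),
        state_reason: "Testing alarm actions".to_string(),
        // JSON payload; per the docs, Auto Scaling alarm actions act on it,
        // while for SNS or EC2 actions it is informational only.
        state_reason_data: Some(r#"{"testing":true}"#.to_string()),
        state_value: "ALARM".to_string(),
    };
    assert!(is_valid_state(&input.state_value));
    assert!(!input.alarm_name.is_empty());
}
```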
### impl StructuralPartialEq for SetAlarmStateInput
Auto Trait Implementations
---
`SetAlarmStateInput` is `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`.
Blanket Implementations
---
Identical to the blanket implementations listed for `PutMetricStreamInput` above.
Struct rusoto_cloudwatch::StartMetricStreamsInput
===
```
pub struct StartMetricStreamsInput {
    pub names: Vec<String>,
}
```
Fields
---
`names: Vec<String>`
The array of the names of metric streams to start streaming. This is an "all or nothing" operation. If you do not have permission to access all of the metric streams that you list here, then none of the streams that you list in the operation will start streaming.
Trait Implementations
---
`StartMetricStreamsInput` implements `Clone`, `Debug`, and `Default`, with the same method semantics as documented for `PutMetricStreamInput` above.
`StartMetricStreamsInput` also implements `PartialEq` and `StructuralPartialEq`; it is `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`, and its blanket implementations are identical to those listed for `PutMetricStreamInput` above.
Struct rusoto_cloudwatch::StartMetricStreamsOutput
===
```
pub struct StartMetricStreamsOutput {}
```
Trait Implementations
---
`StartMetricStreamsOutput` implements `Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`; it is `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`, and its blanket implementations are identical to those listed for `PutMetricStreamInput` above.
Struct rusoto_cloudwatch::StatisticSet
===
```
pub struct StatisticSet {
    pub maximum: f64,
    pub minimum: f64,
    pub sample_count: f64,
    pub sum: f64,
}
```
Represents a set of statistics that describes a specific metric.
Fields
---
`maximum: f64`
The maximum value of the sample set.
`minimum: f64`
The minimum value of the sample set.
`sample_count: f64`
The number of samples used for the statistic set.
`sum: f64`
The sum of values for the sample set.
Trait Implementations
---
`StatisticSet` implements `Clone`, with the same method semantics as documented for `PutMetricStreamInput` above.
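A `StatisticSet` condenses pre-aggregated samples into the four documented fields. The sketch below uses a local mirror struct, and the `statistic_set_from` helper name is illustrative, not part of the crate; it assumes a non-empty sample slice.

```rust
// Local mirror of the documented StatisticSet fields (illustrative).
#[derive(Debug, Default, PartialEq)]
struct StatisticSet {
    maximum: f64,
    minimum: f64,
    sample_count: f64,
    sum: f64,
}

// Condense raw samples into a StatisticSet. Assumes `samples` is non-empty;
// for an empty slice the max/min folds would yield sentinel values.
fn statistic_set_from(samples: &[f64]) -> StatisticSet {
    StatisticSet {
        maximum: samples.iter().cloned().fold(f64::MIN, f64::max),
        minimum: samples.iter().cloned().fold(f64::MAX, f64::min),
        sample_count: samples.len() as f64,
        sum: samples.iter().sum(),
    }
}

fn main() {
    let set = statistic_set_from(&[1.0, 4.0, 2.5]);
    assert_eq!(set.maximum, 4.0);
    assert_eq!(set.minimum, 1.0);
    assert_eq!(set.sample_count, 3.0);
    assert_eq!(set.sum, 7.5);
}
```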
`StatisticSet` also implements `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`; it is `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`, and its blanket implementations are identical to those listed for `PutMetricStreamInput` above.
Struct rusoto_cloudwatch::StopMetricStreamsInput
===
```
pub struct StopMetricStreamsInput {
    pub names: Vec<String>,
}
```
Fields
---
`names: Vec<String>`
The array of the names of metric streams to stop streaming. This is an "all or nothing" operation.
If you do not have permission to access all of the metric streams that you list here, then none of the streams that you list in the operation will stop streaming.
Trait Implementations
---
`StopMetricStreamsInput` implements `Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`, with the same method semantics as documented for `PutMetricStreamInput` above.
Auto Trait Implementations
---
`StopMetricStreamsInput` is `RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`.
Blanket Implementations
---
Identical to the blanket implementations listed for `PutMetricStreamInput` above.
Read more Struct rusoto_cloudwatch::StopMetricStreamsOutput === ``` pub struct StopMetricStreamsOutput {} ``` Trait Implementations --- source### impl Clone for StopMetricStreamsOutput source#### fn clone(&self) -> StopMetricStreamsOutput Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for StopMetricStreamsOutput source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for StopMetricStreamsOutput source#### fn default() -> StopMetricStreamsOutput Returns the “default value” for a type. Read more source### impl PartialEq<StopMetricStreamsOutput> for StopMetricStreamsOutput source#### fn eq(&self, other: &StopMetricStreamsOutput) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. source### impl StructuralPartialEq for StopMetricStreamsOutput Auto Trait Implementations --- ### impl RefUnwindSafe for StopMetricStreamsOutput ### impl Send for StopMetricStreamsOutput ### impl Sync for StopMetricStreamsOutput ### impl Unpin for StopMetricStreamsOutput ### impl UnwindSafe for StopMetricStreamsOutput Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. 
Read more Struct rusoto_cloudwatch::Tag === ``` pub struct Tag { pub key: String, pub value: String, } ``` A key-value pair associated with a CloudWatch resource. Fields --- `key: String`A string that you can use to assign a value. The combination of tag keys and values can help you organize and categorize your resources. `value: String`The value for the specified tag key. Trait Implementations --- source### impl Clone for Tag source#### fn clone(&self) -> Tag Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for Tag source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for Tag source#### fn default() -> Tag Returns the “default value” for a type. Read more source### impl PartialEq<Tag> for Tag source#### fn eq(&self, other: &Tag) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &Tag) -> bool This method tests for `!=`. source### impl StructuralPartialEq for Tag Auto Trait Implementations --- ### impl RefUnwindSafe for Tag ### impl Send for Tag ### impl Sync for Tag ### impl Unpin for Tag ### impl UnwindSafe for Tag Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. 
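The `Tag` key-value pair above is a plain data struct with derived `Clone`, `Default`, and `PartialEq`. A minimal sketch of how it behaves, using a local mirror of the definition shown (the real type lives in the `rusoto_cloudwatch` crate; the `make_tag` helper is hypothetical):

```rust
// Local mirror of the `Tag` struct documented above; the derives match the
// trait implementations listed (Clone, Debug, Default, PartialEq).
#[derive(Clone, Debug, Default, PartialEq)]
pub struct Tag {
    pub key: String,
    pub value: String,
}

// Hypothetical convenience constructor, not part of the crate's API.
fn make_tag(key: &str, value: &str) -> Tag {
    Tag { key: key.to_string(), value: value.to_string() }
}

fn main() {
    let team = make_tag("Team", "Observability");
    // `clone` and `==` come from the derived impls documented above.
    let copy = team.clone();
    assert_eq!(team, copy);
    // `Default` yields empty strings for both fields.
    assert_eq!(Tag::default().key, "");
    println!("{:?}", team);
}
```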
Read more Struct rusoto_cloudwatch::TagResourceInput === ``` pub struct TagResourceInput { pub resource_arn: String, pub tags: Vec<Tag>, } ``` Fields --- `resource_arn: String`The ARN of the CloudWatch resource that you're adding tags to. The ARN format of an alarm is `arn:aws:cloudwatch:*Region*:*account-id*:alarm:*alarm-name*` The ARN format of a Contributor Insights rule is `arn:aws:cloudwatch:*Region*:*account-id*:insight-rule:*insight-rule-name*` For more information about ARN format, see Resource Types Defined by Amazon CloudWatch in the *Amazon Web Services General Reference*. `tags: Vec<Tag>`The list of key-value pairs to associate with the alarm. Trait Implementations --- source### impl Clone for TagResourceInput source#### fn clone(&self) -> TagResourceInput Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for TagResourceInput source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for TagResourceInput source#### fn default() -> TagResourceInput Returns the “default value” for a type. Read more source### impl PartialEq<TagResourceInput> for TagResourceInput source#### fn eq(&self, other: &TagResourceInput) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &TagResourceInput) -> bool This method tests for `!=`. source### impl StructuralPartialEq for TagResourceInput Auto Trait Implementations --- ### impl RefUnwindSafe for TagResourceInput ### impl Send for TagResourceInput ### impl Sync for TagResourceInput ### impl Unpin for TagResourceInput ### impl UnwindSafe for TagResourceInput Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. 
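The `resource_arn` field above must follow the documented ARN format, e.g. `arn:aws:cloudwatch:*Region*:*account-id*:alarm:*alarm-name*` for alarms. A sketch of assembling such an input, again using local mirrors of the structs shown rather than the real `rusoto_cloudwatch` types (the `alarm_arn` helper is an assumption for illustration):

```rust
// Local mirrors of the documented structs.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct Tag { pub key: String, pub value: String }

#[derive(Clone, Debug, Default, PartialEq)]
pub struct TagResourceInput {
    pub resource_arn: String,
    pub tags: Vec<Tag>,
}

// Assemble an alarm ARN in the format documented above:
// arn:aws:cloudwatch:<Region>:<account-id>:alarm:<alarm-name>
fn alarm_arn(region: &str, account_id: &str, alarm_name: &str) -> String {
    format!("arn:aws:cloudwatch:{}:{}:alarm:{}", region, account_id, alarm_name)
}

fn main() {
    let input = TagResourceInput {
        resource_arn: alarm_arn("us-east-1", "123456789012", "HighCPU"),
        tags: vec![Tag { key: "Env".into(), value: "prod".into() }],
    };
    assert!(input.resource_arn.starts_with("arn:aws:cloudwatch:"));
    println!("{:?}", input);
}
```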
Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. 
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more Struct rusoto_cloudwatch::TagResourceOutput === ``` pub struct TagResourceOutput {} ``` Trait Implementations --- source### impl Clone for TagResourceOutput source#### fn clone(&self) -> TagResourceOutput Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for TagResourceOutput source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for TagResourceOutput source#### fn default() -> TagResourceOutput Returns the “default value” for a type. Read more source### impl PartialEq<TagResourceOutput> for TagResourceOutput source#### fn eq(&self, other: &TagResourceOutput) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. source### impl StructuralPartialEq for TagResourceOutput Auto Trait Implementations --- ### impl RefUnwindSafe for TagResourceOutput ### impl Send for TagResourceOutput ### impl Sync for TagResourceOutput ### impl Unpin for TagResourceOutput ### impl UnwindSafe for TagResourceOutput Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. 
Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. 
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more Struct rusoto_cloudwatch::UntagResourceInput === ``` pub struct UntagResourceInput { pub resource_arn: String, pub tag_keys: Vec<String>, } ``` Fields --- `resource_arn: String`The ARN of the CloudWatch resource that you're removing tags from. The ARN format of an alarm is `arn:aws:cloudwatch:*Region*:*account-id*:alarm:*alarm-name*` The ARN format of a Contributor Insights rule is `arn:aws:cloudwatch:*Region*:*account-id*:insight-rule:*insight-rule-name*` For more information about ARN format, see Resource Types Defined by Amazon CloudWatch in the *Amazon Web Services General Reference*. `tag_keys: Vec<String>`The list of tag keys to remove from the resource. Trait Implementations --- source### impl Clone for UntagResourceInput source#### fn clone(&self) -> UntagResourceInput Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for UntagResourceInput source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for UntagResourceInput source#### fn default() -> UntagResourceInput Returns the “default value” for a type. Read more source### impl PartialEq<UntagResourceInput> for UntagResourceInput source#### fn eq(&self, other: &UntagResourceInput) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. 
Read more source#### fn ne(&self, other: &UntagResourceInput) -> bool This method tests for `!=`. source### impl StructuralPartialEq for UntagResourceInput Auto Trait Implementations --- ### impl RefUnwindSafe for UntagResourceInput ### impl Send for UntagResourceInput ### impl Sync for UntagResourceInput ### impl Unpin for UntagResourceInput ### impl UnwindSafe for UntagResourceInput Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. 
(`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more Struct rusoto_cloudwatch::UntagResourceOutput === ``` pub struct UntagResourceOutput {} ``` Trait Implementations --- source### impl Clone for UntagResourceOutput source#### fn clone(&self) -> UntagResourceOutput Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for UntagResourceOutput source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for UntagResourceOutput source#### fn default() -> UntagResourceOutput Returns the “default value” for a type. Read more source### impl PartialEq<UntagResourceOutput> for UntagResourceOutput source#### fn eq(&self, other: &UntagResourceOutput) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more 1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. 
source### impl StructuralPartialEq for UntagResourceOutput Auto Trait Implementations --- ### impl RefUnwindSafe for UntagResourceOutput ### impl Send for UntagResourceOutput ### impl Sync for UntagResourceOutput ### impl Unpin for UntagResourceOutput ### impl UnwindSafe for UntagResourceOutput Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. 
Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more Enum rusoto_cloudwatch::DeleteAlarmsError === ``` pub enum DeleteAlarmsError { ResourceNotFound(String), } ``` Errors returned by DeleteAlarms Variants --- ### `ResourceNotFound(String)` The named resource does not exist. Implementations --- source### impl DeleteAlarmsError source#### pub fn from_response(    res: BufferedHttpResponse) -> RusotoError<DeleteAlarmsErrorTrait Implementations --- source### impl Debug for DeleteAlarmsError source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Display for DeleteAlarmsError source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Error for DeleteAlarmsError 1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)The lower-level source of this error, if any. Read more source#### fn backtrace(&self) -> Option<&Backtrace🔬 This is a nightly-only experimental API. (`backtrace`)Returns a stack backtrace, if available, of where this error occurred. 
Read more 1.0.0 · source#### fn description(&self) -> &str 👎 Deprecated since 1.42.0: use the Display impl or to_string() Read more1.0.0 · source#### fn cause(&self) -> Option<&dyn Error👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting source### impl PartialEq<DeleteAlarmsError> for DeleteAlarmsError source#### fn eq(&self, other: &DeleteAlarmsError) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &DeleteAlarmsError) -> bool This method tests for `!=`. source### impl StructuralPartialEq for DeleteAlarmsError Auto Trait Implementations --- ### impl RefUnwindSafe for DeleteAlarmsError ### impl Send for DeleteAlarmsError ### impl Sync for DeleteAlarmsError ### impl Unpin for DeleteAlarmsError ### impl UnwindSafe for DeleteAlarmsError Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. 
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToString for T where    T: Display + ?Sized, source#### default fn to_string(&self) -> String Converts the given value to a `String`. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more Enum rusoto_cloudwatch::DeleteAnomalyDetectorError === ``` pub enum DeleteAnomalyDetectorError { InternalServiceFault(String), InvalidParameterValue(String), MissingRequiredParameter(String), ResourceNotFound(String), } ``` Errors returned by DeleteAnomalyDetector Variants --- ### `InternalServiceFault(String)` Request processing has failed due to some unknown error, exception, or failure. ### `InvalidParameterValue(String)` The value of an input parameter is bad or out-of-range. ### `MissingRequiredParameter(String)` An input parameter that is required is missing. ### `ResourceNotFound(String)` The named resource does not exist. 
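`DeleteAnomalyDetectorError` has four variants, which makes it natural to classify failures, e.g. treating only the service-side fault as transient. A sketch with a local mirror of the enum above (the retry policy shown is an assumption for illustration, not crate behavior):

```rust
// Local mirror of the documented `DeleteAnomalyDetectorError` enum.
#[derive(Debug, PartialEq)]
pub enum DeleteAnomalyDetectorError {
    InternalServiceFault(String),
    InvalidParameterValue(String),
    MissingRequiredParameter(String),
    ResourceNotFound(String),
}

// Assumed policy: only the unknown service-side failure is worth retrying;
// the other three variants indicate caller errors or a missing resource.
fn is_retryable(err: &DeleteAnomalyDetectorError) -> bool {
    matches!(err, DeleteAnomalyDetectorError::InternalServiceFault(_))
}

fn main() {
    let fault = DeleteAnomalyDetectorError::InternalServiceFault("oops".into());
    let bad_param = DeleteAnomalyDetectorError::InvalidParameterValue("n".into());
    assert!(is_retryable(&fault));
    assert!(!is_retryable(&bad_param));
}
```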
Implementations --- source### impl DeleteAnomalyDetectorError source#### pub fn from_response(    res: BufferedHttpResponse) -> RusotoError<DeleteAnomalyDetectorErrorTrait Implementations --- source### impl Debug for DeleteAnomalyDetectorError source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Display for DeleteAnomalyDetectorError source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Error for DeleteAnomalyDetectorError 1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)The lower-level source of this error, if any. Read more source#### fn backtrace(&self) -> Option<&Backtrace🔬 This is a nightly-only experimental API. (`backtrace`)Returns a stack backtrace, if available, of where this error occurred. Read more 1.0.0 · source#### fn description(&self) -> &str 👎 Deprecated since 1.42.0: use the Display impl or to_string() Read more1.0.0 · source#### fn cause(&self) -> Option<&dyn Error👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting source### impl PartialEq<DeleteAnomalyDetectorError> for DeleteAnomalyDetectorError source#### fn eq(&self, other: &DeleteAnomalyDetectorError) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &DeleteAnomalyDetectorError) -> bool This method tests for `!=`. source### impl StructuralPartialEq for DeleteAnomalyDetectorError Auto Trait Implementations --- ### impl RefUnwindSafe for DeleteAnomalyDetectorError ### impl Send for DeleteAnomalyDetectorError ### impl Sync for DeleteAnomalyDetectorError ### impl Unpin for DeleteAnomalyDetectorError ### impl UnwindSafe for DeleteAnomalyDetectorError Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. 
Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToString for T where    T: Display + ?Sized, source#### default fn to_string(&self) -> String Converts the given value to a `String`. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. 
#### fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.

Enum rusoto_cloudwatch::DeleteDashboardsError
===

```
pub enum DeleteDashboardsError {
    DashboardNotFoundError(String),
    InternalServiceFault(String),
    InvalidParameterValue(String),
}
```

Errors returned by DeleteDashboards

Variants
---

### `DashboardNotFoundError(String)`

The specified dashboard does not exist.

### `InternalServiceFault(String)`

Request processing has failed due to some unknown error, exception, or failure.

### `InvalidParameterValue(String)`

The value of an input parameter is bad or out-of-range.

Implementations
---

### impl DeleteDashboardsError

#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteDashboardsError>

Trait Implementations
---

### impl Debug for DeleteDashboardsError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Display for DeleteDashboardsError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Error for DeleteDashboardsError

#### fn source(&self) -> Option<&(dyn Error + 'static)> (since 1.30.0)

The lower-level source of this error, if any.

#### fn backtrace(&self) -> Option<&Backtrace>

This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.

#### fn description(&self) -> &str

Deprecated since 1.42.0: use the Display impl or to_string().

#### fn cause(&self) -> Option<&dyn Error>

Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.

### impl PartialEq<DeleteDashboardsError> for DeleteDashboardsError

#### fn eq(&self, other: &DeleteDashboardsError) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &DeleteDashboardsError) -> bool

This method tests for `!=`.

### impl StructuralPartialEq for DeleteDashboardsError

Auto Trait Implementations
---

### impl RefUnwindSafe for DeleteDashboardsError
### impl Send for DeleteDashboardsError
### impl Sync for DeleteDashboardsError
### impl Unpin for DeleteDashboardsError
### impl UnwindSafe for DeleteDashboardsError

Enum rusoto_cloudwatch::DeleteInsightRulesError
===

```
pub enum DeleteInsightRulesError {
    InvalidParameterValue(String),
    MissingRequiredParameter(String),
}
```

Errors returned by DeleteInsightRules

Variants
---

### `InvalidParameterValue(String)`

The value of an input parameter is bad or out-of-range.

### `MissingRequiredParameter(String)`

An input parameter that is required is missing.

Implementations
---

### impl DeleteInsightRulesError

#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteInsightRulesError>

Trait Implementations
---

### impl Debug for DeleteInsightRulesError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.
### impl Display for DeleteInsightRulesError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Error for DeleteInsightRulesError

#### fn source(&self) -> Option<&(dyn Error + 'static)> (since 1.30.0)

The lower-level source of this error, if any.

#### fn backtrace(&self) -> Option<&Backtrace>

This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.

#### fn description(&self) -> &str

Deprecated since 1.42.0: use the Display impl or to_string().

#### fn cause(&self) -> Option<&dyn Error>

Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.

### impl PartialEq<DeleteInsightRulesError> for DeleteInsightRulesError

#### fn eq(&self, other: &DeleteInsightRulesError) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &DeleteInsightRulesError) -> bool

This method tests for `!=`.

### impl StructuralPartialEq for DeleteInsightRulesError

Auto Trait Implementations
---

### impl RefUnwindSafe for DeleteInsightRulesError
### impl Send for DeleteInsightRulesError
### impl Sync for DeleteInsightRulesError
### impl Unpin for DeleteInsightRulesError
### impl UnwindSafe for DeleteInsightRulesError
Enum rusoto_cloudwatch::DeleteMetricStreamError
===

```
pub enum DeleteMetricStreamError {
    InternalServiceFault(String),
    InvalidParameterValue(String),
    MissingRequiredParameter(String),
}
```

Errors returned by DeleteMetricStream

Variants
---

### `InternalServiceFault(String)`

Request processing has failed due to some unknown error, exception, or failure.

### `InvalidParameterValue(String)`

The value of an input parameter is bad or out-of-range.

### `MissingRequiredParameter(String)`

An input parameter that is required is missing.

Implementations
---

### impl DeleteMetricStreamError

#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteMetricStreamError>

Trait Implementations
---

### impl Debug for DeleteMetricStreamError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Display for DeleteMetricStreamError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Error for DeleteMetricStreamError

#### fn source(&self) -> Option<&(dyn Error + 'static)> (since 1.30.0)

The lower-level source of this error, if any.

#### fn backtrace(&self) -> Option<&Backtrace>

This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.

#### fn description(&self) -> &str

Deprecated since 1.42.0: use the Display impl or to_string().

#### fn cause(&self) -> Option<&dyn Error>

Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.

### impl PartialEq<DeleteMetricStreamError> for DeleteMetricStreamError

#### fn eq(&self, other: &DeleteMetricStreamError) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &DeleteMetricStreamError) -> bool

This method tests for `!=`.

### impl StructuralPartialEq for DeleteMetricStreamError

Auto Trait Implementations
---

### impl RefUnwindSafe for DeleteMetricStreamError
### impl Send for DeleteMetricStreamError
### impl Sync for DeleteMetricStreamError
### impl Unpin for DeleteMetricStreamError
### impl UnwindSafe for DeleteMetricStreamError

Enum rusoto_cloudwatch::DescribeAlarmHistoryError
===

```
pub enum DescribeAlarmHistoryError {
    InvalidNextToken(String),
}
```

Errors returned by DescribeAlarmHistory

Variants
---

### `InvalidNextToken(String)`

The next token specified is invalid.

Implementations
---

### impl DescribeAlarmHistoryError

#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeAlarmHistoryError>

Trait Implementations
---

### impl Debug for DescribeAlarmHistoryError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Display for DescribeAlarmHistoryError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Error for DescribeAlarmHistoryError

#### fn source(&self) -> Option<&(dyn Error + 'static)> (since 1.30.0)

The lower-level source of this error, if any.

#### fn backtrace(&self) -> Option<&Backtrace>

This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str

Deprecated since 1.42.0: use the Display impl or to_string().

#### fn cause(&self) -> Option<&dyn Error>

Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.

### impl PartialEq<DescribeAlarmHistoryError> for DescribeAlarmHistoryError

#### fn eq(&self, other: &DescribeAlarmHistoryError) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &DescribeAlarmHistoryError) -> bool

This method tests for `!=`.

### impl StructuralPartialEq for DescribeAlarmHistoryError

Auto Trait Implementations
---

### impl RefUnwindSafe for DescribeAlarmHistoryError
### impl Send for DescribeAlarmHistoryError
### impl Sync for DescribeAlarmHistoryError
### impl Unpin for DescribeAlarmHistoryError
### impl UnwindSafe for DescribeAlarmHistoryError

Enum rusoto_cloudwatch::DescribeAlarmsError
===

```
pub enum DescribeAlarmsError {
    InvalidNextToken(String),
}
```

Errors returned by DescribeAlarms

Variants
---

### `InvalidNextToken(String)`

The next token specified is invalid.

Implementations
---

### impl DescribeAlarmsError

#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeAlarmsError>

Trait Implementations
---

### impl Debug for DescribeAlarmsError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.
### impl Display for DescribeAlarmsError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Error for DescribeAlarmsError

#### fn source(&self) -> Option<&(dyn Error + 'static)> (since 1.30.0)

The lower-level source of this error, if any.

#### fn backtrace(&self) -> Option<&Backtrace>

This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.

#### fn description(&self) -> &str

Deprecated since 1.42.0: use the Display impl or to_string().

#### fn cause(&self) -> Option<&dyn Error>

Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.

### impl PartialEq<DescribeAlarmsError> for DescribeAlarmsError

#### fn eq(&self, other: &DescribeAlarmsError) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &DescribeAlarmsError) -> bool

This method tests for `!=`.

### impl StructuralPartialEq for DescribeAlarmsError

Auto Trait Implementations
---

### impl RefUnwindSafe for DescribeAlarmsError
### impl Send for DescribeAlarmsError
### impl Sync for DescribeAlarmsError
### impl Unpin for DescribeAlarmsError
### impl UnwindSafe for DescribeAlarmsError
Enum rusoto_cloudwatch::DescribeAlarmsForMetricError
===

```
pub enum DescribeAlarmsForMetricError {}
```

Errors returned by DescribeAlarmsForMetric

Implementations
---

### impl DescribeAlarmsForMetricError

#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeAlarmsForMetricError>

Trait Implementations
---

### impl Debug for DescribeAlarmsForMetricError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Display for DescribeAlarmsForMetricError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Error for DescribeAlarmsForMetricError

#### fn source(&self) -> Option<&(dyn Error + 'static)> (since 1.30.0)

The lower-level source of this error, if any.

#### fn backtrace(&self) -> Option<&Backtrace>

This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.

#### fn description(&self) -> &str

Deprecated since 1.42.0: use the Display impl or to_string().

#### fn cause(&self) -> Option<&dyn Error>

Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.

### impl PartialEq<DescribeAlarmsForMetricError> for DescribeAlarmsForMetricError

#### fn eq(&self, other: &DescribeAlarmsForMetricError) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &Rhs) -> bool (since 1.0.0)

This method tests for `!=`.

### impl StructuralPartialEq for DescribeAlarmsForMetricError

Auto Trait Implementations
---

### impl RefUnwindSafe for DescribeAlarmsForMetricError
### impl Send for DescribeAlarmsForMetricError
### impl Sync for DescribeAlarmsForMetricError
### impl Unpin for DescribeAlarmsForMetricError
### impl UnwindSafe for DescribeAlarmsForMetricError

Enum rusoto_cloudwatch::DescribeAnomalyDetectorsError
===

```
pub enum DescribeAnomalyDetectorsError {
    InternalServiceFault(String),
    InvalidNextToken(String),
    InvalidParameterValue(String),
}
```

Errors returned by DescribeAnomalyDetectors

Variants
---

### `InternalServiceFault(String)`

Request processing has failed due to some unknown error, exception, or failure.

### `InvalidNextToken(String)`

The next token specified is invalid.

### `InvalidParameterValue(String)`

The value of an input parameter is bad or out-of-range.

Implementations
---

### impl DescribeAnomalyDetectorsError

#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeAnomalyDetectorsError>

Trait Implementations
---

### impl Debug for DescribeAnomalyDetectorsError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Display for DescribeAnomalyDetectorsError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.
### impl Error for DescribeAnomalyDetectorsError

#### fn source(&self) -> Option<&(dyn Error + 'static)> (since 1.30.0)

The lower-level source of this error, if any.

#### fn backtrace(&self) -> Option<&Backtrace>

This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.

#### fn description(&self) -> &str

Deprecated since 1.42.0: use the Display impl or to_string().

#### fn cause(&self) -> Option<&dyn Error>

Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.

### impl PartialEq<DescribeAnomalyDetectorsError> for DescribeAnomalyDetectorsError

#### fn eq(&self, other: &DescribeAnomalyDetectorsError) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &DescribeAnomalyDetectorsError) -> bool

This method tests for `!=`.

### impl StructuralPartialEq for DescribeAnomalyDetectorsError

Auto Trait Implementations
---

### impl RefUnwindSafe for DescribeAnomalyDetectorsError
### impl Send for DescribeAnomalyDetectorsError
### impl Sync for DescribeAnomalyDetectorsError
### impl Unpin for DescribeAnomalyDetectorsError
### impl UnwindSafe for DescribeAnomalyDetectorsError

Enum rusoto_cloudwatch::DescribeInsightRulesError
===

```
pub enum DescribeInsightRulesError {
    InvalidNextToken(String),
}
```

Errors returned by DescribeInsightRules

Variants
---

### `InvalidNextToken(String)`

The next token specified is invalid.
Implementations
---

### impl DescribeInsightRulesError

#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeInsightRulesError>

Trait Implementations
---

### impl Debug for DescribeInsightRulesError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Display for DescribeInsightRulesError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Error for DescribeInsightRulesError

#### fn source(&self) -> Option<&(dyn Error + 'static)> (since 1.30.0)

The lower-level source of this error, if any.

#### fn backtrace(&self) -> Option<&Backtrace>

This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.

#### fn description(&self) -> &str

Deprecated since 1.42.0: use the Display impl or to_string().

#### fn cause(&self) -> Option<&dyn Error>

Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.

### impl PartialEq<DescribeInsightRulesError> for DescribeInsightRulesError

#### fn eq(&self, other: &DescribeInsightRulesError) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &DescribeInsightRulesError) -> bool

This method tests for `!=`.

### impl StructuralPartialEq for DescribeInsightRulesError

Auto Trait Implementations
---

### impl RefUnwindSafe for DescribeInsightRulesError
### impl Send for DescribeInsightRulesError
### impl Sync for DescribeInsightRulesError
### impl Unpin for DescribeInsightRulesError
### impl UnwindSafe for DescribeInsightRulesError
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.

Enum rusoto_cloudwatch::DisableAlarmActionsError
===

```
pub enum DisableAlarmActionsError {}
```

Errors returned by DisableAlarmActions.

Implementations
---

### impl DisableAlarmActionsError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DisableAlarmActionsError>

Trait Implementations
---

### impl Debug for DisableAlarmActionsError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for DisableAlarmActionsError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for DisableAlarmActionsError
#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str (1.0.0)
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<DisableAlarmActionsError> for DisableAlarmActionsError
#### fn eq(&self, other: &DisableAlarmActionsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool (1.0.0)
This method tests for `!=`.
### impl StructuralPartialEq for DisableAlarmActionsError

Auto Trait Implementations
---

### impl RefUnwindSafe for DisableAlarmActionsError
### impl Send for DisableAlarmActionsError
### impl Sync for DisableAlarmActionsError
### impl Unpin for DisableAlarmActionsError
### impl UnwindSafe for DisableAlarmActionsError

Blanket Implementations
---

Identical to the blanket implementations listed above for `DescribeInsightRulesError` (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToString`, `TryFrom`, `TryInto`, `WithSubscriber`).

Enum rusoto_cloudwatch::DisableInsightRulesError
===

```
pub enum DisableInsightRulesError {
    InvalidParameterValue(String),
    MissingRequiredParameter(String),
}
```

Errors returned by DisableInsightRules.

Variants
---

### `InvalidParameterValue(String)`
The value of an input parameter is bad or out-of-range.
### `MissingRequiredParameter(String)`
An input parameter that is required is missing.

Implementations
---

### impl DisableInsightRulesError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DisableInsightRulesError>

Trait Implementations
---

### impl Debug for DisableInsightRulesError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for DisableInsightRulesError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for DisableInsightRulesError
#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str (1.0.0)
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<DisableInsightRulesError> for DisableInsightRulesError
#### fn eq(&self, other: &DisableInsightRulesError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DisableInsightRulesError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DisableInsightRulesError

Auto Trait Implementations
---

### impl RefUnwindSafe for DisableInsightRulesError
### impl Send for DisableInsightRulesError
### impl Sync for DisableInsightRulesError
### impl Unpin for DisableInsightRulesError
### impl UnwindSafe for DisableInsightRulesError

Blanket Implementations
---

Identical to the blanket implementations listed above for `DescribeInsightRulesError` (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToString`, `TryFrom`, `TryInto`, `WithSubscriber`).

Enum rusoto_cloudwatch::EnableAlarmActionsError
===

```
pub enum EnableAlarmActionsError {}
```

Errors returned by EnableAlarmActions.

Implementations
---

### impl EnableAlarmActionsError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<EnableAlarmActionsError>

Trait Implementations
---

### impl Debug for EnableAlarmActionsError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for EnableAlarmActionsError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for EnableAlarmActionsError
#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str (1.0.0)
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<EnableAlarmActionsError> for EnableAlarmActionsError
#### fn eq(&self, other: &EnableAlarmActionsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool (1.0.0)
This method tests for `!=`.
### impl StructuralPartialEq for EnableAlarmActionsError

Auto Trait Implementations
---

### impl RefUnwindSafe for EnableAlarmActionsError
### impl Send for EnableAlarmActionsError
### impl Sync for EnableAlarmActionsError
### impl Unpin for EnableAlarmActionsError
### impl UnwindSafe for EnableAlarmActionsError

Blanket Implementations
---

Identical to the blanket implementations listed above for `DescribeInsightRulesError` (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToString`, `TryFrom`, `TryInto`, `WithSubscriber`).
Enum rusoto_cloudwatch::EnableInsightRulesError
===

```
pub enum EnableInsightRulesError {
    InvalidParameterValue(String),
    LimitExceeded(String),
    MissingRequiredParameter(String),
}
```

Errors returned by EnableInsightRules.

Variants
---

### `InvalidParameterValue(String)`
The value of an input parameter is bad or out-of-range.
### `LimitExceeded(String)`
The operation exceeded one or more limits.
### `MissingRequiredParameter(String)`
An input parameter that is required is missing.

Implementations
---

### impl EnableInsightRulesError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<EnableInsightRulesError>

Trait Implementations
---

### impl Debug for EnableInsightRulesError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for EnableInsightRulesError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for EnableInsightRulesError
#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str (1.0.0)
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<EnableInsightRulesError> for EnableInsightRulesError
#### fn eq(&self, other: &EnableInsightRulesError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &EnableInsightRulesError) -> bool
This method tests for `!=`.
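The three variants of `EnableInsightRulesError` call for different responses: `LimitExceeded` reflects account state, while the other two indicate a malformed request. A sketch of mapping each variant to an operator-facing suggestion, using a local stand-in enum rather than the rusoto type itself:

```rust
// Stand-in mirroring the variant list documented above; NOT the
// rusoto_cloudwatch definition.
#[derive(Debug)]
enum EnableInsightRulesError {
    InvalidParameterValue(String),
    LimitExceeded(String),
    MissingRequiredParameter(String),
}

// Suggest a remediation per variant. The match is exhaustive, so adding
// a variant to the enum forces this function to be updated.
fn remediation(err: &EnableInsightRulesError) -> &'static str {
    match err {
        EnableInsightRulesError::InvalidParameterValue(_) => {
            "correct the out-of-range parameter value"
        }
        EnableInsightRulesError::LimitExceeded(_) => {
            "disable unused rules or request a quota increase"
        }
        EnableInsightRulesError::MissingRequiredParameter(_) => {
            "supply the missing required parameter"
        }
    }
}

fn main() {
    let err = EnableInsightRulesError::LimitExceeded("rule limit reached".into());
    println!("{}", remediation(&err));
}
```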
### impl StructuralPartialEq for EnableInsightRulesError

Auto Trait Implementations
---

### impl RefUnwindSafe for EnableInsightRulesError
### impl Send for EnableInsightRulesError
### impl Sync for EnableInsightRulesError
### impl Unpin for EnableInsightRulesError
### impl UnwindSafe for EnableInsightRulesError

Blanket Implementations
---

Identical to the blanket implementations listed above for `DescribeInsightRulesError` (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToString`, `TryFrom`, `TryInto`, `WithSubscriber`).

Enum rusoto_cloudwatch::GetDashboardError
===

```
pub enum GetDashboardError {
    DashboardNotFoundError(String),
    InternalServiceFault(String),
    InvalidParameterValue(String),
}
```

Errors returned by GetDashboard.

Variants
---

### `DashboardNotFoundError(String)`
The specified dashboard does not exist.
### `InternalServiceFault(String)`
Request processing has failed due to some unknown error, exception, or failure.
### `InvalidParameterValue(String)`
The value of an input parameter is bad or out-of-range.

Implementations
---

### impl GetDashboardError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<GetDashboardError>

Trait Implementations
---

### impl Debug for GetDashboardError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for GetDashboardError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for GetDashboardError
#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str (1.0.0)
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<GetDashboardError> for GetDashboardError
#### fn eq(&self, other: &GetDashboardError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GetDashboardError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for GetDashboardError

Auto Trait Implementations
---

### impl RefUnwindSafe for GetDashboardError
### impl Send for GetDashboardError
### impl Sync for GetDashboardError
### impl Unpin for GetDashboardError
### impl UnwindSafe for GetDashboardError

Blanket Implementations
---

Identical to the blanket implementations listed above for `DescribeInsightRulesError` (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToString`, `TryFrom`, `TryInto`, `WithSubscriber`).

Enum rusoto_cloudwatch::GetInsightRuleReportError
===

```
pub enum GetInsightRuleReportError {
    InvalidParameterValue(String),
    MissingRequiredParameter(String),
    ResourceNotFound(String),
}
```

Errors returned by GetInsightRuleReport.

Variants
---

### `InvalidParameterValue(String)`
The value of an input parameter is bad or out-of-range.
### `MissingRequiredParameter(String)`
An input parameter that is required is missing.
### `ResourceNotFound(String)`
The named resource does not exist.
Implementations
---

### impl GetInsightRuleReportError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<GetInsightRuleReportError>

Trait Implementations
---

### impl Debug for GetInsightRuleReportError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for GetInsightRuleReportError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for GetInsightRuleReportError
#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str (1.0.0)
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<GetInsightRuleReportError> for GetInsightRuleReportError
#### fn eq(&self, other: &GetInsightRuleReportError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GetInsightRuleReportError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for GetInsightRuleReportError

Auto Trait Implementations
---

### impl RefUnwindSafe for GetInsightRuleReportError
### impl Send for GetInsightRuleReportError
### impl Sync for GetInsightRuleReportError
### impl Unpin for GetInsightRuleReportError
### impl UnwindSafe for GetInsightRuleReportError

Blanket Implementations
---

Identical to the blanket implementations listed above for `DescribeInsightRulesError` (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToString`, `TryFrom`, `TryInto`, `WithSubscriber`).
Enum rusoto_cloudwatch::GetMetricDataError
===

```
pub enum GetMetricDataError {
    InvalidNextToken(String),
}
```

Errors returned by GetMetricData.

Variants
---

### `InvalidNextToken(String)`
The next token specified is invalid.

Implementations
---

### impl GetMetricDataError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<GetMetricDataError>

Trait Implementations
---

### impl Debug for GetMetricDataError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for GetMetricDataError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for GetMetricDataError
#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str (1.0.0)
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<GetMetricDataError> for GetMetricDataError
#### fn eq(&self, other: &GetMetricDataError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GetMetricDataError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for GetMetricDataError Auto Trait Implementations --- ### impl RefUnwindSafe for GetMetricDataError ### impl Send for GetMetricDataError ### impl Sync for GetMetricDataError ### impl Unpin for GetMetricDataError ### impl UnwindSafe for GetMetricDataError Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToString for T where    T: Display + ?Sized, source#### default fn to_string(&self) -> String Converts the given value to a `String`. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. 
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.

Enum rusoto_cloudwatch::GetMetricStatisticsError
===
```
pub enum GetMetricStatisticsError {
    InternalServiceFault(String),
    InvalidParameterCombination(String),
    InvalidParameterValue(String),
    MissingRequiredParameter(String),
}
```
Errors returned by GetMetricStatistics

Variants
---
### `InternalServiceFault(String)`
Request processing has failed due to some unknown error, exception, or failure.
### `InvalidParameterCombination(String)`
Parameters were used together that cannot be used together.
### `InvalidParameterValue(String)`
The value of an input parameter is bad or out-of-range.
### `MissingRequiredParameter(String)`
An input parameter that is required is missing.

Implementations
---
### impl GetMetricStatisticsError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<GetMetricStatisticsError>

Trait Implementations
---
### impl Debug for GetMetricStatisticsError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for GetMetricStatisticsError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for GetMetricStatisticsError
#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str (1.0.0)
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<GetMetricStatisticsError> for GetMetricStatisticsError
#### fn eq(&self, other: &GetMetricStatisticsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GetMetricStatisticsError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for GetMetricStatisticsError

Auto Trait Implementations
---
### impl RefUnwindSafe for GetMetricStatisticsError
### impl Send for GetMetricStatisticsError
### impl Sync for GetMetricStatisticsError
### impl Unpin for GetMetricStatisticsError
### impl UnwindSafe for GetMetricStatisticsError

Blanket Implementations: identical to those listed for GetMetricDataError above.
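The GetMetricStatistics variants above split naturally into a server-side fault and caller-side parameter mistakes, which suggests a retry policy: retrying makes sense only for `InternalServiceFault`, since the parameter errors will fail identically on every attempt. A minimal sketch, using a stand-in enum because the real type lives in the rusoto_cloudwatch crate:

```rust
// Stand-in for rusoto_cloudwatch::GetMetricStatisticsError.
#[derive(Debug)]
enum GetMetricStatisticsError {
    InternalServiceFault(String),
    InvalidParameterCombination(String),
    InvalidParameterValue(String),
    MissingRequiredParameter(String),
}

/// Only a server-side fault is worth retrying; the three parameter
/// errors are caller bugs that must be fixed, not retried.
fn is_retryable(err: &GetMetricStatisticsError) -> bool {
    matches!(err, GetMetricStatisticsError::InternalServiceFault(_))
}

fn main() {
    assert!(is_retryable(&GetMetricStatisticsError::InternalServiceFault(
        "internal error".into()
    )));
    assert!(!is_retryable(&GetMetricStatisticsError::MissingRequiredParameter(
        "Period".into()
    )));
    println!("retry classification ok");
}
```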
Enum rusoto_cloudwatch::GetMetricStreamError
===
```
pub enum GetMetricStreamError {
    InternalServiceFault(String),
    InvalidParameterCombination(String),
    InvalidParameterValue(String),
    MissingRequiredParameter(String),
    ResourceNotFound(String),
}
```
Errors returned by GetMetricStream

Variants
---
### `InternalServiceFault(String)`
Request processing has failed due to some unknown error, exception, or failure.
### `InvalidParameterCombination(String)`
Parameters were used together that cannot be used together.
### `InvalidParameterValue(String)`
The value of an input parameter is bad or out-of-range.
### `MissingRequiredParameter(String)`
An input parameter that is required is missing.
### `ResourceNotFound(String)`
The named resource does not exist.

Implementations
---
### impl GetMetricStreamError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<GetMetricStreamError>

Trait Implementations
---
### impl Debug for GetMetricStreamError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for GetMetricStreamError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for GetMetricStreamError
#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str (1.0.0)
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<GetMetricStreamError> for GetMetricStreamError
#### fn eq(&self, other: &GetMetricStreamError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GetMetricStreamError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for GetMetricStreamError

Auto Trait Implementations
---
### impl RefUnwindSafe for GetMetricStreamError
### impl Send for GetMetricStreamError
### impl Sync for GetMetricStreamError
### impl Unpin for GetMetricStreamError
### impl UnwindSafe for GetMetricStreamError

Blanket Implementations: identical to those listed for GetMetricDataError above.
Enum rusoto_cloudwatch::GetMetricWidgetImageError
===
```
pub enum GetMetricWidgetImageError {}
```
Errors returned by GetMetricWidgetImage

Implementations
---
### impl GetMetricWidgetImageError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<GetMetricWidgetImageError>

Trait Implementations
---
### impl Debug for GetMetricWidgetImageError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for GetMetricWidgetImageError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for GetMetricWidgetImageError
#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str (1.0.0)
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<GetMetricWidgetImageError> for GetMetricWidgetImageError
#### fn eq(&self, other: &GetMetricWidgetImageError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool (1.0.0)
This method tests for `!=`.
### impl StructuralPartialEq for GetMetricWidgetImageError

Auto Trait Implementations
---
### impl RefUnwindSafe for GetMetricWidgetImageError
### impl Send for GetMetricWidgetImageError
### impl Sync for GetMetricWidgetImageError
### impl Unpin for GetMetricWidgetImageError
### impl UnwindSafe for GetMetricWidgetImageError

Blanket Implementations: identical to those listed for GetMetricDataError above.
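Note that `GetMetricWidgetImageError` declares no variants at all, so any failure from GetMetricWidgetImage must come from outside the modeled service errors (transport, credentials, and so on). In Rust this is directly expressible: a match on an uninhabited enum needs no arms. A self-contained sketch with stand-in types (the hypothetical `HttpDispatch` variant stands in for rusoto_core's transport error):

```rust
// Stand-in for the variant-less rusoto_cloudwatch::GetMetricWidgetImageError.
enum GetMetricWidgetImageError {}

// Simplified stand-in for rusoto_core::RusotoError.
enum RusotoError<E> {
    Service(E),
    HttpDispatch(String),
}

fn describe(err: RusotoError<GetMetricWidgetImageError>) -> String {
    match err {
        // The empty match is exhaustive: no service variant can ever exist,
        // so this arm is statically unreachable.
        RusotoError::Service(e) => match e {},
        RusotoError::HttpDispatch(msg) => format!("transport failure: {msg}"),
    }
}

fn main() {
    let msg = describe(RusotoError::HttpDispatch("timed out".into()));
    assert_eq!(msg, "transport failure: timed out");
    println!("{msg}");
}
```

The empty `match e {}` is the idiomatic way to prove to the compiler that the service-error path cannot occur.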
Enum rusoto_cloudwatch::ListDashboardsError
===
```
pub enum ListDashboardsError {
    InternalServiceFault(String),
    InvalidParameterValue(String),
}
```
Errors returned by ListDashboards

Variants
---
### `InternalServiceFault(String)`
Request processing has failed due to some unknown error, exception, or failure.
### `InvalidParameterValue(String)`
The value of an input parameter is bad or out-of-range.

Implementations
---
### impl ListDashboardsError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListDashboardsError>

Trait Implementations
---
### impl Debug for ListDashboardsError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for ListDashboardsError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for ListDashboardsError
#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str (1.0.0)
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<ListDashboardsError> for ListDashboardsError
#### fn eq(&self, other: &ListDashboardsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListDashboardsError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ListDashboardsError

Auto Trait Implementations
---
### impl RefUnwindSafe for ListDashboardsError
### impl Send for ListDashboardsError
### impl Sync for ListDashboardsError
### impl Unpin for ListDashboardsError
### impl UnwindSafe for ListDashboardsError

Blanket Implementations: identical to those listed for GetMetricDataError above.
Enum rusoto_cloudwatch::ListMetricStreamsError
===
```
pub enum ListMetricStreamsError {
    InternalServiceFault(String),
    InvalidNextToken(String),
    InvalidParameterValue(String),
    MissingRequiredParameter(String),
}
```
Errors returned by ListMetricStreams

Variants
---
### `InternalServiceFault(String)`
Request processing has failed due to some unknown error, exception, or failure.
### `InvalidNextToken(String)`
The next token specified is invalid.
### `InvalidParameterValue(String)`
The value of an input parameter is bad or out-of-range.
### `MissingRequiredParameter(String)`
An input parameter that is required is missing.

Implementations
---
### impl ListMetricStreamsError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListMetricStreamsError>

Trait Implementations
---
### impl Debug for ListMetricStreamsError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for ListMetricStreamsError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for ListMetricStreamsError
#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str (1.0.0)
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<ListMetricStreamsError> for ListMetricStreamsError
#### fn eq(&self, other: &ListMetricStreamsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListMetricStreamsError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ListMetricStreamsError

Auto Trait Implementations
---
### impl RefUnwindSafe for ListMetricStreamsError
### impl Send for ListMetricStreamsError
### impl Sync for ListMetricStreamsError
### impl Unpin for ListMetricStreamsError
### impl UnwindSafe for ListMetricStreamsError

Blanket Implementations: identical to those listed for GetMetricDataError above.
Enum rusoto_cloudwatch::ListMetricsError
===
```
pub enum ListMetricsError {
    InternalServiceFault(String),
    InvalidParameterValue(String),
}
```
Errors returned by ListMetrics

Variants
---
### `InternalServiceFault(String)`
Request processing has failed due to some unknown error, exception, or failure.
### `InvalidParameterValue(String)`
The value of an input parameter is bad or out-of-range.
Implementations
---
### impl ListMetricsError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListMetricsError>

Trait Implementations
---
### impl Debug for ListMetricsError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for ListMetricsError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for ListMetricsError
#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str (1.0.0)
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<ListMetricsError> for ListMetricsError
#### fn eq(&self, other: &ListMetricsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListMetricsError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ListMetricsError

Auto Trait Implementations
---
### impl RefUnwindSafe for ListMetricsError
### impl Send for ListMetricsError
### impl Sync for ListMetricsError
### impl Unpin for ListMetricsError
### impl UnwindSafe for ListMetricsError

Blanket Implementations: identical to those listed for GetMetricDataError above.
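Because every one of these error enums implements `Display` and `Error`, callers that do not care about individual variants can surface them through `Box<dyn Error>` and the standard error machinery. A minimal sketch using a stand-in `ListMetricsError` (the real type, and its exact `Display` output, live in rusoto_cloudwatch; the message strings below are illustrative):

```rust
use std::error::Error;
use std::fmt;

// Stand-in for rusoto_cloudwatch::ListMetricsError, with an
// illustrative Display impl.
#[derive(Debug)]
enum ListMetricsError {
    InternalServiceFault(String),
    InvalidParameterValue(String),
}

impl fmt::Display for ListMetricsError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ListMetricsError::InternalServiceFault(m) => {
                write!(f, "internal service fault: {m}")
            }
            ListMetricsError::InvalidParameterValue(m) => {
                write!(f, "invalid parameter value: {m}")
            }
        }
    }
}

// A blanket Error impl is all that is needed once Display and Debug exist.
impl Error for ListMetricsError {}

// Hypothetical caller that erases the concrete error type.
fn list_metrics() -> Result<Vec<String>, Box<dyn Error>> {
    Err(Box::new(ListMetricsError::InvalidParameterValue(
        "Namespace".into(),
    )))
}

fn main() {
    let err = list_metrics().unwrap_err();
    // The Display impl (via ToString) gives a readable message without
    // matching on variants.
    assert_eq!(err.to_string(), "invalid parameter value: Namespace");
}
```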
Enum rusoto_cloudwatch::ListTagsForResourceError
===
```
pub enum ListTagsForResourceError {
    InternalServiceFault(String),
    InvalidParameterValue(String),
    ResourceNotFound(String),
}
```
Errors returned by ListTagsForResource

Variants
---
### `InternalServiceFault(String)`
Request processing has failed due to some unknown error, exception, or failure.
### `InvalidParameterValue(String)`
The value of an input parameter is bad or out-of-range.
### `ResourceNotFound(String)`
The named resource does not exist.

Implementations
---
### impl ListTagsForResourceError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListTagsForResourceError>

Trait Implementations
---
### impl Debug for ListTagsForResourceError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for ListTagsForResourceError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for ListTagsForResourceError
#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str (1.0.0)
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
### impl PartialEq<ListTagsForResourceError> for ListTagsForResourceError
#### fn eq(&self, other: &ListTagsForResourceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ListTagsForResourceError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ListTagsForResourceError

Auto Trait Implementations
---
### impl RefUnwindSafe for ListTagsForResourceError
### impl Send for ListTagsForResourceError
### impl Sync for ListTagsForResourceError
### impl Unpin for ListTagsForResourceError
### impl UnwindSafe for ListTagsForResourceError

Blanket Implementations: identical to those listed for GetMetricDataError above.
Enum rusoto_cloudwatch::PutAnomalyDetectorError
===
```
pub enum PutAnomalyDetectorError {
    InternalServiceFault(String),
    InvalidParameterValue(String),
    LimitExceeded(String),
    MissingRequiredParameter(String),
}
```
Errors returned by PutAnomalyDetector

Variants
---
### `InternalServiceFault(String)`
Request processing has failed due to some unknown error, exception, or failure.
### `InvalidParameterValue(String)`
The value of an input parameter is bad or out-of-range.
### `LimitExceeded(String)`
The operation exceeded one or more limits.
### `MissingRequiredParameter(String)`
An input parameter that is required is missing.
Implementations
---

### impl PutAnomalyDetectorError

#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<PutAnomalyDetectorError>

Trait Implementations
---

### impl Debug for PutAnomalyDetectorError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for PutAnomalyDetectorError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for PutAnomalyDetectorError

#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.

#### fn backtrace(&self) -> Option<&Backtrace>
This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.

#### fn description(&self) -> &str (1.0.0)
Deprecated since 1.42.0: use the Display impl or to_string()

#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting

### impl PartialEq<PutAnomalyDetectorError> for PutAnomalyDetectorError

#### fn eq(&self, other: &PutAnomalyDetectorError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &PutAnomalyDetectorError) -> bool
This method tests for `!=`.

### impl StructuralPartialEq for PutAnomalyDetectorError

Auto Trait Implementations
---

### impl RefUnwindSafe for PutAnomalyDetectorError
### impl Send for PutAnomalyDetectorError
### impl Sync for PutAnomalyDetectorError
### impl Unpin for PutAnomalyDetectorError
### impl UnwindSafe for PutAnomalyDetectorError

Blanket Implementations
---

Identical to the blanket implementations listed for ListTagsForResourceError above.

Enum rusoto_cloudwatch::PutCompositeAlarmError
===

```
pub enum PutCompositeAlarmError {
    LimitExceededFault(String),
}
```

Errors returned by PutCompositeAlarm

Variants
---

### `LimitExceededFault(String)`
The quota for alarms for this customer has already been reached.

Implementations
---

### impl PutCompositeAlarmError

#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<PutCompositeAlarmError>

Trait Implementations
---

### impl Debug for PutCompositeAlarmError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for PutCompositeAlarmError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for PutCompositeAlarmError

#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.

#### fn backtrace(&self) -> Option<&Backtrace>
This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.

#### fn description(&self) -> &str (1.0.0)
Deprecated since 1.42.0: use the Display impl or to_string()

#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting

### impl PartialEq<PutCompositeAlarmError> for PutCompositeAlarmError

#### fn eq(&self, other: &PutCompositeAlarmError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &PutCompositeAlarmError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for PutCompositeAlarmError

Auto Trait Implementations
---

### impl RefUnwindSafe for PutCompositeAlarmError
### impl Send for PutCompositeAlarmError
### impl Sync for PutCompositeAlarmError
### impl Unpin for PutCompositeAlarmError
### impl UnwindSafe for PutCompositeAlarmError

Blanket Implementations
---

Identical to the blanket implementations listed for ListTagsForResourceError above.

Enum rusoto_cloudwatch::PutDashboardError
===

```
pub enum PutDashboardError {
    DashboardInvalidInputError(String),
    InternalServiceFault(String),
}
```

Errors returned by PutDashboard

Variants
---

### `DashboardInvalidInputError(String)`
Some part of the dashboard data is invalid.

### `InternalServiceFault(String)`
Request processing has failed due to some unknown error, exception, or failure.

Implementations
---

### impl PutDashboardError

#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<PutDashboardError>

Trait Implementations
---

### impl Debug for PutDashboardError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for PutDashboardError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for PutDashboardError

#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.

#### fn backtrace(&self) -> Option<&Backtrace>
This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str (1.0.0)
Deprecated since 1.42.0: use the Display impl or to_string()

#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting

### impl PartialEq<PutDashboardError> for PutDashboardError

#### fn eq(&self, other: &PutDashboardError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &PutDashboardError) -> bool
This method tests for `!=`.

### impl StructuralPartialEq for PutDashboardError

Auto Trait Implementations
---

### impl RefUnwindSafe for PutDashboardError
### impl Send for PutDashboardError
### impl Sync for PutDashboardError
### impl Unpin for PutDashboardError
### impl UnwindSafe for PutDashboardError

Blanket Implementations
---

Identical to the blanket implementations listed for ListTagsForResourceError above.

Enum rusoto_cloudwatch::PutInsightRuleError
===

```
pub enum PutInsightRuleError {
    InvalidParameterValue(String),
    LimitExceeded(String),
    MissingRequiredParameter(String),
}
```

Errors returned by PutInsightRule

Variants
---

### `InvalidParameterValue(String)`
The value of an input parameter is bad or out-of-range.

### `LimitExceeded(String)`
The operation exceeded one or more limits.

### `MissingRequiredParameter(String)`
An input parameter that is required is missing.
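Each of these error enums implements `Debug`, `Display`, and `std::error::Error`, so a caller can print it, call `to_string()` on it (via the blanket `ToString` impl), or box it as a `dyn Error`. A rough sketch using a hypothetical mirror of `PutInsightRuleError` — the `Display` body here is an assumption for illustration, not the crate's exact output format:

```rust
use std::error::Error;
use std::fmt;

// Hypothetical mirror of rusoto_cloudwatch::PutInsightRuleError.
#[derive(Debug)]
#[allow(dead_code)]
enum PutInsightRuleError {
    InvalidParameterValue(String),
    LimitExceeded(String),
    MissingRequiredParameter(String),
}

impl fmt::Display for PutInsightRuleError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Every variant carries the service's message string; printing it
        // is roughly what the generated Display impl amounts to.
        match self {
            PutInsightRuleError::InvalidParameterValue(msg)
            | PutInsightRuleError::LimitExceeded(msg)
            | PutInsightRuleError::MissingRequiredParameter(msg) => write!(f, "{}", msg),
        }
    }
}

impl Error for PutInsightRuleError {}

fn main() {
    let err = PutInsightRuleError::LimitExceeded("rule quota reached".to_string());
    // ToString comes for free from Display (blanket impl<T: Display> ToString for T),
    // and the value erases cleanly to Box<dyn Error>.
    let text = err.to_string();
    let boxed: Box<dyn Error> = Box::new(err);
    println!("{} / {}", text, boxed);
}
```

This is why the deprecated `description()` method points callers at "the Display impl or to_string()" instead.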
Implementations
---

### impl PutInsightRuleError

#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<PutInsightRuleError>

Trait Implementations
---

### impl Debug for PutInsightRuleError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for PutInsightRuleError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for PutInsightRuleError

#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.

#### fn backtrace(&self) -> Option<&Backtrace>
This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.

#### fn description(&self) -> &str (1.0.0)
Deprecated since 1.42.0: use the Display impl or to_string()

#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting

### impl PartialEq<PutInsightRuleError> for PutInsightRuleError

#### fn eq(&self, other: &PutInsightRuleError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &PutInsightRuleError) -> bool
This method tests for `!=`.

### impl StructuralPartialEq for PutInsightRuleError

Auto Trait Implementations
---

### impl RefUnwindSafe for PutInsightRuleError
### impl Send for PutInsightRuleError
### impl Sync for PutInsightRuleError
### impl Unpin for PutInsightRuleError
### impl UnwindSafe for PutInsightRuleError

Blanket Implementations
---

Identical to the blanket implementations listed for ListTagsForResourceError above.

Enum rusoto_cloudwatch::PutMetricAlarmError
===

```
pub enum PutMetricAlarmError {
    LimitExceededFault(String),
}
```

Errors returned by PutMetricAlarm

Variants
---

### `LimitExceededFault(String)`
The quota for alarms for this customer has already been reached.

Implementations
---

### impl PutMetricAlarmError

#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<PutMetricAlarmError>

Trait Implementations
---

### impl Debug for PutMetricAlarmError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for PutMetricAlarmError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for PutMetricAlarmError

#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.

#### fn backtrace(&self) -> Option<&Backtrace>
This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.

#### fn description(&self) -> &str (1.0.0)
Deprecated since 1.42.0: use the Display impl or to_string()

#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting

### impl PartialEq<PutMetricAlarmError> for PutMetricAlarmError

#### fn eq(&self, other: &PutMetricAlarmError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &PutMetricAlarmError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for PutMetricAlarmError

Auto Trait Implementations
---

### impl RefUnwindSafe for PutMetricAlarmError
### impl Send for PutMetricAlarmError
### impl Sync for PutMetricAlarmError
### impl Unpin for PutMetricAlarmError
### impl UnwindSafe for PutMetricAlarmError

Blanket Implementations
---

Identical to the blanket implementations listed for ListTagsForResourceError above.

Enum rusoto_cloudwatch::PutMetricDataError
===

```
pub enum PutMetricDataError {
    InternalServiceFault(String),
    InvalidParameterCombination(String),
    InvalidParameterValue(String),
    MissingRequiredParameter(String),
}
```

Errors returned by PutMetricData

Variants
---

### `InternalServiceFault(String)`
Request processing has failed due to some unknown error, exception, or failure.

### `InvalidParameterCombination(String)`
Parameters were used together that cannot be used together.

### `InvalidParameterValue(String)`
The value of an input parameter is bad or out-of-range.

### `MissingRequiredParameter(String)`
An input parameter that is required is missing.

Implementations
---

### impl PutMetricDataError

#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<PutMetricDataError>

Trait Implementations
---

### impl Debug for PutMetricDataError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for PutMetricDataError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for PutMetricDataError

#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.

#### fn description(&self) -> &str (1.0.0)
Deprecated since 1.42.0: use the Display impl or to_string()

#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting

### impl PartialEq<PutMetricDataError> for PutMetricDataError

#### fn eq(&self, other: &PutMetricDataError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &PutMetricDataError) -> bool
This method tests for `!=`.

### impl StructuralPartialEq for PutMetricDataError

Auto Trait Implementations
---

### impl RefUnwindSafe for PutMetricDataError
### impl Send for PutMetricDataError
### impl Sync for PutMetricDataError
### impl Unpin for PutMetricDataError
### impl UnwindSafe for PutMetricDataError

Blanket Implementations
---

Identical to the blanket implementations listed for ListTagsForResourceError above.

Enum rusoto_cloudwatch::PutMetricStreamError
===

```
pub enum PutMetricStreamError {
    ConcurrentModification(String),
    InternalServiceFault(String),
    InvalidParameterCombination(String),
    InvalidParameterValue(String),
    MissingRequiredParameter(String),
}
```

Errors returned by PutMetricStream

Variants
---

### `ConcurrentModification(String)`
More than one process tried to modify a resource at the same time.

### `InternalServiceFault(String)`
Request processing has failed due to some unknown error, exception, or failure.
### `InvalidParameterCombination(String)`
Parameters were used together that cannot be used together.

### `InvalidParameterValue(String)`
The value of an input parameter is bad or out-of-range.

### `MissingRequiredParameter(String)`
An input parameter that is required is missing.

Implementations
---

### impl PutMetricStreamError

#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<PutMetricStreamError>

Trait Implementations
---

### impl Debug for PutMetricStreamError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for PutMetricStreamError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for PutMetricStreamError

#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.

#### fn backtrace(&self) -> Option<&Backtrace>
This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.

#### fn description(&self) -> &str (1.0.0)
Deprecated since 1.42.0: use the Display impl or to_string()

#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting

### impl PartialEq<PutMetricStreamError> for PutMetricStreamError

#### fn eq(&self, other: &PutMetricStreamError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &PutMetricStreamError) -> bool
This method tests for `!=`.

### impl StructuralPartialEq for PutMetricStreamError

Auto Trait Implementations
---

### impl RefUnwindSafe for PutMetricStreamError
### impl Send for PutMetricStreamError
### impl Sync for PutMetricStreamError
### impl Unpin for PutMetricStreamError
### impl UnwindSafe for PutMetricStreamError

Blanket Implementations
---

Identical to the blanket implementations listed for ListTagsForResourceError above.

Enum rusoto_cloudwatch::SetAlarmStateError
===

```
pub enum SetAlarmStateError {
    InvalidFormatFault(String),
    ResourceNotFound(String),
}
```

Errors returned by SetAlarmState

Variants
---

### `InvalidFormatFault(String)`
Data was not syntactically valid JSON.

### `ResourceNotFound(String)`
The named resource does not exist.

Implementations
---

### impl SetAlarmStateError

#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<SetAlarmStateError>

Trait Implementations
---

### impl Debug for SetAlarmStateError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for SetAlarmStateError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for SetAlarmStateError

#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.

#### fn backtrace(&self) -> Option<&Backtrace>
This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string().
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.

### impl PartialEq<SetAlarmStateError> for SetAlarmStateError
#### fn eq(&self, other: &SetAlarmStateError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &SetAlarmStateError) -> bool
This method tests for `!=`.

### impl StructuralPartialEq for SetAlarmStateError

Auto Trait Implementations
---

### impl RefUnwindSafe for SetAlarmStateError
### impl Send for SetAlarmStateError
### impl Sync for SetAlarmStateError
### impl Unpin for SetAlarmStateError
### impl UnwindSafe for SetAlarmStateError

Blanket Implementations
---

The blanket implementations (Any, Borrow, BorrowMut, From, Instrument, Into, Same, ToString, TryFrom, TryInto, WithSubscriber) are identical to those listed for PutMetricStreamError above.

Enum rusoto_cloudwatch::StartMetricStreamsError
===

```
pub enum StartMetricStreamsError {
    InternalServiceFault(String),
    InvalidParameterValue(String),
    MissingRequiredParameter(String),
}
```

Errors returned by StartMetricStreams.

Variants
---

### `InternalServiceFault(String)`
Request processing has failed due to some unknown error, exception, or failure.

### `InvalidParameterValue(String)`
The value of an input parameter is bad or out-of-range.

### `MissingRequiredParameter(String)`
An input parameter that is required is missing.
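Since `from_response` classifies each failed call into one of these variants, callers typically match on them to decide how to react. The sketch below uses a local stand-in enum mirroring the documented variants (not the real `rusoto_cloudwatch` type, so it compiles without the crate) to show one plausible policy: transient service faults are retried, caller-side parameter errors are not.

```rust
// Sketch only: a local stand-in mirroring the documented variants of
// rusoto_cloudwatch::StartMetricStreamsError, so the matching pattern
// can be shown without the rusoto dependency.
#[derive(Debug, PartialEq)]
pub enum StartMetricStreamsError {
    InternalServiceFault(String),
    InvalidParameterValue(String),
    MissingRequiredParameter(String),
}

/// Hypothetical helper: is this failure worth retrying?
/// Internal service faults are transient; parameter errors are not.
pub fn is_retryable(err: &StartMetricStreamsError) -> bool {
    match err {
        StartMetricStreamsError::InternalServiceFault(_) => true,
        StartMetricStreamsError::InvalidParameterValue(_)
        | StartMetricStreamsError::MissingRequiredParameter(_) => false,
    }
}

fn main() {
    let err = StartMetricStreamsError::InvalidParameterValue(
        "bad or out-of-range value".to_string(),
    );
    println!("retryable: {}", is_retryable(&err)); // prints "retryable: false"
}
```

The same three-way split applies verbatim to `StopMetricStreamsError` below, which carries identical variants.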
Implementations
---

### impl StartMetricStreamsError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<StartMetricStreamsError>

Trait Implementations
---

### impl Debug for StartMetricStreamsError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for StartMetricStreamsError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for StartMetricStreamsError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred (nightly-only experimental API).
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string().
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.

### impl PartialEq<StartMetricStreamsError> for StartMetricStreamsError
#### fn eq(&self, other: &StartMetricStreamsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &StartMetricStreamsError) -> bool
This method tests for `!=`.

### impl StructuralPartialEq for StartMetricStreamsError

Auto Trait Implementations
---

### impl RefUnwindSafe for StartMetricStreamsError
### impl Send for StartMetricStreamsError
### impl Sync for StartMetricStreamsError
### impl Unpin for StartMetricStreamsError
### impl UnwindSafe for StartMetricStreamsError

Blanket Implementations
---

The blanket implementations are identical to those listed for PutMetricStreamError above.

Enum rusoto_cloudwatch::StopMetricStreamsError
===

```
pub enum StopMetricStreamsError {
    InternalServiceFault(String),
    InvalidParameterValue(String),
    MissingRequiredParameter(String),
}
```

Errors returned by StopMetricStreams.

Variants
---

### `InternalServiceFault(String)`
Request processing has failed due to some unknown error, exception, or failure.

### `InvalidParameterValue(String)`
The value of an input parameter is bad or out-of-range.

### `MissingRequiredParameter(String)`
An input parameter that is required is missing.

Implementations
---

### impl StopMetricStreamsError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<StopMetricStreamsError>

Trait Implementations
---

### impl Debug for StopMetricStreamsError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for StopMetricStreamsError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for StopMetricStreamsError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred (nightly-only experimental API).
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string().
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.

### impl PartialEq<StopMetricStreamsError> for StopMetricStreamsError
#### fn eq(&self, other: &StopMetricStreamsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &StopMetricStreamsError) -> bool
This method tests for `!=`.

### impl StructuralPartialEq for StopMetricStreamsError

Auto Trait Implementations
---

### impl RefUnwindSafe for StopMetricStreamsError
### impl Send for StopMetricStreamsError
### impl Sync for StopMetricStreamsError
### impl Unpin for StopMetricStreamsError
### impl UnwindSafe for StopMetricStreamsError

Blanket Implementations
---

The blanket implementations are identical to those listed for PutMetricStreamError above.

Enum rusoto_cloudwatch::TagResourceError
===

```
pub enum TagResourceError {
    ConcurrentModification(String),
    InternalServiceFault(String),
    InvalidParameterValue(String),
    ResourceNotFound(String),
}
```

Errors returned by TagResource.

Variants
---

### `ConcurrentModification(String)`
More than one process tried to modify a resource at the same time.

### `InternalServiceFault(String)`
Request processing has failed due to some unknown error, exception, or failure.

### `InvalidParameterValue(String)`
The value of an input parameter is bad or out-of-range.

### `ResourceNotFound(String)`
The named resource does not exist.
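Every enum in this module carries the same hand-written `Display` and `Error` implementations documented under "Trait Implementations". As a minimal sketch of that pattern, here is a hypothetical local enum mirroring the `TagResourceError` variants (not the generated rusoto code) with `Display` forwarding the carried message and a marker `Error` impl:

```rust
use std::error::Error;
use std::fmt;

// Sketch only: a local enum mirroring the documented TagResourceError
// variants, so Display/Error can be shown without the rusoto crate.
#[derive(Debug)]
pub enum TagResourceError {
    ConcurrentModification(String),
    InternalServiceFault(String),
    InvalidParameterValue(String),
    ResourceNotFound(String),
}

impl fmt::Display for TagResourceError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Each variant carries the service's message; print it directly.
        match self {
            TagResourceError::ConcurrentModification(msg)
            | TagResourceError::InternalServiceFault(msg)
            | TagResourceError::InvalidParameterValue(msg)
            | TagResourceError::ResourceNotFound(msg) => write!(f, "{}", msg),
        }
    }
}

// Debug + Display are enough to satisfy the Error trait's defaults.
impl Error for TagResourceError {}

fn main() {
    let err = TagResourceError::ResourceNotFound("alarm does not exist".to_string());
    println!("{}", err); // prints "alarm does not exist"
}
```

Because `Error::description` and `Error::cause` are deprecated (as noted in the docs), the marker impl leans on `Display` and the default `source`, which is also what `to_string()` uses via the blanket `ToString` impl.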
Implementations
---

### impl TagResourceError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<TagResourceError>

Trait Implementations
---

### impl Debug for TagResourceError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for TagResourceError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for TagResourceError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred (nightly-only experimental API).
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string().
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.

### impl PartialEq<TagResourceError> for TagResourceError
#### fn eq(&self, other: &TagResourceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &TagResourceError) -> bool
This method tests for `!=`.

### impl StructuralPartialEq for TagResourceError

Auto Trait Implementations
---

### impl RefUnwindSafe for TagResourceError
### impl Send for TagResourceError
### impl Sync for TagResourceError
### impl Unpin for TagResourceError
### impl UnwindSafe for TagResourceError

Blanket Implementations
---

The blanket implementations are identical to those listed for PutMetricStreamError above.

Enum rusoto_cloudwatch::UntagResourceError
===

```
pub enum UntagResourceError {
    ConcurrentModification(String),
    InternalServiceFault(String),
    InvalidParameterValue(String),
    ResourceNotFound(String),
}
```

Errors returned by UntagResource.

Variants
---

### `ConcurrentModification(String)`
More than one process tried to modify a resource at the same time.

### `InternalServiceFault(String)`
Request processing has failed due to some unknown error, exception, or failure.

### `InvalidParameterValue(String)`
The value of an input parameter is bad or out-of-range.

### `ResourceNotFound(String)`
The named resource does not exist.

Implementations
---

### impl UntagResourceError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<UntagResourceError>

Trait Implementations
---

### impl Debug for UntagResourceError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for UntagResourceError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for UntagResourceError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
Returns a stack backtrace, if available, of where this error occurred (nightly-only experimental API).
#### fn description(&self) -> &str
Deprecated since 1.42.0: use the Display impl or to_string().
#### fn cause(&self) -> Option<&dyn Error>
Deprecated since 1.33.0: replaced by Error::source, which can support downcasting.

### impl PartialEq<UntagResourceError> for UntagResourceError
#### fn eq(&self, other: &UntagResourceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &UntagResourceError) -> bool
This method tests for `!=`.

### impl StructuralPartialEq for UntagResourceError

Auto Trait Implementations
---

### impl RefUnwindSafe for UntagResourceError
### impl Send for UntagResourceError
### impl Sync for UntagResourceError
### impl Unpin for UntagResourceError
### impl UnwindSafe for UntagResourceError

Blanket Implementations
---

The blanket implementations are identical to those listed for PutMetricStreamError above.
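The blanket `impl<T, U> Into<U> for T where U: From<T>` that appears under every type above ("Calls `U::from(self)`") is a core-library rule, not anything rusoto-specific. A small self-contained illustration with hypothetical `Celsius`/`Fahrenheit` types: implementing `From` alone is enough to get `.into()` for free.

```rust
// Hypothetical example types, unrelated to rusoto, chosen only to show
// the documented blanket impl: `Into` is derived from `From` by core.
struct Celsius(f64);
struct Fahrenheit(f64);

impl From<Celsius> for Fahrenheit {
    fn from(c: Celsius) -> Fahrenheit {
        Fahrenheit(c.0 * 9.0 / 5.0 + 32.0)
    }
}

fn main() {
    // `.into()` here just calls `Fahrenheit::from(self)`, exactly as the
    // blanket-impl documentation above states.
    let f: Fahrenheit = Celsius(100.0).into();
    println!("{}", f.0); // prints "212"
}
```

The same mechanism explains why `Into` is never implemented directly anywhere in this crate's docs: only `From` impls are written, and the blanket impl supplies the rest.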
Package ‘viscomp’
January 16, 2023

Type: Package
Title: Visualize Multi-Component Interventions in Network Meta-Analysis
Version: 1.0.0
Maintainer: <NAME> <<EMAIL>>
Description: A set of functions providing several visualization tools for exploring the behavior of the components in a network meta-analysis of multi-component (complex) interventions:
- components descriptive analysis
- heat plot of the two-by-two component combinations
- leaving one component combination out scatter plot
- violin plot for specific component combinations' effects
- density plot for components' effects
- waterfall plot for the interventions' effects that differ by a certain component combination
- network graph of components
- rank heat plot of components for multiple outcomes.
The implemented tools are described by Seitidis et al. (2023) <doi:10.1002/jrsm.1617>.
License: GPL (>= 3)
Encoding: UTF-8
LazyData: true
Depends: R (>= 2.10)
Imports: circlize (>= 0.4.15), dplyr (>= 1.0.9), ggExtra (>= 0.10.0), ggnewscale (>= 0.4.8), ggplot2 (>= 3.3.6), Hmisc (>= 4.7.0), MASS (>= 7.3.56), netmeta (>= 1.3-0), plyr (>= 1.8.7), qgraph (>= 1.9.2), reshape2 (>= 1.4.4), tibble (>= 3.1.7), tidyr (>= 1.2.0)
RoxygenNote: 7.2.0
Suggests: covr, knitr, rmarkdown
URL: https://github.com/georgiosseitidis/viscomp, https://georgiosseitidis.github.io/viscomp/
BugReports: https://github.com/georgiosseitidis/viscomp/issues
VignetteBuilder: knitr
NeedsCompilation: no
Author: <NAME> [aut, cre] (<https://orcid.org/0000-0003-0856-1892>), <NAME> [aut] (<https://orcid.org/0000-0002-6258-8861>), <NAME> [aut], <NAME> [aut], <NAME> [aut], <NAME> [aut] (<https://orcid.org/0000-0002-9769-3725>), <NAME> [aut] (<https://orcid.org/0000-0001-6388-4825>), <NAME> [aut] (<https://orcid.org/0000-0003-1041-4592>)
Repository: CRAN
Date/Publication: 2023-01-16 09:50:02 UTC

R topics documented: compdesc, compGraph, denscomp, heatcomp, loccos, MACE, nmaMACE, rankheatplot, spec…, watercom…
compdesc: Components descriptive analysis

Description
The function performs a descriptive analysis regarding the frequency of the components in the network meta-analysis model.

Usage
compdesc(model, sep = "+", heatmap = TRUE, percentage = TRUE, digits = 2)

Arguments
model: An object of class netmeta.
sep: A single character that defines the separator between intervention components.
heatmap: logical. If TRUE a heat matrix of the components' frequency is plotted.
percentage: logical. If TRUE combinations' percentages are printed as a number instead of a fraction value in the heatmap.
digits: A single integer value that specifies the percentages' decimal places in the heatmap.

Value
A list containing three items:
crosstable: A cross-table containing the frequency of the components. Each cell represents the number of arms where the corresponding component combination was observed.
frequency: A data.frame that contains the components' frequency. Columns:
- Component denotes the name of each component
- Frequency denotes the number of arms where the component was observed
- A denotes the number of studies in which the corresponding component was included in all arms
- A_percent denotes the percentage of studies in which the corresponding component was included in all arms
- B denotes the number of studies in which the corresponding component was included in at least one arm
- B_percent denotes the percentage of studies in which the corresponding component was included in at least one arm
- C denotes the number of studies in which the corresponding component was not included in any arm
- C_percent denotes the percentage of studies in which the corresponding component was not included in any arm
- A.B denotes the ratio of columns A and B.
heatmat: An object of class ggplot that visualizes item crosstable. Diagonal elements refer to the components, and in parentheses the proportion of study arms including that component is provided. Off-diagonal elements refer to the frequency of the components' combinations, and in parentheses the proportion of study arms with both components, out of those study arms that have the component in the row, is provided. The intensity of the color is proportional to the relative frequency of the component combination.

Note
The function can be applied only in network meta-analysis models that contain multi-component interventions.

Examples
data(nmaMACE)
compdesc(model = nmaMACE)

compGraph: Components Network Graph

Description
The Components Network Graph is meant to visualize the frequency of components' combinations found in the network.

Usage
compGraph(model, sep = "+", mostF = 5, excl = NULL,
  title = "Most frequent combinations of components",
  print_legend = TRUE, size_legend = 0.825)

Arguments
model: An object of class netmeta.
sep: A single character that defines the separator between intervention components.
mostF: Number of most frequent combinations of the network.
excl: A character vector that specifies the combinations to be excluded from the plot.
title: A single character that specifies the overall title of the plot.
print_legend: logical. If TRUE the legend is printed.
size_legend: Size of the legend.

Details
The function resembles a network plot where nodes represent the individual components found in the network and edges represent the combinations of components found in at least one treatment arm of the trials included in the network meta-analysis model. Each edge's color represents one of the unique interventions (components' combinations) found in the network of interventions. Edges' thickness indicates the frequency with which each intervention (combination of components) was observed in the network (the number of arms in which the combination was assigned).
The number of the most frequent combinations can be modified via the argument mostF. By default the function plots the five most frequent components' combinations found in the network.

Value
Returns (invisibly) a qgraph object.

Note
The function can be applied only in network meta-analysis models that contain multi-component interventions.

Examples
data(nmaMACE)
compGraph(model = nmaMACE)

denscomp: Components Density Plot

Description
The function creates density plots in order to explore the efficacy of the components.

Usage
denscomp(model, sep = "+", combination, violin = FALSE, random = TRUE, z_value = FALSE)

Arguments
model: An object of class netmeta.
sep: A single character that defines the separator between intervention components.
combination: A character vector that contains the component combinations of interest.
violin: logical. If TRUE the density is visualized via violins instead of density plots.
random: logical. If TRUE the random-effects NMA model is used instead of the fixed-effect NMA model.
z_value: logical. If TRUE z-values are used instead of intervention effects.

Details
If the length of the argument combination is 1, the function creates two density plots. The first is produced from the interventions that include the component combination of interest (specified by the argument combination), while the second is produced from the interventions that do not include that component combination. If the argument combination includes more than one element, the number of densities equals the length of the argument combination, and each density is based on the interventions that include the corresponding component combination. For example, if combination = c("A + B", "B + C", "A"), the function will produce 3 density plots, based on the interventions that include components "A" and "B", the interventions that include components "B" and "C", and the interventions that include component "A", respectively.
The function by default uses the interventions' relative effects (z_value = FALSE) obtained from the random-effects network meta-analysis (NMA) model (random = TRUE). It can also be adjusted to use the interventions' z-values instead of the relative effects, by setting z_value = TRUE.

Value
An object of class ggplot.

Note
The efficacy of the components can also be explored via violin plots instead of density plots, by setting violin = TRUE. In the case of dichotomous outcomes, the log-scale is used. The function can be applied only in network meta-analysis models that contain multi-component interventions.

Examples
data(nmaMACE)
denscomp(model = nmaMACE, combination = "C")

heatcomp: Components Heat Plot

Description
The function creates a heat plot based on the two-by-two component combinations, obtained from the network meta-analysis (NMA) model.

Usage
heatcomp(model, sep = "+", median = TRUE, random = TRUE, z_value = FALSE, freq = TRUE, legend_name = NULL)

Arguments
model: An object of class netmeta.
sep: A single character that defines the separator between intervention components.
median: logical. If TRUE the median is used instead of the mean as a summary measure.
random: logical. If TRUE the random-effects NMA model is used instead of the fixed-effect NMA model.
z_value: logical. If TRUE z-values are used instead of intervention effects.
freq: logical. If TRUE the frequencies of component combinations are printed.
legend_name: A single character that specifies the title of the legend.

Details
Diagonal elements refer to components, while off-diagonal elements refer to components' combinations. Each element summarizes by default the NMA relative effects (z_value = FALSE) of the interventions that include the corresponding component combination. Combinations that were not observed in the NMA model are denoted by the letter "X". The frequency of component combinations observed in the NMA is printed by default (freq = TRUE). As a summary measure, the median is used by default (median = TRUE). The magnitude of each relative effect is reflected by the color's intensity: estimates close to zero are denoted by white color and indicate a small magnitude of the corresponding component combination, while deep green and red colors indicate a large magnitude of the corresponding component combination. The outcome's nature (beneficial or harmful) is defined in the netmeta model.
The function can also be adjusted to include z-scores by setting the argument z_value = TRUE. Z-scores quantify the strength of statistical evidence. Thus, dark green (or red) indicates strong statistical evidence that the corresponding component (or combination of components) performs better (or worse) than the reference intervention.

Value
An object of class ggplot.

Note
In the case where the NMA relative effects are used, the uncertainty of the NMA estimates is reflected by the size of the grey boxes: the bigger the box, the more precise the estimate. By setting median = FALSE, the mean is used instead of the median as a summary measure. The function can be applied only in network meta-analysis models that contain multi-component interventions.

Examples
data(nmaMACE)
heatcomp(model = nmaMACE)

loccos: Leaving One Component Combination Out Scatter Plot

Description
Based on the network meta-analysis (NMA) estimates, the function explores whether a set of components has a positive or a negative impact on the outcome, by creating a scatter plot based on the set of interventions that differ by a specific set of components.

Usage
loccos(model, sep = "+", combination = NULL, random = TRUE, z_value = FALSE, histogram = TRUE, histogram.color = "blue")

Arguments
model: An object of class netmeta.
sep: A single character that defines the separator between intervention components.
combination: A single character that specifies the component combination of interest.
random: logical.
If TRUE the random-effects NMA model is used instead of the fixed-effect NMA model.
z_value          logical. If TRUE z-values are used instead of intervention effects.
histogram        logical. If TRUE histograms are added to the plot.
histogram.color  A single character that specifies the color of the histogram. See ggMarginal for more details.

Details
The y-axis represents the intervention’s effect when the component combination is not included in the intervention, while the x-axis represents the intervention’s effect when it is included. The line y = x splits the plot into two parts. For a beneficial outcome, dots above the line indicate that the inclusion of the component combination hampers the intervention’s efficacy, while dots below the line indicate that the inclusion of the component combination increases the intervention’s efficacy. The opposite holds for harmful outcomes.

The component combination of interest is specified by the argument combination. For example, if combination = "A", the function plots all the interventions that differ by the component "A". If combination = NULL, all interventions that differ by one component are plotted.

The function by default uses the NMA relative-effect estimates, but it can be adjusted to use the z-values by setting the argument z_value = TRUE. Histograms for the nodes that include and do not include the component combination can be added to the scatter plot, by setting the argument histogram = TRUE.

Value
An object of class ggplot.

Note
In the case of dichotomous outcomes, the log-scale is used for both axes. Also, the function can be applied only to network meta-analysis models that contain multi-component interventions.

Examples
data(nmaMACE)
loccos(model = nmaMACE, combination = c("B"))

MACE    Major Adverse Cardiovascular Event

Description
An artificial network meta-analysis dataset comparing the effectiveness of a number of interventions for major adverse cardiovascular events.
Usage
MACE

Format
A data.frame with the following columns:
Study                           The study name of the trial
treat1, treat2, treat3, treat4  Treatment names of arms
n1, n2, n3, n4                  Total number of participants in arms
event1, event2, event3, event4  Total number of events in arms

nmaMACE    Network Meta-Analysis of Major Adverse Cardiovascular Event

Description
An artificial network meta-analysis (of class netmeta) comparing the effectiveness of a number of interventions for major adverse cardiovascular events.

Usage
nmaMACE

Format
An object of class netmeta of length 157.

rankheatplot    Components Rank Heat Plot

Description
Rank heat plot summarizes the components’ p-scores for multiple outcomes.

Usage
rankheatplot(
  model,
  sep = "+",
  median = TRUE,
  random = TRUE,
  outcomeNames = NULL,
  cex_components = NULL,
  cex_values = NULL,
  cex_outcomes = NULL
)

Arguments
model           A list of netmeta models.
sep             A single character that defines the separator between the interventions’ components.
median          logical. If TRUE the median is used as a summary measure instead of the mean.
random          A logical vector that specifies the NMA model for each outcome. If TRUE the random-effects NMA model is used instead of the fixed-effect NMA model.
outcomeNames    A character vector that specifies the names of the outcomes.
cex_components  Font size of components’ names.
cex_values      Font size of p-scores.
cex_outcomes    Font size of outcomes’ names.

Details
The function creates a rank heat plot, where the number of circles depends on the number of outcomes. Each circle is divided by the total number of components, and each sector is colored according to the corresponding component’s p-score. Components’ p-scores are summarized by using either the median (median = TRUE) or the mean (median = FALSE) of the p-scores obtained from the interventions that include the corresponding component. The sectors’ colors reflect the magnitude of the components’ p-scores.
Red color indicates a low p-score (close to zero), while green color indicates values close to 1. Interventions’ p-scores are obtained from the network meta-analysis (NMA) model. By default the random-effects NMA model is used for each outcome (random = TRUE).

Value
Returns (invisibly) a rank heat plot.

Note
The function can be applied only to network meta-analysis models that contain multi-component interventions.

Examples
# Artificial data set
t1 <- c("A", "B", "C", "A+B", "A+C", "B+C", "A")
t2 <- c("C", "A", "A+C", "B+C", "A", "B", "B+C")
TE1 <- c(2.12, 3.24, 5.65, -0.60, 0.13, 0.66, 3.28)
TE2 <- c(4.69, 2.67, 2.73, -3.41, 1.79, 2.93, 2.51)
seTE1 <- rep(0.1, 7)
seTE2 <- rep(0.2, 7)
study <- paste0("study_", 1:7)
data1 <- data.frame(
  "TE" = TE1, "seTE" = seTE1, "treat1" = t1, "treat2" = t2,
  "studlab" = study, stringsAsFactors = FALSE
)
data2 <- data.frame(
  "TE" = TE2, "seTE" = seTE2, "treat1" = t1, "treat2" = t2,
  "studlab" = study, stringsAsFactors = FALSE
)
# Network meta-analysis models
net1 <- netmeta::netmeta(
  TE = TE, seTE = seTE, studlab = studlab, treat1 = treat1,
  treat2 = treat2, data = data1, ref = "A"
)
net2 <- netmeta::netmeta(
  TE = TE, seTE = seTE, studlab = studlab, treat1 = treat1,
  treat2 = treat2, data = data2, ref = "A"
)
# Rank heat plot
rankheatplot(model = list(net1, net2))

specc    Specific Component Combination violin plots

Description
Based on the network meta-analysis (NMA) estimates, the function produces violin plots from interventions that include the component combinations of interest.

Usage
specc(
  model,
  sep = "+",
  combination = NULL,
  components_number = FALSE,
  groups = NULL,
  random = TRUE,
  z_value = FALSE,
  prop_size = TRUE,
  fill_violin = "lightblue",
  color_violin = "lightblue",
  adj_violin = 1,
  width_violin = 1,
  boxplot = TRUE,
  width_boxplot = 0.5,
  errorbar_type = 5,
  dots = TRUE,
  jitter_shape = 16,
  jitter_position = 0.01,
  values = TRUE
)

Arguments
model  An object of class netmeta.
sep                A single character that defines the separator between the interventions’ components.
combination        A character vector that specifies the component combinations of interest.
components_number  logical. If TRUE the violins are created based on the number of components included in the interventions.
groups             A character vector that contains the clusters of the number of components. Elements of the vector must be integer numbers (e.g. 5 or "5"), or range values (e.g. "3-4"), or in the "xx+" format (e.g. "5+").
random             logical. If TRUE the random-effects NMA model is used instead of the fixed-effect NMA model.
z_value            logical. If TRUE z-values are used instead of intervention effects.
prop_size          logical. If TRUE, in the case where z_value == FALSE, the size of the dots is proportional to the precision of the estimates.
fill_violin        fill color of the violin. See geom_violin for more details.
color_violin       color of the violin. See geom_violin for more details.
adj_violin         adjustment of the violin. See geom_violin for more details.
width_violin       width of the violin. See geom_violin for more details.
boxplot            logical. If TRUE boxplots are plotted.
width_boxplot      width of the boxplot. See geom_boxplot for more details.
errorbar_type      boxplot’s line type. See stat_boxplot for more details.
dots               logical. If TRUE data points are plotted.
jitter_shape       jitter shape. See geom_jitter for more details.
jitter_position    jitter position. See geom_jitter for more details.
values             logical. If TRUE the median value of each violin is printed.

Details
By default the function creates a violin for each component of the network (combination = NULL). Each violin visualizes the distribution of the effect estimates obtained from the interventions that include the corresponding component. Combinations of interest are specified by the argument combination. For example, if combination = c("A", "A + B"), two violin plots are produced.
The first one is based on the interventions that contain the component "A", and the second one on the interventions that contain both components A and B.

By setting the argument components_number = TRUE, the behavior of the intervention’s effect as the number of components increases is explored, by producing violins based on the number of components included in the interventions. If the number of components included in an intervention ranges between 1 and 3, then 3 violins will be produced in total. The violins will be based on the interventions that include one component, two components, and three components, respectively. The number of components can also be categorized into groups by the argument groups. For example, if components_number = TRUE and groups = c("1-3", 4, "5+"), 3 violins will be created: one for the interventions that contain 1 to 3 components, one for the interventions that contain 4 components, and one for those that contain 5 or more components.

The function by default uses the NMA relative effects, but it can be adjusted to use the intervention’s z-scores by setting z_value = TRUE. In the case where the NMA relative effects are used, the size of the dots reflects the precision of the estimates. Larger dots indicate more precise NMA estimates.

Value
An object of class ggplot.

Note
In the case of dichotomous outcomes, the log-scale is used on the y-axis. Also, the function can be applied only to network meta-analysis models that contain multi-component interventions.

Examples
data(nmaMACE)
specc(model = nmaMACE, combination = c("B", "C", "B + C"))

watercomp    Waterfall plot

Description
The function produces a waterfall plot based on the z-values from the interventions that differ by one specific component combination.

Usage
watercomp(model, sep = "+", combination = NULL, z_value = FALSE, random = TRUE)

Arguments
model  An object of class netmeta.
sep    A single character that defines the separator between the interventions’ components.
combination  A single character that specifies the component combination of interest.
z_value      logical. If TRUE z-values are used instead of intervention effects.
random       logical. If TRUE z-values are obtained from the random-effects NMA model instead of the fixed-effect NMA model.

Details
Based on the intervention’s z-values (the default choice) obtained from the network meta-analysis (NMA) model, the function visualizes all the observed interventions that differ by one specific component combination, in order to explore whether the one extra component combination in each comparison has a positive or negative impact. Bars above or below the y = 0 line indicate that the inclusion of the extra specific component combination has an impact on the intervention. The direction of the impact (positive or negative) depends on the outcome’s nature (beneficial or harmful). The combination of interest is defined by the argument combination. By default the function visualizes the interventions that differ by one component (combination = NULL). If for example combination = "A+B", the function plots the interventions that differ by "A+B".

Value
An object of class ggplot.

Note
In the case of dichotomous outcomes, the log-scale is used on the y-axis. Also, the function can be applied only to network meta-analysis models that contain multi-component interventions.

Examples
data(nmaMACE)
watercomp(nmaMACE)
# Table of Contents

* Introduction
* Block Bindings
* Strings and Regular Expressions
* Functions
* Expanded Object Functionality
* Destructuring for Easier Data Access
* Symbols and Symbol Properties
* Sets and Maps
* Iterators and Generators
* Introducing JavaScript Classes
* Improved Array Capabilities
* Promises and Asynchronous Programming
* Proxies and the Reflection API
  * The Array Problem
  * What are Proxies and Reflection?
  * Creating a Simple Proxy
  * Validating Properties Using the `set` Trap
  * Object Shape Validation Using the `get` Trap
  * Hiding Property Existence Using the `has` Trap
  * Preventing Property Deletion with the `deleteProperty` Trap
  * Prototype Proxy Traps
  * Object Extensibility Traps
  * Property Descriptor Traps
  * The `ownKeys` Trap
  * Function Proxies with the `apply` and `construct` Traps
  * Revocable Proxies
  * Solving the Array Problem
  * Using a Proxy as a Prototype
  * Summary
* Encapsulating Code With Modules
* Appendix A: Smaller Changes
* Appendix B: Understanding ECMAScript 7 (2016)

## Introduction

The JavaScript core language features are defined in a standard called ECMA-262. The language defined in this standard is called ECMAScript. What you know as JavaScript in browsers and Node.js is actually a superset of ECMAScript. Browsers and Node.js add more functionality through additional objects and methods, but the core of the language remains as defined in ECMAScript. The ongoing development of ECMA-262 is vital to the success of JavaScript as a whole, and this book covers the changes brought about by the most recent major update to the language: ECMAScript 6.

### The Road to ECMAScript 6

In 2007, JavaScript was at a crossroads. The popularity of Ajax was ushering in a new age of dynamic web applications, while JavaScript hadn’t changed since the third edition of ECMA-262 was published in 1999.
TC-39, the committee responsible for driving the ECMAScript development process, put together a large draft specification for ECMAScript 4. ECMAScript 4 was massive in scope, introducing changes both small and large to the language. Updated features included new syntax, modules, classes, classical inheritance, private object members, optional type annotations, and more.

The scope of the ECMAScript 4 changes caused a rift to form in TC-39, with some members feeling that the fourth edition was trying to accomplish too much. A group of leaders from Yahoo, Google, and Microsoft created an alternate proposal for the next version of ECMAScript that they initially called ECMAScript 3.1. The “3.1” was intended to show that this was an incremental change to the existing standard.

ECMAScript 3.1 introduced very few syntax changes, instead focusing on property attributes, native JSON support, and adding methods to already-existing objects. Although there was an early attempt to reconcile ECMAScript 3.1 and ECMAScript 4, this ultimately failed as the two camps had difficulty reconciling the very different perspectives on how the language should grow.

In 2008, <NAME>, the creator of JavaScript, announced that TC-39 would focus its efforts on standardizing ECMAScript 3.1. They would table the major syntax and feature changes of ECMAScript 4 until after the next version of ECMAScript was standardized, and all members of the committee would work to bring the best pieces of ECMAScript 3.1 and 4 together after that point into an effort initially nicknamed ECMAScript Harmony.

ECMAScript 3.1 was eventually standardized as the fifth edition of ECMA-262, also described as ECMAScript 5. The committee never released an ECMAScript 4 standard to avoid confusion with the now-defunct effort of the same name. Work then began on ECMAScript Harmony, with ECMAScript 6 being the first standard released in this new “harmonious” spirit.
ECMAScript 6 reached feature complete status in 2015 and was formally dubbed “ECMAScript 2015.” (But this text still refers to it as ECMAScript 6, the name most familiar to developers.) The features vary widely from completely new objects and patterns to syntax changes to new methods on existing objects. The exciting thing about ECMAScript 6 is that all of its changes are geared toward solving problems that developers actually face.

### About This Book

A good understanding of ECMAScript 6 features is key for all JavaScript developers going forward. The language features introduced in ECMAScript 6 represent the foundation upon which JavaScript applications will be built for the foreseeable future. That’s where this book comes in. My hope is that you’ll read this book to learn about ECMAScript 6 features so that you’ll be ready to start using them as soon as you need to.

# Browser and Node.js Compatibility

Many JavaScript environments, such as web browsers and Node.js, are actively working on implementing ECMAScript 6. This book doesn’t attempt to address the inconsistencies between implementations and instead focuses on what the specification defines as the correct behavior. As such, it’s possible that your JavaScript environment may not conform to the behavior described in this book.

# Who This Book is For

This book is intended as a guide for those who are already familiar with JavaScript and ECMAScript 5. While a deep understanding of the language isn’t necessary to use this book, it will help you understand the differences between ECMAScript 5 and 6. In particular, this book is aimed at intermediate-to-advanced JavaScript developers programming for a browser or Node.js environment who want to learn about the latest developments in the language.

This book is not for beginners who have never written JavaScript. You will need to have a good basic understanding of the language to make use of this book.
# Overview

Each of this book’s thirteen chapters covers a different aspect of ECMAScript 6. Many chapters start by discussing problems that ECMAScript 6 changes were made to solve, to give you a broader context for those changes, and all chapters include code examples to help you learn new syntax and concepts.

Chapter 1: How Block Bindings Work talks about `let` and `const`, the block-level replacement for `var`.

Chapter 2: Strings and Regular Expressions covers additional functionality for string manipulation and inspection as well as the introduction of template strings.

Chapter 3: Functions in ECMAScript 6 discusses the various changes to functions. This includes the arrow function form, default parameters, rest parameters, and more.

Chapter 4: Expanded Object Functionality explains the changes to how objects are created, modified, and used. Topics include changes to object literal syntax, and new reflection methods.

Chapter 5: Destructuring for Easier Data Access introduces object and array destructuring, which allow you to decompose objects and arrays using a concise syntax.

Chapter 6: Symbols and Symbol Properties introduces the concept of symbols, a new way to define properties. Symbols are a new primitive type that can be used to obscure (but not hide) object properties and methods.

Chapter 7: Sets and Maps details the new collection types of `Set`, `WeakSet`, `Map`, and `WeakMap`. These types expand on the usefulness of arrays by adding semantics, de-duping, and memory management designed specifically for JavaScript.

Chapter 8: Iterators and Generators discusses the addition of iterators and generators to the language. These features allow you to work with collections of data in powerful ways that were not possible in previous versions of JavaScript.

Chapter 9: Introducing JavaScript Classes introduces the first formal concept of classes in JavaScript.
Often a point of confusion for those coming from other languages, the addition of class syntax in JavaScript makes the language more approachable to others and more concise for enthusiasts.

Chapter 10: Improved Array Capabilities details the changes to native arrays and the interesting new ways they can be used in JavaScript.

Chapter 11: Promises and Asynchronous Programming introduces promises as a new part of the language. Promises were a grassroots effort that eventually took off and gained in popularity due to extensive library support. ECMAScript 6 formalizes promises and makes them available by default.

Chapter 12: Proxies and the Reflection API introduces the formalized reflection API for JavaScript and the new proxy object that allows you to intercept every operation performed on an object. Proxies give developers unprecedented control over objects and, as such, unlimited possibilities for defining new interaction patterns.

Chapter 13: Encapsulating Code with Modules details the official module format for JavaScript. The intent is that these modules can replace the numerous ad-hoc module definition formats that have appeared over the years.

Appendix A: Smaller ECMAScript 6 Changes covers other changes implemented in ECMAScript 6 that you’ll use less frequently or that didn’t quite fit into the broader major topics covered in each chapter.

Appendix B: Understanding ECMAScript 7 (2016) describes the two additions to the standard that were implemented for ECMAScript 7, which didn’t impact JavaScript nearly as much as ECMAScript 6.
# Conventions Used

The following typographical conventions are used in this book:

* Italics introduces new terms
* `Constant width` indicates a piece of code or filename

Additionally, longer code examples are contained in constant width code blocks. Within a code block, comments to the right of a `console.log()` statement indicate the output you’ll see in the browser or Node.js console when the code is executed. If a line of code in a code block throws an error, this is also indicated to the right of the code.

# Help and Support

You can file issues, suggest changes, and open pull requests against this book by visiting: https://github.com/nzakas/understandinges6

If you have questions as you read this book, please send a message to my mailing list: http://groups.google.com/group/zakasbooks.

### Acknowledgments

Thanks to <NAME>, <NAME>, and everyone at No Starch Press for their support and help with this book. Their understanding and patience as my productivity slowed to a crawl during my extended illness is something I will never forget.

I’m grateful for the watchful eye of <NAME> as tech editor and to Dr. <NAME> for his feedback and several conversations that helped to clarify some of the concepts discussed in this book.

Thanks to everyone who submitted fixes to the version of this book that is hosted on GitHub: ShMcK, <NAME>, <NAME>, blacktail, <NAME>, Lonniebiz, <NAME>, jakub-g, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, kavun, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, alexyans, robertd, 404, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Arjunkumar, <NAME>, <NAME>, <NAME>, <NAME>, Mallory, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and Shidhin.

Also, thanks to everyone who supported this book on Patreon: Casey Visco.
## Block Bindings

Traditionally, the way variable declarations work has been one tricky part of programming in JavaScript. In most C-based languages, variables (or bindings) are created at the spot where the declaration occurs. In JavaScript, however, this is not the case. Where your variables are actually created depends on how you declare them, and ECMAScript 6 offers options to make controlling scope easier. This chapter demonstrates why classic `var` declarations can be confusing, introduces block-level bindings in ECMAScript 6, and then offers some best practices for using them.

### Var Declarations and Hoisting

Variable declarations using `var` are treated as if they are at the top of the function (or global scope, if declared outside of a function) regardless of where the actual declaration occurs; this is called hoisting. If you are unfamiliar with JavaScript, you might expect a variable declared inside an `if` block to be created only if the condition evaluates to true. In fact, the variable is created regardless: behind the scenes, the JavaScript engine hoists the declaration to the top of the function, while the initialization remains in the same spot. That means the variable is actually still accessible from within the `else` clause. If accessed from there, the variable would just have a value of `undefined` because it hasn’t been initialized.

It often takes new JavaScript developers some time to get used to declaration hoisting, and misunderstanding this unique behavior can end up causing bugs. For this reason, ECMAScript 6 introduces block level scoping options to make controlling a variable’s lifecycle a little more powerful.

### Block-Level Declarations

Block-level declarations are those that declare variables that are inaccessible outside of a given block scope.
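Before looking at block scopes, the hoisting behavior just described can be sketched with a hypothetical `getValue` function (the name and values are illustrative, consistent with the chapter's description):

```javascript
// The `var` declaration is hoisted to the top of the function, so `value`
// exists (as undefined) even in the `else` branch.
function getValue(condition) {
    if (condition) {
        var value = "blue";
        return value;
    } else {
        // `value` exists here but has not been initialized.
        return typeof value;   // "undefined", not a ReferenceError
    }
}
```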
Block scopes, also called lexical scopes, are created:

* Inside of a function
* Inside of a block (indicated by the `{` and `}` characters)

Block scoping is how many C-based languages work, and the introduction of block-level declarations in ECMAScript 6 is intended to bring that same flexibility (and uniformity) to JavaScript.

# Let Declarations

The `let` declaration syntax is the same as the syntax for `var`. You can basically replace `var` with `let` to declare a variable, but limit the variable’s scope to only the current code block (there are a few other subtle differences discussed a bit later, as well). Since `let` declarations are not hoisted to the top of the enclosing block, you may want to always place `let` declarations first in the block, so that they are available to the entire block.

A version of the `getValue` function that declares `value` with `let` instead of `var` behaves much closer to how you’d expect it to in other C-based languages: the declaration isn’t hoisted to the top of the function definition, and the variable `value` is no longer accessible once execution flows out of the `if` block. If `condition` evaluates to false, then `value` is never declared or initialized.

# No Redeclaration

If an identifier has already been defined in a scope, then using the identifier in a `let` declaration inside that scope causes an error to be thrown. For example, if `count` is declared twice in the same scope, once with `var` and once with `let`: because `let` will not redefine an identifier that already exists in the same scope, the `let` declaration will throw an error.
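A sketch of both behaviors, consistent with the descriptions above (names and values are illustrative; the redeclaration snippet is compiled via `new Function` so the `SyntaxError` can be caught at runtime):

```javascript
// `value` is scoped to the `if` block and never exists when condition is false.
function getValue(condition) {
    if (condition) {
        let value = "blue";
        return value;
    } else {
        // `value` does not exist here.
        return null;
    }
}

// Redeclaring an identifier with `let` in the same scope is a SyntaxError.
let redeclarationFails = false;
try {
    new Function("var count = 30; let count = 40;");
} catch (e) {
    redeclarationFails = e instanceof SyntaxError;
}
```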
On the other hand, no error is thrown if a `let` declaration creates a new variable with the same name as a variable in its containing scope. Such a declaration doesn’t throw an error because it creates a new variable (a new `count` within an `if` statement, say) instead of redeclaring the `count` in the surrounding block. Inside the `if` block, this new variable shadows the outer `count`, preventing access to it until execution leaves the block.

# Constant Declarations

You can also define variables in ECMAScript 6 with the `const` declaration syntax. Variables declared using `const` are considered constants, meaning their values cannot be changed once set. For this reason, every `const` variable must be initialized on declaration. A `maxItems` variable declared with an initializer works without a problem, whereas declaring a `name` variable with `const` but no initializer would cause a syntax error if you tried to run the program, because `name` is not initialized.

# Constants vs Let Declarations

Constants, like `let` declarations, are block-level declarations. That means constants are no longer accessible once execution flows out of the block in which they were declared, and declarations are not hoisted. If the constant `maxItems` is declared within an `if` statement, then once the statement finishes executing, `maxItems` is not accessible outside of that block.

In another similarity to `let`, a `const` declaration throws an error when made with an identifier for an already-defined variable in the same scope. It doesn’t matter if that variable was declared using `var` (for global or function scope) or `let` (for block scope). Two `const` declarations that would each be valid alone will both fail when preceded by `var` and `let` declarations of the same identifiers.
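The shadowing and initialization rules above can be sketched as follows (values are illustrative; the missing-initializer case is compiled via `new Function` so the `SyntaxError` can be caught):

```javascript
// Shadowing: a new `count` binding inside the block, no error.
var count = 30;
if (true) {
    let count = 40;   // shadows the outer `count` only inside this block
}
// Out here, `count` is still 30.

// A `const` must be initialized at declaration...
const maxItems = 30;  // fine

// ...and omitting the initializer is a SyntaxError:
let missingInitializerFails = false;
try {
    new Function("const name;");
} catch (e) {
    missingInitializerFails = e instanceof SyntaxError;
}
```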
Despite those similarities, there is one big difference between `let` and `const` to remember: attempting to assign a new value to a previously defined constant will throw an error, in both strict and non-strict modes. Much like constants in other languages, the `maxItems` variable can’t be assigned a new value later on. However, unlike constants in other languages, the value a constant holds may be modified if it is an object.

# Declaring Objects with Const

A `const` declaration prevents modification of the binding and not of the value itself. That means `const` declarations for objects do not prevent modification of those objects. For example, if the binding `person` is created with an initial value of an object with one property, it’s possible to change `person.name` without causing an error, because this changes what `person` contains and doesn’t change the value that `person` is bound to. Only when code attempts to assign a value to `person` itself (thus attempting to change the binding) will an error be thrown. This subtlety in how `const` works with objects is easy to misunderstand. Just remember: `const` prevents modification of the binding, not modification of the bound value.

# The Temporal Dead Zone

A variable declared with either `let` or `const` cannot be accessed until after the declaration. Attempting to do so results in a reference error, even when using normally safe operations such as the `typeof` operation. If a variable `value` is defined and initialized using `let`, but a line before the declaration applies `typeof` to it, that earlier line throws an error and the `let` statement is never executed. The issue is that `value` exists in what the JavaScript community has dubbed the temporal dead zone (TDZ). The TDZ is never named explicitly in the ECMAScript specification, but the term is often used to describe why `let` and `const` declarations are not accessible before their declaration.
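Both behaviors can be sketched in a few lines (the property values are illustrative, following the names used in the text):

```javascript
// `const` protects the binding, not the bound value:
const person = { name: "Nicholas" };
person.name = "Greg";            // fine: mutates the object

let rebindFails = false;
try {
    person = { name: "Greg" };   // reassigning the binding throws at runtime
} catch (e) {
    rebindFails = e instanceof TypeError;
}

// The temporal dead zone: even `typeof` throws before the declaration.
let typeofInTDZFails = false;
try {
    typeof value;                // ReferenceError: `value` is in the TDZ
} catch (e) {
    typeofInTDZFails = e instanceof ReferenceError;
}
let value = "blue";
```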
This section covers some subtleties of declaration placement that the TDZ causes, and although the examples shown all use `let`, note that the same information applies to `const`.

When a JavaScript engine looks through an upcoming block and finds a variable declaration, it either hoists the declaration to the top of the function or global scope (for `var`) or places the declaration in the TDZ (for `let` and `const`). Any attempt to access a variable in the TDZ results in a runtime error. That variable is only removed from the TDZ, and therefore safe to use, once execution flows to the variable declaration.

This is true anytime you attempt to use a variable declared with `let` or `const` before it’s been defined. As the previous example demonstrated, this even applies to the normally safe `typeof` operator. You can, however, use `typeof` on a variable outside of the block where that variable is declared, though it may not give the results you’re after. If the `typeof` operation executes before the block in which `value` is declared, `value` isn’t in the TDZ because the operation occurs outside of that block. That means there is no `value` binding, and `typeof` simply returns `"undefined"`.

The TDZ is just one unique aspect of block bindings. Another unique aspect has to do with their use inside of loops.

### Block Binding in Loops

Perhaps one area where developers most want block level scoping of variables is within `for` loops, where the throwaway counter variable is meant to be used only inside the loop. For instance, it’s not uncommon to see a `for (var i = 0; i < 10; i++)` loop followed by code that uses `i`. In other languages, where block level scoping is the default, only the `for` loop should have access to the `i` variable. In JavaScript, however, the variable `i` is still accessible after the loop is completed because the `var` declaration gets hoisted.
Using `let` instead should give the intended behavior: the variable `i` then exists only within the `for` loop, and once the loop is complete, the variable is no longer accessible elsewhere.

# Functions in Loops

The characteristics of `var` have long made creating functions inside of loops problematic, because the loop variables are accessible from outside the scope of the loop. Consider a loop that pushes ten functions into an array, each logging `i`. You might ordinarily expect calling those functions to print the numbers 0 to 9, but they print the number 10 ten times in a row. That’s because `i` is shared across each iteration of the loop, meaning the functions created inside the loop all hold a reference to the same variable. The variable `i` has a value of `10` once the loop completes, and so when `console.log(i)` is called, that value prints each time.

To fix this problem, developers use immediately-invoked function expressions (IIFEs) inside of loops to force a new copy of the variable they want to iterate over to be created. The `i` variable is passed to the IIFE, which creates its own copy and stores it as `value`. This is the value used by the function for that iteration, so calling each function returns the expected value as the loop counts up from 0 to 9. Fortunately, block-level binding with `let` and `const` in ECMAScript 6 can simplify this loop for you.

# Let Declarations in Loops

A `let` declaration simplifies loops by effectively mimicking what the IIFE does in the previous example. On each iteration, the loop creates a new variable and initializes it to the value of the variable with the same name from the previous iteration. That means you can omit the IIFE altogether and get the results you expect. The resulting loop works exactly like the loop that used `var` and an IIFE but is, arguably, cleaner.
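The three loop patterns discussed above can be sketched side by side, adapted to return values rather than `console.log` them so the difference is easy to check:

```javascript
// With `var`, all ten functions share the same `i`, which is 10 after the loop:
var funcs = [];
for (var i = 0; i < 10; i++) {
    funcs.push(function() { return i; });
}

// The classic IIFE workaround copies `i` into a per-iteration `value`:
var funcsIife = [];
for (var j = 0; j < 10; j++) {
    funcsIife.push((function(value) {
        return function() { return value; };
    })(j));
}

// With `let`, each iteration gets its own binding, so no IIFE is needed:
var funcsLet = [];
for (let k = 0; k < 10; k++) {
    funcsLet.push(function() { return k; });
}
```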
The `let` declaration creates a new variable `i` each time through the loop, so each function created inside the loop gets its own copy of `i` . Each copy of `i` has the value it was assigned at the beginning of the loop iteration in which it was created. The same is true for `for-in` and `for-of` loops, as shown here: In this example, the `for-in` loop shows the same behavior as the `for` loop. Each time through the loop, a new `key` binding is created, and so each function has its own copy of the `key` variable. The result is that each function outputs a different value. If `var` were used to declare `key` , all functions would output `"c"` . # Constant Declarations in Loops The ECMAScript 6 specification doesn’t explicitly disallow `const` declarations in loops; however, there are different behaviors based on the type of loop you’re using. For a normal `for` loop, you can use `const` in the initializer, but the loop will throw an error when you attempt to change the value. For example: In this code, the `i` variable is declared as a constant. The first iteration of the loop, where `i` is 0, executes successfully. An error is thrown when `i++` executes because it’s attempting to modify a constant. As such, you can only use `const` to declare a variable in the loop initializer if you are not modifying that variable. When used in a `for-in` or `for-of` loop, on the other hand, a `const` variable behaves the same as a `let` variable. So the following should not cause an error: This code functions almost exactly the same as the second example in the “Let Declarations in Loops” section. The only difference is that the value of `key` cannot be changed inside the loop. The `for-in` and `for-of` loops work with `const` because the loop initializer creates a new binding on each iteration through the loop rather than attempting to modify the value of an existing binding (as was the case with the previous example using `for` instead of `for-in` ).
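A sketch of `const` in a `for-in` loop, consistent with the description above (the object keys are illustrative, and the functions return `key` so the behavior is easy to verify):

```javascript
var funcs = [],
    object = {
        a: true,
        b: true,
        c: true
    };

// no error: the loop creates a new `key` binding on each iteration
for (const key in object) {
    funcs.push(function() {
        return key;
    });
}

funcs.forEach(function(func) {
    console.log(func());    // "a", then "b", then "c"
});
```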
### Global Block Bindings Another way in which `let` and `const` are different from `var` is in their global scope behavior. When `var` is used in the global scope, it creates a new global variable, which is a property on the global object ( `window` in browsers). That means you can accidentally overwrite an existing global using `var` , such as: Even though the `RegExp` global is defined on `window` , it is not safe from being overwritten by a `var` declaration. This example declares a new global variable `RegExp` that overwrites the original. Similarly, `ncz` is defined as a global variable and is immediately available as a property on `window` . This is the way JavaScript has always worked. If you instead use `let` or `const` in the global scope, a new binding is created in the global scope but no property is added to the global object. That also means you cannot overwrite a global variable using `let` or `const` ; you can only shadow it. Here’s an example: Here, a new `let` declaration for `RegExp` creates a binding that shadows the global `RegExp` . That means `window.RegExp` and `RegExp` are not the same, so there is no disruption to the global scope. Also, the `const` declaration for `ncz` creates a binding but does not create a property on the global object. This capability makes `let` and `const` a lot safer to use in the global scope when you don’t want to create properties on the global object. ### Emerging Best Practices for Block Bindings While ECMAScript 6 was in development, there was a widespread belief that you should use `let` by default instead of `var` for variable declarations. For many JavaScript developers, `let` behaves exactly the way they thought `var` should have behaved, and so the direct replacement makes logical sense. In this case, you would use `const` for variables that needed modification protection.
However, as more developers migrated to ECMAScript 6, an alternate approach gained popularity: use `const` by default and only use `let` when you know a variable’s value needs to change. The rationale is that most variables should not change their value after initialization because unexpected value changes are a source of bugs. This idea has a significant amount of traction and is worth exploring in your code as you adopt ECMAScript 6. The `let` and `const` block bindings introduce lexical scoping to JavaScript. These declarations are not hoisted and only exist within the block in which they are declared. This offers behavior that is more like other languages and less likely to cause unintentional errors, as variables can now be declared exactly where they are needed. As a side effect, you cannot access variables before they are declared, even with safe operators such as `typeof` . Attempting to access a block binding before its declaration results in an error due to the binding’s presence in the temporal dead zone (TDZ). In many cases, `let` and `const` behave in a manner similar to `var` ; however, this is not true for loops. For both `let` and `const` , `for-in` and `for-of` loops create a new binding with each iteration through the loop. That means functions created inside the loop body can access the loop bindings’ values as they are during the current iteration, rather than as they were after the loop’s final iteration (the behavior with `var` ). The same is true for `let` declarations in `for` loops, while attempting to use `const` declarations in a `for` loop may result in an error. The current best practice for block bindings is to use `const` by default and only use `let` when you know a variable’s value needs to change. This ensures a basic level of immutability in code that can help prevent certain types of errors. ## Strings and Regular Expressions Strings are arguably one of the most important data types in programming.
They’re in nearly every higher-level programming language, and being able to work with them effectively is fundamental for developers to create useful programs. By extension, regular expressions are important because of the extra power they give developers to wield on strings. With these facts in mind, the creators of ECMAScript 6 improved strings and regular expressions by adding new capabilities and long-missing functionality. This chapter gives a tour of both types of changes. ### Better Unicode Support Before ECMAScript 6, JavaScript strings revolved around 16-bit character encoding (UTF-16). Each 16-bit sequence is a code unit representing a character. All string properties and methods, like the `length` property and the `charAt()` method, were based on these 16-bit code units. Of course, 16 bits used to be enough to contain any character. That’s no longer true thanks to the expanded character set introduced by Unicode. # UTF-16 Code Points Limiting character length to 16 bits wasn’t possible for Unicode’s stated goal of providing a globally unique identifier to every character in the world. These globally unique identifiers, called code points, are simply numbers starting at 0. Code points are what you may think of as character codes, where a number represents a character. A character encoding must encode code points into code units that are internally consistent. For UTF-16, code points can be made up of many code units. The first 2^16 code points in UTF-16 are represented as single 16-bit code units. This range is called the Basic Multilingual Plane (BMP). Everything beyond that is considered to be in one of the supplementary planes, where the code points can no longer be represented in just 16 bits. UTF-16 solves this problem by introducing surrogate pairs in which a single code point is represented by two 16-bit code units.
That means any single character in a string can be either one code unit for BMP characters, giving a total of 16 bits, or two units for supplementary plane characters, giving a total of 32 bits. In ECMAScript 5, all string operations work on 16-bit code units, meaning that you can get unexpected results from UTF-16 encoded strings containing surrogate pairs, as in this example: The single Unicode character `"𠮷"` is represented using surrogate pairs, and as such, the JavaScript string operations above treat the string as having two 16-bit characters. That means: * The `length` of `text` is 2, when it should be 1. * A regular expression trying to match a single character fails because it thinks there are two characters. * The `charAt()` method is unable to return a valid character string, because neither set of 16 bits corresponds to a printable character. The `charCodeAt()` method also just can’t identify the character properly. It returns the appropriate 16-bit number for each code unit, but that is the closest you could get to the real value of `text` in ECMAScript 5. ECMAScript 6, on the other hand, enforces UTF-16 string encoding to address problems like these. Standardizing string operations based on this character encoding means that JavaScript can support functionality designed to work specifically with surrogate pairs. The rest of this section discusses a few key examples of that functionality. # The codePointAt() Method One method ECMAScript 6 added to fully support UTF-16 is the `codePointAt()` method, which retrieves the Unicode code point that maps to a given position in a string. This method accepts the code unit position rather than the character position and returns an integer value, as these `console.log()` examples show: The `codePointAt()` method returns the same value as the `charCodeAt()` method unless it operates on non-BMP characters. 
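The behavior being described can be sketched with the non-BMP character `"𠮷"` followed by `"a"`:

```javascript
let text = "𠮷a";

console.log(text.length);           // 3
console.log(text.charCodeAt(0));    // 55362 (first code unit of the surrogate pair)
console.log(text.charCodeAt(1));    // 57271 (second code unit of the surrogate pair)
console.log(text.charCodeAt(2));    // 97 ("a")

console.log(text.codePointAt(0));   // 134071 (the full code point U+20BB7)
console.log(text.codePointAt(1));   // 57271
console.log(text.codePointAt(2));   // 97
```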
The first character in `text` is non-BMP and is therefore comprised of two code units, meaning the `length` property is 3 rather than 2. The `charCodeAt()` method returns only the first code unit for position 0, but `codePointAt()` returns the full code point even though the code point spans multiple code units. Both methods return the same value for positions 1 (the second code unit of the first character) and 2 (the `"a"` character). Calling the `codePointAt()` method on a character is the easiest way to determine if that character is represented by one or two code units. Here’s a function you could write to check: The upper bound of 16-bit characters is represented in hexadecimal as `FFFF` , so any code point above that number must be represented by two code units, for a total of 32 bits. # The String.fromCodePoint() Method When ECMAScript provides a way to do something, it also tends to provide a way to do the reverse. You can use `codePointAt()` to retrieve the code point for a character in a string, while `String.fromCodePoint()` produces a single-character string from a given code point. For example: Think of `String.fromCodePoint()` as a more complete version of the `String.fromCharCode()` method. Both give the same result for all characters in the BMP. There’s only a difference when you pass code points for characters outside of the BMP. # The normalize() Method Another interesting aspect of Unicode is that different characters may be considered equivalent for the purpose of sorting or other comparison-based operations. There are two ways to define these relationships. First, canonical equivalence means that two sequences of code points are considered interchangeable in all respects. For example, a combination of two characters can be canonically equivalent to one character. The second relationship is compatibility. Two compatible sequences of code points look different but can be used interchangeably in certain situations.
Due to these relationships, two strings representing fundamentally the same text can contain different code point sequences. For example, the character “æ” and the two-character string “ae” may be used interchangeably but are strictly not equivalent unless normalized in some way. ECMAScript 6 supports Unicode normalization forms by giving strings a `normalize()` method. This method optionally accepts a single string parameter indicating one of the following Unicode normalization forms to apply: * Normalization Form Canonical Composition ( `"NFC"` ), which is the default * Normalization Form Canonical Decomposition ( `"NFD"` ) * Normalization Form Compatibility Composition ( `"NFKC"` ) * Normalization Form Compatibility Decomposition ( `"NFKD"` ) It’s beyond the scope of this book to explain the differences between these four forms. Just keep in mind that when comparing strings, both strings must be normalized to the same form. For example: This code converts the strings in the `values` array into a normalized form so that the array can be sorted appropriately. You can also sort the original array by calling `normalize()` as part of the comparator, as follows: Once again, the most important thing to note about this code is that both `first` and `second` are normalized in the same way. These examples have used the default, NFC, but you can just as easily specify one of the others, like this: If you’ve never worried about Unicode normalization before, then you probably won’t have much use for this method now. But if you ever work on an internationalized application, you’ll definitely find the `normalize()` method helpful. Methods aren’t the only improvements that ECMAScript 6 provides for working with Unicode strings, though. The standard also adds two useful syntax elements. You can accomplish many common string operations through regular expressions. But remember, regular expressions assume 16-bit code units, where each represents a single character. 
To address this problem, ECMAScript 6 defines a `u` flag for regular expressions, which stands for Unicode. # The u Flag in Action When a regular expression has the `u` flag set, it switches modes to work on characters, not code units. That means the regular expression should no longer get confused about surrogate pairs in strings and should behave as expected. For example, consider this code: The regular expression `/^.$/` matches any input string with a single character. When used without the `u` flag, this regular expression matches on code units, and so the Japanese character (which is represented by two code units) doesn’t match the regular expression. When used with the `u` flag, the regular expression compares characters instead of code units and so the Japanese character matches. # Counting Code Points Unfortunately, ECMAScript 6 doesn’t add a method to determine how many code points a string has, but with the `u` flag, you can use regular expressions to figure it out as follows: This example calls `match()` to check `text` for both whitespace and non-whitespace characters (using `[\s\S]` to ensure the pattern matches newlines), using a regular expression that is applied globally with Unicode enabled. The `result` contains an array of matches when there’s at least one match, so the array length is the number of code points in the string. In Unicode, the strings `"abc"` and `"𠮷bc"` both have three characters, so the array length is three. # Determining Support for the u Flag Since the `u` flag is a syntax change, attempting to use it in JavaScript engines that aren’t compatible with ECMAScript 6 throws a syntax error. The safest way to determine if the `u` flag is supported is with a function, like this one: This function uses the `RegExp` constructor to pass in the `u` flag as an argument. This syntax is valid even in older JavaScript engines, but the constructor will throw an error if `u` isn’t supported. 
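The two techniques just described can be sketched as follows; the function names are illustrative:

```javascript
// Count code points rather than code units, using a global,
// Unicode-aware regular expression.
function codePointLength(text) {
    var result = text.match(/[\s\S]/gu);
    return result ? result.length : 0;
}

console.log(codePointLength("abc"));    // 3
console.log(codePointLength("𠮷bc"));   // 3

// Feature-detect the `u` flag; the RegExp constructor defers the
// syntax check to runtime, so this parses even in older engines.
function hasRegExpU() {
    try {
        var pattern = new RegExp(".", "u");
        return true;
    } catch (ex) {
        return false;
    }
}

console.log(hasRegExpU());  // true in any ECMAScript 6-compliant engine
```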
### Other String Changes JavaScript strings have always lagged behind similar features of other languages. It was only in ECMAScript 5 that strings finally gained a `trim()` method, for example, and ECMAScript 6 continues extending JavaScript’s capacity to parse strings with new functionality. # Methods for Identifying Substrings Developers have used the `indexOf()` method to identify strings inside other strings since JavaScript was first introduced. ECMAScript 6 includes the following three methods, which are designed to do just that: * The `includes()` method returns true if the given text is found anywhere within the string. It returns false if not. * The `startsWith()` method returns true if the given text is found at the beginning of the string. It returns false if not. * The `endsWith()` method returns true if the given text is found at the end of the string. It returns false if not. Each method accepts two arguments: the text to search for and an optional index. When the second argument is provided, `includes()` and `startsWith()` start the match from that index, while `endsWith()` treats that index as the end of the string and matches against the characters before it; when the second argument is omitted, `includes()` and `startsWith()` search from the beginning of the string, while `endsWith()` matches against the end. In effect, the second argument minimizes the amount of the string being searched. Here are some examples showing these three methods in action: The first six calls don’t include a second parameter, so they’ll search the whole string if needed. The last three calls only check part of the string. The call to `msg.startsWith("o", 4)` starts the match by looking at index 4 of the `msg` string, which is the “o” in “Hello”. The call to `msg.endsWith("o", 8)` starts the search from index 0 and searches up to index 7, which is the “o” in “world”. The call to `msg.includes("o", 8)` starts the match from index 8, which is the “r” in “world”.
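The calls walked through above can be sketched as:

```javascript
var msg = "Hello world!";

console.log(msg.startsWith("Hello"));   // true
console.log(msg.endsWith("!"));         // true
console.log(msg.includes("o"));         // true

console.log(msg.startsWith("o"));       // false
console.log(msg.endsWith("world!"));    // true
console.log(msg.includes("x"));         // false

console.log(msg.startsWith("o", 4));    // true
console.log(msg.endsWith("o", 8));      // true
console.log(msg.includes("o", 8));      // false
```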
While these three methods make identifying the existence of substrings easier, each only returns a boolean value. If you need to find the actual position of one string within another, use the `indexOf()` or `lastIndexOf()` methods. # The repeat() Method ECMAScript 6 also adds a `repeat()` method to strings, which accepts the number of times to repeat the string as an argument. It returns a new string containing the original string repeated the specified number of times. For example: This method is a convenience function above all else, and it can be especially useful when manipulating text. It’s particularly useful in code formatting utilities that need to create indentation levels, like this: The first `repeat()` call creates a string of four spaces, and the `indentLevel` variable keeps track of the indent level. Then, you can just call `repeat()` with an incremented `indentLevel` to change the number of spaces. ECMAScript 6 also makes some useful changes to regular expression functionality that don’t fit into a particular category. The next section highlights a few. ### Other Regular Expression Changes Regular expressions are an important part of working with strings in JavaScript, and like many parts of the language, they haven’t changed much in recent versions. ECMAScript 6, however, makes several improvements to regular expressions to go along with the updates to strings. ECMAScript 6 standardized the `y` flag after it was implemented in Firefox as a proprietary extension to regular expressions. The `y` flag affects a regular expression search’s `sticky` property, and it tells the search to start matching characters in a string at the position specified by the regular expression’s `lastIndex` property. If there is no match at that location, then the regular expression stops matching. To see how this works, consider the following code: This example has three regular expressions. 
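The three regular expressions might look like this sketch (the `hello1 hello2 hello3` text is illustrative, and the outputs follow the description below):

```javascript
var text = "hello1 hello2 hello3",
    pattern = /hello\d\s?/,
    result = pattern.exec(text),
    globalPattern = /hello\d\s?/g,
    globalResult = globalPattern.exec(text),
    stickyPattern = /hello\d\s?/y,
    stickyResult = stickyPattern.exec(text);

console.log(result[0]);         // "hello1 "
console.log(globalResult[0]);   // "hello1 "
console.log(stickyResult[0]);   // "hello1 "

// start matching from the second character on all three patterns
pattern.lastIndex = 1;
globalPattern.lastIndex = 1;
stickyPattern.lastIndex = 1;

result = pattern.exec(text);
globalResult = globalPattern.exec(text);
stickyResult = stickyPattern.exec(text);

console.log(result[0]);         // "hello1 " - lastIndex is ignored without flags
console.log(globalResult[0]);   // "hello2 " - searches forward from index 1
console.log(stickyResult);      // null - no match exactly at index 1
```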
The expression in `pattern` has no flags, the one in `globalPattern` uses the `g` flag, and the one in `stickyPattern` uses the `y` flag. In the first trio of `console.log()` calls, all three regular expressions should return `"hello1 "` with a space at the end. After that, the `lastIndex` property is changed to 1 on all three patterns, meaning that the regular expression should start matching from the second character on all of them. The regular expression with no flags completely ignores the change to `lastIndex` and still matches `"hello1 "` without incident. The regular expression with the `g` flag goes on to match `"hello2 "` because it is searching forward from the second character of the string ( `"e"` ). The sticky regular expression doesn’t match anything beginning at the second character so `stickyResult` is `null` . The sticky flag saves the index of the next character after the last match in `lastIndex` whenever an operation is performed. If an operation results in no match, then `lastIndex` is set back to 0. The global flag behaves the same way, as demonstrated here: The value of `lastIndex` changes to 7 after the first call to `exec()` and to 14 after the second call, for both the `stickyPattern` and `globalPattern` variables. There are two more subtle details about the sticky flag to keep in mind: * The `lastIndex` property is only honored when calling methods that exist on the regular expression object, like the `exec()` and `test()` methods. Passing the regular expression to a string method, such as `match()` , will not result in the sticky behavior. * When using the `^` character to match the start of a string, sticky regular expressions only match from the start of the string (or the start of the line in multiline mode). While `lastIndex` is 0, the `^` makes a sticky regular expression no different from a non-sticky one. 
If `lastIndex` doesn’t correspond to the beginning of the string in single-line mode or the beginning of a line in multiline mode, the sticky regular expression will never match. As with other regular expression flags, you can detect the presence of `y` by using a property. In this case, you’d check the `sticky` property, as follows: The `sticky` property is set to true if the sticky flag is present, and the property is false if not. The `sticky` property is read-only based on the presence of the flag and cannot be changed in code. Similar to the `u` flag, the `y` flag is a syntax change, so it will cause a syntax error in older JavaScript engines. You can use the following approach to detect support: Just like the `u` check, this returns false if it’s unable to create a regular expression with the `y` flag. In one final similarity to `u` , if you need to use `y` in code that runs in older JavaScript engines, be sure to use the `RegExp` constructor when defining those regular expressions to avoid a syntax error. # Duplicating Regular Expressions In ECMAScript 5, you can duplicate regular expressions by passing them into the `RegExp` constructor like this: The `re2` variable is just a copy of the `re1` variable. But if you provide the second argument to the `RegExp` constructor, which specifies the flags for the regular expression, your code won’t work, as in this example: If you execute this code in an ECMAScript 5 environment, you’ll get an error stating that the second argument cannot be used when the first argument is a regular expression. ECMAScript 6 changed this behavior such that the second argument is allowed and overrides any flags present on the first argument. For example: In this code, `re1` has the case-insensitive `i` flag while `re2` has only the global `g` flag. The `RegExp` constructor duplicated the pattern from `re1` and substituted the `g` flag for the `i` flag. Without the second argument, `re2` would have the same flags as `re1` . 
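A sketch of the duplication-with-flags behavior just described:

```javascript
var re1 = /ab/i,
    re2 = new RegExp(re1, "g");     // copies the pattern, replaces the flags

console.log(re1.toString());    // "/ab/i"
console.log(re2.toString());    // "/ab/g"

console.log(re1.test("AB"));    // true: `i` makes the match case-insensitive
console.log(re2.test("AB"));    // false: `g` replaced `i`, so case matters
```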
# The `flags` Property Along with adding a new flag and changing how you can work with flags, ECMAScript 6 added a property associated with them. In ECMAScript 5, you could get the text of a regular expression by using the `source` property, but to get the flag string, you’d have to parse the output of the `toString()` method as shown below: This converts a regular expression into a string and then returns the characters found after the last `/` . Those characters are the flags. ECMAScript 6 makes fetching flags easier by adding a `flags` property to go along with the `source` property. Both properties are prototype accessor properties with only a getter assigned, making them read-only. The `flags` property makes inspecting regular expressions easier for both debugging and inheritance purposes. A late addition to ECMAScript 6, the `flags` property returns the string representation of any flags applied to a regular expression. For example: This fetches all flags on `re` and prints them to the console with far fewer lines of code than the `toString()` technique can. Using `source` and `flags` together allows you to extract the pieces of the regular expression that you need without parsing the regular expression string directly. The changes to strings and regular expressions that this chapter has covered so far are definitely powerful, but ECMAScript 6 improves your power over strings in a much bigger way. It brings a type of literal to the table that makes strings more flexible. ### Template Literals JavaScript’s strings have always had limited functionality compared to strings in other languages. For instance, until ECMAScript 6, strings lacked the methods covered so far in this chapter, and string concatenation is as simple as possible. 
To allow developers to solve more complex problems, ECMAScript 6’s template literals provide syntax for creating domain-specific languages (DSLs) for working with content in a safer way than the solutions available in ECMAScript 5 and earlier. (A DSL is a programming language designed for a specific, narrow purpose, as opposed to general-purpose languages like JavaScript.) The ECMAScript wiki offers the following description on the template literal strawman: This scheme extends ECMAScript syntax with syntactic sugar to allow libraries to provide DSLs that easily produce, query, and manipulate content from other languages that are immune or resistant to injection attacks such as XSS, SQL Injection, etc. In reality, though, template literals are ECMAScript 6’s answer to the following features that JavaScript lacked all the way through ECMAScript 5: * Multiline strings A formal concept of multiline strings. * Basic string formatting The ability to substitute parts of the string for values contained in variables. * HTML escaping The ability to transform a string such that it is safe to insert into HTML. Rather than trying to add more functionality to JavaScript’s already-existing strings, template literals represent an entirely new approach to solving these problems. # Basic Syntax At their simplest, template literals act like regular strings delimited by backticks ( `` ` `` ) instead of double or single quotes. For example, consider the following: This code demonstrates that the variable `message` contains a normal JavaScript string. The template literal syntax is used to create the string value, which is then assigned to the `message` variable. If you want to use a backtick in your string, then just escape it with a backslash ( `\` ), as in this version of the `message` variable: There’s no need to escape either double or single quotes inside of template literals.
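A minimal sketch of the syntax just described:

```javascript
let message = `Hello world!`;

console.log(message);           // "Hello world!"
console.log(typeof message);    // "string"
console.log(message.length);    // 12

// Backticks must be escaped; double and single quotes need no escaping.
let other = `\`Hello\` world, it's me.`;
console.log(other);             // "`Hello` world, it's me."
```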
# Multiline Strings JavaScript developers have wanted a way to create multiline strings since the first version of the language. But when using double or single quotes, strings must be completely contained on a single line. # Pre-ECMAScript 6 Workarounds Thanks to a long-standing syntax bug, JavaScript does have a workaround. You can create multiline strings if there’s a backslash ( `\` ) before a newline. Here’s an example: The `message` string has no newlines present when printed to the console because the backslash is treated as a continuation rather than a newline. In order to show a newline in output, you’d need to manually include it: This should print `Multiline String` on two separate lines in all major JavaScript engines, but the behavior is defined as a bug and many developers recommend avoiding it. Other pre-ECMAScript 6 attempts to create multiline strings usually relied on arrays or string concatenation, such as: All of the ways developers worked around JavaScript’s lack of multiline strings left something to be desired. # Multiline Strings the Easy Way ECMAScript 6’s template literals make multiline strings easy because there’s no special syntax. Just include a newline where you want, and it shows up in the result. For example: All whitespace inside the backticks is part of the string, so be careful with indentation. For example: In this code, all whitespace before the second line of the template literal is considered part of the string itself. If making the text line up with proper indentation is important to you, then consider leaving nothing on the first line of a multiline template literal and then indenting after that, as follows: This code begins the template literal on the first line but doesn’t have any text until the second. The HTML tags are indented to look correct and then the `trim()` method is called to remove the initial empty line. 
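A sketch of the indentation technique described above:

```javascript
// Leave the first line empty, indent the content, then trim the
// leading newline so the HTML lines up as intended.
let html = `
<div>
    <h1>Title</h1>
</div>`.trim();

console.log(html);
// <div>
//     <h1>Title</h1>
// </div>

let message = `Multiline
string`;

console.log(message.length);    // 16: the newline counts as one character
```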
# Making Substitutions At this point, template literals may look like fancier versions of normal JavaScript strings. The real difference between the two lies in template literal substitutions. Substitutions allow you to embed any valid JavaScript expression inside a template literal and output the result as part of the string. Substitutions are delimited by an opening `${` and a closing `}` that can have any JavaScript expression inside. The simplest substitutions let you embed local variables directly into a resulting string, like this: The substitution `${name}` accesses the local variable `name` to insert `name` into the `message` string. The `message` variable then holds the result of the substitution immediately. Since all substitutions are JavaScript expressions, you can substitute more than just simple variable names. You can easily embed calculations, function calls, and more. For example: This code performs a calculation as part of the template literal. The variables `count` and `price` are multiplied together to get a result, and then formatted to two decimal places using `.toFixed()` . The dollar sign before the second substitution is output as-is because it’s not followed by an opening curly brace. Template literals are also JavaScript expressions, which means you can place a template literal inside of another template literal, as in this example: This example nests a second template literal inside the first. After the first `${` , another template literal begins. The second `${` indicates the beginning of an embedded expression inside the inner template literal. That expression is the variable `name` , which is inserted into the result. # Tagged Templates Now you’ve seen how template literals can create multiline strings and insert values into strings without concatenation. But the real power of template literals comes from tagged templates. A template tag performs a transformation on the template literal and returns the final string value. 
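Before moving on to tags, the substitutions described above can be sketched as follows (the `count` and `price` values are illustrative):

```javascript
let count = 10,
    price = 0.25,
    message = `${count} items cost $${(count * price).toFixed(2)}.`;

console.log(message);   // "10 items cost $2.50."

// Template literals are expressions themselves, so they can nest
// inside substitutions.
let name = "Nicholas",
    greeting = `Hello, ${`my friend ${name}`}.`;

console.log(greeting);  // "Hello, my friend Nicholas."
```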
This tag is specified at the start of the template, just before the first backtick ( `` ` `` ) character, as shown here: In this example, `tag` is the template tag to apply to the `` `Hello world` `` template literal. # Defining Tags A tag is simply a function that is called with the processed template literal data. The tag receives data about the template literal as individual pieces and must combine the pieces to create the result. The first argument is an array containing the literal strings as interpreted by JavaScript. Each subsequent argument is the interpreted value of each substitution. Tag functions are typically defined using rest arguments as follows, to make dealing with the data easier: To better understand what gets passed to tags, consider the following: If you had a function called `passthru()` , that function would receive three arguments. First, it would get a `literals` array, containing the following elements: * The empty string before the first substitution ( `""` ) * The string after the first substitution and before the second ( `" items cost $"` ) * The string after the second substitution ( `"."` ) The next argument would be `10` , which is the interpreted value for the `count` variable. This becomes the first element in a `substitutions` array. The final argument would be `"2.50"` , which is the interpreted value for `(count * price).toFixed(2)` and the second element in the `substitutions` array. Note that the first item in `literals` is an empty string. This ensures that `literals[0]` is always the start of the string, just like `literals[literals.length - 1]` is always the end of the string. There is always one fewer substitution than literal, which means the expression `substitutions.length === literals.length - 1` is always true. Using this pattern, the `literals` and `substitutions` arrays can be interwoven to create a resulting string.
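A sketch of such a `passthru` tag that performs the interweaving (the name is illustrative):

```javascript
function passthru(literals, ...substitutions) {
    let result = "";

    // interleave literals and substitutions:
    // literal, substitution, literal, substitution, ...
    for (let i = 0; i < substitutions.length; i++) {
        result += literals[i];
        result += substitutions[i];
    }

    // literals always has one more entry than substitutions
    result += literals[literals.length - 1];

    return result;
}

let count = 10,
    price = 0.25,
    message = passthru`${count} items cost $${(count * price).toFixed(2)}.`;

console.log(message);   // "10 items cost $2.50."
```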
The first item in `literals` comes first, the first item in `substitutions` is next, and so on, until the string is complete. As an example, you can mimic the default behavior of a template literal by alternating values from these two arrays: This example defines a `passthru` tag that performs the same transformation as the default template literal behavior. The only trick is to use `substitutions.length` for the loop rather than `literals.length` to avoid accidentally going past the end of the `substitutions` array. This works because the relationship between `literals` and `substitutions` is well-defined in ECMAScript 6. # Using Raw Values in Template Literals Template tags also have access to raw string information, which primarily means access to character escapes before they are transformed into their character equivalents. The simplest way to work with raw string values is to use the built-in `String.raw()` tag. For example: In this code, the `\n` in `message1` is interpreted as a newline while the `\n` in `message2` is returned in its raw form of `"\\n"` (the slash and `n` characters). Retrieving the raw string information like this allows for more complex processing when necessary. The raw string information is also passed into template tags. The first argument in a tag function is an array with an extra property called `raw` . The `raw` property is an array containing the raw equivalent of each literal value. For example, the value in `literals[0]` always has an equivalent `literals.raw[0]` that contains the raw string information. Knowing that, you can mimic `String.raw()` using the following code: This uses `literals.raw` instead of `literals` to output the string result. That means any character escapes, including Unicode code point escapes, should be returned in their raw form. 
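A sketch of the interleaving tag, together with the `String.raw` behavior described above:

```javascript
// Reproduce the default template-literal behavior by interleaving
// literals with substitution values.
function passthru(literals, ...substitutions) {
    let result = "";

    // Loop over substitutions.length, not literals.length, so we
    // never run past the end of the shorter substitutions array.
    for (let i = 0; i < substitutions.length; i++) {
        result += literals[i];
        result += substitutions[i];
    }

    // There is always one more literal than substitution.
    result += literals[literals.length - 1];
    return result;
}

let count = 10,
    price = 0.25,
    message = passthru`${count} items cost $${(count * price).toFixed(2)}.`;
console.log(message);                   // "10 items cost $2.50."

// String.raw leaves character escapes untransformed
let message1 = `Multiline\nstring`,           // contains a real newline
    message2 = String.raw`Multiline\nstring`; // contains "\" and "n"
console.log(message2);
```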
Raw strings are helpful when you want to output a string containing code in which you’ll need to include the character escaping (for instance, if you want to generate documentation about some code, you may want to output the actual code as it appears). Full Unicode support allows JavaScript to deal with UTF-16 characters in logical ways. The ability to transfer between code point and character via `codePointAt()` and `String.fromCodePoint()` is an important step for string manipulation. The addition of the regular expression `u` flag makes it possible to operate on code points instead of 16-bit characters, and the `normalize()` method allows for more appropriate string comparisons. ECMAScript 6 also added new methods for working with strings, allowing you to more easily identify a substring regardless of its position in the parent string. More functionality was added to regular expressions, too. Template literals are an important addition to ECMAScript 6 that allows you to create domain-specific languages (DSLs) to make creating strings easier. The ability to embed variables directly into template literals means that developers have a safer tool than string concatenation for composing long strings with variables. Built-in support for multiline strings also makes template literals a useful upgrade over normal JavaScript strings, which have never had this ability. Despite allowing newlines directly inside the template literal, you can still use `\n` and other character escape sequences. Template tags are the most important part of this feature for creating DSLs. Tags are functions that receive the pieces of the template literal as arguments. You can then use that data to return an appropriate string value. The data provided includes literals, their raw equivalents, and any substitution values. These pieces of information can then be used to determine the correct output for the tag. 
## Functions Functions are an important part of any programming language, and prior to ECMAScript 6, JavaScript functions hadn’t changed much since the language was created. This left a backlog of problems and nuanced behavior that made making mistakes easy and often required more code just to achieve very basic behaviors. ECMAScript 6 functions make a big leap forward, taking into account years of complaints and requests from JavaScript developers. The result is a number of incremental improvements on top of ECMAScript 5 functions that make programming in JavaScript less error-prone and more powerful. ### Functions with Default Parameter Values Functions in JavaScript are unique in that they allow any number of parameters to be passed, regardless of the number of parameters declared in the function definition. This allows you to define functions that can handle different numbers of parameters, often by just filling in default values when parameters aren’t provided. This section covers how default parameters work both in and prior to ECMAScript 6, along with some important information on the `arguments` object, using expressions as parameters, and another TDZ. # Simulating Default Parameter Values in ECMAScript 5 In ECMAScript 5 and earlier, you would likely use the following pattern to create a function with default parameters values: In this example, both `timeout` and `callback` are actually optional because they are given a default value if a parameter isn’t provided. The logical OR operator ( `||` ) always returns the second operand when the first is falsy. Since named function parameters that are not explicitly provided are set to `undefined` , the logical OR operator is frequently used to provide default values for missing parameters. There is a flaw with this approach, however, in that a valid value for `timeout` might actually be `0` , but this would replace it with `2000` because `0` is falsy. 
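The ES5 `||` pattern and its zero-value flaw might look like this (returning `timeout` purely to make the behavior observable):

```javascript
// ECMAScript 5 pattern: defaults via logical OR
function makeRequest(url, timeout, callback) {
    timeout = timeout || 2000;
    callback = callback || function() {};

    // return timeout just so the example's behavior is visible
    return timeout;
}

console.log(makeRequest("/foo"));       // 2000 - default applied
console.log(makeRequest("/foo", 500));  // 500
console.log(makeRequest("/foo", 0));    // 2000 - flaw: 0 is falsy
```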
In that case, a safer alternative is to check the type of the argument using `typeof` , as in this example: While this approach is safer, it still requires a lot of extra code for a very basic operation. Popular JavaScript libraries are filled with similar patterns, as this represents a common pattern. # Default Parameter Values in ECMAScript 6 ECMAScript 6 makes it easier to provide default values for parameters by providing initializations that are used when the parameter isn’t formally passed. For example: This function only expects the first parameter to always be passed. The other two parameters have default values, which makes the body of the function much smaller because you don’t need to add any code to check for a missing value. When `makeRequest()` is called with all three parameters, the defaults are not used. For example: ECMAScript 6 considers `url` to be required, which is why `"/foo"` is passed in all three calls to `makeRequest()` . The two parameters with a default value are considered optional. It’s possible to specify default values for any arguments, including those that appear before arguments without default values in the function declaration. For example, this is fine: In this case, the default value for `timeout` will only be used if there is no second argument passed in or if the second argument is explicitly passed in as `undefined` , as in this example: In the case of default parameter values, a value of `null` is considered to be valid, meaning that in the third call to `makeRequest()` , the default value for `timeout` will not be used. # How Default Parameter Values Affect the arguments Object Just keep in mind that the behavior of the `arguments` object is different when default parameter values are present. In ECMAScript 5 nonstrict mode, the `arguments` object reflects changes in the named parameters of a function. 
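A sketch of the `typeof` guard next to its ECMAScript 6 equivalent:

```javascript
// Safer ES5 check using typeof
function makeRequestES5(url, timeout, callback) {
    timeout = (typeof timeout !== "undefined") ? timeout : 2000;
    callback = (typeof callback !== "undefined") ? callback : function() {};
    return timeout;     // observable for the example
}

// ECMAScript 6 default parameter values
function makeRequest(url, timeout = 2000, callback = function() {}) {
    return timeout;
}

console.log(makeRequestES5("/foo", 0));      // 0 - guard keeps it
console.log(makeRequest("/foo"));            // 2000 - default used
console.log(makeRequest("/foo", undefined)); // 2000 - explicit undefined
console.log(makeRequest("/foo", 0));         // 0
console.log(makeRequest("/foo", null));      // null - null is a valid value
```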
Here’s some code that illustrates how this works: This outputs: The `arguments` object is always updated in nonstrict mode to reflect changes in the named parameters. Thus, when `first` and `second` are assigned new values, `arguments[0]` and `arguments[1]` are updated accordingly, making all of the `===` comparisons resolve to `true` . ECMAScript 5’s strict mode, however, eliminates this confusing aspect of the `arguments` object. In strict mode, the `arguments` object does not reflect changes to the named parameters. Here’s the `mixArgs()` function again, but in strict mode: The call to `mixArgs()` outputs: This time, changing `first` and `second` doesn’t affect `arguments` , so the output behaves as you’d normally expect it to. The `arguments` object in a function using ECMAScript 6 default parameter values, however, will always behave in the same manner as ECMAScript 5 strict mode, regardless of whether the function is explicitly running in strict mode. The presence of default parameter values triggers the `arguments` object to remain detached from the named parameters. This is a subtle but important detail because of how the `arguments` object may be used. Consider the following: This outputs: In this example, `arguments.length` is 1 because only one argument was passed to `mixArgs()` . That also means `arguments[1]` is `undefined` , which is the expected behavior when only one argument is passed to a function. That means `first` is equal to `arguments[0]` as well. Changing `first` and `second` has no effect on `arguments` . This behavior occurs in both nonstrict and strict mode, so you can rely on `arguments` to always reflect the initial call state. # Default Parameter Expressions Perhaps the most interesting feature of default parameter values is that the default value need not be a primitive value. 
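The detachment described above can be observed directly; this sketch records each comparison so the results are checkable:

```javascript
// With a default parameter value present, arguments stays detached
// from the named parameters, even in nonstrict mode.
function mixArgs(first, second = "b") {
    let checks = [
        arguments.length,           // 1
        first === arguments[0],     // true
        second === arguments[1]     // false - arguments[1] is undefined
    ];

    first = "c";
    second = "d";

    checks.push(first === arguments[0]);    // false - no write-through
    checks.push(second === arguments[1]);   // false

    return checks;
}

let checks = mixArgs("a");
console.log(checks);    // [ 1, true, false, false, false ]
```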
You can, for example, execute a function to retrieve the default parameter value, like this: Here, if the last argument isn’t provided, the function `getValue()` is called to retrieve the correct default value. Keep in mind that `getValue()` is only called when `add()` is called without a second parameter, not when the function declaration is first parsed. That means if `getValue()` were written differently, it could potentially return a different value. For instance: In this example, `value` begins as five and increments each time `getValue()` is called. The first call to `add(1)` returns 6, while the second call to `add(1)` returns 7 because `value` was incremented. Because the default value for `second` is only evaluated when the function is called, changes to that value can be made at any time. This behavior introduces another interesting capability. You can use a previous parameter as the default for a later parameter. Here’s an example: In this code, the parameter `second` is given a default value of `first` , meaning that passing in just one argument leaves both arguments with the same value. So `add(1, 1)` returns 2 just as `add(1)` returns 2. Taking this a step further, you can pass `first` into a function to get the value for `second` as follows: This example sets `second` equal to the value returned by `getValue(first)` , so while `add(1, 1)` still returns 2, `add(1)` returns 7 (1 + 6). The ability to reference parameters from default parameter assignments works only for previous arguments, so earlier arguments do not have access to later arguments. For example: The call to `add(undefined, 1)` throws an error because `second` is defined after `first` and is therefore unavailable as a default value. To understand why that happens, it’s important to revisit temporal dead zones. 
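A sketch of the incrementing default expression and the previous-parameter default described above:

```javascript
let value = 5;

function getValue() {
    return value++;
}

// The default expression is evaluated on each call that omits `second`
function add(first, second = getValue()) {
    return first + second;
}

let r1 = add(1, 1);     // 2 - default expression never evaluated
let r2 = add(1);        // 6 (1 + 5)
let r3 = add(1);        // 7 (1 + 6) - value was incremented

// A previous parameter may serve as a later parameter's default
function addSame(first, second = first) {
    return first + second;
}

let r4 = addSame(1, 1); // 2
let r5 = addSame(1);    // 2

console.log(r1, r2, r3, r4, r5);
```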
# Default Parameter Value Temporal Dead Zone Chapter 1 introduced the temporal dead zone (TDZ) as it relates to `let` and `const` , and default parameter values also have a TDZ where parameters cannot be accessed. Similar to a `let` declaration, each parameter creates a new identifier binding that can’t be referenced before initialization without throwing an error. Parameter initialization happens when the function is called, either by passing a value for the parameter or by using the default parameter value. To explore the default parameter value TDZ, consider this example from “Default Parameter Expressions” again: The calls to `add(1, 1)` and `add(1)` effectively execute the following code to create the `first` and `second` parameter values: When the function `add()` is first executed, the bindings `first` and `second` are added to a parameter-specific TDZ (similar to how `let` behaves). So while `second` can be initialized with the value of `first` because `first` is always initialized at that time, the reverse is not true. Now, consider this rewritten `add()` function: The calls to `add(1, 1)` and `add(undefined, 1)` in this example now map to this code behind the scenes: In this example, the call to `add(undefined, 1)` throws an error because `second` hasn’t yet been initialized when `first` is initialized. At that point, `second` is in the TDZ and therefore any references to `second` throw an error. This mirrors the behavior of `let` bindings discussed in Chapter 1. ### Working with Unnamed Parameters So far, the examples in this chapter have only covered parameters that have been named in the function definition. However, JavaScript functions don’t limit the number of parameters that can be passed to the number of named parameters defined. You can always pass fewer or more parameters than formally specified. 
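The parameter TDZ can be demonstrated with the reversed `add()` (a sketch):

```javascript
// `first` defaults to `second`, but parameters initialize left to
// right, so `second` is still in its TDZ when `first` needs it.
function add(first = second, second) {
    return first + second;
}

let ok = add(1, 1);     // 2 - both provided, default unused

let threw = false;
try {
    add(undefined, 1);  // ReferenceError: second is uninitialized
} catch (e) {
    threw = true;
}

console.log(ok, threw); // 2 true
```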
Default parameter values make it clear when a function can accept fewer parameters, and ECMAScript 6 sought to make the problem of passing more parameters than defined better as well. # Unnamed Parameters in ECMAScript 5 Early on, JavaScript provided the `arguments` object as a way to inspect all function parameters that are passed without necessarily defining each parameter individually. While inspecting `arguments` works fine in most cases, this object can be a little cumbersome to work with. For example, examine this code, which inspects the `arguments` object: This function mimics the `pick()` method from the Underscore.js library, which returns a copy of a given object with some specified subset of the original object’s properties. This example defines only one argument and expects the first argument to be the object from which to copy properties. Every other argument passed is the name of a property that should be copied on the result. There are a couple of things to notice about this `pick()` function. First, it’s not at all obvious that the function can handle more than one parameter. You could define several more parameters, but you would always fall short of indicating that this function can take any number of parameters. Second, because the first parameter is named and used directly, when you look for the properties to copy, you have to start in the `arguments` object at index 1 instead of index 0. Remembering to use the appropriate indices with `arguments` isn’t necessarily difficult, but it’s one more thing to keep track of. ECMAScript 6 introduces rest parameters to help with these issues. # Rest Parameters A rest parameter is indicated by three dots ( `...` ) preceding a named parameter. That named parameter becomes an `Array` containing the rest of the parameters passed to the function, which is where the name “rest” parameters originates. 
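An ES5-style `pick()` along the lines described above (the sample object is illustrative):

```javascript
// ES5-style pick(): property names start at arguments index 1,
// because index 0 is the source object itself.
function pick(object) {
    var result = Object.create(null);

    for (var i = 1, len = arguments.length; i < len; i++) {
        result[arguments[i]] = object[arguments[i]];
    }

    return result;
}

var book = {
    title: "Understanding ECMAScript 6",
    author: "Nicholas C. Zakas",
    year: 2016
};

var bookData = pick(book, "author", "year");

console.log(bookData.author);       // "Nicholas C. Zakas"
console.log(bookData.year);         // 2016
console.log("title" in bookData);   // false - not requested
```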
For example, `pick()` can be rewritten using rest parameters, like this: In this version of the function, `keys` is a rest parameter that contains all parameters passed after `object` (unlike `arguments` , which contains all parameters including the first one). That means you can iterate over `keys` from beginning to end without worry. As a bonus, you can tell by looking at the function that it is capable of handling any number of parameters. # Rest Parameter Restrictions There are two restrictions on rest parameters. The first restriction is that there can be only one rest parameter, and the rest parameter must be last. For example, this code won’t work: Here, the parameter `last` follows the rest parameter `keys` , which would cause a syntax error. The second restriction is that rest parameters cannot be used in an object literal setter. That means this code would also cause a syntax error: This restriction exists because object literal setters are restricted to a single argument. Rest parameters are, by definition, an infinite number of arguments, so they’re not allowed in this context. # How Rest Parameters Affect the arguments Object Rest parameters were designed to replace `arguments` in ECMAScript. Originally, ECMAScript 4 did away with `arguments` and added rest parameters to allow an unlimited number of arguments to be passed to functions. ECMAScript 4 never came into being, but this idea was kept around and reintroduced in ECMAScript 6, despite `arguments` not being removed from the language. The `arguments` object works together with rest parameters by reflecting the arguments that were passed to the function when called, as illustrated in this program: The call to `checkArgs()` outputs: The `arguments` object always correctly reflects the parameters that were passed into a function regardless of rest parameter usage. That’s all you really need to know about rest parameters to get started using them. 
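The rest-parameter rewrite, plus the `arguments` interaction described above:

```javascript
// keys holds every argument after object, so iteration starts at 0
function pick(object, ...keys) {
    let result = Object.create(null);

    for (let i = 0, len = keys.length; i < len; i++) {
        result[keys[i]] = object[keys[i]];
    }

    return result;
}

let book = { title: "Understanding ECMAScript 6", author: "Zakas", year: 2016 };
let bookData = pick(book, "author", "year");
console.log(bookData.author, bookData.year);    // "Zakas" 2016

// arguments still reflects the full call, regardless of the rest parameter
function checkArgs(...args) {
    return args.length === arguments.length &&
           args[0] === arguments[0] &&
           args[1] === arguments[1];
}

console.log(checkArgs("a", "b"));   // true
```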
### Increased Capabilities of the Function Constructor The `Function` constructor is an infrequently used part of JavaScript that allows you to dynamically create a new function. The arguments to the constructor are the parameters for the function and the function body, all as strings. Here’s an example: ECMAScript 6 augments the capabilities of the `Function` constructor to allow default parameters and rest parameters. You need only add an equals sign and a value to the parameter names, as follows: In this example, the parameter `second` is assigned the value of `first` when only one parameter is passed. The syntax is the same as for function declarations that don’t use `Function` . For rest parameters, just add the `...` before the last parameter, like this: This code creates a function that uses only a single rest parameter and returns the first argument that was passed in. The addition of default and rest parameters ensures that `Function` has all of the same capabilities as the declarative form of creating functions. ### The Spread Operator Closely related to rest parameters is the spread operator. While rest parameters allow you to specify that multiple independent arguments should be combined into an array, the spread operator allows you to specify an array that should be split and have its items passed in as separate arguments to a function. Consider the `Math.max()` method, which accepts any number of arguments and returns the one with the highest value. Here’s a simple use case for this method: When you’re dealing with just two values, as in this example, `Math.max()` is very easy to use. The two values are passed in, and the higher value is returned. But what if you’ve been tracking values in an array, and now you want to find the highest value? 
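The augmented `Function` constructor forms might be sketched as:

```javascript
// Basic form: parameters and body supplied as strings
var add = new Function("first", "second", "return first + second");
console.log(add(1, 1));         // 2

// ES6: a default value inside the parameter string
var addDefault = new Function("first", "second = first",
        "return first + second");
console.log(addDefault(1, 1));  // 2
console.log(addDefault(1));     // 2

// ES6: a rest parameter inside the parameter string
var pickFirst = new Function("...args", "return args[0]");
console.log(pickFirst(1, 2));   // 1
```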
The `Math.max()` method doesn’t allow you to pass in an array, so in ECMAScript 5 and earlier, you’d be stuck either searching the array yourself or using `apply()` as follows: This solution works, but using `apply()` in this manner is a bit confusing. It actually seems to obfuscate the true meaning of the code with additional syntax. The ECMAScript 6 spread operator makes this case very simple. Instead of calling `apply()` , you can pass the array to `Math.max()` directly and prefix it with the same `...` pattern used with rest parameters. The JavaScript engine then splits the array into individual arguments and passes them in, like this: Now the call to `Math.max()` looks a bit more conventional and avoids the complexity of specifying a `this` -binding (the first argument to `Math.max.apply()` in the previous example) for a simple mathematical operation. You can mix and match the spread operator with other arguments as well. Suppose you want the smallest number returned from `Math.max()` to be 0 (just in case negative numbers sneak into the array). You can pass that argument separately and still use the spread operator for the other arguments, as follows: In this example, the last argument passed to `Math.max()` is `0` , which comes after the other arguments are passed in using the spread operator. The spread operator for argument passing makes using arrays for function arguments much easier. You’ll likely find it to be a suitable replacement for the `apply()` method in most circumstances. In addition to the uses you’ve seen for default and rest parameters so far, in ECMAScript 6, you can also apply both parameter types to JavaScript’s `Function` constructor. ### ECMAScript 6’s name Property Identifying functions can be challenging in JavaScript given the various ways a function can be defined. Additionally, the prevalence of anonymous function expressions makes debugging a bit more difficult, often resulting in stack traces that are hard to read and decipher. 
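The `apply()` workaround and the spread versions described above, side by side:

```javascript
let values = [25, 50, 75, 100];

// ES5: apply() splits the array into arguments, at the cost of
// specifying a this-binding for a plain math call
console.log(Math.max.apply(Math, values));  // 100

// ES6: the spread operator does the splitting
console.log(Math.max(...values));           // 100

// Mixing spread with a regular argument: 0 acts as a floor here
let negatives = [-25, -50, -75, -100];
console.log(Math.max(...negatives, 0));     // 0
```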
For these reasons, ECMAScript 6 adds the `name` property to all functions. # Choosing Appropriate Names All functions in an ECMAScript 6 program will have an appropriate value for their `name` property. To see this in action, look at the following example, which shows a function and function expression, and prints the `name` properties for both: In this code, `doSomething()` has a `name` property equal to `"doSomething"` because it’s a function declaration. The anonymous function expression `doAnotherThing()` has a `name` of `"doAnotherThing"` because that’s the name of the variable to which it is assigned. # Special Cases of the name Property While appropriate names for function declarations and function expressions are easy to find, ECMAScript 6 goes further to ensure that all functions have appropriate names. To illustrate this, consider the following program: In this example, `doSomething.name` is `"doSomethingElse"` because the function expression itself has a name, and that name takes priority over the variable to which the function was assigned. The `name` property of `person.sayName()` is `"sayName"` , as the value was interpreted from the object literal. Similarly, `person.firstName` is actually a getter function, so its name is `"get firstName"` to indicate this difference. Setter functions are prefixed with `"set"` as well. (Both getter and setter functions must be retrieved using `Object.getOwnPropertyDescriptor()` .) There are a couple of other special cases for function names, too. Functions created using `bind()` will have their names prefixed with `"bound"` and functions created using the `Function` constructor have a name of `"anonymous"` , as in this example: The `name` of a bound function will always be the `name` of the function being bound prefixed with the string `"bound "` , so the bound version of `doSomething()` is `"bound doSomething"` . Keep in mind that the value of `name` for any function does not necessarily refer to a variable of the same name. 
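The `name` behaviors described above, sketched (the identifiers are illustrative):

```javascript
function doSomething() {}               // function declaration
var doAnotherThing = function() {};     // anonymous function expression

console.log(doSomething.name);          // "doSomething"
console.log(doAnotherThing.name);       // "doAnotherThing" - from the variable

// A named function expression takes priority over the variable name
var doSomethingElse = function doIt() {};
console.log(doSomethingElse.name);      // "doIt"

// bind() prefixes "bound "; Function-created functions are "anonymous"
console.log(doSomething.bind().name);   // "bound doSomething"
console.log((new Function()).name);     // "anonymous"
```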
The `name` property is meant to be informative, to help with debugging, so there’s no way to use the value of `name` to get a reference to the function. ### Clarifying the Dual Purpose of Functions In ECMAScript 5 and earlier, functions serve the dual purpose of being callable with or without `new` . When used with `new` , the `this` value inside a function is a new object and that new object is returned, as illustrated in this example: When creating `notAPerson` , calling `Person()` without `new` results in `undefined` (and sets a `name` property on the global object in nonstrict mode). The capitalization of `Person` is the only real indicator that the function is meant to be called using `new` , as is common in JavaScript programs. This confusion over the dual roles of functions led to some changes in ECMAScript 6. JavaScript has two different internal-only methods for functions: `[[Call]]` and `[[Construct]]` . When a function is called without `new` , the `[[Call]]` method is executed, which executes the body of the function as it appears in the code. When a function is called with `new` , that’s when the `[[Construct]]` method is called. The `[[Construct]]` method is responsible for creating a new object, called the new target, and then executing the function body with `this` set to the new target. Functions that have a `[[Construct]]` method are called constructors. # Determining How a Function was Called in ECMAScript 5 The most popular way to determine if a function was called with `new` (and hence, with constructor) in ECMAScript 5 is to use `instanceof` , for example: Here, the `this` value is checked to see if it’s an instance of the constructor, and if so, execution continues as normal. If `this` isn’t an instance of `Person` , then an error is thrown. This works because the `[[Construct]]` method creates a new instance of `Person` and assigns it to `this` . 
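The `instanceof` guard, and the way `call()` defeats it:

```javascript
function Person(name) {
    if (this instanceof Person) {
        this.name = name;   // assumed to mean "called with new"
    } else {
        throw new Error("You must use new with Person.");
    }
}

var person = new Person("Nicholas");
console.log(person.name);       // "Nicholas"

// The check is fooled: `this` is an existing Person instance
var notAPerson = Person.call(person, "Michael");
console.log(person.name);       // "Michael" - no error was thrown
console.log(notAPerson);        // undefined
```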
Unfortunately, this approach is not completely reliable because `this` can be an instance of `Person` without using `new` , as in this example: The call to `Person.call()` passes the `person` variable as the first argument, which means `this` is set to `person` inside of the `Person` function. To the function, there’s no way to distinguish this from being called with `new` . # The new.target MetaProperty To solve this problem, ECMAScript 6 introduces the `new.target` metaproperty. A metaproperty is a property of a non-object that provides additional information related to its target (such as `new` ). When a function’s `[[Construct]]` method is called, `new.target` is filled with the target of the `new` operator. That target is typically the constructor of the newly created object instance that will become `this` inside the function body. If `[[Call]]` is executed, then `new.target` is `undefined` . This new metaproperty allows you to safely detect if a function is called with `new` by checking whether `new.target` is defined as follows: By using `new.target` instead of ``` this instanceof Person ``` , the `Person` constructor is now correctly throwing an error when used without `new` . You can also check that `new.target` was called with a specific constructor. For instance, look at this example: In this code, `new.target` must be `Person` in order to work correctly. When ``` new AnotherPerson("Nicholas") ``` is called, the subsequent call to ``` Person.call(this, name) ``` will throw an error because `new.target` is `undefined` inside of the `Person` constructor (it was called without `new` ). By adding `new.target` , ECMAScript 6 helped to clarify some ambiguity around functions calls. Following on this theme, ECMAScript 6 also addresses another previously ambiguous part of the language: declaring functions inside of blocks. 
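The `new.target` version of the guard, which `call()` cannot fool:

```javascript
function Person(name) {
    if (typeof new.target !== "undefined") {
        this.name = name;   // [[Construct]] was used
    } else {
        throw new Error("You must use new with Person.");
    }
}

var person = new Person("Nicholas");
console.log(person.name);       // "Nicholas"

var threw = false;
try {
    Person.call(person, "Michael"); // [[Call]]: new.target is undefined
} catch (e) {
    threw = true;
}
console.log(threw);             // true
```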
### Block-Level Functions In ECMAScript 3 and earlier, a function declaration occurring inside of a block (a block-level function) was technically a syntax error, but all browsers still supported it. Unfortunately, each browser that allowed the syntax behaved in a slightly different way, so it is considered a best practice to avoid function declarations inside of blocks (the best alternative is to use a function expression). In an attempt to rein in this incompatible behavior, ECMAScript 5 strict mode introduced an error whenever a function declaration was used inside of a block in this way: In ECMAScript 5, this code throws a syntax error. In ECMAScript 6, the `doSomething()` function is considered a block-level declaration and can be accessed and called within the same block in which it was defined. For example: Block level functions are hoisted to the top of the block in which they are defined, so `typeof doSomething` returns `"function"` even though it appears before the function declaration in the code. Once the `if` block is finished executing, `doSomething()` no longer exists. # Deciding When to Use Block-Level Functions Block level functions are similar to `let` function expressions in that the function definition is removed once execution flows out of the block in which it’s defined. The key difference is that block level functions are hoisted to the top of the containing block. Function expressions that use `let` are not hoisted, as this example illustrates: Here, code execution stops when `typeof doSomething` is executed, because the `let` statement hasn’t been executed yet, leaving `doSomething()` in the TDZ. Knowing this difference, you can choose whether to use block level functions or `let` expressions based on whether or not you want the hoisting behavior. # Block-Level Functions in Nonstrict Mode ECMAScript 6 also allows block-level functions in nonstrict mode, but the behavior is slightly different. 
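A sketch of the strict-mode block-level behavior: the declaration is hoisted within its block and gone after it.

```javascript
"use strict";

let innerType, outerType;

if (true) {
    // hoisted to the top of this block, so typeof already sees it
    innerType = typeof doSomething;     // "function"

    function doSomething() {}
    doSomething();
}

// the declaration does not survive outside the block
outerType = typeof doSomething;         // "undefined"

console.log(innerType, outerType);
```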
Instead of hoisting these declarations to the top of the block, they are hoisted all the way to the containing function or global environment. For example: In this example, `doSomething()` is hoisted into the global scope so that it still exists outside of the `if` block. ECMAScript 6 standardized this behavior to remove the incompatible browser behaviors that previously existed, so all ECMAScript 6 runtimes should behave in the same way. Allowing block-level functions improves your ability to declare functions in JavaScript, but ECMAScript 6 also introduced a completely new way to declare functions. ### Arrow Functions One of the most interesting new parts of ECMAScript 6 is the arrow function. Arrow functions are, as the name suggests, functions defined with a new syntax that uses an “arrow” ( `=>` ). But arrow functions behave differently than traditional JavaScript functions in a number of important ways: * No `this` , `super` , `arguments` , and `new.target` bindings - The value of `this` , `super` , `arguments` , and `new.target` inside of the function is defined by the closest containing nonarrow function. ( `super` is covered in Chapter 4.) * Cannot be called with `new` - Arrow functions do not have a `[[Construct]]` method and therefore cannot be used as constructors. Arrow functions throw an error when used with `new` . * No prototype - since you can’t use `new` on an arrow function, there’s no need for a prototype. The `prototype` property of an arrow function doesn’t exist. * Can’t change `this` - The value of `this` inside of the function can’t be changed. It remains the same throughout the entire lifecycle of the function. * No `arguments` object - Since arrow functions have no `arguments` binding, you must rely on named and rest parameters to access function arguments. 
* No duplicate named parameters - arrow functions cannot have duplicate named parameters in strict or nonstrict mode, as opposed to nonarrow functions that cannot have duplicate named parameters only in strict mode. There are a few reasons for these differences. First and foremost, `this` binding is a common source of error in JavaScript. It’s very easy to lose track of the `this` value inside a function, which can result in unintended program behavior, and arrow functions eliminate this confusion. Second, by limiting arrow functions to simply executing code with a single `this` value, JavaScript engines can more easily optimize these operations, unlike regular functions, which might be used as a constructor or otherwise modified. The rest of the differences are also focused on reducing errors and ambiguities inside of arrow functions. By doing so, JavaScript engines are better able to optimize arrow function execution. # Arrow Function Syntax The syntax for arrow functions comes in many flavors depending upon what you’re trying to accomplish. All variations begin with function arguments, followed by the arrow, followed by the body of the function. Both the arguments and the body can take different forms depending on usage. For example, the following arrow function takes a single argument and simply returns it: When there is only one argument for an arrow function, that one argument can be used directly without any further syntax. The arrow comes next and the expression to the right of the arrow is evaluated and returned. Even though there is no explicit `return` statement, this arrow function will return the first argument that is passed in. If you are passing in more than one argument, then you must include parentheses around those arguments, like this: The `sum()` function simply adds two arguments together and returns the result. 
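The single- and multi-argument arrow forms described above:

```javascript
// One argument: parentheses optional, expression body returns implicitly
var reflect = value => value;

// Multiple arguments require parentheses
var sum = (num1, num2) => num1 + num2;

console.log(reflect(42));   // 42
console.log(sum(1, 2));     // 3
```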
The only difference between this arrow function and the `reflect()` function is that the arguments are enclosed in parentheses with a comma separating them (like traditional functions). If there are no arguments to the function, then you must include an empty set of parentheses in the declaration, as follows: When you want to provide a more traditional function body, perhaps consisting of more than one expression, then you need to wrap the function body in braces and explicitly define a return value, as in this version of `sum()` : You can more or less treat the inside of the curly braces the same as you would in a traditional function, with the exception that `arguments` is not available. If you want to create a function that does nothing, then you need to include curly braces, like this: Curly braces are used to denote the function’s body, which works just fine in the cases you’ve seen so far. But an arrow function that wants to return an object literal outside of a function body must wrap the literal in parentheses. For example: Wrapping the object literal in parentheses signals that the braces are an object literal instead of the function body. # Creating Immediately-Invoked Function Expressions One popular use of functions in JavaScript is creating immediately-invoked function expressions (IIFEs). IIFEs allow you to define an anonymous function and call it immediately without saving a reference. This pattern comes in handy when you want to create a scope that is shielded from the rest of a program. For example: In this code, the IIFE is used to create an object with a `getName()` method. The method uses the `name` argument as the return value, effectively making `name` a private member of the returned object. You can accomplish the same thing using arrow functions, so long as you wrap the arrow function in parentheses: Note that the parentheses are only around the arrow function definition, and not around `("Nicholas")` . 
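A sketch of the two IIFE forms, assuming the `getName()` method and `"Nicholas"` argument from the surrounding discussion:

```javascript
// Traditional IIFE: the parentheses may wrap the call as well
const person = (function(name) {
    return {
        getName() {
            return name;
        }
    };
}("Nicholas"));

// Arrow-function IIFE: parentheses wrap only the function definition,
// not the ("Nicholas") call
const person2 = ((name) => {
    return {
        getName() {
            return name;
        }
    };
})("Nicholas");

console.log(person.getName());  // "Nicholas"
console.log(person2.getName()); // "Nicholas"
```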
This is different from a formal function, where the parentheses can be placed outside of the passed-in parameters as well as just around the function definition. # No this Binding One of the most common areas of error in JavaScript is the binding of `this` inside of functions. Since the value of `this` can change inside a single function depending on the context in which the function is called, it’s possible to mistakenly affect one object when you meant to affect another. Consider the following example: In this code, the object `PageHandler` is designed to handle interactions on the page. The `init()` method is called to set up the interactions, and that method in turn assigns an event handler to call `this.doSomething()` . However, this code doesn’t work exactly as intended. The call to `this.doSomething()` is broken because `this` is a reference to the object that was the target of the event (in this case `document` ), instead of being bound to `PageHandler` . If you tried to run this code, you’d get an error when the event handler fires because `this.doSomething()` doesn’t exist on the target `document` object. You could fix this by binding the value of `this` to `PageHandler` explicitly using the `bind()` method on the function instead, like this: Now the code works as expected, but it may look a little bit strange. By calling `bind(this)` , you’re actually creating a new function whose `this` is bound to the current `this` , which is `PageHandler` . To avoid creating an extra function, a better way to fix this code is to use an arrow function. Arrow functions have no `this` binding, which means the value of `this` inside an arrow function can only be determined by looking up the scope chain. If the arrow function is contained within a nonarrow function, `this` will be the same as the containing function; otherwise, `this` is equivalent to the value of `this` in the global scope. 
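A sketch of both fixes follows; because the DOM isn’t available here, a hypothetical `addListener()` helper stands in for `document.addEventListener()`, and the `id` value is illustrative:

```javascript
// Stand-in for DOM event dispatch: invokes the handler with `this`
// set to the event target, as addEventListener would
function addListener(target, handler) {
    handler.call(target, { type: "click" });
}

const PageHandler = {
    id: "123456",
    events: [],
    init() {
        // Fix 1: bind(this) creates an extra function whose `this`
        // is permanently PageHandler, overriding the dispatch target
        addListener({}, function(event) {
            this.doSomething(event.type);
        }.bind(this));

        // Fix 2: an arrow function has no `this` binding of its own,
        // so `this` is looked up from init() - also PageHandler
        addListener({}, event => this.doSomething(event.type));
    },
    doSomething(type) {
        this.events.push("Handling " + type + " for " + this.id);
    }
};

PageHandler.init();
console.log(PageHandler.events);
```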
Here’s one way you could write this code using an arrow function: The event handler in this example is an arrow function that calls `this.doSomething()` . The value of `this` is the same as it is within `init()` , so this version of the code works similarly to the one using `bind(this)` . Even though the `doSomething()` method doesn’t return a value, it’s still the only statement executed in the function body, and so there is no need to include braces. Arrow functions are designed to be “throwaway” functions, and so cannot be used to define new types; this is evident from the missing `prototype` property, which regular functions have. If you try to use the `new` operator with an arrow function, you’ll get an error, as in this example: In this code, the call to `new MyType()` fails because `MyType` is an arrow function and therefore has no `[[Construct]]` behavior. Knowing that arrow functions cannot be used with `new` allows JavaScript engines to further optimize their behavior. Also, since the `this` value is determined by the containing function in which the arrow function is defined, you cannot change the value of `this` using `call()` , `apply()` , or `bind()` . # Arrow Functions and Arrays The concise syntax for arrow functions makes them ideal for use with array processing, too. For example, if you want to sort an array using a custom comparator, you’d typically write something like this: That’s a lot of syntax for a very simple procedure. Compare that to the more terse arrow function version: The array methods that accept callback functions such as `sort()` , `map()` , and `reduce()` can all benefit from simpler arrow function syntax, which changes seemingly complex processes into simpler code. # No arguments Binding Even though arrow functions don’t have their own `arguments` object, it’s possible for them to access the `arguments` object from a containing function. 
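A sketch of the comparator in both styles (the array values are illustrative):

```javascript
// ECMAScript 5: a full function expression for a simple comparator
var result1 = [4, 1, 3].sort(function(a, b) {
    return a - b;
});

// ECMAScript 6: the same comparator as an arrow function
var result2 = [4, 1, 3].sort((a, b) => a - b);

console.log(result1); // [1, 3, 4]
console.log(result2); // [1, 3, 4]
```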
That `arguments` object is then available no matter where the arrow function is executed later on. For example: Inside the outer function, the `arguments[0]` element is referenced by the created arrow function. That reference contains the first argument passed to the outer function. When the arrow function is later executed, it returns `5` , which was the first argument passed to the outer function. Even though the arrow function is no longer in the scope of the function that created it, `arguments` remains accessible due to scope chain resolution of the `arguments` identifier.

# Identifying Arrow Functions

Despite the different syntax, arrow functions are still functions, and are identified as such. Consider the following code: The `console.log()` output reveals that both `typeof` and `instanceof` behave the same with arrow functions as they do with other functions. Also like other functions, you can still use `call()` , `apply()` , and `bind()` on arrow functions, although the `this` -binding of the function will not be affected. Here are some examples: The `sum()` function is called using `call()` and `apply()` to pass arguments, as you’d do with any function. The `bind()` method is used to create `boundSum()` , which has its two arguments bound to `1` and `2` so that they don’t need to be passed directly. Arrow functions are appropriate to use anywhere you’re currently using an anonymous function expression, such as with callbacks. The next section covers another major ECMAScript 6 development, but this one is all internal, and has no new syntax.

### Tail Call Optimization

Perhaps the most interesting change to functions in ECMAScript 6 is an engine optimization, which changes the tail call system. A tail call is when a function is called as the last statement in another function, like this: Tail calls as implemented in ECMAScript 5 engines are handled just like any other function call: a new stack frame is created and pushed onto the call stack to represent the function call.
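In its simplest form, a tail call might be sketched like this (the names match the discussion that follows; the return value is illustrative):

```javascript
function doSomethingElse() {
    return 42; // illustrative return value
}

function doSomething() {
    // Tail call: doSomethingElse() is the last statement, and its
    // result is returned directly
    return doSomethingElse();
}

console.log(doSomething()); // 42
```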
That means every previous stack frame is kept in memory, which is problematic when the call stack gets too large. # What’s Different? ECMAScript 6 seeks to reduce the size of the call stack for certain tail calls in strict mode (nonstrict mode tail calls are left untouched). With this optimization, instead of creating a new stack frame for a tail call, the current stack frame is cleared and reused so long as the following conditions are met: * The tail call does not require access to variables in the current stack frame (meaning the function is not a closure) * The function making the tail call has no further work to do after the tail call returns * The result of the tail call is returned as the function value As an example, this code can easily be optimized because it fits all three criteria: This function makes a tail call to `doSomethingElse()` , returns the result immediately, and doesn’t access any variables in the local scope. One small change, not returning the result, results in an unoptimized function: Similarly, if you have a function that performs an operation after returning from the tail call, then the function can’t be optimized: This example adds the result of `doSomethingElse()` with 1 before returning the value, and that’s enough to turn off optimization. Another common way to inadvertently turn off optimization is to store the result of a function call in a variable and then return the result, such as: This example cannot be optimized because the value of `doSomethingElse()` isn’t immediately returned. Perhaps the hardest situation to avoid is in using closures. Because a closure has access to variables in the containing scope, tail call optimization may be turned off. For example: The closure `func()` has access to the local variable `num` in this example. Even though the call to `func()` immediately returns the result, optimization can’t occur due to referencing the variable `num` . 
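These conditions can be sketched as follows (the function names and return value are illustrative):

```javascript
"use strict";

function doSomethingElse() {
    return 42;
}

// Optimizable: the result is returned immediately, no closure is
// involved, and no work remains after the call
function optimized() {
    return doSomethingElse();
}

// Not optimizable: the result is not returned
function noReturn() {
    doSomethingElse();
}

// Not optimizable: addition happens after the call returns
function addsAfter() {
    return 1 + doSomethingElse();
}

// Not optimizable: the result is stored in a variable first
function storesResult() {
    var result = doSomethingElse();
    return result;
}

// Not optimizable: the closure func references the local variable num
function usesClosure() {
    var num = 1,
        func = () => num;
    return func();
}

console.log(optimized(), addsAfter(), storesResult(), usesClosure());
```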
# How to Harness Tail Call Optimization In practice, tail call optimization happens behind-the-scenes, so you don’t need to think about it unless you’re trying to optimize a function. The primary use case for tail call optimization is in recursive functions, as that is where the optimization has the greatest effect. Consider this function, which computes factorials: This version of the function cannot be optimized, because multiplication must happen after the recursive call to `factorial()` . If `n` is a very large number, the call stack size will grow and could potentially cause a stack overflow. In order to optimize the function, you need to ensure that the multiplication doesn’t happen after the last function call. To do this, you can use a default parameter to move the multiplication operation outside of the `return` statement. The resulting function carries along the temporary result into the next iteration, creating a function that behaves the same but can be optimized by an ECMAScript 6 engine. Here’s the new code: In this rewritten version of `factorial()` , a second argument `p` is added as a parameter with a default value of 1. The `p` parameter holds the previous multiplication result so that the next result can be computed without another function call. When `n` is greater than 1, the multiplication is done first and then passed in as the second argument to `factorial()` . This allows the ECMAScript 6 engine to optimize the recursive call. Tail call optimization is something you should think about whenever you’re writing a recursive function, as it can provide a significant performance improvement, especially when applied in a computationally-expensive function. Functions haven’t undergone a huge change in ECMAScript 6, but rather, a series of incremental changes that make them easier to work with. Default function parameters allow you to easily specify what value to use when a particular argument isn’t passed. 
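A sketch of the optimizable rewrite described above:

```javascript
"use strict";

// Tail-call-optimizable factorial: the default parameter p carries
// the running product, so no multiplication remains to be done after
// the recursive call returns
function factorial(n, p = 1) {
    if (n <= 1) {
        return 1 * p;
    } else {
        let result = n * p;

        // The recursive call is now in tail position
        return factorial(n - 1, result);
    }
}

console.log(factorial(5)); // 120
```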
Prior to ECMAScript 6, this would require some extra code inside the function, to both check for the presence of arguments and assign a different value. Rest parameters allow you to specify an array into which all remaining parameters should be placed. Using a real array and letting you indicate which parameters to include makes rest parameters a much more flexible solution than `arguments` . The spread operator is a companion to rest parameters, allowing you to deconstruct an array into separate parameters when calling a function. Prior to ECMAScript 6, there were only two ways to pass individual parameters contained in an array: by manually specifying each parameter or using `apply()` . With the spread operator, you can easily pass an array to any function without worrying about the `this` binding of the function. The addition of the `name` property should help you more easily identify functions for debugging and evaluation purposes. Additionally, ECMAScript 6 formally defines the behavior of block-level functions so they are no longer a syntax error in strict mode. In ECMAScript 6, the behavior of a function is defined by `[[Call]]` , normal function execution, and `[[Construct]]` , when a function is called with `new` . The `new.target` metaproperty also allows you to determine if a function was called using `new` or not. The biggest change to functions in ECMAScript 6 was the addition of arrow functions. Arrow functions are designed to be used in place of anonymous function expressions. Arrow functions have a more concise syntax, lexical `this` binding, and no `arguments` object. Additionally, arrow functions can’t change their `this` binding, and so can’t be used as constructors. Tail call optimization allows some function calls to be optimized in order to keep a smaller call stack, use less memory, and prevent stack overflow errors. 
This optimization is applied by the engine automatically when it is safe to do so; however, you may decide to rewrite recursive functions in order to take advantage of this optimization.

## Expanded Object Functionality

ECMAScript 6 focuses heavily on improving the utility of objects, which makes sense because nearly every value in JavaScript is some type of object. Additionally, the number of objects used in an average JavaScript program continues to increase as the complexity of JavaScript applications increases, meaning that programs are creating more objects all the time. With more objects comes the necessity to use them more effectively. ECMAScript 6 improves objects in a number of ways, from simple syntax extensions to options for manipulating and interacting with them.

### Object Categories

JavaScript uses a mix of terminology to describe objects found in the standard as opposed to those added by execution environments such as the browser or Node.js, and the ECMAScript 6 specification has clear definitions for each category of object. It’s important to understand this terminology to have a good understanding of the language as a whole. The object categories are:

* Ordinary objects - Have all the default internal behaviors for objects in JavaScript.
* Exotic objects - Have internal behavior that differs from the default in some way.
* Standard objects - Are those defined by ECMAScript 6, such as `Array` , `Date` , and so on. Standard objects may be ordinary or exotic.
* Built-in objects - Are present in a JavaScript execution environment when a script begins to execute. All standard objects are built-in objects.

I will use these terms throughout the book to explain the various objects defined by ECMAScript 6.

### Object Literal Syntax Extensions

The object literal is one of the most popular patterns in JavaScript. JSON is built upon its syntax, and it’s in nearly every JavaScript file on the Internet.
The object literal is so popular because it’s a succinct syntax for creating objects that otherwise would take several lines of code. Luckily for developers, ECMAScript 6 makes object literals more powerful and even more succinct by extending the syntax in several ways. # Property Initializer Shorthand In ECMAScript 5 and earlier, object literals were simply collections of name-value pairs. That meant there could be some duplication when property values are initialized. For example: The `createPerson()` function creates an object whose property names are the same as the function parameter names. The result appears to be duplication of `name` and `age` even though one is the name of an object property while the other provides the value for that property. The key `name` in the returned object is assigned the value contained in the variable `name` , and the key `age` in the returned object is assigned the value contained in the variable `age` . In ECMAScript 6, you can eliminate the duplication that exists around property names and local variables by using the property initializer shorthand. When an object property name is the same as the local variable name, you can simply include the name without a colon and value. For example, `createPerson()` can be rewritten for ECMAScript 6 as follows: When a property in an object literal only has a name, the JavaScript engine looks into the surrounding scope for a variable of the same name. If it finds one, that variable’s value is assigned to the same name on the object literal. In this example, the object literal property `name` is assigned the value of the local variable `name` . This extension makes object literal initialization even more succinct and helps to eliminate naming errors. Assigning a property with the same name as a local variable is a very common pattern in JavaScript, making this extension a welcome addition. # Concise Methods ECMAScript 6 also improves the syntax for assigning methods to object literals. 
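A sketch of `createPerson()` in both styles (the argument values are illustrative):

```javascript
// ECMAScript 5 version: name and age appear to be duplicated
function createPersonES5(name, age) {
    return {
        name: name,
        age: age
    };
}

// ECMAScript 6 shorthand: the in-scope variable of the same name
// supplies the property value
function createPerson(name, age) {
    return { name, age };
}

console.log(createPerson("Nicholas", 29)); // { name: "Nicholas", age: 29 }
```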
In ECMAScript 5 and earlier, you must specify a name and then the full function definition to add a method to an object, as follows: In ECMAScript 6, the syntax is made more concise by eliminating the colon and the `function` keyword. That means you can rewrite the previous example like this: This shorthand syntax, also called concise method syntax, creates a method on the `person` object just as the previous example did. The `sayName()` property is assigned an anonymous function and has all the same characteristics as the ECMAScript 5 `sayName()` function. The one difference is that concise methods may use `super` (discussed later in the “Easy Prototype Access with Super References” section), while the nonconcise methods may not.

# Computed Property Names

ECMAScript 5 and earlier could compute property names on object instances when those properties were set with square brackets instead of dot notation. The square brackets allow you to specify property names using variables and string literals that may contain characters that would cause a syntax error if used in an identifier. Here’s an example: Since `lastName` is assigned a value of `"last name"` , both property names in this example use a space, making it impossible to reference them using dot notation. However, bracket notation allows any string value to be used as a property name, so assigning `"first name"` to `"Nicholas"` and `"last name"` to `"Zakas"` works. Additionally, you can use string literals directly as property names in object literals, like this: This pattern works for property names that are known ahead of time and can be represented with a string literal. If, however, the property name `"first name"` were contained in a variable (as in the previous example) or had to be calculated, then there would be no way to define that property using an object literal in ECMAScript 5.
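The concise method syntax described above can be sketched as follows (this sketch returns the name rather than logging it, so the result is easy to check):

```javascript
// ECMAScript 5 method definition: name, colon, full function
var personES5 = {
    name: "Nicholas",
    sayName: function() {
        return this.name;
    }
};

// ECMAScript 6 concise method: no colon, no function keyword
const person = {
    name: "Nicholas",
    sayName() {
        return this.name;
    }
};

console.log(person.sayName()); // "Nicholas"
```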
In ECMAScript 6, computed property names are part of the object literal syntax, and they use the same square bracket notation that has been used to reference computed property names in object instances. For example: The square brackets inside the object literal indicate that the property name is computed, so its contents are evaluated as a string. That means you can also include expressions such as: These properties evaluate to `"first name"` and `"last name"` , and those strings can be used to reference the properties later. Anything you would put inside square brackets while using bracket notation on object instances will also work for computed property names inside object literals.

### New Methods

One of the design goals of ECMAScript beginning with ECMAScript 5 was to avoid creating new global functions or methods on `Object.prototype` , and instead try to find objects on which new methods should be available. As a result, the `Object` global has received an increasing number of methods when no other objects are more appropriate. ECMAScript 6 introduces a couple of new methods on the `Object` global that are designed to make certain tasks easier.

# The Object.is() Method

When you want to compare two values in JavaScript, you’re probably used to using either the equals operator ( `==` ) or the identically equals operator ( `===` ). Many developers prefer the latter, to avoid type coercion during comparison. But even the identically equals operator isn’t entirely accurate. For example, the values +0 and -0 are considered equal by `===` even though they are represented differently in the JavaScript engine. Also `NaN === NaN` returns `false` , which necessitates using `isNaN()` to detect `NaN` properly. ECMAScript 6 introduces the `Object.is()` method to make up for the remaining quirks of the identically equals operator. This method accepts two arguments and returns `true` if the values are equivalent.
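A sketch of computed property names in an object literal, assuming space-containing names such as `"first name"` and `"last name"`:

```javascript
const lastName = "last name";
const suffix = " name";

const person = {
    // Computed from a variable
    [lastName]: "Zakas",
    // Any expression works inside the brackets
    ["first" + suffix]: "Nicholas"
};

// Dot notation can't reference these names, but brackets can
console.log(person["first name"]); // "Nicholas"
console.log(person["last name"]);  // "Zakas"
```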
Two values are considered equivalent when they are of the same type and have the same value. Here are some examples: In many cases, `Object.is()` works the same as the `===` operator. The only differences are that +0 and -0 are considered not equivalent and `NaN` is considered equivalent to `NaN` . But there’s no need to stop using equality operators altogether. Choose whether to use `Object.is()` instead of `==` or `===` based on how those special cases affect your code. # The Object.assign() Method Mixins are among the most popular patterns for object composition in JavaScript. In a mixin, one object receives properties and methods from another object. Many JavaScript libraries have a mixin method similar to this: The `mixin()` function iterates over the own properties of `supplier` and copies them onto `receiver` (a shallow copy, where object references are shared when property values are objects). This allows the `receiver` to gain new properties without inheritance, as in this code: Here, `myObject` receives behavior from the ``` EventTarget.prototype ``` object. This gives `myObject` the ability to publish events and subscribe to them using the `emit()` and `on()` methods, respectively. This pattern became popular enough that ECMAScript 6 added the `Object.assign()` method, which behaves the same way, accepting a receiver and any number of suppliers, and then returning the receiver. The name change from `mixin()` to `assign()` reflects the actual operation that occurs. Since the `mixin()` function uses the assignment operator ( `=` ), it cannot copy accessor properties to the receiver as accessor properties. The name `Object.assign()` was chosen to reflect this distinction. You can use `Object.assign()` anywhere the `mixin()` function would have been used. Here’s an example: The `Object.assign()` method accepts any number of suppliers, and the receiver receives the properties in the order in which the suppliers are specified. 
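A sketch of both methods (the property names and values are illustrative):

```javascript
// Object.is() matches === except for the two special cases
console.log(+0 === -0);            // true
console.log(Object.is(+0, -0));    // false
console.log(NaN === NaN);          // false
console.log(Object.is(NaN, NaN));  // true
console.log(Object.is(5, 5));      // true

// Object.assign() copies own properties from each supplier in order,
// so the second supplier's type overwrites the first's
const receiver = {};
Object.assign(receiver,
    { type: "js", name: "file.js" },
    { type: "css" });

console.log(receiver.type); // "css"
console.log(receiver.name); // "file.js"
```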
That means the second supplier might overwrite a value from the first supplier on the receiver, which is what happens in this snippet: The value of `receiver.type` is `"css"` because the second supplier overwrote the value of the first. The `Object.assign()` method isn’t a big addition to ECMAScript 6, but it does formalize a common function found in many JavaScript libraries.

### Duplicate Object Literal Properties

ECMAScript 5 strict mode introduced a check for duplicate object literal properties that would throw an error if a duplicate was found. For example, this code was problematic: When running in ECMAScript 5 strict mode, the second `name` property causes a syntax error. But in ECMAScript 6, the duplicate property check was removed. Both strict and nonstrict mode code no longer check for duplicate properties. Instead, the last property of the given name becomes the property’s actual value, as shown here: In this example, the value of `person.name` is `"Greg"` because that’s the last value assigned to the property.

### Own Property Enumeration Order

ECMAScript 5 didn’t define the enumeration order of object properties, as it left this up to the JavaScript engine vendors. However, ECMAScript 6 strictly defines the order in which own properties must be returned when they are enumerated. This affects how properties are returned using `Object.getOwnPropertyNames()` and `Reflect.ownKeys` (covered in Chapter 12). It also affects the order in which properties are processed by `Object.assign()` . The basic order for own property enumeration is:

* All numeric keys in ascending order
* All string keys in the order in which they were added to the object
* All symbol keys (covered in Chapter 6) in the order in which they were added to the object

Here’s an example: The `Object.getOwnPropertyNames()` method returns the properties in `obj` in the order `0` , `1` , `2` , `a` , `c` , `b` , `d` . Note that the numeric keys are grouped together and sorted, even though they appear out of order in the object literal.
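The enumeration-order example under discussion might look like this:

```javascript
const obj = {
    a: 1,
    0: 1,
    c: 1,
    2: 1,
    b: 1,
    1: 1
};

obj.d = 1;

// Numeric keys come first in ascending order, then string keys in
// the order they were added (d was added after the literal)
console.log(Object.getOwnPropertyNames(obj).join("")); // "012acbd"
```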
The string keys come after the numeric keys and appear in the order that they were added to `obj` . The keys in the object literal itself come first, followed by any dynamic keys that were added later (in this case, `d` ). While enumeration order is a subtle change to how JavaScript works, it’s not uncommon to find programs that rely on a specific enumeration order to work correctly. ECMAScript 6, by defining the enumeration order, ensures that JavaScript code relying on enumeration will work correctly regardless of where it is executed.

### More Powerful Prototypes

Prototypes are the foundation of inheritance in JavaScript, and ECMAScript 6 continues to make prototypes more powerful. Early versions of JavaScript severely limited what could be done with prototypes. However, as the language matured and developers became more familiar with how prototypes work, it became clear that developers wanted more control over prototypes and easier ways to work with them. As a result, ECMAScript 6 introduced some improvements to prototypes.

# Changing an Object’s Prototype

Normally, the prototype of an object is specified when the object is created, via either a constructor or the `Object.create()` method. The idea that an object’s prototype remains unchanged after instantiation was one of the biggest assumptions in JavaScript programming through ECMAScript 5. ECMAScript 5 did add the `Object.getPrototypeOf()` method for retrieving the prototype of any given object, but it still lacked a standard way to change an object’s prototype after instantiation. ECMAScript 6 changes that assumption by adding the `Object.setPrototypeOf()` method, which allows you to change the prototype of any given object. The `Object.setPrototypeOf()` method accepts two arguments: the object whose prototype should change and the object that should become the first argument’s prototype. For example: This code defines two base objects: `person` and `dog` . Both objects have a `getGreeting()` method that returns a string.
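A sketch of changing a prototype with `Object.setPrototypeOf()` (this is an assumption of how the `person`/`dog` example is structured):

```javascript
const person = {
    getGreeting() {
        return "Hello";
    }
};

const dog = {
    getGreeting() {
        return "Woof";
    }
};

// prototype is person
const friend = Object.create(person);
console.log(friend.getGreeting());                     // "Hello"
console.log(Object.getPrototypeOf(friend) === person); // true

// set prototype to dog
Object.setPrototypeOf(friend, dog);
console.log(friend.getGreeting());                  // "Woof"
console.log(Object.getPrototypeOf(friend) === dog); // true
```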
The object `friend` first inherits from the `person` object, meaning that `getGreeting()` outputs `"Hello"` . When the prototype becomes the `dog` object, `friend.getGreeting()` outputs `"Woof"` because the original relationship to `person` is broken. The actual value of an object’s prototype is stored in an internal-only property called `[[Prototype]]` . The `Object.getPrototypeOf()` method returns the value stored in `[[Prototype]]` and `Object.setPrototypeOf()` changes the value stored in `[[Prototype]]` . However, these aren’t the only ways to work with the value of `[[Prototype]]` .

# Easy Prototype Access with Super References

As previously mentioned, prototypes are very important for JavaScript, and a lot of work went into making them easier to use in ECMAScript 6. Another improvement is the introduction of `super` references, which make accessing functionality on an object’s prototype easier. For example, to override a method on an object instance such that it also calls the prototype method of the same name, you’d do the following: In this example, `getGreeting()` on `friend` calls the prototype method of the same name. The `Object.getPrototypeOf()` method ensures the correct prototype is called, and then an additional string is appended to the output. The additional `.call(this)` ensures that the `this` value inside the prototype method is set correctly. Remembering to use `Object.getPrototypeOf()` and `.call(this)` to call a method on the prototype is a bit involved, so ECMAScript 6 introduced `super` . At its simplest, `super` is a pointer to the current object’s prototype, effectively the `Object.getPrototypeOf(this)` value. Knowing that, you can simplify the `getGreeting()` method as follows: The call to `super.getGreeting()` is the same as `Object.getPrototypeOf(this).getGreeting.call(this)` in this context. Similarly, you can call any method on an object’s prototype by using a `super` reference, so long as it’s inside a concise method.
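A sketch of `super` inside a concise method, extended with a `relative` object to show that the reference is not dynamic:

```javascript
const person = {
    getGreeting() {
        return "Hello";
    }
};

const friend = {
    getGreeting() {
        // Same as Object.getPrototypeOf(this).getGreeting.call(this)
        return super.getGreeting() + ", hi!";
    }
};
Object.setPrototypeOf(friend, person);

// super still refers to person.getGreeting() even when the method is
// inherited by another object further down the chain
const relative = Object.create(friend);

console.log(friend.getGreeting());   // "Hello, hi!"
console.log(relative.getGreeting()); // "Hello, hi!"
```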
Attempting to use `super` outside of concise methods results in a syntax error, as in this example: This example uses a named property with a function, and the call to `super.getGreeting()` results in a syntax error because `super` is invalid in this context. The `super` reference is really powerful when you have multiple levels of inheritance, because in that case, `Object.getPrototypeOf()` no longer works in all circumstances. For example: The call to `Object.getPrototypeOf(this).getGreeting.call(this)` results in an error when `relative.getGreeting()` is called. That’s because `this` is `relative` , and the prototype of `relative` is the `friend` object. When `friend.getGreeting.call()` is called with `relative` as `this` , the process starts over again and continues to call `Object.getPrototypeOf(this).getGreeting.call(this)` recursively until a stack overflow error occurs. That problem is difficult to solve in ECMAScript 5, but with ECMAScript 6 and `super` , it’s easy: Because `super` references are not dynamic, they always refer to the correct object. In this case, `super.getGreeting()` always refers to `person.getGreeting()` , regardless of how many other objects inherit the method.

### A Formal Method Definition

Prior to ECMAScript 6, the concept of a “method” wasn’t formally defined. Methods were just object properties that contained functions instead of data. ECMAScript 6 formally defines a method as a function that has an internal `[[HomeObject]]` property containing the object to which the method belongs. Consider the following: This example defines `person` with a single method called `getGreeting()` . The `[[HomeObject]]` for `getGreeting()` is `person` by virtue of assigning the function directly to an object. The `shareGreeting()` function, on the other hand, has no `[[HomeObject]]` specified because it wasn’t assigned to an object when it was created. In most cases, this difference isn’t important, but it becomes very important when using `super` references. Any reference to `super` uses the `[[HomeObject]]` to determine what to do.
The first step is to call `Object.getPrototypeOf()` on the `[[HomeObject]]` to retrieve a reference to the prototype. Then, the prototype is searched for a function with the same name. Last, the `this` binding is set and the method is called. Here’s an example: Calling `friend.getGreeting()` returns a string, which combines the value from `person.getGreeting()` with `", hi!"` . The `[[HomeObject]]` of `friend.getGreeting()` is `friend` , and the prototype of `friend` is `person` , so `super.getGreeting()` is equivalent to `person.getGreeting.call(this)` . Objects are the center of programming in JavaScript, and ECMAScript 6 made some helpful changes to objects that both make them easier to deal with and more powerful. ECMAScript 6 makes several changes to object literals. Shorthand property definitions make assigning properties with the same names as in-scope variables easier. Computed property names allow you to specify non-literal values as property names, which you’ve already been able to do in other areas of the language. Shorthand methods let you type a lot fewer characters in order to define methods on object literals, by completely omitting the colon and `function` keyword. ECMAScript 6 loosens the strict mode check for duplicate object literal property names as well, meaning you can have two properties with the same name in a single object literal without throwing an error. The `Object.assign()` method makes it easier to change multiple properties on a single object at once. This can be very useful if you use the mixin pattern. The `Object.is()` method performs strict equality on any value, effectively becoming a safer version of `===` when dealing with special JavaScript values. Enumeration order for own properties is now clearly defined in ECMAScript 6. When enumerating properties, numeric keys always come first in ascending order followed by string keys in insertion order and symbol keys in insertion order.
It's now possible to modify an object's prototype after it's already created, thanks to ECMAScript 6's `Object.setPrototypeOf()` method. Finally, you can use the `super` keyword to call methods on an object's prototype. The `this` binding inside a method invoked using `super` is set up to automatically work with the current value of `this`. ## Destructuring for Easier Data Access Object and array literals are two of the most frequently used notations in JavaScript, and thanks to the popular JSON data format, they've become a particularly important part of the language. It's quite common to define objects and arrays, and then systematically pull out relevant pieces of information from those structures. ECMAScript 6 simplifies this task by adding destructuring, which is the process of breaking a data structure down into smaller parts. This chapter shows you how to harness destructuring for both objects and arrays. ### Why is Destructuring Useful? In ECMAScript 5 and earlier, the need to fetch information from objects and arrays could lead to a lot of code that looks the same, just to get certain data into local variables. For example: This code extracts the values of `repeat` and `save` from the `options` object and stores that data in local variables with the same names. While this code looks simple, imagine if you had a large number of variables to assign; you would have to assign them all one by one. And if there was a nested data structure to traverse to find the information instead, you might have to dig through the entire structure just to find one piece of data. That's why ECMAScript 6 adds destructuring for both objects and arrays. When you break a data structure into smaller parts, getting the information you need out of it becomes much easier. Many languages implement destructuring with a minimal amount of syntax to make the process simpler to use. The ECMAScript 6 implementation actually makes use of syntax you're already familiar with: the syntax for object and array literals.
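The ES5-style extraction described above might look like this (the `options` object and its `repeat` and `save` properties come from the discussion; the values are illustrative):

```javascript
let options = {
    repeat: true,
    save: false
};

// extract data from the object, one assignment per variable
let repeat = options.repeat,
    save = options.save;

console.log(repeat); // true
console.log(save);   // false
```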
### Object Destructuring Object destructuring syntax uses an object literal on the left side of an assignment operation. For example: In this code, the value of `node.type` is stored in a variable called `type` and the value of `node.name` is stored in a variable called `name`. This syntax is the same as the object literal property initializer shorthand introduced in Chapter 4. The identifiers `type` and `name` are both declarations of local variables and the properties to read the value from on the `node` object. The object destructuring examples so far have used variable declarations. However, it's also possible to use destructuring in assignments. For instance, you may decide to change the values of variables after they are defined, as follows: In this example, `type` and `name` are initialized with values when declared, and then two variables with the same names are initialized with different values. The next line uses destructuring assignment to change those values by reading from the `node` object. Note that you must put parentheses around a destructuring assignment statement. That's because an opening curly brace is expected to be a block statement, and a block statement cannot appear on the left side of an assignment. The parentheses signal that the next curly brace is not a block statement and should be interpreted as an expression, allowing the assignment to complete. A destructuring assignment expression evaluates to the right side of the expression (after the `=`). That means you can use a destructuring assignment expression anywhere a value is expected. For instance, passing a value to a function: The `outputInfo()` function is called with a destructuring assignment expression. The expression evaluates to `node` because that is the value of the right side of the expression. The assignments to `type` and `name` both behave as normal and `node` is passed into `outputInfo()`.
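A minimal sketch of the destructuring forms discussed above (the `node` object's property values are illustrative):

```javascript
let node = {
    type: "Identifier",
    name: "foo"
};

// declaration form
let { type, name } = node;
console.log(type); // "Identifier"
console.log(name); // "foo"

// assignment form requires wrapping parentheses
type = "Literal";
({ type, name } = node);

// a destructuring assignment expression evaluates to its right side
function outputInfo(value) {
    console.log(value === node); // true
}
outputInfo({ type, name } = node);
```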
When you use a destructuring assignment statement, if you specify a local variable with a property name that doesn't exist on the object, then that local variable is assigned a value of `undefined`. For example: This code defines an additional local variable called `value` and attempts to assign it a value. However, there is no corresponding `value` property on the `node` object, so the variable is assigned the value of `undefined` as expected. You can optionally define a default value to use when a specified property doesn't exist. To do so, insert an equals sign (`=`) after the property name and specify the default value, like this: In this example, the variable `value` is given `true` as a default value. The default value is only used if the property is missing on `node` or has a value of `undefined`. Since there is no `node.value` property, the variable `value` uses the default value. This works similarly to the default parameter values for functions, as discussed in Chapter 3. # Assigning to Different Local Variable Names Up to this point, each example of destructuring assignment has used the object property name as the local variable name; for example, the value of `node.type` was stored in a `type` variable. That works well when you want to use the same name, but what if you don't? ECMAScript 6 has an extended syntax that allows you to assign to a local variable with a different name, and that syntax looks like the object literal nonshorthand property initializer syntax. Here's an example: This code uses destructuring assignment to declare the `localType` and `localName` variables, which contain the values from the `node.type` and `node.name` properties, respectively. The syntax `type: localType` says to read the property named `type` and store its value in the `localType` variable. This syntax is effectively the opposite of traditional object literal syntax, where the name is on the left of the colon and the value is on the right.
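Default values and renamed local variables can be sketched like this (the property values are illustrative):

```javascript
let node = {
    type: "Identifier",
    name: "foo"
};

// value has no matching property, so the default kicks in
let { type, name, value = true } = node;
console.log(value); // true

// read node.type into localType and node.name into localName
let { type: localType, name: localName } = node;
console.log(localType); // "Identifier"
console.log(localName); // "foo"
```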
In this case, the name is on the right of the colon and the location of the value to read is on the left. You can add default values when using a different variable name, as well. The equals sign and default value are still placed after the local variable name. For example: Here, the `localName` variable has a default value of `"bar"` . The variable is assigned its default value because there’s no `node.name` property. So far, you’ve seen how to deal with destructuring of an object whose properties are primitive values. Object destructuring can also be used to retrieve values in nested object structures. # Nested Object Destructuring By using a syntax similar to object literals, you can navigate into a nested object structure to retrieve just the information you want. Here’s an example: The destructuring pattern in this example uses curly braces to indicate that the pattern should descend into the property named `loc` on `node` and look for the `start` property. Remember from the last section that whenever there’s a colon in a destructuring pattern, it means the identifier before the colon is giving a location to inspect, and the right side assigns a value. When there’s a curly brace after the colon, that indicates that the destination is nested another level into the object. You can go one step further and use a different name for the local variable as well: In this version of the code, `node.loc.start` is stored in a new local variable called `localStart` . Destructuring patterns can be nested to an arbitrary level of depth, with all capabilities available at each level. Object destructuring is very powerful and has a lot of options, but array destructuring offers some unique capabilities that allow you to extract information from arrays. ### Array Destructuring Array destructuring syntax is very similar to object destructuring; it just uses array literal syntax instead of object literal syntax. 
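Nested object destructuring as described above might look like this (the `loc`/`start` shape mirrors the discussion; the numbers are illustrative):

```javascript
let node = {
    type: "Expression",
    loc: {
        start: { line: 1, column: 1 },
        end: { line: 1, column: 4 }
    }
};

// descend into loc and extract start
let { loc: { start } } = node;
console.log(start.line); // 1

// same extraction, but into a differently named local variable
let { loc: { start: localStart } } = node;
console.log(localStart.column); // 1
```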
The destructuring operates on positions within an array, rather than the named properties that are available in objects. For example: Here, array destructuring pulls out the values `"red"` and `"green"` from the `colors` array and stores them in the `firstColor` and `secondColor` variables. Those values are chosen because of their position in the array; the actual variable names could be anything. Any items not explicitly mentioned in the destructuring pattern are ignored. Keep in mind that the array itself isn’t changed in any way. You can also omit items in the destructuring pattern and only provide variable names for the items you’re interested in. If, for example, you just want the third value of an array, you don’t need to supply variable names for the first and second items. Here’s how that works: This code uses a destructuring assignment to retrieve the third item in `colors` . The commas preceding `thirdColor` in the pattern are placeholders for the array items that come before it. By using this approach, you can easily pick out values from any number of slots in the middle of an array without needing to provide variable names for them. You can use array destructuring in the context of an assignment, but unlike object destructuring, there is no need to wrap the expression in parentheses. For example: The destructured assignment in this code works in a similar manner to the last array destructuring example. The only difference is that `firstColor` and `secondColor` have already been defined. Most of the time, that’s probably all you’ll need to know about array destructuring assignment, but there’s a little bit more to it that you will probably find useful. Array destructuring assignment has a very unique use case that makes it easier to swap the values of two variables. 
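The array destructuring forms above can be sketched as follows (the `colors` values follow the discussion):

```javascript
let colors = [ "red", "green", "blue" ];

// positional extraction
let [ firstColor, secondColor ] = colors;
console.log(firstColor); // "red"

// commas act as placeholders for skipped positions
let [ , , thirdColor ] = colors;
console.log(thirdColor); // "blue"

// assignment form needs no wrapping parentheses
[ firstColor, secondColor ] = colors;
console.log(secondColor); // "green"
```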
Value swapping is a common operation in sorting algorithms, and the ECMAScript 5 way of swapping variables involves a third, temporary variable, as in this example: The intermediate variable `tmp` is necessary in order to swap the values of `a` and `b` . Using array destructuring assignment, however, there’s no need for that extra variable. Here’s how you can swap variables in ECMAScript 6: The array destructuring assignment in this example looks like a mirror image. The left side of the assignment (before the equals sign) is a destructuring pattern just like those in the other array destructuring examples. The right side is an array literal that is temporarily created for the swap. The destructuring happens on the temporary array, which has the values of `b` and `a` copied into its first and second positions. The effect is that the variables have swapped values. Array destructuring assignment allows you to specify a default value for any position in the array, too. The default value is used when the property at the given position either doesn’t exist or has the value `undefined` . For example: In this code, the `colors` array has only one item, so there is nothing for `secondColor` to match. Since there is a default value, `secondColor` is set to `"green"` instead of `undefined` . # Nested Destructuring You can destructure nested arrays in a manner similar to destructuring nested objects. By inserting another array pattern into the overall pattern, the destructuring will descend into a nested array, like this: Here, the `secondColor` variable refers to the `"green"` value inside the `colors` array. That item is contained within a second array, so the extra square brackets around `secondColor` in the destructuring pattern are necessary. As with objects, you can nest arrays arbitrarily deep. # Rest Items Chapter 3 introduced rest parameters for functions, and array destructuring has a similar concept called rest items. 
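The swap, default value, and nested array cases might be sketched as follows (values are illustrative):

```javascript
// swapping without a temporary variable
let a = 1,
    b = 2;

[ a, b ] = [ b, a ];
console.log(a); // 2
console.log(b); // 1

// default for a missing position
let colors = [ "red" ];
let [ firstColor, secondColor = "green" ] = colors;
console.log(secondColor); // "green"

// nested array pattern descends into the inner array
let [ first, [ second ] ] = [ "red", [ "green", "lightgreen" ], "blue" ];
console.log(second); // "green"
```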
Rest items use the `...` syntax to assign the remaining items in an array to a particular variable. Here’s an example: The first item in `colors` is assigned to `firstColor` , and the rest are assigned into a new `restColors` array. The `restColors` array, therefore, has two items: `"green"` and `"blue"` . Rest items are useful for extracting certain items from an array and keeping the rest available, but there’s another helpful use. A glaring omission from JavaScript arrays is the ability to easily create a clone. In ECMAScript 5, developers frequently used the `concat()` method as an easy way to clone an array. For example: While the `concat()` method is intended to concatenate two arrays together, calling it without an argument returns a clone of the array. In ECMAScript 6, you can use rest items to achieve the same thing through syntax intended to function that way. It works like this: In this example, rest items are used to copy values from the `colors` array into the `clonedColors` array. While it’s a matter of perception as to whether this technique makes the developer’s intent clearer than using the `concat()` method, this is a useful ability to be aware of. ### Mixed Destructuring Object and array destructuring can be used together to create more complex expressions. In doing so, you are able to extract just the pieces of information you want from any mixture of objects and arrays. For example: This code extracts `node.loc.start` and `node.range[0]` into `start` and `startIndex` , respectively. Keep in mind that `loc:` and `range:` in the destructured pattern are just locations that correspond to properties in the `node` object. There is no part of `node` that cannot be extracted using destructuring when you use a mix of object and array destructuring. This approach is particularly useful for pulling values out of JSON configuration structures without navigating the entire structure. 
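Rest items, cloning, and mixed destructuring can be sketched together (the shapes follow the discussion; values are illustrative):

```javascript
let colors = [ "red", "green", "blue" ];

// rest items gather the remaining elements
let [ firstColor, ...restColors ] = colors;
console.log(restColors.length); // 2

// cloning via rest items
let [ ...clonedColors ] = colors;
console.log(clonedColors); // [ "red", "green", "blue" ]

// mixing object and array patterns
let node = {
    loc: {
        start: { line: 1, column: 1 }
    },
    range: [ 0, 3 ]
};

let {
    loc: { start },
    range: [ startIndex ]
} = node;

console.log(start.line); // 1
console.log(startIndex); // 0
```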
### Destructured Parameters Destructuring has one more particularly helpful use case, and that is when passing function arguments. When a JavaScript function takes a large number of optional parameters, one common pattern is to create an `options` object whose properties specify the additional parameters, like this: Many JavaScript libraries contain `setCookie()` functions that look similar to this one. In this function, the `name` and `value` arguments are required, but `secure` , `path` , `domain` , and `expires` are not. And since there is no priority order for the other data, it’s efficient to just have an `options` object with named properties, rather than list extra named parameters. This approach works, but now you can’t tell what input the function expects just by looking at the function definition; you need to read the function body. Destructured parameters offer an alternative that makes it clearer what arguments a function expects. A destructured parameter uses an object or array destructuring pattern in place of a named parameter. To see this in action, look at this rewritten version of the `setCookie()` function from the last example: This function behaves similarly to the previous example, but now, the third argument uses destructuring to pull out the necessary data. The parameters outside the destructured parameter are clearly expected, and at the same time, it’s clear to someone using `setCookie()` what options are available in terms of extra arguments. And of course, if the third argument is required, the values it should contain are crystal clear. The destructured parameters also act like regular parameters in that they are set to `undefined` if they are not passed. # Destructured Parameters are Required One quirk of using destructured parameters is that, by default, an error is thrown when they are not provided in a function call. 
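A sketch of `setCookie()` with a destructured third parameter (the cookie-writing logic is elided; returning the extracted pieces stands in for it so the behavior is observable):

```javascript
function setCookie(name, value, { secure, path, domain, expires }) {
    // real code would build a cookie string here; returning the
    // destructured values keeps this sketch testable
    return { name, value, secure, path, domain, expires };
}

let result = setCookie("type", "js", {
    secure: true,
    expires: 60000
});

console.log(result.secure); // true
console.log(result.path);   // undefined (not supplied)
```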
For instance, this call to the `setCookie()` function in the last example throws an error: The third argument is missing, and so it evaluates to `undefined` as expected. This causes an error because destructured parameters are really just a shorthand for destructured declaration. When the `setCookie()` function is called, the JavaScript engine actually does this: Since destructuring throws an error when the right side expression evaluates to `null` or `undefined` , the same is true when the third argument isn’t passed to the `setCookie()` function. If you want the destructured parameter to be required, then this behavior isn’t all that troubling. But if you want the destructured parameter to be optional, you can work around this behavior by providing a default value for the destructured parameter, like this: This example provides a new object as the default value for the third parameter. Providing a default value for the destructured parameter means that the `secure` , `path` , `domain` , and `expires` will all be `undefined` if the third argument to `setCookie()` isn’t provided, and no error will be thrown. # Default Values for Destructured Parameters You can specify destructured default values for destructured parameters just as you would in destructured assignment. Just add the equals sign after the parameter and specify the default value. For example: Each property in the destructured parameter has a default value in this code, so you can avoid checking to see if a given property has been included in order to use the correct value. Also, the entire destructured parameter has a default value of an empty object, making the parameter optional. This does make the function declaration look a bit more complicated than usual, but that’s a small price to pay for ensuring each argument has a usable value. Destructuring makes working with objects and arrays in JavaScript easier. 
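Making the destructured parameter optional with defaults might look like this (the particular default values are illustrative):

```javascript
function setCookie(name, value,
    {
        secure = false,
        path = "/",
        domain = "example.com",
        expires = new Date(Date.now() + 360000000)
    } = {}
) {
    return { name, value, secure, path, domain, expires };
}

// no third argument: the empty-object default prevents an error
let cookie = setCookie("type", "js");
console.log(cookie.secure); // false
console.log(cookie.path);   // "/"
```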
Using the familiar object literal and array literal syntax, you can pick data structures apart to get at just the information you’re interested in. Object patterns allow you to extract data from objects while array patterns let you extract data from arrays. Both object and array destructuring can specify default values for any property or item that is `undefined` and both throw errors when the right side of an assignment evaluates to `null` or `undefined` . You can also navigate deeply nested data structures with object and array destructuring, descending to any arbitrary depth. Destructuring declarations use `var` , `let` , or `const` to create variables and must always have an initializer. Destructuring assignments are used in place of other assignments and allow you to destructure into object properties and already-existing variables. Destructured parameters use the destructuring syntax to make “options” objects more transparent when used as function parameters. The actual data you’re interested in can be listed out along with other named parameters. Destructured parameters can be array patterns, object patterns, or a mixture, and you can use all of the features of destructuring. ## Symbols and Symbol Properties Symbols are a primitive type introduced in ECMAScript 6, joining the existing primitive types: strings, numbers, booleans, `null` , and `undefined` . Symbols began as a way to create private object members, a feature JavaScript developers wanted for a long time. Before symbols, any property with a string name was easy to access regardless of the obscurity of the name, and the “private names” feature was meant to let developers create non-string property names. That way, normal techniques for detecting these private names wouldn’t work. The private names proposal eventually evolved into ECMAScript 6 symbols, and this chapter will teach you how to use symbols effectively. 
While the implementation details remained the same (that is, they added non-string values for property names), the goal of privacy was dropped. Instead, symbol properties are categorized separately from other object properties. ### Creating Symbols Symbols are unique among JavaScript primitives in that they don't have a literal form, like `true` for booleans or `42` for numbers. You can create a symbol by using the global `Symbol` function, as in this example: Here, the symbol `firstName` is created and used to assign a new property on the `person` object. That symbol must be used each time you want to access that same property. Naming the symbol variable appropriately is a good idea, so you can easily tell what the symbol represents. The `Symbol` function also accepts an optional argument that is the description of the symbol. The description itself cannot be used to access the property, but is used for debugging purposes. For example: A symbol's description is stored internally in the `[[Description]]` property. This property is read whenever the symbol's `toString()` method is called either explicitly or implicitly. The `firstName` symbol's `toString()` method is called implicitly by `console.log()` in this example, so the description gets printed to the log. It is not otherwise possible to access `[[Description]]` directly from code. I recommend always providing a description to make both reading and debugging symbols easier. ### Using Symbols You can use symbols anywhere you'd use a computed property name. You've already seen bracket notation used with symbols in this chapter, but you can use symbols in computed object literal property names as well as with `Object.defineProperty()` and `Object.defineProperties()` calls, such as: This example first uses a computed object literal property to create the `firstName` symbol property. The following line then sets the property to be read-only. Later, a read-only `lastName` symbol property is created using the `Object.defineProperties()` method.
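Symbols used as computed property names and with `Object.defineProperty()`/`Object.defineProperties()` might look like this (the `firstName`/`lastName` symbols follow the discussion; the string values are illustrative):

```javascript
let firstName = Symbol("first name");

// computed object literal property
let person = {
    [firstName]: "Nicola"
};

// make the symbol property read-only
Object.defineProperty(person, firstName, { writable: false });

let lastName = Symbol("last name");

Object.defineProperties(person, {
    [lastName]: {
        value: "Zakas",
        writable: false
    }
});

console.log(person[firstName]); // "Nicola"
console.log(person[lastName]);  // "Zakas"
```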
A computed object literal property is used once again, but this time, it's part of the second argument to the `Object.defineProperties()` call. While symbols can be used in any place that computed property names are allowed, you'll need to have a system for sharing these symbols between different pieces of code in order to use them effectively. ### Sharing Symbols You may find that you want different parts of your code to use the same symbols. For example, suppose you have two different object types in your application that should use the same symbol property to represent a unique identifier. Keeping track of symbols across files or large codebases can be difficult and error-prone. That's why ECMAScript 6 provides a global symbol registry that you can access at any point in time. When you want to create a symbol to be shared, use the `Symbol.for()` method instead of calling the `Symbol()` method. The `Symbol.for()` method accepts a single parameter, which is a string identifier for the symbol you want to create. That parameter is also used as the symbol's description. For example: The `Symbol.for()` method first searches the global symbol registry to see if a symbol with the key `"uid"` exists. If so, the method returns the existing symbol. If no such symbol exists, then a new symbol is created and registered to the global symbol registry using the specified key. The new symbol is then returned. That means subsequent calls to `Symbol.for()` using the same key will return the same symbol, as follows: In this example, `uid` and `uid2` contain the same symbol and so they can be used interchangeably. The first call to `Symbol.for()` creates the symbol, and the second call retrieves the symbol from the global symbol registry. Another unique aspect of shared symbols is that you can retrieve the key associated with a symbol in the global symbol registry by calling the `Symbol.keyFor()` method. For example: Notice that both `uid` and `uid2` return the `"uid"` key.
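Shared symbols and `Symbol.keyFor()` in action (a minimal sketch; the `"12345"` value is illustrative):

```javascript
let uid = Symbol.for("uid");
let object = {};
object[uid] = "12345";

// same key, same symbol from the global registry
let uid2 = Symbol.for("uid");
console.log(uid === uid2); // true
console.log(object[uid2]); // "12345"

console.log(Symbol.keyFor(uid));  // "uid"
console.log(Symbol.keyFor(uid2)); // "uid"

// a symbol created outside the registry has no key
let uid3 = Symbol("uid");
console.log(Symbol.keyFor(uid3)); // undefined
```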
The symbol `uid3` doesn't exist in the global symbol registry, so it has no key associated with it and `Symbol.keyFor()` returns `undefined`. ### Symbol Coercion Type coercion is a significant part of JavaScript, and there's a lot of flexibility in the language's ability to coerce one data type into another. Symbols, however, are quite inflexible when it comes to coercion because other types lack a logical equivalent to a symbol. Specifically, symbols cannot be coerced into strings or numbers, so that they cannot accidentally be used as properties that would otherwise be expected to behave as symbols. The examples in this chapter have used `console.log()` to indicate the output for symbols, and that works because `console.log()` calls `String()` on symbols to create useful output. You can use `String()` directly to get the same result. For instance: The `String()` function calls `uid.toString()` and the symbol's string description is returned. If you try to concatenate the symbol directly with a string, however, an error will be thrown: Concatenating `uid` with an empty string requires that `uid` first be coerced into a string. An error is thrown when the coercion is detected, preventing its use in this manner. Similarly, you cannot coerce a symbol to a number. All mathematical operators cause an error when applied to a symbol. For example: This example attempts to divide the symbol by 1, which causes an error. Errors are thrown regardless of the mathematical operator used (logical operators do not throw an error because all symbols are considered equivalent to `true`, just like any other non-empty value in JavaScript). ### Retrieving Symbol Properties The `Object.keys()` and `Object.getOwnPropertyNames()` methods can retrieve all property names in an object. The former method returns all enumerable property names, and the latter returns all properties regardless of enumerability. Neither method returns symbol properties, however, to preserve their ECMAScript 5 functionality.
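The coercion behavior can be sketched as follows (the `try`/`catch` wrappers are only there to make the thrown errors visible):

```javascript
let uid = Symbol.for("uid");

// explicit conversion works
console.log(String(uid)); // "Symbol(uid)"

// implicit string coercion throws
try {
    let desc = uid + "";
} catch (e) {
    console.log(e instanceof TypeError); // true
}

// numeric coercion throws as well
try {
    let sum = uid / 1;
} catch (e) {
    console.log(e instanceof TypeError); // true
}
```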
Instead, the `Object.getOwnPropertySymbols()` method was added in ECMAScript 6 to allow you to retrieve property symbols from an object. The return value of `Object.getOwnPropertySymbols()` is an array of own property symbols. For example: In this code, `object` has a single symbol property called `uid`. The array returned from `Object.getOwnPropertySymbols()` is an array containing just that symbol. All objects start with zero own symbol properties, but objects can inherit symbol properties from their prototypes. ECMAScript 6 predefines several such properties, implemented using what are called well-known symbols. ### Exposing Internal Operations with Well-Known Symbols A central theme for ECMAScript 5 was exposing and defining some of the "magic" parts of JavaScript, the parts that developers couldn't emulate at the time. ECMAScript 6 carries on that tradition by exposing even more of the previously internal logic of the language, primarily by using symbol prototype properties to define the basic behavior of certain objects. ECMAScript 6 has predefined symbols called well-known symbols that represent common behaviors in JavaScript that were previously considered internal-only operations. Each well-known symbol is represented by a property on the `Symbol` object, such as `Symbol.create`. The well-known symbols are: * `Symbol.hasInstance` - A method used by `instanceof` to determine an object's inheritance. * `Symbol.isConcatSpreadable` - A Boolean value indicating that `Array.prototype.concat()` should flatten the collection's elements if the collection is passed as a parameter to `Array.prototype.concat()`. * `Symbol.iterator` - A method that returns an iterator. (Iterators are covered in Chapter 8.) * `Symbol.match` - A method used by `String.prototype.match()` to compare strings. * `Symbol.replace` - A method used by `String.prototype.replace()` to replace substrings. * `Symbol.search` - A method used by `String.prototype.search()` to locate substrings. * `Symbol.species` - The constructor for making derived objects. (Derived objects are covered in Chapter 9.)
* `Symbol.split` - A method used by `String.prototype.split()` to split up strings. * `Symbol.toPrimitive` - A method that returns a primitive value representation of an object. * `Symbol.toStringTag` - A string used by `Object.prototype.toString()` to create an object description. * `Symbol.unscopables` - An object whose properties are the names of object properties that should not be included in a `with` statement. Some commonly used well-known symbols are discussed in the following sections, while others are discussed throughout the rest of the book to keep them in the correct context. # The Symbol.hasInstance Property Every function has a `Symbol.hasInstance` method that determines whether or not a given object is an instance of that function. The method is defined on `Function.prototype` so that all functions inherit the default behavior for the `instanceof` operator, and the method is nonwritable and nonconfigurable as well as nonenumerable, to ensure it doesn't get overwritten by mistake. The `Symbol.hasInstance` method accepts a single argument: the value to check. It returns `true` if the value passed is an instance of the function. To understand how `Symbol.hasInstance` works, consider the following code: This code is equivalent to: ECMAScript 6 essentially redefined the `instanceof` operator as shorthand syntax for this method call. And now that there's a method call involved, you can actually change how `instanceof` works. For instance, suppose you want to define a function that claims no object as an instance. You can do so by hardcoding the return value of `Symbol.hasInstance` to `false`, such as: You must use `Object.defineProperty()` to overwrite a nonwritable property, so this example uses that method to overwrite the `Symbol.hasInstance` method with a new function. The new function always returns `false`, so even though `obj` is actually an instance of the `MyObject` class, the `instanceof` operator returns `false` after the `Object.defineProperty()` call.
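Hardcoding `Symbol.hasInstance` to `false` might look like this (a minimal sketch):

```javascript
function MyObject() {
    // empty constructor for the sketch
}

// overwrite the nonwritable default with Object.defineProperty()
Object.defineProperty(MyObject, Symbol.hasInstance, {
    value: function(v) {
        return false;
    }
});

let obj = new MyObject();

console.log(obj instanceof MyObject); // false
```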
Of course, you can also inspect the value and decide whether or not a value should be considered an instance based on any arbitrary condition. For instance, maybe numbers with values between 1 and 100 are to be considered instances of a special number type. To achieve that behavior, you might write code like this: This code defines a `Symbol.hasInstance` method that returns `true` if the value is an instance of `Number` and also has a value between 1 and 100. Thus, `SpecialNumber` will claim `two` as an instance even though there is no directly defined relationship between the `SpecialNumber` function and the `two` variable. Note that the left operand to `instanceof` must be an object to trigger the `Symbol.hasInstance` call, as nonobjects cause `instanceof` to simply return `false` all the time. # The Symbol.isConcatSpreadable Symbol JavaScript arrays have a `concat()` method designed to concatenate two arrays together. Here's how that method is used: This code concatenates a new array to the end of `colors1` and creates `colors2`, a single array with all items from both arrays. However, the `concat()` method can also accept nonarray arguments and, in that case, those arguments are simply added to the end of the array. For example: Here, the extra argument `"brown"` is passed to `concat()` and it becomes the fifth item in the `colors2` array. Why is an array argument treated differently than a string argument? The JavaScript specification says that arrays are automatically split into their individual items and all other types are not. Prior to ECMAScript 6, there was no way to adjust this behavior. The `Symbol.isConcatSpreadable` property is a boolean value indicating that an object has a `length` property and numeric keys, and that its numeric property values should be added individually to the result of a `concat()` call. Unlike other well-known symbols, this symbol property doesn't appear on any standard objects by default.
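The `SpecialNumber` idea described above could be sketched as:

```javascript
function SpecialNumber() {
    // empty constructor for the sketch
}

Object.defineProperty(SpecialNumber, Symbol.hasInstance, {
    value: function(v) {
        // instances are Number objects with values between 1 and 100
        return (v instanceof Number) && (v >= 1 && v <= 100);
    }
});

let two = new Number(2),
    zero = new Number(0);

console.log(two instanceof SpecialNumber);  // true
console.log(zero instanceof SpecialNumber); // false
```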
Instead, the symbol is available as a way to augment how `concat()` works on certain types of objects, effectively short-circuiting the default behavior. You can define any type to behave like arrays do in a `concat()` call, like this: The `collection` object in this example is set up to look like an array: it has a `length` property and two numeric keys. The `Symbol.isConcatSpreadable` property is set to `true` to indicate that the property values should be added as individual items to an array. When `collection` is passed to the `concat()` method, the resulting array has `"Hello"` and `"world"` as separate items after the `"Hi"` element. # The Symbol.match, Symbol.replace, Symbol.search, and Symbol.split Symbols Strings and regular expressions have always had a close relationship in JavaScript. The string type, in particular, has several methods that accept regular expressions as arguments: * `match(regex)` - Determines whether the given string matches a regular expression * `replace(regex, replacement)` - Replaces regular expression matches with a `replacement` * `search(regex)` - Locates a regular expression match inside the string * `split(regex)` - Splits a string into an array on a regular expression match Prior to ECMAScript 6, the way these methods interacted with regular expressions was hidden from developers, leaving no way to mimic regular expressions using developer-defined objects. ECMAScript 6 defines four symbols that correspond to these four methods, effectively outsourcing the native behavior to the `RegExp` builtin object. The `Symbol.match`, `Symbol.replace`, `Symbol.search`, and `Symbol.split` symbols represent methods on the regular expression argument that should be called on the first argument to the `match()` method, the `replace()` method, the `search()` method, and the `split()` method, respectively. The four symbol properties are defined on `RegExp.prototype` as the default implementation that the string methods should use.
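An array-like `collection` made spreadable for `concat()` (a minimal sketch):

```javascript
let collection = {
    0: "Hello",
    1: "world",
    length: 2,
    [Symbol.isConcatSpreadable]: true
};

let messages = [ "Hi" ].concat(collection);

console.log(messages.length); // 3
console.log(messages);        // [ "Hi", "Hello", "world" ]
```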
Knowing this, you can create an object to use with the string methods in a way that is similar to regular expressions. To do so, you can use the following symbol functions in code: * `Symbol.match` - A function that accepts a string argument and returns an array of matches, or `null` if no match is found. * `Symbol.replace` - A function that accepts a string argument and a replacement string, and returns a string. * `Symbol.search` - A function that accepts a string argument and returns the numeric index of the match, or -1 if no match is found. * `Symbol.split` - A function that accepts a string argument and returns an array containing pieces of the string split on the match. The ability to define these properties on an object allows you to create objects that implement pattern matching without regular expressions and use them in methods that expect regular expressions. Here’s an example that shows these symbols in action: The `hasLengthOf10` object is intended to work like a regular expression that matches whenever the string length is exactly 10. Each of the four methods on `hasLengthOf10` is implemented using the appropriate symbol, and then the corresponding methods on two strings are called. The first string, `message1` , has 11 characters and so it will not match; the second string, `message2` , has 10 characters and so it will match. Despite not being a regular expression, `hasLengthOf10` is passed to each string method and used correctly due to the additional methods. While this is a simple example, the ability to perform more complex matches than are currently possible with regular expressions opens up a lot of possibilities for custom pattern matchers. # The Symbol.toPrimitive Method JavaScript frequently attempts to convert objects into primitive values implicitly when certain operations are applied.
For instance, when you compare a string to an object using the double equals ( `==` ) operator, the object is converted into a primitive value before comparing. Exactly what primitive value should be used was previously an internal operation, but ECMAScript 6 exposes that value (making it changeable) through the `Symbol.toPrimitive` method. The `Symbol.toPrimitive` method is defined on the prototype of each standard type and prescribes what should happen when the object is converted into a primitive. When a primitive conversion is needed, `Symbol.toPrimitive` is called with a single argument, referred to as `hint` in the specification. The `hint` argument is one of three string values. If `hint` is `"number"` then `Symbol.toPrimitive` should return a number. If `hint` is `"string"` then a string should be returned, and if it’s `"default"` then the operation has no preference as to the type. For most standard objects, number mode has the following behaviors, in order by priority: * Call the `valueOf()` method, and if the result is a primitive value, return it. * Otherwise, call the `toString()` method, and if the result is a primitive value, return it. * Otherwise, throw an error. Similarly, for most standard objects, the behaviors of string mode have the following priority: * Call the `toString()` method, and if the result is a primitive value, return it. * Otherwise, call the `valueOf()` method, and if the result is a primitive value, return it. * Otherwise, throw an error. In many cases, standard objects treat default mode as equivalent to number mode (except for `Date` , which treats default mode as equivalent to string mode). By defining a `Symbol.toPrimitive` method, you can override these default coercion behaviors. To override the default conversion behaviors, use `Symbol.toPrimitive` and assign a function as its value. For example: This script defines a `Temperature` constructor and overrides the default `Symbol.toPrimitive` method on the prototype.
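A sketch of the `Temperature` constructor described here (the exact listing was omitted, so details such as the initial value are assumed from the surrounding description):

```javascript
function Temperature(degrees) {
    this.degrees = degrees;
}

Temperature.prototype[Symbol.toPrimitive] = function(hint) {
    switch (hint) {
        case "string":
            return this.degrees + "\u00b0";       // temperature with degrees symbol
        case "number":
            return this.degrees;                  // just the numeric value
        case "default":
            return this.degrees + " degrees";     // number plus the word "degrees"
    }
};

let freezing = new Temperature(32);

console.log(freezing + "!");      // "32 degrees!" - the + operator uses "default"
console.log(freezing / 2);        // 16 - the / operator uses "number"
console.log(String(freezing));    // "32°" - String() uses "string"
```

Each operation passes a different `hint` value, so the same object produces three different primitive representations.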
A different value is returned depending on whether the `hint` argument indicates string, number, or default mode (the `hint` argument is filled in by the JavaScript engine). In string mode, the `Symbol.toPrimitive` method returns the temperature with the Unicode degrees symbol. In number mode, it returns just the numeric value, and in default mode, it appends the word “degrees” after the number. Each of the log statements triggers a different `hint` argument value. The `+` operator triggers default mode by setting `hint` to `"default"` , the `/` operator triggers number mode by setting `hint` to `"number"` , and the `String()` function triggers string mode by setting `hint` to `"string"` . While returning different values for all three modes is possible, it’s much more common to set the default mode to be the same as string or number mode. # The Symbol.toStringTag Symbol One of the most interesting problems in JavaScript has been the availability of multiple global execution environments. This occurs in web browsers when a page includes an iframe, as the page and the iframe each have their own execution environments. In most cases, this isn’t a problem, as data can be passed back and forth between the environments with little cause for concern. The problem arises when trying to identify what type of object you’re dealing with after the object has been passed between different environments. The canonical example of this issue is passing an array from an iframe into the containing page or vice-versa. In ECMAScript 6 terminology, the iframe and the containing page each represent a different realm which is an execution environment for JavaScript. Each realm has its own global scope with its own copy of global objects. In whichever realm the array is created, it is definitely an array.
When it’s passed to a different realm, however, an `instanceof Array` call returns `false` because the array was created with a constructor from a different realm and `Array` represents the constructor in the current realm. # A Workaround for the Identification Problem Faced with this problem, developers soon found a good way to identify arrays. They discovered that by calling the standard `toString()` method on the object, a predictable string was always returned. Thus, many JavaScript libraries began including a function like this: This may look a bit roundabout, but it worked quite well for identifying arrays in all browsers. The `toString()` method on arrays isn’t useful for identifying an object because it returns a string representation of the items the object contains. But the `toString()` method on `Object.prototype` had a quirk: it included an internally-defined name called `[[Class]]` in the returned result. Developers could use this method on an object to retrieve what the JavaScript environment thought the object’s data type was. Developers quickly realized that since there was no way to change this behavior, it was possible to use the same approach to distinguish between native objects and those created by developers. The most important case of this was the ECMAScript 5 `JSON` object. Prior to ECMAScript 5, many developers used Douglas Crockford’s json2.js, which creates a global `JSON` object. As browsers started to implement the `JSON` global object, figuring out whether the global `JSON` was provided by the JavaScript environment itself or through some other library became necessary. Using the same technique I showed with the `isArray()` function, many developers created functions like this: The same characteristic of `Object.prototype` that allowed developers to identify arrays across iframe boundaries also provided a way to tell if `JSON` was the native `JSON` object or not.
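The two workaround functions described above might look like this (the `supportsNativeJSON` name is an assumption; the text only says developers wrote "functions like this"):

```javascript
// works across realms because Object.prototype.toString()
// reports the internal [[Class]] name, not the items
function isArray(value) {
    return Object.prototype.toString.call(value) === "[object Array]";
}

// distinguishes the native JSON object from a library-provided one
function supportsNativeJSON() {
    return typeof JSON !== "undefined" &&
        Object.prototype.toString.call(JSON) === "[object JSON]";
}

console.log(isArray([]));            // true
console.log(isArray({}));            // false
console.log(supportsNativeJSON());   // true in any ES5+ environment
```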
A non-native `JSON` object would return `[object Object]` while the native version returned `[object JSON]` instead. This approach became the de facto standard for identifying native objects. # The ECMAScript 6 Answer ECMAScript 6 redefines this behavior through the `Symbol.toStringTag` symbol. This symbol represents a property on each object that defines what value should be produced when `Object.prototype.toString()` is called on it. For an array, the value that function returns is explained by storing `"Array"` in the `Symbol.toStringTag` property. Likewise, you can define the `Symbol.toStringTag` value for your own objects: In this example, a `Symbol.toStringTag` property is defined on `Person.prototype` to provide the default behavior for creating a string representation. Since `Person.prototype` inherits the `Object.prototype.toString()` method, the value returned from `Symbol.toStringTag` is also used when calling the `me.toString()` method. However, you can still define your own `toString()` method that provides a different behavior without affecting the use of the `Object.prototype.toString()` method. Here’s how that might look: This code defines ``` Person.prototype.toString() ``` to return the value of the `name` property. Since `Person` instances no longer inherit the `Object.prototype.toString()` method, calling `me.toString()` exhibits a different behavior. There is no restriction on which values can be used for `Symbol.toStringTag` on developer-defined objects. For example, nothing prevents you from using `"Array"` as the value of the `Symbol.toStringTag` property, such as: The result of calling `Object.prototype.toString()` is `"[object Array]"` in this code, which is the same result you’d get from an actual array. This highlights the fact that `Object.prototype.toString()` is no longer a completely reliable way of identifying an object’s type. Changing the string tag for native objects is also possible. Just assign a value to `Symbol.toStringTag` on the object’s prototype, like this: Even though `Symbol.toStringTag` is overwritten for arrays in this example, the call to `Object.prototype.toString()` results in `"[object Magic]"` instead.
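The `Person` example described above might be sketched as follows (the constructor and instance names come from the text; the value `"Nicholas"` is an assumption):

```javascript
function Person(name) {
    this.name = name;
}

// default string tag used by Object.prototype.toString()
Person.prototype[Symbol.toStringTag] = "Person";

// a custom toString() that doesn't affect the string tag
Person.prototype.toString = function() {
    return this.name;
};

let me = new Person("Nicholas");

console.log(me.toString());                        // "Nicholas"
console.log(Object.prototype.toString.call(me));   // "[object Person]"
```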
While I recommend not changing built-in objects in this way, there’s nothing in the language that forbids doing so. # The Symbol.unscopables Symbol The `with` statement is one of the most controversial parts of JavaScript. Originally designed to avoid repetitive typing, the `with` statement later became roundly criticized for making code harder to understand and for negative performance implications as well as being error-prone. As a result, the `with` statement is not allowed in strict mode; that restriction also affects classes and modules, which are strict mode by default and have no opt-out. While future code will undoubtedly not use the `with` statement, ECMAScript 6 still supports `with` in nonstrict mode for backwards compatibility and, as such, had to find ways to allow code that does use `with` to continue to work properly. To understand the complexity of this task, consider the following code: In this example, the two calls to `push()` inside the `with` statement are equivalent to `colors.push()` because the `with` statement added `push` as a local binding. The `color` reference refers to the variable created outside the `with` statement, as does the `values` reference. But ECMAScript 6 added a `values` method to arrays. (The `values` method is discussed in detail in Chapter 7, “Iterators and Generators.”) That would mean in an ECMAScript 6 environment, the `values` reference inside the `with` statement should refer not to the local variable `values` , but to the array’s `values` method, which would break the code. This is why the `Symbol.unscopables` symbol exists. The `Symbol.unscopables` symbol is used on `Array.prototype` to indicate which properties shouldn’t create bindings inside of a `with` statement. When present, `Symbol.unscopables` is an object whose keys are the identifiers to omit from `with` statement bindings and whose values are `true` to enforce the block.
Here’s the default `Symbol.unscopables` property for arrays: The `Symbol.unscopables` object has a `null` prototype, which is created by the `Object.create(null)` call, and contains all of the new array methods in ECMAScript 6. (These methods are covered in detail in Chapter 7, “Iterators and Generators,” and Chapter 9, “Arrays.”) Bindings for these methods are not created inside a `with` statement, allowing old code to continue working without any problem. In general, you shouldn’t need to define `Symbol.unscopables` for your objects unless you use the `with` statement and are making changes to an existing object in your code base. Symbols are a new type of primitive value in JavaScript and are used to create properties that can’t be accessed without referencing the symbol. While not truly private, these properties are harder to accidentally change or overwrite and are therefore suitable for functionality that needs a level of protection from developers. You can provide descriptions for symbols that allow for easier identification of symbol values. There is a global symbol registry that allows you to use shared symbols in different parts of code by using the same description. In this way, the same symbol can be used for the same reason in multiple places. Methods like `Object.keys()` or `Object.getOwnPropertyNames()` don’t return symbols, so a new method called `Object.getOwnPropertySymbols()` was added in ECMAScript 6 to allow retrieval of symbol properties. You can still make changes to symbol properties by calling the `Object.defineProperty()` and `Object.defineProperties()` methods. Well-known symbols define previously internal-only functionality for standard objects and use globally-available symbol constants, such as the `Symbol.hasInstance` property. These symbols use the prefix `Symbol.` in the specification and allow developers to modify standard object behavior in a variety of ways.
## Sets and Maps JavaScript only had one type of collection, represented by the `Array` type, for most of its history (though some may argue all non-array objects are just collections of key-value pairs, their intended use was originally quite different from arrays). Arrays are used in JavaScript just like arrays in other languages, but the lack of other collection options meant arrays were often used as queues and stacks, as well. Since arrays only use numeric indices, developers used non-array objects whenever a non-numeric index was necessary. That technique led to custom implementations of sets and maps using non-array objects. A set is a list of values that cannot contain duplicates. You typically don’t access individual items in a set like you would items in an array; instead, it’s much more common to just check a set to see if a value is present. A map is a collection of keys that correspond to specific values. As such, each item in a map stores two pieces of data, and values are retrieved by specifying the key to read from. Maps are frequently used as caches, for storing data to be quickly retrieved later. While ECMAScript 5 didn’t formally have sets and maps, developers worked around this limitation using non-array objects, too. ECMAScript 6 added sets and maps to JavaScript, and this chapter discusses everything you need to know about these two collection types. First, I will discuss the workarounds developers used to implement sets and maps before ECMAScript 6, and why those implementations were problematic. After that important background information, I will cover how sets and maps work in ECMAScript 6. ### Sets and Maps in ECMAScript 5 In ECMAScript 5, developers mimicked sets and maps by using object properties, like this: The `set` variable in this example is an object with a `null` prototype, ensuring that there are no inherited properties on the object. Using object properties as unique values to be checked is a common approach in ECMAScript 5.
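The ES5-style set workaround described above is roughly:

```javascript
// ES5-style "set": an object with no prototype,
// so there are no inherited properties to collide with
var set = Object.create(null);

set.foo = true;

// checking for the existence of a value
if (set.foo) {
    // code to run when the value is present
    console.log("foo is in the set");
}
```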
When a property is added to the `set` object, it is set to `true` so conditional statements (such as the `if` statement in this example) can easily check whether the value is present. The only real difference between an object used as a set and an object used as a map is the value being stored. For instance, this example uses an object as a map: This code stores a string value `"bar"` under the key `foo` . Unlike sets, maps are mostly used to retrieve information, rather than just checking for the key’s existence. ### Problems with Workarounds While using objects as sets and maps works okay in simple situations, the approach can get more complicated once you run into the limitations of object properties. For example, since all object properties must be strings, you must be certain no two keys evaluate to the same string. Consider the following: This example assigns the string value `"foo"` to a numeric key of `5` . Internally, that numeric value is converted to a string, so `map["5"]` and `map[5]` actually reference the same property. That internal conversion can cause problems when you want to use both numbers and strings as keys. Another problem arises when using objects as keys, like this: Here, `map[key2]` and `map[key1]` reference the same value. The objects `key1` and `key2` are converted to strings because object properties must be strings. Since `"[object Object]"` is the default string representation for objects, both `key1` and `key2` are converted to that string. This can cause errors that may not be obvious because it’s logical to assume that different object keys would, in fact, be different. The conversion to the default string representation makes it difficult to use objects as keys. (The same problem exists when trying to use an object as a set.) Maps with a key whose value is falsy present their own particular problem, too. 
A falsy value is automatically converted to false when used in situations where a boolean value is required, such as in the condition of an `if` statement. This conversion alone isn’t a problem–so long as you’re careful as to how you use values. For instance, look at this code: This example has some ambiguity as to how `map.count` should be used. Is the `if` statement intended to check for the existence of `map.count` or that the value is nonzero? The code inside the `if` statement will execute because the value 1 is truthy. However, if `map.count` is 0, or if `map.count` doesn’t exist, the code inside the `if` statement would not be executed. These are difficult problems to identify and debug when they occur in large applications, which is a prime reason that ECMAScript 6 adds both sets and maps to the language. ### Sets in ECMAScript 6 ECMAScript 6 adds a `Set` type that is an ordered list of values without duplicates. Sets allow fast access to the data they contain, adding a more efficient manner of tracking discrete values. # Creating Sets and Adding Items Sets are created using `new Set()` and items are added to a set by calling the `add()` method. You can see how many items are in a set by checking the `size` property: Sets do not coerce values to determine whether they are the same. That means a set can contain both the number `5` and the string `"5"` as two separate items. (The only exception is that -0 and +0 are considered to be the same.) You can also add multiple objects to the set, and those objects will remain distinct: Because `key1` and `key2` are not converted to strings, they count as two unique items in the set. (Remember, if they were converted to strings, they would both be equal to `"[object Object]"` .) If the `add()` method is called more than once with the same value, all calls after the first one are effectively ignored: You can initialize a set using an array, and the `Set` constructor will ensure that only unique values are used. 
For instance: In this example, an array with duplicate values is used to initialize the set. The number `5` only appears once in the set even though it appears four times in the array. This functionality makes converting existing code or JSON structures to use sets easy. You can test which values are in a set using the `has()` method, like this: Here, `set.has(6)` would return false because the set doesn’t have that value. # Removing Values It’s also possible to remove values from a set. You can remove a single value by using the `delete()` method, or you can remove all values from the set by calling the `clear()` method. This code shows both in action: After the `delete()` call, only `5` is gone; after the `clear()` method executes, `set` is empty. All of this amounts to a very easy mechanism for tracking unique ordered values. However, what if you want to add items to a set and then perform some operation on each item? That’s where the `forEach()` method comes in. # The forEach() Method for Sets If you’re used to working with arrays, then you may already be familiar with the `forEach()` method. ECMAScript 5 added `forEach()` to arrays to make working on each item in an array without setting up a `for` loop easier. The method proved popular among developers, and so the same method is available on sets and works the same way. The `forEach()` method is passed a callback function that accepts three arguments: * The value from the next position in the set * The same value as the first argument * The set from which the value is read The strange difference between the set version of `forEach()` and the array version is that the first and second arguments to the callback function are the same. While this might look like a mistake, there’s a good reason for the behavior. The other objects that have `forEach()` methods (arrays and maps) pass three arguments to their callback functions.
The first two arguments for arrays and maps are the value and the key (the numeric index for arrays). Sets do not have keys, however. The people behind the ECMAScript 6 standard could have made the callback function in the set version of `forEach()` accept two arguments, but that would have made it different from the other two. Instead, they found a way to keep the callback function the same and accept three arguments: each value in a set is considered to be both the key and the value. As such, the first and second argument are always the same in `forEach()` on sets to keep this functionality consistent with the other `forEach()` methods on arrays and maps. Other than the difference in arguments, using `forEach()` is basically the same for a set as it is for an array. Here’s some code that shows the method at work: This code iterates over each item in the set and outputs the values passed to the `forEach()` callback function. Each time the callback function executes, `key` and `value` are the same, and `ownerSet` is always equal to `set` . This code outputs: Also the same as arrays, you can pass a `this` value as the second argument to `forEach()` if you need to use `this` in your callback function: In this example, the `processor.process()` method calls `forEach()` on the set and passes `this` as the `this` value for the callback. That’s necessary so `this.output()` will correctly resolve to the `processor.output()` method. The `forEach()` callback function only makes use of the first argument, `value` , so the others are omitted. You can also use an arrow function to get the same effect without passing the second argument, like this: The arrow function in this example reads `this` from the containing `process()` function, and so it should correctly resolve `this.output()` to a `processor.output()` call. 
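The set `forEach()` behavior described above can be sketched as:

```javascript
let set = new Set([1, 2]);

set.forEach(function(value, key, ownerSet) {
    console.log(key + " " + value);   // key and value are always identical for sets
    console.log(ownerSet === set);    // true
});
```

Running this logs `1 1`, `true`, `2 2`, `true`, showing that each item fills both the value and key slots of the callback.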
Keep in mind that while sets are great for tracking values and `forEach()` lets you work on each value sequentially, you can’t directly access a value by index like you can with an array. If you need to do so, then the best option is to convert the set into an array. # Converting a Set to an Array It’s easy to convert an array into a set because you can pass the array to the `Set` constructor. It’s also easy to convert a set back into an array using the spread operator. Chapter 3 introduced the spread operator ( `...` ) as a way to split items in an array into separate function parameters. You can also use the spread operator to work on iterable objects, such as sets, to convert them into arrays. For example: Here, a set is initially loaded with an array that contains duplicates. The set removes the duplicates, and then the items are placed into a new array using the spread operator. The set itself still contains the same items ( `1` , `2` , `3` , `4` , and `5` ) it received when it was created. They’ve just been copied to a new array. This approach is useful when you already have an array and want to create an array without duplicates. For example: In the ``` eliminateDuplicates() ``` function, the set is just a temporary intermediary used to filter out duplicate values before creating a new array that has no duplicates. # Weak Sets The `Set` type could alternately be called a strong set, because of the way it stores object references. An object stored in an instance of `Set` is effectively the same as storing that object inside a variable. As long as a reference to that `Set` instance exists, the object cannot be garbage collected to free memory. For example: In this example, setting `key` to `null` clears one reference of the `key` object, but another remains inside `set` . You can still retrieve `key` by converting the set to an array with the spread operator and accessing the first item. 
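A sketch of the strong-reference behavior and spread-operator retrieval just described:

```javascript
let set = new Set();
let key = {};

set.add(key);
key = null;               // clears this reference, but the set still holds one

console.log(set.size);    // 1 - the object cannot be garbage collected

key = [...set][0];        // convert the set to an array to get the reference back
console.log(key !== null);    // true
```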
That result is fine for most programs, but sometimes, it’s better for references in a set to disappear when all other references disappear. For instance, if your JavaScript code is running in a web page and wants to keep track of DOM elements that might be removed by another script, you don’t want your code holding onto the last reference to a DOM element. (That situation is called a memory leak.) To alleviate such issues, ECMAScript 6 also includes weak sets, which only store weak object references and cannot store primitive values. A weak reference to an object does not prevent garbage collection if it is the only remaining reference. # Creating a Weak Set Weak sets are created using the `WeakSet` constructor and have an `add()` method, a `has()` method, and a `delete()` method. Here’s an example that uses all three: Using a weak set is a lot like using a regular set. You can add, remove, and check for references in the weak set. You can also seed a weak set with values by passing an iterable to the constructor: In this example, an array is passed to the `WeakSet` constructor. Since this array contains two objects, those objects are added into the weak set. Keep in mind that an error will be thrown if the array contains any non-object values, since `WeakSet` can’t accept primitive values. # Key Differences Between Set Types The biggest difference between weak sets and regular sets is the weak reference held to the object value. Here’s an example that demonstrates that difference: After this code executes, the reference to `key` in the weak set is no longer accessible. It is not possible to verify its removal because you would need one reference to that object to pass to the `has()` method. This can make testing weak sets a little confusing, but you can trust that the reference has been properly removed by the JavaScript engine. These examples show that weak sets share some characteristics with regular sets, but there are some key differences. 
Those are: * In a `WeakSet` instance, the `add()` method throws an error when passed a non-object ( `has()` and `delete()` always return `false` for non-object arguments). * Weak sets are not iterables and therefore cannot be used in a `for-of` loop. * Weak sets do not expose any iterators (such as the `keys()` and `values()` methods), so there is no way to programmatically determine the contents of a weak set. * Weak sets do not have a `forEach()` method. * Weak sets do not have a `size` property. The seemingly limited functionality of weak sets is necessary in order to properly handle memory. In general, if you only need to track object references, then you should use a weak set instead of a regular set. Sets give you a new way to handle lists of values, but they aren’t useful when you need to associate additional information with those values. That’s why ECMAScript 6 also adds maps. ### Maps in ECMAScript 6 The ECMAScript 6 `Map` type is an ordered list of key-value pairs, where both the key and the value can have any type. Key equivalence is determined by using the same approach as `Set` objects, so you can have both a key of `5` and a key of `"5"` because they are different types. This is quite different from using object properties as keys, as object properties always coerce values into strings. You can add items to maps by calling the `set()` method and passing it a key and the value to associate with the key. You can later retrieve a value by passing the key to the `get()` method. For example: In this example, two key-value pairs are stored. The `"title"` key stores a string while the `"year"` key stores a number. The `get()` method is called later to retrieve the values for both keys. If either key didn’t exist in the map, then `get()` would have returned the special value `undefined` instead of a value. You can also use objects as keys, which isn’t possible when using object properties to create a map in the old workaround approach.
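The `set()`/`get()` usage described above, including object keys, might look like this (the specific values stored are assumptions):

```javascript
let map = new Map();
map.set("title", "Understanding ECMAScript 6");
map.set("year", 2016);

console.log(map.get("title"));    // "Understanding ECMAScript 6"
console.log(map.get("year"));     // 2016
console.log(map.get("missing"));  // undefined - the key doesn't exist

// objects work as keys and are never coerced to strings
let key1 = {};
let key2 = {};
map.set(key1, 5);
map.set(key2, 42);

console.log(map.get(key1));       // 5
console.log(map.get(key2));       // 42
```

Because `key1` and `key2` keep their object identity, each is a unique key even though both would stringify to `"[object Object]"` in the old workaround.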
Here’s an example: This code uses the objects `key1` and `key2` as keys in the map to store two different values. Because these keys are not coerced into another form, each object is considered unique. This allows you to associate additional data with an object without modifying the object itself. # Map Methods Maps share several methods with sets. That is intentional, and it allows you to interact with maps and sets in similar ways. These three methods are available on both maps and sets: * `has(key)` - Determines if the given key exists in the map * `delete(key)` - Removes the key and its associated value from the map * `clear()` - Removes all keys and values from the map Maps also have a `size` property that indicates how many key-value pairs the map contains. This code uses all three methods and `size` in different ways: As with sets, the `size` property always contains the number of key-value pairs in the map. The `Map` instance in this example starts with the `"name"` and `"age"` keys, so `has()` returns `true` when passed either key. After the `"name"` key is removed by the `delete()` method, the `has()` method returns `false` when passed `"name"` and the `size` property indicates one less item. The `clear()` method then removes the remaining key, as indicated by `has()` returning `false` for both keys and `size` being 0. The `clear()` method is a fast way to remove a lot of data from a map, but there’s also a way to add a lot of data to a map at one time. # Map Initialization Also similar to sets, you can initialize a map with data by passing an array to the `Map` constructor. Each item in the array must itself be an array where the first item is the key and the second is that key’s corresponding value. The entire map, therefore, is an array of these two-item arrays, for example: The keys `"name"` and `"age"` are added into `map` through initialization in the constructor.
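A sketch combining the initialization form and the shared methods described above (the stored values are assumptions; the keys come from the text):

```javascript
// an array of two-item arrays: [key, value]
let map = new Map([["name", "Nicholas"], ["age", 25]]);

console.log(map.size);          // 2
console.log(map.has("name"));   // true
console.log(map.get("name"));   // "Nicholas"

map.delete("name");
console.log(map.has("name"));   // false
console.log(map.size);          // 1

map.clear();
console.log(map.has("age"));    // false
console.log(map.size);          // 0
```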
While the array of arrays may look a bit strange, it’s necessary to accurately represent keys, as keys can be any data type. Storing the keys in an array is the only way to ensure they aren’t coerced into another data type before being stored in the map. # The forEach Method on Maps The `forEach()` method for maps is similar to `forEach()` for sets and arrays, in that it accepts a callback function that receives three arguments: * The value from the next position in the map * The key for that value * The map from which the value is read These callback arguments more closely match the `forEach()` behavior in arrays, where the first argument is the value and the second is the key (corresponding to a numeric index in arrays). Here’s an example: The `forEach()` callback function outputs the information that is passed to it. The `value` and `key` are output directly, and `ownerMap` is compared to `map` to show that the values are equivalent. This outputs: The callback passed to `forEach()` receives each key-value pair in the order in which the pairs were inserted into the map. This behavior differs slightly from calling `forEach()` on arrays, where the callback receives each item in order of numeric index. # Weak Maps Weak maps are to maps what weak sets are to sets: they’re a way to store weak object references. In weak maps, every key must be an object (an error is thrown if you try to use a non-object key), and those object references are held weakly so they don’t interfere with garbage collection. When there are no references to a weak map key outside a weak map, the key-value pair is removed from the weak map. The most useful place to employ weak maps is when creating an object related to a particular DOM element in a web page. For example, some JavaScript libraries for web pages maintain one custom object for every DOM element referenced in the library, and that mapping is stored in a cache of objects internally.
The difficult part of this approach is determining when a DOM element no longer exists in the web page, so that the library can remove its associated object. Otherwise, the library would hold onto the DOM element reference past the reference’s usefulness and cause a memory leak. Tracking the DOM elements with a weak map would still allow the library to associate a custom object with every DOM element, and it could automatically destroy any object in the map when that object’s DOM element no longer exists. The ECMAScript 6 `WeakMap` type is an unordered list of key-value pairs, where a key must be a non-null object and a value can be of any type. The interface for `WeakMap` is very similar to that of `Map` in that `set()` and `get()` are used to add and retrieve data, respectively: In this example, one key-value pair is stored. The `element` key is a DOM element used to store a corresponding string value. That value is then retrieved by passing in the DOM element to the `get()` method. When the DOM element is later removed from the document and the variable referencing it is set to `null` , the data is also removed from the weak map. Similar to weak sets, there is no way to verify that a weak map is empty, because it doesn’t have a `size` property. Because there are no remaining references to the key, you can’t retrieve the value by calling the `get()` method, either. The weak map has cut off access to the value for that key, and when the garbage collector runs, the memory occupied by the value will be freed. # Weak Map Initialization To initialize a weak map, pass an array of arrays to the `WeakMap` constructor. Just like initializing a regular map, each array inside the containing array should have two items, where the first item is the non-null object key and the second item is the value (any data type). For example: The objects `key1` and `key2` are used as keys in the weak map, and the `get()` and `has()` methods can access them. 
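The `set()`/`get()` usage described above might look like the following sketch; a plain object stands in for the DOM element in the original listing.

```javascript
let map = new WeakMap();
let element = { id: "myDiv" };  // stand-in for a DOM element such as
                                // document.querySelector(".element")

map.set(element, "Original");

let value = map.get(element);
console.log(value);             // "Original"

// Remove the last strong reference to the key. Once the garbage collector
// runs, the key-value pair is removed from the weak map as well.
element = null;
```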
An error is thrown if the `WeakMap` constructor receives a non-object key in any of the key-value pairs.

# Weak Map Methods

Weak maps have only two additional methods available to interact with key-value pairs. There is a `has()` method to determine if a given key exists in the map and a `delete()` method to remove a specific key-value pair. There is no `clear()` method because that would require enumerating keys, and like weak sets, that isn’t possible with weak maps. This example uses both the `has()` and `delete()` methods: Here, a DOM element is once again used as the key in a weak map. The `has()` method is useful for checking to see if a reference is currently being used as a key in the weak map. Keep in mind that this only works when you have a non-null reference to a key. The key is forcibly removed from the weak map by the `delete()` method, at which point `has()` returns `false` and `get()` returns `undefined`.

# Private Object Data

While most developers consider the main use case of weak maps to be associating data with DOM elements, there are many other possible uses (and no doubt, some that have yet to be discovered). One practical use of weak maps is to store data that is private to object instances. All object properties are public in ECMAScript 6, and so you need to use some creativity to make data accessible to objects, but not accessible to everything. Consider the following example: This code uses the common convention of a leading underscore to indicate that a property is considered private and should not be modified outside the object instance. The intent is to use `getName()` to read `this._name` and not allow the `_name` value to change. However, there is nothing standing in the way of someone writing to the `_name` property, so it can be overwritten either intentionally or accidentally.
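Sketches of the two examples described above, with illustrative names (a plain object again stands in for the DOM element):

```javascript
// has() and delete() on a weak map:
let map = new WeakMap();
let element = { id: "myDiv" };  // stand-in for a DOM element

map.set(element, "Original");
console.log(map.has(element));  // true
console.log(map.get(element));  // "Original"

map.delete(element);
console.log(map.has(element));  // false
console.log(map.get(element));  // undefined

// The leading-underscore "privacy" convention:
function Person(name) {
    this._name = name;
}

Person.prototype.getName = function() {
    return this._name;
};

let person = new Person("Nicholas");
console.log(person.getName());  // "Nicholas"

// Nothing actually prevents writing to the property:
person._name = "changed";
console.log(person.getName());  // "changed"
```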
In ECMAScript 5, it’s possible to get close to having truly private data, by creating an object using a pattern such as this: This example wraps the definition of `Person` in an IIFE that contains two private variables, `privateData` and `privateId` . The `privateData` object stores private information for each instance while `privateId` is used to generate a unique ID for each instance. When the `Person` constructor is called, a nonenumerable, nonconfigurable, and nonwritable `_id` property is added. Then, an entry is made into the `privateData` object that corresponds to the ID for the object instance; that’s where the `name` is stored. Later, in the `getName()` function, the name can be retrieved by using `this._id` as the key into `privateData` . Because `privateData` is not accessible outside of the IIFE, the actual data is safe, even though `this._id` is exposed publicly. The big problem with this approach is that the data in `privateData` never disappears because there is no way to know when an object instance is destroyed; the `privateData` object will always contain extra data. This problem can be solved by using a weak map instead, as follows: This version of the `Person` example uses a weak map for the private data instead of an object. Because the `Person` object instance itself can be used as a key, there’s no need to keep track of a separate ID. When the `Person` constructor is called, a new entry is made into the weak map with a key of `this` and a value of an object containing private information. In this case, that value is an object containing only `name` . The `getName()` function retrieves that private information by passing `this` to the `privateData.get()` method, which fetches the value object and accesses the `name` property. This technique keeps the private information private, and destroys that information whenever an object instance associated with it is destroyed. 
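The two private-data patterns described above can be reconstructed roughly as follows; the names are adjusted (`PersonES5` versus `Person`) so that both versions can coexist in one sketch.

```javascript
// ECMAScript 5 approach: an IIFE holding privateData and privateId.
var PersonES5 = (function() {

    var privateData = {},
        privateId = 0;

    function Person(name) {
        // nonenumerable, nonconfigurable, nonwritable _id property
        Object.defineProperty(this, "_id", { value: privateId++ });

        privateData[this._id] = {
            name: name
        };
    }

    Person.prototype.getName = function() {
        return privateData[this._id].name;
    };

    return Person;
}());

// Weak map approach: the instance itself is the key, so each entry
// disappears automatically when the instance is garbage collected.
let Person = (function() {

    let privateData = new WeakMap();

    function Person(name) {
        privateData.set(this, { name: name });
    }

    Person.prototype.getName = function() {
        return privateData.get(this).name;
    };

    return Person;
}());

console.log(new PersonES5("Nicholas").getName());   // "Nicholas"
console.log(new Person("Nicholas").getName());      // "Nicholas"
```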
# Weak Map Uses and Limitations When deciding whether to use a weak map or a regular map, the primary decision to consider is whether you want to use only object keys. Anytime you’re going to use only object keys, then the best choice is a weak map. That will allow you to optimize memory usage and avoid memory leaks by ensuring that extra data isn’t kept around after it’s no longer accessible. Keep in mind that weak maps give you very little visibility into their contents, so you can’t use the `forEach()` method, the `size` property, or the `clear()` method to manage the items. If you need some inspection capabilities, then regular maps are a better choice. Just be sure to keep an eye on memory usage. Of course, if you only want to use non-object keys, then regular maps are your only choice. ECMAScript 6 formally introduces sets and maps into JavaScript. Prior to this, developers frequently used objects to mimic both sets and maps, often running into problems due to the limitations associated with object properties. Sets are ordered lists of unique values. Values are not coerced to determine equivalence. Sets automatically remove duplicate values, so you can use a set to filter an array for duplicates and return the result. Sets aren’t subclasses of arrays, so you cannot randomly access a set’s values. Instead, you can use the `has()` method to determine if a value is contained in the set and the `size` property to inspect the number of values in the set. The `Set` type also has a `forEach()` method to process each set value. Weak sets are special sets that can contain only objects. The objects are stored with weak references, meaning that an item in a weak set will not block garbage collection if that item is the only remaining reference to an object. Weak set contents can’t be inspected due to the complexities of memory management, so it’s best to use weak sets only for tracking objects that need to be grouped together. 
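The duplicate-filtering use of sets mentioned above amounts to a one-liner; `eliminateDuplicates` is an illustrative name:

```javascript
// Round-trip an array through a Set to drop duplicate values.
function eliminateDuplicates(items) {
    return [...new Set(items)];
}

let numbers = [1, 2, 3, 3, 3, 4, 5];
console.log(eliminateDuplicates(numbers));  // [1, 2, 3, 4, 5]
```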
Maps are ordered key-value pairs where the key can be any data type. Similar to sets, keys are not coerced to determine equivalence, which means you can have a numeric key `5` and a string `"5"` as two separate keys. A value of any data type can be associated with a key using the `set()` method, and that value can later be retrieved by using the `get()` method. Maps also have a `size` property and a `forEach()` method to allow for easier item access. Weak maps are a special type of map that can only have object keys. As with weak sets, an object key reference is weak and doesn’t prevent garbage collection when it’s the only remaining reference to an object. When a key is garbage collected, the value associated with the key is also removed from the weak map. This memory management aspect makes weak maps uniquely suited for correlating additional information with objects whose lifecycles are managed outside of the code accessing them. ## Iterators and Generators Many programming languages have shifted from iterating over data with `for` loops, which require initializing variables to track position in a collection, to using iterator objects that programmatically return the next item in a collection. Iterators make working with collections of data easier, and ECMAScript 6 adds iterators to JavaScript. When coupled with new array methods and new types of collections (such as sets and maps), iterators are key for efficient data processing, and you will find them in many parts of the language. There’s a new `for-of` loop that works with iterators, the spread ( `...` ) operator uses iterators, and iterators even make asynchronous programming easier. This chapter covers the many uses of iterators, but first, it’s important to understand the history behind why iterators were added to JavaScript. 
### The Loop Problem If you’ve ever programmed in JavaScript, you’ve probably written code that looks like this: This standard `for` loop tracks the index into the `colors` array with the `i` variable. The value of `i` increments each time the loop executes if `i` isn’t larger than the length of the array (stored in `len` ). While this loop is fairly straightforward, loops grow in complexity when you nest them and need to keep track of multiple variables. Additional complexity can lead to errors, and the boilerplate nature of the `for` loop lends itself to more errors as similar code is written in multiple places. Iterators are meant to solve that problem. ### What are Iterators? Iterators are just objects with a specific interface designed for iteration. All iterator objects have a `next()` method that returns a result object. The result object has two properties: `value` , which is the next value, and `done` , which is a boolean that’s `true` when there are no more values to return. The iterator keeps an internal pointer to a location within a collection of values and with each call to the `next()` method, it returns the next appropriate value. If you call `next()` after the last value has been returned, the method returns `done` as `true` and `value` contains the return value for the iterator. That return value is not part of the data set, but rather a final piece of related data, or `undefined` if no such data exists. An iterator’s return value is similar to a function’s return value in that it’s a final way to pass information to the caller. With that in mind, creating an iterator using ECMAScript 5 is fairly straightforward: The `createIterator()` function returns an object with a `next()` method. Each time the method is called, the next value in the `items` array is returned as `value` . When `i` is 3, `done` becomes `true` and the ternary conditional operator that sets `value` evaluates to `undefined` . 
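The two pieces of code described above — the standard `for` loop and a hand-rolled ECMAScript 5 iterator — might look like this sketch:

```javascript
// The standard for loop, tracking the index manually:
var colors = ["red", "green", "blue"];

for (var i = 0, len = colors.length; i < len; i++) {
    console.log(colors[i]);
}

// An ECMAScript 5 iterator along the lines described:
function createIterator(items) {
    var i = 0;
    return {
        next: function() {
            var done = (i >= items.length);
            var value = !done ? items[i++] : undefined;
            return { done: done, value: value };
        }
    };
}

var iterator = createIterator([1, 2, 3]);

console.log(iterator.next());   // { done: false, value: 1 }
console.log(iterator.next());   // { done: false, value: 2 }
console.log(iterator.next());   // { done: false, value: 3 }
console.log(iterator.next());   // { done: true, value: undefined }

// All further calls return the same final state:
console.log(iterator.next());   // { done: true, value: undefined }
```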
These two results fulfill the special last case for iterators in ECMAScript 6, where `next()` is called on an iterator after the last piece of data has been used. As this example shows, writing iterators that behave according to the rules laid out in ECMAScript 6 is a bit complex. Fortunately, ECMAScript 6 also provides generators, which make creating iterator objects much simpler.

### What Are Generators?

A generator is a function that returns an iterator. Generator functions are indicated by a star character ( `*` ) after the `function` keyword and use the new `yield` keyword. It doesn’t matter if the star is directly next to `function` or if there’s some whitespace between the keyword and the star, as in this example: The `*` before `createIterator()` makes this function a generator. The `yield` keyword, also new to ECMAScript 6, specifies values the resulting iterator should return when `next()` is called, in the order they should be returned. The iterator generated in this example has three different values to return on successive calls to the `next()` method: first `1` , then `2` , and finally `3` . A generator gets called like any other function, as shown when `iterator` is created. Perhaps the most interesting aspect of generator functions is that they stop execution after each `yield` statement. For instance, after `yield 1` executes in this code, the function doesn’t execute anything else until the iterator’s `next()` method is called. At that point, `yield 2` executes. This ability to stop execution in the middle of a function is extremely powerful and leads to some interesting uses of generator functions (discussed in the “Advanced Iterator Functionality” section). The `yield` keyword can be used with any value or expression, so you can write generator functions that add items to iterators without just listing the items one by one.
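The basic generator described above might look like this (`createIterator` is an illustrative name):

```javascript
// generator -- the star makes this function return an iterator
function *createIterator() {
    yield 1;
    yield 2;
    yield 3;
}

// generators are called like regular functions but return an iterator
let iterator = createIterator();

console.log(iterator.next().value);     // 1
console.log(iterator.next().value);     // 2
console.log(iterator.next().value);     // 3
```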
For example, here’s one way you could use `yield` inside a `for` loop: This example passes an array called `items` to the `createIterator()` generator function. Inside the function, a `for` loop yields the elements from the array into the iterator as the loop progresses. Each time `yield` is encountered, the loop stops, and each time `next()` is called on `iterator` , the loop picks up with the next `yield` statement. Generator functions are an important feature of ECMAScript 6, and since they are just functions, they can be used in all the same places. The rest of this section focuses on other useful ways to write generators. # Generator Function Expressions You can use function expressions to create generators by just including a star ( `*` ) character between the `function` keyword and the opening parenthesis. For example: In this code, `createIterator()` is a generator function expression instead of a function declaration. The asterisk goes between the `function` keyword and the opening parentheses because the function expression is anonymous. Otherwise, this example is the same as the previous version of the `createIterator()` function, which also used a `for` loop. # Generator Object Methods Because generators are just functions, they can be added to objects, too. For example, you can make a generator in an ECMAScript 5-style object literal with a function expression: You can also use the ECMAScript 6 method shorthand by prepending the method name with a star ( `*` ): These examples are functionally equivalent to the example in the “Generator Function Expressions” section; they just use different syntax. In the shorthand version, because the `createIterator()` method is defined with no `function` keyword, the star is placed immediately before the method name, though you can leave whitespace between the star and the method name. ### Iterables and for-of Closely related to iterators, an iterable is an object with a `Symbol.iterator` property. 
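The three generator forms just described — `yield` inside a loop, a function expression, and object methods — can be sketched as follows, with illustrative names:

```javascript
// 1. yield inside a for loop:
function *createIterator(items) {
    for (let i = 0; i < items.length; i++) {
        yield items[i];
    }
}

let iterator = createIterator([1, 2, 3]);
console.log(iterator.next());   // { value: 1, done: false }

// 2. Generator function expression (anonymous, star before the parentheses):
let createIterator2 = function *(items) {
    for (let i = 0; i < items.length; i++) {
        yield items[i];
    }
};

// 3. Generator object methods:
let o = {

    // ES5-style property assigned a generator function expression
    createIterator: function *(items) {
        for (let i = 0; i < items.length; i++) {
            yield items[i];
        }
    },

    // ES6 method shorthand with a leading star
    *createIterator2(items) {
        for (let i = 0; i < items.length; i++) {
            yield items[i];
        }
    }
};
```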
The well-known `Symbol.iterator` symbol specifies a function that returns an iterator for the given object. All collection objects (arrays, sets, and maps) and strings are iterables in ECMAScript 6 and so they have a default iterator specified. Iterables are designed to be used with a new addition to ECMAScript: the `for-of` loop. At the beginning of this chapter, I mentioned the problem of tracking an index inside a `for` loop. Iterators are the first part of the solution to that problem. The `for-of` loop is the second part: it removes the need to track an index into a collection entirely, leaving you free to focus on working with the contents of the collection. A `for-of` loop calls `next()` on an iterable each time the loop executes and stores the `value` from the result object in a variable. The loop continues this process until the returned object’s `done` property is `true` . Here’s an example: This code outputs the following: This `for-of` loop first calls the `Symbol.iterator` method on the `values` array to retrieve an iterator. (The call to `Symbol.iterator` happens behind the scenes in the JavaScript engine itself.) Then `iterator.next()` is called, and the `value` property on the iterator’s result object is read into `num` . The `num` variable is first 1, then 2, and finally 3. When `done` on the result object is `true` , the loop exits, so `num` is never assigned the value of `undefined` . If you are simply iterating over values in an array or collection, then it’s a good idea to use a `for-of` loop instead of a `for` loop. The `for-of` loop is generally less error-prone because there are fewer conditions to keep track of. Save the traditional `for` loop for more complex control conditions. # Accessing the Default Iterator You can use `Symbol.iterator` to access the default iterator for an object, like this: This code gets the default iterator for `values` and uses that to iterate over the items in the array. 
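The `for-of` loop and the explicit `Symbol.iterator` access described above might look like this:

```javascript
let values = [1, 2, 3];

// for-of calls the default iterator behind the scenes:
for (let num of values) {
    console.log(num);           // 1, then 2, then 3
}

// Accessing the default iterator explicitly:
let iterator = values[Symbol.iterator]();

console.log(iterator.next());   // { value: 1, done: false }
console.log(iterator.next());   // { value: 2, done: false }
console.log(iterator.next());   // { value: 3, done: false }
console.log(iterator.next());   // { value: undefined, done: true }
```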
This is the same process that happens behind-the-scenes when using a `for-of` loop. Since `Symbol.iterator` specifies the default iterator, you can use it to detect whether an object is iterable as follows: The `isIterable()` function simply checks to see if a default iterator exists on the object and is a function. The `for-of` loop does a similar check before executing. So far, the examples in this section have shown ways to use `Symbol.iterator` with built-in iterable types, but you can also use the `Symbol.iterator` property to create your own iterables. # Creating Iterables Developer-defined objects are not iterable by default, but you can make them iterable by creating a `Symbol.iterator` property containing a generator. For example: This code outputs the following: First, the example defines a default iterator for an object called `collection` . The default iterator is created by the `Symbol.iterator` method, which is a generator (note the star still comes before the name). The generator then uses a `for-of` loop to iterate over the values in `this.items` and uses `yield` to return each one. Instead of manually iterating to define values for the default iterator of `collection` to return, the `collection` object relies on the default iterator of `this.items` to do the work. Now you’ve seen some uses for the default array iterator, but there are many more iterators built in to ECMAScript 6 to make working with collections of data easy. ### Built-in Iterators Iterators are an important part of ECMAScript 6, and as such, you don’t need to create your own iterators for many built-in types; the language includes them by default. You only need to create iterators when the built-in iterators don’t serve your purpose, which will most frequently be when defining your own objects or classes. Otherwise, you can rely on built-in iterators to do your work. Perhaps the most common iterators to use are those that work on collections. 
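Sketches of the `isIterable()` check and the developer-defined iterable described above (`collection` and its contents are illustrative):

```javascript
// Detect whether an object has a default iterator:
function isIterable(object) {
    return typeof object[Symbol.iterator] === "function";
}

console.log(isIterable([1, 2, 3]));     // true
console.log(isIterable("Hello"));       // true
console.log(isIterable(new Map()));     // true
console.log(isIterable(new Set()));     // true
console.log(isIterable(new WeakMap())); // false

// A custom iterable: Symbol.iterator is a generator method
// (note the star still comes before the name).
let collection = {
    items: [],
    *[Symbol.iterator]() {
        for (let item of this.items) {
            yield item;
        }
    }
};

collection.items.push(1);
collection.items.push(2);
collection.items.push(3);

for (let x of collection) {
    console.log(x);     // 1, then 2, then 3
}
```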
# Collection Iterators ECMAScript 6 has three types of collection objects: arrays, maps, and sets. All three have the following built-in iterators to help you navigate their content: * `entries()` - Returns an iterator whose values are a key-value pair * `values()` - Returns an iterator whose values are the values of the collection * `keys()` - Returns an iterator whose values are the keys contained in the collection You can retrieve an iterator for a collection by calling one of these methods. # The entries() Iterator The `entries()` iterator returns a two-item array each time `next()` is called. The two-item array represents the key and value for each item in the collection. For arrays, the first item is the numeric index; for sets, the first item is also the value (since values double as keys in sets); for maps, the first item is the key. Here are some examples that use this iterator: The `console.log()` calls give the following output: This code uses the `entries()` method on each type of collection to retrieve an iterator, and it uses `for-of` loops to iterate the items. The console output shows how the keys and values are returned in pairs for each object. # The values() Iterator The `values()` iterator simply returns values as they are stored in the collection. For example: This code outputs the following: Calling the `values()` iterator, as in this example, returns the exact data contained in each collection without any information about that data’s location in the collection. # The keys() Iterator The `keys()` iterator returns each key present in a collection. For arrays, it only returns numeric keys, never other own properties of the array. For sets, the keys are the same as the values, and so `keys()` and `values()` return the same iterator. For maps, the `keys()` iterator returns each unique key. 
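The `entries()` and `values()` behavior described above can be sketched like this; the collection contents are illustrative:

```javascript
let colors = ["red", "green", "blue"];
let tracking = new Set([1234, 5678, 9012]);
let data = new Map([["title", "Understanding ECMAScript 6"], ["format", "ebook"]]);

for (let entry of colors.entries()) {
    console.log(entry);     // [0, "red"], then [1, "green"], then [2, "blue"]
}

for (let entry of tracking.entries()) {
    console.log(entry);     // [1234, 1234], [5678, 5678], [9012, 9012]
}

for (let entry of data.entries()) {
    console.log(entry);     // ["title", "Understanding ECMAScript 6"], ["format", "ebook"]
}

// values() returns just the stored values, with no location information:
for (let value of data.values()) {
    console.log(value);     // "Understanding ECMAScript 6", then "ebook"
}
```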
Here’s an example that demonstrates all three: This example outputs the following: The `keys()` iterator fetches each key in `colors` , `tracking` , and `data` , and those keys are printed from inside the three `for-of` loops. For the array object, only numeric indices are printed, which would still happen even if you added named properties to the array. This is different from the way the `for-in` loop works with arrays, because the `for-in` loop iterates over properties rather than just the numeric indices. # Default Iterators for Collection Types Each collection type also has a default iterator that is used by `for-of` whenever an iterator isn’t explicitly specified. The `values()` method is the default iterator for arrays and sets, while the `entries()` method is the default iterator for maps. These defaults make using collection objects in `for-of` loops a little easier. For instance, consider this example: No iterator is specified, so the default iterator functions will be used. The default iterators for arrays, sets, and maps are designed to reflect how these objects are initialized, so this code outputs the following: Arrays and sets return their values by default, while maps return the same array format that can be passed into the `Map` constructor. Weak sets and weak maps, on the other hand, do not have built-in iterators. Managing weak references means there’s no way to know exactly how many values are in these collections, which also means there’s no way to iterate over them. # String Iterators JavaScript strings have slowly become more like arrays since ECMAScript 5 was released. For example, ECMAScript 5 formalized bracket notation for accessing characters in strings (that is, using `text[0]` to get the first character, and so on). 
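Sketches of the `keys()` iterator and the default iterators for each collection type, using illustrative contents:

```javascript
let colors = ["red", "green", "blue"];
let tracking = new Set([1234, 5678]);
let data = new Map([["title", "Understanding ECMAScript 6"]]);

for (let key of colors.keys()) {
    console.log(key);       // 0, then 1, then 2 -- numeric indices only
}

for (let key of tracking.keys()) {
    console.log(key);       // 1234, then 5678 -- same as values() for sets
}

for (let key of data.keys()) {
    console.log(key);       // "title"
}

// Default iterators: values() for arrays and sets, entries() for maps.
for (let value of colors) {
    console.log(value);     // "red", "green", "blue"
}

for (let num of tracking) {
    console.log(num);       // 1234, 5678
}

for (let entry of data) {
    console.log(entry);     // ["title", "Understanding ECMAScript 6"]
}
```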
But bracket notation works on code units rather than characters, so it cannot be used to access double-byte characters correctly, as this example demonstrates: This code uses bracket notation and the `length` property to iterate over and print a string containing a Unicode character. The output is a bit unexpected: Since the double-byte character is treated as two separate code units, there are four empty lines between `A` and `B` in the output. Fortunately, ECMAScript 6 aims to fully support Unicode (see Chapter 2), and the default string iterator is an attempt to solve the string iteration problem. As such, the default iterator for strings works on characters rather than code units. Changing this example to use the default string iterator with a `for-of` loop results in more appropriate output. Here’s the tweaked code: This outputs the following: This result is more in line with what you’d expect when working with characters: the loop successfully prints the Unicode character, as well as all the rest. # NodeList Iterators The Document Object Model (DOM) has a `NodeList` type that represents a collection of elements in a document. For those who write JavaScript to run in web browsers, understanding the difference between `NodeList` objects and arrays has always been a bit difficult. Both `NodeList` objects and arrays use the `length` property to indicate the number of items, and both use bracket notation to access individual items. Internally, however, a `NodeList` and an array behave quite differently, which has led to a lot of confusion. With the addition of default iterators in ECMAScript 6, the DOM definition of `NodeList` (included in the HTML specification rather than ECMAScript 6 itself) includes a default iterator that behaves in the same manner as the array default iterator. That means you can use `NodeList` in a `for-of` loop or any other place that uses an object’s default iterator. 
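The string-iteration problem described above can be sketched as follows; the exact string from the original listing isn't shown, so this one (with two surrogate-pair characters) is an assumption:

```javascript
var message = "A𠮷𠮷B";

// length counts code units, so each 𠮷 counts twice:
console.log(message.length);        // 6

for (let i = 0; i < message.length; i++) {
    console.log(message[i]);        // prints unusable surrogate halves for 𠮷
}

// The default string iterator works on characters instead of code units:
for (let c of message) {
    console.log(c);                 // "A", "𠮷", "𠮷", "B"
}
```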
For example: This code calls `getElementsByTagName()` to retrieve a `NodeList` that represents all of the `<div>` elements in the `document` object. The `for-of` loop then iterates over each element and outputs the element IDs, effectively making the code the same as it would be for a standard array.

### The Spread Operator and Non-Array Iterables

Recall from Chapter 7 that the spread operator ( `...` ) can be used to convert a set into an array. For example: This code uses the spread operator inside an array literal to fill in that array with the values from `set` . The spread operator works on all iterables and uses the default iterator to determine which values to include. All values are read from the iterator and inserted into the array in the order in which values were returned from the iterator. This example works because sets are iterables, but it can work equally well on any iterable. Here’s another example: Here, the spread operator converts `map` into an array of arrays. Since the default iterator for maps returns key-value pairs, the resulting array looks like the array that was passed during the `new Map()` call. You can use the spread operator in an array literal as many times as you want, and you can use it wherever you want to insert multiple items from an iterable. Those items will just appear in order in the new array at the location of the spread operator. For example: The spread operator is used to create `allNumbers` from the values in `smallNumbers` and `bigNumbers` . The values are placed in `allNumbers` in the same order the arrays are added when `allNumbers` is created: `0` is first, followed by the values from `smallNumbers` , followed by the values from `bigNumbers` . The original arrays are unchanged, though, as their values have just been copied into `allNumbers` . Since the spread operator can be used on any iterable, it’s the easiest way to convert an iterable into an array.
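The three spread-operator examples described above might look like this sketch (contents are illustrative):

```javascript
// Converting a set into an array -- duplicates are already gone:
let set = new Set([1, 2, 3, 3, 3, 4, 5]),
    array = [...set];

console.log(array);         // [1, 2, 3, 4, 5]

// Converting a map into an array of [key, value] arrays:
let map = new Map([["name", "Nicholas"], ["age", 25]]),
    pairs = [...map];

console.log(pairs);         // [["name", "Nicholas"], ["age", 25]]

// Using the spread operator multiple times in one array literal:
let smallNumbers = [1, 2, 3],
    bigNumbers = [100, 101, 102],
    allNumbers = [0, ...smallNumbers, ...bigNumbers];

console.log(allNumbers);    // [0, 1, 2, 3, 100, 101, 102]
```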
You can convert strings into arrays of characters (not code units) and `NodeList` objects in the browser into arrays of nodes. Now that you understand the basics of how iterators work, including `for-of` and the spread operator, it’s time to look at some more complex uses of iterators. ### Advanced Iterator Functionality You can accomplish a lot with the basic functionality of iterators and the convenience of creating them using generators. However, iterators are much more powerful when used for tasks other than simply iterating over a collection of values. During the development of ECMAScript 6, a lot of unique ideas and patterns emerged that encouraged the creators to add more functionality. Some of those additions are subtle, but when used together, can accomplish some interesting interactions. # Passing Arguments to Iterators Throughout this chapter, examples have shown iterators passing values out via the `next()` method or by using `yield` in a generator. But you can also pass arguments to the iterator through the `next()` method. When an argument is passed to the `next()` method, that argument becomes the value of the `yield` statement inside a generator. This capability is important for more advanced functionality such as asynchronous programming. Here’s a basic example: The first call to `next()` is a special case where any argument passed to it is lost. Since arguments passed to `next()` become the values returned by `yield` , an argument from the first call to `next()` could only replace the first yield statement in the generator function if it could be accessed before that `yield` statement. That’s not possible, so there’s no reason to pass an argument the first time `next()` is called. On the second call to `next()` , the value `4` is passed as the argument. The `4` ends up assigned to the variable `first` inside the generator function. 
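The argument-passing example described above might look like this (`createIterator` is an illustrative name):

```javascript
function *createIterator() {
    let first = yield 1;
    let second = yield first + 2;   // 4 + 2
    yield second + 3;               // 5 + 3
}

let iterator = createIterator();

console.log(iterator.next());       // { value: 1, done: false }
console.log(iterator.next(4));      // { value: 6, done: false }
console.log(iterator.next(5));      // { value: 8, done: false }
console.log(iterator.next());       // { value: undefined, done: true }
```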
In a `yield` statement including an assignment, the right side of the expression is evaluated on the first call to `next()` and the left side is evaluated on the second call to `next()` before the function continues executing. Since the second call to `next()` passes in `4` , that value is assigned to `first` and then execution continues. The second `yield` uses the result of the first `yield` and adds two, which means it returns a value of six. When `next()` is called a third time, the value `5` is passed as an argument. That value is assigned to the variable `second` and then used in the third `yield` statement to return `8` . It’s a bit easier to think about what’s happening by considering which code is executing each time execution continues inside the generator function. Figure 8-1 uses colors to show the code being executed before yielding. The color yellow represents the first call to `next()` and all the code executed inside of the generator as a result. The color aqua represents the call to `next(4)` and the code that is executed with that call. The color purple represents the call to `next(5)` and the code that is executed as a result. The tricky part is how the code on the right side of each expression executes and stops before the left side is executed. This makes debugging complicated generators a bit more involved than debugging regular functions. So far, you’ve seen that `yield` can act like `return` when a value is passed to the `next()` method. However, that’s not the only execution trick you can do inside a generator. You can also cause iterators to throw an error.

# Throwing Errors in Iterators

It’s possible to pass not just data into iterators but also error conditions. Iterators can choose to implement a `throw()` method that instructs the iterator to throw an error when it resumes.
This is an important capability for asynchronous programming, but also for flexibility inside generators, where you want to be able to mimic both return values and thrown errors (the two ways of exiting a function). You can pass an error object to `throw()` that should be thrown when the iterator continues processing. For example: In this example, the first two `yield` expressions are evaluated as normal, but when `throw()` is called, an error is thrown before `let second` is evaluated. This effectively halts code execution similar to directly throwing an error. The only difference is the location in which the error is thrown. Figure 8-2 shows which code is executed at each step. In this figure, the color red represents the code executed when `throw()` is called, and the red star shows approximately when the error is thrown inside the generator. The first two `yield` statements are executed, and when `throw()` is called, an error is thrown before any other code executes. Knowing this, you can catch such errors inside the generator using a `try-catch` block: In this example, a `try-catch` block is wrapped around the second `yield` statement. While this `yield` executes without error, the error is thrown before any value can be assigned to `second` , so the `catch` block assigns it a value of six. Execution then flows to the next `yield` and returns nine. Notice that something interesting happened: the `throw()` method returned a result object just like the `next()` method. Because the error was caught inside the generator, code execution continued on to the next `yield` and returned the next value, `9` . It helps to think of `next()` and `throw()` as both being instructions to the iterator. The `next()` method instructs the iterator to continue executing (possibly with a given value) and `throw()` instructs the iterator to continue executing by throwing an error. What happens after that point depends on the code inside the generator. 
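The `throw()` behavior with a `try-catch` block, as described above, might look like this sketch (names and values are illustrative):

```javascript
function *createIterator() {
    let first = yield 1;
    let second;
    try {
        second = yield first + 2;   // yield 4 + 2, then throw() interrupts
    } catch (ex) {
        second = 6;                 // on error, assign a different value
    }
    yield second + 3;
}

let iterator = createIterator();

console.log(iterator.next());                   // { value: 1, done: false }
console.log(iterator.next(4));                  // { value: 6, done: false }

// throw() resumes the generator by throwing; the catch block handles it
// and execution continues to the next yield, so a result object comes back:
console.log(iterator.throw(new Error("Boom"))); // { value: 9, done: false }
console.log(iterator.next());                   // { value: undefined, done: true }
```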
The `next()` and `throw()` methods control execution inside an iterator when using `yield`, but you can also use the `return` statement. But `return` works a bit differently than it does in regular functions, as you will see in the next section.

# Generator Return Statements

Since generators are functions, you can use the `return` statement both to exit early and specify a return value for the last call to the `next()` method. In most examples in this chapter, the last call to `next()` on an iterator returns `undefined`, but you can specify an alternate value by using `return` as you would in any other function. In a generator, `return` indicates that all processing is done, so the `done` property is set to `true` and the value, if provided, becomes the `value` field. Here’s an example that simply exits early using `return`: In this code, the generator has a `yield` statement followed by a `return` statement. The `return` indicates that there are no more values to come, and so the rest of the `yield` statements will not execute (they are unreachable). You can also specify a return value that will end up in the `value` field of the returned object. For example: Here, the value `42` is returned in the `value` field on the second call to the `next()` method (which is the first time that `done` is `true`). The third call to `next()` returns an object whose `value` property is once again `undefined`. Any value you specify with `return` is only available on the returned object one time before the `value` field is reset to `undefined`.

# Delegating Generators

In some cases, combining the values from two iterators into one is useful. Generators can delegate to other iterators using a special form of `yield` with a star (`*`) character. As with generator definitions, where the star appears doesn’t matter, as long as the star falls between the `yield` keyword and the generator function name.
In this example, the generator delegates first to one iterator and then to a second. From the outside, the combined generator appears to be one consistent iterator that has produced all of the values. Each call to `next()` is delegated to the appropriate iterator until the delegated iterators are empty. Then the final `yield` is executed to return `true`. Generator delegation also lets you make further use of generator return values. This is the easiest way to access such returned values and can be quite useful in performing complex tasks. For example: Here, the generator delegates to another generator and assigns that generator’s return value to `result`. Since the delegated generator contains `return 3`, the returned value is `3`. The `result` variable is then passed to `createRepeatingIterator()` as an argument indicating how many times to yield the same string (in this case, three times). Notice that the value `3` was never output from any call to the `next()` method. Right now, it exists solely inside the generator. But you can output that value as well by adding another `yield` statement, such as: In this code, the extra `yield` statement explicitly outputs the returned value from the generator. Generator delegation using the return value is a very powerful paradigm that allows for some very interesting possibilities, especially when used in conjunction with asynchronous operations.

### Asynchronous Task Running

A lot of the excitement around generators is directly related to asynchronous programming. Asynchronous programming in JavaScript is a double-edged sword: simple tasks are easy to do asynchronously, while complex tasks become an errand in code organization. Since generators allow you to effectively pause code in the middle of execution, they open up a lot of possibilities related to asynchronous processing. The traditional way to perform asynchronous operations is to call a function that has a callback.
For example, consider reading a file from the disk in Node.js: The `fs.readFile()` method is called with the filename to read and a callback function. When the operation is finished, the callback function is called. The callback checks to see if there’s an error, and if not, processes the returned `contents`. This works well when you have a small, finite number of asynchronous tasks to complete, but gets complicated when you need to nest callbacks or otherwise sequence a series of asynchronous tasks. This is where generators and `yield` are helpful.

# A Simple Task Runner

Because `yield` stops execution and waits for the `next()` method to be called before starting again, you can implement asynchronous calls without managing callbacks. To start, you need a function that can call a generator and start the iterator, such as this: The `run()` function accepts a task definition (a generator function) as an argument. It calls the generator to create an iterator and stores the iterator in `task`. The `task` variable is outside the function so it can be accessed by other functions; I will explain why later in this section. The first call to `next()` begins the iterator and the result is stored for later use. The `step()` function checks to see if `result.done` is false and, if so, calls `next()` before recursively calling itself. Each call to `next()` stores the return value in `result`, which is always overwritten to contain the latest information. The initial call to `step()` starts the process of looking at the `result.done` variable to see whether there’s more to do. With this implementation of `run()`, you can run a generator containing multiple `yield` statements, such as: This example just outputs three numbers to the console, which simply shows that all calls to `next()` are being made. However, just yielding a couple of times isn’t very useful. The next step is to pass values into and out of the iterator.
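Following that description, a minimal task runner and a three-`yield` task might be sketched as follows (the `output` array stands in for `console.log` so the results are easy to inspect):

```javascript
let task;

function run(taskDef) {
    // create the iterator and make it available to other functions
    task = taskDef();

    // start the task
    let result = task.next();

    // recursive function to keep calling next()
    function step() {
        // if there's more to do
        if (!result.done) {
            result = task.next();
            step();
        }
    }

    // begin the process
    step();
}

const output = [];

run(function *() {
    output.push(1);
    yield;
    output.push(2);
    yield;
    output.push(3);
});

console.log(output);    // [ 1, 2, 3 ]
```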
# Task Running With Data

The easiest way to pass data through the task runner is to pass the value specified by `yield` into the next call to the `next()` method. To do so, you need only pass `result.value`, as in this code: Now that `result.value` is passed to `next()` as an argument, it’s possible to pass data between `yield` calls, like this: This example outputs two values to the console: 1 and 4. The value 1 comes from `yield 1`, as the 1 is passed right back into the `value` variable. The 4 is calculated by adding 3 to `value` and passing that result back to `value`. Now that data is flowing between calls to `yield`, you just need one small change to allow asynchronous calls.

# Asynchronous Task Runner

The previous example passed static data back and forth between `yield` calls, but waiting for an asynchronous process is slightly different. The task runner needs to know about callbacks and how to use them. And since `yield` expressions pass their values into the task runner, that means any function call must return a value that somehow indicates the call is an asynchronous operation that the task runner should wait for. Here’s one way you might signal that a value is an asynchronous operation: For the purposes of this example, any function meant to be called by the task runner will return a function that executes a callback. The `fetchData()` function returns a function that accepts a callback function as an argument. When the returned function is called, it executes the callback function with a single piece of data (the `"Hi!"` string). The `callback` argument needs to come from the task runner to ensure executing the callback correctly interacts with the underlying iterator.
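One possible shape for such a function, following the description above (the `"Hi!"` payload comes from the discussion; the rest is illustrative):

```javascript
function fetchData() {
    // return a function that accepts a callback supplied by the task runner
    return function(callback) {
        callback(null, "Hi!");      // Node-style: error first, then data
    };
}

// the task runner supplies the callback when it sees a function value
let action = fetchData();
action(function(err, data) {
    console.log(err);       // null
    console.log(data);      // "Hi!"
});
```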
While the `fetchData()` function is synchronous, you can easily extend it to be asynchronous by calling the callback with a slight delay, such as: This version of `fetchData()` introduces a 50ms delay before calling the callback, demonstrating that this pattern works equally well for synchronous and asynchronous code. You just have to make sure each function that wants to be called using `yield` follows the same pattern. With a good understanding of how a function can signal that it’s an asynchronous process, you can modify the task runner to take that fact into account. Anytime `result.value` is a function, the task runner will execute it instead of just passing that value to the `next()` method. Here’s the updated code: When `result.value` is a function (checked with the `===` operator), it is called with a callback function. That callback function follows the Node.js convention of passing any possible error as the first argument ( `err` ) and the result as the second argument. If `err` is present, then that means an error occurred and `task.throw()` is called with the error object instead of `task.next()` so an error is thrown at the correct location. If there is no error, then `data` is passed into `task.next()` and the result is stored. Then, `step()` is called to continue the process. When `result.value` is not a function, it is directly passed to the `next()` method. This new version of the task runner is ready for all asynchronous tasks. To read data from a file in Node.js, you need to create a wrapper around `fs.readFile()` that returns a function similar to the `fetchData()` function from the beginning of this section. For example: The `readFile()` method accepts a single argument, the filename, and returns a function that calls a callback. The callback is passed directly to the `fs.readFile()` method, which will execute the callback upon completion. 
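Putting the pieces together, a sketch of the async-aware runner might look like this; the synchronous `fetchData()` stand-in below follows the same callback-returning pattern, so the runner handles it unchanged (a `readFile()` wrapper would plug in the same way):

```javascript
function run(taskDef) {
    let task = taskDef();
    let result = task.next();

    function step() {
        if (!result.done) {
            if (typeof result.value === "function") {
                // asynchronous operation: call it with a Node-style callback
                result.value(function(err, data) {
                    if (err) {
                        // surface the error inside the generator
                        result = task.throw(err);
                        return;
                    }
                    result = task.next(data);
                    step();
                });
            } else {
                // plain value: pass it straight back into the generator
                result = task.next(result.value);
                step();
            }
        }
    }

    step();
}

// a synchronous stand-in that follows the callback-returning pattern
function fetchData() {
    return function(callback) {
        callback(null, "Hi!");
    };
}

run(function *() {
    let data = yield fetchData();
    console.log(data);      // "Hi!"
});
```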
You can then run this task using `yield` as follows: This example is performing the asynchronous `readFile()` operation without making any callbacks visible in the main code. Aside from `yield`, the code looks the same as synchronous code. As long as the functions performing asynchronous operations all conform to the same interface, you can write logic that reads like synchronous code. Of course, there are downsides to the pattern used in these examples–namely that you can’t always be sure a function that returns a function is asynchronous. For now, though, it’s only important that you understand the theory behind the task running. Using promises offers more powerful ways of scheduling asynchronous tasks, and Chapter 11 covers this topic further.

### Summary

Iterators are an important part of ECMAScript 6 and are at the root of several key language elements. On the surface, iterators provide a simple way to return a sequence of values using a simple API. However, there are far more complex ways to use iterators in ECMAScript 6. The `Symbol.iterator` symbol is used to define default iterators for objects. Both built-in objects and developer-defined objects can use this symbol to provide a method that returns an iterator. When `Symbol.iterator` is provided on an object, the object is considered an iterable. The `for-of` loop uses iterables to return a series of values in a loop. Using `for-of` is easier than iterating with a traditional `for` loop because you no longer need to track values and control when the loop ends. The `for-of` loop automatically reads all values from the iterator until there are no more, and then it exits. To make `for-of` easier to use, many values in ECMAScript 6 have default iterators. All the collection types–that is, arrays, maps, and sets–have iterators designed to make their contents easy to access. Strings also have a default iterator, which makes iterating over the characters of the string (rather than the code units) easy.
The spread operator works with any iterable and makes converting iterables into arrays easy, too. The conversion works by reading values from an iterator and inserting them individually into an array. A generator is a special function that automatically creates an iterator when called. Generator definitions are indicated by a star (`*`) character and use of the `yield` keyword to indicate which value to return for each successive call to the `next()` method. Generator delegation encourages good encapsulation of iterator behavior by letting you reuse existing generators in new generators. You can use an existing generator inside another generator by calling `yield *` instead of `yield`. This process allows you to create an iterator that returns values from multiple iterators. Perhaps the most interesting and exciting aspect of generators and iterators is the possibility of creating cleaner-looking asynchronous code. Instead of needing to use callbacks everywhere, you can set up code that looks synchronous but in fact uses `yield` to wait for asynchronous operations to complete.

## Introducing JavaScript Classes

Unlike most formal object-oriented programming languages, JavaScript didn’t support classes and classical inheritance as the primary way of defining similar and related objects when it was created. This left many developers confused, and from pre-ECMAScript 1 all the way through ECMAScript 5, many libraries created utilities to make JavaScript look like it supports classes. While some JavaScript developers do feel strongly that the language doesn’t need classes, the number of libraries created specifically for this purpose led to the inclusion of classes in ECMAScript 6. While exploring ECMAScript 6 classes, it’s helpful to understand the underlying mechanisms that classes use, so this chapter starts by discussing how ECMAScript 5 developers achieved class-like behavior.
As you will see after that, however, ECMAScript 6 classes aren’t exactly the same as classes in other languages. There’s a uniqueness about them that embraces the dynamic nature of JavaScript.

### Class-Like Structures in ECMAScript 5

In ECMAScript 5 and earlier, JavaScript had no classes. The closest equivalent to a class was creating a constructor and then assigning methods to the constructor’s prototype, an approach typically called creating a custom type. For example: In this code, `PersonType` is a constructor function that creates a single property called `name`. The `sayName()` method is assigned to the prototype so the same function is shared by all instances of the `PersonType` object. Then, a new instance of `PersonType` is created via the `new` operator. The resulting `person` object is considered an instance of `PersonType` and of `Object` through prototypal inheritance. This basic pattern underlies a lot of the class-mimicking JavaScript libraries, and that’s where ECMAScript 6 classes start.

### Class Declarations

The simplest class form in ECMAScript 6 is the class declaration, which looks similar to classes in other languages.

# A Basic Class Declaration

Class declarations begin with the `class` keyword followed by the name of the class. The rest of the syntax looks similar to concise methods in object literals, without requiring commas between them. For example, here’s a simple class declaration: The class declaration `PersonClass` behaves quite similarly to `PersonType` from the previous example. But instead of defining a function as the constructor, class declarations allow you to define the constructor directly inside the class with the special `constructor` method name. Since class methods use the concise syntax, there’s no need to use the `function` keyword. All other method names have no special meaning, so you can add as many methods as you want.
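A sketch of the two patterns side by side, using the `PersonType` and `PersonClass` names from the discussion (the instance name is illustrative):

```javascript
// ECMAScript 5 custom type
function PersonType(name) {
    this.name = name;
}
PersonType.prototype.sayName = function() {
    console.log(this.name);
};

// ECMAScript 6 class declaration
class PersonClass {
    // equivalent of the PersonType constructor
    constructor(name) {
        this.name = name;       // own property, created in the constructor
    }

    // equivalent of PersonType.prototype.sayName
    sayName() {
        console.log(this.name);
    }
}

let person = new PersonClass("Nicholas");
person.sayName();                               // "Nicholas"

console.log(person instanceof PersonClass);     // true
console.log(typeof PersonClass);                // "function"
```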
Interestingly, class declarations are just syntactic sugar on top of the existing custom type declarations. The `PersonClass` declaration actually creates a function that has the behavior of the `constructor` method, which is why `typeof PersonClass` gives `"function"` as the result. The `sayName()` method also ends up as a method on `PersonClass.prototype` in this example, similar to the relationship between `sayName()` and `PersonType.prototype` in the previous example. These similarities allow you to mix custom types and classes without worrying too much about which you’re using.

# Why to Use the Class Syntax

Despite the similarities between classes and custom types, there are some important differences to keep in mind:

* Class declarations, unlike function declarations, are not hoisted. Class declarations act like `let` declarations and so exist in the temporal dead zone until execution reaches the declaration.
* All code inside of class declarations runs in strict mode automatically. There’s no way to opt-out of strict mode inside of classes.
* All methods are non-enumerable. This is a significant change from custom types, where you need to use `Object.defineProperty()` to make a method non-enumerable.
* All methods lack an internal `[[Construct]]` method and will throw an error if you try to call them with `new`.
* Calling the class constructor without `new` throws an error.
* Attempting to overwrite the class name within a class method throws an error.

With all of this in mind, the `PersonClass` declaration from the previous example is directly equivalent to the following code, which doesn’t use the class syntax: First, notice that there are two `PersonType2` declarations: a `let` declaration in the outer scope and a `const` declaration inside the IIFE. This is how class methods are forbidden from overwriting the class name while code outside the class is allowed to do so.
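One way such a desugared equivalent might be sketched (outer `let`, inner `const`, `new.target` checks, and a non-enumerable method, per the surrounding description):

```javascript
// direct equivalent of PersonClass, without class syntax
let PersonType2 = (function() {

    "use strict";

    const PersonType2 = function(name) {
        // make sure the function was called with new
        if (typeof new.target === "undefined") {
            throw new Error("Constructor must be called with new.");
        }
        this.name = name;
    };

    Object.defineProperty(PersonType2.prototype, "sayName", {
        value: function() {
            // make sure the method wasn't called with new
            if (typeof new.target !== "undefined") {
                throw new Error("Method cannot be called with new.");
            }
            console.log(this.name);
        },
        enumerable: false,
        writable: true,
        configurable: true
    });

    return PersonType2;
}());
```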
The constructor function checks `new.target` to ensure that it’s being called with `new`; if not, an error is thrown. Next, the `sayName()` method is defined as nonenumerable, and the method checks `new.target` to ensure that it wasn’t called with `new`. The final step returns the constructor function. This example shows that while it’s possible to do everything classes do without using the new syntax, the class syntax simplifies all of the functionality significantly.

### Class Expressions

Classes and functions are similar in that they have two forms: declarations and expressions. Function and class declarations begin with an appropriate keyword (`function` or `class`, respectively) followed by an identifier. Functions have an expression form that doesn’t require an identifier after `function`, and similarly, classes have an expression form that doesn’t require an identifier after `class`. These class expressions are designed to be used in variable declarations or passed into functions as arguments.

# A Basic Class Expression

Here’s the class expression equivalent of the previous `PersonClass` examples, followed by some code that uses it: As this example demonstrates, class expressions do not require identifiers after `class`. Aside from the syntax, class expressions are functionally equivalent to class declarations. Whether you use class declarations or class expressions is mostly a matter of style. Unlike function declarations and function expressions, both class declarations and class expressions are not hoisted, and so the choice has little bearing on the runtime behavior of the code.

# Named Class Expressions

The previous section used an anonymous class expression in the example, but just like function expressions, you can also name class expressions. To do so, include an identifier after the `class` keyword like this: In this example, the class expression is named `PersonClass2`.
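A sketch of a named class expression (the names follow the discussion):

```javascript
let PersonClass = class PersonClass2 {
    constructor(name) {
        this.name = name;
    }

    sayName() {
        // PersonClass2 is usable here, inside the class definition
        console.log(typeof PersonClass2);   // "function"
        console.log(this.name);
    }
};

console.log(typeof PersonClass);    // "function"
console.log(typeof PersonClass2);   // "undefined"
```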
The `PersonClass2` identifier exists only within the class definition so that it can be used inside the class methods (such as the `sayName()` method in this example). Outside the class, `typeof PersonClass2` is `"undefined"` because no `PersonClass2` binding exists there. To understand why this is, look at an equivalent declaration that doesn’t use classes: Creating a named class expression slightly changes what’s happening in the JavaScript engine. For class declarations, the outer binding (defined with `let`) has the same name as the inner binding (defined with `const`). A named class expression uses its name in the `const` definition, so `PersonClass2` is defined for use only inside the class. While named class expressions behave differently from named function expressions, there are still a lot of similarities between the two. Both can be used as values, and that opens up a lot of possibilities, which I’ll cover next.

### Classes as First-Class Citizens

In programming, something is said to be a first-class citizen when it can be used as a value, meaning it can be passed into a function, returned from a function, and assigned to a variable. JavaScript functions are first-class citizens (sometimes they’re just called first class functions), and that’s part of what makes JavaScript unique. ECMAScript 6 continues this tradition by making classes first-class citizens as well. That allows classes to be used in a lot of different ways. For example, they can be passed into functions as arguments: In this example, the `createObject()` function is called with an anonymous class expression as an argument, creates an instance of that class with `new`, and returns the instance. The variable `obj` then stores the returned instance. Another interesting use of class expressions is creating singletons by immediately invoking the class constructor. To do so, you must use `new` with a class expression and include parentheses at the end.
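A sketch of the singleton pattern just described (the argument value is illustrative):

```javascript
let person = new class {
    constructor(name) {
        this.name = name;
    }

    sayName() {
        console.log(this.name);
    }
}("Nicholas");      // the trailing parentheses invoke the class immediately

person.sayName();   // "Nicholas"
```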
For example: Here, an anonymous class expression is created and then executed immediately. This pattern allows you to use the class syntax for creating singletons without leaving a class reference available for inspection. (Remember that `PersonClass` only creates a binding inside of the class, not outside.) The parentheses at the end of the class expression are the indicator that you’re calling a function while also allowing you to pass in an argument. The examples in this chapter so far have focused on classes with methods. But you can also create accessor properties on classes using a syntax similar to object literals.

### Accessor Properties

While own properties should be created inside class constructors, classes allow you to define accessor properties on the prototype. To create a getter, use the keyword `get` followed by a space, followed by an identifier; to create a setter, do the same using the keyword `set`. For example: In this code, the `CustomHTMLElement` class is made as a wrapper around an existing DOM element. It has both a getter and setter for `html` that delegate to the `innerHTML` property of the element itself. This accessor property is created on `CustomHTMLElement.prototype` and, just like any other method would be, is created as non-enumerable. The equivalent non-class representation is: As with previous examples, this one shows just how much code you can save by using a class instead of the non-class equivalent. The `html` accessor property definition alone is almost the size of the equivalent class declaration.

### Computed Member Names

The similarities between object literals and classes aren’t quite over yet. Class methods and accessor properties can also have computed names. Instead of using an identifier, use square brackets around an expression, which is the same syntax used for object literal computed names. For example: This version of `PersonClass` uses a variable to assign a name to a method inside its definition.
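A sketch of a computed method name (the `methodName` variable follows the discussion):

```javascript
let methodName = "sayName";

class PersonClass {
    constructor(name) {
        this.name = name;
    }

    // method name computed from the methodName variable
    [methodName]() {
        console.log(this.name);
    }
}

let me = new PersonClass("Nicholas");
me.sayName();       // "Nicholas"
```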
The string `"sayName"` is assigned to the `methodName` variable, and then `methodName` is used to declare the method. The `sayName()` method is later accessed directly. Accessor properties can use computed names in the same way, like this: Here, the getter and setter for `html` are set using the `propertyName` variable. The computed name only affects the definition; the property is still accessed as `.html`. You’ve seen that there are a lot of similarities between classes and object literals, with methods, accessor properties, and computed names. There’s just one more similarity to cover: generators.

### Generator Methods

When Chapter 8 introduced generators, you learned how to define a generator on an object literal by prepending a star (`*`) to the method name. The same syntax works for classes as well, allowing any method to be a generator. Here’s an example: This code creates a class called `MyClass` with a `createIterator()` generator method. The method returns an iterator whose values are hardcoded into the generator. Generator methods are useful when you have an object that represents a collection of values and you’d like to iterate over those values easily. Arrays, sets, and maps all have multiple generator methods to account for the different ways developers need to interact with their items. While generator methods are useful, defining a default iterator for your class is much more helpful if the class represents a collection of values. You can define the default iterator for a class by using `Symbol.iterator` to define a generator method, such as: This example uses a computed name for a generator method that delegates to the `values()` iterator of the `this.items` array. Any class that manages a collection of values should include a default iterator because some collection-specific operations require collections they operate on to have an iterator. Now, any instance of `Collection` can be used directly in a `for-of` loop or with the spread operator.
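A sketch of such a `Collection` class (the `items` array follows the discussion):

```javascript
class Collection {
    constructor() {
        this.items = [];
    }

    // default iterator: a generator method with a computed name that
    // delegates to the array's own values() iterator
    *[Symbol.iterator]() {
        yield *this.items.values();
    }
}

let collection = new Collection();
collection.items.push(1, 2, 3);

for (let x of collection) {
    console.log(x);     // 1, then 2, then 3
}

console.log([...collection]);   // [ 1, 2, 3 ]
```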
Adding methods and accessor properties to a class prototype is useful when you want those to show up on object instances. If, on the other hand, you’d like methods or accessor properties on the class itself, then you’ll need to use static members.

### Static Members

Adding additional methods directly onto constructors to simulate static members is another common pattern in ECMAScript 5 and earlier. For example: In other programming languages, the factory method called `PersonType.create()` would be considered a static method, as it doesn’t depend on an instance of `PersonType` for its data. ECMAScript 6 classes simplify the creation of static members by using the formal `static` annotation before the method or accessor property name. For instance, here’s the class equivalent of the last example: The `PersonClass` definition has a single static method called `create()`. The method syntax is the same used for `sayName()` except for the `static` keyword. You can use the `static` keyword on any method or accessor property definition within a class. The only restriction is that you can’t use `static` with the `constructor` method definition.

### Inheritance with Derived Classes

Prior to ECMAScript 6, implementing inheritance with custom types was an extensive process. Proper inheritance required multiple steps. For instance, consider this example: `Square` inherits from `Rectangle`, and to do so, it must overwrite `Square.prototype` with a new object created from `Rectangle.prototype` as well as call the `Rectangle.call()` method. These steps often confused JavaScript newcomers and were a source of errors for experienced developers. Classes make inheritance easier to implement by using the familiar `extends` keyword to specify the function from which the class should inherit. The prototypes are automatically adjusted, and you can access the base class constructor by calling the `super()` method.
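A sketch of the `extends`-based version of the `Square`/`Rectangle` relationship described above:

```javascript
class Rectangle {
    constructor(length, width) {
        this.length = length;
        this.width = width;
    }

    getArea() {
        return this.length * this.width;
    }
}

class Square extends Rectangle {
    constructor(length) {
        // same as Rectangle.call(this, length, length) in the ES5 version
        super(length, length);
    }
}

let square = new Square(3);

console.log(square.getArea());              // 9
console.log(square instanceof Square);      // true
console.log(square instanceof Rectangle);   // true
```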
Here’s the ECMAScript 6 equivalent of the previous example: This time, the `Square` class inherits from `Rectangle` using the `extends` keyword. The `Square` constructor uses `super()` to call the `Rectangle` constructor with the specified arguments. Note that unlike the ECMAScript 5 version of the code, the identifier `Rectangle` is only used within the class declaration (after `extends`). Classes that inherit from other classes are referred to as derived classes. Derived classes require you to use `super()` if you specify a constructor; if you don’t, an error will occur. If you choose not to use a constructor, then `super()` is automatically called for you with all arguments upon creating a new instance of the class. For instance, the following two classes are identical: The second class in this example shows the equivalent of the default constructor for all derived classes. All of the arguments are passed, in order, to the base class constructor. In this case, the functionality isn’t quite correct because the `Square` constructor needs only one argument, and so it’s best to manually define the constructor.

# Shadowing Class Methods

The methods on derived classes always shadow methods of the same name on the base class. For instance, you can add `getArea()` to `Square` to redefine that functionality: Since `getArea()` is defined as part of `Square`, the `Rectangle.prototype.getArea()` method will no longer be called by any instances of `Square`. Of course, you can always decide to call the base class version of the method by using the `super.getArea()` method, like this: Using `super` in this way works the same as the super references discussed in Chapter 4 (see “Easy Prototype Access With Super References”). The `this` value is automatically set correctly so you can make a simple method call.

# Inherited Static Members

If a base class has static members, then those static members are also available on the derived class.
Inheritance works like that in other languages, but this is a new concept for JavaScript. Here’s an example: In this code, a new static `create()` method is added to the `Rectangle` class. Through inheritance, that method is available as `Square.create()` and behaves in the same manner as the `Rectangle.create()` method.

# Derived Classes from Expressions

Perhaps the most powerful aspect of derived classes in ECMAScript 6 is the ability to derive a class from an expression. You can use `extends` with any expression as long as the expression resolves to a function with `[[Construct]]` and a prototype. For instance: `Rectangle` is defined as an ECMAScript 5-style constructor while `Square` is a class. Since `Rectangle` has `[[Construct]]` and a prototype, the `Square` class can still inherit directly from it. Accepting any type of expression after `extends` offers powerful possibilities, such as dynamically determining what to inherit from. For example: The `getBase()` function is called directly as part of the class declaration. It returns `Rectangle`, making this example functionally equivalent to the previous one. And since you can determine the base class dynamically, it’s possible to create different inheritance approaches. For instance, you can effectively create mixins: In this example, mixins are used instead of classical inheritance. The `mixin()` function takes any number of arguments that represent mixin objects. It creates a function called `base` and assigns the properties of each mixin object to the prototype. The function is then returned so `Square` can use `extends`. Keep in mind that since `extends` is still used, you are required to call `super()` in the constructor. The instance of `Square` has both `getArea()` from `AreaMixin` and `serialize()` from `SerializableMixin`. This is accomplished through prototypal inheritance. The `mixin()` function dynamically populates the prototype of a new function with all of the own properties of each mixin.
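A sketch of such a `mixin()` helper, using the `AreaMixin` and `SerializableMixin` names from the discussion (the exact implementation here is illustrative):

```javascript
let SerializableMixin = {
    serialize() {
        return JSON.stringify(this);
    }
};

let AreaMixin = {
    getArea() {
        return this.length * this.width;
    }
};

function mixin(...mixins) {
    // a plain function to serve as the base of the class hierarchy
    var base = function() {};

    // copy every mixin's own enumerable properties onto the prototype
    Object.assign(base.prototype, ...mixins);

    return base;
}

class Square extends mixin(AreaMixin, SerializableMixin) {
    constructor(length) {
        super();                // extends is used, so super() is required
        this.length = length;
        this.width = length;
    }
}

let x = new Square(3);

console.log(x.getArea());       // 9
console.log(x.serialize());     // {"length":3,"width":3}
```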
(Keep in mind that if multiple mixins have the same property, only the last property added will remain.)

# Inheriting from Built-ins

For almost as long as JavaScript arrays have existed, developers have wanted to create their own special array types through inheritance. In ECMAScript 5 and earlier, this wasn’t possible. Attempting to use classical inheritance didn’t result in functioning code. For example: The `console.log()` output at the end of this code shows how using the classical form of JavaScript inheritance on an array results in unexpected behavior. The `length` and numeric properties on an instance of `MyArray` don’t behave the same as they do for the built-in array because this functionality isn’t covered either by `Array.apply()` or by assigning the prototype. One goal of ECMAScript 6 classes is to allow inheritance from all built-ins. In order to accomplish this, the inheritance model of classes is slightly different than the classical inheritance model found in ECMAScript 5 and earlier: In ECMAScript 5 classical inheritance, the value of `this` is first created by the derived type (for example, `MyArray`), and then the base type constructor (like the `Array.apply()` method) is called. That means `this` starts out as an instance of `MyArray` and then is decorated with additional properties from `Array`. In ECMAScript 6 class-based inheritance, the value of `this` is first created by the base (`Array`) and then modified by the derived class constructor (`MyArray`). The result is that `this` starts with all the built-in functionality of the base and correctly receives all functionality related to it. The following example shows a class-based special array in action: `MyArray` inherits directly from `Array` and therefore works like `Array`. Interacting with numeric properties updates the `length` property, and manipulating the `length` property updates the numeric properties.
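A sketch of a class-based derived array behaving as just described:

```javascript
class MyArray extends Array {
    // empty: the default derived constructor calls super(...args)
}

let colors = new MyArray();
colors[0] = "red";

console.log(colors.length);     // 1

colors.length = 0;              // truncating length removes the items

console.log(colors[0]);         // undefined
```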
That means you can both properly inherit from `Array` to create your own derived array classes and inherit from other built-ins as well. With all this added functionality, ECMAScript 6 and derived classes have effectively removed the last special case of inheriting from built-ins, but that case is still worth exploring. # The Symbol.species Property An interesting aspect of inheriting from built-ins is that any method that returns an instance of the built-in will automatically return a derived class instance instead. So, if you have a derived class called `MyArray` that inherits from `Array`, methods such as `slice()` return an instance of `MyArray`. For example: In this code, the `slice()` method returns a `MyArray` instance. The `slice()` method is inherited from `Array` and returns an instance of `Array` normally. Behind the scenes, it’s the `Symbol.species` property that is making this change. The `Symbol.species` well-known symbol is used to define a static accessor property that returns a function. That function is a constructor to use whenever an instance of the class must be created inside of an instance method (instead of using the constructor). The following built-in types have `Symbol.species` defined: * `Array` * `ArrayBuffer` (discussed in Chapter 10) * `Map` * `Promise` * `RegExp` * `Set` * Typed Arrays (discussed in Chapter 10) Each of these types has a default `Symbol.species` property that returns `this`, meaning the property will always return the constructor function. If you were to implement that functionality on a custom class, the code would look like this: In this example, the `Symbol.species` well-known symbol is used to assign a static accessor property to `MyClass`. Note that there’s only a getter without a setter, because changing the species of a class isn’t possible. Any call to `MyClass[Symbol.species]` returns `MyClass`.
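A sketch of that custom class (the `value` property and `clone()` method are illustrative; only the static `Symbol.species` getter is essential to the pattern):

```javascript
class MyClass {
    // default species: return the constructor that was actually used
    static get [Symbol.species]() {
        return this;
    }

    constructor(value) {
        this.value = value;
    }

    clone() {
        // create the new instance via the species constructor,
        // not by naming MyClass directly
        return new this.constructor[Symbol.species](this.value);
    }
}
```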
The `clone()` method uses that definition to return a new instance rather than directly using `MyClass`, which allows derived classes to override that value. For example: Here, `MyDerivedClass1` inherits from `MyClass` and doesn’t change the `Symbol.species` property. When `clone()` is called, it returns an instance of `MyDerivedClass1` because `MyDerivedClass1[Symbol.species]` returns `MyDerivedClass1`. The `MyDerivedClass2` class inherits from `MyClass` and overrides `Symbol.species` to return `MyClass`. When `clone()` is called on an instance of `MyDerivedClass2`, the return value is an instance of `MyClass`. Using `Symbol.species`, any derived class can determine what type of value should be returned when a method returns an instance. For instance, `Array` uses `Symbol.species` to specify the class to use for methods that return an array. In a class derived from `Array`, you can determine the type of object returned from the inherited methods, such as: This code overrides `Symbol.species` on `MyArray`, which inherits from `Array`. All of the inherited methods that return arrays will now use an instance of `Array` instead of `MyArray`. In general, you should use the `Symbol.species` property whenever you might want to use `this.constructor` in a class method. Doing so allows derived classes to override the return type easily. Additionally, if you are creating derived classes from a class that has `Symbol.species` defined, be sure to use that value instead of the constructor. ### Using new.target in Class Constructors In Chapter 3, you learned about `new.target` and how its value changes depending on how a function is called. You can also use `new.target` in class constructors to determine how the class is being invoked. In the simple case, `new.target` is equal to the constructor function for the class, as in this example: This code shows that `new.target` is equivalent to `Rectangle` when `new Rectangle(3, 4)` is called.
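A small sketch of that simple case:

```javascript
class Rectangle {
    constructor(length, width) {
        // new.target is the constructor invoked directly with `new`
        console.log(new.target === Rectangle);
        this.length = length;
        this.width = width;
    }
}

let obj = new Rectangle(3, 4);      // outputs true
```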
Class constructors can’t be called without `new` , so the `new.target` property is always defined inside of class constructors. But the value may not always be the same. Consider this code: `Square` is calling the `Rectangle` constructor, so `new.target` is equal to `Square` when the `Rectangle` constructor is called. This is important because it gives each constructor the ability to alter its behavior based on how it’s being called. For instance, you can create an abstract base class (one that can’t be instantiated directly) by using `new.target` as follows: In this example, the `Shape` class constructor throws an error whenever `new.target` is `Shape` , meaning that `new Shape()` always throws an error. However, you can still use `Shape` as a base class, which is what `Rectangle` does. The `super()` call executes the `Shape` constructor and `new.target` is equal to `Rectangle` so the constructor continues without error. ECMAScript 6 classes make inheritance in JavaScript easier to use, so you don’t need to throw away any existing understanding of inheritance you might have from other languages. ECMAScript 6 classes start out as syntactic sugar for the classical inheritance model of ECMAScript 5, but add a lot of features to reduce mistakes. ECMAScript 6 classes work with prototypal inheritance by defining non-static methods on the class prototype, while static methods end up on the constructor itself. All methods are non-enumerable, a feature that better matches the behavior of built-in objects for which methods are typically non-enumerable by default. Additionally, class constructors can’t be called without `new` , ensuring that you can’t accidentally call a class as a function. Class-based inheritance allows you to derive a class from another class, function, or expression. This ability means you can call a function to determine the correct base to inherit from, allowing you to use mixins and other different composition patterns to create a new class. 
Inheritance works in such a way that inheriting from built-in objects like `Array` is now possible and works as expected. You can use `new.target` in class constructors to behave differently depending on how the class is called. The most common use is to create an abstract base class that throws an error when instantiated directly but still allows inheritance via other classes. Overall, classes are an important addition to JavaScript. They provide a more concise syntax and better functionality for defining custom object types in a safe, consistent manner. ## Improved Array Capabilities The array is a foundational JavaScript object. But while other aspects of JavaScript have evolved over time, arrays remained the same until ECMAScript 5 introduced several methods to make them easier to use. ECMAScript 6 continues to improve arrays by adding a lot more functionality, like new creation methods, several useful convenience methods, and the ability to make typed arrays. ### Creating Arrays Prior to ECMAScript 6, there were two primary ways to create arrays: the `Array` constructor and array literal syntax. Both approaches require listing array items individually and are otherwise fairly limited. Options for converting an array-like object (that is, an object with numeric indices and a `length` property) into an array were also limited and often required extra code. To make JavaScript arrays easier to create, ECMAScript 6 adds the `Array.of()` and `Array.from()` methods. # The Array.of() Method One reason ECMAScript 6 adds new creation methods to JavaScript is to help developers avoid a quirk of creating arrays with the `Array` constructor. The `new Array()` constructor actually behaves differently based on the type and number of arguments passed to it. For example: When the `Array` constructor is passed a single numeric value, the `length` property of the array is set to that value. 
If a single non-numeric value is passed, then that value becomes the one and only item in the array. If multiple values are passed (numeric or not), then those values become items in the array. This behavior is both confusing and risky, as you may not always be aware of the type of data being passed. ECMAScript 6 introduces `Array.of()` to solve this problem. The `Array.of()` method works similarly to the `Array` constructor but has no special case regarding a single numeric value. The `Array.of()` method always creates an array containing its arguments regardless of the number of arguments or the argument types. Here are some examples that use the `Array.of()` method: To create an array with the `Array.of()` method, just pass it the values you want in your array. The first example here creates an array containing two numbers, the second array contains one number, and the last array contains one string. This is similar to using an array literal, and you can use an array literal instead of `Array.of()` for native arrays most of the time. But if you ever need to pass the `Array` constructor into a function, then you might want to pass `Array.of()` instead to ensure consistent behavior. For example: In this code, the `createArray()` function accepts an array creator function and a value to insert into the array. You can pass `Array.of()` as the first argument to `createArray()` to create a new array. It would be dangerous to pass `Array` directly if you cannot guarantee that `value` won’t be a number. # The Array.from() Method Converting non-array objects into actual arrays has always been cumbersome in JavaScript. For instance, if you have an `arguments` object (which is array-like) and want to use it like an array, then you’d need to convert it first. 
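The constructor quirk and the `Array.of()` fix can be sketched together; the `createArray()` helper here just mirrors the pattern described in the text:

```javascript
let items = new Array(2);
console.log(items.length);          // 2 -- an empty array with length 2,
                                    // not an array containing the number 2

items = Array.of(2);
console.log(items.length);          // 1
console.log(items[0]);              // 2

// a function that accepts any array creator and a value to store
function createArray(arrayCreator, value) {
    return arrayCreator(value);
}

let result = createArray(Array.of, 3);
console.log(result.length);         // 1
```

Passing `Array` itself as `arrayCreator` would reintroduce the quirk whenever `value` happens to be a number.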
To convert an array-like object to an array in ECMAScript 5, you’d write a function like the one in this example: This approach manually creates a `result` array and copies each item from `arguments` into the new array. That works but takes a decent amount of code to perform a relatively simple operation. Eventually, developers discovered they could reduce the amount of code by calling the native `slice()` method for arrays on array-like objects, like this: This code is functionally equivalent to the previous example, and it works because it sets the `this` value for `slice()` to the array-like object. Since `slice()` needs only numeric indices and a `length` property to function correctly, any array-like object will work. Even though this technique requires less typing, calling ``` Array.prototype.slice.call(arrayLike) ``` doesn’t obviously translate to, “Convert `arrayLike` to an array.” Fortunately, ECMAScript 6 added the `Array.from()` method as an obvious, yet clean, way to convert objects into arrays. Given either an iterable or an array-like object as the first argument, the `Array.from()` method returns an array. Here’s a simple example: The `Array.from()` call creates a new array based on the items in `arguments` . So `args` is an instance of `Array` that contains the same values in the same positions as `arguments` . # Mapping Conversion If you want to take array conversion a step further, you can provide `Array.from()` with a mapping function as a second argument. That function operates on each value from the array-like object and converts it to some final form before storing the result at the appropriate index in the final array. For example: Here, `Array.from()` is passed `(value) => value + 1` as a mapping function, so it adds 1 to each item in the array before storing the item. 
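One way to sketch the conversion with a mapping function (the function name `translate` is illustrative):

```javascript
function translate() {
    // convert the array-like arguments object, adding 1 to each value
    return Array.from(arguments, (value) => value + 1);
}

let numbers = translate(1, 2, 3);
console.log(numbers);               // [2, 3, 4]
```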
If the mapping function is on an object, you can also optionally pass a third argument to `Array.from()` that represents the `this` value for the mapping function: This example passes `helper.add()` as the mapping function for the conversion. Since `helper.add()` uses the `this.diff` property, you need to provide the third argument to `Array.from()` specifying the value of `this` . Thanks to the third argument, `Array.from()` can easily convert data without calling `bind()` or specifying the `this` value in some other way. # Use on Iterables The `Array.from()` method works on both array-like objects and iterables. That means the method can convert any object with a `Symbol.iterator` property into an array. For example: Since the `numbers` object is an iterable, you can pass `numbers` directly to `Array.from()` to convert its values into an array. The mapping function adds one to each number so the resulting array contains 2, 3, and 4 instead of 1, 2, and 3. ### New Methods on All Arrays Continuing the trend from ECMAScript 5, ECMAScript 6 adds several new methods to arrays. The `find()` and `findIndex()` methods are meant to aid developers using arrays with any values, while `fill()` and `copyWithin()` are inspired by use cases for typed arrays, a form of array introduced in ECMAScript 6 that uses only numbers. # The find() and findIndex() Methods Prior to ECMAScript 5, searching through arrays was cumbersome because there were no built-in methods for doing so. ECMAScript 5 added the `indexOf()` and `lastIndexOf()` methods, finally allowing developers to search for specific values inside an array. These two methods were a big improvement, yet they were still fairly limited because you could only search for one value at a time. For example, if you wanted to find the first even number in a series of numbers, you’d need to write your own code to do so. ECMAScript 6 solved that problem by introducing the `find()` and `findIndex()` methods. 
Both `find()` and `findIndex()` accept two arguments: a callback function and an optional value to use for `this` inside the callback function. The callback function is passed an array element, the index of that element in the array, and the array itself–the same arguments passed to methods like `map()` and `forEach()` . The callback should return `true` if the given value matches some criteria you define. Both `find()` and `findIndex()` also stop searching the array the first time the callback function returns `true` . The only difference between these methods is that `find()` returns the value whereas `findIndex()` returns the index at which the value was found. Here’s an example to demonstrate: This code calls `find()` and `findIndex()` to locate the first value in the `numbers` array that is greater than 33. The call to `find()` returns 35 and `findIndex()` returns 2, the location of 35 in the `numbers` array. Both `find()` and `findIndex()` are useful to find an array element that matches a condition rather than a value. If you only want to find a value, then `indexOf()` and `lastIndexOf()` are better choices. # The fill() Method The `fill()` method fills one or more array elements with a specific value. When passed a value, `fill()` overwrites all of the values in an array with that value. For example: Here, the call to `numbers.fill(1)` changes all values in `numbers` to `1` . If you only want to change some of the elements, rather than all of them, you can optionally include a start index and an exclusive end index, like this: In the `numbers.fill(1,2)` call, the `2` indicates to start filling elements at index 2. The exclusive end index isn’t specified with a third argument, so `numbers.length` is used as the end index, meaning the last two elements in `numbers` are filled with `1` . The ``` numbers.fill(0, 1, 3) ``` operation fills array elements at indices 1 and 2 with `0` . 
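The `fill()` variations described above might look like this in practice:

```javascript
let numbers = [1, 2, 3, 4];
numbers.fill(1);
console.log(numbers.toString());    // 1,1,1,1

numbers = [1, 2, 3, 4];
numbers.fill(1, 2);                 // fill from index 2 to the end
console.log(numbers.toString());    // 1,2,1,1

numbers.fill(0, 1, 3);              // fill indices 1 and 2 (3 is exclusive)
console.log(numbers.toString());    // 1,0,0,1
```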
Calling `fill()` with the second and third arguments allows you to fill multiple array elements at once without overwriting the entire array. # The copyWithin() Method The `copyWithin()` method is similar to `fill()` in that it changes multiple array elements at the same time. However, instead of specifying a single value to assign to array elements, `copyWithin()` lets you copy array element values from the array itself. To accomplish that, you need to pass two arguments to the `copyWithin()` method: the index where the method should start filling values and the index where the values to be copied begin. For instance, to copy the values from the first two elements in an array to the last two items in the array, you can do the following: This code pastes values into `numbers` beginning from index 2, so both indices 2 and 3 will be overwritten. Passing `0` as the second argument to `copyWithin()` indicates to start copying values from index 0 and continue until there are no more elements to copy into. By default, `copyWithin()` always copies values up to the end of the array, but you can provide an optional third argument to limit how many elements will be overwritten. That third argument is an exclusive end index at which copying of values stops. Here’s an example: In this example, only the value in index 0 is copied because the optional end index is set to `1` . The last element in the array remains unchanged. The use cases for `fill()` and `copyWithin()` may not be obvious to you at this point. That’s because these methods originated on typed arrays and were added to regular arrays for consistency. As you’ll learn in the next section, however, if you use typed arrays for manipulating the bits of a number, these methods become a lot more useful. ### Typed Arrays Typed arrays are special-purpose arrays designed to work with numeric types (not all types, as the name might imply). 
The origin of typed arrays can be traced to WebGL, a port of OpenGL ES 2.0 designed for use in web pages with the `<canvas>` element. Typed arrays were created as part of the port to provide fast bitwise arithmetic in JavaScript. Arithmetic on native JavaScript numbers was too slow for WebGL because the numbers were stored in a 64-bit floating-point format and converted to 32-bit integers as needed. Typed arrays were introduced to circumvent this limitation and provide better performance for arithmetic operations. The concept is that any single number can be treated like an array of bits and thus can use the familiar methods available on JavaScript arrays. ECMAScript 6 adopted typed arrays as a formal part of the language to ensure better compatibility across JavaScript engines and interoperability with JavaScript arrays. While the ECMAScript 6 version of typed arrays is not exactly the same as the WebGL version, there are enough similarities to make the ECMAScript 6 version an evolution of the WebGL version rather than a different approach. # Numeric Data Types JavaScript numbers are stored in IEEE 754 format, which uses 64 bits to store a floating-point representation of the number. This format represents both integers and floats in JavaScript, with conversion between the two formats happening frequently as numbers change. Typed arrays allow the storage and manipulation of eight different numeric types: * Signed 8-bit integer (int8) * Unsigned 8-bit integer (uint8) * Signed 16-bit integer (int16) * Unsigned 16-bit integer (uint16) * Signed 32-bit integer (int32) * Unsigned 32-bit integer (uint32) * 32-bit float (float32) * 64-bit float (float64) If you represent a number that fits in an int8 as a normal JavaScript number, you’ll waste 56 bits. Those bits might better be used to store additional int8 values or any other number that requires less than 56 bits. Using bits more efficiently is one of the use cases typed arrays address.
All of the operations and objects related to typed arrays are centered around these eight data types. In order to use them, though, you’ll need to create an array buffer to store the data. # Array Buffers The foundation for all typed arrays is an array buffer, which is a memory location that can contain a specified number of bytes. Creating an array buffer is akin to calling `malloc()` in C to allocate memory without specifying what the memory block contains. You can create an array buffer by using the `ArrayBuffer` constructor as follows: Just pass the number of bytes the array buffer should contain when you call the constructor. This `let` statement creates an array buffer 10 bytes long. Once an array buffer is created, you can retrieve the number of bytes in it by checking the `byteLength` property: You can also use the `slice()` method to create a new array buffer that contains part of an existing array buffer. The `slice()` method works like the `slice()` method on arrays: you pass it the start index and end index as arguments, and it returns a new `ArrayBuffer` instance comprised of those elements from the original. For example: In this code, `buffer2` is created by extracting the bytes at indices 4 and 5. Just like when you call the array version of this method, the second argument to `slice()` is exclusive. Of course, creating a storage location isn’t very helpful without being able to write data into that location. To do so, you’ll need to create a view. # Manipulating Array Buffers with Views Array buffers represent memory locations, and views are the interfaces you’ll use to manipulate that memory. A view operates on an array buffer or a subset of an array buffer’s bytes, reading and writing data in one of the numeric data types. The `DataView` type is a generic view on an array buffer that allows you to operate on all eight numeric data types. To use a `DataView` , first create an instance of `ArrayBuffer` and use it to create a new `DataView` . 
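A minimal sketch of allocating a buffer, slicing it, and wrapping it in a `DataView`:

```javascript
let buffer = new ArrayBuffer(10);   // allocate 10 bytes
console.log(buffer.byteLength);     // 10

let buffer2 = buffer.slice(4, 6);   // bytes at indices 4 and 5 (6 is exclusive)
console.log(buffer2.byteLength);    // 2

let view = new DataView(buffer);    // a view over all 10 bytes
```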
Here’s an example: The `view` object in this example has access to all 10 bytes in `buffer` . You can also create a view over just a portion of a buffer. Just provide a byte offset and, optionally, the number of bytes to include from that offset. When a number of bytes isn’t included, the `DataView` will go from the offset to the end of the buffer by default. For example: Here, `view` operates only on the bytes at indices 5 and 6. This approach allows you to create several views over the same array buffer, which can be useful if you want to use a single memory location for an entire application rather than dynamically allocating space as needed. # Retrieving View Information You can retrieve information about a view by fetching the following read-only properties: * `buffer` - The array buffer that the view is tied to * `byteOffset` - The second argument to the `DataView` constructor, if provided (0 by default) * `byteLength` - The third argument to the `DataView` constructor, if provided (the buffer’s `byteLength` by default) Using these properties, you can inspect exactly where a view is operating, like this: This code creates `view1` , a view over the entire array buffer, and `view2` , which operates on a small section of the array buffer. These views have equivalent `buffer` properties because both work on the same array buffer. The `byteOffset` and `byteLength` are different for each view, however. They reflect the portion of the array buffer where each view operates. Of course, reading information about memory isn’t very useful on its own. You need to write data into and read data out of that memory to get any benefit. # Reading and Writing Data For each of JavaScript’s eight numeric data types, the `DataView` prototype has a method to write data and a method to read data from an array buffer. The method names all begin with either “set” or “get” and are followed by the data type abbreviation. 
For instance, here’s a list of the read and write methods that can operate on int8 and uint8 values: * `getInt8(byteOffset)` - Read an int8 starting at `byteOffset` * ``` setInt8(byteOffset, value) ``` - Write an int8 starting at `byteOffset` * `getUint8(byteOffset)` - Read an uint8 starting at `byteOffset` * ``` setUint8(byteOffset, value) ``` - Write an uint8 starting at `byteOffset` The “get” methods accept a single argument: the byte offset to read from. The “set” methods accept two arguments: the byte offset to write at and the value to write. Though I’ve only shown the methods you can use with 8-bit values, the same methods exist for operating on 16- and 32-bit values. Just replace the `8` in each name with `16` or `32` . Alongside all those integer methods, `DataView` also has the following read and write methods for floating point numbers: * ``` getFloat32(byteOffset, littleEndian) ``` - Read a float32 starting at `byteOffset` * ``` setFloat32(byteOffset, value, littleEndian) ``` - Write a float32 starting at `byteOffset` * ``` getFloat64(byteOffset, littleEndian) ``` - Read a float64 starting at `byteOffset` * ``` setFloat64(byteOffset, value, littleEndian) ``` - Write a float64 starting at `byteOffset` The float-related methods are only different in that they accept an additional optional boolean indicating whether the value should be read or written as little-endian. (Little-endian means the least significant byte is at byte 0, instead of in the last byte.) To see a “set” and a “get” method in action, consider the following example: This code uses a two-byte array buffer to store two int8 values. The first value is set at offset 0 and the second is at offset 1, reflecting that each value spans a full byte (8 bits). Those values are later retrieved from their positions with the `getInt8()` method. While this example uses int8 values, you can use any of the eight numeric types with their corresponding methods. 
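The two-byte example described above can be sketched as:

```javascript
let buffer = new ArrayBuffer(2),
    view = new DataView(buffer);

view.setInt8(0, 5);                 // write an int8 at byte offset 0
view.setInt8(1, -1);                // write an int8 at byte offset 1

console.log(view.getInt8(0));       // 5
console.log(view.getInt8(1));       // -1
```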
Views are interesting because they allow you to read and write in any format at any point in time, regardless of how data was previously stored. For instance, writing two int8 values and reading the buffer with an int16 method works just fine, as in this example: The call to `view.getInt16(0)` reads all bytes in the view and interprets those bytes as the number 1535. To understand why this happens, take a look at Figure 10-1, which shows what each `setInt8()` line does to the array buffer. The array buffer starts with 16 bits that are all zero. Writing `5` to the first byte with `setInt8()` introduces a couple of 1s (in 8-bit representation, 5 is 00000101). Writing -1 to the second byte sets all bits in that byte to 1, which is the two’s complement representation of -1. After the second `setInt8()` call, the array buffer contains 16 bits, and `getInt16()` reads those bits as a single 16-bit integer, which is 1535 in decimal. The `DataView` object is perfect for use cases that mix different data types in this way. However, if you’re only using one specific data type, then the type-specific views are a better choice. # Typed Arrays Are Views ECMAScript 6 typed arrays are actually type-specific views for array buffers. Instead of using a generic `DataView` object to operate on an array buffer, you can use objects that enforce specific data types. There are eight type-specific views corresponding to the eight numeric data types, plus an additional option for `uint8` values. Table 10-1 shows an abbreviated version of the complete list of type-specific views from section 22.2 of the ECMAScript 6 specification. 
| Constructor Name | Element Size (in bytes) | Description | Equivalent C Type |
| --- | --- | --- | --- |
| `Int8Array` | 1 | 8-bit two’s complement signed integer | `signed char` |
| `Uint8Array` | 1 | 8-bit unsigned integer | `unsigned char` |
| `Uint8ClampedArray` | 1 | 8-bit unsigned integer (clamped conversion) | `unsigned char` |
| `Int16Array` | 2 | 16-bit two’s complement signed integer | `short` |
| `Uint16Array` | 2 | 16-bit unsigned integer | `unsigned short` |
| `Int32Array` | 4 | 32-bit two’s complement signed integer | `int` |
| `Uint32Array` | 4 | 32-bit unsigned integer | `unsigned int` |
| `Float32Array` | 4 | 32-bit IEEE floating point | `float` |
| `Float64Array` | 8 | 64-bit IEEE floating point | `double` |

The left column lists the typed array constructors, and the other columns describe the data each typed array can contain. A `Uint8ClampedArray` is the same as a `Uint8Array` unless values in the array buffer are less than 0 or greater than 255. A `Uint8ClampedArray` converts values lower than 0 to 0 (-1 becomes 0, for instance) and converts values higher than 255 to 255 (so 300 becomes 255). Typed array operations only work on a particular type of data. For example, all operations on `Int8Array` use `int8` values. The size of an element in a typed array also depends on the type of array. While an element in an `Int8Array` is a single byte long, `Float64Array` uses eight bytes per element. Fortunately, the elements are accessed using numeric indices just like regular arrays, allowing you to avoid the somewhat awkward calls to the “set” and “get” methods of `DataView`. # Creating Type-Specific Views Typed array constructors accept multiple types of arguments, so there are a few ways to create typed arrays. First, you can create a new typed array by passing the same arguments `DataView` takes (an array buffer, an optional byte offset, and an optional byte length). For example: In this code, the two views are both `Int8Array` instances that use `buffer`. Both `view1` and `view2` have the same `buffer`, `byteOffset`, and `byteLength` properties that exist on `DataView` instances.
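The two-view setup described above might look like this (the offset and length values are illustrative):

```javascript
let buffer = new ArrayBuffer(10),
    view1 = new Int8Array(buffer),          // covers the whole buffer
    view2 = new Int8Array(buffer, 5, 2);    // bytes 5 and 6 only

console.log(view1.buffer === buffer);       // true
console.log(view2.buffer === buffer);       // true
console.log(view1.byteOffset);              // 0
console.log(view2.byteOffset);              // 5
```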
It’s easy to switch to using a typed array wherever you use a `DataView` so long as you only work with one numeric type. The second way to create a typed array is to pass a single number to the constructor. That number represents the number of elements (not bytes) to allocate to the array. The constructor will create a new buffer with the correct number of bytes to represent that number of array elements, and you can access the number of elements in the array by using the `length` property. For example: The `ints` array is created with space for two elements. Each 16-bit integer requires two bytes per value, so the array is allocated four bytes. The `floats` array is created to hold five elements, so the number of bytes required is 20 (four bytes per element). In both cases, a new buffer is created and can be accessed using the `buffer` property if necessary. The third way to create a typed array is to pass an object as the only argument to the constructor. The object can be any of the following: * A Typed Array - Each element is copied into a new element on the new typed array. For example, if you pass an int8 to the `Int16Array` constructor, the int8 values would be copied into an int16 array. The new typed array has a different array buffer than the one that was passed in. * An Iterable - The object’s iterator is called to retrieve the items to insert into the typed array. The constructor will throw an error if any elements are invalid for the view type. * An Array - The elements of the array are copied into a new typed array. The constructor will throw an error if any elements are invalid for the type. * An Array-Like Object - Behaves the same as an array. In each of these cases, a new typed array is created with the data from the source object. This can be especially useful when you want to initialize a typed array with some values, like this: This example creates an `Int16Array` and initializes it with an array of two values. 
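A sketch of initializing one typed array from another:

```javascript
let ints1 = new Int16Array([25, 50]),   // initialized from a regular array
    ints2 = new Int32Array(ints1);      // values copied into a new buffer

console.log(ints1.buffer === ints2.buffer);     // false
console.log(ints1.byteLength);                  // 4
console.log(ints2.byteLength);                  // 8
console.log(ints2[0]);                          // 25
console.log(ints2[1]);                          // 50
```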
Then, an `Int32Array` is created and passed the `Int16Array` . The values 25 and 50 are copied from `ints1` into `ints2` as the two typed arrays have completely separate buffers. The same numbers are represented in both typed arrays, but `ints2` has eight bytes to represent the data while `ints1` has only four. ### Similarities Between Typed and Regular Arrays Typed arrays and regular arrays are similar in several ways, and as you’ve already seen in this chapter, typed arrays can be used like regular arrays in many situations. For instance, you can check how many elements are in a typed array using the `length` property, and you can access a typed array’s elements directly using numeric indices. For example: In this code, a new `Int16Array` with two items is created. The items are read from and written to using their numeric indices, and those values are automatically stored and converted into int16 values as part of the operation. The similarities don’t end there, though. # Common Methods Typed arrays also include a large number of methods that are functionally equivalent to regular array methods. You can use the following array methods on typed arrays: * `copyWithin()` * `entries()` * `fill()` * `filter()` * `find()` * `findIndex()` * `forEach()` * `indexOf()` * `join()` * `keys()` * `lastIndexOf()` * `map()` * `reduce()` * `reduceRight()` * `reverse()` * `slice()` * `some()` * `sort()` * `values()` Keep in mind that while these methods act like their counterparts on `Array.prototype` , they are not exactly the same. The typed array methods have additional checks for numeric type safety and, when an array is returned, will return a typed array instead of a regular array (due to `Symbol.species` ). Here’s a simple example to demonstrate the difference: This code uses the `map()` method to create a new array based on the values in `ints` . The mapping function doubles each value in the array and returns a new `Int16Array` . 
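A sketch of that `map()` call on a typed array:

```javascript
let ints = new Int16Array([25, 50]),
    mapped = ints.map((v) => v * 2);        // doubles each value

console.log(mapped instanceof Int16Array);  // true, via Symbol.species
console.log(mapped[0]);                     // 50
console.log(mapped[1]);                     // 100
```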
# The Same Iterators Typed arrays have the same three iterators as regular arrays, too. Those are the `entries()` method, the `keys()` method, and the `values()` method. That means you can use the spread operator and `for-of` loops with typed arrays just like you would with regular arrays. For example: This code creates a new array called `intsArray` containing the same data as the typed array `ints` . As with other iterables, the spread operator makes converting typed arrays into regular arrays easy. # of() and from() Methods Lastly, all typed arrays have static `of()` and `from()` methods that work like the `Array.of()` and `Array.from()` methods. The difference is that the methods on typed arrays return a typed array instead of a regular array. Here are some examples that use these methods to create typed arrays: The `of()` and `from()` methods in this example are used to create an `Int16Array` and a `Float32Array` , respectively. These methods ensure that typed arrays can be created just as easily as regular arrays. ### Differences Between Typed and Regular Arrays The most important difference between typed arrays and regular arrays is that typed arrays are not regular arrays. Typed arrays don’t inherit from `Array` and `Array.isArray()` returns `false` when passed a typed array. For example: Since the `ints` variable is a typed array, it isn’t an instance of `Array` and cannot otherwise be identified as an array. This distinction is important because while typed arrays and regular arrays are similar, there are many ways in which typed arrays behave differently. # Behavioral Differences While regular arrays can grow and shrink as you interact with them, typed arrays always remain the same size. You cannot assign a value to a nonexistent numeric index in a typed array like you can with regular arrays, as typed arrays ignore the operation. Here’s an example: Despite assigning `5` to the numeric index `2` in this example, the `ints` array does not grow at all. 
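The fixed-size behavior just described can be sketched as:

```javascript
let ints = new Int16Array([25, 50]);

console.log(ints.length);   // 2

ints[2] = 5;                // out-of-bounds write is silently ignored
console.log(ints.length);   // still 2
console.log(ints[2]);       // undefined
```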
The `length` remains the same and the value is thrown away. Typed arrays also have checks to ensure that only valid data types are used. Zero is used in place of any invalid values. For example: This code attempts to use the string value `"hi"` in an `Int16Array`. Of course, strings are invalid data types in typed arrays, so the value is inserted as `0` instead. The `length` of the array is still one, and even though the `ints[0]` slot exists, it just contains `0`. All methods that modify values in a typed array enforce the same restriction. For example, if the function passed to `map()` returns an invalid value for the typed array, then `0` is used instead: Since the string value `"hi"` isn’t a 16-bit integer, it’s replaced with `0` in the resulting array. Thanks to this error correction behavior, typed array methods don’t have to worry about throwing errors when invalid data is present, because there will never be invalid data in the array. # Missing Methods While typed arrays do have many of the same methods as regular arrays, they also lack several array methods. The following methods are not available on typed arrays: * `concat()` * `pop()` * `push()` * `shift()` * `splice()` * `unshift()` Except for the `concat()` method, the methods in this list can change the size of an array. Typed arrays can’t change size, which is why these aren’t available for typed arrays. The `concat()` method isn’t available because the result of concatenating two typed arrays (especially if they deal with different data types) would be uncertain, and that would go against the reason for using typed arrays in the first place. # Additional Methods Finally, typed arrays have two methods not present on regular arrays: the `set()` and `subarray()` methods. These two methods are opposites in that `set()` copies another array into an existing typed array, whereas `subarray()` extracts part of an existing typed array into a new typed array.
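A sketch of both methods, using the values walked through in the following paragraphs:

```javascript
let ints = new Int16Array(4);

// Copy values in: first at the default offset of 0, then at offset 2.
ints.set([25, 50]);
ints.set([75, 100], 2);
console.log(ints.toString());       // "25,50,75,100"

let subints1 = ints.subarray(),     // clone of all four elements
    subints2 = ints.subarray(2),    // elements from index 2 onward
    subints3 = ints.subarray(1, 3); // middle two elements (end is exclusive)

console.log(subints1.toString());   // "25,50,75,100"
console.log(subints2.toString());   // "75,100"
console.log(subints3.toString());   // "50,75"
```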
The `set()` method accepts an array (either typed or regular) and an optional offset at which to insert the data; if you pass nothing, the offset defaults to zero. The data from the array argument is copied into the destination typed array while ensuring only valid data types are used. Here’s an example: This code creates an `Int16Array` with four elements. The first call to `set()` copies two values to the first and second elements in the array. The second call to `set()` uses an offset of `2` to indicate that the values should be placed in the array starting at the third element. The `subarray()` method accepts an optional start and end index (the end index is exclusive, as in the `slice()` method) and returns a new typed array. You can also omit both arguments to create a clone of the typed array. For example: Three typed arrays are created from the original `ints` array in this example. The `subints1` array is a clone of `ints` that contains the same information. Since the `subints2` array copies data starting from index 2, it only contains the last two elements of the `ints` array (75 and 100). The `subints3` array contains only the middle two elements of the `ints` array, as `subarray()` was called with both a start and an end index. ### Summary ECMAScript 6 continues the work of ECMAScript 5 by making arrays more useful. There are two more ways to create arrays: the `Array.of()` and `Array.from()` methods. The `Array.from()` method can also convert iterables and array-like objects into arrays. Both methods are inherited by derived array classes and do not use the `Symbol.species` property to determine what type of value should be returned (other inherited methods do use `Symbol.species` when returning an array). There are also several new methods on arrays. The `fill()` and `copyWithin()` methods allow you to alter array elements in-place. The `find()` and `findIndex()` methods are useful for finding the first element in an array that matches some criteria.
The former returns the first element that fits the criteria, and the latter returns the element’s index. Typed arrays are not technically arrays, as they do not inherit from `Array` , but they do look and behave a lot like arrays. Typed arrays contain one of eight different numeric data types and are built upon `ArrayBuffer` objects that represent the underlying bits of a number or series of numbers. Typed arrays are a more efficient way of doing bitwise arithmetic because the values are not converted back and forth between formats, as is the case with the JavaScript number type. ## Promises and Asynchronous Programming One of the most powerful aspects of JavaScript is how easily it handles asynchronous programming. As a language created for the Web, JavaScript needed to be able to respond to asynchronous user interactions such as clicks and key presses from the beginning. Node.js further popularized asynchronous programming in JavaScript by using callbacks as an alternative to events. As more and more programs started using asynchronous programming, events and callbacks were no longer powerful enough to support everything developers wanted to do. Promises are the solution to this problem. Promises are another option for asynchronous programming, and they work like futures and deferreds do in other languages. A promise specifies some code to be executed later (as with events and callbacks) and also explicitly indicates whether the code succeeded or failed at its job. You can chain promises together based on success or failure in ways that make your code easier to understand and debug. To have a good understanding of how promises work, however, it’s important to understand some of the basic concepts upon which they are built. ### Asynchronous Programming Background JavaScript engines are built on the concept of a single-threaded event loop. Single-threaded means that only one piece of code is ever executed at a time. 
Contrast this with languages like Java or C++, where threads can allow multiple different pieces of code to execute at the same time. Maintaining and protecting state when multiple pieces of code can access and change that state is a difficult problem and a frequent source of bugs in thread-based software. JavaScript engines can only execute one piece of code at a time, so they need to keep track of code that is meant to run. That code is kept in a job queue. Whenever a piece of code is ready to be executed, it is added to the job queue. When the JavaScript engine is finished executing code, the event loop executes the next job in the queue. The event loop is a process inside the JavaScript engine that monitors code execution and manages the job queue. Keep in mind that as a queue, job execution runs from the first job in the queue to the last. # The Event Model When a user clicks a button or presses a key on the keyboard, an event like `onclick` is triggered. That event might respond to the interaction by adding a new job to the back of the job queue. This is JavaScript’s most basic form of asynchronous programming. The event handler code doesn’t execute until the event fires, and when it does execute, it has the appropriate context. For example: In this code, `console.log("Clicked")` will not be executed until `button` is clicked. When `button` is clicked, the function assigned to `onclick` is added to the back of the job queue and will be executed when all other jobs ahead of it are complete. Events work well for simple interactions, but chaining multiple separate asynchronous calls together is more complicated because you must keep track of the event target (`button` in the previous example) for each event. Additionally, you need to ensure all appropriate event handlers are added before the first time an event occurs. For instance, if `button` were clicked before `onclick` is assigned, nothing would happen.
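In a browser, the example would look something like the sketch below. The `button` object here is a hypothetical stand-in for a DOM element (which you would normally retrieve with something like `document.getElementById()`), so the sketch can run outside a browser:

```javascript
// Hypothetical stub standing in for a DOM button element.
let button = {
    onclick: null,
    click() {                   // simulates a user click
        if (this.onclick) {
            this.onclick();
        }
    }
};

// The handler is queued for execution only when the event fires.
button.onclick = function() {
    console.log("Clicked");
};
```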
So while events are useful for responding to user interactions and similar infrequent functionality, they aren’t very flexible for more complex needs. # The Callback Pattern When Node.js was created, it advanced the asynchronous programming model by popularizing the callback pattern of programming. The callback pattern is similar to the event model because the asynchronous code doesn’t execute until a later point in time. It’s different because the function to call is passed in as an argument, as shown here: This example uses the traditional Node.js error-first callback style. The `readFile()` function is intended to read from a file on disk (specified as the first argument) and then execute the callback (the second argument) when complete. If there’s an error, the `err` argument of the callback is an error object; otherwise, the `contents` argument contains the file contents as a string. Using the callback pattern, `readFile()` begins executing immediately and pauses when it starts reading from the disk. That means `console.log("Hi!")` is output immediately after `readFile()` is called, before `console.log(contents)` prints anything. When `readFile()` finishes, it adds a new job to the end of the job queue with the callback function and its arguments. That job is then executed upon completion of all other jobs ahead of it. The callback pattern is more flexible than events because chaining multiple calls together is easier with callbacks. For example: In this code, a successful call to `readFile()` results in another asynchronous call, this time to the `writeFile()` function. Note that the same basic pattern of checking `err` is present in both functions. When `readFile()` is complete, it adds a job to the job queue that results in `writeFile()` being called (assuming no errors). Then, `writeFile()` adds a job to the job queue when it finishes. This pattern works fairly well, but you can quickly find yourself in callback hell.
Callback hell occurs when you nest too many callbacks, like this: Nesting multiple method calls as this example does creates a tangled web of code that is hard to understand and debug. Callbacks also present problems when you want to implement more complex functionality. What if you want two asynchronous operations to run in parallel and notify you when they’re both complete? What if you’d like to start two asynchronous operations at a time but only take the result of the first one to complete? In these cases, you’d need to track multiple callbacks and cleanup operations, and promises greatly improve such situations. ### Promise Basics A promise is a placeholder for the result of an asynchronous operation. Instead of subscribing to an event or passing a callback to a function, the function can return a promise, like this: In this code, `readFile()` doesn’t actually start reading the file immediately; that will happen later. Instead, the function returns a promise object representing the asynchronous read operation so you can work with it in the future. Exactly when you’ll be able to work with that result depends entirely on how the promise’s lifecycle plays out. # The Promise Lifecycle Each promise goes through a short lifecycle starting in the pending state, which indicates that the asynchronous operation hasn’t completed yet. A pending promise is considered unsettled. The promise in the last example is in the pending state as soon as the `readFile()` function returns it. Once the asynchronous operation completes, the promise is considered settled and enters one of two possible states: * Fulfilled: The promise’s asynchronous operation has completed successfully. * Rejected: The promise’s asynchronous operation didn’t complete successfully due to either an error or some other cause. An internal `[[PromiseState]]` property is set to `"pending"` , `"fulfilled"` , or `"rejected"` to reflect the promise’s state. 
This property isn’t exposed on promise objects, so you can’t determine which state the promise is in programmatically. But you can take a specific action when a promise changes state by using the `then()` method. The `then()` method is present on all promises and takes two arguments. The first argument is a function to call when the promise is fulfilled. Any additional data related to the asynchronous operation is passed to this fulfillment function. The second argument is a function to call when the promise is rejected. Similar to the fulfillment function, the rejection function is passed any additional data related to the rejection. Both arguments to `then()` are optional, so you can listen for any combination of fulfillment and rejection. For example, consider this set of `then()` calls: All three `then()` calls operate on the same promise. The first call listens for both fulfillment and rejection. The second only listens for fulfillment; errors won’t be reported. The third just listens for rejection and doesn’t report success. Promises also have a `catch()` method that behaves the same as `then()` when only a rejection handler is passed. For example, the following `catch()` and `then()` calls are functionally equivalent: The intent behind `then()` and `catch()` is for you to use them in combination to properly handle the result of asynchronous operations. This system is better than events and callbacks because it makes whether the operation succeeded or failed completely clear. (Events tend not to fire when there’s an error, and in callbacks you must always remember to check the error argument.) Just know that if you don’t attach a rejection handler to a promise, all failures will happen silently. Always attach a rejection handler, even if the handler just logs the failure. A fulfillment or rejection handler will still be executed even if it is added to the job queue after the promise is already settled. 
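A sketch of attaching a handler after settlement:

```javascript
let promise = new Promise(function(resolve, reject) {
    resolve(42);
});

promise.then(function(value) {
    console.log(value);             // 42

    // The promise is already fulfilled here, yet this new handler
    // is still queued as a job and called with the same value.
    promise.then(function(value) {
        console.log(value);         // 42 again
    });
});
```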
This allows you to add new fulfillment and rejection handlers at any time and guarantee that they will be called. For example: In this code, the fulfillment handler adds another fulfillment handler to the same promise. The promise is already fulfilled at this point, so the new fulfillment handler is added to the job queue and called when ready. Rejection handlers work the same way. # Creating Unsettled Promises New promises are created using the `Promise` constructor. This constructor accepts a single argument: a function called the executor, which contains the code to initialize the promise. The executor is passed two functions named `resolve()` and `reject()` as arguments. The `resolve()` function is called when the executor has finished successfully to signal that the promise is ready to be resolved, while the `reject()` function indicates that the executor has failed. Here’s an example that uses a promise in Node.js to implement the `readFile()` function from earlier in this chapter: In this example, the native Node.js `fs.readFile()` asynchronous call is wrapped in a promise. The executor either passes the error object to the `reject()` function or passes the file contents to the `resolve()` function. Keep in mind that the executor runs immediately when `readFile()` is called. When either `resolve()` or `reject()` is called inside the executor, a job is added to the job queue to resolve the promise. This is called job scheduling, and if you’ve ever used the `setTimeout()` or `setInterval()` functions, then you’re already familiar with it. In job scheduling, you add a new job to the job queue to say, “Don’t execute this right now, but execute it later.” For instance, the `setTimeout()` function lets you specify a delay before a job is added to the queue: This code schedules a job to be added to the job queue after 500ms. 
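That scheduling example can be sketched as:

```javascript
// Schedule a job to be added to the job queue after 500ms.
setTimeout(function() {
    console.log("Timeout");         // printed second
}, 500);

console.log("Hi!");                 // printed first
```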
The two `console.log()` calls produce the following output: Thanks to the 500ms delay, the output from the function passed to `setTimeout()` is shown after the output from the `console.log("Hi!")` call. Promises work similarly. The promise executor executes immediately, before anything that appears after it in the source code. For instance: The output for this code is: Calling `resolve()` triggers an asynchronous operation. Functions passed to `then()` and `catch()` are executed asynchronously, as these are also added to the job queue. Here’s an example: The output for this example is: Note that even though the call to `then()` appears before the `console.log("Hi!")` line, it doesn’t actually execute until later (unlike the executor). That’s because fulfillment and rejection handlers are always added to the end of the job queue after the executor has completed. # Creating Settled Promises The `Promise` constructor is the best way to create unsettled promises due to the dynamic nature of what the promise executor does. But if you want a promise to represent just a single known value, then it doesn’t make sense to schedule a job that simply passes a value to the `resolve()` function. Instead, there are two methods that create settled promises given a specific value. # Using Promise.resolve() The `Promise.resolve()` method accepts a single argument and returns a promise in the fulfilled state. That means no job scheduling occurs, and you need to add one or more fulfillment handlers to the promise to retrieve the value. For example: This code creates a fulfilled promise so the fulfillment handler receives 42 as `value`. If a rejection handler were added to this promise, the rejection handler would never be called because the promise will never be in the rejected state. # Using Promise.reject() You can also create rejected promises by using the `Promise.reject()` method.
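Minimal sketches of both settled-promise factories:

```javascript
// A promise that is fulfilled from the start.
let fulfilled = Promise.resolve(42);

fulfilled.then(function(value) {
    console.log(value);             // 42
});

// A promise that is rejected from the start.
let rejected = Promise.reject(new Error("Explosion!"));

rejected.catch(function(error) {
    console.log(error.message);     // "Explosion!"
});
```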
This works like `Promise.resolve()` except the created promise is in the rejected state, as follows: Any additional rejection handlers added to this promise would be called, but not fulfillment handlers. # Non-Promise Thenables Both `Promise.resolve()` and `Promise.reject()` also accept non-promise thenables as arguments. When passed a non-promise thenable, these methods create a new promise whose state is determined by calling the thenable’s `then()` function. A non-promise thenable is created when an object has a `then()` method that accepts a `resolve` and a `reject` argument, like this: The `thenable` object in this example has no characteristics associated with a promise other than the `then()` method. You can call `Promise.resolve()` to convert `thenable` into a fulfilled promise: In this example, `Promise.resolve()` calls `thenable.then()` so that a promise state can be determined. The promise state for `thenable` is fulfilled because `resolve(42)` is called inside the `then()` method. A new promise called `p1` is created in the fulfilled state with the value passed from `thenable` (that is, 42), and the fulfillment handler for `p1` receives 42 as the value. The same process can be used with `Promise.resolve()` to create a rejected promise from a thenable: This example is similar to the last except that `thenable` is rejected. When `thenable.then()` executes, a new promise is created in the rejected state with a value of 42. That value is then passed to the rejection handler for `p1`. `Promise.resolve()` and `Promise.reject()` work like this to allow you to easily work with non-promise thenables. A lot of libraries used thenables prior to promises being introduced in ECMAScript 6, so the ability to convert thenables into formal promises is important for backwards-compatibility with previously existing libraries.
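A sketch of the fulfilled-thenable conversion described above:

```javascript
// An object with a then() method is a thenable, even though it
// isn't a real promise.
let thenable = {
    then(resolve, reject) {
        resolve(42);
    }
};

// Promise.resolve() calls thenable.then() to determine the state.
let p1 = Promise.resolve(thenable);
p1.then(function(value) {
    console.log(value);     // 42
});
```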
When you’re unsure if an object is a promise, passing the object through `Promise.resolve()` or `Promise.reject()` (depending on your anticipated result) is the best way to find out because promises just pass through unchanged. # Executor Errors If an error is thrown inside an executor, then the promise’s rejection handler is called. For example: In this code, the executor intentionally throws an error. There is an implicit `try-catch` inside every executor such that the error is caught and then passed to the rejection handler. The previous example is equivalent to: The executor handles catching any thrown errors to simplify this common use case, but an error thrown in the executor is only reported when a rejection handler is present. Otherwise, the error is suppressed. This became a problem for developers early on in the use of promises, and JavaScript environments address it by providing hooks for catching rejected promises. ### Global Promise Rejection Handling One of the most controversial aspects of promises is the silent failure that occurs when a promise is rejected without a rejection handler. Some consider this the biggest flaw in the specification as it’s the only part of the JavaScript language that doesn’t make errors apparent. Determining whether a promise rejection was handled isn’t straightforward due to the nature of promises. For instance, consider this example: You can call `then()` or `catch()` at any point and have them work correctly regardless of whether the promise is settled or not, making it hard to know precisely when a promise is going to be handled. In this case, the promise is rejected immediately but isn’t handled until later. While it’s possible that the next version of ECMAScript will address this problem, both browsers and Node.js have implemented changes to address this developer pain point. They aren’t part of the ECMAScript 6 specification but are valuable tools when using promises. 
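The implicit `try-catch` described for executor errors can be sketched as two functionally equivalent promises:

```javascript
let promise = new Promise(function(resolve, reject) {
    throw new Error("Explosion!");
});

// Equivalent to the explicit version:
let promise2 = new Promise(function(resolve, reject) {
    try {
        throw new Error("Explosion!");
    } catch (ex) {
        reject(ex);
    }
});

promise.catch(function(error) {
    console.log(error.message);     // "Explosion!"
});

promise2.catch(function(error) {
    console.log(error.message);     // "Explosion!"
});
```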
# Node.js Rejection Handling In Node.js, there are two events on the `process` object related to promise rejection handling: * `unhandledRejection` : Emitted when a promise is rejected and no rejection handler is called within one turn of the event loop * `rejectionHandled` : Emitted when a promise is rejected and a rejection handler is called after one turn of the event loop These events are designed to work together to help identify promises that are rejected and not handled. The `unhandledRejection` event handler is passed the rejection reason (frequently an error object) and the promise that was rejected as arguments. The following code shows `unhandledRejection` in action: This example creates a rejected promise with an error object and listens for the `unhandledRejection` event. The event handler receives the error object as the first argument and the promise as the second. The `rejectionHandled` event handler has only one argument, which is the promise that was rejected. For example: Here, the `rejectionHandled` event is emitted when the rejection handler is finally called. If the rejection handler were attached directly to `rejected` after `rejected` is created, then the event wouldn’t be emitted. The rejection handler would instead be called during the same turn of the event loop where `rejected` was created, which isn’t useful. To properly track potentially unhandled rejections, use the `rejectionHandled` and `unhandledRejection` events to keep a list of potentially unhandled rejections. Then wait some period of time to inspect the list. For example: This is a simple unhandled rejection tracker. It uses a map to store promises and their rejection reasons. Each promise is a key, and the promise’s reason is the associated value. Each time `unhandledRejection` is emitted, the promise and its rejection reason are added to the map. Each time `rejectionHandled` is emitted, the handled promise is removed from the map. 
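A sketch of such a tracker follows. The `handleRejection()` function is a hypothetical placeholder for your own logging or recovery logic, and the 60-second interval is an arbitrary choice:

```javascript
let possiblyUnhandledRejections = new Map();

// When a rejection goes unhandled, remember the promise and its reason.
process.on("unhandledRejection", function(reason, promise) {
    possiblyUnhandledRejections.set(promise, reason);
});

// When a rejection is handled later, forget it.
process.on("rejectionHandled", function(promise) {
    possiblyUnhandledRejections.delete(promise);
});

// Hypothetical placeholder for real logging or recovery logic.
function handleRejection(promise, reason) {
    console.error("Unhandled rejection:", reason);
}

const checkInterval = setInterval(function() {
    possiblyUnhandledRejections.forEach(function(reason, promise) {
        handleRejection(promise, reason);
    });
    possiblyUnhandledRejections.clear();
}, 60000);
```

In a real application the interval runs for the life of the process, which is the point of a monitor like this.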
As a result, `possiblyUnhandledRejections` grows and shrinks as events are called. The `setInterval()` call periodically checks the list of possible unhandled rejections and outputs the information to the console (in reality, you’ll probably want to do something else to log or otherwise handle the rejection). A map is used in this example instead of a weak map because you need to inspect the map periodically to see which promises are present, and that’s not possible with a weak map. While this example is specific to Node.js, browsers have implemented a similar mechanism for notifying developers about unhandled rejections. # Browser Rejection Handling Browsers also emit two events to help identify unhandled rejections. These events are emitted by the `window` object and are effectively the same as their Node.js equivalents: * `unhandledrejection`: Emitted when a promise is rejected and no rejection handler is called within one turn of the event loop. * `rejectionhandled`: Emitted when a promise is rejected and a rejection handler is called after one turn of the event loop. While the Node.js implementation passes individual parameters to the event handler, the event handler for these browser events receives an event object with the following properties: * `type`: The name of the event (`"unhandledrejection"` or `"rejectionhandled"`). * `promise`: The promise object that was rejected. * `reason`: The rejection value from the promise. The other difference in the browser implementation is that the rejection value (`reason`) is available for both events. For example: This code assigns both event handlers using the DOM Level 0 notation of `onunhandledrejection` and `onrejectionhandled`. (You can also use `addEventListener("unhandledrejection")` and `addEventListener("rejectionhandled")` if you prefer.) Each event handler receives an event object containing information about the rejected promise.
The `type` , `promise` , and `reason` properties are all available in both event handlers. The code to keep track of unhandled rejections in the browser is very similar to the code for Node.js, too: This implementation is almost exactly the same as the Node.js implementation. It uses the same approach of storing promises and their rejection values in a map and then inspecting them later. The only real difference is where the information is retrieved from in the event handlers. Handling promise rejections can be tricky, but you’ve just begun to see how powerful promises can really be. It’s time to take the next step and chain several promises together. ### Chaining Promises To this point, promises may seem like little more than an incremental improvement over using some combination of a callback and the `setTimeout()` function, but there is much more to promises than meets the eye. More specifically, there are a number of ways to chain promises together to accomplish more complex asynchronous behavior. Each call to `then()` or `catch()` actually creates and returns another promise. This second promise is resolved only once the first has been fulfilled or rejected. Consider this example: The code outputs: The call to `p1.then()` returns a second promise on which `then()` is called. The second `then()` fulfillment handler is only called after the first promise has been resolved. If you unchain this example, it looks like this: In this unchained version of the code, the result of `p1.then()` is stored in `p2` , and then `p2.then()` is called to add the final fulfillment handler. As you might have guessed, the call to `p2.then()` also returns a promise. This example just doesn’t use that promise. # Catching Errors Promise chaining allows you to catch errors that may occur in a fulfillment or rejection handler from a previous promise. For example: In this code, the fulfillment handler for `p1` throws an error. 
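A sketch of catching a fulfillment handler's error further down the chain:

```javascript
let p1 = new Promise(function(resolve, reject) {
    resolve(42);
});

p1.then(function(value) {
    throw new Error("Boom!");       // error thrown in a fulfillment handler
}).catch(function(error) {
    console.log(error.message);     // "Boom!" - caught by the next promise
});
```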
The chained call to the `catch()` method, which is on a second promise, is able to receive that error through its rejection handler. The same is true if a rejection handler throws an error: Here, the executor throws an error then triggers the `p1` promise’s rejection handler. That handler then throws another error that is caught by the second promise’s rejection handler. The chained promise calls are aware of errors in other promises in the chain. # Returning Values in Promise Chains Another important aspect of promise chains is the ability to pass data from one promise to the next. You’ve already seen that a value passed to the `resolve()` handler inside an executor is passed to the fulfillment handler for that promise. You can continue passing data along a chain by specifying a return value from the fulfillment handler. For example: The fulfillment handler for `p1` returns `value + 1` when executed. Since `value` is 42 (from the executor), the fulfillment handler returns 43. That value is then passed to the fulfillment handler of the second promise, which outputs it to the console. You could do the same thing with the rejection handler. When a rejection handler is called, it may return a value. If it does, that value is used to fulfill the next promise in the chain, like this: Here, the executor calls `reject()` with 42. That value is passed into the rejection handler for the promise, where `value + 1` is returned. Even though this return value is coming from a rejection handler, it is still used in the fulfillment handler of the next promise in the chain. The failure of one promise can allow recovery of the entire chain if necessary. # Returning Promises in Promise Chains Returning primitive values from fulfillment and rejection handlers allows passing of data between promises, but what if you return an object? If the object is a promise, then there’s an extra step that’s taken to determine how to proceed. 
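Returning a promise from a handler can be sketched as:

```javascript
let p1 = new Promise(function(resolve, reject) {
    resolve(42);
});

let p2 = new Promise(function(resolve, reject) {
    resolve(43);
});

p1.then(function(firstValue) {
    console.log(firstValue);        // 42
    return p2;                      // returning a promise defers the next handler
}).then(function(secondValue) {
    console.log(secondValue);       // 43 - the fulfillment value of p2
});
```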
Consider the following example: In this code, `p1` schedules a job that resolves to 42. The fulfillment handler for `p1` returns `p2`, a promise already in the resolved state. The second fulfillment handler is called because `p2` has been fulfilled. If `p2` were rejected, a rejection handler (if present) would be called instead of the second fulfillment handler. The important thing to recognize about this pattern is that the second fulfillment handler is not added to `p2`, but rather to a third promise. The second fulfillment handler is therefore attached to that third promise, making the previous example equivalent to this: Here, it’s clear that the second fulfillment handler is attached to `p3` rather than `p2`. This is a subtle but important distinction, as the second fulfillment handler will not be called if `p2` is rejected. For instance: In this example, the second fulfillment handler is never called because `p2` is rejected. You could, however, attach a rejection handler instead: Here, the rejection handler is called as a result of `p2` being rejected. The rejected value 43 from `p2` is passed into that rejection handler. Returning thenables from fulfillment or rejection handlers doesn’t change when the promise executors are executed. The first defined promise will run its executor first, then the second promise executor will run, and so on. Returning thenables simply allows you to define additional responses to the promise results. You can defer the execution of fulfillment handlers by creating a new promise within a fulfillment handler. For example: In this example, a new promise is created within the fulfillment handler for `p1`. That means the second fulfillment handler won’t execute until after `p2` is fulfilled. This pattern is useful when you want to wait until a previous promise has been settled before triggering another promise.
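Deferring a handler by creating a promise inside another handler can be sketched as:

```javascript
let p1 = new Promise(function(resolve, reject) {
    resolve(42);
});

p1.then(function(value) {
    console.log(value);     // 42

    // Create a new promise inside the fulfillment handler.
    let p2 = new Promise(function(resolve, reject) {
        setTimeout(function() {
            resolve(43);
        }, 50);
    });

    return p2;
}).then(function(value) {
    console.log(value);     // 43, only after p2 is fulfilled
});
```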
### Responding to Multiple Promises

Up to this point, each example in this chapter has dealt with responding to one promise at a time. Sometimes, however, you’ll want to monitor the progress of multiple promises in order to determine the next action. ECMAScript 6 provides two methods that monitor multiple promises: `Promise.all()` and `Promise.race()`.

# The Promise.all() Method

The `Promise.all()` method accepts a single argument, which is an iterable (such as an array) of promises to monitor, and returns a promise that is resolved only when every promise in the iterable is resolved. The returned promise is fulfilled when every promise in the iterable is fulfilled, as in this example: Each promise here resolves with a number. The call to `Promise.all()` creates promise `p4`, which is ultimately fulfilled when promises `p1`, `p2`, and `p3` are fulfilled. The result passed to the fulfillment handler for `p4` is an array containing each resolved value: 42, 43, and 44. The values are stored in the order the promises were passed to `Promise.all()`, so you can match promise results to the promises that resolved to them. If any promise passed to `Promise.all()` is rejected, the returned promise is immediately rejected without waiting for the other promises to complete: In this example, `p2` is rejected with a value of 43. The rejection handler for `p4` is called immediately without waiting for `p1` or `p3` to finish executing. (They do still finish executing; `p4` just doesn’t wait.) The rejection handler always receives a single value rather than an array, and the value is the rejection value from the promise that was rejected. In this case, the rejection handler is passed 43 to reflect the rejection from `p2`.

# The Promise.race() Method

The `Promise.race()` method provides a slightly different take on monitoring multiple promises.
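The `Promise.all()` fulfillment example described above might look like this:

```javascript
let p1 = new Promise(function(resolve, reject) {
    resolve(42);
});
let p2 = new Promise(function(resolve, reject) {
    resolve(43);
});
let p3 = new Promise(function(resolve, reject) {
    resolve(44);
});

// p4 is fulfilled only when p1, p2, and p3 are all fulfilled
let p4 = Promise.all([p1, p2, p3]);

p4.then(function(value) {
    console.log(Array.isArray(value));  // true
    console.log(value);                 // [ 42, 43, 44 ]
});
```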
This method also accepts an iterable of promises to monitor and returns a promise, but the returned promise is settled as soon as the first promise is settled. Instead of waiting for all promises to be fulfilled like the `Promise.all()` method, the `Promise.race()` method returns an appropriate promise as soon as any promise in the array is fulfilled. For example: In this code, `p1` is created as a fulfilled promise while the others schedule jobs. The fulfillment handler for `p4` is then called with the value of 42 and ignores the other promises. The promises passed to `Promise.race()` are truly in a race to see which is settled first. If the first promise to settle is fulfilled, then the returned promise is fulfilled; if the first promise to settle is rejected, then the returned promise is rejected. Here’s an example with a rejection: Here, both `p1` and `p3` use `setTimeout()` (available in both Node.js and web browsers) to delay promise fulfillment. The result is that `p4` is rejected because `p2` is rejected before either `p1` or `p3` is resolved. Even though `p1` and `p3` are eventually fulfilled, those results are ignored because they occur after `p2` is rejected. ### Inheriting from Promises Just like other built-in types, you can use a promise as the base for a derived class. This allows you to define your own variation of promises to extend what built-in promises can do. Suppose, for instance, you’d like to create a promise that can use methods named `success()` and `failure()` in addition to the usual `then()` and `catch()` methods. You could create that promise type as follows: In this example, `MyPromise` is derived from `Promise` and has two additional methods. The `success()` method mimics `then()` and `failure()` mimics the `catch()` method. Each added method uses `this` to call the method it mimics. The derived promise functions the same as a built-in promise, except now you can call `success()` and `failure()` if you want. 
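A sketch of the derived promise type described above, with `success()` and `failure()` delegating to the built-in methods:

```javascript
class MyPromise extends Promise {

    // uses the default constructor

    success(resolve, reject) {
        return this.then(resolve, reject);
    }

    failure(reject) {
        return this.catch(reject);
    }
}

let promise = new MyPromise(function(resolve, reject) {
    resolve(42);
});

promise.success(function(value) {
    console.log(value);             // 42
}).failure(function(value) {
    console.log(value);
});
```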
Since static methods are inherited, the `MyPromise.resolve()` method, the `MyPromise.reject()` method, the `MyPromise.race()` method, and the `MyPromise.all()` method are also present on derived promises. The last two methods behave the same as the built-in methods, but the first two are slightly different. Both `MyPromise.resolve()` and `MyPromise.reject()` will return an instance of `MyPromise` regardless of the value passed because those methods use the `Symbol.species` property (covered in Chapter 9) to determine the type of promise to return. If a built-in promise is passed to either method, the promise will be resolved or rejected, and the method will return a new `MyPromise` so you can assign fulfillment and rejection handlers. For example: Here, `p1` is a built-in promise that is passed to the `MyPromise.resolve()` method. The result, `p2`, is an instance of `MyPromise` where the resolved value from `p1` is passed into the fulfillment handler. If an instance of `MyPromise` is passed to the `MyPromise.resolve()` or `MyPromise.reject()` methods, it will just be returned directly without being resolved. In all other ways these two methods behave the same as `Promise.resolve()` and `Promise.reject()`.

# Asynchronous Task Running

In Chapter 8, I introduced generators and showed you how you can use them for asynchronous task running, like this: There are some pain points to this implementation. First, wrapping every function in a function that returns a function is a bit confusing (even this sentence was confusing). Second, there is no way to distinguish between a function return value intended as a callback for the task runner and a return value that isn’t a callback. With promises, you can greatly simplify and generalize this process by ensuring that each asynchronous operation returns a promise. That common interface means you can greatly simplify asynchronous code.
Here’s one way you could simplify that task runner: In this version of the code, a generic `run()` function executes a generator to create an iterator. It calls `task.next()` to start the task and recursively calls `step()` until the iterator is complete. Inside the `step()` function, if there’s more work to do, then `result.done` is `false`. At that point, `result.value` should be a promise, but `Promise.resolve()` is called just in case the function in question didn’t return a promise. (Remember, `Promise.resolve()` just passes through any promise passed in and wraps any non-promise in a promise.) Then, a fulfillment handler is added that retrieves the promise value and passes the value back to the iterator. After that, `result` is assigned to the next yield result before the `step()` function calls itself. A rejection handler stores any rejection results in an error object. The `task.throw()` method passes that error object back into the iterator, and if an error is caught in the task, `result` is assigned to the next yield result. Finally, `step()` is called inside `catch()` to continue. This `run()` function can run any generator that uses `yield` to achieve asynchronous code without exposing promises (or callbacks) to the developer. In fact, since the return value of the function call is always converted into a promise, the function can even return something other than a promise. That means both synchronous and asynchronous methods work correctly when called using `yield`, and you never have to check that the return value is a promise. The only concern is ensuring that asynchronous functions like `readFile()` return a promise that correctly identifies its state. For Node.js built-in methods, that means you’ll have to convert those methods to return promises instead of using callbacks. Promises are designed to improve asynchronous programming in JavaScript by giving you more control and composability over asynchronous operations than events and callbacks can.
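A sketch of the `run()` task runner described above; `fetchNumber()` is a hypothetical stand-in for any promise-returning asynchronous operation:

```javascript
function run(taskDef) {

    // create the iterator
    let task = taskDef();

    // start the task
    let result = task.next();

    // recursive function to keep calling next()
    (function step() {

        // if there's more to do
        if (!result.done) {

            // resolve to a promise to make it easy
            let promise = Promise.resolve(result.value);
            promise.then(function(value) {
                result = task.next(value);
                step();
            }).catch(function(error) {
                result = task.throw(error);
                step();
            });
        }
    }());
}

// fetchNumber() is a stand-in for any promise-returning operation
function fetchNumber() {
    return new Promise(function(resolve) {
        setTimeout(function() {
            resolve(42);
        }, 10);
    });
}

run(function*() {
    let value = yield fetchNumber();
    console.log(value);             // 42
    let doubled = yield value * 2;  // non-promise values work too
    console.log(doubled);           // 84
});
```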
Promises schedule jobs to be added to the JavaScript engine’s job queue for execution later, while a second job queue tracks promise fulfillment and rejection handlers to ensure proper execution. Promises have three states: pending, fulfilled, and rejected. A promise starts in a pending state and becomes fulfilled on a successful execution or rejected on a failure. In either case, handlers can be added to indicate when a promise is settled. The `then()` method allows you to assign a fulfillment and rejection handler and the `catch()` method allows you to assign only a rejection handler. You can chain promises together in a variety of ways and pass information between them. Each call to `then()` creates and returns a new promise that is resolved when the previous one is resolved. Such chains can be used to trigger responses to a series of asynchronous events. You can also use `Promise.race()` and `Promise.all()` to monitor the progress of multiple promises and respond accordingly. Asynchronous task running is easier when you combine generators and promises, as promises give a common interface that asynchronous operations can return. You can then use generators and the `yield` operator to wait for asynchronous responses and respond appropriately. Most new web APIs are being built on top of promises, and you can expect many more to follow suit in the future.

## Proxies and the Reflection API

ECMAScript 5 and ECMAScript 6 were both developed with demystifying JavaScript functionality in mind. For example, JavaScript environments contained nonenumerable and nonwritable object properties before ECMAScript 5, but developers couldn’t define their own nonenumerable or nonwritable properties. ECMAScript 5 included the `Object.defineProperty()` method to allow developers to do what JavaScript engines could do already. ECMAScript 6 gives developers further access to JavaScript engine capabilities previously available only to built-in objects.
The language exposes the inner workings of objects through proxies, which are wrappers that can intercept and alter low-level operations of the JavaScript engine. This chapter starts by describing the problem that proxies are meant to address in detail, and then discusses how you can create and use proxies effectively.

### The Array Problem

The JavaScript array object behaves in ways that developers couldn’t mimic in their own objects before ECMAScript 6. An array’s `length` property is affected when you assign values to specific array items, and you can modify array items by modifying the `length` property. For example: The `colors` array starts with three items. Assigning `"black"` to `colors[3]` automatically increments the `length` property to `4`. Setting the `length` property to `2` removes the last two items in the array, leaving only the first two items. Nothing in ECMAScript 5 allows developers to achieve this behavior, but proxies change that.

### What are Proxies and Reflection?

You can create a proxy to use in place of another object (called the target) by calling `new Proxy()`. The proxy virtualizes the target so that the proxy and the target appear to be the same object to functionality using the proxy. Proxies allow you to intercept low-level object operations on the target that are otherwise internal to the JavaScript engine. These low-level operations are intercepted using a trap, which is a function that responds to a specific operation. The reflection API, represented by the `Reflect` object, is a collection of methods that provide the default behavior for the same low-level operations that proxies can override. There is a `Reflect` method for every proxy trap. Those methods have the same name and are passed the same arguments as their respective proxy traps. Table 11-1 summarizes this behavior.
| Proxy Trap | Overrides the Behavior Of | Default Behavior |
| --- | --- | --- |
| `get` | Reading a property value | `Reflect.get()` |
| `set` | Writing to a property | `Reflect.set()` |
| `has` | The `in` operator | `Reflect.has()` |
| `deleteProperty` | The `delete` operator | `Reflect.deleteProperty()` |
| `getPrototypeOf` | `Object.getPrototypeOf()` | `Reflect.getPrototypeOf()` |
| `setPrototypeOf` | `Object.setPrototypeOf()` | `Reflect.setPrototypeOf()` |
| `isExtensible` | `Object.isExtensible()` | `Reflect.isExtensible()` |
| `preventExtensions` | `Object.preventExtensions()` | `Reflect.preventExtensions()` |
| `getOwnPropertyDescriptor` | `Object.getOwnPropertyDescriptor()` | `Reflect.getOwnPropertyDescriptor()` |
| `defineProperty` | `Object.defineProperty()` | `Reflect.defineProperty()` |
| `ownKeys` | `Object.keys()`, `Object.getOwnPropertyNames()`, `Object.getOwnPropertySymbols()` | `Reflect.ownKeys()` |
| `apply` | Calling a function | `Reflect.apply()` |
| `construct` | Calling a function with `new` | `Reflect.construct()` |

Each trap overrides some built-in behavior of JavaScript objects, allowing you to intercept and modify the behavior. If you still need to use the built-in behavior, then you can use the corresponding reflection API method. The relationship between proxies and the reflection API becomes clear when you start creating proxies, so it’s best to dive in and look at some examples.

### Creating a Simple Proxy

When you use the `Proxy` constructor to make a proxy, you’ll pass it two arguments: the target and a handler. A handler is an object that defines one or more traps. The proxy uses the default behavior for all operations except when traps are defined for that operation. To create a simple forwarding proxy, you can use a handler without any traps: In this example, `proxy` forwards all operations directly to `target`. When `"proxy"` is assigned to the `proxy.name` property, `name` is created on `target`.
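A sketch of that forwarding proxy:

```javascript
let target = {};

// an empty handler means every operation uses the default behavior
let proxy = new Proxy(target, {});

proxy.name = "proxy";
console.log(proxy.name);        // "proxy"
console.log(target.name);       // "proxy"

target.name = "target";
console.log(proxy.name);        // "target"
console.log(target.name);       // "target"
```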
The proxy itself is not storing this property; it’s simply forwarding the operation to `target` . Similarly, the values of `proxy.name` and `target.name` are the same because they are both references to `target.name` . That also means setting `target.name` to a new value causes `proxy.name` to reflect the same change. Of course, proxies without traps aren’t very interesting, so what happens when you define a trap? ### Validating Properties Using the `set` Trap Suppose you want to create an object whose property values must be numbers. That means every new property added to the object must be validated, and an error must be thrown if the value isn’t a number. To accomplish this, you could define a `set` trap that overrides the default behavior of setting a value. The `set` trap receives four arguments: * `trapTarget` - the object that will receive the property (the proxy’s target) * `key` - the property key (string or symbol) to write to * `value` - the value being written to the property * `receiver` - the object on which the operation took place (usually the proxy) `Reflect.set()` is the `set` trap’s corresponding reflection method, and it’s the default behavior for this operation. The `Reflect.set()` method accepts the same four arguments as the `set` proxy trap, making the method easy to use inside of the trap. The trap should return `true` if the property was set or `false` if not. (The `Reflect.set()` method returns the correct value based on whether the operation succeeded.) To validate the values of properties, you’d use the `set` trap and inspect the `value` that is passed in. Here’s an example: This code defines a proxy trap that validates the value of any new property added to `target` . When `proxy.count = 1` is executed, the `set` trap is called. The `trapTarget` value is equal to `target` , `key` is `"count"` , `value` is `1` , and `receiver` (not used in this example) is `proxy` . 
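A sketch of the validating proxy being described, reconstructed from the description in the text:

```javascript
let target = {
    name: "target"
};

let proxy = new Proxy(target, {
    set(trapTarget, key, value, receiver) {

        // ignore existing properties so as not to affect them
        if (!trapTarget.hasOwnProperty(key)) {
            if (isNaN(value)) {
                throw new TypeError("Property must be a number.");
            }
        }

        // add the property
        return Reflect.set(trapTarget, key, value, receiver);
    }
});

// adding a new numeric property works
proxy.count = 1;
console.log(proxy.count);       // 1
console.log(target.count);      // 1

// assigning to an existing property is not validated
proxy.name = "proxy";
console.log(proxy.name);        // "proxy"

// throws: a new property with a non-numeric value
// proxy.anotherName = "proxy";
```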
There is no existing property named `count` in `target`, so the proxy validates `value` by passing it to `isNaN()`. If the result is `NaN`, then the property value is not numeric and an error is thrown. Since this code sets `count` to `1`, the proxy calls `Reflect.set()` with the same four arguments that were passed to the trap to add the new property. When `proxy.name` is assigned a string, the operation completes successfully. Since `target` already has a `name` property, that property is omitted from the validation check by calling the `trapTarget.hasOwnProperty()` method. This ensures that previously-existing non-numeric property values are still supported. When `proxy.anotherName` is assigned a string, however, an error is thrown. The `anotherName` property doesn’t exist on the target, so its value needs to be validated. During validation, the error is thrown because `"proxy"` isn’t a numeric value. Where the `set` proxy trap lets you intercept when properties are being written to, the `get` proxy trap lets you intercept when properties are being read.

### Object Shape Validation Using the `get` Trap

One of the interesting, and sometimes confusing, aspects of JavaScript is that reading nonexistent properties doesn’t throw an error. Instead, the value `undefined` is used for the property value, as in this example: In most other languages, attempting to read `target.name` throws an error because the property doesn’t exist. But JavaScript just uses `undefined` for the value of the `target.name` property. If you’ve ever worked on a large code base, you’ve probably seen how this behavior can cause significant problems, especially when there’s a typo in the property name. Proxies can save you from this problem by performing object shape validation. An object shape is the collection of properties and methods available on the object. JavaScript engines use object shapes to optimize code, often creating classes to represent the objects.
If you can safely assume an object will always have the same properties and methods it began with (a behavior you can enforce with the `Object.preventExtensions()` method, the `Object.seal()` method, or the `Object.freeze()` method), then throwing an error on attempts to access nonexistent properties can be helpful. Proxies make object shape validation easy. Since property validation only has to happen when a property is read, you’d use the `get` trap. The `get` trap is called when a property is read, even if that property doesn’t exist on the object, and it takes three arguments:

* `trapTarget` - the object from which the property is read (the proxy’s target)
* `key` - the property key (a string or symbol) to read
* `receiver` - the object on which the operation took place (usually the proxy)

These arguments mirror the `set` trap’s arguments, with one noticeable difference. There’s no `value` argument here because `get` traps don’t write values. The `Reflect.get()` method accepts the same three arguments as the `get` trap and returns the property’s default value. You can use the `get` trap and `Reflect.get()` to throw an error when a property doesn’t exist on the target, as follows: In this example, the `get` trap intercepts property read operations. The `in` operator is used to determine if the property already exists on the `receiver`. The `receiver` is used with `in` instead of `trapTarget` in case `receiver` is a proxy with a `has` trap, a type I’ll cover in the next section. Using `trapTarget` in this case would sidestep the `has` trap and potentially give you the wrong result. An error is thrown if the property doesn’t exist, and otherwise, the default behavior is used. This code allows new properties like `proxy.name` to be added, written to, and read from with no problems. The last line contains a typo: `proxy.nme` should probably be `proxy.name` instead. This throws an error because `nme` doesn’t exist as a property.
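The shape-validating `get` trap described above might be sketched as:

```javascript
let proxy = new Proxy({}, {
    get(trapTarget, key, receiver) {

        // use the receiver so a has trap, if present, is respected
        if (!(key in receiver)) {
            throw new TypeError("Property " + key + " doesn't exist.");
        }

        return Reflect.get(trapTarget, key, receiver);
    }
});

// adding and reading a property still works
proxy.name = "proxy";
console.log(proxy.name);        // "proxy"

// throws: nonexistent property (a typo for proxy.name)
// console.log(proxy.nme);
```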
### Hiding Property Existence Using the `has` Trap

The `in` operator determines whether a property exists on a given object and returns `true` if there is either an own property or a prototype property matching the name or symbol. For example: Both `value` and `toString` exist on `object`, so in both cases the `in` operator returns `true`. The `value` property is an own property while `toString` is a prototype property (inherited from `Object`). Proxies allow you to intercept this operation and return a different value for `in` with the `has` trap. The `has` trap is called whenever the `in` operator is used. When called, two arguments are passed to the `has` trap:

* `trapTarget` - the object the property is read from (the proxy’s target)
* `key` - the property key (string or symbol) to check

The `Reflect.has()` method accepts these same arguments and returns the default response for the `in` operator. Using the `has` trap and `Reflect.has()` allows you to alter the behavior of `in` for some properties while falling back to default behavior for others. For instance, suppose you just want to hide the `value` property. You can do so like this: The `has` trap for `proxy` checks to see if `key` is `"value"` and returns `false` if so. Otherwise, the default behavior is used via a call to the `Reflect.has()` method. As a result, the `in` operator returns `false` for the `value` property even though `value` actually exists on the target. The other properties, `name` and `toString`, correctly return `true` when used with the `in` operator.

### Preventing Property Deletion with the `deleteProperty` Trap

The `delete` operator removes a property from an object and returns `true` when successful and `false` when unsuccessful. In strict mode, `delete` throws an error when you attempt to delete a nonconfigurable property; in nonstrict mode, `delete` simply returns `false`.
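The value-hiding `has` trap described above might look like this:

```javascript
let target = {
    name: "target",
    value: 42
};

let proxy = new Proxy(target, {
    has(trapTarget, key) {

        // lie about "value", use the default behavior otherwise
        if (key === "value") {
            return false;
        } else {
            return Reflect.has(trapTarget, key);
        }
    }
});

console.log("value" in proxy);      // false
console.log("name" in proxy);       // true
console.log("toString" in proxy);   // true
```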
Here’s an example: The `value` property is deleted using the `delete` operator and, as a result, the `in` operator returns `false` in the third `console.log()` call. The nonconfigurable `name` property can’t be deleted so the `delete` operator simply returns `false` (if this code is run in strict mode, an error is thrown instead). You can alter this behavior by using the `deleteProperty` trap in a proxy. The `deleteProperty` trap is called whenever the `delete` operator is used on an object property. The trap is passed two arguments:

* `trapTarget` - the object from which the property should be deleted (the proxy’s target)
* `key` - the property key (string or symbol) to delete

The `Reflect.deleteProperty()` method provides the default implementation of the `deleteProperty` trap and accepts the same two arguments. You can combine `Reflect.deleteProperty()` and the `deleteProperty` trap to change how the `delete` operator behaves. For instance, you could ensure that the `value` property can’t be deleted: This code is very similar to the `has` trap example in that the `deleteProperty` trap checks to see if the `key` is `"value"` and returns `false` if so. Otherwise, the default behavior is used by calling the `Reflect.deleteProperty()` method. The `value` property can’t be deleted through `proxy` because the operation is trapped, but the `name` property is deleted as expected. This approach is especially useful when you want to protect properties from deletion without throwing an error in strict mode.

### Prototype Proxy Traps

Chapter 4 introduced the `Object.setPrototypeOf()` method that ECMAScript 6 added to complement the ECMAScript 5 `Object.getPrototypeOf()` method. Proxies allow you to intercept execution of both methods through the `setPrototypeOf` and `getPrototypeOf` traps. In both cases, the method on `Object` calls the trap of the corresponding name on the proxy, allowing you to alter the methods’ behavior. Since there are two traps associated with prototype proxies, there’s a set of methods associated with each type of trap.
The `setPrototypeOf` trap receives these arguments:

* `trapTarget` - the object for which the prototype should be set (the proxy’s target)
* `proto` - the object to use as the prototype

These are the same arguments passed to the `Object.setPrototypeOf()` and `Reflect.setPrototypeOf()` methods. The `getPrototypeOf` trap, on the other hand, only receives the `trapTarget` argument, which is the argument passed to the `Object.getPrototypeOf()` and `Reflect.getPrototypeOf()` methods.

# How Prototype Proxy Traps Work

There are some restrictions on these traps. First, the `getPrototypeOf` trap must return an object or `null`, and any other return value results in a runtime error. The return value check ensures that `Object.getPrototypeOf()` will always return an expected value. Similarly, the return value of the `setPrototypeOf` trap must be `false` if the operation doesn’t succeed. When `setPrototypeOf` returns `false`, `Object.setPrototypeOf()` throws an error. If `setPrototypeOf` returns any value other than `false`, then `Object.setPrototypeOf()` assumes the operation succeeded. The following example hides the prototype of the proxy by always returning `null` and also doesn’t allow the prototype to be changed: This code emphasizes the difference between the behavior of `target` and `proxy`. While `Object.getPrototypeOf()` returns a value for `target`, it returns `null` for `proxy` because the `getPrototypeOf` trap is called. Similarly, `Object.setPrototypeOf()` succeeds when used on `target` but throws an error when used on `proxy` due to the `setPrototypeOf` trap. If you want to use the default behavior for these two traps, you can use the corresponding methods on `Reflect`. For instance, this code implements the default behavior for the `getPrototypeOf` and `setPrototypeOf` traps: In this example, you can use `target` and `proxy` interchangeably and get the same results because the `getPrototypeOf` and `setPrototypeOf` traps are just passing through to use the default implementation. It’s important that this example use the `Reflect.getPrototypeOf()` and `Reflect.setPrototypeOf()` methods rather than the methods of the same name on `Object` due to some important differences.

# Why Two Sets of Methods?
The confusing aspect of `Reflect.getPrototypeOf()` and `Reflect.setPrototypeOf()` is that they look suspiciously similar to the `Object.getPrototypeOf()` and `Object.setPrototypeOf()` methods. While both sets of methods perform similar operations, there are some distinct differences between the two. To begin, `Object.getPrototypeOf()` and `Object.setPrototypeOf()` are higher-level operations that were created for developer use from the start. The `Reflect.getPrototypeOf()` and `Reflect.setPrototypeOf()` methods are lower-level operations that give developers access to the previously internal-only `[[GetPrototypeOf]]` and `[[SetPrototypeOf]]` operations. The `Reflect.getPrototypeOf()` method is the wrapper for the internal `[[GetPrototypeOf]]` operation (with some input validation). The `Reflect.setPrototypeOf()` method and `[[SetPrototypeOf]]` have the same relationship. The corresponding methods on `Object` also call `[[GetPrototypeOf]]` and `[[SetPrototypeOf]]` but perform a few steps before the call and inspect the return value to determine how to behave. The `Reflect.getPrototypeOf()` method throws an error if its argument is not an object, while `Object.getPrototypeOf()` first coerces the value into an object before performing the operation. If you were to pass a number into each method, you’d get a different result: The `Object.getPrototypeOf()` method allows you to retrieve a prototype for the number `1` because it first coerces the value into a `Number` object and then returns `Number.prototype`. The `Reflect.getPrototypeOf()` method doesn’t coerce the value, and since `1` isn’t an object, it throws an error. The `Reflect.setPrototypeOf()` method also has a few more differences from the `Object.setPrototypeOf()` method. First, `Reflect.setPrototypeOf()` returns a boolean value indicating whether the operation was successful. A `true` value is returned for success, and `false` is returned for failure. If `Object.setPrototypeOf()` fails, it throws an error. As the first example under “How Prototype Proxy Traps Work” showed, when the `setPrototypeOf` proxy trap returns `false`, it causes `Object.setPrototypeOf()` to throw an error. The `Object.setPrototypeOf()` method returns the first argument as its value and therefore isn’t suitable for implementing the default behavior of the `setPrototypeOf` proxy trap. The following code demonstrates these differences: In this example, `Object.setPrototypeOf()` returns `target1` as its value, but `Reflect.setPrototypeOf()` returns `true`. This subtle difference is very important.
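The return-value difference described here might be demonstrated like this:

```javascript
let target1 = {};
let result1 = Object.setPrototypeOf(target1, {});
console.log(result1 === target1);       // true

let target2 = {};
let result2 = Reflect.setPrototypeOf(target2, {});
console.log(result2 === target2);       // false
console.log(result2);                   // true
```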
You’ll see more seemingly duplicate methods on `Object` and `Reflect`, but always be sure to use the method on `Reflect` inside any proxy traps.

### Object Extensibility Traps

ECMAScript 5 added object extensibility modification through the `Object.preventExtensions()` and `Object.isExtensible()` methods, and ECMAScript 6 allows proxies to intercept those method calls to the underlying objects through the `preventExtensions` and `isExtensible` traps. Both traps receive a single argument called `trapTarget` that is the object on which the method was called. The `isExtensible` trap must return a boolean value indicating whether the object is extensible while the `preventExtensions` trap must return a boolean value indicating if the operation succeeded. There are also `Reflect.preventExtensions()` and `Reflect.isExtensible()` methods to implement the default behavior. Both return boolean values, so they can be used directly in their corresponding traps.

# Two Basic Examples

To see object extensibility traps in action, consider the following code, which implements the default behavior for the `isExtensible` and `preventExtensions` traps: This example shows that both `Object.isExtensible()` and `Object.preventExtensions()` correctly pass through from `proxy` to `target`. You can, of course, also change the behavior. For example, if you don’t want to allow `Object.preventExtensions()` to succeed on your proxy, you could return `false` from the `preventExtensions` trap: Here, the call to `Object.preventExtensions(proxy)` is effectively ignored because the `preventExtensions` trap returns `false`. The operation isn’t forwarded to the underlying `target`, so `Object.isExtensible(target)` returns `true`.

# Duplicate Extensibility Methods

You may have noticed that, once again, there are seemingly duplicate methods on `Object` and `Reflect`. In this case, they’re more similar than not. The `Object.isExtensible()` and `Reflect.isExtensible()` methods are similar except when passed a non-object value. In that case, `Object.isExtensible()` always returns `false` while `Reflect.isExtensible()` throws an error.
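The default-behavior extensibility traps from the first basic example above might be sketched as:

```javascript
let target = {};
let proxy = new Proxy(target, {
    isExtensible(trapTarget) {
        return Reflect.isExtensible(trapTarget);
    },
    preventExtensions(trapTarget) {
        return Reflect.preventExtensions(trapTarget);
    }
});

console.log(Object.isExtensible(target));   // true
console.log(Object.isExtensible(proxy));    // true

Object.preventExtensions(proxy);

console.log(Object.isExtensible(target));   // false
console.log(Object.isExtensible(proxy));    // false
```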
Here’s an example of that behavior: This restriction is similar to the difference between the `Object.getPrototypeOf()` and `Reflect.getPrototypeOf()` methods, as the method with lower-level functionality has stricter error checks than its higher-level counterpart. The `Object.preventExtensions()` and `Reflect.preventExtensions()` methods are also very similar. The `Object.preventExtensions()` method always returns the value that was passed to it as an argument even if the value isn’t an object. The `Reflect.preventExtensions()` method, on the other hand, throws an error if the argument isn’t an object; if the argument is an object, then `Reflect.preventExtensions()` returns `true` when the operation succeeds or `false` if not. For example: Here, `Object.preventExtensions()` passes through the value `2` as its return value even though `2` isn’t an object. The `Reflect.preventExtensions()` method returns `true` when an object is passed to it and throws an error when `2` is passed to it.

### Property Descriptor Traps

One of the most important features of ECMAScript 5 was the ability to define property attributes using the `Object.defineProperty()` method. In previous versions of JavaScript, there was no way to define an accessor property, make a property read-only, or make a property nonenumerable. All of these are possible with the `Object.defineProperty()` method, and you can retrieve those attributes with the `Object.getOwnPropertyDescriptor()` method. Proxies let you intercept calls to `Object.defineProperty()` and `Object.getOwnPropertyDescriptor()` using the `defineProperty` and `getOwnPropertyDescriptor` traps, respectively. The `defineProperty` trap receives the following arguments:

* `trapTarget` - the object on which the property should be defined (the proxy’s target)
* `key` - the string or symbol for the property
* `descriptor` - the descriptor object for the property

The `defineProperty` trap requires you to return `true` if the operation is successful and `false` if not. The `getOwnPropertyDescriptor` trap receives only `trapTarget` and `key`, and you are expected to return the descriptor. The corresponding `Reflect.defineProperty()` and `Reflect.getOwnPropertyDescriptor()` methods accept the same arguments as their proxy trap counterparts. Here’s an example that just implements the default behavior for each trap: This code defines a property called `"name"` on the proxy with the `Object.defineProperty()` method. The property descriptor for that property is then retrieved by the `Object.getOwnPropertyDescriptor()` method.
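The default-behavior descriptor traps described above might look like this:

```javascript
let proxy = new Proxy({}, {
    defineProperty(trapTarget, key, descriptor) {
        return Reflect.defineProperty(trapTarget, key, descriptor);
    },
    getOwnPropertyDescriptor(trapTarget, key) {
        return Reflect.getOwnPropertyDescriptor(trapTarget, key);
    }
});

Object.defineProperty(proxy, "name", {
    value: "proxy"
});

console.log(proxy.name);            // "proxy"

let descriptor = Object.getOwnPropertyDescriptor(proxy, "name");
console.log(descriptor.value);      // "proxy"
```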
# Blocking Object.defineProperty()

The `defineProperty` trap requires you to return a boolean value to indicate whether the operation was successful. When `true` is returned, `Object.defineProperty()` succeeds as usual; when `false` is returned, `Object.defineProperty()` throws an error. You can use this functionality to restrict the kinds of properties that the `Object.defineProperty()` method can define. For instance, if you want to prevent symbol properties from being defined, you could check that the key is a string and return `false` if not, like this: The `defineProperty` proxy trap returns `false` when `key` is a symbol and otherwise proceeds with the default behavior. When `Object.defineProperty()` is called with `"name"` as the key, the method succeeds because the key is a string. When `Object.defineProperty()` is called with `nameSymbol`, it throws an error because the `defineProperty` trap returns `false`.

# Descriptor Object Restrictions

To ensure consistent behavior when using the `Object.defineProperty()` and `Reflect.defineProperty()` methods, descriptor objects passed to the `defineProperty` trap are normalized. Objects returned from the `getOwnPropertyDescriptor` trap are always validated for the same reason. No matter what object is passed as the third argument to the `Object.defineProperty()` method, only the properties `enumerable`, `configurable`, `value`, `writable`, `get`, and `set` will be on the descriptor object passed to the `defineProperty` trap. For example: Here, `Object.defineProperty()` is called with a nonstandard `name` property on the third argument. When the `defineProperty` trap is called, the `descriptor` object doesn’t have a `name` property but does have a `value` property. That’s because `descriptor` isn’t a reference to the actual third argument passed to the `Object.defineProperty()` method, but rather a new object that contains only the allowable properties. The `Reflect.defineProperty()` method also ignores any nonstandard properties on the descriptor. The `getOwnPropertyDescriptor` trap has a slightly different restriction that requires the return value to be `null`, `undefined`, or an object. If an object is returned, only `enumerable`, `configurable`, `value`, `writable`, `get`, and `set` are allowed as own properties of the object.
An error is thrown if you return an object with an own property that isn’t allowed, as this code shows:

The property `name` isn’t allowable on property descriptors, so when `Object.getOwnPropertyDescriptor()` is called, the return value from the trap triggers an error. This restriction ensures that the value returned by `Object.getOwnPropertyDescriptor()` always has a reliable structure regardless of use on proxies.

# Duplicate Descriptor Methods

Once again, ECMAScript 6 has some confusingly similar methods, as the `Reflect.defineProperty()` and `Reflect.getOwnPropertyDescriptor()` methods appear to do the same thing as the `Object.defineProperty()` and `Object.getOwnPropertyDescriptor()` methods, respectively. Like other method pairs discussed earlier in this chapter, these have some subtle but important differences.

# defineProperty() Methods

The `Object.defineProperty()` and `Reflect.defineProperty()` methods are exactly the same except for their return values. The `Object.defineProperty()` method returns the first argument, while `Reflect.defineProperty()` returns `true` if the operation succeeded and `false` if not. For example:

When `Object.defineProperty()` is called on `target`, the return value is `target`. When `Reflect.defineProperty()` is called on `target`, the return value is `true`, indicating that the operation succeeded. Since the `defineProperty` proxy trap requires a boolean value to be returned, it’s better to use `Reflect.defineProperty()` to implement the default behavior when necessary.

# getOwnPropertyDescriptor() Methods

The `Object.getOwnPropertyDescriptor()` method coerces its first argument into an object when a primitive value is passed and then continues the operation. On the other hand, the `Reflect.getOwnPropertyDescriptor()` method throws an error if the first argument is a primitive value. Here’s an example showing both:

The `Object.getOwnPropertyDescriptor()` method returns `undefined` because it coerces `2` into an object, and that object has no `name` property. This is the standard behavior of the method when a property with the given name isn’t found on an object. When `Reflect.getOwnPropertyDescriptor()` is called, however, an error is thrown immediately because that method doesn’t accept primitive values for the first argument.

### The `ownKeys` Trap

The `ownKeys` proxy trap intercepts the internal method `[[OwnPropertyKeys]]` and allows you to override that behavior by returning an array of values.
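The differences between the `Object` and `Reflect` versions of these methods can be sketched in a few lines (the `threw` flag is just for demonstration):

```js
let target = {};

let result1 = Object.defineProperty(target, "name", { value: "target" });
console.log(target === result1);    // true - the first argument is returned

let result2 = Reflect.defineProperty(target, "name", { value: "target" });
console.log(result2);               // true - a success flag is returned

// Object.getOwnPropertyDescriptor() coerces 2 into an object
let descriptor1 = Object.getOwnPropertyDescriptor(2, "name");
console.log(descriptor1);           // undefined

// Reflect.getOwnPropertyDescriptor() rejects primitive values
let threw = false;
try {
    Reflect.getOwnPropertyDescriptor(2, "name");
} catch (ex) {
    threw = true;
}
console.log(threw);                 // true
```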
This array is used in four methods: the `Object.keys()` method, the `Object.getOwnPropertyNames()` method, the `Object.getOwnPropertySymbols()` method, and the `Object.assign()` method. (The `Object.assign()` method uses the array to determine which properties to copy.)

The default behavior for the `ownKeys` trap is implemented by the `Reflect.ownKeys()` method and returns an array of all own property keys, including both strings and symbols. The `Object.getOwnPropertyNames()` method and the `Object.keys()` method filter symbols out of the array and return the result, while `Object.getOwnPropertySymbols()` filters the strings out of the array and returns the result. The `Object.assign()` method uses the array with both strings and symbols.

The `ownKeys` trap receives a single argument, the target, and must always return an array or array-like object; otherwise, an error is thrown. You can use the `ownKeys` trap to, for example, filter out certain property keys that you don’t want used when the `Object.keys()` method, the `Object.getOwnPropertyNames()` method, the `Object.getOwnPropertySymbols()` method, or the `Object.assign()` method is used.

Suppose you don’t want to include any property names that begin with an underscore character, a common notation in JavaScript indicating that a field is private. You can use the `ownKeys` trap to filter out those keys as follows:

This example uses an `ownKeys` trap that first calls `Reflect.ownKeys()` to get the default list of keys for the target. Then, the `filter()` method is used to filter out keys that are strings and begin with an underscore character. Then, three properties are added to the `proxy` object: `name`, `_name`, and `nameSymbol`. When `Object.getOwnPropertyNames()` and `Object.keys()` are called on `proxy`, only the `name` property is returned. Similarly, only `nameSymbol` is returned when `Object.getOwnPropertySymbols()` is called on `proxy`. The `_name` property doesn’t appear in either result because it is filtered out.

### Function Proxies with the `apply` and `construct` Traps

Of all the proxy traps, only `apply` and `construct` require the proxy target to be a function.
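A sketch of the underscore-filtering `ownKeys` trap described here:

```js
let proxy = new Proxy({}, {
    ownKeys(trapTarget) {
        // keep symbols and any string not starting with an underscore
        return Reflect.ownKeys(trapTarget).filter(key =>
            typeof key !== "string" || key[0] !== "_");
    }
});

let nameSymbol = Symbol("name");

proxy.name = "proxy";
proxy._name = "private";
proxy[nameSymbol] = "symbol";

let names = Object.getOwnPropertyNames(proxy);
console.log(names.length);      // 1
console.log(names[0]);          // "name"

let keys = Object.keys(proxy);
console.log(keys.length);       // 1
console.log(keys[0]);           // "name"

let symbols = Object.getOwnPropertySymbols(proxy);
console.log(symbols.length);    // 1
```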
Recall from Chapter 3 that functions have two internal methods called `[[Call]]` and `[[Construct]]` that are executed when a function is called without and with the `new` operator, respectively. The `apply` and `construct` traps correspond to and let you override those internal methods. When a function is called without `new` , the `apply` trap receives, and `Reflect.apply()` expects, the following arguments: * `trapTarget` - the function being executed (the proxy’s target) * `thisArg` - the value of `this` inside of the function during the call * `argumentsList` - an array of arguments passed to the function The `construct` trap, which is called when the function is executed using `new` , receives the following arguments: * `trapTarget` - the function being executed (the proxy’s target) * `argumentsList` - an array of arguments passed to the function The `Reflect.construct()` method also accepts these two arguments and has an optional third argument called `newTarget` . When given, the `newTarget` argument specifies the value of `new.target` inside of the function. Together, the `apply` and `construct` traps completely control the behavior of any proxy target function. To mimic the default behavior of a function, you can do this: This example has a function that returns the number 42. The proxy for that function uses the `apply` and `construct` traps to delegate those behaviors to the `Reflect.apply()` and `Reflect.construct()` methods, respectively. The end result is that the proxy function works exactly like the target function, including identifying itself as a function when `typeof` is used. The proxy is called without `new` to return 42 and then is called with `new` to create an object called `instance` . The `instance` object is considered an instance of both `proxy` and `target` because `instanceof` uses the prototype chain to determine this information. 
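Mimicking the default function behavior with these two traps might look like this:

```js
let target = function() {
    return 42;
};

let proxy = new Proxy(target, {
    apply: function(trapTarget, thisArg, argumentList) {
        return Reflect.apply(trapTarget, thisArg, argumentList);
    },
    construct: function(trapTarget, argumentList) {
        return Reflect.construct(trapTarget, argumentList);
    }
});

// a proxy with a function target looks like a function
console.log(typeof proxy);                  // "function"
console.log(proxy());                       // 42

var instance = new proxy();
console.log(instance instanceof proxy);     // true
console.log(instance instanceof target);    // true
```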
Prototype chain lookup is not affected by this proxy, which is why `proxy` and `target` appear to have the same prototype to the JavaScript engine. # Validating Function Parameters The `apply` and `construct` traps open up a lot of possibilities for altering the way a function is executed. For instance, suppose you want to validate that all arguments are of a specific type. You can check the arguments in the `apply` trap: This example uses the `apply` trap to ensure that all arguments are numbers. The `sum()` function adds up all of the arguments that are passed. If a non-number value is passed, the function will still attempt the operation, which can cause unexpected results. By wrapping `sum()` inside the `sumProxy()` proxy, this code intercepts function calls and ensures that each argument is a number before allowing the call to proceed. To be safe, the code also uses the `construct` trap to ensure that the function can’t be called with `new` . You can also do the opposite, ensuring that a function must be called with `new` and validating its arguments to be numbers: Here, the `apply` trap throws an error while the `construct` trap uses the `Reflect.construct()` method to validate input and return a new instance. Of course, you can accomplish the same thing without proxies using `new.target` instead. # Calling Constructors Without new Chapter 3 introduced the `new.target` metaproperty. To review, `new.target` is a reference to the function on which `new` is called, meaning that you can tell if a function was called using `new` or not by checking the value of `new.target` like this: This example throws an error when `Numbers` is called without using `new` , which is similar to the example in the “Validating Function Parameters” section but doesn’t use a proxy. Writing code like this is much simpler than using a proxy and is preferable if your only goal is to prevent calling the function without `new` . 
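The argument-validating proxy described here can be sketched as follows (the `threw` flag is only for demonstration):

```js
function sum(...values) {
    return values.reduce((previous, current) => previous + current, 0);
}

let sumProxy = new Proxy(sum, {
    apply: function(trapTarget, thisArg, argumentList) {
        argumentList.forEach((arg) => {
            if (typeof arg !== "number") {
                throw new TypeError("All arguments must be numbers.");
            }
        });
        return Reflect.apply(trapTarget, thisArg, argumentList);
    },
    construct: function(trapTarget, argumentList) {
        throw new TypeError("This function can't be called with new.");
    }
});

let result = sumProxy(1, 2, 3, 4);
console.log(result);                // 10

let threw = false;
try {
    sumProxy(1, "hello");           // rejected by the apply trap
} catch (ex) {
    threw = true;
}
console.log(threw);                 // true
```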
But sometimes you aren’t in control of the function whose behavior needs to be modified. In that case, using a proxy makes sense. Suppose the `Numbers` function is defined in code you can’t modify. You know that the code relies on `new.target` and want to avoid that check while still calling the function. The behavior when using `new` is already set, so you can just use the `apply` trap: The `NumbersProxy` function allows you to call `Numbers` without using `new` and have it behave as if `new` were used. To do so, the `apply` trap calls `Reflect.construct()` with the arguments passed into `apply` . The `new.target` inside of `Numbers` is equal to `Numbers` itself, and no error is thrown. While this is a simple example of modifying `new.target` , you can also do so more directly. # Overriding Abstract Base Class Constructors You can go one step further and specify the third argument to `Reflect.construct()` as the specific value to assign to `new.target` . This is useful when a function is checking `new.target` against a known value, such as when creating an abstract base class constructor (discussed in Chapter 9). In an abstract base class constructor, `new.target` is expected to be something other than the class constructor itself, as in this example: When ``` new AbstractNumbers() ``` is called, `new.target` is equal to `AbstractNumbers` and an error is thrown. Calling `new Numbers()` still works because `new.target` is equal to `Numbers` . You can bypass this restriction by manually assigning `new.target` with a proxy: The `AbstractNumbersProxy` uses the `construct` trap to intercept the call to the ``` new AbstractNumbersProxy() ``` method. Then, the `Reflect.construct()` method is called with arguments from the trap and adds an empty function as the third argument. That empty function is used as the value of `new.target` inside of the constructor. 
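A sketch of the `apply`-trap redirection described here:

```js
function Numbers(...values) {
    if (typeof new.target === "undefined") {
        throw new TypeError("This function must be called with new.");
    }
    this.values = values;
}

let NumbersProxy = new Proxy(Numbers, {
    apply: function(trapTarget, thisArg, argumentList) {
        // route a normal call through [[Construct]] instead
        return Reflect.construct(trapTarget, argumentList);
    }
});

let instance = NumbersProxy(1, 2, 3, 4);    // no new, no error
console.log(instance.values);               // [1, 2, 3, 4]
```

Inside `Numbers`, `new.target` is `Numbers` itself because `Reflect.construct()` was used, so the check passes.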
Because `new.target` is not equal to `AbstractNumbers` , no error is thrown and the constructor executes completely. # Callable Class Constructors Chapter 9 explained that class constructors must always be called with `new` . That happens because the internal `[[Call]]` method for class constructors is specified to throw an error. But proxies can intercept calls to the `[[Call]]` method, meaning you can effectively create callable class constructors by using a proxy. For instance, if you want a class constructor to work without using `new` , you can use the `apply` trap to create a new instance. Here’s some sample code: The `PersonProxy` object is a proxy of the `Person` class constructor. Class constructors are just functions, so they behave like functions when used in proxies. The `apply` trap overrides the default behavior and instead returns a new instance of `trapTarget` that’s equal to `Person` . (I used `trapTarget` in this example to show that you don’t need to manually specify the class.) The `argumentList` is passed to `trapTarget` using the spread operator to pass each argument separately. Calling `PersonProxy()` without using `new` returns an instance of `Person` ; if you attempt to call `Person()` without `new` , the constructor will still throw an error. Creating callable class constructors is something that is only possible using proxies. ### Revocable Proxies Normally, a proxy can’t be unbound from its target once the proxy has been created. All of the examples to this point in this chapter have used nonrevocable proxies. But there may be situations when you want to revoke a proxy so that it can no longer be used. You’ll find it most helpful to revoke proxies when you want to provide an object through an API for security purposes and maintain the ability to cut off access to some functionality at any point in time. 
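The callable class constructor described here might be sketched as:

```js
class Person {
    constructor(name) {
        this.name = name;
    }
}

let PersonProxy = new Proxy(Person, {
    apply: function(trapTarget, thisArg, argumentList) {
        return new trapTarget(...argumentList);
    }
});

let me = PersonProxy("Nicholas");       // no new required
console.log(me.name);                   // "Nicholas"
console.log(me instanceof Person);      // true
console.log(me instanceof PersonProxy); // true
```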
You can create revocable proxies with the `Proxy.revocable()` method, which takes the same arguments as the `Proxy` constructor: a target object and the proxy handler. The return value is an object with the following properties:

* `proxy` - the proxy object that can be revoked
* `revoke` - the function to call to revoke the proxy

When the `revoke()` function is called, no further operations can be performed through the `proxy`. Any attempt to interact with the proxy object in a way that would trigger a proxy trap throws an error. For example:

This example creates a revocable proxy. It uses destructuring to assign the `proxy` and `revoke` variables to the properties of the same name on the object returned by the `Proxy.revocable()` method. After that, the `proxy` object can be used just like a nonrevocable proxy object, so `proxy.name` returns `"target"` because it passes through to `target.name`. Once the `revoke()` function is called, however, `proxy` no longer functions. Attempting to access `proxy.name` throws an error, as will any other operation that would trigger a trap on `proxy`.

### Solving the Array Problem

At the beginning of this chapter, I explained how developers couldn’t mimic the behavior of an array accurately in JavaScript prior to ECMAScript 6. Proxies and the reflection API allow you to create an object that behaves in the same manner as the built-in `Array` type when properties are added and removed. To refresh your memory, here’s an example showing the behavior that proxies help to mimic:

There are two particularly important behaviors to notice in this example:

* The `length` property is increased to 4 when `colors[3]` is assigned a value.
* The last two items in the array are deleted when the `length` property is set to 2.

These two behaviors are the only ones that need to be mimicked to accurately recreate how built-in arrays work. The next few sections describe how to make an object that correctly mimics them.
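A minimal revocable proxy might look like this (the `threw` flag is only for demonstration):

```js
let target = {
    name: "target"
};

let { proxy, revoke } = Proxy.revocable(target, {});

let nameBefore = proxy.name;
console.log(nameBefore);    // "target"

revoke();

// any trap-triggering operation on the proxy now throws
let threw = false;
try {
    console.log(proxy.name);
} catch (ex) {
    threw = true;
}
console.log(threw);         // true
```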
# Detecting Array Indices

Keep in mind that assigning to an integer property key is a special case for arrays, as those are treated differently from non-integer keys. The ECMAScript 6 specification gives these instructions on how to determine if a property key is an array index:

A String property name `P` is an array index if and only if `ToString(ToUint32(P))` is equal to `P` and `ToUint32(P)` is not equal to 2^32 - 1.

This operation can be implemented in JavaScript as follows:

The `toUint32()` function converts a given value into an unsigned 32-bit integer using an algorithm described in the specification. The `isArrayIndex()` function first converts the key into a uint32 and then performs the comparisons to determine if the key is an array index or not. With these utility functions available, you can start to implement an object that will mimic a built-in array.

# Increasing length when Adding New Elements

You might have noticed that both array behaviors I described rely on the assignment of a property. That means you really only need to use the `set` proxy trap to accomplish both behaviors. To get started, here’s an example that implements the first of the two behaviors by incrementing the `length` property when an array index larger than `length - 1` is used:

This example uses the `set` proxy trap to intercept the setting of an array index. If the key is an array index, then it is converted into a number because keys are always passed as strings. Next, if that numeric value is greater than or equal to the current `length` property, then the `length` property is updated to be one more than the numeric key (setting an item in position 3 means the `length` must be 4). After that, the default behavior for setting a property is used via `Reflect.set()`, since you do want the property to receive the value as specified.
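One way to implement the two utility functions described above:

```js
function toUint32(value) {
    // convert to an unsigned 32-bit integer
    return Math.floor(Math.abs(Number(value))) % Math.pow(2, 32);
}

function isArrayIndex(key) {
    let numericKey = toUint32(key);
    return String(numericKey) == key && numericKey < (Math.pow(2, 32) - 1);
}

console.log(isArrayIndex("0"));     // true
console.log(isArrayIndex("3"));     // true
console.log(isArrayIndex("abc"));   // false
console.log(isArrayIndex("-1"));    // false
```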
The initial custom array is created by calling `createMyArray()` with a `length` of 3 and the values for those three items are added immediately afterward. The `length` property correctly remains 3 until the value `"black"` is assigned to position 3. At that point, `length` is set to 4. With the first behavior working, it’s time to move on to the second. # Deleting Elements on Reducing length The first array behavior to mimic is used only when an array index is greater than or equal to the `length` property. The second behavior does the opposite and removes array items when the `length` property is set to a smaller value than it previously contained. That involves not only changing the `length` property, but also deleting all items that might otherwise exist. For instance, if an array with a `length` of 4 has `length` set to 2, the items in positions 2 and 3 are deleted. You can accomplish this inside the `set` proxy trap alongside the first behavior. Here’s the previous example again, with an updated `createMyArray` method: The `set` proxy trap in this code checks to see if `key` is `"length"` in order to adjust the rest of the object correctly. When that happens, the current length is first retrieved using `Reflect.get()` and compared against the new value. If the new value is less than the current length, then a `for` loop deletes all properties on the target that should no longer be available. The `for` loop goes backward from the current array length ( `currentLength` ) and deletes each property until it reaches the new array length ( `value` ). This example adds four colors to `colors` and then sets the `length` property to 2. That effectively removes the items in positions 2 and 3, so they now return `undefined` when you attempt to access them. The `length` property is correctly set to 2 and the items in positions 0 and 1 are still accessible. With both behaviors implemented, you can easily create an object that mimics the behavior of built-in arrays. 
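A sketch combining both behaviors in a single `set` trap, along the lines described (`createMyArray()` and the helpers follow the conventions used in this section):

```js
function toUint32(value) {
    return Math.floor(Math.abs(Number(value))) % Math.pow(2, 32);
}

function isArrayIndex(key) {
    let numericKey = toUint32(key);
    return String(numericKey) == key && numericKey < (Math.pow(2, 32) - 1);
}

function createMyArray(length = 0) {
    return new Proxy({ length }, {
        set(trapTarget, key, value) {
            let currentLength = Reflect.get(trapTarget, "length");

            if (isArrayIndex(key)) {
                let numericKey = Number(key);
                if (numericKey >= currentLength) {
                    // grow length when assigning past the end
                    Reflect.set(trapTarget, "length", numericKey + 1);
                }
            } else if (key === "length" && value < currentLength) {
                // delete items beyond the new, smaller length
                for (let index = currentLength - 1; index >= value; index--) {
                    Reflect.deleteProperty(trapTarget, index);
                }
            }
            return Reflect.set(trapTarget, key, value);
        }
    });
}

let colors = createMyArray(3);
colors[0] = "red";
colors[1] = "green";
colors[2] = "blue";
colors[3] = "black";

let grownLength = colors.length;
console.log(grownLength);       // 4

colors.length = 2;
console.log(colors[3]);         // undefined
console.log(colors[1]);         // "green"
console.log(colors.length);     // 2
```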
But doing so with a function isn’t as desirable as creating a class to encapsulate this behavior, so the next step is to implement this functionality as a class.

# Implementing the MyArray Class

The simplest way to create a class that uses a proxy is to define the class as usual and then return a proxy from the constructor. That way, the object returned when a class is instantiated will be the proxy instead of the instance. (The instance is the value of `this` inside the constructor.) The instance becomes the target of the proxy and the proxy is returned as if it were the instance. The instance will be completely private and you won’t be able to access it directly, though you’ll be able to access it indirectly through the proxy. Here’s a simple example of returning a proxy from a class constructor:

In this example, the class `Thing` returns a proxy from its constructor. The proxy target is `this` and the proxy is returned from the constructor. That means `myThing` is actually a proxy even though it was created by calling the `Thing` constructor. Because proxies pass through their behavior to their targets, `myThing` is still considered an instance of `Thing`, making the proxy completely transparent to anyone using the `Thing` class.

With that in mind, creating a custom array class using a proxy is relatively straightforward. The code is mostly the same as the code in the “Deleting Elements on Reducing Length” section. The same proxy code is used, but this time, it’s inside a class constructor. Here’s the complete example:

This code creates a `MyArray` class that returns a proxy from its constructor. The `length` property is added in the constructor (initialized to either the value that is passed in or to a default value of 0) and then a proxy is created and returned. This gives the `colors` variable the appearance of being just an instance of `MyArray` and implements both of the key array behaviors.
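A sketch of the class-based version, with the same `set` trap moved into the constructor:

```js
function toUint32(value) {
    return Math.floor(Math.abs(Number(value))) % Math.pow(2, 32);
}

function isArrayIndex(key) {
    let numericKey = toUint32(key);
    return String(numericKey) == key && numericKey < (Math.pow(2, 32) - 1);
}

class MyArray {
    constructor(length = 0) {
        this.length = length;

        // the proxy is returned in place of the instance
        return new Proxy(this, {
            set(trapTarget, key, value) {
                let currentLength = Reflect.get(trapTarget, "length");

                if (isArrayIndex(key)) {
                    let numericKey = Number(key);
                    if (numericKey >= currentLength) {
                        Reflect.set(trapTarget, "length", numericKey + 1);
                    }
                } else if (key === "length" && value < currentLength) {
                    for (let index = currentLength - 1; index >= value; index--) {
                        Reflect.deleteProperty(trapTarget, index);
                    }
                }
                return Reflect.set(trapTarget, key, value);
            }
        });
    }
}

let colors = new MyArray(3);
console.log(colors instanceof MyArray);     // true

colors[0] = "red";
colors[3] = "black";
console.log(colors.length);                 // 4

colors.length = 2;
console.log(colors[3]);                     // undefined
console.log(colors.length);                 // 2
```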
Although returning a proxy from a class constructor is easy, it does mean that a new proxy is created for every instance. There is, however, a way to have all instances share one proxy: you can use the proxy as a prototype.

### Using a Proxy as a Prototype

Proxies can be used as prototypes, but doing so is a bit more involved than the previous examples in this chapter. When a proxy is a prototype, the proxy traps are only called when the default operation would normally continue on to the prototype, which does limit a proxy’s capabilities as a prototype. Consider this example:

The `newTarget` object is created with a proxy as the prototype. Making `target` the proxy target effectively makes `target` the prototype of `newTarget` because the proxy is transparent. Now, proxy traps will only be called if an operation on `newTarget` would pass the operation through to happen on `target`.

The `Object.defineProperty()` method is called on `newTarget` to create an own property called `name`. Defining a property on an object isn’t an operation that normally continues to the object’s prototype, so the `defineProperty` trap on the proxy is never called and the `name` property is added to `newTarget` as an own property.

While proxies are severely limited when used as prototypes, there are a few traps that are still useful.

`get` Trap on a Prototype

When the internal `[[Get]]` method is called to read a property, the operation looks for own properties first. If an own property with the given name isn’t found, then the operation continues to the prototype and looks for a property there. The process continues until there are no further prototypes to check.

Thanks to that process, if you set up a `get` proxy trap, the trap will be called on a prototype whenever an own property of the given name doesn’t exist. You can use the `get` trap to prevent unexpected behavior when accessing properties that you can’t guarantee will exist.
Just create an object that throws an error whenever you try to access a property that doesn’t exist:

In this code, the `thing` object is created with a proxy as its prototype. The `get` trap throws an error when called to indicate that the given key doesn’t exist on the `thing` object. When `thing.name` is read, the operation never calls the `get` trap on the prototype because the property exists on `thing`. The `get` trap is called only when the `thing.unknown` property, which doesn’t exist, is accessed.

When the last line executes, `unknown` isn’t an own property of `thing`, so the operation continues to the prototype. The `get` trap then throws an error. This type of behavior can be very useful in JavaScript, where unknown properties silently return `undefined` instead of throwing an error (as happens in other languages).

It’s important to understand that in this example, `trapTarget` and `receiver` are different objects. When a proxy is used as a prototype, the `trapTarget` is the prototype object itself while the `receiver` is the instance object. In this case, that means `trapTarget` is equal to `target` and `receiver` is equal to `thing`. That allows you to access both the original target of the proxy and the object on which the operation is meant to take place.

`set` Trap on a Prototype

The internal `[[Set]]` method also checks for own properties and then continues to the prototype if needed. When you assign a value to an object property, the value is assigned to the own property with the same name if it exists. If no own property with the given name exists, then the operation continues to the prototype. The tricky part is that even though the assignment operation continues to the prototype, assigning a value to that property will create a property on the instance (not the prototype) by default, regardless of whether a property of that name exists on the prototype.
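The `get`-trap-on-a-prototype pattern described above can be sketched as follows (the `threw` flag is only for demonstration):

```js
let target = {};
let thing = Object.create(new Proxy(target, {
    get(trapTarget, key, receiver) {
        throw new ReferenceError(`${key} doesn't exist`);
    }
}));

thing.name = "thing";
console.log(thing.name);    // "thing" - own property, trap not called

let threw = false;
try {
    let unknown = thing.unknown;    // no own property, so the trap runs
} catch (ex) {
    threw = true;
}
console.log(threw);         // true
```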
To get a better idea of when the `set` trap will be called on a prototype and when it won’t, consider the following example showing the default behavior: In this example, `target` starts with no own properties. The `thing` object has a proxy as its prototype that defines a `set` trap to catch the creation of any new properties. When `thing.name` is assigned `"thing"` as its value, the `set` proxy trap is called because `thing` doesn’t have an own property called `name` . Inside the `set` trap, `trapTarget` is equal to `target` and `receiver` is equal to `thing` . The operation should ultimately create a new property on `thing` , and fortunately `Reflect.set()` implements this default behavior for you if you pass in `receiver` as the fourth argument. Once the `name` property is created on `thing` , setting `thing.name` to a different value will no longer call the `set` proxy trap. At that point, `name` is an own property so the `[[Set]]` operation never continues on to the prototype. `has` Trap on a Prototype Recall that the `has` trap intercepts the use of the `in` operator on objects. The `in` operator searches first for an object’s own property with the given name. If an own property with that name doesn’t exist, the operation continues to the prototype. If there’s no own property on the prototype, then the search continues through the prototype chain until the own property is found or there are no more prototypes to search. The `has` trap is therefore only called when the search reaches the proxy object in the prototype chain. When using a proxy as a prototype, that only happens when there’s no own property of the given name. For example: This code creates a `has` proxy trap on the prototype of `thing` . The `has` trap isn’t passed a `receiver` object like the `get` and `set` traps are because searching the prototype happens automatically when the `in` operator is used. Instead, the `has` trap must operate only on `trapTarget` , which is equal to `target` . 
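The default `set` behavior on a prototype proxy might be sketched like this:

```js
let target = {};
let thing = Object.create(new Proxy(target, {
    set(trapTarget, key, value, receiver) {
        // passing receiver makes Reflect.set() create the property
        // on the instance, not on the proxy's target
        return Reflect.set(trapTarget, key, value, receiver);
    }
}));

console.log(thing.hasOwnProperty("name"));  // false

thing.name = "thing";       // triggers the set trap on the prototype
console.log(thing.name);                    // "thing"
console.log(thing.hasOwnProperty("name"));  // true

thing.name = "boo";         // name is an own property now, no trap call
console.log(thing.name);                    // "boo"
```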
The first time the `in` operator is used in this example, the `has` trap is called because the property `name` doesn’t exist as an own property of `thing` . When `thing.name` is given a value and then the `in` operator is used again, the `has` trap isn’t called because the operation stops after finding the own property `name` on `thing` . The prototype examples to this point have centered around objects created using the `Object.create()` method. But if you want to create a class that has a proxy as a prototype, the process is a bit more involved. # Proxies as Prototypes on Classes Classes cannot be directly modified to use a proxy as a prototype because their `prototype` property is non-writable. You can, however, use a bit of misdirection to create a class that has a proxy as its prototype by using inheritance. To start, you need to create an ECMAScript 5-style type definition using a constructor function. You can then overwrite the prototype to be a proxy. Here’s an example: The `NoSuchProperty` function represents the base from which the class will inherit. There are no restrictions on the `prototype` property of functions, so you can overwrite it with a proxy. The `get` trap is used to throw an error when the property doesn’t exist. The `thing` object is created as an instance of `NoSuchProperty` and throws an error when the nonexistent `name` property is accessed. The next step is to create a class that inherits from `NoSuchProperty` . You can simply use the `extends` syntax discussed in Chapter 9 to introduce the proxy into the class’ prototype chain, like this: The `Square` class inherits from `NoSuchProperty` so the proxy is in the `Square` class’ prototype chain. The `shape` object is then created as a new instance of `Square` and has two own properties: `length` and `width` . Reading the values of those properties succeeds because the `get` proxy trap is never called. 
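A sketch of the `NoSuchProperty` pattern described here, including a class that inherits the proxy through its prototype chain (the `threw` flag and the `wdth` typo are only for demonstration):

```js
function NoSuchProperty() {
    // intentionally empty
}

// overwrite the prototype with a proxy that rejects unknown reads
NoSuchProperty.prototype = new Proxy({}, {
    get(trapTarget, key, receiver) {
        throw new ReferenceError(`${key} doesn't exist`);
    }
});

class Square extends NoSuchProperty {
    constructor(length, width) {
        super();
        this.length = length;
        this.width = width;
    }
}

let shape = new Square(2, 6);

let area = shape.length * shape.width;
console.log(area);          // 12

let threw = false;
try {
    area = shape.length * shape.wdth;   // "wdth" is a typo
} catch (ex) {
    threw = true;           // the proxy's get trap fired
}
console.log(threw);         // true
```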
Only when a property that doesn’t exist on `shape` is accessed ( `shape.wdth` , an obvious typo) does the `get` proxy trap trigger and throw an error. That proves the proxy is in the prototype chain of `shape` , but it might not be obvious that the proxy is not the direct prototype of `shape` . In fact, the proxy is a couple of steps up the prototype chain from `shape` . You can see this more clearly by slightly altering the preceding example: This version of the code stores the proxy in a variable called `proxy` so it’s easy to identify later. The prototype of `shape` is `Square.prototype` , which is not a proxy. But the prototype of `Square.prototype` is the proxy that was inherited from `NoSuchProperty` . The inheritance adds another step in the prototype chain, and that matters because operations that might result in calling the `get` trap on `proxy` need to go through one extra step before getting there. If there’s a property on `Square.prototype` , then that will prevent the `get` proxy trap from being called, as in this example: Here, the `Square` class has a `getArea()` method. The `getArea()` method is automatically added to `Square.prototype` so when `shape.getArea()` is called, the search for the method `getArea()` starts on the `shape` instance and then proceeds to its prototype. Because `getArea()` is found on the prototype, the search stops and the proxy is never called. That is actually the behavior you want in this situation, as you wouldn’t want to incorrectly throw an error when `getArea()` was called. Even though it takes a little bit of extra code to create a class with a proxy in its prototype chain, it can be worth the effort if you need such functionality. Prior to ECMAScript 6, certain objects (such as arrays) displayed nonstandard behavior that developers couldn’t replicate. Proxies change that. 
They let you define your own nonstandard behavior for several low-level JavaScript operations, so you can replicate all behaviors of built-in JavaScript objects through proxy traps. These traps are called behind the scenes when various operations take place, like a use of the `in` operator.

A reflection API was also introduced in ECMAScript 6 to allow developers to implement the default behavior for each proxy trap. Each proxy trap has a corresponding method of the same name on the `Reflect` object, another ECMAScript 6 addition. Using a combination of proxy traps and reflection API methods, it’s possible to filter some operations to behave differently only in certain conditions while defaulting to the built-in behavior.

Revocable proxies are special proxies that can be effectively disabled by using a `revoke()` function. The `revoke()` function terminates all functionality on the proxy, so any attempt to interact with the proxy’s properties throws an error after `revoke()` is called. Revocable proxies are important for application security where third-party developers may need access to certain objects for a specified amount of time.

While using proxies directly is the most powerful use case, you can also use a proxy as the prototype for another object. In that case, you are severely limited in the number of proxy traps you can effectively use. Only the `get`, `set`, and `has` proxy traps will ever be called on a proxy when it’s used as a prototype, making the set of use cases much smaller.

## Encapsulating Code With Modules

JavaScript’s “shared everything” approach to loading code is one of the most error-prone and confusing aspects of the language. Other languages use concepts such as packages to define code scope, but before ECMAScript 6, everything defined in every JavaScript file of an application shared one global scope.
As web applications became more complex and started using even more JavaScript code, that approach caused problems like naming collisions and security concerns. One goal of ECMAScript 6 was to solve the scope problem and bring some order to JavaScript applications. That’s where modules come in. ### What are Modules? Modules are JavaScript files that are loaded in a different mode (as opposed to scripts, which are loaded in the original way JavaScript worked). This different mode is necessary because modules have very different semantics than scripts: * Module code automatically runs in strict mode, and there’s no way to opt-out of strict mode. * Variables created in the top level of a module aren’t automatically added to the shared global scope. They exist only within the top-level scope of the module. * The value of `this` in the top level of a module is `undefined` . * Modules don’t allow HTML-style comments within code (a leftover feature from JavaScript’s early browser days). * Modules must export anything that should be available to code outside of the module. * Modules may import bindings from other modules. These differences may seem small at first glance, but they represent a significant change in how JavaScript code is loaded and evaluated, which I will discuss over the course of this chapter. The real power of modules is the ability to export and import only bindings you need, rather than everything in a file. A good understanding of exporting and importing is fundamental to understanding how modules differ from scripts. ### Basic Exporting You can use the `export` keyword to expose parts of published code to other modules. In the simplest case, you can place `export` in front of any variable, function, or class declaration to export it from the module, like this: There are a few things to notice in this example. First, apart from the `export` keyword, every declaration is exactly the same as it would be otherwise. 
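A module using these `export` forms might look like the following sketch (a `magicNumber` binding is included because later import examples refer to one):

```js
// example.js

// export data
export var color = "red";
export let magicNumber = 7;

// export a function
export function sum(num1, num2) {
    return num1 + num2;
}

// export a class
export class Rectangle {
    constructor(length, width) {
        this.length = length;
        this.width = width;
    }
}

// this function is private to the module
function subtract(num1, num2) {
    return num1 - num2;
}

// define a function...
function multiply(num1, num2) {
    return num1 * num2;
}

// ...and export a reference to it afterward
export { multiply };
```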
Each exported function or class also has a name; that’s because exported function and class declarations require a name. You can’t export anonymous functions or classes using this syntax unless you use the `default` keyword (discussed in detail in the “Default Values in Modules” section). Next, consider the `multiply()` function, which isn’t exported when it’s defined. That works because you need not always export a declaration: you can also export references. Finally, notice that this example doesn’t export the `subtract()` function. That function won’t be accessible from outside this module because any variables, functions, or classes that are not explicitly exported remain private to the module. ### Basic Importing Once you have a module with exports, you can access the functionality in another module by using the `import` keyword. The two parts of an `import` statement are the identifiers you’re importing and the module from which those identifiers should be imported. This is the statement’s basic form: The curly braces after `import` indicate the bindings to import from a given module. The keyword `from` indicates the module from which to import the given binding. The module is specified by a string representing the path to the module (called the module specifier). Browsers use the same path format you might pass to the `<script>` element, which means you must include a file extension. Node.js, on the other hand, follows its traditional convention of differentiating between local files and packages based on a filesystem prefix. For example, `example` would be a package and `./example.js` would be a local file. When importing a binding from a module, the binding acts as if it were defined using `const` . That means you can’t define another variable with the same name (including importing another binding of the same name), use the identifier before the `import` statement, or change its value. 
# Importing a Single Binding Suppose that the first example in the “Basic Exporting” section is in a module with the filename `example.js` . You can import and use bindings from that module in a number of ways. For instance, you can just import one identifier: Even though `example.js` exports more than just that one function, this example imports only the `sum()` function. If you try to assign a new value to `sum` , the result is an error, as you can’t reassign imported bindings. # Importing Multiple Bindings If you want to import multiple bindings from the example module, you can explicitly list them out as follows: Here, three bindings are imported from the example module: `sum` , `multiply` , and `magicNumber` . They are then used as if they were locally defined. # Importing All of a Module There’s also a special case that allows you to import the entire module as a single object. All of the exports are then available on that object as properties. For example: In this code, all exported bindings in `example.js` are loaded into an object called `example` . The named exports (the `sum()` function, the `multiply()` function, and `magicNumber` ) are then accessible as properties on `example` . This import format is called a namespace import because the `example` object doesn’t exist inside of the `example.js` file and is instead created to be used as a namespace object for all of the exported members of `example.js` . Keep in mind, however, that no matter how many times you use a module in `import` statements, the module will only be executed once. After the code to import the module executes, the instantiated module is kept in memory and reused whenever another `import` statement references it. Consider the following: Even though there are three `import` statements in this module, `example.js` will only be executed once.
If other modules in the same application were to import bindings from `example.js` , those modules would use the same module instance this code uses. # A Subtle Quirk of Imported Bindings ECMAScript 6’s `import` statements create read-only bindings to variables, functions, and classes rather than simply referencing the original bindings like normal variables. Even though the module that imports the binding can’t change its value, the module that exports that identifier can. For example, suppose you want to use this module: When you import those two bindings, the `setName()` function can change the value of `name` : The call to `setName("Greg")` goes back into the module from which `setName()` was exported and executes there, setting `name` to `"Greg"` instead. Note this change is automatically reflected on the imported `name` binding. That’s because `name` is the local name for the exported `name` identifier. The `name` used in the code above and the `name` used in the module being imported from aren’t the same. ### Renaming Exports and Imports Sometimes, you may not want to use the original name of a variable, function, or class you’ve imported from a module. Fortunately, you can change the name of an export both during the export and during the import. In the first case, suppose you have a function that you’d like to export with a different name. You can use the `as` keyword to specify the name that the function should be known as outside of the module: Here, the `sum()` function ( `sum` is the local name) is exported as `add()` ( `add` is the exported name). That means when another module wants to import this function, it will have to use the name `add` instead: If the module importing the function wants to use a different name, it can also use `as` : This code imports the `add()` function using the import name and renames it to `sum()` (the local name). That means there is no identifier named `add` in this module. 
### Default Values in Modules The module syntax is really optimized for exporting and importing default values from modules, as this pattern was quite common in other module systems, like CommonJS (another JavaScript module format popularized by Node.js). The default value for a module is a single variable, function, or class as specified by the `default` keyword, and you can only set one default export per module. Using the `default` keyword with multiple exports is a syntax error. # Exporting Default Values Here’s a simple example that uses the `default` keyword: This module exports a function as its default value. The `default` keyword indicates that this is a default export. The function doesn’t require a name because the module itself represents the function. You can also specify an identifier as the default export by placing it after `export default` , such as: Here, the `sum()` function is defined first and later exported as the default value of the module. You may want to choose this approach if the default value needs to be calculated. A third way to specify an identifier as the default export is by using the renaming syntax as follows: The identifier `default` has special meaning in a renaming export and indicates a value should be the default for the module. Because `default` is a keyword in JavaScript, it can’t be used for a variable, function, or class name (it can be used as a property name). So the use of `default` to rename an export is a special case to create a consistency with how non-default exports are defined. This syntax is useful if you want to use a single `export` statement to specify multiple exports, including the default, at once. # Importing Default Values You can import a default value from a module using the following syntax: This import statement imports the default from the module `example.js` . Note that no curly braces are used, unlike you’d see in a non-default import. 
The local name `sum` is used to represent whatever default function the module exports. This syntax is the cleanest, and the creators of ECMAScript 6 expect it to be the dominant form of import on the Web, allowing you to use an already-existing object. For modules that export both a default and one or more non-default bindings, you can import all exported bindings with one statement. For instance, suppose you have this module: You can import both `color` and the default function using the following `import` statement: The comma separates the default local name from the non-defaults, which are also surrounded by curly braces. Keep in mind that the default must come before the non-defaults in the `import` statement. As with exporting defaults, you can import defaults with the renaming syntax, too: In this code, the default export ( `default` ) is renamed to `sum` and the additional `color` export is also imported. This example is equivalent to the preceding example. ### Re-exporting a Binding There may be a time when you’d like to re-export something that your module has imported (for instance, if you’re creating a library out of several small modules). You can re-export an imported value with the patterns already discussed in this chapter as follows: That works, but a single statement can also do the same thing: This form of `export` looks into the specified module for the declaration of `sum` and then exports it. Of course, you can also choose to export a different name for the same value: Here, `sum` is imported from `"./example.js"` and then exported as `add` . If you’d like to export everything from another module, you can use the `*` pattern: When you export everything, you are including all named exports and excluding any default export. For instance, if `example.js` has a default export, you would need to import it explicitly and then export it explicitly. 
### Importing Without Bindings Some modules may not export anything, and instead, only make modifications to objects in the global scope. Even though top-level variables, functions, and classes inside modules don’t automatically end up in the global scope, that doesn’t mean modules cannot access the global scope. The shared definitions of built-in objects such as `Array` and `Object` are accessible inside a module and changes to those objects will be reflected in other modules. For instance, if you want to add a `pushAll()` method to all arrays, you might define a module like this: This is a valid module even though there are no exports or imports. This code can be used both as a module and a script. Since it doesn’t export anything, you can use a simplified import to execute the module code without importing any bindings: This code imports and executes the module containing the `pushAll()` method, so `pushAll()` is added to the array prototype. That means `pushAll()` is now available for use on all arrays inside of this module. ### Loading Modules While ECMAScript 6 defines the syntax for modules, it doesn’t define how to load them. This is part of the complexity of a specification that’s supposed to be agnostic to implementation environments. Rather than trying to create a single specification that would work for all JavaScript environments, ECMAScript 6 specifies only the syntax and abstracts out the loading mechanism to an undefined internal operation called `HostResolveImportedModule` . Web browsers and Node.js are left to decide how to implement `HostResolveImportedModule` in a way that makes sense for their respective environments. # Using Modules in Web Browsers Even before ECMAScript 6, web browsers had multiple ways of including JavaScript in a web application. Those script loading options are: * Loading JavaScript code files using the `<script>` element with the `src` attribute specifying a location from which to load the code.
* Embedding JavaScript code inline using the `<script>` element without the `src` attribute. * Loading JavaScript code files to execute as workers (such as a web worker or service worker). In order to fully support modules, web browsers had to update each of these mechanisms. These details are defined in the HTML specification, and I’ll summarize them in this section. # Using Modules With `<script>` The default behavior of the `<script>` element is to load JavaScript files as scripts (not modules). This happens when the `type` attribute is missing or when the `type` attribute contains a JavaScript content type (such as `"text/javascript"` ). The `<script>` element can then execute inline code or load the file specified in `src` . To support modules, the `"module"` value was added as a `type` option. Setting `type` to `"module"` tells the browser to load any inline code or code contained in the file specified by `src` as a module instead of a script. Here’s a simple example: The first `<script>` element in this example loads an external module file using the `src` attribute. The only difference from loading a script is that `"module"` is given as the `type` . The second `<script>` element contains a module that is embedded directly in the web page. The variable `result` is not exposed globally because it exists only within the module (as defined by the `<script>` element) and is therefore not added to `window` as a property. As you can see, including modules in web pages is fairly simple and similar to including scripts. However, there are some differences in how modules are loaded. Modules are unique in that, unlike scripts, they may use `import` to specify that other files must be loaded to execute correctly. To support that functionality, `<script type="module">` always acts as if the `defer` attribute is applied. The `defer` attribute is optional for loading script files but is always applied for loading module files.
The module file begins downloading as soon as the HTML parser encounters `<script type="module">` with a `src` attribute but doesn’t execute until after the document has been completely parsed. Modules are also executed in the order in which they appear in the HTML file. That means the first `<script type="module">` is always guaranteed to execute before the second, even if one module contains inline code instead of specifying `src` . For example: These three `<script>` elements execute in the order they are specified, so `module1.js` is guaranteed to execute before the inline module, and the inline module is guaranteed to execute before `module2.js` . Each module may `import` from one or more other modules, which complicates matters. That’s why modules are parsed completely first to identify all `import` statements. Each `import` statement then triggers a fetch (either from the network or from the cache), and no module is executed until all `import` resources have first been loaded and executed. All modules, both those explicitly included using `<script type="module">` and those implicitly included using `import` , are loaded and executed in order. In the preceding example, the complete loading sequence is: * Download and parse `module1.js` . * Recursively download and parse `import` resources in `module1.js` . * Parse the inline module. * Recursively download and parse `import` resources in the inline module. * Download and parse `module2.js` . * Recursively download and parse `import` resources in `module2.js` . Once loading is complete, nothing is executed until after the document has been completely parsed. After document parsing completes, the following actions happen: * Recursively execute `import` resources for `module1.js` . * Execute `module1.js` . * Recursively execute `import` resources for the inline module. * Execute the inline module. * Recursively execute `import` resources for `module2.js` . * Execute `module2.js` .
Notice that the inline module acts like the other two modules except that the code doesn’t have to be downloaded first. Otherwise, the sequence of loading `import` resources and executing modules is exactly the same. You may already be familiar with the `async` attribute on the `<script>` element. When used with scripts, `async` causes the script file to be executed as soon as the file is completely downloaded and parsed. The order of `async` scripts in the document doesn’t affect the order in which the scripts are executed, though. The scripts are always executed as soon as they finish downloading without waiting for the containing document to finish parsing. The `async` attribute can be applied to modules as well. Using `async` on `<script type="module">` causes the module to execute in a manner similar to a script. The only difference is that all `import` resources for the module are downloaded before the module itself is executed. That guarantees all resources the module needs to function will be downloaded before the module executes; you just can’t guarantee when the module will execute. Consider the following code: In this example, there are two module files loaded asynchronously. It’s not possible to tell which module will execute first simply by looking at this code. If `module1.js` finishes downloading first (including all of its `import` resources), then it will execute first. If `module2.js` finishes downloading first, then that module will execute first instead. # Loading Modules as Workers Workers, such as web workers and service workers, execute JavaScript code outside of the web page context. Creating a new worker involves creating a new instance of `Worker` (or another class) and passing in the location of a JavaScript file. The default loading mechanism is to load files as scripts, like this: To support loading modules, the developers of the HTML standard added a second argument to these constructors.
The second argument is an object with a `type` property with a default value of `"script"` . You can set `type` to `"module"` in order to load module files: This example loads `module.js` as a module instead of a script by passing a second argument with `"module"` as the `type` property’s value. (The `type` property is meant to mimic how the `type` attribute of `<script>` differentiates modules and scripts.) The second argument is supported for all worker types in the browser. Worker modules are generally the same as worker scripts, but there are a couple of exceptions. First, worker scripts are limited to being loaded from the same origin as the web page in which they are referenced, but worker modules aren’t quite as limited. Although worker modules have the same default restriction, they can also load files that have appropriate Cross-Origin Resource Sharing (CORS) headers to allow access. Second, while a worker script can use the `self.importScripts()` method to load additional scripts into the worker, `self.importScripts()` always fails on worker modules because you should use `import` instead. # Browser Module Specifier Resolution All of the examples to this point in the chapter have used a relative module specifier path such as `"./example.js"` . Browsers require module specifiers to be in one of the following formats: * Begin with `/` to resolve from the root directory * Begin with `./` to resolve from the current directory * Begin with `../` to resolve from the parent directory * URL format For example, suppose you have a module file located at ``` https://www.example.com/modules/module.js ``` that contains the following code: Each of the module specifiers in this example is valid for use in a browser, including the complete URL in the final line (you’d need to be sure `ww2.example.com` has properly configured its Cross-Origin Resource Sharing (CORS) headers to allow cross-domain loading). 
These are the only module specifier formats that browsers can resolve by default (though the not-yet-complete module loader specification will provide ways to resolve other formats). That means some normal-looking module specifiers are actually invalid in browsers and will result in an error, such as: Each of these module specifiers cannot be loaded by the browser. The two module specifiers are in an invalid format (missing the correct beginning characters) even though both will work when used as the value of `src` in a `<script>` tag. This is an intentional difference in behavior between `<script>` and `import` . ECMAScript 6 adds modules to the language as a way to package up and encapsulate functionality. Modules behave differently than scripts, as they don’t modify the global scope with their top-level variables, functions, and classes, and `this` is `undefined` . To achieve that behavior, modules are loaded using a different mode. You must export any functionality you’d like to make available to consumers of a module. Variables, functions, and classes can all be exported, and there is also one default export allowed per module. After exporting, another module can import all or some of the exported names. These names act as if defined by `let` and operate as block bindings that can’t be redeclared in the same module. Modules need not export anything if they are manipulating something in the global scope. You can actually import from such a module without introducing any bindings into the module scope. Because modules must run in a different mode, browsers introduced `<script type="module">` to signal that the source file or inline code should be executed as a module. Module files loaded with `<script type="module">` are loaded as if the `defer` attribute is applied to them. Modules are also executed in the order in which they appear in the containing document once the document is fully parsed.
## Appendix A: Smaller Changes Along with the major changes this book has already covered, ECMAScript 6 made several other changes that are smaller but still helpful in improving JavaScript. Those changes include making integers easier to use, adding new methods for calculations, a tweak to Unicode identifiers, and formalizing the `__proto__` property. I describe all of those in this appendix. ### Working with Integers JavaScript uses the IEEE 754 encoding system to represent both integers and floats, which has caused a lot of confusion over the years. The language takes great pains to ensure that developers don’t need to worry about the details of number encoding, but problems still leak through from time to time. ECMAScript 6 seeks to address this by making integers easier to identify and work with. # Identifying Integers First, ECMAScript 6 added the `Number.isInteger()` method, which can determine whether a value represents an integer in JavaScript. While JavaScript uses IEEE 754 to represent both types of numbers, floats and integers are stored differently. The `Number.isInteger()` method takes advantage of that, and when the method is called on a value, the JavaScript engine looks at the underlying representation of the value to determine whether that value is an integer. That means numbers that look like floats might actually be stored as integers and cause `Number.isInteger()` to return `true` . For example: In this code, `Number.isInteger()` returns `true` for both `25` and `25.0` even though the latter looks like a float. Simply adding a decimal point to a number doesn’t automatically make it a float in JavaScript. Since `25.0` is really just `25` , it is stored as an integer. The number `25.1` , however, is stored as a float because there is a fraction value. # Safe Integers IEEE 754 can only accurately represent integers between -2^53 and 2^53, and outside this “safe” range, binary representations end up reused for multiple numeric values.
That means JavaScript can only safely represent integers within the IEEE 754 range before problems become apparent. For instance, consider this code: This example doesn’t contain a typo, yet two different numbers are represented by the same JavaScript integer. The effect becomes more prevalent the further the value falls outside the safe range. ECMAScript 6 introduced the `Number.isSafeInteger()` method to better identify integers that the language can accurately represent. It also added the `Number.MAX_SAFE_INTEGER` and `Number.MIN_SAFE_INTEGER` properties to represent the upper and lower bounds of the integer range, respectively. The `Number.isSafeInteger()` method ensures that a value is an integer and falls within the safe range of integer values, as in this example: The number `inside` is the largest safe integer, so it returns `true` for both the `Number.isInteger()` and `Number.isSafeInteger()` methods. The number `outside` is the first questionable integer value, and it isn’t considered safe even though it’s still an integer. Most of the time, you only want to deal with safe integers when doing integer arithmetic or comparisons in JavaScript, so using `Number.isSafeInteger()` as part of input validation is a good idea. ### New Math Methods The new emphasis on gaming and graphics that led ECMAScript 6 to include typed arrays in JavaScript also led to the realization that a JavaScript engine could do many mathematical calculations more efficiently. But optimization strategies like asm.js, which works on a subset of JavaScript to improve performance, need more information to perform calculations in the fastest way possible. For instance, knowing whether the numbers should be treated as 32-bit integers or as 64-bit floats is important for hardware-based operations, which are much faster than software-based operations. As a result, ECMAScript 6 added several methods to the `Math` object to improve the speed of common mathematical calculations.
Improving the speed of common calculations also improves the overall speed of applications that perform many calculations, such as graphics programs. The new methods are listed below: * `Math.acosh(x)` Returns the inverse hyperbolic cosine of `x` . * `Math.asinh(x)` Returns the inverse hyperbolic sine of `x` . * `Math.atanh(x)` Returns the inverse hyperbolic tangent of `x` . * `Math.cbrt(x)` Returns the cube root of `x` . * `Math.clz32(x)` Returns the number of leading zero bits in the 32-bit integer representation of `x` . * `Math.cosh(x)` Returns the hyperbolic cosine of `x` . * `Math.expm1(x)` Returns the result of subtracting 1 from the exponential function of `x` . * `Math.fround(x)` Returns the nearest single-precision float of `x` . * `Math.hypot(...values)` Returns the square root of the sum of the squares of each argument. * `Math.imul(x, y)` Returns the result of performing true 32-bit multiplication of the two arguments. * `Math.log1p(x)` Returns the natural logarithm of `1 + x` . * `Math.log10(x)` Returns the base 10 logarithm of `x` . * `Math.log2(x)` Returns the base 2 logarithm of `x` . * `Math.sign(x)` Returns -1 if `x` is negative, 0 if `x` is +0 or -0, or 1 if `x` is positive. * `Math.sinh(x)` Returns the hyperbolic sine of `x` . * `Math.tanh(x)` Returns the hyperbolic tangent of `x` . * `Math.trunc(x)` Removes fraction digits from a float and returns an integer. It’s beyond the scope of this book to explain each new method and what it does in detail. But if your application needs to do a reasonably common calculation, be sure to check the new `Math` methods before implementing it yourself. ### Unicode Identifiers ECMAScript 6 offers better Unicode support than previous versions of JavaScript, and it also changes what characters may be used as identifiers. In ECMAScript 5, it was already possible to use Unicode escape sequences for identifiers.
For example: After the `var` statement in this example, you can use either `\u0061` or `a` to access the variable. In ECMAScript 6, you can also use Unicode code point escape sequences as identifiers, like this: This example just replaces `\u0061` with its code point equivalent. Otherwise, it does exactly the same thing as the previous example. Additionally, ECMAScript 6 formally specifies valid identifiers in terms of Unicode Standard Annex #31: Unicode Identifier and Pattern Syntax, which gives the following rules: * The first character must be `$` , `_` , or any Unicode symbol with a derived core property of `ID_Start` . * Each subsequent character must be `$` , `_` , `\u200c` (a zero-width non-joiner), `\u200d` (a zero-width joiner), or any Unicode symbol with a derived core property of `ID_Continue` . The `ID_Start` and `ID_Continue` derived core properties are defined in Unicode Identifier and Pattern Syntax as a way to identify symbols that are appropriate for use in identifiers such as variables and domain names. The specification is not specific to JavaScript. ### Formalizing the `__proto__` Property Even before ECMAScript 5 was finished, several JavaScript engines already implemented a custom property called `__proto__` that could be used to both get and set the `[[Prototype]]` property. Effectively, `__proto__` was an early precursor to both the `Object.getPrototypeOf()` and `Object.setPrototypeOf()` methods. Expecting all JavaScript engines to remove this property is unrealistic (there were popular JavaScript libraries making use of `__proto__` ), so ECMAScript 6 also formalized the `__proto__` behavior. But the formalization appears in Appendix B of ECMA-262 along with this warning: These features are not considered part of the core ECMAScript language. Programmers should not use or assume the existence of these features and behaviours when writing new ECMAScript code.
ECMAScript implementations are discouraged from implementing these features unless the implementation is part of a web browser or is required to run the same legacy ECMAScript code that web browsers encounter. The ECMAScript specification recommends using `Object.getPrototypeOf()` and `Object.setPrototypeOf()` instead because `__proto__` has the following characteristics: * You can only specify `__proto__` once in an object literal. If you specify two `__proto__` properties, then an error is thrown. This is the only object literal property with that restriction. * The computed form `["__proto__"]` acts like a regular property and doesn’t set or return the current object’s prototype. All rules related to object literal properties apply in this form, as opposed to the non-computed form, which has exceptions. While you should avoid using the `__proto__` property, the way the specification defined it is interesting. In ECMAScript 6 engines, `Object.prototype.__proto__` is defined as an accessor property whose `get` method calls `Object.getPrototypeOf()` and whose `set` method calls the `Object.setPrototypeOf()` method. This leaves no real difference between using `__proto__` and `Object.getPrototypeOf()` / `Object.setPrototypeOf()` , except that `__proto__` allows you to set the prototype of an object literal directly. Here’s how that works: Instead of calling `Object.create()` to make the `friend` object, this example creates a standard object literal that assigns a value to the `__proto__` property. When creating an object with the `Object.create()` method, on the other hand, you’d have to specify full property descriptors for any additional object properties. ## Appendix B: Understanding ECMAScript 7 (2016) The development of ECMAScript 6 took about four years, and after that, TC-39 decided that such a long development process was unsustainable. Instead, they moved to a yearly release cycle to ensure new language features would make it into development sooner. More frequent releases mean that each new edition of ECMAScript should have fewer new features than ECMAScript 6.
To signify this change, new versions of the specification no longer prominently feature the edition number, and instead refer to the year in which the specification was published. As a result, ECMAScript 6 is also known as ECMAScript 2015, and ECMAScript 7 is formally known as ECMAScript 2016. TC-39 expects to use the year-based naming system for all future ECMAScript editions. ECMAScript 2016 was finalized in March 2016 and contained only three additions to the language: a new mathematical operator, a new array method, and a new syntax error. All three are covered in this appendix. ### The Exponentiation Operator The only change to JavaScript syntax introduced in ECMAScript 2016 is the exponentiation operator, which is a mathematical operation that applies an exponent to a base. JavaScript already had the `Math.pow()` method to perform exponentiation, but JavaScript was also one of the only languages that required a method rather than a formal operator. (And some developers argue an operator is easier to read and reason about.) The exponentiation operator is two asterisks ( `**` ) where the left operand is the base and the right operand is the exponent. For example: This example calculates 5^2, which is equal to 25. You can still use `Math.pow()` to achieve the same result. # Order of Operations The exponentiation operator has the highest precedence of all binary operators in JavaScript (unary operators have higher precedence than `**` ). That means it is applied first to any compound operation, as in this example: The calculation of 5^2 happens first. The resulting value is then multiplied by 2 for a final result of 50. # Operand Restriction The exponentiation operator does have a somewhat unusual restriction that isn’t present for other operators. The left side of an exponentiation operation cannot be a unary expression other than `++` or `--` . For example, this is invalid syntax: The `-5` in this example is a syntax error because the order of operations is ambiguous.
Does the `-` apply just to `5` or the result of the `5 ** 2` expression? Disallowing unary expressions on the left side of the exponentiation operator eliminates that ambiguity. In order to clearly specify intent, you need to include parentheses either around `-5` or around `5 ** 2` as follows: If you put the parentheses around the expression, the `-` is applied to the whole thing. When the parentheses surround `-5`, it’s clear that you want to raise -5 to the second power.

You don’t need parentheses to use `++` and `--` on the left side of the exponentiation operator because both operators have clearly-defined behavior on their operands. A prefix `++` or `--` changes the operand before any other operations take place, and the postfix versions don’t apply any changes until after the entire expression has been evaluated. Both use cases are safe on the left side of this operator, as this code demonstrates: In this example, `num1` is incremented before the exponentiation operator is applied, so `num1` becomes 3 and the result of the operation is 9. For `num2`, the value remains 2 for the exponentiation operation and then is decremented to 1.

### The Array.prototype.includes() Method

You might recall that ECMAScript 6 added `String.prototype.includes()` in order to check whether certain substrings exist within a given string. Originally, ECMAScript 6 was also going to introduce an `Array.prototype.includes()` method to continue the trend of treating strings and arrays similarly. But the specification for `Array.prototype.includes()` was incomplete by the ECMAScript 6 deadline, and so it ended up in ECMAScript 2016 instead.

#### How to Use Array.prototype.includes()

The `includes()` method accepts two arguments: the value to search for and an optional index from which to start the search. When the second argument is provided, `includes()` starts the match from that index. (The default starting index is `0`.) The return value is `true` if the value is found inside the array and `false` if not.
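The listings described above were also dropped in conversion; a reconstruction matching the text (the names `result1`, `result2`, `num1`, and `num2` are illustrative) could read:

```javascript
// Parentheses make the intent explicit.
let result1 = (-5) ** 2;     // raise -5 to the second power
let result2 = -(5 ** 2);     // negate the result of 5 ** 2
console.log(result1);        // 25
console.log(result2);        // -25

// Prefix ++ changes its operand before the exponentiation runs...
let num1 = 2;
console.log(++num1 ** 2);    // 9 (num1 is 3 by the time ** applies)

// ...while postfix -- applies only after the whole expression is evaluated.
let num2 = 2;
console.log(num2-- ** 2);    // 4 (num2 is still 2 during the **)
console.log(num2);           // 1
```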
For example: Here, calling `values.includes()` returns `true` for the value of `1` and `false` for the value of `0` because `0` isn’t in the array. When the second argument is used to start the search at index 2 (which contains the value `3`), the `values.includes()` method returns `false` because the number `1` is not found between index 2 and the end of the array.

#### Value Comparison

The value comparison performed by the `includes()` method uses the `===` operator with one exception: `NaN` is considered equal to `NaN` even though `NaN === NaN` evaluates to `false`. This is different than the behavior of the `indexOf()` method, which strictly uses `===` for comparison. To see the difference, consider this code: The `values.indexOf()` method returns `-1` for `NaN` even though `NaN` is contained in the `values` array. On the other hand, `values.includes()` returns `true` for `NaN` because it uses a different value comparison operator.

Another quirk of this implementation is that `+0` and `-0` are considered to be equal. In this case, the behavior of `indexOf()` and `includes()` is the same: Here, both `indexOf()` and `includes()` find `+0` when `-0` is passed because the two values are considered equal. Note that this is different than the behavior of the `Object.is()` method, which considers `+0` and `-0` to be different values.

### Change to Function-Scoped Strict Mode

When strict mode was introduced in ECMAScript 5, the language was quite a bit simpler than it became in ECMAScript 6. Despite that, ECMAScript 6 still allowed you to specify strict mode using the `"use strict"` directive either in the global scope (which would make all code run in strict mode) or in a function scope (so only the function would run in strict mode). The latter ended up being a problem in ECMAScript 6 due to the more complex ways that parameters could be defined, specifically with destructuring and default parameter values.
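The `includes()` listings referenced above were lost in conversion; a reconstruction consistent with the text (the arrays `values`, `mixed`, and `zeros` are illustrative) might be:

```javascript
let values = [1, 2, 3];
console.log(values.includes(1));      // true
console.log(values.includes(0));      // false
// start the search at index 2, which contains the value 3
console.log(values.includes(1, 2));   // false

// indexOf() uses strict === throughout; includes() special-cases NaN.
let mixed = [1, NaN, 2];
console.log(mixed.indexOf(NaN));      // -1
console.log(mixed.includes(NaN));     // true

// Both methods treat +0 and -0 as equal; Object.is() does not.
let zeros = [+0];
console.log(zeros.indexOf(-0));       // 0
console.log(zeros.includes(-0));      // true
console.log(Object.is(+0, -0));       // false
```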
To understand the problem, consider the following code: Here, the named parameter `first` is assigned a default value of `this`. What would you expect the value of `first` to be? The ECMAScript 6 specification instructed JavaScript engines to treat the parameters as being run in strict mode in this case, so `this` should be equal to `undefined`. However, implementing parameters running in strict mode when `"use strict"` is present inside the function turned out to be quite difficult because parameter default values can be functions as well. This difficulty led to most JavaScript engines not implementing this feature (so `this` would be equal to the global object).

As a result of the implementation difficulty, ECMAScript 2016 makes it illegal to have a `"use strict"` directive inside of a function whose parameters are either destructured or have default values. Only simple parameter lists, those that don’t contain destructuring or default values, are allowed when `"use strict"` is present in the body of a function. Here are some examples: You can still use `"use strict"` with simple parameter lists, which is why `okay()` works as you would expect (the same as it would in ECMAScript 5). The `notOkay1()` function is a syntax error because you can no longer use `"use strict"` in functions with default parameter values. Similarly, the `notOkay2()` function is a syntax error because you can’t use `"use strict"` in a function with destructured parameters.

Overall, this change removes both a point of confusion for JavaScript developers and an implementation problem for JavaScript engines.
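The `okay`/`notOkay` listings referenced above were lost in conversion; a reconstruction consistent with the text (the invalid forms are shown as comments, since they are syntax errors) would be:

```javascript
// The problem case: a parameter default value that references `this`.
// function doSomething(first = this) { "use strict"; return first; }

// Allowed: "use strict" with a simple parameter list.
function okay(first, second) {
  "use strict";
  return first + second;
}
console.log(okay(1, 2)); // 3

// Syntax error in ES2016: default parameter value plus "use strict".
// function notOkay1(first, second = first) {
//   "use strict";
//   return first + second;
// }

// Syntax error in ES2016: destructured parameter plus "use strict".
// function notOkay2({ first, second }) {
//   "use strict";
//   return first + second;
// }
```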
# The GETOPTK package

<NAME> 6th of June 2011

###### Abstract

The _getoptk_ package eases the definition of macros accepting optional arguments in the same style as \hrule or \hbox. It is meant to be used with _plain TeX_.

## 1 Introduction

A flexible way to pass optional arguments to a procedure is to rely on _dictionaries_ of optional arguments, that is, a set of bindings between formal names of arguments and their values. Some TeX primitives, like \hrule or \hbox, use such an interface style. We call this style the _keyword_ interface style. There is no facility in TeX to define new macros using the _keyword_ interface style. The _getoptk_ package provides such a service.

## 2 Quick guide

In order to define a macro using the _keyword_ interface style, we first have to set up a _behaviour dictionary_ binding _keywords_ and _behaviours_. A keyword introduces an optional argument and a behaviour describes its effect; we will show an example of this shortly. In the definition of the macro itself, we first select the behaviour dictionary we want to use and call \getoptk, the control sequence responsible for detecting optional arguments. In this call, we need to provide a _callback_ as argument to \getoptk; this callback is a macro that takes control of the execution after \getoptk has completed its task. It will be called with an argument derived from the list of optional arguments.

For explanatory purposes, let us assume that we want to define a macro \begindisplay using the _keyword_ interface style and accepting the following optional arguments:

**ragged**: Fill, but do not adjust the right margin (only left-justify).

**literal**: Display block with literal font (usually fixed-width). Useful for source code or simple tabbed or spaced text.

**file** (_file name_): The file whose name, enclosed in curly braces, follows the _file_ keyword is read and displayed using the selected display type.

**offset** (_dimen_): Use _dimen_ as indentation for the display.
We first create a fresh new behaviour dictionary:

\newgetoptkdictionary{display}

and fill it with behaviours:

\defgetoptkflag{ragged}{\raggedright}
\defgetoptkflag{literal}{\let\displayfont\literalfont}
\defgetoptktokos{file}{\input#1}
\defgetoptkdimen{dimen}{\displayindent=#1\relax}

Besides registering the behaviours in the dictionary _display_, these commands also bind the behaviours to the following control sequences:

\getoptk@behaviour@display@ragged
\getoptk@behaviour@display@literal
\getoptk@behaviour@display@file
\getoptk@behaviour@display@dimen

The control sequences created with \defgetoptkflag must not take an argument, while those created by \defgetoptktokos or \defgetoptkdimen take one. The definition of \begindisplay is

\def\begindisplay{%
  \getoptkdictionary{display}%
  \getoptk\display@M
}

The control sequence \getoptk is such that the input text

\begindisplay file {chapter1} literal offset 20pt

is _replaced_ by

\display@M{%
  \getoptk@behaviour@display@file{chapter1}%
  \getoptk@behaviour@display@literal
  \getoptk@behaviour@display@dimen{20pt}%
}

so that \display@M can do its job and trigger the behaviours at the appropriate time.

and the predicate \ifgetoptkbracket is bound to \iftrue. If no such argument is found, then the empty argument is supplied to ⟨_behaviour_⟩ when behaviours are triggered and the predicate \ifgetoptkbracket is bound to \iffalse.

## 4 Licence

The _getoptk_ software is copyright (c) 2011 <NAME>. The _getoptk_ software is distributed under the terms of the CeCILL-B licence, version 1.0. See the files COPYING and COPYING-FR in the distribution.
diff_match_patch
===

A translation of Google's public-domain [diff_match_patch](https://github.com/google/diff-match-patch) code into pure Elixir. For information about the original Google project and its application, see that repository's [wiki pages](https://github.com/google/diff-match-patch/wiki).

References from the Google project:

* diff: [An O(ND) Difference Algorithm and Its Variations (Myers, 1986)](http://www.xmailserver.org/diff2.pdf)
* match: [Fast Text Searching with Errors (Wu and Manber, 1991)](http://www.club.cc.cmu.edu/~ajo/docs/agrep.pdf)

Prior art on the BEAM:

* diffy: [A Diff, Match and Patch implementation for Erlang](https://github.com/zotonic/diffy)

Installation
---

If [available in Hex](https://hex.pm/docs/publish), the package can be installed by adding `diff_match_patch` to your list of dependencies in `mix.exs`:

```
def deps do
  [
    {:diff_match_patch, "~> 0.2.0"}
  ]
end
```

Documentation can be generated with [ExDoc](https://github.com/elixir-lang/ex_doc) and published on [HexDocs](https://hexdocs.pm). Once published, the docs can be found at <https://hexdocs.pm/cursor>.

Usage
---

1. Find a **match** for a pattern inside a text using fuzzy search. The `match_threshold` option determines the exactness required:

```
loc = Dmp.Match.main(
  "I am the very model of a modern major general.",
  " that berry ",
  5,
  match_threshold: 0.7)
```

The value of `loc` is 4, finding the closest match at the string starting " the very". If no close match is found, the `loc` returned is -1.

2. Create a **diff** between two texts:

```
diffs = Dmp.Diff.main(
  "The quick brown fox jumps over the lazy dog.",
  "That quick brown fox jumped over a lazy dog.")
```

3. Create a **patch** from the original text and the diff we just created. The patch can also be created directly from the two texts.
These two patches are equal:

```
patches1 = Dmp.Patch.make(
  "The quick brown fox jumps over the lazy dog.",
  diffs)

patches2 = Dmp.Patch.make(
  "The quick brown fox jumps over the lazy dog.",
  "That quick brown fox jumped over a lazy dog.")
```

4. Apply the patch to a third text:

```
{new_text, matched} = Patch.apply(patches,
  "The quick red rabbit jumps over the tired tiger.")
IO.puts(new_text)
```

The `new_text` prints out as "That quick red rabbit jumped over a tired tiger." The `matched` value is a list of booleans showing whether matches were made between the (expanded) list of patches and the third text.

[API Reference](api-reference.html) Dmp.Cursor === A container for Elixir lists, that can be used to iterate forward and backward, with a focused "current" item in the list, and "prev" and "next" lists of items that come before and after the current item. In Elm this has been described as a "Zipper". [Link to this section](#summary) Summary === [Types](#types) --- [init_option()](#t:init_option/0) [init_options()](#t:init_options/0) Use `position: 0` for example, to set the initial position of the Cursor to the first item. [position_value()](#t:position_value/0) A value used to set the Cursor position. [t()](#t:t/0) [Functions](#functions) --- [after_last?(cursor)](#after_last?/1) Returns `true` if the Cursor is positioned after the last item. [before_first?(cursor)](#before_first?/1) Returns `true` if the Cursor is positioned before the first item. [count(cursor)](#count/1) Returns the total number of items in the Cursor. [delete(cursor, count \\ 1)](#delete/2) Remove items at the Cursor's current position, leaving the previous items alone. [delete_before(cursor, count \\ 1)](#delete_before/2) Remove items before the Cursor's current position, leaving the current and next items alone. [empty?(cursor)](#empty?/1) Returns `true` if there are no items in the Cursor.
[find_back!(c, item)](#find_back!/2) Moves the position of the Cursor back through the "prev" list to the given item. [find_back(c, item)](#find_back/2) Moves the position of the Cursor back through the "prev" list until the given item is found. Returns `nil` if the item cannot be found. [find_forward(c, item)](#find_forward/2) Moves the position of the Cursor forward through the "next" list until the given item is found. Returns `nil` if the item cannot be found. [from_list(items, opts \\ [])](#from_list/2) Create a Cursor from a list of items. [from_split(prev, next, opts \\ [])](#from_split/3) Create a Cursor from two lists. [get(cursor)](#get/1) Return a 3-tuple of the previous, current, and next items relative to the Cursor's current position. [has_next?(cursor)](#has_next?/1) Returns `false` if the Cursor is positioned at or after the last item. [has_previous?(cursor)](#has_previous?/1) Returns `false` if the Cursor is positioned at or before the first item. [insert(c, list)](#insert/2) Insert items at the Cursor's current position, leaving the previous items alone. [insert_at_head(c, items)](#insert_at_head/2) Insert items at the Cursor's head position, leaving the current position pointer alone. [insert_before(c, items)](#insert_before/2) Insert items before the Cursor's current position, leaving the current and next items alone. [move_back(cursor, count \\ 1)](#move_back/2) Moves the position of the Cursor back a number of steps. [move_first(c)](#move_first/1) Moves the position of the Cursor to the first item. An alias of `Cursor.move_to(c, 0)`. [move_forward(cursor, count \\ 1)](#move_forward/2) Moves the position of the Cursor forward a number of steps. [move_to(c, pos)](#move_to/2) Changes the current position of the Cursor. [new()](#new/0) Create a Cursor containing no items. [position(cursor)](#position/1) Returns the current position of the Cursor. [reset(c)](#reset/1) Resets the position of the Cursor to before the first item.
An alias for `Cursor.move_to(c, -1)`. [to_list(cursor)](#to_list/1) Extract the list from a Cursor. [Link to this section](#types) Types === [Link to this section](#functions) Functions === Dmp.DebugUtils === Utilities for debugging bitarrays. [Link to this section](#summary) Summary === [Functions](#functions) --- [bitmap_to_list(value, padding \\ 0)](#bitmap_to_list/2) Returns a list of "codepoints" (single-character strings) showing the base-2 value of `value`. [debug_alphabet(s, pattern)](#debug_alphabet/2) Formats an alphabet bitarray into a list of lines, showing binary values. [debug_rd(rd, text, pattern, d, start \\ 0, best_loc \\ -1)](#debug_rd/6) Formats the `rd` bitarray into a list of lines, showing binary values. [Link to this section](#functions) Functions === Dmp.Diff === Compare two blocks of plain text and efficiently return a list of differences. [Link to this section](#summary) Summary === [Types](#types) --- [difflist()](#t:difflist/0) A list of diff operations, representing the difference between two text versions. [expiry()](#t:expiry/0) [first_pass_acc()](#t:first_pass_acc/0) [half_match_result()](#t:half_match_result/0) The result of a successful `Diff.half_match/3` call. [op()](#t:op/0) A diff's operation type. The operation `:nil` is used internally to indicate a nil value for the diff. [options()](#t:options/0) [t()](#t:t/0) The diff tuple, consisting of two elements: the operation and the associated text. [Functions](#functions) --- [bisect(text1, text2, deadline)](#bisect/3) Find the "middle snake" of a diff, split the problem in two and return the recursively constructed diff. [bisect_split(text1, text2, x, y, deadline)](#bisect_split/5) Given the location of the "middle snake", split the diff in two parts and recurse. [chars_to_lines(diffs, line_array)](#chars_to_lines/2) Rehydrate the text in a diff from a string of line hashes to real lines of text. 
[cleanup_efficiency(diffs, diff_edit_cost)](#cleanup_efficiency/2) Reduce the number of edits in a diff by eliminating operationally trivial equalities. [cleanup_merge(diffs)](#cleanup_merge/1) Reorder and merge like edit sections in a diff, merging equalities. [cleanup_semantic(diffs)](#cleanup_semantic/1) Reduce the number of edits in a diff by eliminating semantically trivial equalities. [cleanup_semantic_lossless(diffs)](#cleanup_semantic_lossless/1) Look for single edits in a diff that are surrounded on both sides by equalities which can be shifted sideways to align the edit to a word boundary. [combine_previous_inequalities(diffs, text, count_delete, count_insert, text_delete, text_insert)](#combine_previous_inequalities/6) [common_overlap(text1, text2)](#common_overlap/2) Determine if the suffix of one string is the prefix of another. [common_prefix(text1, text2)](#common_prefix/2) Determine the common prefix of two strings. [common_suffix(text1, text2)](#common_suffix/2) Determine the common suffix of two strings. [compute(text1, text2, check_lines, deadline)](#compute/4) Find the differences between two texts. [factor_out_prefixes(diffs, text_delete, text_insert)](#factor_out_prefixes/3) [factor_out_suffixes(diffs, text, text_delete, text_insert)](#factor_out_suffixes/4) [from_delta(text1, delta)](#from_delta/2) Given the original `text1`, and an encoded string which describes the operations required to transform `text1` into `text2`, compute the full diff. [half_match(text1, text2, deadline)](#half_match/3) Do the two texts share a substring which is at least half the length of the longer text? [levenshtein(diffs)](#levenshtein/1) Compute the Levenshtein distance of a diff--the number of inserted, deleted or substituted characters. [line_mode(text1, text2, deadline)](#line_mode/3) Do a quick line-level diff on both strings, then rediff the parts for greater accuracy. [lines_to_chars(text1, text2)](#lines_to_chars/2) Split two texts into a list of strings. 
[main(text1, text2, check_lines \\ true, opts \\ [])](#main/4) Find the differences between two texts. [main_(text1, text2, check_lines, opts)](#main_/4) Skips validation of options. Used internally by `Patch.apply`. [pretty_html(diffs)](#pretty_html/1) Generate a pretty HTML report from a difflist. [semantic_score(one, two)](#semantic_score/2) Given two strings, compute a score representing whether the internal boundary falls on logical boundaries. [sorted_half_match(hm, arg2)](#sorted_half_match/2) [text1(diffs)](#text1/1) Compute and return the source text of a diff (all equalities and deletions). [text2(diffs)](#text2/1) Compute and return the destination text of a diff (all equalities and insertions). [to_delta(diffs)](#to_delta/1) Crush a diff into an encoded string which describes the operations required to transform `text1` into `text2`. [undiff(arg1)](#undiff/1) Returns the diff tuple, or a "nil" pseudo-diff (with op `:nil` and empty text). [x_index(diffs, loc)](#x_index/2) Given `loc`, a location in `text1`, compute and return the equivalent location in `text2`. [Link to this section](#types) Types === [Link to this section](#functions) Functions === Dmp.Match === Given a search string, find its best fuzzy match in a block of plain text. Weighted for both accuracy and location. [Link to this section](#summary) Summary === [Types](#types) --- [alpha()](#t:alpha/0) A bitarray encoding the locations of characters within the search pattern. [bitap_array()](#t:bitap_array/0) A bitarray encoding possible match sequences of the search pattern within the text. [options()](#t:options/0) [update_acc()](#t:update_acc/0) Accumulator for [`bitap_update/3`](#bitap_update/3). A tuple with these elements [update_constants()](#t:update_constants/0) Constants needed for [`bitap_update/3`](#bitap_update/3). A tuple with these elements [Functions](#functions) --- [alphabet(pattern)](#alphabet/1) Initialise the alphabet for the Bitap algorithm. 
[bitap(text, pattern, loc, match_threshold \\ 0.5, match_distance \\ 1000, more_results \\ false)](#bitap/6) Search for the best instance of `pattern` in `text` near `loc`, with errors, using the Bitap algorithm. [bitap_score(e, x, loc, pattern_length, match_distance)](#bitap_score/5) Compute and return a weighted score for a match with `e` errors and `x` location. [bitap_update(j, acc, constants)](#bitap_update/3) Perform the bitap algorithm and calculate error score if a match is found. [character_mask(s, ch)](#character_mask/2) Look up a character in the alphabet and return its encoded bitmap. [main(text, pattern, loc, opts \\ [])](#main/4) Locate the best instance of `pattern` in `text` near `loc`. [Link to this section](#types) Types === [Link to this section](#functions) Functions === Dmp.Options === Adjustable parameters that control algorithm efficiency and accuracy. * `:diff_timeout` - Number of seconds to map a diff before giving up (0 for infinity). * `:diff_edit_cost` - Cost of an empty edit operation in terms of edit characters. * `:match_max_bits` - The number of bits in an integer (default is expected 32). This parameter controls the lengths of patterns used in matching and patch splitting. Set `:match_max_bits` to 0 to disable patch splitting. To avoid long patches in certain pathological cases, use 32. Elixir supports arbitrarily large integers, so we allow values of 64 and 128, as well as smaller values. Multiple short patches (using native ints, `:match_max_bits` of 32 or less) should be much faster than long ones. * `:match_threshold` - At what point is no match declared (0.0 = perfection, 1.0 = very loose). * `:match_distance` - How far to search for a match (0 = exact location, 1000+ = broad match). A match this many characters away from the expected location will add 1.0 to the score (0.0 is a perfect match). 
* `:patch_delete_threshold` - When deleting a large block of text (over ~64 characters), how close do the contents have to be to match the expected contents. (0.0 = perfection, 1.0 = very loose). Note that `:match_threshold` controls how closely the end points of a delete need to match. * `:patch_margin` - Chunk size for context length. 4 is a good value. [Link to this section](#summary) Summary === [Types](#types) --- [option()](#t:option/0) [t()](#t:t/0) [Functions](#functions) --- [default()](#default/0) Returns an `Options` struct with good default values [valid_options!(opts)](#valid_options!/1) Validates an `Options` list, raising an [`ArgumentError`](https://hexdocs.pm/elixir/ArgumentError.html) if it contains invalid values. [Link to this section](#types) Types === [Link to this section](#functions) Functions === Dmp.Patch === Apply a list of patches onto plain text. Use best effort to apply patch even when the underlying text doesn't match. [Link to this section](#summary) Summary === [Types](#types) --- [apply_loop_acc()](#t:apply_loop_acc/0) [options()](#t:options/0) [patchlist()](#t:patchlist/0) [t()](#t:t/0) [Functions](#functions) --- [add_context(patch, text, patch_margin, match_max_bits \\ 32)](#add_context/4) Increase the context until it is unique, but don't let the pattern expand beyond match_max_bits. [add_diff_to_subpatch(first_diff, rest, patch, acc)](#add_diff_to_subpatch/4) [add_other_diff_to_subpatch(arg1, rest, patch, arg2)](#add_other_diff_to_subpatch/4) [add_padding(patches, patch_margin)](#add_padding/2) Add some padding on text start and end so that edges can match something. [apply(list, text)](#apply/2) Merge a set of patches onto the text. Return a patched text, as well as an array of true/false values indicating which patches were applied. 
[apply(patches, text, opts \\ [])](#apply/3) [apply_match_diff(arg, acc_text, index1, diffs, start_loc)](#apply_match_diff/5) [bad_match?(diffs, text1, opts)](#bad_match?/3) [from_diffs(diffs, opts \\ [])](#from_diffs/2) Compute a list of patches to turn `text1` into `text2`. `text1` will be derived from the provided diffs. [from_text(text)](#from_text/1) Parse a textual representation of patches and return a patchlist. [from_texts_and_diffs(text1, text2, diffs, opts \\ [])](#from_texts_and_diffs/4) Deprecated [make(a, b, opts \\ [])](#make/3) This function can be called two ways. In either case the first argument, `a`, is the original text (`text1`). [split_max(patches, patch_margin, match_max_bits \\ 32)](#split_max/3) Look through the patches and break up any which are longer than the maximum limit of the match algorithm. [subpatch_loop(bigpatch_diffs, patch, acc)](#subpatch_loop/3) [to_text(patches)](#to_text/1) Return the textual representation of a patchlist. [Link to this section](#types) Types === [Link to this section](#functions) Functions === Dmp.StringUtils === Java.String- and Javascript-compatible functions missing in Elixir's [`String`](https://hexdocs.pm/elixir/String.html) module. [Link to this section](#summary) Summary === [Functions](#functions) --- [index_of(s, str)](#index_of/2) Returns the index within this string of the first occurrence of the specified substring, or -1 if there is no such occurrence. [index_of(s, str, from_index)](#index_of/3) Returns the index within this string of the first occurrence of the specified substring, starting the search at the specified index, or -1 if there is no such occurrence. [last_index_of(s, str)](#last_index_of/2) Returns the index within this string of the last occurrence of the specified substring, or -1 if there is no such occurrence.
[last_index_of(s, str, begin_index)](#last_index_of/3) Returns the index within this string of the last occurrence of the specified substring, starting the search at the specified index, or -1 if there is no such occurrence. [substring(s, begin_index)](#substring/2) Returns a new string that is a substring of this string. [substring(s, begin_index, end_index)](#substring/3) Returns a new string that is a substring of this string. [unescape_for_encode_uri_compatability(str)](#unescape_for_encode_uri_compatability/1) Unescape selected chars for compatibility with JavaScript's `encodeURI`. [uri_encode(str)](#uri_encode/1) A URI encoding, but with spaces and asterisks left as is, for use with diffs. [Link to this section](#functions) Functions ===
mypy 1.6.0 documentation

First steps

* [Getting started](index.html#document-getting_started)
* [Type hints cheat sheet](index.html#document-cheat_sheet_py3)
* [Using mypy with an existing codebase](index.html#document-existing_code)

Type system reference

* [Built-in types](index.html#document-builtin_types)
* [Type inference and type annotations](index.html#document-type_inference_and_annotations)
* [Kinds of types](index.html#document-kinds_of_types)
* [Class basics](index.html#document-class_basics)
* [Annotation issues at runtime](index.html#document-runtime_troubles)
* [Protocols and structural subtyping](index.html#document-protocols)
* [Dynamically typed code](index.html#document-dynamic_typing)
* [Type narrowing](index.html#document-type_narrowing)
* [Duck type compatibility](index.html#document-duck_type_compatibility)
* [Stub files](index.html#document-stubs)
* [Generics](index.html#document-generics)
* [More types](index.html#document-more_types)
* [Literal types and Enums](index.html#document-literal_types)
* [TypedDict](index.html#document-typed_dict)
* [Final names, methods and classes](index.html#document-final_attrs)
* [Metaclasses](index.html#document-metaclasses)

Configuring and running mypy

* [Running mypy and managing imports](index.html#document-running_mypy)
* [The mypy command line](index.html#document-command_line)
* [The mypy configuration file](index.html#document-config_file)
* [Inline configuration](index.html#document-inline_config)
* [Mypy daemon (mypy server)](index.html#document-mypy_daemon)
* [Using installed packages](index.html#document-installed_packages)
* [Extending and integrating mypy](index.html#document-extending_mypy)
* [Automatic stub generation (stubgen)](index.html#document-stubgen)
* [Automatic stub testing (stubtest)](index.html#document-stubtest)

Miscellaneous

* [Common issues and solutions](index.html#document-common_issues)
* [Supported Python features](index.html#document-supported_python_features)
* [Error codes](index.html#document-error_codes)
* [Error codes enabled by default](index.html#document-error_code_list)
* [Error codes for optional checks](index.html#document-error_code_list2)
* [Additional features](index.html#document-additional_features)
* [Frequently Asked Questions](index.html#document-faq)

Project Links

* [GitHub](https://github.com/python/mypy)
* [Website](https://mypy-lang.org/)

Welcome to mypy documentation!
===

Mypy is a static type checker for Python. Type checkers help ensure that you’re using variables and functions in your code correctly. With mypy, add type hints ([**PEP 484**](https://peps.python.org/pep-0484/)) to your Python programs, and mypy will warn you when you use those types incorrectly.

Python is a dynamic language, so usually you’ll only see errors in your code when you attempt to run it. Mypy is a *static* checker, so it finds bugs in your programs without even running them!

Here is a small example to whet your appetite:

```
number = input("What is your favourite number?")
print("It is", number + 1)  # error: Unsupported operand types for + ("str" and "int")
```

Adding type hints for mypy does not interfere with the way your program would otherwise run. Think of type hints as similar to comments! You can always use the Python interpreter to run your code, even if mypy reports errors.

Mypy is designed with gradual typing in mind. This means you can add type hints to your code base slowly and that you can always fall back to dynamic typing when static typing is not convenient.
Mypy has a powerful and easy-to-use type system, supporting features such as type inference, generics, callable types, tuple types, union types, structural subtyping and more. Using mypy will make your programs easier to understand, debug, and maintain.

Note: Although mypy is production ready, there may be occasional changes that break backward compatibility. The mypy development team tries to minimize the impact of changes to user code. In case of a major breaking change, mypy’s major version will be bumped.

Contents
---

### Getting started

This chapter introduces some core concepts of mypy, including function annotations, the [`typing`](https://docs.python.org/3/library/typing.html#module-typing) module, stub files, and more. If you’re looking for a quick intro, see the [mypy cheatsheet](index.html#cheat-sheet-py3). If you’re unfamiliar with the concepts of static and dynamic type checking, be sure to read this chapter carefully, as the rest of the documentation may not make much sense otherwise.

#### Installing and running mypy

Mypy requires Python 3.8 or later to run. You can install mypy using pip:

```
$ python3 -m pip install mypy
```

Once mypy is installed, run it by using the `mypy` tool:

```
$ mypy program.py
```

This command makes mypy *type check* your `program.py` file and print out any errors it finds. Mypy will type check your code *statically*: this means that it will check for errors without ever running your code, just like a linter. This also means that you are always free to ignore the errors mypy reports, if you so wish. You can always use the Python interpreter to run your code, even if mypy reports errors.

However, if you try directly running mypy on your existing Python code, it will most likely report few or no errors. This is a feature! It makes it easy to adopt mypy incrementally. In order to get useful diagnostics from mypy, you must add *type annotations* to your code.
See the section below for details. #### Dynamic vs static typing[#](#dynamic-vs-static-typing) A function without type annotations is considered to be *dynamically typed* by mypy: ``` def greeting(name): return 'Hello ' + name ``` By default, mypy will **not** type check dynamically typed functions. This means that with a few exceptions, mypy will not report any errors with regular unannotated Python. This is the case even if you misuse the function! ``` def greeting(name): return 'Hello ' + name # These calls will fail when the program runs, but mypy does not report an error # because "greeting" does not have type annotations. greeting(123) greeting(b"Alice") ``` We can get mypy to detect these kinds of bugs by adding *type annotations* (also known as *type hints*). For example, you can tell mypy that `greeting` both accepts and returns a string like so: ``` # The "name: str" annotation says that the "name" argument should be a string # The "-> str" annotation says that "greeting" will return a string def greeting(name: str) -> str: return 'Hello ' + name ``` This function is now *statically typed*: mypy will use the provided type hints to detect incorrect use of the `greeting` function and incorrect use of variables within the `greeting` function. For example: ``` def greeting(name: str) -> str: return 'Hello ' + name greeting(3) # Argument 1 to "greeting" has incompatible type "int"; expected "str" greeting(b'Alice') # Argument 1 to "greeting" has incompatible type "bytes"; expected "str" greeting("World!") # No error def bad_greeting(name: str) -> str: return 'Hello ' * name # Unsupported operand types for * ("str" and "str") ``` Being able to pick whether you want a function to be dynamically or statically typed can be very helpful. For example, if you are migrating an existing Python codebase to use static types, it’s usually easier to migrate by incrementally adding type hints to your code rather than adding them all at once. 
Similarly, when you are prototyping a new feature, it may be convenient to initially implement the code using dynamic typing and only add type hints later once the code is more stable. Once you are finished migrating or prototyping your code, you can make mypy warn you if you add a dynamic function by mistake by using the [`--disallow-untyped-defs`](index.html#cmdoption-mypy-disallow-untyped-defs) flag. You can also get mypy to provide some limited checking of dynamically typed functions by using the [`--check-untyped-defs`](index.html#cmdoption-mypy-check-untyped-defs) flag. See [The mypy command line](index.html#command-line) for more information on configuring mypy. #### Strict mode and configuration[#](#strict-mode-and-configuration) Mypy has a *strict mode* that enables a number of additional checks, like [`--disallow-untyped-defs`](index.html#cmdoption-mypy-disallow-untyped-defs). If you run mypy with the [`--strict`](index.html#cmdoption-mypy-strict) flag, you will basically never get a type related error at runtime without a corresponding mypy error, unless you explicitly circumvent mypy somehow. However, this flag will probably be too aggressive if you are trying to add static types to a large, existing codebase. See [Using mypy with an existing codebase](index.html#existing-code) for suggestions on how to handle that case. Mypy is very configurable, so you can start with using `--strict` and toggle off individual checks. For instance, if you use many third party libraries that do not have types, [`--ignore-missing-imports`](index.html#cmdoption-mypy-ignore-missing-imports) may be useful. See [Introduce stricter options](index.html#getting-to-strict) for how to build up to `--strict`. See [The mypy command line](index.html#command-line) and [The mypy configuration file](index.html#config-file) for a complete reference on configuration options. 
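As a concrete sketch of the workflow described above (the option names are real mypy settings, but which checks you relax is entirely project-specific and illustrative here), a `mypy.ini` could start from strict mode and toggle off individual checks:

```ini
# Illustrative mypy.ini: enable strict mode globally, then
# relax the checks that are too noisy for this codebase.
[mypy]
strict = True
# Many third-party dependencies ship without type hints
ignore_missing_imports = True
```

Per-module sections (shown later in this document) let you scope such relaxations to specific packages instead of applying them globally.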
#### More complex types[#](#more-complex-types) So far, we’ve added type hints that use only basic concrete types like `str` and `float`. What if we want to express more complex types, such as “a list of strings” or “an iterable of ints”? For example, to indicate that some function can accept a list of strings, use the `list[str]` type (Python 3.9 and later): ``` def greet_all(names: list[str]) -> None: for name in names: print('Hello ' + name) names = ["Alice", "Bob", "Charlie"] ages = [10, 20, 30] greet_all(names) # Ok! greet_all(ages) # Error due to incompatible types ``` The [`list`](https://docs.python.org/3/library/stdtypes.html#list) type is an example of something called a *generic type*: it can accept one or more *type parameters*. In this case, we *parameterized* [`list`](https://docs.python.org/3/library/stdtypes.html#list) by writing `list[str]`. This lets mypy know that `greet_all` accepts specifically lists containing strings, and not lists containing ints or any other type. In the above examples, the type signature is perhaps a little too rigid. After all, there’s no reason why this function must accept *specifically* a list – it would run just fine if you were to pass in a tuple, a set, or any other custom iterable. You can express this idea using [`collections.abc.Iterable`](https://docs.python.org/3/library/collections.abc.html#collections.abc.Iterable): ``` from collections.abc import Iterable # or "from typing import Iterable" def greet_all(names: Iterable[str]) -> None: for name in names: print('Hello ' + name) ``` This behavior is actually a fundamental aspect of the PEP 484 type system: when we annotate some variable with a type `T`, we are actually telling mypy that variable can be assigned an instance of `T`, or an instance of a *subtype* of `T`. That is, `list[str]` is a subtype of `Iterable[str]`. 
This also applies to inheritance, so if you have a class `Child` that inherits from `Parent`, then a value of type `Child` can be assigned to a variable of type `Parent`. For example, a `RuntimeError` instance can be passed to a function that is annotated as taking an `Exception`. As another example, suppose you want to write a function that can accept *either* ints or strings, but no other types. You can express this using the [`Union`](https://docs.python.org/3/library/typing.html#typing.Union) type. For example, `int` is a subtype of `Union[int, str]`: ``` from typing import Union def normalize_id(user_id: Union[int, str]) -> str: if isinstance(user_id, int): return f'user-{100_000 + user_id}' else: return user_id ``` The [`typing`](https://docs.python.org/3/library/typing.html#module-typing) module contains many other useful types. For a quick overview, look through the [mypy cheatsheet](index.html#cheat-sheet-py3). For a detailed overview (including information on how to make your own generic types or your own type aliases), look through the [type system reference](index.html#overview-type-system-reference). Note When adding types, the convention is to import types using the form `from typing import Union` (as opposed to doing just `import typing` or `import typing as t` or `from typing import *`). For brevity, we often omit imports from [`typing`](https://docs.python.org/3/library/typing.html#module-typing) or [`collections.abc`](https://docs.python.org/3/library/collections.abc.html#module-collections.abc) in code examples, but mypy will give an error if you use types such as [`Iterable`](https://docs.python.org/3/library/typing.html#typing.Iterable) without first importing them. Note In some examples we use capitalized variants of types, such as `List`, and sometimes we use plain `list`. They are equivalent, but the former variant is needed if you are using Python 3.8 or earlier. 
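The `normalize_id` function above also behaves sensibly at runtime; the `isinstance` check that mypy uses to narrow the union is ordinary Python:

```python
from typing import Union

def normalize_id(user_id: Union[int, str]) -> str:
    if isinstance(user_id, int):
        # Narrowed to int in this branch
        return f'user-{100_000 + user_id}'
    else:
        # Narrowed to str here
        return user_id

# Both members of the union are accepted; any other argument type
# would be flagged by mypy.
print(normalize_id(42))        # -> user-100042
print(normalize_id('user-7'))  # -> user-7
```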
#### Local type inference[#](#local-type-inference) Once you have added type hints to a function (i.e. made it statically typed), mypy will automatically type check that function’s body. While doing so, mypy will try and *infer* as many details as possible. We saw an example of this in the `normalize_id` function above – mypy understands basic [`isinstance`](https://docs.python.org/3/library/functions.html#isinstance) checks and so can infer that the `user_id` variable was of type `int` in the if-branch and of type `str` in the else-branch. As another example, consider the following function. Mypy can type check this function without a problem: it will use the available context and deduce that `output` must be of type `list[float]` and that `num` must be of type `float`: ``` def nums_below(numbers: Iterable[float], limit: float) -> list[float]: output = [] for num in numbers: if num < limit: output.append(num) return output ``` For more details, see [Type inference and type annotations](index.html#type-inference-and-annotations). #### Types from libraries[#](#types-from-libraries) Mypy can also understand how to work with types from libraries that you use. For instance, mypy comes out of the box with an intimate knowledge of the Python standard library. For example, here is a function which uses the `Path` object from the [pathlib standard library module](https://docs.python.org/3/library/pathlib.html): ``` from pathlib import Path def load_template(template_path: Path, name: str) -> str: # Mypy knows that `template_path` has a `read_text` method that returns a str template = template_path.read_text() # ...so it understands this line type checks return template.replace('USERNAME', name) ``` If a third party library you use [declares support for type checking](index.html#installed-packages), mypy will type check your use of that library based on the type hints it contains. 
However, if the third party library does not have type hints, mypy will complain about missing type information. ``` prog.py:1: error: Library stubs not installed for "yaml" prog.py:1: note: Hint: "python3 -m pip install types-PyYAML" prog.py:2: error: Library stubs not installed for "requests" prog.py:2: note: Hint: "python3 -m pip install types-requests" ... ``` In this case, you can provide mypy a different source of type information, by installing a *stub* package. A stub package is a package that contains type hints for another library, but no actual code. ``` $ python3 -m pip install types-PyYAML types-requests ``` Stubs packages for a distribution are often named `types-<distribution>`. Note that a distribution name may be different from the name of the package that you import. For example, `types-PyYAML` contains stubs for the `yaml` package. For more discussion on strategies for handling errors about libraries without type information, refer to [Missing imports](index.html#fix-missing-imports). For more information about stubs, see [Stub files](index.html#stub-files). #### Next steps[#](#next-steps) If you are in a hurry and don’t want to read lots of documentation before getting started, here are some pointers to quick learning resources: * Read the [mypy cheatsheet](index.html#cheat-sheet-py3). * Read [Using mypy with an existing codebase](index.html#existing-code) if you have a significant existing codebase without many type annotations. * Read the [blog post](https://blog.zulip.org/2016/10/13/static-types-in-python-oh-mypy/) about the Zulip project’s experiences with adopting mypy. 
* If you prefer watching talks instead of reading, here are some ideas: + <NAME>: [Type Checked Python in the Real World](https://www.youtube.com/watch?v=pMgmKJyWKn8) (PyCon 2018) + <NAME>: [Clearer Code at Scale: Static Types at Zulip and Dropbox](https://www.youtube.com/watch?v=0c46YHS3RY8) (PyCon 2018) * Look at [solutions to common issues](index.html#common-issues) with mypy if you encounter problems. * You can ask questions about mypy in the [mypy issue tracker](https://github.com/python/mypy/issues) and in the typing [Gitter chat](https://gitter.im/python/typing). * For general questions about Python typing, try posting at [typing discussions](https://github.com/python/typing/discussions). You can also continue reading this document and skip sections that aren’t relevant for you. You don’t need to read sections in order. ### Type hints cheat sheet[#](#type-hints-cheat-sheet) This document is a quick cheat sheet showing how to use type annotations for various common types in Python. #### Variables[#](#variables) Technically many of the type annotations shown below are redundant, since mypy can usually infer the type of a variable from its value. See [Type inference and type annotations](index.html#type-inference-and-annotations) for more details. 
``` # This is how you declare the type of a variable age: int = 1 # You don't need to initialize a variable to annotate it a: int # Ok (no value at runtime until assigned) # Doing so can be useful in conditional branches child: bool if age < 18: child = True else: child = False ``` #### Useful built-in types[#](#useful-built-in-types) ``` # For most types, just use the name of the type in the annotation # Note that mypy can usually infer the type of a variable from its value, # so technically these annotations are redundant x: int = 1 x: float = 1.0 x: bool = True x: str = "test" x: bytes = b"test" # For collections on Python 3.9+, the type of the collection item is in brackets x: list[int] = [1] x: set[int] = {6, 7} # For mappings, we need the types of both keys and values x: dict[str, float] = {"field": 2.0} # Python 3.9+ # For tuples of fixed size, we specify the types of all the elements x: tuple[int, str, float] = (3, "yes", 7.5) # Python 3.9+ # For tuples of variable size, we use one type and ellipsis x: tuple[int, ...] = (1, 2, 3) # Python 3.9+ # On Python 3.8 and earlier, the name of the collection type is # capitalized, and the type is imported from the 'typing' module from typing import List, Set, Dict, Tuple x: List[int] = [1] x: Set[int] = {6, 7} x: Dict[str, float] = {"field": 2.0} x: Tuple[int, str, float] = (3, "yes", 7.5) x: Tuple[int, ...] 
= (1, 2, 3) from typing import Union, Optional # On Python 3.10+, use the | operator when something could be one of a few types x: list[int | str] = [3, 5, "test", "fun"] # Python 3.10+ # On earlier versions, use Union x: list[Union[int, str]] = [3, 5, "test", "fun"] # Use Optional[X] for a value that could be None # Optional[X] is the same as X | None or Union[X, None] x: Optional[str] = "something" if some_condition() else None if x is not None: # Mypy understands x won't be None here because of the if-statement print(x.upper()) # If you know a value can never be None due to some logic that mypy doesn't # understand, use an assert assert x is not None print(x.upper()) ``` #### Functions[#](#functions) ``` from typing import Callable, Iterator, Union, Optional # This is how you annotate a function definition def stringify(num: int) -> str: return str(num) # And here's how you specify multiple arguments def plus(num1: int, num2: int) -> int: return num1 + num2 # If a function does not return a value, use None as the return type # Default value for an argument goes after the type annotation def show(value: str, excitement: int = 10) -> None: print(value + "!" * excitement) # Note that arguments without a type are dynamically typed (treated as Any) # and that functions without any annotations are not checked def untyped(x): x.anything() + 1 + "string" # no errors # This is how you annotate a callable (function) value x: Callable[[int, float], float] = f def register(callback: Callable[[str], int]) -> None: ... # A generator function that yields ints is secretly just a function that # returns an iterator of ints, so that's how we annotate it def gen(n: int) -> Iterator[int]: i = 0 while i < n: yield i i += 1 # You can of course split a function annotation over multiple lines def send_email(address: Union[str, list[str]], sender: str, cc: Optional[list[str]], bcc: Optional[list[str]], subject: str = '', body: Optional[list[str]] = None ) -> bool: ... 
# Mypy understands positional-only and keyword-only arguments # Positional-only arguments can also be marked by using a name starting with # two underscores def quux(x: int, /, *, y: int) -> None: pass quux(3, y=5) # Ok quux(3, 5) # error: Too many positional arguments for "quux" quux(x=3, y=5) # error: Unexpected keyword argument "x" for "quux" # This says each positional arg and each keyword arg is a "str" def call(self, *args: str, **kwargs: str) -> str: reveal_type(args) # Revealed type is "tuple[str, ...]" reveal_type(kwargs) # Revealed type is "dict[str, str]" request = make_request(*args, **kwargs) return self.do_api_query(request) ``` #### Classes[#](#classes) ``` class BankAccount: # The "__init__" method doesn't return anything, so it gets return # type "None" just like any other method that doesn't return anything def __init__(self, account_name: str, initial_balance: int = 0) -> None: # mypy will infer the correct types for these instance variables # based on the types of the parameters. self.account_name = account_name self.balance = initial_balance # For instance methods, omit type for "self" def deposit(self, amount: int) -> None: self.balance += amount def withdraw(self, amount: int) -> None: self.balance -= amount # User-defined classes are valid as types in annotations account: BankAccount = BankAccount("Alice", 400) def transfer(src: BankAccount, dst: BankAccount, amount: int) -> None: src.withdraw(amount) dst.deposit(amount) # Functions that accept BankAccount also accept any subclass of BankAccount! 
class AuditedBankAccount(BankAccount): # You can optionally declare instance variables in the class body audit_log: list[str] def __init__(self, account_name: str, initial_balance: int = 0) -> None: super().__init__(account_name, initial_balance) self.audit_log: list[str] = [] def deposit(self, amount: int) -> None: self.audit_log.append(f"Deposited {amount}") self.balance += amount def withdraw(self, amount: int) -> None: self.audit_log.append(f"Withdrew {amount}") self.balance -= amount audited = AuditedBankAccount("Bob", 300) transfer(audited, account, 100) # type checks! # You can use the ClassVar annotation to declare a class variable from typing import ClassVar class Car: seats: ClassVar[int] = 4 passengers: ClassVar[list[str]] # If you want dynamic attributes on your class, have it # override "__setattr__" or "__getattr__" class A: # This will allow assignment to any A.x, if x is the same type as "value" # (use "value: Any" to allow arbitrary types) def __setattr__(self, name: str, value: int) -> None: ... # This will allow access to any A.x, if x is compatible with the return type def __getattr__(self, name: str) -> int: ... a = A() a.foo = 42 # Works a.bar = 'Ex-parrot' # Fails type checking ``` #### When you’re puzzled or when things are complicated[#](#when-you-re-puzzled-or-when-things-are-complicated) ``` from typing import Union, Any, Optional, TYPE_CHECKING, cast # To find out what type mypy infers for an expression anywhere in # your program, wrap it in reveal_type(). Mypy will print an error # message with the type; remove it again before running the code. reveal_type(1) # Revealed type is "builtins.int" # If you initialize a variable with an empty container or "None" # you may have to help mypy a bit by providing an explicit type annotation x: list[str] = [] x: Optional[str] = None # Use Any if you don't know the type of something or it's too # dynamic to write a type for x: Any = mystery_function() # Mypy will let you do anything with x! 
x.whatever() * x["you"] + x("want") - any(x) and all(x) is super # no errors # Use a "type: ignore" comment to suppress errors on a given line, # when your code confuses mypy or runs into an outright bug in mypy. # Good practice is to add a comment explaining the issue. x = confusing_function() # type: ignore # confusing_function won't return None here because ... # "cast" is a helper function that lets you override the inferred # type of an expression. It's only for mypy -- there's no runtime check. a = [4] b = cast(list[int], a) # Passes fine c = cast(list[str], a) # Passes fine despite being a lie (no runtime check) reveal_type(c) # Revealed type is "builtins.list[builtins.str]" print(c) # Still prints [4] ... the object is not changed or cast at runtime # Use "TYPE_CHECKING" if you want to have code that mypy can see but will not # be executed at runtime (or to have code that mypy can't see) if TYPE_CHECKING: import json else: import orjson as json # mypy is unaware of this ``` In some cases type annotations can cause issues at runtime; see [Annotation issues at runtime](index.html#runtime-troubles) for dealing with this. See [Silencing type errors](index.html#silencing-type-errors) for details on how to silence errors. #### Standard “duck types”[#](#standard-duck-types) In typical Python code, many functions that can take a list or a dict as an argument only need their argument to be somehow “list-like” or “dict-like”. A specific meaning of “list-like” or “dict-like” (or something-else-like) is called a “duck type”, and several duck types that are common in idiomatic Python are standardized. 
``` from typing import Mapping, MutableMapping, Sequence, Iterable # Use Iterable for generic iterables (anything usable in "for"), # and Sequence where a sequence (supporting "len" and "__getitem__") is # required def f(ints: Iterable[int]) -> list[str]: return [str(x) for x in ints] f(range(1, 3)) # Mapping describes a dict-like object (with "__getitem__") that we won't # mutate, and MutableMapping one (with "__setitem__") that we might def f(my_mapping: Mapping[int, str]) -> list[int]: my_mapping[5] = 'maybe' # mypy will complain about this line... return list(my_mapping.keys()) f({3: 'yes', 4: 'no'}) def f(my_mapping: MutableMapping[int, str]) -> set[str]: my_mapping[5] = 'maybe' # ...but mypy is OK with this. return set(my_mapping.values()) f({3: 'yes', 4: 'no'}) import sys from typing import IO # Use IO[str] or IO[bytes] for functions that should accept or return # objects that come from an open() call (note that IO does not # distinguish between reading, writing or other modes) def get_sys_IO(mode: str = 'w') -> IO[str]: if mode == 'w': return sys.stdout elif mode == 'r': return sys.stdin else: return sys.stdout ``` You can even make your own duck types using [Protocols and structural subtyping](index.html#protocol-types). #### Forward references[#](#forward-references) ``` # You may want to reference a class before it is defined. # This is known as a "forward reference". def f(foo: A) -> int: # This will fail at runtime with 'A' is not defined ... # However, if you add the following special import: from __future__ import annotations # It will work at runtime and type checking will succeed as long as there # is a class of that name later on in the file def f(foo: A) -> int: # Ok ... # Another option is to just put the type in quotes def f(foo: 'A') -> int: # Also ok ... class A: # This can also come up if you need to reference a class in a type # annotation inside the definition of that class @classmethod def create(cls) -> A: ... 
``` See [Class name forward references](index.html#forward-references) for more details. #### Decorators[#](#decorators) Decorator functions can be expressed via generics. See [Declaring decorators](index.html#declaring-decorators) for more details. ``` from typing import Any, Callable, TypeVar F = TypeVar('F', bound=Callable[..., Any]) def bare_decorator(func: F) -> F: ... def decorator_args(url: str) -> Callable[[F], F]: ... ``` #### Coroutines and asyncio[#](#coroutines-and-asyncio) See [Typing async/await](index.html#async-and-await) for the full detail on typing coroutines and asynchronous code. ``` import asyncio # A coroutine is typed like a normal function async def countdown(tag: str, count: int) -> str: while count > 0: print(f'T-minus {count} ({tag})') await asyncio.sleep(0.1) count -= 1 return "Blastoff!" ``` ### Using mypy with an existing codebase[#](#using-mypy-with-an-existing-codebase) This section explains how to get started using mypy with an existing, significant codebase that has little or no type annotations. If you are a beginner, you can skip this section. #### Start small[#](#start-small) If your codebase is large, pick a subset of your codebase (say, 5,000 to 50,000 lines) and get mypy to run successfully only on this subset at first, *before adding annotations*. This should be doable in a day or two. The sooner you get some form of mypy passing on your codebase, the sooner you benefit. You’ll likely need to fix some mypy errors, either by inserting annotations requested by mypy or by adding `# type: ignore` comments to silence errors you don’t want to fix now. We’ll mention some tips for getting mypy passing on your codebase in various sections below. #### Run mypy consistently and prevent regressions[#](#run-mypy-consistently-and-prevent-regressions) Make sure all developers on your codebase run mypy the same way. 
One way to ensure this is adding a small script with your mypy invocation to your codebase, or adding your mypy invocation to existing tools you use to run tests, like `tox`. * Make sure everyone runs mypy with the same options. Checking a mypy [configuration file](index.html#config-file) into your codebase can help with this. * Make sure everyone type checks the same set of files. See [Specifying code to be checked](index.html#specifying-code-to-be-checked) for details. * Make sure everyone runs mypy with the same version of mypy, for instance by pinning mypy with the rest of your dev requirements. In particular, you’ll want to make sure to run mypy as part of your Continuous Integration (CI) system as soon as possible. This will prevent new type errors from being introduced into your codebase. A simple CI script could look something like this: ``` python3 -m pip install mypy==0.971 # Run your standardised mypy invocation, e.g. mypy my_project # This could also look like `scripts/run_mypy.sh`, `tox run -e mypy`, `make mypy`, etc ``` #### Ignoring errors from certain modules[#](#ignoring-errors-from-certain-modules) By default mypy will follow imports in your code and try to check everything. This means even if you only pass in a few files to mypy, it may still process a large number of imported files. This could potentially result in lots of errors you don’t want to deal with at the moment. One way to deal with this is to ignore errors in modules you aren’t yet ready to type check. The [`ignore_errors`](index.html#confval-ignore_errors) option is useful for this, for instance, if you aren’t yet ready to deal with errors from `package_to_fix_later`: ``` [mypy-package_to_fix_later.*] ignore_errors = True ``` You could even invert this, by setting `ignore_errors = True` in your global config section and only enabling error reporting with `ignore_errors = False` for the set of modules you are ready to type check. 
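The inverted layout just described might look like this (the module name is a placeholder):

```ini
# Illustrative sketch: ignore errors everywhere by default,
# then opt individual packages back in as they are cleaned up.
[mypy]
ignore_errors = True

[mypy-ready_package.*]
ignore_errors = False
```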
#### Fixing errors related to imports[#](#fixing-errors-related-to-imports) A common class of error you will encounter is errors from mypy about modules that it can’t find, that don’t have types, or don’t have stub files: ``` core/config.py:7: error: Cannot find implementation or library stub for module named 'frobnicate' core/model.py:9: error: Cannot find implementation or library stub for module named 'acme' ... ``` Sometimes these can be fixed by installing the relevant packages or stub libraries in the environment you’re running `mypy` in. See [Missing imports](index.html#ignore-missing-imports) for a complete reference on these errors and the ways in which you can fix them. You’ll likely find that you want to suppress all errors from importing a given module that doesn’t have types. If you only import that module in one or two places, you can use `# type: ignore` comments. For example, here we ignore an error about a third-party module `frobnicate` that doesn’t have stubs using `# type: ignore`: ``` import frobnicate # type: ignore ... frobnicate.initialize() # OK (but not checked) ``` But if you import the module in many places, this becomes unwieldy. In this case, we recommend using a [configuration file](index.html#config-file). For example, to disable errors about importing `frobnicate` and `acme` everywhere in your codebase, use a config like this: ``` [mypy-frobnicate.*] ignore_missing_imports = True [mypy-acme.*] ignore_missing_imports = True ``` If you get a large number of errors, you may want to ignore all errors about missing imports, for instance by setting [`ignore_missing_imports`](index.html#confval-ignore_missing_imports) to true globally. This can hide errors later on, so we recommend avoiding this if possible. Finally, mypy allows fine-grained control over specific import following behaviour. It’s very easy to silently shoot yourself in the foot when playing around with these, so it’s mostly recommended as a last resort. 
For more details, look [here](index.html#follow-imports). #### Prioritise annotating widely imported modules[#](#prioritise-annotating-widely-imported-modules) Most projects have some widely imported modules, such as utilities or model classes. It’s a good idea to annotate these pretty early on, since this allows code using these modules to be type checked more effectively. Mypy is designed to support gradual typing, i.e. letting you add annotations at your own pace, so it’s okay to leave some of these modules unannotated. The more you annotate, the more useful mypy will be, but even a little annotation coverage is useful. #### Write annotations as you go[#](#write-annotations-as-you-go) Consider adding something like these in your code style conventions: 1. Developers should add annotations for any new code. 2. It’s also encouraged to write annotations when you modify existing code. This way you’ll gradually increase annotation coverage in your codebase without much effort. #### Automate annotation of legacy code[#](#automate-annotation-of-legacy-code) There are tools for automatically adding draft annotations based on simple static analysis or on type profiles collected at runtime. Tools include [MonkeyType](https://monkeytype.readthedocs.io/en/latest/index.html), [autotyping](https://github.com/JelleZijlstra/autotyping) and [PyAnnotate](https://github.com/dropbox/pyannotate). A simple approach is to collect types from test runs. This may work well if your test coverage is good (and if your tests aren’t very slow). Another approach is to enable type collection for a small, random fraction of production network requests. This clearly requires more care, as type collection could impact the reliability or the performance of your service. #### Introduce stricter options[#](#introduce-stricter-options) Mypy is very configurable. Once you get started with static typing, you may want to explore the various strictness options mypy provides to catch more bugs. 
For example, you can ask mypy to require annotations for all functions in certain modules to avoid accidentally introducing code that won’t be type checked using [`disallow_untyped_defs`](index.html#confval-disallow_untyped_defs). Refer to [The mypy configuration file](index.html#config-file) for the details. An excellent goal to aim for is to have your codebase pass when run against `mypy --strict`. This basically ensures that you will never have a type related error without an explicit circumvention somewhere (such as a `# type: ignore` comment). The following config is equivalent to `--strict` (as of mypy 1.0): ``` # Start off with these warn_unused_configs = True warn_redundant_casts = True warn_unused_ignores = True # Getting these passing should be easy strict_equality = True strict_concatenate = True # Strongly recommend enabling this one as soon as you can check_untyped_defs = True # These shouldn't be too much additional work, but may be tricky to # get passing if you use a lot of untyped libraries disallow_subclassing_any = True disallow_untyped_decorators = True disallow_any_generics = True # These next few are various gradations of forcing use of type annotations disallow_untyped_calls = True disallow_incomplete_defs = True disallow_untyped_defs = True # This one isn't too hard to get passing, but return on investment is lower no_implicit_reexport = True # This one can be tricky to get passing if you use a lot of untyped libraries warn_return_any = True ``` Note that you can also start with `--strict` and subtract, for instance: ``` strict = True warn_return_any = False ``` Remember that many of these options can be enabled on a per-module basis. For instance, you may want to enable `disallow_untyped_defs` for modules which you’ve completed annotations for, in order to prevent new code from being added without annotations. And if you want, it doesn’t stop at `--strict`. Mypy has additional checks that are not part of `--strict` that can be useful. 
See the complete [The mypy command line](index.html#command-line) reference and [Error codes for optional checks](index.html#error-codes-optional). #### Speed up mypy runs[#](#speed-up-mypy-runs) You can use [mypy daemon](index.html#mypy-daemon) to get much faster incremental mypy runs. The larger your project is, the more useful this will be. If your project has at least 100,000 lines of code or so, you may also want to set up [remote caching](index.html#remote-cache) for further speedups. ### Built-in types[#](#built-in-types) This chapter introduces some commonly used built-in types. We will cover many other kinds of types later. #### Simple types[#](#simple-types) Here are examples of some common built-in types: | Type | Description | | --- | --- | | `int` | integer | | `float` | floating point number | | `bool` | boolean value (subclass of `int`) | | `str` | text, sequence of unicode codepoints | | `bytes` | 8-bit string, sequence of byte values | | `object` | an arbitrary object (`object` is the common base class) | All built-in classes can be used as types. #### Any type[#](#any-type) If you can’t find a good type for some value, you can always fall back to `Any`: | Type | Description | | --- | --- | | `Any` | dynamically typed value with an arbitrary type | The type `Any` is defined in the [`typing`](https://docs.python.org/3/library/typing.html#module-typing) module. See [Dynamically typed code](index.html#dynamic-typing) for more details. 
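To make the tables above concrete, here is a minimal, runnable sketch exercising the built-in types and `Any` (the `describe` helper is invented for illustration):

```python
from typing import Any

# Variables annotated with common built-in types
count: int = 3
ratio: float = 0.5
flag: bool = True
name: str = "mypy"
raw: bytes = b"\x00\x01"

# Any is the escape hatch: every value is compatible with it
anything: Any = "could be anything"

def describe(obj: object) -> str:
    # object is the common base class, so any value can be passed in
    return f"{type(obj).__name__}: {obj!r}"

print(describe(count))   # int: 3
print(describe(name))    # str: 'mypy'
```

Note that `bool` values also satisfy an `int` annotation, since `bool` is a subclass of `int`.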
#### Generic types[#](#generic-types) In Python 3.9 and later, built-in collection type objects support indexing: | Type | Description | | --- | --- | | `list[str]` | list of `str` objects | | `tuple[int, int]` | tuple of two `int` objects (`tuple[()]` is the empty tuple) | | `tuple[int, ...]` | tuple of an arbitrary number of `int` objects | | `dict[str, int]` | dictionary from `str` keys to `int` values | | `Iterable[int]` | iterable object containing ints | | `Sequence[bool]` | sequence of booleans (read-only) | | `Mapping[str, int]` | mapping from `str` keys to `int` values (read-only) | | `type[C]` | type object of `C` (`C` is a class/type variable/union of types) | The type `dict` is a *generic* class, signified by type arguments within `[...]`. For example, `dict[int, str]` is a dictionary from integers to strings and `dict[Any, Any]` is a dictionary of dynamically typed (arbitrary) values and keys. `list` is another generic class. `Iterable`, `Sequence`, and `Mapping` are generic types that correspond to Python protocols. For example, a `str` object or a `list[str]` object is valid when `Iterable[str]` or `Sequence[str]` is expected. You can import them from [`collections.abc`](https://docs.python.org/3/library/collections.abc.html#module-collections.abc) instead of importing from [`typing`](https://docs.python.org/3/library/typing.html#module-typing) in Python 3.9. See [Using generic builtins](index.html#generic-builtins) for more details, including how you can use these in annotations also in Python 3.7 and 3.8. 
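As a hedged sketch of the table above (this assumes Python 3.9 or later, where built-in collection types support indexing at runtime; the function names are invented for illustration):

```python
from collections.abc import Iterable, Mapping, Sequence

names: list[str] = ["Ada", "Grace"]
point: tuple[int, int] = (3, 4)
ages: dict[str, int] = {"Ada": 36, "Grace": 85}

def total(items: Iterable[int]) -> int:
    # Accepts any iterable of ints: a list, a tuple, a generator, ...
    return sum(items)

def first(seq: Sequence[bool]) -> bool:
    # Sequence is treated as read-only here: we index, but never mutate
    return seq[0]

def lookup(mapping: Mapping[str, int], key: str) -> int:
    return mapping.get(key, 0)

print(total([1, 2, 3]))      # 6
print(total((1, 2, 3)))      # a tuple satisfies Iterable[int] too
print(lookup(ages, "Ada"))   # 36
```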
These legacy types defined in [`typing`](https://docs.python.org/3/library/typing.html#module-typing) are needed if you need to support Python 3.8 and earlier: | Type | Description | | --- | --- | | `List[str]` | list of `str` objects | | `Tuple[int, int]` | tuple of two `int` objects (`Tuple[()]` is the empty tuple) | | `Tuple[int, ...]` | tuple of an arbitrary number of `int` objects | | `Dict[str, int]` | dictionary from `str` keys to `int` values | | `Iterable[int]` | iterable object containing ints | | `Sequence[bool]` | sequence of booleans (read-only) | | `Mapping[str, int]` | mapping from `str` keys to `int` values (read-only) | | `Type[C]` | type object of `C` (`C` is a class/type variable/union of types) | `List` is an alias for the built-in type `list` that supports indexing (and similarly for `dict`/`Dict` and `tuple`/`Tuple`). Note that even though `Iterable`, `Sequence` and `Mapping` look similar to abstract base classes defined in [`collections.abc`](https://docs.python.org/3/library/collections.abc.html#module-collections.abc) (formerly `collections`), they are not identical, since the latter don’t support indexing prior to Python 3.9. ### Type inference and type annotations[#](#type-inference-and-type-annotations) #### Type inference[#](#type-inference) For most variables, if you do not explicitly specify its type, mypy will infer the correct type based on what is initially assigned to the variable. ``` # Mypy will infer the type of these variables, despite no annotations i = 1 reveal_type(i) # Revealed type is "builtins.int" l = [1, 2] reveal_type(l) # Revealed type is "builtins.list[builtins.int]" ``` Note Note that mypy will not use type inference in dynamically typed functions (those without a function type annotation) — every local variable type defaults to `Any` in such functions. For more details, see [Dynamically typed code](index.html#dynamic-typing). 
``` def untyped_function(): i = 1 reveal_type(i) # Revealed type is "Any" # 'reveal_type' always outputs 'Any' in unchecked functions ``` #### Explicit types for variables[#](#explicit-types-for-variables) You can override the inferred type of a variable by using a variable type annotation: ``` from typing import Union x: Union[int, str] = 1 ``` Without the type annotation, the type of `x` would be just `int`. We use an annotation to give it a more general type `Union[int, str]` (this type means that the value can be either an `int` or a `str`). The best way to think about this is that the type annotation sets the type of the variable, not the type of the expression. For instance, mypy will complain about the following code: ``` x: Union[int, str] = 1.1 # error: Incompatible types in assignment # (expression has type "float", variable has type "Union[int, str]") ``` Note To explicitly override the type of an expression you can use [`cast(<type>, <expression>)`](https://docs.python.org/3/library/typing.html#typing.cast). See [Casts](index.html#casts) for details. Note that you can explicitly declare the type of a variable without giving it an initial value: ``` # We only unpack two values, so there's no right-hand side value # for mypy to infer the type of "cs" from: a, b, *cs = 1, 2 # error: Need type annotation for "cs" rs: list[int] # no assignment! p, q, *rs = 1, 2 # OK ``` #### Explicit types for collections[#](#explicit-types-for-collections) The type checker cannot always infer the type of a list or a dictionary. This often arises when creating an empty list or dictionary and assigning it to a new variable that doesn’t have an explicit variable type. 
Here is an example where mypy can’t infer the type without some help: ``` l = [] # Error: Need type annotation for "l" ``` In these cases you can give the type explicitly using a type annotation: ``` l: list[int] = [] # Create empty list of int d: dict[str, int] = {} # Create empty dictionary (str -> int) ``` Note Using type arguments (e.g. `list[int]`) on builtin collections like [`list`](https://docs.python.org/3/library/stdtypes.html#list), [`dict`](https://docs.python.org/3/library/stdtypes.html#dict), [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple), and [`set`](https://docs.python.org/3/library/stdtypes.html#set) only works in Python 3.9 and later. For Python 3.8 and earlier, you must use [`List`](https://docs.python.org/3/library/typing.html#typing.List) (e.g. `List[int]`), [`Dict`](https://docs.python.org/3/library/typing.html#typing.Dict), and so on. #### Compatibility of container types[#](#compatibility-of-container-types) A quick note: container types can sometimes be unintuitive. We’ll discuss this more in [Invariance vs covariance](index.html#variance). For example, the following program generates a mypy error, because mypy treats `list[int]` as incompatible with `list[object]`: ``` def f(l: list[object], k: list[int]) -> None: l = k # error: Incompatible types in assignment ``` The reason why the above assignment is disallowed is that allowing the assignment could result in non-int values stored in a list of `int`: ``` def f(l: list[object], k: list[int]) -> None: l = k l.append('x') print(k[-1]) # Ouch; a string in list[int] ``` Other container types like [`dict`](https://docs.python.org/3/library/stdtypes.html#dict) and [`set`](https://docs.python.org/3/library/stdtypes.html#set) behave similarly. You can still run the above program; it prints `x`. This illustrates the fact that static types do not affect the runtime behavior of programs. 
You can run programs with type check failures, which is often very handy when performing a large refactoring. Thus you can always ‘work around’ the type system, and it doesn’t really limit what you can do in your program. #### Context in type inference[#](#context-in-type-inference) Type inference is *bidirectional* and takes context into account. Mypy will take into account the type of the variable on the left-hand side of an assignment when inferring the type of the expression on the right-hand side. For example, the following will type check: ``` def f(l: list[object]) -> None: l = [1, 2] # Infer type list[object] for [1, 2], not list[int] ``` The value expression `[1, 2]` is type checked with the additional context that it is being assigned to a variable of type `list[object]`. This is used to infer the type of the *expression* as `list[object]`. Declared argument types are also used for type context. In this program mypy knows that the empty list `[]` should have type `list[int]` based on the declared type of `arg` in `foo`: ``` def foo(arg: list[int]) -> None: print('Items:', ''.join(str(a) for a in arg)) foo([]) # OK ``` However, context only works within a single statement. Here mypy requires an annotation for the empty list, since the context would only be available in the following statement: ``` def foo(arg: list[int]) -> None: print('Items:', ', '.join(arg)) a = [] # Error: Need type annotation for "a" foo(a) ``` Working around the issue is easy by adding a type annotation: ``` ... a: list[int] = [] # OK foo(a) ``` #### Silencing type errors[#](#silencing-type-errors) You might want to disable type checking on specific lines, or within specific files in your codebase. To do that, you can use a `# type: ignore` comment. For example, say in its latest update, the web framework you use can now take an integer argument to `run()`, which starts it on localhost on that port. 
Like so: ``` # Starting app on http://localhost:8000 app.run(8000) ``` However, the devs forgot to update their type annotations for `run`, so mypy still thinks `run` only expects `str` types. This would give you the following error: ``` error: Argument 1 to "run" of "A" has incompatible type "int"; expected "str" ``` If you cannot directly fix the web framework yourself, you can temporarily disable type checking on that line, by adding a `# type: ignore`: ``` # Starting app on http://localhost:8000 app.run(8000) # type: ignore ``` This will suppress any mypy errors that would have raised on that specific line. You should probably add some more information on the `# type: ignore` comment, to explain why the ignore was added in the first place. This could be a link to an issue on the repository responsible for the type stubs, or it could be a short explanation of the bug. To do that, use this format: ``` # Starting app on http://localhost:8000 app.run(8000) # type: ignore # `run()` in v2.0 accepts an `int`, as a port ``` ##### Type ignore error codes[#](#type-ignore-error-codes) By default, mypy displays an error code for each error: ``` error: "str" has no attribute "trim" [attr-defined] ``` It is possible to add a specific error-code in your ignore comment (e.g. `# type: ignore[attr-defined]`) to clarify what’s being silenced. You can find more information about error codes [here](index.html#silence-error-codes). ##### Other ways to silence errors[#](#other-ways-to-silence-errors) You can get mypy to silence errors about a specific variable by dynamically typing it with `Any`. See [Dynamically typed code](index.html#dynamic-typing) for more information. ``` from typing import Any def f(x: Any, y: str) -> None: x = 'hello' x += 1 # OK ``` You can ignore all mypy errors in a file by adding a `# mypy: ignore-errors` at the top of the file: ``` # mypy: ignore-errors # This is a test file, skipping type checking in it. import unittest ... 
``` You can also specify per-module configuration options in your [The mypy configuration file](index.html#config-file). For example: ``` # Don't report errors in the 'package_to_fix_later' package [mypy-package_to_fix_later.*] ignore_errors = True # Disable specific error codes in the 'tests' package # Also don't require type annotations [mypy-tests.*] disable_error_code = var-annotated, has-type allow_untyped_defs = True # Silence import errors from the 'library_missing_types' package [mypy-library_missing_types.*] ignore_missing_imports = True ``` Finally, adding a `@typing.no_type_check` decorator to a class, method or function causes mypy to avoid type checking that class, method or function and to treat it as not having any type annotations. ``` @typing.no_type_check def foo() -> str: return 12345 # No error! ``` ### Kinds of types[#](#kinds-of-types) We’ve mostly restricted ourselves to built-in types until now. This section introduces several additional kinds of types. You are likely to need at least some of them to type check any non-trivial programs. #### Class types[#](#class-types) Every class is also a valid type. Any instance of a subclass is also compatible with all superclasses – it follows that every value is compatible with the [`object`](https://docs.python.org/3/library/functions.html#object) type (and incidentally also the `Any` type, discussed below). Mypy analyzes the bodies of classes to determine which methods and attributes are available in instances. This example uses subclassing: ``` class A: def f(self) -> int: # Type of self inferred (A) return 2 class B(A): def f(self) -> int: return 3 def g(self) -> int: return 4 def foo(a: A) -> None: print(a.f()) # 3 a.g() # Error: "A" has no attribute "g" foo(B()) # OK (B is a subclass of A) ``` #### The Any type[#](#the-any-type) A value with the `Any` type is dynamically typed. Mypy doesn’t know anything about the possible runtime types of such value. 
Any operations are permitted on the value, and the operations are only checked at runtime. You can use `Any` as an “escape hatch” when you can’t use a more precise type for some reason. `Any` is compatible with every other type, and vice versa. You can freely assign a value of type `Any` to a variable with a more precise type: ``` a: Any = None s: str = '' a = 2 # OK (assign "int" to "Any") s = a # OK (assign "Any" to "str") ``` Declared (and inferred) types are ignored (or *erased*) at runtime. They are basically treated as comments, and thus the above code does not generate a runtime error, even though `s` gets an `int` value when the program is run, while the declared type of `s` is actually `str`! You need to be careful with `Any` types, since they let you lie to mypy, and this could easily hide bugs. If you do not define a function return value or argument types, these default to `Any`: ``` def show_heading(s) -> None: print('=== ' + s + ' ===') # No static type checking, as s has type Any show_heading(1) # OK (runtime error only; mypy won't generate an error) ``` You should give a statically typed function an explicit `None` return type even if it doesn’t return a value, as this lets mypy catch additional type errors: ``` def wait(t: float): # Implicit Any return value print('Waiting...') time.sleep(t) if wait(2) > 1: # Mypy doesn't catch this error! ... ``` If we had used an explicit `None` return type, mypy would have caught the error: ``` def wait(t: float) -> None: print('Waiting...') time.sleep(t) if wait(2) > 1: # Error: can't compare None and int ... ``` The `Any` type is discussed in more detail in section [Dynamically typed code](index.html#dynamic-typing). Note A function without any types in the signature is dynamically typed. The body of a dynamically typed function is not checked statically, and local variables have implicit `Any` types. 
This makes it easier to migrate legacy Python code to mypy, as mypy won’t complain about dynamically typed functions. #### Tuple types[#](#tuple-types) The type `tuple[T1, ..., Tn]` represents a tuple with the item types `T1`, …, `Tn`: ``` # Use `typing.Tuple` in Python 3.8 and earlier def f(t: tuple[int, str]) -> None: t = 1, 'foo' # OK t = 'foo', 1 # Type check error ``` A tuple type of this kind has exactly a specific number of items (2 in the above example). Tuples can also be used as immutable, varying-length sequences. You can use the type `tuple[T, ...]` (with a literal `...` – it’s part of the syntax) for this purpose. Example: ``` def print_squared(t: tuple[int, ...]) -> None: for n in t: print(n, n ** 2) print_squared(()) # OK print_squared((1, 3, 5)) # OK print_squared([1, 2]) # Error: only a tuple is valid ``` Note Usually it’s a better idea to use `Sequence[T]` instead of `tuple[T, ...]`, as [`Sequence`](https://docs.python.org/3/library/typing.html#typing.Sequence) is also compatible with lists and other non-tuple sequences. Note `tuple[...]` is valid as a base class in Python 3.6 and later, and always in stub files. In earlier Python versions you can sometimes work around this limitation by using a named tuple as a base class (see section [Named tuples](#named-tuples)). #### Callable types (and lambdas)[#](#callable-types-and-lambdas) You can pass around function objects and bound methods in statically typed code. The type of a function that accepts arguments `A1`, …, `An` and returns `Rt` is `Callable[[A1, ..., An], Rt]`. Example: ``` from typing import Callable def twice(i: int, next: Callable[[int], int]) -> int: return next(next(i)) def add(i: int) -> int: return i + 1 print(twice(3, add)) # 5 ``` You can only have positional arguments, and only ones without default values, in callable types. These cover the vast majority of uses of callable types, but sometimes this isn’t quite enough. 
Mypy recognizes a special form `Callable[..., T]` (with a literal `...`) which can be used in less typical cases. It is compatible with arbitrary callable objects that return a type compatible with `T`, independent of the number, types or kinds of arguments. Mypy lets you call such callable values with arbitrary arguments, without any checking – in this respect they are treated similar to a `(*args: Any, **kwargs: Any)` function signature. Example: ``` from typing import Callable def arbitrary_call(f: Callable[..., int]) -> int: return f('x') + f(y=2) # OK arbitrary_call(ord) # No static error, but fails at runtime arbitrary_call(open) # Error: does not return an int arbitrary_call(1) # Error: 'int' is not callable ``` In situations where more precise or complex types of callbacks are necessary one can use flexible [callback protocols](index.html#callback-protocols). Lambdas are also supported. The lambda argument and return value types cannot be given explicitly; they are always inferred based on context using bidirectional type inference: ``` l = map(lambda x: x + 1, [1, 2, 3]) # Infer x as int and l as list[int] ``` If you want to give the argument or return value types explicitly, use an ordinary, perhaps nested function definition. Callables can also be used against type objects, matching their `__init__` or `__new__` signature: ``` from typing import Callable class C: def __init__(self, app: str) -> None: pass CallableType = Callable[[str], C] def class_or_callable(arg: CallableType) -> None: inst = arg("my_app") reveal_type(inst) # Revealed type is "C" ``` This is useful if you want `arg` to be either a `Callable` returning an instance of `C` or the type of `C` itself. This also works with [callback protocols](index.html#callback-protocols). #### Union types[#](#union-types) Python functions often accept values of two or more different types. 
You can use [overloading](index.html#function-overloading) to represent this, but union types are often more convenient. Use the `Union[T1, ..., Tn]` type constructor to construct a union type. For example, if an argument has type `Union[int, str]`, both integers and strings are valid argument values. You can use an [`isinstance()`](https://docs.python.org/3/library/functions.html#isinstance) check to narrow down a union type to a more specific type: ``` from typing import Union def f(x: Union[int, str]) -> None: x + 1 # Error: str + int is not valid if isinstance(x, int): # Here type of x is int. x + 1 # OK else: # Here type of x is str. x + 'a' # OK f(1) # OK f('x') # OK f(1.1) # Error ``` Note Operations are valid for union types only if they are valid for *every* union item. This is why it’s often necessary to use an [`isinstance()`](https://docs.python.org/3/library/functions.html#isinstance) check to first narrow down a union type to a non-union type. This also means that it’s recommended to avoid union types as function return types, since the caller may have to use [`isinstance()`](https://docs.python.org/3/library/functions.html#isinstance) before doing anything interesting with the value. 
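Since the narrowing example above is written to trigger a mypy error on purpose, here is a clean, runnable variant of the same `isinstance()` pattern (the `stringify` name is invented for illustration):

```python
from typing import Union

def stringify(x: Union[int, str]) -> str:
    # isinstance() narrows Union[int, str] down to a single member
    if isinstance(x, int):
        return f"int: {x + 1}"   # here mypy knows x is an int
    return f"str: {x + '!'}"     # here x must be a str

print(stringify(1))      # int: 2
print(stringify("hi"))   # str: hi!
```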
#### Optional types and the None type[#](#optional-types-and-the-none-type) You can use the [`Optional`](https://docs.python.org/3/library/typing.html#typing.Optional) type modifier to define a type variant that allows `None`, such as `Optional[int]` (`Optional[X]` is the preferred shorthand for `Union[X, None]`): ``` from typing import Optional def strlen(s: str) -> Optional[int]: if not s: return None # OK return len(s) def strlen_invalid(s: str) -> int: if not s: return None # Error: None not compatible with int return len(s) ``` Most operations will not be allowed on unguarded `None` or [`Optional`](https://docs.python.org/3/library/typing.html#typing.Optional) values: ``` def my_inc(x: Optional[int]) -> int: return x + 1 # Error: Cannot add None and int ``` Instead, an explicit `None` check is required. Mypy has powerful type inference that lets you use regular Python idioms to guard against `None` values. For example, mypy recognizes `is None` checks: ``` def my_inc(x: Optional[int]) -> int: if x is None: return 0 else: # The inferred type of x is just int here. return x + 1 ``` Mypy will infer the type of `x` to be `int` in the else block due to the check against `None` in the if condition. Other supported checks for guarding against a `None` value include `if x is not None`, `if x` and `if not x`. Additionally, mypy understands `None` checks within logical expressions: ``` def concat(x: Optional[str], y: Optional[str]) -> Optional[str]: if x is not None and y is not None: # Both x and y are not None here return x + y else: return None ``` Sometimes mypy doesn’t realize that a value is never `None`. This notably happens when a class instance can exist in a partially defined state, where some attribute is initialized to `None` during object construction, but a method assumes that the attribute is no longer `None`. Mypy will complain about the possible `None` value. 
You can use `assert x is not None` to work around this in the method: ``` class Resource: path: Optional[str] = None def initialize(self, path: str) -> None: self.path = path def read(self) -> str: # We require that the object has been initialized. assert self.path is not None with open(self.path) as f: # OK return f.read() r = Resource() r.initialize('/foo/bar') r.read() ``` When initializing a variable as `None`, `None` is usually an empty place-holder value, and the actual value has a different type. This is why you need to annotate an attribute in cases like the class `Resource` above: ``` class Resource: path: Optional[str] = None ... ``` This also works for attributes defined within methods: ``` class Counter: def __init__(self) -> None: self.count: Optional[int] = None ``` This is not a problem when using variable annotations, since no initial value is needed: ``` class Container: items: list[str] # No initial value ``` Mypy generally uses the first assignment to a variable to infer the type of the variable. However, if you assign both a `None` value and a non-`None` value in the same scope, mypy can usually do the right thing without an annotation: ``` def f(i: int) -> None: n = None # Inferred type Optional[int] because of the assignment below if i > 0: n = i ... ``` Sometimes you may get the error “Cannot determine type of <something>”. In this case you should add an explicit `Optional[...]` annotation (or type comment). Note `None` is a type with only one value, `None`. `None` is also used as the return type for functions that don’t return a value, i.e. functions that implicitly return `None`. Note The Python interpreter internally uses the name `NoneType` for the type of `None`, but `None` is always used in type annotations. The latter is shorter and reads better. (`NoneType` is available as [`types.NoneType`](https://docs.python.org/3/library/types.html#types.NoneType) on Python 3.10+, but is not exposed at all on earlier versions of Python.) 
Note `Optional[...]` *does not* mean a function argument with a default value. It simply means that `None` is a valid value for the argument. This is a common confusion because `None` is a common default value for arguments. ##### X | Y syntax for Unions[#](#x-y-syntax-for-unions) [**PEP 604**](https://peps.python.org/pep-0604/) introduced an alternative way for spelling union types. In Python 3.10 and later, you can write `Union[int, str]` as `int | str`. It is possible to use this syntax in versions of Python where it isn’t supported by the runtime with some limitations (see [Annotation issues at runtime](index.html#runtime-troubles)). ``` t1: int | str # equivalent to Union[int, str] t2: int | None # equivalent to Optional[int] ``` #### Disabling strict optional checking[#](#disabling-strict-optional-checking) Mypy also has an option to treat `None` as a valid value for every type (in case you know Java, it’s useful to think of it as similar to the Java `null`). In this mode `None` is also valid for primitive types such as `int` and `float`, and [`Optional`](https://docs.python.org/3/library/typing.html#typing.Optional) types are not required. The mode is enabled through the [`--no-strict-optional`](index.html#cmdoption-mypy-no-strict-optional) command-line option. In mypy versions before 0.600 this was the default mode. You can enable this option explicitly for backward compatibility with earlier mypy versions, in case you don’t want to introduce optional types to your codebase yet. It will cause mypy to silently accept some buggy code, such as this example – it’s not recommended if you can avoid it: ``` def inc(x: int) -> int: return x + 1 x = inc(None) # No error reported by mypy if strict optional mode disabled! ``` However, making code “optional clean” can take some work! 
You can also use [the mypy configuration file](index.html#config-file) to migrate your code to strict optional checking one file at a time, since there exists the per-module flag [`strict_optional`](index.html#confval-strict_optional) to control strict optional mode. Often it’s still useful to document whether a variable can be `None`. For example, this function accepts a `None` argument, but it’s not obvious from its signature: ``` def greeting(name: str) -> str: if name: return f'Hello, {name}' else: return 'Hello, stranger' print(greeting('Python')) # Okay! print(greeting(None)) # Also okay! ``` You can still use [`Optional[t]`](https://docs.python.org/3/library/typing.html#typing.Optional) to document that `None` is a valid argument type, even if strict `None` checking is not enabled: ``` from typing import Optional def greeting(name: Optional[str]) -> str: if name: return f'Hello, {name}' else: return 'Hello, stranger' ``` Mypy treats this as semantically equivalent to the previous example if strict optional checking is disabled, since `None` is implicitly valid for any type, but it’s much more useful for a programmer who is reading the code. This also makes it easier to migrate to strict `None` checking in the future. #### Type aliases[#](#type-aliases) In certain situations, type names may end up being long and painful to type: ``` def f() -> Union[list[dict[tuple[int, str], set[int]]], tuple[str, list[str]]]: ... ``` When cases like this arise, you can define a type alias by simply assigning the type to a variable: ``` AliasType = Union[list[dict[tuple[int, str], set[int]]], tuple[str, list[str]]] # Now we can use AliasType in place of the full name: def f() -> AliasType: ... ``` Note A type alias does not create a new type. It’s just a shorthand notation for another type – it’s equivalent to the target type except for [generic aliases](index.html#generic-type-aliases). 
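A smaller, runnable sketch of an implicit type alias (the names `ConnectionOptions`, `Server`, and `describe_server` are illustrative; assumes Python 3.9+ for indexing built-in collection types):

```python
# An implicit type alias: simply assign a type to a name
ConnectionOptions = dict[str, str]
Server = tuple[str, ConnectionOptions]  # aliases can build on other aliases

def describe_server(server: Server) -> str:
    host, options = server
    return f"{host} with {len(options)} option(s)"

print(describe_server(("example.com", {"tls": "on"})))  # example.com with 1 option(s)
```

Because the alias is just a shorthand, `Server` and `tuple[str, dict[str, str]]` are fully interchangeable in annotations.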
Since Mypy 0.930 you can also use *explicit type aliases*, which were introduced in [**PEP 613**](https://peps.python.org/pep-0613/). There can be confusion about exactly when an assignment defines an implicit type alias – for example, when the alias contains forward references, invalid types, or violates some other restrictions on type alias declarations. Because the distinction between an unannotated variable and a type alias is implicit, ambiguous or incorrect type alias declarations default to defining a normal variable instead of a type alias. Explicit type aliases are unambiguous and can also improve readability by making the intent clear: ``` from typing import TypeAlias # "from typing_extensions" in Python 3.9 and earlier AliasType: TypeAlias = Union[list[dict[tuple[int, str], set[int]]], tuple[str, list[str]]] ``` #### Named tuples[#](#named-tuples) Mypy recognizes named tuples and can type check code that defines or uses them. In this example, we can detect code trying to access a missing attribute: ``` Point = namedtuple('Point', ['x', 'y']) p = Point(x=1, y=2) print(p.z) # Error: Point has no attribute 'z' ``` If you use [`namedtuple`](https://docs.python.org/3/library/collections.html#collections.namedtuple) to define your named tuple, all the items are assumed to have `Any` types. That is, mypy doesn’t know anything about item types. 
You can use [`NamedTuple`](https://docs.python.org/3/library/typing.html#typing.NamedTuple) to also define item types: ``` from typing import NamedTuple Point = NamedTuple('Point', [('x', int), ('y', int)]) p = Point(x=1, y='x') # Argument has incompatible type "str"; expected "int" ``` Python 3.6 introduced an alternative, class-based syntax for named tuples with types: ``` from typing import NamedTuple class Point(NamedTuple): x: int y: int p = Point(x=1, y='x') # Argument has incompatible type "str"; expected "int" ``` Note You can use the raw `NamedTuple` “pseudo-class” in type annotations if any `NamedTuple` object is valid. For example, it can be useful for deserialization: ``` def deserialize_named_tuple(arg: NamedTuple) -> Dict[str, Any]: return arg._asdict() Point = namedtuple('Point', ['x', 'y']) Person = NamedTuple('Person', [('name', str), ('age', int)]) deserialize_named_tuple(Point(x=1, y=2)) # ok deserialize_named_tuple(Person(name='Nikita', age=18)) # ok # Error: Argument 1 to "deserialize_named_tuple" has incompatible type # "Tuple[int, int]"; expected "NamedTuple" deserialize_named_tuple((1, 2)) ``` Note that this behavior is highly experimental, non-standard, and may not be supported by other type checkers and IDEs. #### The type of class objects[#](#the-type-of-class-objects) (Freely after [**PEP 484: The type of class objects**](https://peps.python.org/pep-0484/#the-type-of-class-objects).) Sometimes you want to talk about class objects that inherit from a given class. This can be spelled as `type[C]` (or, on Python 3.8 and lower, [`typing.Type[C]`](https://docs.python.org/3/library/typing.html#typing.Type)) where `C` is a class. In other words, when `C` is the name of a class, using `C` to annotate an argument declares that the argument is an instance of `C` (or of a subclass of `C`), but using `type[C]` as an argument annotation declares that the argument is a class object deriving from `C` (or `C` itself). 
For example, assume the following classes: ``` class User: # Defines fields like name, email class BasicUser(User): def upgrade(self): """Upgrade to Pro""" class ProUser(User): def pay(self): """Pay bill""" ``` Note that `ProUser` doesn’t inherit from `BasicUser`. Here’s a function that creates an instance of one of these classes if you pass it the right class object: ``` def new_user(user_class): user = user_class() # (Here we could write the user object to a database) return user ``` How would we annotate this function? Without the ability to parameterize `type`, the best we could do would be: ``` def new_user(user_class: type) -> User: # Same implementation as before ``` This seems reasonable, except that in the following example, mypy doesn’t see that the `buyer` variable has type `ProUser`: ``` buyer = new_user(ProUser) buyer.pay() # Rejected, not a method on User ``` However, using the `type[C]` syntax and a type variable with an upper bound (see [Type variables with upper bounds](index.html#type-variable-upper-bound)) we can do better: ``` U = TypeVar('U', bound=User) def new_user(user_class: type[U]) -> U: # Same implementation as before ``` Now mypy will infer the correct type of the result when we call `new_user()` with a specific subclass of `User`: ``` beginner = new_user(BasicUser) # Inferred type is BasicUser beginner.upgrade() # OK ``` Note The value corresponding to `type[C]` must be an actual class object that’s a subtype of `C`. Its constructor must be compatible with the constructor of `C`. If `C` is a type variable, its upper bound must be a class object. For more details about `type[]` and [`typing.Type[]`](https://docs.python.org/3/library/typing.html#typing.Type), see [**PEP 484: The type of class objects**](https://peps.python.org/pep-0484/#the-type-of-class-objects). 
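Putting the pieces above together, the factory also behaves as expected at runtime. A minimal, self-contained sketch (the `pay` return value here is invented for illustration; requires Python 3.9+ for the `type[U]` syntax):

```python
from typing import TypeVar

class User:
    """Base class; fields omitted for brevity."""

class ProUser(User):
    def pay(self) -> str:
        return "bill paid"

U = TypeVar('U', bound=User)

def new_user(user_class: type[U]) -> U:
    # user_class is a class object, so calling it constructs an instance
    return user_class()

buyer = new_user(ProUser)  # mypy infers the type ProUser here
```

Because the return type is the type variable `U`, mypy knows `buyer` is a `ProUser` rather than a plain `User`, so `buyer.pay()` type checks.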
#### Generators[#](#generators) A basic generator that only yields values can be succinctly annotated as having a return type of either [`Iterator[YieldType]`](https://docs.python.org/3/library/typing.html#typing.Iterator) or [`Iterable[YieldType]`](https://docs.python.org/3/library/typing.html#typing.Iterable). For example: ``` def squares(n: int) -> Iterator[int]: for i in range(n): yield i * i ``` A good rule of thumb is to annotate functions with the most specific return type possible. However, you should also take care to avoid leaking implementation details into a function’s public API. In keeping with these two principles, prefer [`Iterator[YieldType]`](https://docs.python.org/3/library/typing.html#typing.Iterator) over [`Iterable[YieldType]`](https://docs.python.org/3/library/typing.html#typing.Iterable) as the return-type annotation for a generator function, as it lets mypy know that users are able to call [`next()`](https://docs.python.org/3/library/functions.html#next) on the object returned by the function. Nonetheless, bear in mind that `Iterable` may sometimes be the better option, if you consider it an implementation detail that `next()` can be called on the object returned by your function. If you want your generator to accept values via the [`send()`](https://docs.python.org/3/reference/expressions.html#generator.send) method or return a value, on the other hand, you should use the [`Generator[YieldType, SendType, ReturnType]`](https://docs.python.org/3/library/typing.html#typing.Generator) generic type instead of either `Iterator` or `Iterable`. For example: ``` def echo_round() -> Generator[int, float, str]: sent = yield 0 while sent >= 0: sent = yield round(sent) return 'Done' ``` Note that unlike many other generics in the typing module, the `SendType` of [`Generator`](https://docs.python.org/3/library/typing.html#typing.Generator) behaves contravariantly, not covariantly or invariantly. 
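To make the send/return mechanics concrete, here is how a generator like the one above can be driven at runtime (a usage sketch; note that a generator must be primed with `next()` before `send()` can be used, and the `ReturnType` value is delivered via `StopIteration`):

```python
from typing import Generator

def echo_round() -> Generator[int, float, str]:
    sent = yield 0
    while sent >= 0:
        sent = yield round(sent)
    return 'Done'

gen = echo_round()
first = next(gen)        # prime the generator; it yields the initial 0
rounded = gen.send(3.7)  # 3.7 goes in as the SendType, round(3.7) comes back
try:
    gen.send(-1.0)       # a negative value ends the loop
except StopIteration as exc:
    result = exc.value   # the ReturnType value, 'Done'
```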
If you do not plan on receiving or returning values, then set the `SendType` or `ReturnType` to `None`, as appropriate. For example, we could have annotated the first example as the following: ``` def squares(n: int) -> Generator[int, None, None]: for i in range(n): yield i * i ``` This is slightly different from using `Iterator[int]` or `Iterable[int]`, since generators have [`close()`](https://docs.python.org/3/reference/expressions.html#generator.close), [`send()`](https://docs.python.org/3/reference/expressions.html#generator.send), and [`throw()`](https://docs.python.org/3/reference/expressions.html#generator.throw) methods that generic iterators and iterables don’t. If you plan to call these methods on the returned generator, use the [`Generator`](https://docs.python.org/3/library/typing.html#typing.Generator) type instead of [`Iterator`](https://docs.python.org/3/library/typing.html#typing.Iterator) or [`Iterable`](https://docs.python.org/3/library/typing.html#typing.Iterable). ### Class basics[#](#class-basics) This section will help get you started annotating your classes. Built-in classes such as `int` also follow these same rules. #### Instance and class attributes[#](#instance-and-class-attributes) The mypy type checker detects if you are trying to access a missing attribute, which is a very common programming error. For this to work correctly, instance and class attributes must be defined or initialized within the class. Mypy infers the types of attributes: ``` class A: def __init__(self, x: int) -> None: self.x = x # Aha, attribute 'x' of type 'int' a = A(1) a.x = 2 # OK! a.y = 3 # Error: "A" has no attribute "y" ``` This is a bit like each class having an implicitly defined [`__slots__`](https://docs.python.org/3/reference/datamodel.html#object.__slots__) attribute. This is only enforced during type checking and not when your program is running. 
You can declare types of variables in the class body explicitly using a type annotation: ``` class A: x: list[int] # Declare attribute 'x' of type list[int] a = A() a.x = [1] # OK ``` As in Python generally, a variable defined in the class body can be used as a class or an instance variable. (As discussed in the next section, you can override this with a [`ClassVar`](https://docs.python.org/3/library/typing.html#typing.ClassVar) annotation.) Similarly, you can give explicit types to instance variables defined in a method: ``` class A: def __init__(self) -> None: self.x: list[int] = [] def f(self) -> None: self.y: Any = 0 ``` You can only define an instance variable within a method if you assign to it explicitly using `self`: ``` class A: def __init__(self) -> None: self.y = 1 # Define 'y' a = self a.x = 1 # Error: 'x' not defined ``` #### Annotating __init__ methods[#](#annotating-init-methods) The [`__init__`](https://docs.python.org/3/reference/datamodel.html#object.__init__) method is somewhat special – it doesn’t return a value. This is best expressed as `-> None`. However, since many feel this is redundant, it is allowed to omit the return type declaration on [`__init__`](https://docs.python.org/3/reference/datamodel.html#object.__init__) methods **if at least one argument is annotated**. 
For example, in the following classes [`__init__`](https://docs.python.org/3/reference/datamodel.html#object.__init__) is considered fully annotated: ``` class C1: def __init__(self) -> None: self.var = 42 class C2: def __init__(self, arg: int): self.var = arg ``` However, if [`__init__`](https://docs.python.org/3/reference/datamodel.html#object.__init__) has no annotated arguments and no return type annotation, it is considered an untyped method: ``` class C3: def __init__(self): # This body is not type checked self.var = 42 + 'abc' ``` #### Class attribute annotations[#](#class-attribute-annotations) You can use a [`ClassVar[t]`](https://docs.python.org/3/library/typing.html#typing.ClassVar) annotation to explicitly declare that a particular attribute should not be set on instances: ``` from typing import ClassVar class A: x: ClassVar[int] = 0 # Class variable only A.x += 1 # OK a = A() a.x = 1 # Error: Cannot assign to class variable "x" via instance print(a.x) # OK -- can be read through an instance ``` It’s not necessary to annotate all class variables using [`ClassVar`](https://docs.python.org/3/library/typing.html#typing.ClassVar). An attribute without the [`ClassVar`](https://docs.python.org/3/library/typing.html#typing.ClassVar) annotation can still be used as a class variable. However, mypy won’t prevent it from being used as an instance variable, as discussed previously: ``` class A: x = 0 # Can be used as a class or instance variable A.x += 1 # OK a = A() a.x = 1 # Also OK ``` Note that [`ClassVar`](https://docs.python.org/3/library/typing.html#typing.ClassVar) is not a class, and you can’t use it with [`isinstance()`](https://docs.python.org/3/library/functions.html#isinstance) or [`issubclass()`](https://docs.python.org/3/library/functions.html#issubclass). It does not change Python runtime behavior – it’s only for type checkers such as mypy (and also helpful for human readers). 
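Since `ClassVar` has no runtime effect, the annotated attribute behaves exactly like any other class variable when the program runs. A small sketch (the `Counter` class is invented for illustration):

```python
from typing import ClassVar

class Counter:
    created: ClassVar[int] = 0  # class variable; mypy rejects assignment via instances

    def __init__(self) -> None:
        Counter.created += 1  # assign through the class, as mypy expects

a = Counter()
b = Counter()
total = b.created  # reading through an instance is fine
```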
You can also omit the square brackets and the variable type in a [`ClassVar`](https://docs.python.org/3/library/typing.html#typing.ClassVar) annotation, but this might not do what you’d expect:

```
class A:
    y: ClassVar = 0  # Type implicitly Any!
```

In this case the type of the attribute will be implicitly `Any`. This behavior will change in the future, since it’s surprising.

An explicit [`ClassVar`](https://docs.python.org/3/library/typing.html#typing.ClassVar) may be particularly handy to distinguish between class and instance variables with callable types. For example:

```
from typing import Callable, ClassVar

class A:
    foo: Callable[[int], None]
    bar: ClassVar[Callable[[A, int], None]]
    bad: Callable[[A], None]

A().foo(42)  # OK
A().bar(42)  # OK
A().bad()  # Error: Too few arguments
```

Note

A [`ClassVar`](https://docs.python.org/3/library/typing.html#typing.ClassVar) type parameter cannot include type variables: `ClassVar[T]` and `ClassVar[list[T]]` are both invalid if `T` is a type variable (see [Defining generic classes](index.html#generic-classes) for more about type variables).

#### Overriding statically typed methods[#](#overriding-statically-typed-methods)

When overriding a statically typed method, mypy checks that the override has a compatible signature:

```
class Base:
    def f(self, x: int) -> None:
        ...

class Derived1(Base):
    def f(self, x: str) -> None:  # Error: type of 'x' incompatible
        ...

class Derived2(Base):
    def f(self, x: int, y: int) -> None:  # Error: too many arguments
        ...

class Derived3(Base):
    def f(self, x: int) -> None:  # OK
        ...

class Derived4(Base):
    def f(self, x: float) -> None:  # OK: mypy treats int as a subtype of float
        ...

class Derived5(Base):
    def f(self, x: int, y: int = 0) -> None:  # OK: accepts more than the base
        ...
```

Note

You can also vary return types **covariantly** in overriding. For example, you could override the return type `Iterable[int]` with a subtype such as `list[int]`.
Similarly, you can vary argument types **contravariantly** – subclasses can have more general argument types.

In order to ensure that your code remains correct when renaming methods, it can be helpful to explicitly mark a method as overriding a base method. This can be done with the `@override` decorator. `@override` can be imported from `typing` starting with Python 3.12 or from `typing_extensions` for use with older Python versions. If the base method is then renamed while the overriding method is not, mypy will show an error:

```
from typing import override

class Base:
    def f(self, x: int) -> None:
        ...
    def g_renamed(self, y: str) -> None:
        ...

class Derived1(Base):
    @override
    def f(self, x: int) -> None:  # OK
        ...

    @override
    def g(self, y: str) -> None:  # Error: no corresponding base method found
        ...
```

Note

Use [--enable-error-code explicit-override](index.html#code-explicit-override) to require that method overrides use the `@override` decorator; mypy will then emit an error if it is missing.

You can also override a statically typed method with a dynamically typed one. This allows dynamically typed code to override methods defined in library classes without worrying about their type signatures. As always, relying on dynamically typed code can be unsafe. There is no runtime enforcement that the method override returns a value that is compatible with the original return type, since annotations have no effect at runtime:

```
class Base:
    def inc(self, x: int) -> int:
        return x + 1

class Derived(Base):
    def inc(self, x):  # Override, dynamically typed
        return 'hello'  # Incompatible with 'Base', but no mypy error
```

#### Abstract base classes and multiple inheritance[#](#abstract-base-classes-and-multiple-inheritance)

Mypy supports Python [abstract base classes](https://docs.python.org/3/library/abc.html) (ABCs). Abstract classes have at least one abstract method or property that must be implemented by any *concrete* (non-abstract) subclass.
You can define abstract base classes using the [`abc.ABCMeta`](https://docs.python.org/3/library/abc.html#abc.ABCMeta) metaclass and the [`@abc.abstractmethod`](https://docs.python.org/3/library/abc.html#abc.abstractmethod) function decorator. Example: ``` from abc import ABCMeta, abstractmethod class Animal(metaclass=ABCMeta): @abstractmethod def eat(self, food: str) -> None: pass @property @abstractmethod def can_walk(self) -> bool: pass class Cat(Animal): def eat(self, food: str) -> None: ... # Body omitted @property def can_walk(self) -> bool: return True x = Animal() # Error: 'Animal' is abstract due to 'eat' and 'can_walk' y = Cat() # OK ``` Note that mypy performs checking for unimplemented abstract methods even if you omit the [`ABCMeta`](https://docs.python.org/3/library/abc.html#abc.ABCMeta) metaclass. This can be useful if the metaclass would cause runtime metaclass conflicts. Since you can’t create instances of ABCs, they are most commonly used in type annotations. For example, this method accepts arbitrary iterables containing arbitrary animals (instances of concrete `Animal` subclasses): ``` def feed_all(animals: Iterable[Animal], food: str) -> None: for animal in animals: animal.eat(food) ``` There is one important peculiarity about how ABCs work in Python – whether a particular class is abstract or not is somewhat implicit. In the example below, `Derived` is treated as an abstract base class since `Derived` inherits an abstract `f` method from `Base` and doesn’t explicitly implement it. The definition of `Derived` generates no errors from mypy, since it’s a valid ABC: ``` from abc import ABCMeta, abstractmethod class Base(metaclass=ABCMeta): @abstractmethod def f(self, x: int) -> None: pass class Derived(Base): # No error -- Derived is implicitly abstract def g(self) -> None: ... 
``` Attempting to create an instance of `Derived` will be rejected, however: ``` d = Derived() # Error: 'Derived' is abstract ``` Note It’s a common error to forget to implement an abstract method. As shown above, the class definition will not generate an error in this case, but any attempt to construct an instance will be flagged as an error. Mypy allows you to omit the body for an abstract method, but if you do so, it is unsafe to call such method via `super()`. For example: ``` from abc import abstractmethod class Base: @abstractmethod def foo(self) -> int: pass @abstractmethod def bar(self) -> int: return 0 class Sub(Base): def foo(self) -> int: return super().foo() + 1 # error: Call to abstract method "foo" of "Base" # with trivial body via super() is unsafe @abstractmethod def bar(self) -> int: return super().bar() + 1 # This is OK however. ``` A class can inherit any number of classes, both abstract and concrete. As with normal overrides, a dynamically typed method can override or implement a statically typed method defined in any base class, including an abstract method defined in an abstract base class. You can implement an abstract property using either a normal property or an instance variable. #### Slots[#](#slots) When a class has explicitly defined [__slots__](https://docs.python.org/3/reference/datamodel.html#slots), mypy will check that all attributes assigned to are members of `__slots__`: ``` class Album: __slots__ = ('name', 'year') def __init__(self, name: str, year: int) -> None: self.name = name self.year = year # Error: Trying to assign name "released" that is not in "__slots__" of type "Album" self.released = True my_album = Album('Songs about Python', 2021) ``` Mypy will only check attribute assignments against `__slots__` when the following conditions hold: 1. All base classes (except builtin ones) must have explicit `__slots__` defined (this mirrors Python semantics). 2. `__slots__` does not include `__dict__`. 
If `__slots__` includes `__dict__`, arbitrary attributes can be set, similar to when `__slots__` is not defined (this mirrors Python semantics). 3. All values in `__slots__` must be string literals. ### Annotation issues at runtime[#](#annotation-issues-at-runtime) Idiomatic use of type annotations can sometimes run up against what a given version of Python considers legal code. This section describes these scenarios and explains how to get your code running again. Generally speaking, we have three tools at our disposal: * Use of `from __future__ import annotations` ([**PEP 563**](https://peps.python.org/pep-0563/)) (this behaviour may eventually be made the default in a future Python version) * Use of string literal types or type comments * Use of `typing.TYPE_CHECKING` We provide a description of these before moving onto discussion of specific problems you may encounter. #### String literal types and type comments[#](#string-literal-types-and-type-comments) Mypy allows you to add type annotations using `# type:` type comments. For example: ``` a = 1 # type: int def f(x): # type: (int) -> int return x + 1 # Alternative type comment syntax for functions with many arguments def send_email( address, # type: Union[str, List[str]] sender, # type: str cc, # type: Optional[List[str]] subject='', body=None # type: List[str] ): # type: (...) -> bool ``` Type comments can’t cause runtime errors because comments are not evaluated by Python. In a similar way, using string literal types sidesteps the problem of annotations that would cause runtime errors. Any type can be entered as a string literal, and you can combine string-literal types with non-string-literal types freely: ``` def f(a: list['A']) -> None: ... # OK, prevents NameError since A is defined later def g(n: 'int') -> None: ... # Also OK, though not useful class A: pass ``` String literal types are never needed in `# type:` comments and [stub files](index.html#stub-files). 
String literal types must be defined (or imported) later *in the same module*. They cannot be used to leave cross-module references unresolved. (For dealing with import cycles, see [Import cycles](#import-cycles).)

#### Future annotations import (PEP 563)[#](#future-annotations-import-pep-563)

Many of the issues described here are caused by Python trying to evaluate annotations. A future Python version may by default no longer attempt to evaluate function and variable annotations. This behaviour is made available in Python 3.7 and later through the use of `from __future__ import annotations`. This can be thought of as automatic string literal-ification of all function and variable annotations. Note that function and variable annotations are still required to be valid Python syntax. For more details, see [**PEP 563**](https://peps.python.org/pep-0563/).

Note

Even with the `__future__` import, there are some scenarios that could still require string literals or result in errors, typically involving use of forward references or generics in:

* [type aliases](index.html#type-aliases);
* [type narrowing](index.html#type-narrowing);
* type definitions (see [`TypeVar`](https://docs.python.org/3/library/typing.html#typing.TypeVar), [`NewType`](https://docs.python.org/3/library/typing.html#typing.NewType), [`NamedTuple`](https://docs.python.org/3/library/typing.html#typing.NamedTuple));
* base classes.

```
# base class example
from __future__ import annotations

class A(tuple['B', 'C']): ...  # String literal types needed here

class B: ...
class C: ...
```

Warning

Some libraries may have use cases for dynamic evaluation of annotations, for instance, through use of `typing.get_type_hints` or `eval`. If your annotation would raise an error when evaluated (say by using [**PEP 604**](https://peps.python.org/pep-0604/) syntax with Python 3.9), you may need to be careful when using such libraries.
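As a concrete illustration of that warning, a string (or future-import) annotation is only resolved when something evaluates it, for instance via `typing.get_type_hints`. A sketch, assuming Python 3.9+ so that `list[A]` is subscriptable:

```python
from typing import get_type_hints

def f(a: 'list[A]') -> None: ...  # forward reference as a string literal

class A: ...  # defined later in the same module

# The string is evaluated lazily, in the module namespace, once A exists:
hints = get_type_hints(f)
```

If `A` were never defined (or lived in another module that isn't imported), the `get_type_hints` call would raise `NameError`, which is exactly the situation the warning describes.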
#### typing.TYPE_CHECKING[#](#typing-type-checking) The [`typing`](https://docs.python.org/3/library/typing.html#module-typing) module defines a [`TYPE_CHECKING`](https://docs.python.org/3/library/typing.html#typing.TYPE_CHECKING) constant that is `False` at runtime but treated as `True` while type checking. Since code inside `if TYPE_CHECKING:` is not executed at runtime, it provides a convenient way to tell mypy something without the code being evaluated at runtime. This is most useful for resolving [import cycles](#import-cycles). #### Class name forward references[#](#class-name-forward-references) Python does not allow references to a class object before the class is defined (aka forward reference). Thus this code does not work as expected: ``` def f(x: A) -> None: ... # NameError: name "A" is not defined class A: ... ``` Starting from Python 3.7, you can add `from __future__ import annotations` to resolve this, as discussed earlier: ``` from __future__ import annotations def f(x: A) -> None: ... # OK class A: ... ``` For Python 3.6 and below, you can enter the type as a string literal or type comment: ``` def f(x: 'A') -> None: ... # OK # Also OK def g(x): # type: (A) -> None ... class A: ... ``` Of course, instead of using future annotations import or string literal types, you could move the function definition after the class definition. This is not always desirable or even possible, though. #### Import cycles[#](#import-cycles) An import cycle occurs where module A imports module B and module B imports module A (perhaps indirectly, e.g. `A -> B -> C -> A`). Sometimes in order to add type annotations you have to add extra imports to a module and those imports cause cycles that didn’t exist before. 
This can lead to errors at runtime like: ``` ImportError: cannot import name 'b' from partially initialized module 'A' (most likely due to a circular import) ``` If those cycles do become a problem when running your program, there’s a trick: if the import is only needed for type annotations and you’re using a) the [future annotations import](#future-annotations), or b) string literals or type comments for the relevant annotations, you can write the imports inside `if TYPE_CHECKING:` so that they are not executed at runtime. Example: File `foo.py`: ``` from typing import TYPE_CHECKING if TYPE_CHECKING: import bar def listify(arg: 'bar.BarClass') -> 'list[bar.BarClass]': return [arg] ``` File `bar.py`: ``` from foo import listify class BarClass: def listifyme(self) -> 'list[BarClass]': return listify(self) ``` #### Using classes that are generic in stubs but not at runtime[#](#using-classes-that-are-generic-in-stubs-but-not-at-runtime) Some classes are declared as [generic](index.html#generic-classes) in stubs, but not at runtime. In Python 3.8 and earlier, there are several examples within the standard library, for instance, [`os.PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) and [`queue.Queue`](https://docs.python.org/3/library/queue.html#queue.Queue). Subscripting such a class will result in a runtime error: ``` from queue import Queue class Tasks(Queue[str]): # TypeError: 'type' object is not subscriptable ... results: Queue[int] = Queue() # TypeError: 'type' object is not subscriptable ``` To avoid errors from use of these generics in annotations, just use the [future annotations import](#future-annotations) (or string literals or type comments for Python 3.6 and below). 
To avoid errors when inheriting from these classes, things are a little more complicated and you need to use [typing.TYPE_CHECKING](#typing-type-checking):

```
from typing import TYPE_CHECKING
from queue import Queue

if TYPE_CHECKING:
    BaseQueue = Queue[str]  # this is only processed by mypy
else:
    BaseQueue = Queue  # this is not seen by mypy but will be executed at runtime

class Tasks(BaseQueue):  # OK
    ...

task_queue: Tasks
reveal_type(task_queue.get())  # Reveals str
```

If your subclass is also generic, you can use the following:

```
from typing import TYPE_CHECKING, TypeVar, Generic
from queue import Queue

_T = TypeVar("_T")
if TYPE_CHECKING:
    class _MyQueueBase(Queue[_T]):
        pass
else:
    class _MyQueueBase(Generic[_T], Queue):
        pass

class MyQueue(_MyQueueBase[_T]):
    pass

task_queue: MyQueue[str]
reveal_type(task_queue.get())  # Reveals str
```

In Python 3.9 and later, we can just inherit directly from `Queue[str]` or `Queue[T]` since [`queue.Queue`](https://docs.python.org/3/library/queue.html#queue.Queue) implements [`__class_getitem__()`](https://docs.python.org/3/reference/datamodel.html#object.__class_getitem__), so the class object can be subscripted at runtime without issue.

#### Using types defined in stubs but not at runtime[#](#using-types-defined-in-stubs-but-not-at-runtime)

Sometimes stubs that you’re using may define types you wish to re-use that do not exist at runtime. Importing these types naively will cause your code to fail at runtime with `ImportError` or `ModuleNotFoundError`. Similar to previous sections, these can be dealt with by using [typing.TYPE_CHECKING](#typing-type-checking):

```
from __future__ import annotations
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from _typeshed import SupportsRichComparison

def f(x: SupportsRichComparison) -> None: ...
```

The `from __future__ import annotations` is required to avoid a `NameError` when using the imported symbol.
For more information and caveats, see the section on [future annotations](#future-annotations). #### Using generic builtins[#](#using-generic-builtins) Starting with Python 3.9 ([**PEP 585**](https://peps.python.org/pep-0585/)), the type objects of many collections in the standard library support subscription at runtime. This means that you no longer have to import the equivalents from [`typing`](https://docs.python.org/3/library/typing.html#module-typing); you can simply use the built-in collections or those from [`collections.abc`](https://docs.python.org/3/library/collections.abc.html#module-collections.abc): ``` from collections.abc import Sequence x: list[str] y: dict[int, str] z: Sequence[str] = x ``` There is limited support for using this syntax in Python 3.7 and later as well: if you use `from __future__ import annotations`, mypy will understand this syntax in annotations. However, since this will not be supported by the Python interpreter at runtime, make sure you’re aware of the caveats mentioned in the notes at [future annotations import](#future-annotations). #### Using X | Y syntax for Unions[#](#using-x-y-syntax-for-unions) Starting with Python 3.10 ([**PEP 604**](https://peps.python.org/pep-0604/)), you can spell union types as `x: int | str`, instead of `x: typing.Union[int, str]`. There is limited support for using this syntax in Python 3.7 and later as well: if you use `from __future__ import annotations`, mypy will understand this syntax in annotations, string literal types, type comments and stub files. However, since this will not be supported by the Python interpreter at runtime (if evaluated, `int | str` will raise `TypeError: unsupported operand type(s) for |: 'type' and 'type'`), make sure you’re aware of the caveats mentioned in the notes at [future annotations import](#future-annotations). 
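For example, with the future import the `|` annotation below is never evaluated at runtime, so this sketch runs even on Python 3.7–3.9 (the `describe` helper is invented for illustration):

```python
from __future__ import annotations

def describe(x: int | str) -> str:
    # The annotation is stored as the string "int | str", so the
    # union expression is never executed at runtime
    return f"{type(x).__name__}: {x}"

ok = describe(42)
```

Without the future import, the same function definition would raise `TypeError` at import time on Python 3.9 and earlier, because `int | str` would be evaluated immediately.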
#### Using new additions to the typing module[#](#using-new-additions-to-the-typing-module)

You may find yourself wanting to use features added to the [`typing`](https://docs.python.org/3/library/typing.html#module-typing) module on Python versions earlier than the one that added them, for example, using any of `Literal`, `Protocol`, `TypedDict` with Python 3.6. The easiest way to do this is to install and use the `typing_extensions` package from PyPI for the relevant imports, for example:

```
from typing_extensions import Literal

x: Literal["open", "close"]
```

If you don’t want to rely on `typing_extensions` being installed on newer Pythons, you could alternatively use:

```
import sys

if sys.version_info >= (3, 8):
    from typing import Literal
else:
    from typing_extensions import Literal

x: Literal["open", "close"]
```

This plays nicely with the following [**PEP 508**](https://peps.python.org/pep-0508/) dependency specification: `typing_extensions; python_version<"3.8"`

### Protocols and structural subtyping[#](#protocols-and-structural-subtyping)

The Python type system supports two ways of deciding whether two objects are compatible as types: nominal subtyping and structural subtyping.

*Nominal* subtyping is strictly based on the class hierarchy. If class `Dog` inherits class `Animal`, it’s a subtype of `Animal`. Instances of `Dog` can be used when `Animal` instances are expected. This form of subtyping is what Python’s type system predominantly uses: it’s easy to understand and produces clear and concise error messages, and matches how the native [`isinstance`](https://docs.python.org/3/library/functions.html#isinstance) check works – based on class hierarchy.

*Structural* subtyping is based on the operations that can be performed with an object. Class `Dog` is a structural subtype of class `Animal` if the former has all attributes and methods of the latter, and with compatible types.
Structural subtyping can be seen as a static equivalent of duck typing, which is well known to Python programmers. See [**PEP 544**](https://peps.python.org/pep-0544/) for the detailed specification of protocols and structural subtyping in Python. #### Predefined protocols[#](#predefined-protocols) The [`typing`](https://docs.python.org/3/library/typing.html#module-typing) module defines various protocol classes that correspond to common Python protocols, such as [`Iterable[T]`](https://docs.python.org/3/library/typing.html#typing.Iterable). If a class defines a suitable [`__iter__`](https://docs.python.org/3/reference/datamodel.html#object.__iter__) method, mypy understands that it implements the iterable protocol and is compatible with [`Iterable[T]`](https://docs.python.org/3/library/typing.html#typing.Iterable). For example, `IntList` below is iterable, over `int` values: ``` from typing import Iterator, Iterable, Optional class IntList: def __init__(self, value: int, next: Optional['IntList']) -> None: self.value = value self.next = next def __iter__(self) -> Iterator[int]: current = self while current: yield current.value current = current.next def print_numbered(items: Iterable[int]) -> None: for n, x in enumerate(items): print(n + 1, x) x = IntList(3, IntList(5, None)) print_numbered(x) # OK print_numbered([4, 5]) # Also OK ``` [Predefined protocol reference](#predefined-protocols-reference) lists all protocols defined in [`typing`](https://docs.python.org/3/library/typing.html#module-typing) and the signatures of the corresponding methods you need to define to implement each protocol. #### Simple user-defined protocols[#](#simple-user-defined-protocols) You can define your own protocol class by inheriting the special `Protocol` class: ``` from typing import Iterable from typing_extensions import Protocol class SupportsClose(Protocol): # Empty method body (explicit '...') def close(self) -> None: ... class Resource: # No SupportsClose base class! 
def close(self) -> None: self.resource.release() # ... other methods ... def close_all(items: Iterable[SupportsClose]) -> None: for item in items: item.close() close_all([Resource(), open('some/file')]) # OK ``` `Resource` is a subtype of the `SupportsClose` protocol since it defines a compatible `close` method. Regular file objects returned by [`open()`](https://docs.python.org/3/library/functions.html#open) are similarly compatible with the protocol, as they support `close()`. #### Defining subprotocols and subclassing protocols[#](#defining-subprotocols-and-subclassing-protocols) You can also define subprotocols. Existing protocols can be extended and merged using multiple inheritance. Example: ``` # ... continuing from the previous example class SupportsRead(Protocol): def read(self, amount: int) -> bytes: ... class TaggedReadableResource(SupportsClose, SupportsRead, Protocol): label: str class AdvancedResource(Resource): def __init__(self, label: str) -> None: self.label = label def read(self, amount: int) -> bytes: # some implementation ... resource: TaggedReadableResource resource = AdvancedResource('handle with care') # OK ``` Note that inheriting from an existing protocol does not automatically turn the subclass into a protocol – it just creates a regular (non-protocol) class or ABC that implements the given protocol (or protocols). The `Protocol` base class must always be explicitly present if you are defining a protocol: ``` class NotAProtocol(SupportsClose): # This is NOT a protocol new_attr: int class Concrete: new_attr: int = 0 def close(self) -> None: ... # Error: nominal subtyping used by default x: NotAProtocol = Concrete() # Error! ``` You can also include default implementations of methods in protocols. If you explicitly subclass these protocols you can inherit these default implementations. 
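For instance, a protocol method with a real body acts as a default implementation for explicit subclasses (a sketch; `SupportsGreet` and `Person` are invented for illustration, and `Protocol` is imported from `typing` assuming Python 3.8+):

```python
from typing import Protocol

class SupportsGreet(Protocol):
    name: str

    def greet(self) -> str:
        # A real body (not just '...'), so explicit subclasses inherit it
        return f"hello, {self.name}"

class Person(SupportsGreet):  # explicit subclassing: a regular class, not a protocol
    def __init__(self, name: str) -> None:
        self.name = name

greeting = Person("sam").greet()  # uses the inherited default implementation
```

A class that merely *matches* `SupportsGreet` structurally would not get the default `greet` body; only explicit subclasses inherit it.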
Explicitly including a protocol as a base class is also a way of documenting that your class implements a particular protocol, and it forces mypy to verify that your class implementation is actually compatible with the protocol. In particular, omitting a value for an attribute or a method body will make it implicitly abstract: ``` class SomeProto(Protocol): attr: int # Note, no right hand side def method(self) -> str: ... # Literally just ... here class ExplicitSubclass(SomeProto): pass ExplicitSubclass() # error: Cannot instantiate abstract class 'ExplicitSubclass' # with abstract attributes 'attr' and 'method' ``` Similarly, explicitly assigning to a protocol instance can be a way to ask the type checker to verify that your class implements a protocol: ``` _proto: SomeProto = cast(ExplicitSubclass, None) ``` #### Invariance of protocol attributes[#](#invariance-of-protocol-attributes) A common issue with protocols is that protocol attributes are invariant. For example: ``` class Box(Protocol): content: object class IntBox: content: int def takes_box(box: Box) -> None: ... takes_box(IntBox()) # error: Argument 1 to "takes_box" has incompatible type "IntBox"; expected "Box" # note: Following member(s) of "IntBox" have conflicts: # note: content: expected "object", got "int" ``` This is because `Box` defines `content` as a mutable attribute. Here’s why this is problematic: ``` def takes_box_evil(box: Box) -> None: box.content = "asdf" # This is bad, since box.content is supposed to be an object my_int_box = IntBox() takes_box_evil(my_int_box) my_int_box.content + 1 # Oops, TypeError! ``` This can be fixed by declaring `content` to be read-only in the `Box` protocol using `@property`: ``` class Box(Protocol): @property def content(self) -> object: ... class IntBox: content: int def takes_box(box: Box) -> None: ... takes_box(IntBox(42)) # OK ``` #### Recursive protocols[#](#recursive-protocols) Protocols can be recursive (self-referential) and mutually recursive. 
This is useful for declaring abstract recursive collections such as trees and linked lists: ``` from typing import TypeVar, Optional from typing_extensions import Protocol class TreeLike(Protocol): value: int @property def left(self) -> Optional['TreeLike']: ... @property def right(self) -> Optional['TreeLike']: ... class SimpleTree: def __init__(self, value: int) -> None: self.value = value self.left: Optional['SimpleTree'] = None self.right: Optional['SimpleTree'] = None root: TreeLike = SimpleTree(0) # OK ``` #### Using isinstance() with protocols[#](#using-isinstance-with-protocols) You can use a protocol class with [`isinstance()`](https://docs.python.org/3/library/functions.html#isinstance) if you decorate it with the `@runtime_checkable` class decorator. The decorator adds rudimentary support for runtime structural checks: ``` from typing_extensions import Protocol, runtime_checkable @runtime_checkable class Portable(Protocol): handles: int class Mug: def __init__(self) -> None: self.handles = 1 def use(handles: int) -> None: ... mug = Mug() if isinstance(mug, Portable): # Works at runtime! use(mug.handles) ``` [`isinstance()`](https://docs.python.org/3/library/functions.html#isinstance) also works with the [predefined protocols](#predefined-protocols) in [`typing`](https://docs.python.org/3/library/typing.html#module-typing) such as [`Iterable`](https://docs.python.org/3/library/typing.html#typing.Iterable). Warning [`isinstance()`](https://docs.python.org/3/library/functions.html#isinstance) with protocols is not completely safe at runtime. For example, signatures of methods are not checked. The runtime implementation only checks that all protocol members exist, not that they have the correct type. [`issubclass()`](https://docs.python.org/3/library/functions.html#issubclass) with protocols will only check for the existence of methods. 
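The limitation described in the warning can be observed at runtime. In this hypothetical sketch, `isinstance()` succeeds even though the member has the wrong type, because only the member's existence is checked:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Portable(Protocol):
    handles: int

class Suitcase:
    def __init__(self) -> None:
        # Wrong type: a str, not an int. The runtime check only
        # verifies that the attribute exists, so this still "passes".
        self.handles = "two"

print(isinstance(Suitcase(), Portable))  # True, despite the type mismatch
```

Mypy, by contrast, would reject `Suitcase` as an implementation of `Portable` at type-checking time.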
Note [`isinstance()`](https://docs.python.org/3/library/functions.html#isinstance) with protocols can also be surprisingly slow. In many cases, you’re better served by using [`hasattr()`](https://docs.python.org/3/library/functions.html#hasattr) to check for the presence of attributes. #### Callback protocols[#](#callback-protocols) Protocols can be used to define flexible callback types that are hard (or even impossible) to express using the [`Callable[...]`](https://docs.python.org/3/library/typing.html#typing.Callable) syntax, such as variadic, overloaded, and complex generic callbacks. They are defined with a special [`__call__`](https://docs.python.org/3/reference/datamodel.html#object.__call__) member: ``` from typing import Optional, Iterable from typing_extensions import Protocol class Combiner(Protocol): def __call__(self, *vals: bytes, maxlen: Optional[int] = None) -> list[bytes]: ... def batch_proc(data: Iterable[bytes], cb_results: Combiner) -> bytes: for item in data: ... def good_cb(*vals: bytes, maxlen: Optional[int] = None) -> list[bytes]: ... def bad_cb(*vals: bytes, maxitems: Optional[int]) -> list[bytes]: ... batch_proc([], good_cb) # OK batch_proc([], bad_cb) # Error! Argument 2 has incompatible type because of # different name and kind in the callback ``` Callback protocols and [`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable) types can be used mostly interchangeably. Argument names in [`__call__`](https://docs.python.org/3/reference/datamodel.html#object.__call__) methods must be identical, unless a double underscore prefix is used. For example: ``` from typing import Callable, TypeVar from typing_extensions import Protocol T = TypeVar('T') class Copy(Protocol): def __call__(self, __origin: T) -> T: ... 
copy_a: Callable[[T], T] copy_b: Copy copy_a = copy_b # OK copy_b = copy_a # Also OK ``` #### Predefined protocol reference[#](#predefined-protocol-reference) ##### Iteration protocols[#](#iteration-protocols) The iteration protocols are useful in many contexts. For example, they allow iteration of objects in for loops. ###### Iterable[T][#](#iterable-t) The [example above](#predefined-protocols) has a simple implementation of an [`__iter__`](https://docs.python.org/3/reference/datamodel.html#object.__iter__) method. ``` def __iter__(self) -> Iterator[T] ``` See also [`Iterable`](https://docs.python.org/3/library/typing.html#typing.Iterable). ###### Iterator[T][#](#iterator-t) ``` def __next__(self) -> T def __iter__(self) -> Iterator[T] ``` See also [`Iterator`](https://docs.python.org/3/library/typing.html#typing.Iterator). ##### Collection protocols[#](#collection-protocols) Many of these are implemented by built-in container types such as [`list`](https://docs.python.org/3/library/stdtypes.html#list) and [`dict`](https://docs.python.org/3/library/stdtypes.html#dict), and these are also useful for user-defined collection objects. ###### Sized[#](#sized) This is a type for objects that support [`len(x)`](https://docs.python.org/3/library/functions.html#len). ``` def __len__(self) -> int ``` See also [`Sized`](https://docs.python.org/3/library/typing.html#typing.Sized). ###### Container[T][#](#container-t) This is a type for objects that support the `in` operator. ``` def __contains__(self, x: object) -> bool ``` See also [`Container`](https://docs.python.org/3/library/typing.html#typing.Container). ###### Collection[T][#](#collection-t) ``` def __len__(self) -> int def __iter__(self) -> Iterator[T] def __contains__(self, x: object) -> bool ``` See also [`Collection`](https://docs.python.org/3/library/typing.html#typing.Collection). 
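For illustration, a user-defined class (the `Bag` class below is hypothetical) that defines these three methods is structurally compatible with `Sized`, `Container[int]`, and `Collection[int]`:

```python
from typing import Collection, Iterator

class Bag:
    """A tiny container satisfying the Sized, Container and Collection protocols."""

    def __init__(self, *items: int) -> None:
        self._items = list(items)

    def __len__(self) -> int:          # Sized
        return len(self._items)

    def __iter__(self) -> Iterator[int]:  # Iterable part of Collection
        return iter(self._items)

    def __contains__(self, x: object) -> bool:  # Container
        return x in self._items

def total(c: Collection[int]) -> int:
    # Accepts any object implementing the Collection protocol.
    return sum(c)

bag = Bag(1, 2, 3)
print(len(bag), 2 in bag, total(bag))  # 3 True 6
```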
##### One-off protocols[#](#one-off-protocols) These protocols are typically only useful with a single standard library function or class. ###### Reversible[T][#](#reversible-t) This is a type for objects that support [`reversed(x)`](https://docs.python.org/3/library/functions.html#reversed). ``` def __reversed__(self) -> Iterator[T] ``` See also [`Reversible`](https://docs.python.org/3/library/typing.html#typing.Reversible). ###### SupportsAbs[T][#](#supportsabs-t) This is a type for objects that support [`abs(x)`](https://docs.python.org/3/library/functions.html#abs). `T` is the type of value returned by [`abs(x)`](https://docs.python.org/3/library/functions.html#abs). ``` def __abs__(self) -> T ``` See also [`SupportsAbs`](https://docs.python.org/3/library/typing.html#typing.SupportsAbs). ###### SupportsBytes[#](#supportsbytes) This is a type for objects that support [`bytes(x)`](https://docs.python.org/3/library/stdtypes.html#bytes). ``` def __bytes__(self) -> bytes ``` See also [`SupportsBytes`](https://docs.python.org/3/library/typing.html#typing.SupportsBytes). ###### SupportsComplex[#](#supportscomplex) This is a type for objects that support [`complex(x)`](https://docs.python.org/3/library/functions.html#complex). Note that no arithmetic operations are supported. ``` def __complex__(self) -> complex ``` See also [`SupportsComplex`](https://docs.python.org/3/library/typing.html#typing.SupportsComplex). ###### SupportsFloat[#](#supportsfloat) This is a type for objects that support [`float(x)`](https://docs.python.org/3/library/functions.html#float). Note that no arithmetic operations are supported. ``` def __float__(self) -> float ``` See also [`SupportsFloat`](https://docs.python.org/3/library/typing.html#typing.SupportsFloat). ###### SupportsInt[#](#supportsint) This is a type for objects that support [`int(x)`](https://docs.python.org/3/library/functions.html#int). Note that no arithmetic operations are supported. 
``` def __int__(self) -> int ``` See also [`SupportsInt`](https://docs.python.org/3/library/typing.html#typing.SupportsInt). ###### SupportsRound[T][#](#supportsround-t) This is a type for objects that support [`round(x)`](https://docs.python.org/3/library/functions.html#round). ``` def __round__(self) -> T ``` See also [`SupportsRound`](https://docs.python.org/3/library/typing.html#typing.SupportsRound). ##### Async protocols[#](#async-protocols) These protocols can be useful in async code. See [Typing async/await](index.html#async-and-await) for more information. ###### Awaitable[T][#](#awaitable-t) ``` def __await__(self) -> Generator[Any, None, T] ``` See also [`Awaitable`](https://docs.python.org/3/library/typing.html#typing.Awaitable). ###### AsyncIterable[T][#](#asynciterable-t) ``` def __aiter__(self) -> AsyncIterator[T] ``` See also [`AsyncIterable`](https://docs.python.org/3/library/typing.html#typing.AsyncIterable). ###### AsyncIterator[T][#](#asynciterator-t) ``` def __anext__(self) -> Awaitable[T] def __aiter__(self) -> AsyncIterator[T] ``` See also [`AsyncIterator`](https://docs.python.org/3/library/typing.html#typing.AsyncIterator). ##### Context manager protocols[#](#context-manager-protocols) There are two protocols for context managers – one for regular context managers and one for async ones. These allow defining objects that can be used in `with` and `async with` statements. ###### ContextManager[T][#](#contextmanager-t) ``` def __enter__(self) -> T def __exit__(self, exc_type: Optional[Type[BaseException]], exc_value: Optional[BaseException], traceback: Optional[TracebackType]) -> Optional[bool] ``` See also [`ContextManager`](https://docs.python.org/3/library/typing.html#typing.ContextManager). 
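A class that defines the two methods above is compatible with `ContextManager[T]` without inheriting from anything. A minimal sketch (the `Tracked` class and `enter_once` helper are hypothetical):

```python
from types import TracebackType
from typing import ContextManager, Optional, Type

class Tracked:
    """Counts how many times it has been entered as a context manager."""

    def __init__(self) -> None:
        self.entered = 0

    def __enter__(self) -> "Tracked":
        self.entered += 1
        return self

    def __exit__(
        self,
        exc_type: Optional[Type[BaseException]],
        exc_value: Optional[BaseException],
        traceback: Optional[TracebackType],
    ) -> Optional[bool]:
        return None  # do not suppress exceptions

def enter_once(cm: ContextManager["Tracked"]) -> int:
    # Works with any structurally compatible context manager.
    with cm as t:
        return t.entered

print(enter_once(Tracked()))  # 1
```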
###### AsyncContextManager[T][#](#asynccontextmanager-t) ``` def __aenter__(self) -> Awaitable[T] def __aexit__(self, exc_type: Optional[Type[BaseException]], exc_value: Optional[BaseException], traceback: Optional[TracebackType]) -> Awaitable[Optional[bool]] ``` See also [`AsyncContextManager`](https://docs.python.org/3/library/typing.html#typing.AsyncContextManager). ### Dynamically typed code[#](#dynamically-typed-code) In [Dynamic vs static typing](index.html#getting-started-dynamic-vs-static), we discussed how bodies of functions that don’t have any explicit type annotations in their function header are “dynamically typed” and that mypy will not check them. In this section, we’ll talk a little bit more about what that means and how you can enable dynamic typing on a more fine-grained basis. In cases where your code is too magical for mypy to understand, you can make a variable or parameter dynamically typed by explicitly giving it the type `Any`. Mypy will let you do basically anything with a value of type `Any`, including assigning a value of type `Any` to a variable of any type (or vice versa). ``` from typing import Any num = 1 # Statically typed (inferred to be int) num = 'x' # error: Incompatible types in assignment (expression has type "str", variable has type "int") dyn: Any = 1 # Dynamically typed (type Any) dyn = 'x' # OK num = dyn # No error, mypy will let you assign a value of type Any to any variable num += 1 # Oops, mypy still thinks num is an int ``` You can think of `Any` as a way to locally disable type checking. See [Silencing type errors](index.html#silencing-type-errors) for other ways you can shut up the type checker. #### Operations on Any values[#](#operations-on-any-values) You can do anything using a value with type `Any`, and the type checker will not complain: ``` def f(x: Any) -> int: # All of these are valid!
x.foobar(1, y=2) print(x[3] + 'f') if x: x.z = x(2) open(x).read() return x ``` Values derived from an `Any` value also usually have the type `Any` implicitly, as mypy can’t infer a more precise result type. For example, if you get the attribute of an `Any` value or call an `Any` value the result is `Any`: ``` def f(x: Any) -> None: y = x.foo() reveal_type(y) # Revealed type is "Any" z = y.bar("mypy will let you do anything to y") reveal_type(z) # Revealed type is "Any" ``` `Any` types may propagate through your program, making type checking less effective, unless you are careful. Function parameters without annotations are also implicitly `Any`: ``` def f(x) -> None: reveal_type(x) # Revealed type is "Any" x.can.do["anything", x]("wants", 2) ``` You can make mypy warn you about untyped function parameters using the [`--disallow-untyped-defs`](index.html#cmdoption-mypy-disallow-untyped-defs) flag. Generic types missing type parameters will have those parameters implicitly treated as `Any`: ``` from typing import List def f(x: List) -> None: reveal_type(x) # Revealed type is "builtins.list[Any]" reveal_type(x[0]) # Revealed type is "Any" x[0].anything_goes() # OK ``` You can make mypy warn you about missing generic type parameters using the [`--disallow-any-generics`](index.html#cmdoption-mypy-disallow-any-generics) flag. Finally, another major source of `Any` types leaking into your program is from third party libraries that mypy does not know about. This is particularly the case when using the [`--ignore-missing-imports`](index.html#cmdoption-mypy-ignore-missing-imports) flag. See [Missing imports](index.html#fix-missing-imports) for more information about this. #### Any vs. object[#](#any-vs-object) The type [`object`](https://docs.python.org/3/library/functions.html#object) is another type that can have an instance of arbitrary type as a value.
Unlike `Any`, [`object`](https://docs.python.org/3/library/functions.html#object) is an ordinary static type (it is similar to `Object` in Java), and only operations valid for *all* types are accepted for [`object`](https://docs.python.org/3/library/functions.html#object) values. These are all valid: ``` def f(o: object) -> None: if o: print(o) print(isinstance(o, int)) o = 2 o = 'foo' ``` These are, however, flagged as errors, since not all objects support these operations: ``` def f(o: object) -> None: o.foo() # Error! o + 2 # Error! open(o) # Error! n: int = 1 n = o # Error! ``` If you’re not sure whether you need to use [`object`](https://docs.python.org/3/library/functions.html#object) or `Any`, use [`object`](https://docs.python.org/3/library/functions.html#object) – only switch to using `Any` if you get a type checker complaint. You can use different [type narrowing](index.html#type-narrowing) techniques to narrow [`object`](https://docs.python.org/3/library/functions.html#object) to a more specific type (subtype) such as `int`. Type narrowing is not needed with dynamically typed values (values with type `Any`). ### Type narrowing[#](#type-narrowing) This section is dedicated to several type narrowing techniques which are supported by mypy. Type narrowing is when you convince a type checker that a broader type is actually more specific, for instance, that an object of type `Shape` is actually of the narrower type `Square`. 
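The `Shape`/`Square` example mentioned above might be sketched like this (both classes are hypothetical):

```python
class Shape:
    def area(self) -> float:
        return 0.0

class Square(Shape):
    def __init__(self, side: float) -> None:
        self.side = side

    def area(self) -> float:
        return self.side * self.side

def describe(shape: Shape) -> str:
    if isinstance(shape, Square):
        # Within this branch mypy narrows `shape` to Square,
        # so the Square-only attribute `side` is accessible.
        return f"square with side {shape.side}"
    return "some other shape"

print(describe(Square(2.0)))  # square with side 2.0
print(describe(Shape()))      # some other shape
```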
#### Type narrowing expressions[#](#type-narrowing-expressions) The simplest way to narrow a type is to use one of the supported expressions: * [`isinstance()`](https://docs.python.org/3/library/functions.html#isinstance) like in `isinstance(obj, float)` will narrow `obj` to have `float` type * [`issubclass()`](https://docs.python.org/3/library/functions.html#issubclass) like in `issubclass(cls, MyClass)` will narrow `cls` to be `Type[MyClass]` * [`type`](https://docs.python.org/3/library/functions.html#type) like in `type(obj) is int` will narrow `obj` to have `int` type * [`callable()`](https://docs.python.org/3/library/functions.html#callable) like in `callable(obj)` will narrow `obj` to a callable type Type narrowing is contextual. For example, based on the condition, mypy will narrow an expression only within an `if` branch: ``` def function(arg: object): if isinstance(arg, int): # Type is narrowed within the ``if`` branch only reveal_type(arg) # Revealed type: "builtins.int" elif isinstance(arg, str) or isinstance(arg, bool): # Type is narrowed differently within this ``elif`` branch: reveal_type(arg) # Revealed type: "builtins.str | builtins.bool" # Subsequent narrowing operations will narrow the type further if isinstance(arg, bool): reveal_type(arg) # Revealed type: "builtins.bool" # Back outside of the ``if`` statement, the type isn't narrowed: reveal_type(arg) # Revealed type: "builtins.object" ``` Mypy understands the implications `return` or exception raising can have for what type an object could be: ``` def function(arg: int | str): if isinstance(arg, int): return # `arg` can't be `int` at this point: reveal_type(arg) # Revealed type: "builtins.str" ``` We can also use `assert` to narrow types in the same context: ``` def function(arg: Any): assert isinstance(arg, int) reveal_type(arg) # Revealed type: "builtins.int" ``` Note With [`--warn-unreachable`](index.html#cmdoption-mypy-warn-unreachable) narrowing types to some impossible state will be
treated as an error. ``` def function(arg: int): # error: Subclass of "int" and "str" cannot exist: # would have incompatible method signatures assert isinstance(arg, str) # error: Statement is unreachable print("so mypy concludes the assert will always trigger") ``` Without `--warn-unreachable` mypy will simply not check code it deems to be unreachable. See [Unreachable code](index.html#unreachable) for more information. ``` x: int = 1 assert isinstance(x, str) reveal_type(x) # Revealed type is "builtins.int" print(x + '!') # Type checks with `mypy`, but fails at runtime. ``` ##### issubclass[#](#issubclass) Mypy can also use [`issubclass()`](https://docs.python.org/3/library/functions.html#issubclass) for better type inference when working with types and metaclasses: ``` class MyCalcMeta(type): @classmethod def calc(cls) -> int: ... def f(o: object) -> None: t = type(o) # We must use a variable here reveal_type(t) # Revealed type is "builtins.type" if issubclass(t, MyCalcMeta): # `issubclass(type(o), MyCalcMeta)` won't work reveal_type(t) # Revealed type is "Type[MyCalcMeta]" t.calc() # Okay ``` ##### callable[#](#callable) Mypy knows what types are callable and which ones are not during type checking. So, we know what `callable()` will return. For example: ``` from typing import Callable x: Callable[[], int] if callable(x): reveal_type(x) # N: Revealed type is "def () -> builtins.int" else: ... # Will never be executed and will raise error with `--warn-unreachable` ``` The `callable` function can even split a `Union` type into callable and non-callable parts: ``` from typing import Callable, Union x: Union[int, Callable[[], int]] if callable(x): reveal_type(x) # N: Revealed type is "def () -> builtins.int" else: reveal_type(x) # N: Revealed type is "builtins.int" ``` #### Casts[#](#casts) Mypy supports type casts that are usually used to coerce a statically typed value to a subtype.
Unlike languages such as Java or C#, however, mypy casts are only used as hints for the type checker, and they don’t perform a runtime type check. Use the function [`cast()`](https://docs.python.org/3/library/typing.html#typing.cast) to perform a cast: ``` from typing import cast o: object = [1] x = cast(list[int], o) # OK y = cast(list[str], o) # OK (cast performs no actual runtime check) ``` To support runtime checking of casts such as the above, we’d have to check the types of all list items, which would be very inefficient for large lists. Casts are used to silence spurious type checker warnings and give the type checker a little help when it can’t quite understand what is going on. Note You can use an assertion if you want to perform an actual runtime check: ``` def foo(o: object) -> None: print(o + 5) # Error: can't add 'object' and 'int' assert isinstance(o, int) print(o + 5) # OK: type of 'o' is 'int' here ``` You don’t need a cast for expressions with type `Any`, or when assigning to a variable with type `Any`, as was explained earlier. You can also use `Any` as the cast target type – this lets you perform any operations on the result. For example: ``` from typing import cast, Any x = 1 x.whatever() # Type check error y = cast(Any, x) y.whatever() # Type check OK (runtime error) ``` #### User-Defined Type Guards[#](#user-defined-type-guards) Mypy supports User-Defined Type Guards ([**PEP 647**](https://peps.python.org/pep-0647/)). A type guard is a way for programs to influence conditional type narrowing employed by a type checker based on runtime checks. Basically, a `TypeGuard` is a “smart” alias for a `bool` type. 
Let’s have a look at the regular `bool` example: ``` def is_str_list(val: list[object]) -> bool: """Determines whether all objects in the list are strings""" return all(isinstance(x, str) for x in val) def func1(val: list[object]) -> None: if is_str_list(val): reveal_type(val) # Reveals list[object] print(" ".join(val)) # Error: incompatible type ``` The same example with `TypeGuard`: ``` from typing import TypeGuard # use `typing_extensions` for Python 3.9 and below def is_str_list(val: list[object]) -> TypeGuard[list[str]]: """Determines whether all objects in the list are strings""" return all(isinstance(x, str) for x in val) def func1(val: list[object]) -> None: if is_str_list(val): reveal_type(val) # list[str] print(" ".join(val)) # ok ``` How does it work? `TypeGuard` narrows the first function argument (`val`) to the type specified as the first type parameter (`list[str]`). Note Narrowing is [not strict](https://www.python.org/dev/peps/pep-0647/#enforcing-strict-narrowing). For example, you can narrow `str` to `int`: ``` def f(value: str) -> TypeGuard[int]: return True ``` Note: since strict narrowing is not enforced, it’s easy to break type safety. However, there are many ways a determined or uninformed developer can subvert type safety – most commonly by using cast or Any. If a Python developer takes the time to learn about and implement user-defined type guards within their code, it is safe to assume that they are interested in type safety and will not write their type guard functions in a way that will undermine type safety or produce nonsensical results. 
##### Generic TypeGuards[#](#generic-typeguards) `TypeGuard` can also work with generic types: ``` from typing import TypeVar from typing import TypeGuard # use `typing_extensions` for `python<3.10` _T = TypeVar("_T") def is_two_element_tuple(val: tuple[_T, ...]) -> TypeGuard[tuple[_T, _T]]: return len(val) == 2 def func(names: tuple[str, ...]): if is_two_element_tuple(names): reveal_type(names) # tuple[str, str] else: reveal_type(names) # tuple[str, ...] ``` ##### TypeGuards with parameters[#](#typeguards-with-parameters) Type guard functions can accept extra arguments: ``` from typing import Any, Type, TypeVar from typing import TypeGuard # use `typing_extensions` for `python<3.10` _T = TypeVar("_T") def is_set_of(val: set[Any], type: Type[_T]) -> TypeGuard[set[_T]]: return all(isinstance(x, type) for x in val) items: set[Any] if is_set_of(items, str): reveal_type(items) # set[str] ``` ##### TypeGuards as methods[#](#typeguards-as-methods) A method can also serve as a `TypeGuard`: ``` class StrValidator: def is_valid(self, instance: object) -> TypeGuard[str]: return isinstance(instance, str) def func(to_validate: object) -> None: if StrValidator().is_valid(to_validate): reveal_type(to_validate) # Revealed type is "builtins.str" ``` Note Note that `TypeGuard` [does not narrow](https://www.python.org/dev/peps/pep-0647/#narrowing-of-implicit-self-and-cls-parameters) types of the implicit `self` or `cls` arguments. If narrowing of `self` or `cls` is required, the value can be passed as an explicit argument to a type guard function: ``` class Parent: def method(self) -> None: reveal_type(self) # Revealed type is "Parent" if is_child(self): reveal_type(self) # Revealed type is "Child" class Child(Parent): ...
def is_child(instance: Parent) -> TypeGuard[Child]: return isinstance(instance, Child) ``` ##### Assignment expressions as TypeGuards[#](#assignment-expressions-as-typeguards) Sometimes you might need to create a new variable and narrow it to some specific type at the same time. This can be achieved by using `TypeGuard` together with [:= operator](https://docs.python.org/3/whatsnew/3.8.html#assignment-expressions). ``` from typing import TypeGuard # use `typing_extensions` for `python<3.10` def is_float(a: object) -> TypeGuard[float]: return isinstance(a, float) def main(a: object) -> None: if is_float(x := a): reveal_type(x) # N: Revealed type is 'builtins.float' reveal_type(a) # N: Revealed type is 'builtins.object' reveal_type(x) # N: Revealed type is 'builtins.object' reveal_type(a) # N: Revealed type is 'builtins.object' ``` What happens here? 1. We create a new variable `x` and assign a value of `a` to it 2. We run `is_float()` type guard on `x` 3. It narrows `x` to be `float` in the `if` context and does not touch `a` Note The same will work with `isinstance(x := a, float)` as well. ### Duck type compatibility[#](#duck-type-compatibility) In Python, certain types are compatible even though they aren’t subclasses of each other. For example, `int` objects are valid whenever `float` objects are expected. Mypy supports this idiom via *duck type compatibility*. This is supported for a small set of built-in types: * `int` is duck type compatible with `float` and `complex`. * `float` is duck type compatible with `complex`. * `bytearray` and `memoryview` are duck type compatible with `bytes`. For example, mypy considers an `int` object to be valid whenever a `float` object is expected. Thus code like this is nice and clean and also behaves as expected: ``` import math def degrees_to_radians(degrees: float) -> float: return math.pi * degrees / 180 n = 90 # Inferred type 'int' print(degrees_to_radians(n)) # Okay! 
``` You can also often use [Protocols and structural subtyping](index.html#protocol-types) to achieve a similar effect in a more principled and extensible fashion. Protocols don’t apply to cases like `int` being compatible with `float`, since `float` is not a protocol class but a regular, concrete class, and many standard library functions expect concrete instances of `float` (or `int`). ### Stub files[#](#stub-files) A *stub file* is a file containing a skeleton of the public interface of that Python module, including classes, variables, functions – and most importantly, their types. Mypy uses stub files stored in the [typeshed](https://github.com/python/typeshed) repository to determine the types of standard library and third-party library functions, classes, and other definitions. You can also create your own stubs that will be used to type check your code. #### Creating a stub[#](#creating-a-stub) Here is an overview of how to create a stub file: * Write a stub file for the library (or an arbitrary module) and store it as a `.pyi` file in the same directory as the library module. * Alternatively, put your stubs (`.pyi` files) in a directory reserved for stubs (e.g., `myproject/stubs`). In this case you have to set the environment variable `MYPYPATH` to refer to the directory. For example: ``` $ export MYPYPATH=~/work/myproject/stubs ``` Use the normal Python file name conventions for modules, e.g. `csv.pyi` for module `csv`. Use a subdirectory with `__init__.pyi` for packages. Note that [**PEP 561**](https://peps.python.org/pep-0561/) stub-only packages must be installed, and may not be pointed at through the `MYPYPATH` (see [PEP 561 support](index.html#installed-packages)). If a directory contains both a `.py` and a `.pyi` file for the same module, the `.pyi` file takes precedence. This way you can easily add annotations for a module even if you don’t want to modify the source code. 
This can be useful, for example, if you use 3rd party open source libraries in your program (and there are no stubs in typeshed yet). That’s it! Now you can access the module in mypy programs and type check code that uses the library. If you write a stub for a library module, consider making it available for other programmers that use mypy by contributing it back to the typeshed repo. Mypy also ships with two tools for making it easier to create and maintain stubs: [Automatic stub generation (stubgen)](index.html#stubgen) and [Automatic stub testing (stubtest)](index.html#stubtest). The following sections explain the kinds of type annotations you can use in your programs and stub files. Note You may be tempted to point `MYPYPATH` to the standard library or to the `site-packages` directory where your 3rd party packages are installed. This is almost always a bad idea – you will likely get tons of error messages about code you didn’t write and that mypy can’t analyze all that well yet, and in the worst case scenario mypy may crash due to some construct in a 3rd party package that it didn’t expect. #### Stub file syntax[#](#stub-file-syntax) Stub files are written in normal Python syntax, but generally leaving out runtime logic like variable initializers, function bodies, and default arguments. If it is not possible to completely leave out some piece of runtime logic, the recommended convention is to replace or elide them with ellipsis expressions (`...`). Each ellipsis below is literally written in the stub file as three dots: ``` # Variables with annotations do not need to be assigned a value. # So by convention, we omit them in the stub file. x: int # Function bodies cannot be completely removed. By convention, # we replace them with `...` instead of the `pass` statement. def func_1(code: str) -> int: ... # We can do the same with default arguments. def func_2(a: int, b: int = ...) -> int: ... 
``` Note The ellipsis `...` is also used with a different meaning in [callable types](index.html#callable-types) and [tuple types](index.html#tuple-types). #### Using stub file syntax at runtime[#](#using-stub-file-syntax-at-runtime) You may also occasionally need to elide actual logic in regular Python code – for example, when writing methods in [overload variants](index.html#function-overloading) or [custom protocols](index.html#protocol-types). The recommended style is to use ellipses to do so, just like in stub files. It is also considered stylistically acceptable to throw a [`NotImplementedError`](https://docs.python.org/3/library/exceptions.html#NotImplementedError) in cases where the user of the code may accidentally call functions with no actual logic. You can also elide default arguments as long as the function body also contains no runtime logic: the function body only contains a single ellipsis, the pass statement, or a `raise NotImplementedError()`. It is also acceptable for the function body to contain a docstring. For example: ``` from typing_extensions import Protocol class Resource(Protocol): def ok_1(self, foo: list[str] = ...) -> None: ... def ok_2(self, foo: list[str] = ...) -> None: raise NotImplementedError() def ok_3(self, foo: list[str] = ...) -> None: """Some docstring""" pass # Error: Incompatible default for argument "foo" (default has # type "ellipsis", argument has type "list[str]") def not_ok(self, foo: list[str] = ...) -> None: print(foo) ``` ### Generics[#](#generics) This section explains how you can define your own generic classes that take one or more type parameters, similar to built-in types such as `list[X]`. User-defined generics are a moderately advanced feature and you can get far without ever using them – feel free to skip this section and come back later. #### Defining generic classes[#](#defining-generic-classes) The built-in collection classes are generic classes. 
Generic types have one or more type parameters, which can be arbitrary types. For example, `dict[int, str]` has the type parameters `int` and `str`, and `list[int]` has a type parameter `int`.

Programs can also define new generic classes. Here is a very simple generic class that represents a stack:

```
from typing import TypeVar, Generic

T = TypeVar('T')

class Stack(Generic[T]):
    def __init__(self) -> None:
        # Create an empty list with items of type T
        self.items: list[T] = []

    def push(self, item: T) -> None:
        self.items.append(item)

    def pop(self) -> T:
        return self.items.pop()

    def empty(self) -> bool:
        return not self.items
```

The `Stack` class can be used to represent a stack of any type: `Stack[int]`, `Stack[tuple[int, str]]`, etc. Using `Stack` is similar to built-in container types:

```
# Construct an empty Stack[int] instance
stack = Stack[int]()
stack.push(2)
stack.pop()
stack.push('x')  # error: Argument 1 to "push" of "Stack" has incompatible type "str"; expected "int"
```

Construction of instances of generic types is type checked:

```
class Box(Generic[T]):
    def __init__(self, content: T) -> None:
        self.content = content

Box(1)  # OK, inferred type is Box[int]
Box[int](1)  # Also OK
Box[int]('some string')  # error: Argument 1 to "Box" has incompatible type "str"; expected "int"
```

#### Defining subclasses of generic classes[#](#defining-subclasses-of-generic-classes)

User-defined generic classes and generic classes defined in [`typing`](https://docs.python.org/3/library/typing.html#module-typing) can be used as a base class for another class (generic or non-generic). For example:

```
from typing import Generic, TypeVar, Mapping, Iterator

KT = TypeVar('KT')
VT = TypeVar('VT')

# This is a generic subclass of Mapping
class MyMap(Mapping[KT, VT]):
    def __getitem__(self, k: KT) -> VT: ...
    def __iter__(self) -> Iterator[KT]: ...
    def __len__(self) -> int: ...
items: MyMap[str, int]  # OK

# This is a non-generic subclass of dict
class StrDict(dict[str, str]):
    def __str__(self) -> str:
        return f'StrDict({super().__str__()})'

data: StrDict[int, int]  # Error! StrDict is not generic
data2: StrDict  # OK

# This is a user-defined generic class
class Receiver(Generic[T]):
    def accept(self, value: T) -> None: ...

# This is a generic subclass of Receiver
class AdvancedReceiver(Receiver[T]): ...
```

Note

You have to add an explicit [`Mapping`](https://docs.python.org/3/library/typing.html#typing.Mapping) base class if you want mypy to consider a user-defined class as a mapping (and [`Sequence`](https://docs.python.org/3/library/typing.html#typing.Sequence) for sequences, etc.). This is because mypy doesn't use *structural subtyping* for these ABCs, unlike simpler protocols like [`Iterable`](https://docs.python.org/3/library/typing.html#typing.Iterable), which use [structural subtyping](index.html#protocol-types).

[`Generic`](https://docs.python.org/3/library/typing.html#typing.Generic) can be omitted from bases if there are other base classes that include type variables, such as `Mapping[KT, VT]` in the above example. If you include `Generic[...]` in bases, then it should list all type variables present in other bases (or more, if needed). The order of type variables is defined by the following rules:

* If `Generic[...]` is present, then the order of variables is always determined by their order in `Generic[...]`.
* If there are no `Generic[...]` in bases, then all type variables are collected in the lexicographic order (i.e. by first appearance).

For example:

```
from typing import Generic, TypeVar, Any

T = TypeVar('T')
S = TypeVar('S')
U = TypeVar('U')

class One(Generic[T]): ...
class Another(Generic[T]): ...

class First(One[T], Another[S]): ...
class Second(One[T], Another[S], Generic[S, U, T]): ...
x: First[int, str]        # Here T is bound to int, S is bound to str
y: Second[int, str, Any]  # Here T is Any, S is int, and U is str
```

#### Generic functions[#](#generic-functions)

Type variables can be used to define generic functions:

```
from typing import TypeVar, Sequence

T = TypeVar('T')

# A generic function!
def first(seq: Sequence[T]) -> T:
    return seq[0]
```

As with generic classes, the type variable can be replaced with any type. That means `first` can be used with any sequence type, and the return type is derived from the sequence item type. For example:

```
reveal_type(first([1, 2, 3]))   # Revealed type is "builtins.int"
reveal_type(first(['a', 'b']))  # Revealed type is "builtins.str"
```

Note also that a single definition of a type variable (such as `T` above) can be used in multiple generic functions or classes. In this example we use the same type variable in two generic functions:

```
from typing import TypeVar, Sequence

T = TypeVar('T')      # Declare type variable

def first(seq: Sequence[T]) -> T:
    return seq[0]

def last(seq: Sequence[T]) -> T:
    return seq[-1]
```

A variable cannot have a type variable in its type unless the type variable is bound in a containing generic class or function.

#### Generic methods and generic self[#](#generic-methods-and-generic-self)

You can also define generic methods — just use a type variable in the method signature that is different from class type variables. In particular, the `self` argument may also be generic, allowing a method to return the most precise type known at the point of access.
In this way, for example, you can type check a chain of setter methods:

```
from typing import TypeVar

T = TypeVar('T', bound='Shape')

class Shape:
    def set_scale(self: T, scale: float) -> T:
        self.scale = scale
        return self

class Circle(Shape):
    def set_radius(self, r: float) -> 'Circle':
        self.radius = r
        return self

class Square(Shape):
    def set_width(self, w: float) -> 'Square':
        self.width = w
        return self

circle: Circle = Circle().set_scale(0.5).set_radius(2.7)
square: Square = Square().set_scale(0.5).set_width(3.2)
```

Without using generic `self`, the last two lines could not be type checked properly, since the return type of `set_scale` would be `Shape`, which doesn't define `set_radius` or `set_width`.

Other uses are factory methods, such as copy and deserialization. For class methods, you can also define generic `cls`, using [`Type[T]`](https://docs.python.org/3/library/typing.html#typing.Type):

```
from typing import TypeVar, Type

T = TypeVar('T', bound='Friend')

class Friend:
    other: "Friend" = None

    @classmethod
    def make_pair(cls: Type[T]) -> tuple[T, T]:
        a, b = cls(), cls()
        a.other = b
        b.other = a
        return a, b

class SuperFriend(Friend):
    pass

a, b = SuperFriend.make_pair()
```

Note that when overriding a method with generic `self`, you must either return a generic `self` too, or return an instance of the current class. In the latter case, you must implement this method in all future subclasses.

Note also that mypy cannot always verify that the implementation of a copy or a deserialization method returns the actual type of self. Therefore you may need to silence mypy inside these methods (but not at the call site), possibly by making use of the `Any` type or a `# type: ignore` comment.

Note that mypy lets you use generic self types in certain unsafe ways in order to support common idioms.
For example, using a generic self type in an argument type is accepted even though it's unsafe:

```
from typing import TypeVar

T = TypeVar("T")

class Base:
    def compare(self: T, other: T) -> bool:
        return False

class Sub(Base):
    def __init__(self, x: int) -> None:
        self.x = x

    # This is unsafe (see below) but allowed because it's
    # a common pattern and rarely causes issues in practice.
    def compare(self, other: Sub) -> bool:
        return self.x > other.x

b: Base = Sub(42)
b.compare(Base())  # Runtime error here: 'Base' object has no attribute 'x'
```

For some advanced uses of self types, see [additional examples](index.html#advanced-self).

#### Automatic self types using typing.Self[#](#automatic-self-types-using-typing-self)

Since the patterns described above are quite common, mypy supports a simpler syntax, introduced in [**PEP 673**](https://peps.python.org/pep-0673/), to make them easier to use. Instead of defining a type variable and using an explicit annotation for `self`, you can import the special type `typing.Self` that is automatically transformed into a type variable with the current class as the upper bound, and you don't need an annotation for `self` (or `cls` in class methods). The example from the previous section can be made simpler by using `Self`:

```
from typing import Self

class Friend:
    other: Self | None = None

    @classmethod
    def make_pair(cls) -> tuple[Self, Self]:
        a, b = cls(), cls()
        a.other = b
        b.other = a
        return a, b

class SuperFriend(Friend):
    pass

a, b = SuperFriend.make_pair()
```

This is more compact than using explicit type variables. Also, you can use `Self` in attribute annotations in addition to methods.

Note

To use this feature on Python versions earlier than 3.11, you will need to import `Self` from `typing_extensions` (version 4.0 or newer).

#### Variance of generic types[#](#variance-of-generic-types)

There are three main kinds of generic types with respect to subtype relations between them: invariant, covariant, and contravariant.
Assuming that we have a pair of types `A` and `B`, and `B` is a subtype of `A`, these are defined as follows:

* A generic class `MyCovGen[T]` is called covariant in type variable `T` if `MyCovGen[B]` is always a subtype of `MyCovGen[A]`.
* A generic class `MyContraGen[T]` is called contravariant in type variable `T` if `MyContraGen[A]` is always a subtype of `MyContraGen[B]`.
* A generic class `MyInvGen[T]` is called invariant in `T` if neither of the above is true.

Let us illustrate this with a few simple examples:

```
# We'll use these classes in the examples below
class Shape: ...
class Triangle(Shape): ...
class Square(Shape): ...
```

* Most immutable containers, such as [`Sequence`](https://docs.python.org/3/library/typing.html#typing.Sequence) and [`FrozenSet`](https://docs.python.org/3/library/typing.html#typing.FrozenSet), are covariant. [`Union`](https://docs.python.org/3/library/typing.html#typing.Union) is also covariant in all variables: `Union[Triangle, int]` is a subtype of `Union[Shape, int]`.

```
def count_lines(shapes: Sequence[Shape]) -> int:
    return sum(shape.num_sides for shape in shapes)

triangles: Sequence[Triangle]
count_lines(triangles)  # OK

def foo(triangle: Triangle, num: int):
    shape_or_number: Union[Shape, int]
    # a Triangle is a Shape, and a Shape is a valid Union[Shape, int]
    shape_or_number = triangle
```

Covariance should feel relatively intuitive, but contravariance and invariance can be harder to reason about.

* [`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable) is an example of a type that is contravariant in the types of its arguments. That is, `Callable[[Shape], int]` is a subtype of `Callable[[Triangle], int]`, despite `Shape` being a supertype of `Triangle`.
To understand this, consider:

```
def cost_of_paint_required(
    triangle: Triangle,
    area_calculator: Callable[[Triangle], float]
) -> float:
    return area_calculator(triangle) * DOLLAR_PER_SQ_FT

# This straightforwardly works
def area_of_triangle(triangle: Triangle) -> float: ...
cost_of_paint_required(triangle, area_of_triangle)  # OK

# But this works as well!
def area_of_any_shape(shape: Shape) -> float: ...
cost_of_paint_required(triangle, area_of_any_shape)  # OK
```

`cost_of_paint_required` needs a callable that can calculate the area of a triangle. If we give it a callable that can calculate the area of an arbitrary shape (not just triangles), everything still works.

* [`List`](https://docs.python.org/3/library/typing.html#typing.List) is an invariant generic type. Naively, one would think that it is covariant, like [`Sequence`](https://docs.python.org/3/library/typing.html#typing.Sequence) above, but consider this code:

```
class Circle(Shape):
    # The rotate method is only defined on Circle, not on Shape
    def rotate(self): ...

def add_one(things: list[Shape]) -> None:
    things.append(Shape())

my_circles: list[Circle] = []
add_one(my_circles)      # This may appear safe, but...
my_circles[-1].rotate()  # ...this will fail, since my_circles[0] is now a Shape, not a Circle
```

Another example of an invariant type is [`Dict`](https://docs.python.org/3/library/typing.html#typing.Dict). Most mutable containers are invariant.

By default, mypy assumes that all user-defined generics are invariant. To declare a given generic class as covariant or contravariant, use type variables defined with special keyword arguments `covariant` or `contravariant`. For example:

```
from typing import Generic, TypeVar

T_co = TypeVar('T_co', covariant=True)

class Box(Generic[T_co]):  # this type is declared covariant
    def __init__(self, content: T_co) -> None:
        self._content = content

    def get_content(self) -> T_co:
        return self._content

def look_into(box: Box[Animal]): ...
my_box = Box(Cat())
look_into(my_box)  # OK, but mypy would complain here for an invariant type
```

#### Type variables with upper bounds[#](#type-variables-with-upper-bounds)

A type variable can also be restricted to having values that are subtypes of a specific type. This type is called the upper bound of the type variable, and is specified with the `bound=...` keyword argument to [`TypeVar`](https://docs.python.org/3/library/typing.html#typing.TypeVar).

```
from typing import TypeVar, SupportsAbs

T = TypeVar('T', bound=SupportsAbs[float])
```

In the definition of a generic function that uses such a type variable `T`, the type represented by `T` is assumed to be a subtype of its upper bound, so the function can use methods of the upper bound on values of type `T`.

```
def largest_in_absolute_value(*xs: T) -> T:
    return max(xs, key=abs)  # Okay, because T is a subtype of SupportsAbs[float].
```

In a call to such a function, the type `T` must be replaced by a type that is a subtype of its upper bound. Continuing the example above:

```
largest_in_absolute_value(-3.5, 2)   # Okay, has type float.
largest_in_absolute_value(5+6j, 7)   # Okay, has type complex.
largest_in_absolute_value('a', 'b')  # Error: 'str' is not a subtype of SupportsAbs[float].
```

Type parameters of generic classes may also have upper bounds, which restrict the valid values for the type parameter in the same way.

#### Type variables with value restriction[#](#type-variables-with-value-restriction)

By default, a type variable can be replaced with any type. However, sometimes it's useful to have a type variable that can only have some specific types as its value.
A typical example is a type variable that can only have values `str` and `bytes`:

```
from typing import TypeVar

AnyStr = TypeVar('AnyStr', str, bytes)
```

This is actually such a common type variable that [`AnyStr`](https://docs.python.org/3/library/typing.html#typing.AnyStr) is defined in [`typing`](https://docs.python.org/3/library/typing.html#module-typing) and we don't need to define it ourselves.

We can use [`AnyStr`](https://docs.python.org/3/library/typing.html#typing.AnyStr) to define a function that can concatenate two strings or bytes objects, but it can't be called with other argument types:

```
from typing import AnyStr

def concat(x: AnyStr, y: AnyStr) -> AnyStr:
    return x + y

concat('a', 'b')    # Okay
concat(b'a', b'b')  # Okay
concat(1, 2)        # Error!
```

Importantly, this is different from a union type, since combinations of `str` and `bytes` are not accepted:

```
concat('string', b'bytes')  # Error!
```

In this case, this is exactly what we want, since it's not possible to concatenate a string and a bytes object! If we tried to use `Union`, the type checker would complain about this possibility:

```
def union_concat(x: Union[str, bytes], y: Union[str, bytes]) -> Union[str, bytes]:
    return x + y  # Error: can't concatenate str and bytes
```

Another interesting special case is calling `concat()` with a subtype of `str`:

```
class S(str): pass

ss = concat(S('foo'), S('bar'))
reveal_type(ss)  # Revealed type is "builtins.str"
```

You may expect that the type of `ss` is `S`, but the type is actually `str`: a subtype gets promoted to one of the valid values for the type variable, which in this case is `str`. This is thus subtly different from *bounded quantification* in languages such as Java, where the return type would be `S`.
The way mypy implements this is correct for `concat`, since `concat` actually returns a `str` instance in the above example:

```
>>> print(type(ss))
<class 'str'>
```

You can also use a [`TypeVar`](https://docs.python.org/3/library/typing.html#typing.TypeVar) with a restricted set of possible values when defining a generic class. For example, mypy uses the type [`Pattern[AnyStr]`](https://docs.python.org/3/library/typing.html#typing.Pattern) for the return value of [`re.compile()`](https://docs.python.org/3/library/re.html#re.compile), since regular expressions can be based on a string or a bytes pattern.

A type variable may not have both a value restriction and an upper bound (see [Type variables with upper bounds](#type-variable-upper-bound)).

#### Declaring decorators[#](#declaring-decorators)

Decorators are typically functions that take a function as an argument and return another function. Describing this behaviour in terms of types can be a little tricky; we'll show how you can use `TypeVar` and a special kind of type variable called a *parameter specification* to do so.

Suppose we have the following decorator, not type annotated yet, that preserves the original function's signature and merely prints the decorated function's name:

```
def printing_decorator(func):
    def wrapper(*args, **kwds):
        print("Calling", func)
        return func(*args, **kwds)
    return wrapper
```

and we use it to decorate function `add_forty_two`:

```
# A decorated function.
@printing_decorator
def add_forty_two(value: int) -> int:
    return value + 42

a = add_forty_two(3)
```

Since `printing_decorator` is not type-annotated, the following won't get type checked:

```
reveal_type(a)        # Revealed type is "Any"
add_forty_two('foo')  # No type checker error :(
```

This is a sorry state of affairs!
If you run with `--strict`, mypy will even alert you to this fact: `Untyped decorator makes function "add_forty_two" untyped`

Note that class decorators are handled differently than function decorators in mypy: decorating a class does not erase its type, even if the decorator has incomplete type annotations.

Here's how one could annotate the decorator:

```
from typing import Any, Callable, TypeVar, cast

F = TypeVar('F', bound=Callable[..., Any])

# A decorator that preserves the signature.
def printing_decorator(func: F) -> F:
    def wrapper(*args, **kwds):
        print("Calling", func)
        return func(*args, **kwds)
    return cast(F, wrapper)

@printing_decorator
def add_forty_two(value: int) -> int:
    return value + 42

a = add_forty_two(3)
reveal_type(a)      # Revealed type is "builtins.int"
add_forty_two('x')  # Argument 1 to "add_forty_two" has incompatible type "str"; expected "int"
```

This still has some shortcomings. First, we need to use the unsafe [`cast()`](https://docs.python.org/3/library/typing.html#typing.cast) to convince mypy that `wrapper()` has the same signature as `func`. See [casts](index.html#casts).

Second, the `wrapper()` function is not tightly type checked, although wrapper functions are typically small enough that this is not a big problem. This is also the reason for the [`cast()`](https://docs.python.org/3/library/typing.html#typing.cast) call in the `return` statement in `printing_decorator()`.
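Keep in mind that `cast()` only affects the type checker; at runtime it simply returns its argument unchanged, so the decorated name is exactly the loosely typed `wrapper` function. A quick sketch reusing the decorator above:

```python
from typing import Any, Callable, TypeVar, cast

F = TypeVar('F', bound=Callable[..., Any])

def printing_decorator(func: F) -> F:
    def wrapper(*args, **kwds):
        print("Calling", func)
        return func(*args, **kwds)
    # cast() is an identity function at runtime; it only changes
    # the type mypy assigns to the returned object.
    return cast(F, wrapper)

@printing_decorator
def add_forty_two(value: int) -> int:
    return value + 42

print(add_forty_two.__name__)  # wrapper
print(add_forty_two(3))        # 45
```

In real code you would typically also apply `functools.wraps(func)` to `wrapper`, so that the decorated function keeps its original name and docstring.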
However, we can use a parameter specification ([`ParamSpec`](https://docs.python.org/3/library/typing.html#typing.ParamSpec)) for a more faithful type annotation:

```
from typing import Callable, TypeVar
from typing_extensions import ParamSpec

P = ParamSpec('P')
T = TypeVar('T')

def printing_decorator(func: Callable[P, T]) -> Callable[P, T]:
    def wrapper(*args: P.args, **kwds: P.kwargs) -> T:
        print("Calling", func)
        return func(*args, **kwds)
    return wrapper
```

Parameter specifications also allow you to describe decorators that alter the signature of the input function:

```
from typing import Callable, TypeVar
from typing_extensions import ParamSpec

P = ParamSpec('P')
T = TypeVar('T')

# We reuse 'P' in the return type, but replace 'T' with 'str'
def stringify(func: Callable[P, T]) -> Callable[P, str]:
    def wrapper(*args: P.args, **kwds: P.kwargs) -> str:
        return str(func(*args, **kwds))
    return wrapper

@stringify
def add_forty_two(value: int) -> int:
    return value + 42

a = add_forty_two(3)
reveal_type(a)      # Revealed type is "builtins.str"
add_forty_two('x')  # error: Argument 1 to "add_forty_two" has incompatible type "str"; expected "int"
```

Or insert an argument:

```
from typing import Callable, TypeVar
from typing_extensions import Concatenate, ParamSpec

P = ParamSpec('P')
T = TypeVar('T')

def printing_decorator(func: Callable[P, T]) -> Callable[Concatenate[str, P], T]:
    def wrapper(msg: str, /, *args: P.args, **kwds: P.kwargs) -> T:
        print("Calling", func, "with", msg)
        return func(*args, **kwds)
    return wrapper

@printing_decorator
def add_forty_two(value: int) -> int:
    return value + 42

a = add_forty_two('three', 3)
```

##### Decorator factories[#](#decorator-factories)

Functions that take arguments and return a decorator (also called second-order decorators) are similarly supported via generics:

```
from typing import Any, Callable, TypeVar

F = TypeVar('F', bound=Callable[..., Any])

def route(url: str) -> Callable[[F], F]: ...
@route(url='/')
def index(request: Any) -> str:
    return 'Hello world'
```

Sometimes the same decorator supports both bare calls and calls with arguments. This can be achieved by combining with [`@overload`](https://docs.python.org/3/library/typing.html#typing.overload):

```
from typing import Any, Callable, Optional, TypeVar, overload

F = TypeVar('F', bound=Callable[..., Any])

# Bare decorator usage
@overload
def atomic(__func: F) -> F: ...
# Decorator with arguments
@overload
def atomic(*, savepoint: bool = True) -> Callable[[F], F]: ...

# Implementation
def atomic(__func: Optional[Callable[..., Any]] = None, *, savepoint: bool = True):
    def decorator(func: Callable[..., Any]):
        ...  # Code goes here
    if __func is not None:
        return decorator(__func)
    else:
        return decorator

# Usage
@atomic
def func1() -> None: ...

@atomic(savepoint=False)
def func2() -> None: ...
```

#### Generic protocols[#](#generic-protocols)

Mypy supports generic protocols (see also [Protocols and structural subtyping](index.html#protocol-types)). Several [predefined protocols](index.html#predefined-protocols) are generic, such as [`Iterable[T]`](https://docs.python.org/3/library/typing.html#typing.Iterable), and you can define additional generic protocols. Generic protocols mostly follow the normal rules for generic classes. Example:

```
from typing import TypeVar
from typing_extensions import Protocol

T = TypeVar('T')

class Box(Protocol[T]):
    content: T

def do_stuff(one: Box[str], other: Box[bytes]) -> None: ...

class StringWrapper:
    def __init__(self, content: str) -> None:
        self.content = content

class BytesWrapper:
    def __init__(self, content: bytes) -> None:
        self.content = content

do_stuff(StringWrapper('one'), BytesWrapper(b'other'))  # OK

x: Box[float] = ...
y: Box[int] = ...
x = y  # Error -- Box is invariant
```

Note that `class ClassName(Protocol[T])` is allowed as a shorthand for `class ClassName(Protocol, Generic[T])`, as per [**PEP 544: Generic protocols**](https://peps.python.org/pep-0544/#generic-protocols).

The main difference between generic protocols and ordinary generic classes is that mypy checks that the declared variances of generic type variables in a protocol match how they are used in the protocol definition. The protocol in this example is rejected, since the type variable `T` is used covariantly as a return type, but the type variable is invariant:

```
from typing import Protocol, TypeVar

T = TypeVar('T')

class ReadOnlyBox(Protocol[T]):  # error: Invariant type variable "T" used in protocol where covariant one is expected
    def content(self) -> T: ...
```

This example correctly uses a covariant type variable:

```
from typing import Protocol, TypeVar

T_co = TypeVar('T_co', covariant=True)

class ReadOnlyBox(Protocol[T_co]):  # OK
    def content(self) -> T_co: ...

ax: ReadOnlyBox[float] = ...
ay: ReadOnlyBox[int] = ...
ax = ay  # OK -- ReadOnlyBox is covariant
```

See [Variance of generic types](#variance-of-generics) for more about variance.

Generic protocols can also be recursive. Example:

```
T = TypeVar('T')

class Linked(Protocol[T]):
    val: T
    def next(self) -> 'Linked[T]': ...

class L:
    val: int
    def next(self) -> 'L': ...

def last(seq: Linked[T]) -> T: ...

result = last(L())
reveal_type(result)  # Revealed type is "builtins.int"
```

#### Generic type aliases[#](#generic-type-aliases)

Type aliases can be generic. In this case they can be used in two ways: Subscripted aliases are equivalent to original types with substituted type variables, so the number of type arguments must match the number of free type variables in the generic type alias. Unsubscripted aliases are treated as original types with free variables replaced with `Any`.
Examples (following [**PEP 484: Type aliases**](https://peps.python.org/pep-0484/#type-aliases)):

```
from typing import TypeVar, Iterable, Union, Callable

S = TypeVar('S')

TInt = tuple[int, S]
UInt = Union[S, int]
CBack = Callable[..., S]

def response(query: str) -> UInt[str]:  # Same as Union[str, int]
    ...
def activate(cb: CBack[S]) -> S:        # Same as Callable[..., S]
    ...
table_entry: TInt  # Same as tuple[int, Any]

T = TypeVar('T', int, float, complex)

Vec = Iterable[tuple[T, T]]

def inproduct(v: Vec[T]) -> T:
    return sum(x*y for x, y in v)
def dilate(v: Vec[T], scale: T) -> Vec[T]:
    return ((x * scale, y * scale) for x, y in v)

v1: Vec[int] = []       # Same as Iterable[tuple[int, int]]
v2: Vec = []            # Same as Iterable[tuple[Any, Any]]
v3: Vec[int, int] = []  # Error: Invalid alias, too many type arguments!
```

Type aliases can be imported from modules just like other names. An alias can also target another alias, although building complex chains of aliases is not recommended – this impedes code readability, thus defeating the purpose of using aliases. Example:

```
from typing import TypeVar, Generic, Optional
from example1 import AliasType
from example2 import Vec

# AliasType and Vec are type aliases (Vec as defined above)

def fun() -> AliasType: ...

T = TypeVar('T')

class NewVec(Vec[T]): ...

for i, j in NewVec[int]():
    ...

OIntVec = Optional[Vec[int]]
```

Using type variable bounds or values in generic aliases has the same effect as in generic classes/functions.

#### Generic class internals[#](#generic-class-internals)

You may wonder what happens at runtime when you index a generic class. Indexing returns a *generic alias* to the original class that returns instances of the original class on instantiation:

```
>>> from typing import TypeVar, Generic
>>> T = TypeVar('T')
>>> class Stack(Generic[T]): ...
>>> Stack
__main__.Stack
>>> Stack[int]
__main__.Stack[int]
>>> instance = Stack[int]()
>>> instance.__class__
__main__.Stack
```

Generic aliases can be instantiated or subclassed, similar to real classes, but the above examples illustrate that type variables are erased at runtime. Generic `Stack` instances are just ordinary Python objects, and they have no extra runtime overhead or magic due to being generic, other than a metaclass that overloads the indexing operator.

Note that in Python 3.8 and lower, the built-in types [`list`](https://docs.python.org/3/library/stdtypes.html#list), [`dict`](https://docs.python.org/3/library/stdtypes.html#dict) and others do not support indexing. This is why we have the aliases [`List`](https://docs.python.org/3/library/typing.html#typing.List), [`Dict`](https://docs.python.org/3/library/typing.html#typing.Dict) and so on in the [`typing`](https://docs.python.org/3/library/typing.html#module-typing) module. Indexing these aliases gives you a generic alias that resembles generic aliases constructed by directly indexing the target class in more recent versions of Python:

```
>>> # Only relevant for Python 3.8 and below
>>> # For Python 3.9 onwards, prefer `list[int]` syntax
>>> from typing import List
>>> List[int]
typing.List[int]
```

Note that the generic aliases in `typing` don't support constructing instances:

```
>>> from typing import List
>>> List[int]()
Traceback (most recent call last):
...
TypeError: Type List cannot be instantiated; use list() instead
```

### More types[#](#more-types)

This section introduces a few additional kinds of types, including [`NoReturn`](https://docs.python.org/3/library/typing.html#typing.NoReturn), [`NewType`](https://docs.python.org/3/library/typing.html#typing.NewType), and types for async code. It also discusses how to give functions more precise types using overloads.
All of these are only situationally useful, so feel free to skip this section and come back when you have a need for some of them.

Here's a quick summary of what's covered here:

* [`NoReturn`](https://docs.python.org/3/library/typing.html#typing.NoReturn) lets you tell mypy that a function never returns normally.
* [`NewType`](https://docs.python.org/3/library/typing.html#typing.NewType) lets you define a variant of a type that is treated as a separate type by mypy but is identical to the original type at runtime. For example, you can have `UserId` as a variant of `int` that is just an `int` at runtime.
* [`@overload`](https://docs.python.org/3/library/typing.html#typing.overload) lets you define a function that can accept multiple distinct signatures. This is useful if you need to encode a relationship between the arguments and the return type that would be difficult to express normally.
* Async types let you type check programs using `async` and `await`.

#### The NoReturn type[#](#the-noreturn-type)

Mypy provides support for functions that never return. For example, a function that unconditionally raises an exception:

```
from typing import NoReturn

def stop() -> NoReturn:
    raise Exception('no way')
```

Mypy will ensure that functions annotated as returning [`NoReturn`](https://docs.python.org/3/library/typing.html#typing.NoReturn) truly never return, either implicitly or explicitly. Mypy will also recognize that the code after calls to such functions is unreachable and will behave accordingly:

```
def f(x: int) -> int:
    if x == 0:
        return x
    stop()
    return 'whatever works'  # No error in an unreachable block
```

In earlier Python versions you need to install `typing_extensions` using pip to use [`NoReturn`](https://docs.python.org/3/library/typing.html#typing.NoReturn) in your code.
Python 3 command line:

```
python3 -m pip install --upgrade typing-extensions
```

#### NewTypes[#](#newtypes)

There are situations where you may want to avoid programming errors by creating simple derived classes that are only used to distinguish certain values from base class instances. Example:

```
class UserId(int):
    pass

def get_by_user_id(user_id: UserId):
    ...
```

However, this approach introduces some runtime overhead. To avoid this, the typing module provides a helper object [`NewType`](https://docs.python.org/3/library/typing.html#typing.NewType) that creates simple unique types with almost zero runtime overhead. Mypy will treat the statement `Derived = NewType('Derived', Base)` as being roughly equivalent to the following definition:

```
class Derived(Base):
    def __init__(self, _x: Base) -> None:
        ...
```

However, at runtime, `NewType('Derived', Base)` will return a dummy callable that simply returns its argument:

```
def Derived(_x):
    return _x
```

Mypy will require explicit casts from `int` where `UserId` is expected, while implicitly casting from `UserId` where `int` is expected. Examples:

```
from typing import NewType

UserId = NewType('UserId', int)

def name_by_id(user_id: UserId) -> str:
    ...

UserId('user')  # Fails type check

name_by_id(42)          # Fails type check
name_by_id(UserId(42))  # OK

num: int = UserId(5) + 1
```

[`NewType`](https://docs.python.org/3/library/typing.html#typing.NewType) accepts exactly two arguments. The first argument must be a string literal containing the name of the new type and must equal the name of the variable to which the new type is assigned. The second argument must be a properly subclassable class, i.e., not a type construct like [`Union`](https://docs.python.org/3/library/typing.html#typing.Union), etc.
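Since the callable returned by `NewType` just hands back its argument, values of the new type are plain instances of the base type at runtime; only mypy distinguishes them. A quick demonstration:

```python
from typing import NewType

UserId = NewType('UserId', int)

uid = UserId(5)
# At runtime, uid is an ordinary int; the UserId wrapper has no effect.
print(type(uid))  # <class 'int'>
print(uid + 1)    # 6
```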
The callable returned by [`NewType`](https://docs.python.org/3/library/typing.html#typing.NewType) accepts only one argument; this is equivalent to supporting only one constructor accepting an instance of the base class (see above). Example: ``` from typing import NewType class PacketId: def __init__(self, major: int, minor: int) -> None: self._major = major self._minor = minor TcpPacketId = NewType('TcpPacketId', PacketId) packet = PacketId(100, 100) tcp_packet = TcpPacketId(packet) # OK tcp_packet = TcpPacketId(127, 0) # Fails in type checker and at runtime ``` You cannot use [`isinstance()`](https://docs.python.org/3/library/functions.html#isinstance) or [`issubclass()`](https://docs.python.org/3/library/functions.html#issubclass) on the object returned by [`NewType`](https://docs.python.org/3/library/typing.html#typing.NewType), nor can you subclass an object returned by [`NewType`](https://docs.python.org/3/library/typing.html#typing.NewType). Note Unlike type aliases, [`NewType`](https://docs.python.org/3/library/typing.html#typing.NewType) will create an entirely new and unique type when used. The intended purpose of [`NewType`](https://docs.python.org/3/library/typing.html#typing.NewType) is to help you detect cases where you accidentally mixed together the old base type and the new derived type. For example, the following will successfully typecheck when using type aliases: ``` UserId = int def name_by_id(user_id: UserId) -> str: ... name_by_id(3) # ints and UserId are synonymous ``` But a similar example using [`NewType`](https://docs.python.org/3/library/typing.html#typing.NewType) will not typecheck: ``` from typing import NewType UserId = NewType('UserId', int) def name_by_id(user_id: UserId) -> str: ... 
name_by_id(3) # int is not the same as UserId ``` #### Function overloading[#](#function-overloading) Sometimes the arguments and types in a function depend on each other in ways that can’t be captured with a [`Union`](https://docs.python.org/3/library/typing.html#typing.Union). For example, suppose we want to write a function that can accept x-y coordinates. If we pass in just a single x-y coordinate, we return a `ClickEvent` object. However, if we pass in two x-y coordinates, we return a `DragEvent` object. Our first attempt at writing this function might look like this: ``` from typing import Union, Optional def mouse_event(x1: int, y1: int, x2: Optional[int] = None, y2: Optional[int] = None) -> Union[ClickEvent, DragEvent]: if x2 is None and y2 is None: return ClickEvent(x1, y1) elif x2 is not None and y2 is not None: return DragEvent(x1, y1, x2, y2) else: raise TypeError("Bad arguments") ``` While this function signature works, it’s too loose: it implies `mouse_event` could return either object regardless of the number of arguments we pass in. It also does not prohibit a caller from passing in the wrong number of ints: mypy would treat calls like `mouse_event(1, 2, 20)` as being valid, for example. We can do better by using [**overloading**](https://peps.python.org/pep-0484/#function-method-overloading) which lets us give the same function multiple type annotations (signatures) to more accurately describe the function’s behavior: ``` from typing import Optional, Union, overload # Overload *variants* for 'mouse_event'. # These variants give extra information to the type checker. # They are ignored at runtime. @overload def mouse_event(x1: int, y1: int) -> ClickEvent: ... @overload def mouse_event(x1: int, y1: int, x2: int, y2: int) -> DragEvent: ... # The actual *implementation* of 'mouse_event'. # The implementation contains the actual runtime logic. # # It may or may not have type hints.
If it does, mypy # will check the body of the implementation against the # type hints. # # Mypy will also check and make sure the signature is # consistent with the provided variants. def mouse_event(x1: int, y1: int, x2: Optional[int] = None, y2: Optional[int] = None) -> Union[ClickEvent, DragEvent]: if x2 is None and y2 is None: return ClickEvent(x1, y1) elif x2 is not None and y2 is not None: return DragEvent(x1, y1, x2, y2) else: raise TypeError("Bad arguments") ``` This allows mypy to understand calls to `mouse_event` much more precisely. For example, mypy will understand that `mouse_event(5, 25)` will always have a return type of `ClickEvent` and will report errors for calls like `mouse_event(5, 25, 2)`. As another example, suppose we want to write a custom container class that implements the [`__getitem__`](https://docs.python.org/3/reference/datamodel.html#object.__getitem__) method (`[]` bracket indexing). If this method receives an integer we return a single item. If it receives a `slice`, we return a [`Sequence`](https://docs.python.org/3/library/typing.html#typing.Sequence) of items. We can precisely encode this relationship between the argument and the return type by using overloads like so: ``` from typing import Sequence, TypeVar, Union, overload T = TypeVar('T') class MyList(Sequence[T]): @overload def __getitem__(self, index: int) -> T: ... @overload def __getitem__(self, index: slice) -> Sequence[T]: ... def __getitem__(self, index: Union[int, slice]) -> Union[T, Sequence[T]]: if isinstance(index, int): # Return a T here elif isinstance(index, slice): # Return a sequence of Ts here else: raise TypeError(...) ``` Note If you just need to constrain a type variable to certain types or subtypes, you can use a [value restriction](index.html#type-variable-value-restriction). The default values of a function’s arguments don’t affect its signature – only the absence or presence of a default value does. 
So in order to reduce redundancy, it’s possible to replace default values in overload definitions with `...` as a placeholder: ``` from typing import overload class M: ... @overload def get_model(model_or_pk: M, flag: bool = ...) -> M: ... @overload def get_model(model_or_pk: int, flag: bool = ...) -> M | None: ... def get_model(model_or_pk: int | M, flag: bool = True) -> M | None: ... ``` ##### Runtime behavior[#](#runtime-behavior) An overloaded function must consist of two or more overload *variants* followed by an *implementation*. The variants and the implementations must be adjacent in the code: think of them as one indivisible unit. The variant bodies must all be empty; only the implementation is allowed to contain code. This is because at runtime, the variants are completely ignored: they’re overridden by the final implementation function. This means that an overloaded function is still an ordinary Python function! There is no automatic dispatch handling and you must manually handle the different types in the implementation (e.g. by using `if` statements and [`isinstance`](https://docs.python.org/3/library/functions.html#isinstance) checks). If you are adding an overload within a stub file, the implementation function should be omitted: stubs do not contain runtime logic. Note While we can leave the variant body empty using the `pass` keyword, the more common convention is to instead use the ellipsis (`...`) literal. ##### Type checking calls to overloads[#](#type-checking-calls-to-overloads) When you call an overloaded function, mypy will infer the correct return type by picking the best matching variant, after taking into consideration both the argument types and arity. However, a call is never type checked against the implementation. This is why mypy will report calls like `mouse_event(5, 25, 3)` as being invalid even though it matches the implementation signature. 
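To make the runtime behavior concrete, here is a small self-contained sketch (the `double` function is hypothetical, not part of mypy): the variants are overwritten by the implementation, which must branch on the argument types itself.

```python
from typing import Union, overload

@overload
def double(x: int) -> int: ...
@overload
def double(x: str) -> str: ...

def double(x: Union[int, str]) -> Union[int, str]:
    # Only this implementation exists at runtime; the variants above
    # are discarded, so we dispatch manually with isinstance checks.
    if isinstance(x, int):
        return x * 2
    return x + x

print(double(21))    # 42
print(double("ab"))  # abab
```

Statically, mypy infers `double(21)` as `int` and `double("ab")` as `str` from the variants, even though only the implementation runs.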
If there are multiple equally good matching variants, mypy will select the variant that was defined first. For example, consider the following program: ``` # For Python 3.8 and below you must use `typing.List` instead of `list`. e.g. # from typing import List from typing import overload @overload def summarize(data: list[int]) -> float: ... @overload def summarize(data: list[str]) -> str: ... def summarize(data): if not data: return 0.0 elif isinstance(data[0], int): # Do int specific code else: # Do str-specific code # What is the type of 'output'? float or str? output = summarize([]) ``` The `summarize([])` call matches both variants: an empty list could be either a `list[int]` or a `list[str]`. In this case, mypy will break the tie by picking the first matching variant: `output` will have an inferred type of `float`. The implementor is responsible for making sure `summarize` breaks ties in the same way at runtime. However, there are two exceptions to the “pick the first match” rule. First, if multiple variants match due to an argument being of type `Any`, mypy will make the inferred type also be `Any`: ``` dynamic_var: Any = some_dynamic_function() # output2 is of type 'Any' output2 = summarize(dynamic_var) ``` Second, if multiple variants match due to one or more of the arguments being a union, mypy will make the inferred type be the union of the matching variant returns: ``` some_list: Union[list[int], list[str]] # output3 is of type 'Union[float, str]' output3 = summarize(some_list) ``` Note Due to the “pick the first match” rule, changing the order of your overload variants can change how mypy type checks your program. To minimize potential issues, we recommend that you: 1. Make sure your overload variants are listed in the same order as the runtime checks (e.g. [`isinstance`](https://docs.python.org/3/library/functions.html#isinstance) checks) in your implementation. 2. Order your variants and runtime checks from most to least specific. 
(See the following section for an example). ##### Type checking the variants[#](#type-checking-the-variants) Mypy will perform several checks on your overload variant definitions to ensure they behave as expected. First, mypy will check and make sure that no overload variant is shadowing a subsequent one. For example, consider the following function which adds together two `Expression` objects, and contains a special-case to handle receiving two `Literal` types: ``` from typing import overload, Union class Expression: # ...snip... class Literal(Expression): # ...snip... # Warning -- the first overload variant shadows the second! @overload def add(left: Expression, right: Expression) -> Expression: ... @overload def add(left: Literal, right: Literal) -> Literal: ... def add(left: Expression, right: Expression) -> Expression: # ...snip... ``` While this code snippet is technically type-safe, it does contain an anti-pattern: the second variant will never be selected! If we try calling `add(Literal(3), Literal(4))`, mypy will always pick the first variant and evaluate the function call to be of type `Expression`, not `Literal`. This is because `Literal` is a subtype of `Expression`, which means the “pick the first match” rule will always halt after considering the first overload. Because having an overload variant that can never be matched is almost certainly a mistake, mypy will report an error. To fix the error, we can either 1) delete the second overload or 2) swap the order of the overloads: ``` # Everything is ok now -- the variants are correctly ordered # from most to least specific. @overload def add(left: Literal, right: Literal) -> Literal: ... @overload def add(left: Expression, right: Expression) -> Expression: ... def add(left: Expression, right: Expression) -> Expression: # ...snip... ``` Mypy will also type check the different variants and flag any overloads that have inherently unsafely overlapping variants. 
For example, consider the following unsafe overload definition: ``` from typing import overload, Union @overload def unsafe_func(x: int) -> int: ... @overload def unsafe_func(x: object) -> str: ... def unsafe_func(x: object) -> Union[int, str]: if isinstance(x, int): return 42 else: return "some string" ``` On the surface, this function definition appears to be fine. However, it will result in a discrepancy between the inferred type and the actual runtime type when we try using it like so: ``` some_obj: object = 42 unsafe_func(some_obj) + " danger danger" # Type checks, yet crashes at runtime! ``` Since `some_obj` is of type [`object`](https://docs.python.org/3/library/functions.html#object), mypy will decide that `unsafe_func` must return something of type `str` and concludes the above will type check. But in reality, `unsafe_func` will return an int, causing the code to crash at runtime! To prevent these kinds of issues, mypy will detect and prohibit inherently unsafely overlapping overloads on a best-effort basis. Two variants are considered unsafely overlapping when both of the following are true: 1. All of the arguments of the first variant are compatible with the second. 2. The return type of the first variant is *not* compatible with (e.g. is not a subtype of) the second. So in this example, the `int` argument in the first variant is a subtype of the `object` argument in the second, yet the `int` return type is not a subtype of `str`. Both conditions are true, so mypy will correctly flag `unsafe_func` as being unsafe. However, mypy will not detect *all* unsafe uses of overloads. For example, suppose we modify the above snippet so it calls `summarize` instead of `unsafe_func`: ``` some_list: list[str] = [] summarize(some_list) + "danger danger" # Type safe, yet crashes at runtime! ``` We run into a similar issue here. This program type checks if we look just at the annotations on the overloads. 
But since `summarize(...)` is designed to be biased towards returning a float when it receives an empty list, this program will actually crash at runtime. The reason mypy does not flag definitions like `summarize` as being potentially unsafe is because if it did, it would be extremely difficult to write a safe overload. For example, suppose we define an overload with two variants that accept types `A` and `B` respectively. Even if those two types were completely unrelated, the user could still potentially trigger a runtime error similar to the ones above by passing in a value of some third type `C` that inherits from both `A` and `B`. Thankfully, these types of situations are relatively rare. What this does mean, however, is that you should exercise caution when designing or using an overloaded function that can potentially receive values that are an instance of two seemingly unrelated types. ##### Type checking the implementation[#](#type-checking-the-implementation) The body of an implementation is type-checked against the type hints provided on the implementation. For example, in the `MyList` example up above, the code in the body is checked with argument list `index: Union[int, slice]` and a return type of `Union[T, Sequence[T]]`. If there are no annotations on the implementation, then the body is not type checked. If you want to force mypy to check the body anyway, use the [`--check-untyped-defs`](index.html#cmdoption-mypy-check-untyped-defs) flag ([more details here](index.html#untyped-definitions-and-calls)). The variants must also be compatible with the implementation type hints. In the `MyList` example, mypy will check that the parameter type `int` and the return type `T` are compatible with `Union[int, slice]` and `Union[T, Sequence[T]]` for the first variant. For the second variant it verifies the parameter type `slice` and the return type `Sequence[T]` are compatible with `Union[int, slice]` and `Union[T, Sequence[T]]`.
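For reference, here is a runnable completion of the `MyList` example, filling in the elided branch bodies (the list-backed storage and the `__init__`/`__len__` methods are assumptions added for illustration; `Sequence` requires them to instantiate the class):

```python
from typing import Sequence, TypeVar, Union, overload

T = TypeVar('T')

class MyList(Sequence[T]):
    def __init__(self, *items: T) -> None:
        self._items = list(items)

    def __len__(self) -> int:
        return len(self._items)

    @overload
    def __getitem__(self, index: int) -> T: ...
    @overload
    def __getitem__(self, index: slice) -> Sequence[T]: ...

    def __getitem__(self, index: Union[int, slice]) -> Union[T, Sequence[T]]:
        # This body is checked against the implementation hints above.
        if isinstance(index, int):
            return self._items[index]      # Returns a T
        elif isinstance(index, slice):
            return self._items[index]      # Returns a sequence of Ts
        else:
            raise TypeError(f"indices must be int or slice, not {type(index).__name__}")

m = MyList(10, 20, 30)
print(m[0])   # 10
print(m[1:])  # [20, 30]
```

Mypy infers `m[0]` as `int` and `m[1:]` as `Sequence[int]`, matching the variant signatures rather than the broad implementation union.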
Note The overload semantics documented above are new as of mypy 0.620. Previously, mypy used to perform type erasure on all overload variants. For example, the `summarize` example from the previous section used to be illegal because `list[str]` and `list[int]` both erased to just `list[Any]`. This restriction was removed in mypy 0.620. Mypy also previously used to select the best matching variant using a different algorithm. If this algorithm failed to find a match, it would default to returning `Any`. The new algorithm uses the “pick the first match” rule and will fall back to returning `Any` only if the input arguments also contain `Any`. ##### Conditional overloads[#](#conditional-overloads) Sometimes it is useful to define overloads conditionally. Common use cases include types that are unavailable at runtime or that only exist in a certain Python version. All existing overload rules still apply. For example, there must be at least two overloads. Note Mypy can only infer a limited number of conditions. Supported ones currently include [`TYPE_CHECKING`](https://docs.python.org/3/library/typing.html#typing.TYPE_CHECKING), `MYPY`, [Python version and system platform checks](index.html#version-and-platform-checks), [`--always-true`](index.html#cmdoption-mypy-always-true), and [`--always-false`](index.html#cmdoption-mypy-always-false) values. ``` from typing import TYPE_CHECKING, Any, overload if TYPE_CHECKING: class A: ... class B: ... if TYPE_CHECKING: @overload def func(var: A) -> A: ... @overload def func(var: B) -> B: ... def func(var: Any) -> Any: return var reveal_type(func(A())) # Revealed type is "A" ``` ``` # flags: --python-version 3.10 import sys from typing import Any, overload class A: ... class B: ... class C: ... class D: ... if sys.version_info < (3, 7): @overload def func(var: A) -> A: ... elif sys.version_info >= (3, 10): @overload def func(var: B) -> B: ... else: @overload def func(var: C) -> C: ... @overload def func(var: D) -> D: ... 
def func(var: Any) -> Any: return var reveal_type(func(B())) # Revealed type is "B" reveal_type(func(C())) # No overload variant of "func" matches argument type "C" # Possible overload variants: # def func(var: B) -> B # def func(var: D) -> D # Revealed type is "Any" ``` Note In the last example, mypy is executed with [`--python-version 3.10`](index.html#cmdoption-mypy-python-version). Therefore, the condition `sys.version_info >= (3, 10)` will match and the overload for `B` will be added. The overloads for `A` and `C` are ignored! The overload for `D` is not defined conditionally and thus is also added. When mypy cannot infer a condition to be always `True` or always `False`, an error is emitted. ``` from typing import Any, overload class A: ... class B: ... def g(bool_var: bool) -> None: if bool_var: # Condition can't be inferred, unable to merge overloads @overload def func(var: A) -> A: ... @overload def func(var: B) -> B: ... def func(var: Any) -> Any: ... reveal_type(func(A())) # Revealed type is "Any" ``` #### Advanced uses of self-types[#](#advanced-uses-of-self-types) Normally, mypy doesn’t require annotations for the first arguments of instance and class methods. However, they may be needed to have more precise static typing for certain programming patterns. 
##### Restricted methods in generic classes[#](#restricted-methods-in-generic-classes) In generic classes some methods may be allowed to be called only for certain values of type arguments: ``` T = TypeVar('T') class Tag(Generic[T]): item: T def uppercase_item(self: Tag[str]) -> str: return self.item.upper() def label(ti: Tag[int], ts: Tag[str]) -> None: ti.uppercase_item() # E: Invalid self argument "Tag[int]" to attribute function # "uppercase_item" with type "Callable[[Tag[str]], str]" ts.uppercase_item() # This is OK ``` This pattern also allows matching on nested types in situations where the type argument is itself generic: ``` T = TypeVar('T', covariant=True) S = TypeVar('S') class Storage(Generic[T]): def __init__(self, content: T) -> None: self.content = content def first_chunk(self: Storage[Sequence[S]]) -> S: return self.content[0] page: Storage[list[str]] page.first_chunk() # OK, type is "str" Storage(0).first_chunk() # Error: Invalid self argument "Storage[int]" to attribute function # "first_chunk" with type "Callable[[Storage[Sequence[S]]], S]" ``` Finally, one can use overloads on self-type to express precise types of some tricky methods: ``` T = TypeVar('T') class Tag(Generic[T]): @overload def export(self: Tag[str]) -> str: ... @overload def export(self, converter: Callable[[T], str]) -> str: ... def export(self, converter=None): if isinstance(self.item, str): return self.item return converter(self.item) ``` In particular, an [`__init__()`](https://docs.python.org/3/reference/datamodel.html#object.__init__) method overloaded on self-type may be useful to annotate generic class constructors where type arguments depend on constructor parameters in a non-trivial way, see e.g. [`Popen`](https://docs.python.org/3/library/subprocess.html#subprocess.Popen). ##### Mixin classes[#](#mixin-classes) Using host class protocol as a self-type in mixin methods allows more code re-usability for static typing of mixin classes. 
For example, one can define a protocol that defines common functionality for host classes instead of adding required abstract methods to every mixin: ``` class Lockable(Protocol): @property def lock(self) -> Lock: ... class AtomicCloseMixin: def atomic_close(self: Lockable) -> int: with self.lock: # perform actions class AtomicOpenMixin: def atomic_open(self: Lockable) -> int: with self.lock: # perform actions class File(AtomicCloseMixin, AtomicOpenMixin): def __init__(self) -> None: self.lock = Lock() class Bad(AtomicCloseMixin): pass f = File() b: Bad f.atomic_close() # OK b.atomic_close() # Error: Invalid self type for "atomic_close" ``` Note that the explicit self-type is *required* to be a protocol whenever it is not a supertype of the current class. In this case mypy will check the validity of the self-type only at the call site. ##### Precise typing of alternative constructors[#](#precise-typing-of-alternative-constructors) Some classes may define alternative constructors. If these classes are generic, self-type allows giving them precise signatures: ``` T = TypeVar('T') class Base(Generic[T]): Q = TypeVar('Q', bound='Base[T]') def __init__(self, item: T) -> None: self.item = item @classmethod def make_pair(cls: Type[Q], item: T) -> tuple[Q, Q]: return cls(item), cls(item) class Sub(Base[T]): ... pair = Sub.make_pair('yes') # Type is "tuple[Sub[str], Sub[str]]" bad = Sub[int].make_pair('no') # Error: Argument 1 to "make_pair" of "Base" # has incompatible type "str"; expected "int" ``` #### Typing async/await[#](#typing-async-await) Mypy lets you type coroutines that use the `async/await` syntax. For more information regarding coroutines, see [**PEP 492**](https://peps.python.org/pep-0492/) and the [asyncio documentation](https://docs.python.org/3/library/asyncio.html). Functions defined using `async def` are typed similar to normal functions. 
The return type annotation should be the same as the type of the value you expect to get back when `await`-ing the coroutine. ``` import asyncio async def format_string(tag: str, count: int) -> str: return f'T-minus {count} ({tag})' async def countdown(tag: str, count: int) -> str: while count > 0: my_str = await format_string(tag, count) # type is inferred to be str print(my_str) await asyncio.sleep(0.1) count -= 1 return "Blastoff!" asyncio.run(countdown("Millennium Falcon", 5)) ``` The result of calling an `async def` function *without awaiting* will automatically be inferred to be a value of type [`Coroutine[Any, Any, T]`](https://docs.python.org/3/library/typing.html#typing.Coroutine), which is a subtype of [`Awaitable[T]`](https://docs.python.org/3/library/typing.html#typing.Awaitable): ``` my_coroutine = countdown("Millennium Falcon", 5) reveal_type(my_coroutine) # Revealed type is "typing.Coroutine[Any, Any, builtins.str]" ``` ##### Asynchronous iterators[#](#asynchronous-iterators) If you have an asynchronous iterator, you can use the [`AsyncIterator`](https://docs.python.org/3/library/typing.html#typing.AsyncIterator) type in your annotations: ``` from typing import Optional, AsyncIterator import asyncio class arange: def __init__(self, start: int, stop: int, step: int) -> None: self.start = start self.stop = stop self.step = step self.count = start - step def __aiter__(self) -> AsyncIterator[int]: return self async def __anext__(self) -> int: self.count += self.step if self.count == self.stop: raise StopAsyncIteration else: return self.count async def run_countdown(tag: str, countdown: AsyncIterator[int]) -> str: async for i in countdown: print(f'T-minus {i} ({tag})') await asyncio.sleep(0.1) return "Blastoff!" 
asyncio.run(run_countdown("Serenity", arange(5, 0, -1))) ``` Async generators (introduced in [**PEP 525**](https://peps.python.org/pep-0525/)) are an easy way to create async iterators: ``` from typing import AsyncGenerator, Optional import asyncio # Could also type this as returning AsyncIterator[int] async def arange(start: int, stop: int, step: int) -> AsyncGenerator[int, None]: current = start while (step > 0 and current < stop) or (step < 0 and current > stop): yield current current += step asyncio.run(run_countdown("Battlestar Galactica", arange(5, 0, -1))) ``` One common confusion is that the presence of a `yield` statement in an `async def` function has an effect on the type of the function: ``` from typing import AsyncIterator async def arange(stop: int) -> AsyncIterator[int]: # When called, arange gives you an async iterator # Equivalent to Callable[[int], AsyncIterator[int]] i = 0 while i < stop: yield i i += 1 async def coroutine(stop: int) -> AsyncIterator[int]: # When called, coroutine gives you something you can await to get an async iterator # Equivalent to Callable[[int], Coroutine[Any, Any, AsyncIterator[int]]] return arange(stop) async def main() -> None: reveal_type(arange(5)) # Revealed type is "typing.AsyncIterator[builtins.int]" reveal_type(coroutine(5)) # Revealed type is "typing.Coroutine[Any, Any, typing.AsyncIterator[builtins.int]]" await arange(5) # Error: Incompatible types in "await" (actual type "AsyncIterator[int]", expected type "Awaitable[Any]") reveal_type(await coroutine(5)) # Revealed type is "typing.AsyncIterator[builtins.int]" ``` This can sometimes come up when trying to define base classes, Protocols or overloads: ``` from typing import AsyncIterator, Protocol, overload class LauncherIncorrect(Protocol): # Because launch does not have yield, this has type # Callable[[], Coroutine[Any, Any, AsyncIterator[int]]] # instead of # Callable[[], AsyncIterator[int]] async def launch(self) -> AsyncIterator[int]: raise 
NotImplementedError class LauncherCorrect(Protocol): def launch(self) -> AsyncIterator[int]: raise NotImplementedError class LauncherAlsoCorrect(Protocol): async def launch(self) -> AsyncIterator[int]: raise NotImplementedError if False: yield 0 # The type of the overloads is independent of the implementation. # In particular, their type is not affected by whether or not the # implementation contains a `yield`. # Use of `def` makes it clear the type is Callable[..., AsyncIterator[int]], # whereas with `async def` it would be Callable[..., Coroutine[Any, Any, AsyncIterator[int]]] @overload def launch(*, count: int = ...) -> AsyncIterator[int]: ... @overload def launch(*, time: float = ...) -> AsyncIterator[int]: ... async def launch(*, count: int = 0, time: float = 0) -> AsyncIterator[int]: # The implementation of launch is an async generator and contains a yield yield 0 ``` ### Literal types and Enums[#](#literal-types-and-enums) #### Literal types[#](#literal-types) Literal types let you indicate that an expression is equal to some specific primitive value. For example, if we annotate a variable with type `Literal["foo"]`, mypy will understand that variable is not only of type `str`, but is also equal to specifically the string `"foo"`. This feature is primarily useful when annotating functions that behave differently based on the exact value the caller provides. For example, suppose we have a function `fetch_data(...)` that returns `bytes` if the first argument is `True`, and `str` if it’s `False`. We can construct a precise type signature for this function using `Literal[...]` and overloads: ``` from typing import overload, Union, Literal # The first two overloads use Literal[...] so we can # have precise return types: @overload def fetch_data(raw: Literal[True]) -> bytes: ... @overload def fetch_data(raw: Literal[False]) -> str: ...
# The last overload is a fallback in case the caller # provides a regular bool: @overload def fetch_data(raw: bool) -> Union[bytes, str]: ... def fetch_data(raw: bool) -> Union[bytes, str]: # Implementation is omitted ... reveal_type(fetch_data(True)) # Revealed type is "bytes" reveal_type(fetch_data(False)) # Revealed type is "str" # Variables declared without annotations will continue to have an # inferred type of 'bool'. variable = True reveal_type(fetch_data(variable)) # Revealed type is "Union[bytes, str]" ``` Note The examples in this page import `Literal` as well as `Final` and `TypedDict` from the `typing` module. These types were added to `typing` in Python 3.8, but are also available for use in Python 3.4 - 3.7 via the `typing_extensions` package. ##### Parameterizing Literals[#](#parameterizing-literals) Literal types may contain one or more literal bools, ints, strs, bytes, and enum values. However, literal types **cannot** contain arbitrary expressions: types like `Literal[my_string.trim()]`, `Literal[x > 3]`, or `Literal[3j + 4]` are all illegal. Literals containing two or more values are equivalent to the union of those values. So, `Literal[-3, b"foo", MyEnum.A]` is equivalent to `Union[Literal[-3], Literal[b"foo"], Literal[MyEnum.A]]`. This makes writing more complex types involving literals a little more convenient. Literal types may also contain `None`. Mypy will treat `Literal[None]` as being equivalent to just `None`. This means that `Literal[4, None]`, `Union[Literal[4], None]`, and `Optional[Literal[4]]` are all equivalent. Literals may also contain aliases to other literal types. For example, the following program is legal: ``` PrimaryColors = Literal["red", "blue", "yellow"] SecondaryColors = Literal["purple", "green", "orange"] AllowedColors = Literal[PrimaryColors, SecondaryColors] def paint(color: AllowedColors) -> None: ... paint("red") # Type checks! 
paint("turquoise") # Does not type check ``` Literals may not contain any other kind of type or expression. This means doing `Literal[my_instance]`, `Literal[Any]`, `Literal[3.14]`, or `Literal[{"foo": 2, "bar": 5}]` are all illegal. ##### Declaring literal variables[#](#declaring-literal-variables) You must explicitly add an annotation to a variable to declare that it has a literal type: ``` a: Literal[19] = 19 reveal_type(a) # Revealed type is "Literal[19]" ``` In order to preserve backwards-compatibility, variables without this annotation are **not** assumed to be literals: ``` b = 19 reveal_type(b) # Revealed type is "int" ``` If you find repeating the value of the variable in the type hint to be tedious, you can instead change the variable to be `Final` (see [Final names, methods and classes](index.html#final-attrs)): ``` from typing import Final, Literal def expects_literal(x: Literal[19]) -> None: pass c: Final = 19 reveal_type(c) # Revealed type is "Literal[19]?" expects_literal(c) # ...and this type checks! ``` If you do not provide an explicit type in the `Final`, the type of `c` becomes *context-sensitive*: mypy will basically try “substituting” the original assigned value whenever it’s used before performing type checking. This is why the revealed type of `c` is `Literal[19]?`: the question mark at the end reflects this context-sensitive nature. For example, mypy will type check the above program almost as if it were written like so: ``` from typing import Final, Literal def expects_literal(x: Literal[19]) -> None: pass reveal_type(19) expects_literal(19) ``` This means that while changing a variable to be `Final` is not quite the same thing as adding an explicit `Literal[...]` annotation, it often leads to the same effect in practice. The main cases where the behavior of context-sensitive vs true literal types differ are when you try using those types in places that are not explicitly expecting a `Literal[...]`. 
For example, compare and contrast what happens when you try appending these types to a list: ``` from typing import Final, Literal a: Final = 19 b: Literal[19] = 19 # Mypy will choose to infer list[int] here. list_of_ints = [] list_of_ints.append(a) reveal_type(list_of_ints) # Revealed type is "list[int]" # But if the variable you're appending is an explicit Literal, mypy # will infer list[Literal[19]]. list_of_lits = [] list_of_lits.append(b) reveal_type(list_of_lits) # Revealed type is "list[Literal[19]]" ``` ##### Intelligent indexing[#](#intelligent-indexing) We can use Literal types to more precisely index into structured heterogeneous types such as tuples, NamedTuples, and TypedDicts. This feature is known as *intelligent indexing*. For example, when we index into a tuple using some int, the inferred type is normally the union of the tuple item types. However, if we want just the type corresponding to some particular index, we can use Literal types like so: ``` from typing import Final, Literal, TypedDict tup = ("foo", 3.4) # Indexing with an int literal gives us the exact type for that index reveal_type(tup[0]) # Revealed type is "str" # But what if we want the index to be a variable?
Normally mypy won't # know exactly what the index is and so will return a less precise type: int_index = 0 reveal_type(tup[int_index]) # Revealed type is "Union[str, float]" # But if we use either Literal types or a Final int, we can gain back # the precision we originally had: lit_index: Literal[0] = 0 fin_index: Final = 0 reveal_type(tup[lit_index]) # Revealed type is "str" reveal_type(tup[fin_index]) # Revealed type is "str" # We can do the same thing with with TypedDict and str keys: class MyDict(TypedDict): name: str main_id: int backup_id: int d: MyDict = {"name": "Saanvi", "main_id": 111, "backup_id": 222} name_key: Final = "name" reveal_type(d[name_key]) # Revealed type is "str" # You can also index using unions of literals id_key: Literal["main_id", "backup_id"] reveal_type(d[id_key]) # Revealed type is "int" ``` ##### Tagged unions[#](#tagged-unions) When you have a union of types, you can normally discriminate between each type in the union by using `isinstance` checks. For example, if you had a variable `x` of type `Union[int, str]`, you could write some code that runs only if `x` is an int by doing `if isinstance(x, int): ...`. However, it is not always possible or convenient to do this. For example, it is not possible to use `isinstance` to distinguish between two different TypedDicts since at runtime, your variable will simply be just a dict. Instead, what you can do is *label* or *tag* your TypedDicts with a distinct Literal type. Then, you can discriminate between each kind of TypedDict by checking the label: ``` from typing import Literal, TypedDict, Union class NewJobEvent(TypedDict): tag: Literal["new-job"] job_name: str config_file_path: str class CancelJobEvent(TypedDict): tag: Literal["cancel-job"] job_id: int Event = Union[NewJobEvent, CancelJobEvent] def process_event(event: Event) -> None: # Since we made sure both TypedDicts have a key named 'tag', it's # safe to do 'event["tag"]'. 
This expression normally has the type # Literal["new-job", "cancel-job"], but the check below will narrow # the type to either Literal["new-job"] or Literal["cancel-job"]. # # This in turns narrows the type of 'event' to either NewJobEvent # or CancelJobEvent. if event["tag"] == "new-job": print(event["job_name"]) else: print(event["job_id"]) ``` While this feature is mostly useful when working with TypedDicts, you can also use the same technique with regular objects, tuples, or namedtuples. Similarly, tags do not need to be specifically str Literals: they can be any type you can normally narrow within `if` statements and the like. For example, you could have your tags be int or Enum Literals or even regular classes you narrow using `isinstance()`: ``` from typing import Generic, TypeVar, Union T = TypeVar('T') class Wrapper(Generic[T]): def __init__(self, inner: T) -> None: self.inner = inner def process(w: Union[Wrapper[int], Wrapper[str]]) -> None: # Doing `if isinstance(w, Wrapper[int])` does not work: isinstance requires # that the second argument always be an *erased* type, with no generics. # This is because generics are a typing-only concept and do not exist at # runtime in a way `isinstance` can always check. # # However, we can side-step this by checking the type of `w.inner` to # narrow `w` itself: if isinstance(w.inner, int): reveal_type(w) # Revealed type is "Wrapper[int]" else: reveal_type(w) # Revealed type is "Wrapper[str]" ``` This feature is sometimes called “sum types” or “discriminated union types” in other programming languages. ##### Exhaustiveness checking[#](#exhaustiveness-checking) You may want to check that some code covers all possible `Literal` or `Enum` cases. 
Example:

```
from typing import Literal

PossibleValues = Literal['one', 'two']

def validate(x: PossibleValues) -> bool:
    if x == 'one':
        return True
    elif x == 'two':
        return False
    raise ValueError(f'Invalid value: {x}')

assert validate('one') is True
assert validate('two') is False
```

In the code above, it's easy to make a mistake. You can add a new literal value to `PossibleValues` but forget to handle it in the `validate` function:

```
PossibleValues = Literal['one', 'two', 'three']
```

Mypy won't catch that `'three'` is not covered. If you want mypy to perform an exhaustiveness check, you need to update your code to use an `assert_never()` check:

```
from typing import Literal, NoReturn
from typing_extensions import assert_never

PossibleValues = Literal['one', 'two']

def validate(x: PossibleValues) -> bool:
    if x == 'one':
        return True
    elif x == 'two':
        return False
    assert_never(x)
```

Now if you add a new value to `PossibleValues` but don't update `validate`, mypy will spot the error:

```
PossibleValues = Literal['one', 'two', 'three']

def validate(x: PossibleValues) -> bool:
    if x == 'one':
        return True
    elif x == 'two':
        return False
    # Error: Argument 1 to "assert_never" has incompatible type "Literal['three']";
    # expected "NoReturn"
    assert_never(x)
```

If runtime checking against unexpected values is not needed, you can leave out the `assert_never` call in the above example, and mypy will still generate an error about function `validate` returning without a value:

```
PossibleValues = Literal['one', 'two', 'three']

# Error: Missing return statement
def validate(x: PossibleValues) -> bool:
    if x == 'one':
        return True
    elif x == 'two':
        return False
```

Exhaustiveness checking is also supported for match statements (Python 3.10 and later):

```
def validate(x: PossibleValues) -> bool:
    match x:
        case 'one':
            return True
        case 'two':
            return False
    assert_never(x)
```

##### Limitations[#](#limitations)

Mypy will not understand expressions that use variables of type `Literal[..]`
on a deep level. For example, if you have a variable `a` of type `Literal[3]` and another variable `b` of type `Literal[5]`, mypy will infer that `a + b` has type `int`, **not** type `Literal[8]`.

The basic rule is that literal types are treated as just regular subtypes of whatever type the parameter has. For example, `Literal[3]` is treated as a subtype of `int` and so will inherit all of `int`'s methods directly. This means that `Literal[3].__add__` accepts the same arguments and has the same return type as `int.__add__`.

#### Enums[#](#enums)

Mypy has special support for [`enum.Enum`](https://docs.python.org/3/library/enum.html#enum.Enum) and its subclasses: [`enum.IntEnum`](https://docs.python.org/3/library/enum.html#enum.IntEnum), [`enum.Flag`](https://docs.python.org/3/library/enum.html#enum.Flag), [`enum.IntFlag`](https://docs.python.org/3/library/enum.html#enum.IntFlag), and [`enum.StrEnum`](https://docs.python.org/3/library/enum.html#enum.StrEnum).

```
from enum import Enum

class Direction(Enum):
    up = 'up'
    down = 'down'

reveal_type(Direction.up)    # Revealed type is "Literal[Direction.up]?"
reveal_type(Direction.down)  # Revealed type is "Literal[Direction.down]?"
```

You can use enums to annotate types as you would expect:

```
class Movement:
    def __init__(self, direction: Direction, speed: float) -> None:
        self.direction = direction
        self.speed = speed

Movement(Direction.up, 5.0)  # ok
Movement('up', 5.0)  # E: Argument 1 to "Movement" has incompatible type "str"; expected "Direction"
```

##### Exhaustiveness checking[#](#id3)

Similar to `Literal` types, `Enum` supports exhaustiveness checking.
Let's start with a definition:

```
from enum import Enum
from typing import NoReturn
from typing_extensions import assert_never

class Direction(Enum):
    up = 'up'
    down = 'down'
```

Now, let's use an exhaustiveness check:

```
def choose_direction(direction: Direction) -> None:
    if direction is Direction.up:
        reveal_type(direction)  # N: Revealed type is "Literal[Direction.up]"
        print('Going up!')
        return
    elif direction is Direction.down:
        print('Down')
        return
    # This line is never reached
    assert_never(direction)
```

If we forget to handle one of the cases, mypy will generate an error:

```
def choose_direction(direction: Direction) -> None:
    if direction == Direction.up:
        print('Going up!')
        return
    assert_never(direction)  # E: Argument 1 to "assert_never" has incompatible type "Direction"; expected "NoReturn"
```

Exhaustiveness checking is also supported for match statements (Python 3.10 and later).

##### Extra Enum checks[#](#extra-enum-checks)

Mypy also tries to support special features of `Enum` the same way Python's runtime does:

* Any `Enum` class with values is implicitly [final](index.html#final-attrs). This is what happens in CPython:

  ```
  >>> class AllDirection(Direction):
  ...     left = 'left'
  ...     right = 'right'
  Traceback (most recent call last):
  ...
  TypeError: AllDirection: cannot extend enumeration 'Direction'
  ```

  Mypy also catches this error:

  ```
  class AllDirection(Direction):  # E: Cannot inherit from final class "Direction"
      left = 'left'
      right = 'right'
  ```

* All `Enum` fields are implicitly `final` as well.

  ```
  Direction.up = '^'  # E: Cannot assign to final attribute "up"
  ```

* All field names are checked to be unique.

  ```
  class Some(Enum):
      x = 1
      x = 2  # E: Attempted to reuse member name "x" in Enum definition "Some"
  ```

* Base classes have no conflicts and mixin types are correct.

```
class WrongEnum(str, int, enum.Enum):  # E: Only a single data type mixin is allowed for Enum subtypes, found extra "int"
    ...
class MixinAfterEnum(enum.Enum, Mixin):  # E: No base classes are allowed after "enum.Enum"
    ...
```

### TypedDict[#](#typeddict)

Python programs often use dictionaries with string keys to represent objects. `TypedDict` lets you give precise types for dictionaries that represent objects with a fixed schema, such as `{'id': 1, 'items': ['x']}`. Here is a typical example:

```
movie = {'name': 'Blade Runner', 'year': 1982}
```

Only a fixed set of string keys is expected (`'name'` and `'year'` above), and each key has an independent value type (`str` for `'name'` and `int` for `'year'` above). We've previously seen the `dict[K, V]` type, which lets you declare uniform dictionary types, where every value has the same type, and arbitrary keys are supported. This is clearly not a good fit for `movie` above. Instead, you can use a `TypedDict` to give a precise type for objects like `movie`, where the type of each dictionary value depends on the key:

```
from typing_extensions import TypedDict

Movie = TypedDict('Movie', {'name': str, 'year': int})

movie: Movie = {'name': 'Blade Runner', 'year': 1982}
```

`Movie` is a `TypedDict` type with two items: `'name'` (with type `str`) and `'year'` (with type `int`). Note that we used an explicit type annotation for the `movie` variable. This type annotation is important – without it, mypy will try to infer a regular, uniform [`dict`](https://docs.python.org/3/library/stdtypes.html#dict) type for `movie`, which is not what we want here.

Note

If you pass a `TypedDict` object as an argument to a function, no type annotation is usually necessary since mypy can infer the desired type based on the declared argument type. Also, if an assignment target has been previously defined, and it has a `TypedDict` type, mypy will treat the assigned value as a `TypedDict`, not [`dict`](https://docs.python.org/3/library/stdtypes.html#dict).
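As a quick runtime sketch of the first point in the note: a dict literal passed directly to a function needs no annotation, because mypy infers the `TypedDict` type from the declared parameter. (The `release_year` helper here is illustrative, not from the docs.)

```python
from typing import TypedDict  # use typing_extensions on Python < 3.8

Movie = TypedDict('Movie', {'name': str, 'year': int})

def release_year(movie: Movie) -> int:
    return movie['year']

# No annotation is needed on the dict literal: mypy infers Movie
# from the declared parameter type of release_year().
print(release_year({'name': 'Blade Runner', 'year': 1982}))
```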
Now mypy will recognize these as valid:

```
name = movie['name']  # Okay; type of name is str
year = movie['year']  # Okay; type of year is int
```

Mypy will detect an invalid key as an error:

```
director = movie['director']  # Error: 'director' is not a valid key
```

Mypy will also reject a runtime-computed expression as a key, as it can't verify that it's a valid key. You can only use string literals as `TypedDict` keys.

The `TypedDict` type object can also act as a constructor. It returns a normal [`dict`](https://docs.python.org/3/library/stdtypes.html#dict) object at runtime – a `TypedDict` does not define a new runtime type:

```
toy_story = Movie(name='Toy Story', year=1995)
```

This is equivalent to just constructing a dictionary directly using `{ ... }` or `dict(key=value, ...)`. The constructor form is sometimes convenient, since it can be used without a type annotation, and it also makes the type of the object explicit.

Like all types, `TypedDict`s can be used as components to build arbitrarily complex types. For example, you can define nested `TypedDict`s and containers with `TypedDict` items. Unlike most other types, mypy uses structural compatibility checking (or structural subtyping) with `TypedDict`s. A `TypedDict` object with extra items is compatible with (a subtype of) a narrower `TypedDict`, assuming item types are compatible (*totality* also affects subtyping, as discussed below).

A `TypedDict` object is not a subtype of the regular `dict[...]` type (and vice versa), since [`dict`](https://docs.python.org/3/library/stdtypes.html#dict) allows arbitrary keys to be added and removed, unlike `TypedDict`.
However, any `TypedDict` object is a subtype of (that is, compatible with) `Mapping[str, object]`, since [`Mapping`](https://docs.python.org/3/library/typing.html#typing.Mapping) only provides read-only access to the dictionary items:

```
from typing import Mapping

def print_typed_dict(obj: Mapping[str, object]) -> None:
    for key, value in obj.items():
        print(f'{key}: {value}')

print_typed_dict(Movie(name='Toy Story', year=1995))  # OK
```

Note

Unless you are on Python 3.8 or newer (where `TypedDict` is available in the standard library [`typing`](https://docs.python.org/3/library/typing.html#module-typing) module) you need to install `typing_extensions` using pip to use `TypedDict`:

```
python3 -m pip install --upgrade typing-extensions
```

#### Totality[#](#totality)

By default mypy ensures that a `TypedDict` object has all the specified keys. This will be flagged as an error:

```
# Error: 'year' missing
toy_story: Movie = {'name': 'Toy Story'}
```

Sometimes you want to allow keys to be left out when creating a `TypedDict` object. You can provide the `total=False` argument to `TypedDict(...)` to achieve this:

```
GuiOptions = TypedDict(
    'GuiOptions', {'language': str, 'color': str}, total=False)
options: GuiOptions = {}  # Okay
options['language'] = 'en'
```

You may need to use [`get()`](https://docs.python.org/3/library/stdtypes.html#dict.get) to access items of a partial (non-total) `TypedDict`, since indexing using `[]` could fail at runtime. However, mypy still lets you use `[]` with a partial `TypedDict` – you just need to be careful with it, as it could result in a [`KeyError`](https://docs.python.org/3/library/exceptions.html#KeyError). Requiring [`get()`](https://docs.python.org/3/library/stdtypes.html#dict.get) everywhere would be too cumbersome. (Note that you are free to use [`get()`](https://docs.python.org/3/library/stdtypes.html#dict.get) with total `TypedDict`s as well.)
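To make the `get()` recommendation concrete, here is a small runtime sketch reusing the partial `GuiOptions` type from above (the concrete `options` value is illustrative):

```python
from typing import TypedDict  # use typing_extensions on Python < 3.8

GuiOptions = TypedDict(
    'GuiOptions', {'language': str, 'color': str}, total=False)

# 'language' is left out, which is fine for a total=False TypedDict.
options: GuiOptions = {'color': 'red'}

# get() degrades gracefully when the key is absent...
language = options.get('language', 'en')
print(language)

# ...whereas plain indexing type checks but raises KeyError at runtime.
try:
    options['language']
except KeyError:
    print('language not set')
```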
Keys that aren't required are shown with a `?` in error messages:

```
# Revealed type is "TypedDict('GuiOptions', {'language'?: builtins.str,
# 'color'?: builtins.str})"
reveal_type(options)
```

Totality also affects structural compatibility. You can't use a partial `TypedDict` when a total one is expected. Also, a total `TypedDict` is not valid when a partial one is expected.

#### Supported operations[#](#supported-operations)

`TypedDict` objects support a subset of dictionary operations and methods. You must use string literals as keys when calling most of the methods, as otherwise mypy won't be able to check that the key is valid. List of supported operations:

* Anything included in [`Mapping`](https://docs.python.org/3/library/typing.html#typing.Mapping):
  + `d[key]`
  + `key in d`
  + `len(d)`
  + `for key in d` (iteration)
  + [`d.get(key[, default])`](https://docs.python.org/3/library/stdtypes.html#dict.get)
  + [`d.keys()`](https://docs.python.org/3/library/stdtypes.html#dict.keys)
  + [`d.values()`](https://docs.python.org/3/library/stdtypes.html#dict.values)
  + [`d.items()`](https://docs.python.org/3/library/stdtypes.html#dict.items)
* [`d.copy()`](https://docs.python.org/3/library/stdtypes.html#dict.copy)
* [`d.setdefault(key, default)`](https://docs.python.org/3/library/stdtypes.html#dict.setdefault)
* [`d1.update(d2)`](https://docs.python.org/3/library/stdtypes.html#dict.update)
* [`d.pop(key[, default])`](https://docs.python.org/3/library/stdtypes.html#dict.pop) (partial `TypedDict`s only)
* `del d[key]` (partial `TypedDict`s only)

Note

[`clear()`](https://docs.python.org/3/library/stdtypes.html#dict.clear) and [`popitem()`](https://docs.python.org/3/library/stdtypes.html#dict.popitem) are not supported since they are unsafe – they could delete required `TypedDict` items that are not visible to mypy because of structural subtyping.
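A short runtime sketch of a few of the operations listed above, using a functional-syntax `Movie` type like the one defined earlier (the concrete values are illustrative):

```python
from typing import TypedDict  # use typing_extensions on Python < 3.8

Movie = TypedDict('Movie', {'name': str, 'year': int})

movie: Movie = {'name': 'Toy Story', 'year': 1995}

# Mapping-style reads with string literal keys keep precise value types.
assert 'year' in movie
assert len(movie) == 2
assert movie.get('name') == 'Toy Story'

# copy() returns another Movie; update() accepts a compatible TypedDict value.
sequel = movie.copy()
sequel.update({'name': 'Toy Story 2', 'year': 1999})
assert movie['year'] == 1995          # the original is untouched
assert sequel['name'] == 'Toy Story 2'
```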
#### Class-based syntax[#](#class-based-syntax)

An alternative, class-based syntax to define a `TypedDict` is supported in Python 3.6 and later:

```
from typing_extensions import TypedDict

class Movie(TypedDict):
    name: str
    year: int
```

The above definition is equivalent to the original `Movie` definition. It doesn't actually define a real class. This syntax also supports a form of inheritance – subclasses can define additional items. However, this is primarily a notational shortcut. Since mypy uses structural compatibility with `TypedDict`s, inheritance is not required for compatibility. Here is an example of inheritance:

```
class Movie(TypedDict):
    name: str
    year: int

class BookBasedMovie(Movie):
    based_on: str
```

Now `BookBasedMovie` has keys `name`, `year` and `based_on`.

#### Mixing required and non-required items[#](#mixing-required-and-non-required-items)

In addition to allowing reuse across `TypedDict` types, inheritance also allows you to mix required and non-required (using `total=False`) items in a single `TypedDict`. Example:

```
class MovieBase(TypedDict):
    name: str
    year: int

class Movie(MovieBase, total=False):
    based_on: str
```

Now `Movie` has required keys `name` and `year`, while `based_on` can be left out when constructing an object. A `TypedDict` with a mix of required and non-required keys, such as `Movie` above, will only be compatible with another `TypedDict` if all required keys in the other `TypedDict` are required keys in the first `TypedDict`, and all non-required keys of the other `TypedDict` are also non-required keys in the first `TypedDict`.

#### Unions of TypedDicts[#](#unions-of-typeddicts)

Since TypedDicts are really just regular dicts at runtime, it is not possible to use `isinstance` checks to distinguish between different variants of a Union of TypedDict in the same way you can with regular objects. Instead, you can use the [tagged union pattern](index.html#tagged-unions).
The referenced section of the docs has a full description with an example, but in short, you will need to give each TypedDict the same key where each value has a unique [Literal type](index.html#literal-types). Then, check that key to distinguish between your TypedDicts.

### Final names, methods and classes[#](#final-names-methods-and-classes)

This section introduces these related features:

1. *Final names* are variables or attributes that should not be reassigned after initialization. They are useful for declaring constants.
2. *Final methods* should not be overridden in a subclass.
3. *Final classes* should not be subclassed.

All of these are only enforced by mypy, and only in annotated code. There is no runtime enforcement by the Python runtime.

Note

The examples in this page import `Final` and `final` from the `typing` module. These types were added to `typing` in Python 3.8, but are also available for use in Python 3.4 - 3.7 via the `typing_extensions` package.

#### Final names[#](#final-names)

You can use the `typing.Final` qualifier to indicate that a name or attribute should not be reassigned, redefined, or overridden. This is often useful for module and class level constants as a way to prevent unintended modification. Mypy will prevent further assignments to final names in type-checked code:

```
from typing import Final

RATE: Final = 3_000

class Base:
    DEFAULT_ID: Final = 0

RATE = 300  # Error: can't assign to final attribute
Base.DEFAULT_ID = 1  # Error: can't override a final attribute
```

Another use case for final attributes is to protect certain attributes from being overridden in a subclass:

```
from typing import Final

class Window:
    BORDER_WIDTH: Final = 2.5
    ...
class ListView(Window):
    BORDER_WIDTH = 3  # Error: can't override a final attribute
```

You can use [`@property`](https://docs.python.org/3/library/functions.html#property) to make an attribute read-only, but unlike `Final`, it doesn't work with module attributes, and it doesn't prevent overriding in subclasses.

##### Syntax variants[#](#syntax-variants)

You can use `Final` in one of these forms:

* You can provide an explicit type using the syntax `Final[<type>]`. Example:

  ```
  ID: Final[int] = 1
  ```

  Here mypy will infer type `int` for `ID`.

* You can omit the type:

  ```
  ID: Final = 1
  ```

  Here mypy will infer type `Literal[1]` for `ID`. Note that unlike for generic classes this is *not* the same as `Final[Any]`.

* In class bodies and stub files you can omit the right hand side and just write `ID: Final[int]`.

* Finally, you can write `self.id: Final = 1` (also optionally with a type in square brackets). This is allowed *only* in [`__init__`](https://docs.python.org/3/reference/datamodel.html#object.__init__) methods, so that the final instance attribute is assigned only once when an instance is created.

##### Details of using `Final`[#](#details-of-using-final)

These are the two main rules for defining a final name:

* There can be *at most one* final declaration per module or class for a given attribute. There can't be separate class-level and instance-level constants with the same name.
* There must be *exactly one* assignment to a final name.

A final attribute declared in a class body without an initializer must be initialized in the [`__init__`](https://docs.python.org/3/reference/datamodel.html#object.__init__) method (you can skip the initializer in stub files):

```
class ImmutablePoint:
    x: Final[int]
    y: Final[int]  # Error: final attribute without an initializer

    def __init__(self) -> None:
        self.x = 1  # Good
```

`Final` can only be used as the outermost type in assignments or variable annotations. Using it in any other position is an error.
In particular, `Final` can't be used in annotations for function arguments:

```
x: list[Final[int]] = []  # Error!

def fun(x: Final[list[int]]) -> None:  # Error!
    ...
```

`Final` and [`ClassVar`](https://docs.python.org/3/library/typing.html#typing.ClassVar) should not be used together. Mypy will infer the scope of a final declaration automatically depending on whether it was initialized in the class body or in [`__init__`](https://docs.python.org/3/reference/datamodel.html#object.__init__).

A final attribute can't be overridden by a subclass (even with another explicit final declaration). Note however that a final attribute can override a read-only property:

```
class Base:
    @property
    def ID(self) -> int: ...

class Derived(Base):
    ID: Final = 1  # OK
```

Declaring a name as final only guarantees that the name will not be re-bound to another value. It doesn't make the value immutable. You can use immutable ABCs and containers to prevent mutating such values:

```
x: Final = ['a', 'b']
x.append('c')  # OK

y: Final[Sequence[str]] = ['a', 'b']
y.append('x')  # Error: Sequence is immutable
z: Final = ('a', 'b')  # Also an option
```

#### Final methods[#](#final-methods)

Like with attributes, sometimes it is useful to protect a method from overriding. You can use the `typing.final` decorator for this purpose:

```
from typing import final

class Base:
    @final
    def common_name(self) -> None:
        ...

class Derived(Base):
    def common_name(self) -> None:  # Error: cannot override a final method
        ...
```

This `@final` decorator can be used with instance methods, class methods, static methods, and properties. For overloaded methods you should add `@final` on the implementation to make it final (or on the first overload in stubs):

```
from typing import Any, final, overload

class Base:
    @overload
    def method(self) -> None: ...
    @overload
    def method(self, arg: int) -> int: ...
    @final
    def method(self, x=None):
        ...
```

#### Final classes[#](#final-classes)

You can apply the `typing.final` decorator to a class to indicate to mypy that it should not be subclassed:

```
from typing import final

@final
class Leaf:
    ...

class MyLeaf(Leaf):  # Error: Leaf can't be subclassed
    ...
```

The decorator acts as a declaration for mypy (and as documentation for humans), but it doesn't actually prevent subclassing at runtime. Here are some situations where using a final class may be useful:

* A class wasn't designed to be subclassed. Perhaps subclassing would not work as expected, or subclassing would be error-prone.
* Subclassing would make code harder to understand or maintain. For example, you may want to prevent unnecessarily tight coupling between base classes and subclasses.
* You want to retain the freedom to arbitrarily change the class implementation in the future, and these changes might break subclasses.

An abstract class that defines at least one abstract method or property and has the `@final` decorator will generate an error from mypy, since those attributes could never be implemented.

```
from abc import ABCMeta, abstractmethod
from typing import final

@final
class A(metaclass=ABCMeta):  # error: Final class A has abstract attributes "f"
    @abstractmethod
    def f(self, x: int) -> None: pass
```

### Metaclasses[#](#metaclasses)

A [metaclass](https://docs.python.org/3/reference/datamodel.html#metaclasses) is a class that describes the construction and behavior of other classes, similarly to how classes describe the construction and behavior of objects. The default metaclass is [`type`](https://docs.python.org/3/library/functions.html#type), but it's possible to use other metaclasses. Metaclasses allow one to create "a different kind of class", such as [`Enum`](https://docs.python.org/3/library/enum.html#enum.Enum)s, [`NamedTuple`](https://docs.python.org/3/library/typing.html#typing.NamedTuple)s and singletons.
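As a quick illustration of the singleton use case mentioned above, here is a minimal sketch of a singleton metaclass. The names `SingletonMeta` and `Config` are hypothetical, not part of mypy or the standard library:

```python
from typing import Any, ClassVar, Dict

class SingletonMeta(type):
    """Metaclass that returns a cached instance after the first instantiation."""
    _instances: ClassVar[Dict[type, Any]] = {}

    def __call__(cls, *args: Any, **kwargs: Any) -> Any:
        # Overriding type.__call__ intercepts every Config(...) construction.
        if cls not in SingletonMeta._instances:
            SingletonMeta._instances[cls] = super().__call__(*args, **kwargs)
        return SingletonMeta._instances[cls]

class Config(metaclass=SingletonMeta):
    pass

# Every "construction" returns the same object.
assert Config() is Config()
```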
Mypy has some special understanding of [`ABCMeta`](https://docs.python.org/3/library/abc.html#abc.ABCMeta) and `EnumMeta`.

#### Defining a metaclass[#](#defining-a-metaclass)

```
class M(type):
    pass

class A(metaclass=M):
    pass
```

#### Metaclass usage example[#](#metaclass-usage-example)

Mypy supports the lookup of attributes in the metaclass:

```
from typing import Type, TypeVar, ClassVar

T = TypeVar('T')

class M(type):
    count: ClassVar[int] = 0

    def make(cls: Type[T]) -> T:
        M.count += 1
        return cls()

class A(metaclass=M):
    pass

a: A = A.make()  # make() is looked up at M; the result is an object of type A
print(A.count)

class B(A):
    pass

b: B = B.make()  # metaclasses are inherited
print(B.count + " objects were created")  # Error: Unsupported operand types for + ("int" and "str")
```

#### Gotchas and limitations of metaclass support[#](#gotchas-and-limitations-of-metaclass-support)

Note that metaclasses pose some requirements on the inheritance structure, so it's better not to combine metaclasses and class hierarchies:

```
class M1(type): pass
class M2(type): pass

class A1(metaclass=M1): pass
class A2(metaclass=M2): pass

class B1(A1, metaclass=M2): pass  # Mypy Error: metaclass conflict
# At runtime the above definition raises an exception
# TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases

class B12(A1, A2): pass  # Mypy Error: metaclass conflict

# This can be solved via a common metaclass subtype:
class CorrectMeta(M1, M2): pass
class B2(A1, A2, metaclass=CorrectMeta): pass  # OK, runtime is also OK
```

* Mypy does not understand dynamically-computed metaclasses, such as `class A(metaclass=f()): ...`
* Mypy does not and cannot understand arbitrary metaclass code.
* Mypy only recognizes subclasses of [`type`](https://docs.python.org/3/library/functions.html#type) as potential metaclasses.
### Running mypy and managing imports[#](#running-mypy-and-managing-imports)

The [Getting started](index.html#getting-started) page should have already introduced you to the basics of how to run mypy – pass in the files and directories you want to type check via the command line:

```
$ mypy foo.py bar.py some_directory
```

This page discusses in more detail how exactly to specify what files you want mypy to type check, how mypy discovers imported modules, and recommendations on how to handle any issues you may encounter along the way. If you are interested in learning about how to configure the actual way mypy type checks your code, see our [The mypy command line](index.html#command-line) guide.

#### Specifying code to be checked[#](#specifying-code-to-be-checked)

Mypy lets you specify what files it should type check in several different ways.

1. First, you can pass in paths to Python files and directories you want to type check. For example:

   ```
   $ mypy file_1.py foo/file_2.py file_3.pyi some/directory
   ```

   The above command tells mypy it should type check all of the provided files together. In addition, mypy will recursively type check the entire contents of any provided directories. For more details about how exactly this is done, see [Mapping file paths to modules](#mapping-paths-to-modules).

2. Second, you can use the [`-m`](index.html#cmdoption-mypy-m) flag (long form: [`--module`](index.html#cmdoption-mypy-m)) to specify a module name to be type checked. The name of a module is identical to the name you would use to import that module within a Python program. For example, running:

   ```
   $ mypy -m html.parser
   ```

   …will type check the module `html.parser` (this happens to be a library stub). Mypy will use an algorithm very similar to the one Python uses to find where modules and imports are located on the file system. For more details, see [How imports are found](#finding-imports).

3.
Third, you can use the [`-p`](index.html#cmdoption-mypy-p) (long form: [`--package`](index.html#cmdoption-mypy-p)) flag to specify a package to be (recursively) type checked. This flag is almost identical to the [`-m`](index.html#cmdoption-mypy-m) flag except that if you give it a package name, mypy will recursively type check all submodules and subpackages of that package. For example, running:

   ```
   $ mypy -p html
   ```

   …will type check the entire `html` package (of library stubs). In contrast, if we had used the [`-m`](index.html#cmdoption-mypy-m) flag, mypy would have type checked just `html`'s `__init__.py` file and anything imported from there.

   Note that we can specify multiple packages and modules on the command line. For example:

   ```
   $ mypy --package p.a --package p.b --module c
   ```

4. Fourth, you can also instruct mypy to directly type check small strings as programs by using the [`-c`](index.html#cmdoption-mypy-c) (long form: [`--command`](index.html#cmdoption-mypy-c)) flag. For example:

   ```
   $ mypy -c 'x = [1, 2]; print(x())'
   ```

   …will type check the above string as a mini-program (and in this case, will report that `list[int]` is not callable).

You can also use the [`files`](index.html#confval-files) option in your `mypy.ini` file to specify which files to check, in which case you can simply run `mypy` with no arguments.

#### Reading a list of files from a file[#](#reading-a-list-of-files-from-a-file)

Finally, any command-line argument starting with `@` reads additional command-line arguments from the file following the `@` character. This is primarily useful if you have a file containing a list of files that you want to be type-checked: instead of using shell syntax like:

```
$ mypy $(cat file_of_files.txt)
```

you can use this instead:

```
$ mypy @file_of_files.txt
```

This file can technically also contain any command line flag, not just file paths.
However, if you want to configure many different flags, the recommended approach is to use a [configuration file](index.html#config-file) instead.

#### Mapping file paths to modules[#](#mapping-paths-to-modules)

One of the main ways you can tell mypy what to type check is by providing mypy a list of paths. For example:

```
$ mypy file_1.py foo/file_2.py file_3.pyi some/directory
```

This section describes how exactly mypy maps the provided paths to modules to type check.

* Mypy will check all paths provided that correspond to files.
* Mypy will recursively discover and check all files ending in `.py` or `.pyi` in directory paths provided, after accounting for [`--exclude`](index.html#cmdoption-mypy-exclude).
* For each file to be checked, mypy will attempt to associate the file (e.g. `project/foo/bar/baz.py`) with a fully qualified module name (e.g. `foo.bar.baz`). The directory the package is in (`project`) is then added to mypy's module search paths.

How mypy determines fully qualified module names depends on if the options [`--no-namespace-packages`](index.html#cmdoption-mypy-no-namespace-packages) and [`--explicit-package-bases`](index.html#cmdoption-mypy-explicit-package-bases) are set.

1. If [`--no-namespace-packages`](index.html#cmdoption-mypy-no-namespace-packages) is set, mypy will rely solely upon the presence of `__init__.py[i]` files to determine the fully qualified module name. That is, mypy will crawl up the directory tree for as long as it continues to find `__init__.py` (or `__init__.pyi`) files.

   For example, if your directory tree consists of `pkg/subpkg/mod.py`, mypy would require `pkg/__init__.py` and `pkg/subpkg/__init__.py` to exist in order to correctly associate `mod.py` with `pkg.subpkg.mod`

2. The default case.
If [`--namespace-packages`](index.html#cmdoption-mypy-no-namespace-packages) is on, but [`--explicit-package-bases`](index.html#cmdoption-mypy-explicit-package-bases) is off, mypy will allow for the possibility that directories without `__init__.py[i]` are packages. Specifically, mypy will look at all parent directories of the file and use the location of the highest `__init__.py[i]` in the directory tree to determine the top-level package. For example, say your directory tree consists solely of `pkg/__init__.py` and `pkg/a/b/c/d/mod.py`. When determining `mod.py`’s fully qualified module name, mypy will look at `pkg/__init__.py` and conclude that the associated module name is `pkg.a.b.c.d.mod`. 3. You’ll notice that the above case still relies on `__init__.py`. If you can’t put an `__init__.py` in your top-level package, but still wish to pass paths (as opposed to packages or modules using the `-p` or `-m` flags), [`--explicit-package-bases`](index.html#cmdoption-mypy-explicit-package-bases) provides a solution. With [`--explicit-package-bases`](index.html#cmdoption-mypy-explicit-package-bases), mypy will locate the nearest parent directory that is a member of the `MYPYPATH` environment variable, the [`mypy_path`](index.html#confval-mypy_path) config or is the current working directory. Mypy will then use the relative path to determine the fully qualified module name. For example, say your directory tree consists solely of `src/namespace_pkg/mod.py`. If you run the following command, mypy will correctly associate `mod.py` with `namespace_pkg.mod`: ``` $ MYPYPATH=src mypy --namespace-packages --explicit-package-bases . ``` If you pass a file not ending in `.py[i]`, the module name assumed is `__main__` (matching the behavior of the Python interpreter), unless [`--scripts-are-modules`](index.html#cmdoption-mypy-scripts-are-modules) is passed. Passing [`-v`](index.html#cmdoption-mypy-v) will show you the files and associated module names that mypy will check. 
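The two naming strategies above can be approximated in a few lines of Python. This is an illustrative sketch, not mypy's actual implementation; `crawl_up` and `from_package_base` are invented names:

```python
import os

def crawl_up(path: str) -> str:
    """Default mode: climb the tree while __init__.py[i] files are present."""
    directory, filename = os.path.split(os.path.abspath(path))
    parts = [os.path.splitext(filename)[0]]
    while any(
        os.path.isfile(os.path.join(directory, "__init__" + ext))
        for ext in (".py", ".pyi")
    ):
        directory, package = os.path.split(directory)
        parts.append(package)
    return ".".join(reversed(parts))

def from_package_base(base: str, path: str) -> str:
    """--explicit-package-bases: module name from the path relative to a base."""
    relative = os.path.relpath(path, base)
    stem, _ = os.path.splitext(relative)
    return stem.replace(os.sep, ".")

print(from_package_base("src", os.path.join("src", "namespace_pkg", "mod.py")))
# namespace_pkg.mod
```

For `pkg/subpkg/mod.py` with both `__init__.py` files present, `crawl_up` produces `pkg.subpkg.mod`, matching the example above.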
#### How mypy handles imports[#](#how-mypy-handles-imports)

When mypy encounters an `import` statement, it will first [attempt to locate](#finding-imports) that module or type stubs for that module in the file system. Mypy will then type check the imported module. There are three different outcomes of this process:

1. Mypy is unable to follow the import: the module either does not exist, or is a third party library that does not use type hints.
2. Mypy is able to follow and type check the import, but you did not want mypy to type check that module at all.
3. Mypy is able to successfully both follow and type check the module, and you want mypy to type check that module.

The third outcome is what mypy will do in the ideal case. The following sections will discuss what to do in the other two cases.

#### Missing imports[#](#missing-imports)

When you import a module, mypy may report that it is unable to follow the import. This can cause errors that look like the following:

```
main.py:1: error: Skipping analyzing 'django': module is installed, but missing library stubs or py.typed marker
main.py:2: error: Library stubs not installed for "requests"
main.py:3: error: Cannot find implementation or library stub for module named "this_module_does_not_exist"
```

If you get any of these errors on an import, mypy will assume the type of that module is `Any`, the dynamic type. This means attempting to access any attribute of the module will automatically succeed:

```
# Error: Cannot find implementation or library stub for module named 'does_not_exist'
import does_not_exist

# But this type checks, and x will have type 'Any'
x = does_not_exist.foobar()
```

This can result in mypy failing to warn you about errors in your code. Since operations on `Any` result in `Any`, these dynamic types can propagate through your code, making type checking less effective. See [Dynamically typed code](index.html#dynamic-typing) for more information.
The next sections describe what each of these errors means and recommended next steps; scroll to the section that matches your error.

##### Missing library stubs or py.typed marker[#](#missing-library-stubs-or-py-typed-marker)

If you are getting a `Skipping analyzing X: module is installed, but missing library stubs or py.typed marker` error, this means mypy was able to find the module you were importing, but no corresponding type hints.

Mypy will not try inferring the types of any 3rd party libraries you have installed unless they either have declared themselves to be [PEP 561 compliant stub package](index.html#installed-packages) (e.g. with a `py.typed` file) or have registered themselves on [typeshed](https://github.com/python/typeshed), the repository of types for the standard library and some 3rd party libraries.

If you are getting this error, try to obtain type hints for the library you’re using:

1. Upgrading the version of the library you’re using, in case a newer version has started to include type hints.
2. Searching to see if there is a [PEP 561 compliant stub package](index.html#installed-packages) corresponding to your third party library. Stub packages let you install type hints independently from the library itself. For example, if you want type hints for the `django` library, you can install the [django-stubs](https://pypi.org/project/django-stubs/) package.
3. [Writing your own stub files](index.html#stub-files) containing type hints for the library. You can point mypy at your type hints either by passing them in via the command line, by using the [`files`](index.html#confval-files) or [`mypy_path`](index.html#confval-mypy_path) config file options, or by adding the location to the `MYPYPATH` environment variable. These stub files do not need to be complete! A good strategy is to use [stubgen](index.html#stubgen), a program that comes bundled with mypy, to generate a first rough draft of the stubs.
You can then iterate on just the parts of the library you need.

If you want to share your work, you can try contributing your stubs back to the library – see our documentation on creating [PEP 561 compliant packages](index.html#installed-packages).

If you are unable to find any existing type hints and do not have time to write your own, you can instead *suppress* the errors. All this will do is make mypy stop reporting an error on the line containing the import: the imported module will continue to be of type `Any`, and mypy may not catch errors in its use.

1. To suppress a *single* missing import error, add a `# type: ignore` at the end of the line containing the import.
2. To suppress *all* missing import errors from a single library, add a per-module section to your [mypy config file](index.html#config-file) setting [`ignore_missing_imports`](index.html#confval-ignore_missing_imports) to True for that library. For example, suppose your codebase makes heavy use of an (untyped) library named `foobar`. You can silence all import errors associated with that library and that library alone by adding the following section to your config file:

```
[mypy-foobar.*]
ignore_missing_imports = True
```

Note: this option is equivalent to adding a `# type: ignore` to every import of `foobar` in your codebase. For more information, see the documentation about configuring [import discovery](index.html#config-file-import-discovery) in config files. The `.*` after `foobar` will ignore imports of `foobar` modules and subpackages in addition to the `foobar` top-level package namespace.
3.
To suppress *all* missing import errors for *all* libraries in your codebase, invoke mypy with the [`--ignore-missing-imports`](index.html#cmdoption-mypy-ignore-missing-imports) command line flag or set the [`ignore_missing_imports`](index.html#confval-ignore_missing_imports) config file option to True in the *global* section of your mypy config file:

```
[mypy]
ignore_missing_imports = True
```

We recommend using this approach only as a last resort: it’s equivalent to adding a `# type: ignore` to all unresolved imports in your codebase.

##### Library stubs not installed[#](#library-stubs-not-installed)

If mypy can’t find stubs for a third-party library, and it knows that stubs exist for the library, you will get a message like this:

```
main.py:1: error: Library stubs not installed for "yaml"
main.py:1: note: Hint: "python3 -m pip install types-PyYAML"
main.py:1: note: (or run "mypy --install-types" to install all missing stub packages)
```

You can resolve the issue by running the suggested pip commands. If you’re running mypy in CI, you can ensure the presence of any stub packages you need the same as you would any other test dependency, e.g. by adding them to the appropriate `requirements.txt` file.

Alternatively, add [`--install-types`](index.html#cmdoption-mypy-install-types) to your mypy command to install all known missing stubs:

```
mypy --install-types
```

This is slower than explicitly installing stubs, since it effectively runs mypy twice – the first time to find the missing stubs, and the second time to type check your code properly after mypy has installed the stubs. It also can make controlling stub versions harder, resulting in less reproducible type checking.

By default, [`--install-types`](index.html#cmdoption-mypy-install-types) shows a confirmation prompt.
Use [`--non-interactive`](index.html#cmdoption-mypy-non-interactive) to install all suggested stub packages without asking for confirmation *and* type check your code:

```
mypy --install-types --non-interactive
```

If you’ve already installed the relevant third-party libraries in an environment other than the one mypy is running in, you can use the [`--python-executable`](index.html#cmdoption-mypy-python-executable) flag to point to the Python executable for that environment, and mypy will find packages installed for that Python executable.

If you’ve installed the relevant stub packages and are still getting this error, see the [section below](#missing-type-hints-for-third-party-library).

##### Cannot find implementation or library stub[#](#cannot-find-implementation-or-library-stub)

If you are getting a `Cannot find implementation or library stub for module` error, this means mypy was not able to find the module you are trying to import, whether it comes bundled with type hints or not. If you are getting this error, try:

1. Making sure your import does not contain a typo.
2. If the module is a third party library, making sure that mypy is able to find the interpreter containing the installed library. For example, if you are running your code in a virtualenv, make sure to install and use mypy within the virtualenv. Alternatively, if you want to use a globally installed mypy, set the [`--python-executable`](index.html#cmdoption-mypy-python-executable) command line flag to point to the Python interpreter containing your installed third party packages. You can confirm that you are running mypy from the environment you expect by running it like `python -m mypy ...`. You can confirm that you are installing into the environment you expect by running pip like `python -m pip ...`.
3. Reading the [How imports are found](#finding-imports) section below to make sure you understand how exactly mypy searches for and finds modules and modify how you’re invoking mypy accordingly.
4.
Directly specifying the directory containing the module you want to type check from the command line, by using the [`mypy_path`](index.html#confval-mypy_path) or [`files`](index.html#confval-files) config file options, or by using the `MYPYPATH` environment variable. Note: if the module you are trying to import is actually a *submodule* of some package, you should specify the directory containing the *entire* package. For example, suppose you are trying to add the module `foo.bar.baz` which is located at `~/foo-project/src/foo/bar/baz.py`. In this case, you must run `mypy ~/foo-project/src` (or set `MYPYPATH` to `~/foo-project/src`).

#### How imports are found[#](#how-imports-are-found)

When mypy encounters an `import` statement or receives module names from the command line via the [`--module`](index.html#cmdoption-mypy-m) or [`--package`](index.html#cmdoption-mypy-p) flags, mypy tries to find the module on the file system similar to the way Python finds it. However, there are some differences.

First, mypy has its own search path. This is computed from the following items:

* The `MYPYPATH` environment variable (a list of directories, colon-separated on UNIX systems, semicolon-separated on Windows).
* The [`mypy_path`](index.html#confval-mypy_path) config file option.
* The directories containing the sources given on the command line (see [Mapping file paths to modules](#mapping-paths-to-modules)).
* The installed packages marked as safe for type checking (see [PEP 561 support](index.html#installed-packages)).
* The relevant directories of the [typeshed](https://github.com/python/typeshed) repo.

Note You cannot point to a stub-only package ([**PEP 561**](https://peps.python.org/pep-0561/)) via the `MYPYPATH`, it must be installed (see [PEP 561 support](index.html#installed-packages)).

Second, mypy searches for stub files in addition to regular Python files and packages.
The rules for searching for a module `foo` are as follows:

* The search looks in each of the directories in the search path (see above) until a match is found.
* If a package named `foo` is found (i.e. a directory `foo` containing an `__init__.py` or `__init__.pyi` file) that’s a match.
* If a stub file named `foo.pyi` is found, that’s a match.
* If a Python module named `foo.py` is found, that’s a match.

These matches are tried in order, so that if multiple matches are found in the same directory on the search path (e.g. a package and a Python file, or a stub file and a Python file) the first one in the above list wins. In particular, if a Python file and a stub file are both present in the same directory on the search path, only the stub file is used. (However, if the files are in different directories, the one found in the earlier directory is used.)

Setting [`mypy_path`](index.html#confval-mypy_path)/`MYPYPATH` is mostly useful in the case where you want to try running mypy against multiple distinct sets of files that happen to share some common dependencies. For example, if you have multiple projects that happen to be using the same set of work-in-progress stubs, it could be convenient to just have your `MYPYPATH` point to a single directory containing the stubs.

#### Following imports[#](#following-imports)

Mypy is designed to [doggedly follow all imports](#finding-imports), even if the imported module is not a file you explicitly wanted mypy to check. For example, suppose we have two modules `mycode.foo` and `mycode.bar`: the former has type hints and the latter does not. We run [`mypy -m mycode.foo`](index.html#cmdoption-mypy-m) and mypy discovers that `mycode.foo` imports `mycode.bar`. How do we want mypy to type check `mycode.bar`?

Mypy’s behaviour here is configurable – although we **strongly recommend** using the default – by using the [`--follow-imports`](index.html#cmdoption-mypy-follow-imports) flag.
This flag accepts one of four string values:

* `normal` (the default, recommended) follows all imports normally and type checks all top level code (as well as the bodies of all functions and methods with at least one type annotation in the signature).
* `silent` behaves in the same way as `normal` but will additionally *suppress* any error messages.
* `skip` will *not* follow imports and instead will silently replace the module (and *anything imported from it*) with an object of type `Any`.
* `error` behaves in the same way as `skip` but is not quite as silent – it will flag the import as an error, like this:

```
main.py:1: note: Import of "mycode.bar" ignored
main.py:1: note: (Using --follow-imports=error, module not passed on command line)
```

If you are starting a new codebase and plan on using type hints from the start, we recommend you use either [`--follow-imports=normal`](index.html#cmdoption-mypy-follow-imports) (the default) or [`--follow-imports=error`](index.html#cmdoption-mypy-follow-imports). Either option will help make sure you are not skipping checking any part of your codebase by accident.

If you are planning on adding type hints to a large, existing code base, we recommend you start by trying to make your entire codebase (including files that do not use type hints) pass under [`--follow-imports=normal`](index.html#cmdoption-mypy-follow-imports). This is usually not too difficult to do: mypy is designed to report as few error messages as possible when it is looking at unannotated code. Only if doing this is intractable, we recommend passing mypy just the files you want to type check and use [`--follow-imports=silent`](index.html#cmdoption-mypy-follow-imports). Even if mypy is unable to perfectly type check a file, it can still glean some useful information by parsing it (for example, understanding what methods a given object has). See [Using mypy with an existing codebase](index.html#existing-code) for more recommendations.
We do not recommend using `skip` unless you know what you are doing: while this option can be quite powerful, it can also cause many hard-to-debug errors.

Adjusting import following behaviour is often most useful when restricted to specific modules. This can be accomplished by setting a per-module [`follow_imports`](index.html#confval-follow_imports) config option.

### The mypy command line[#](#the-mypy-command-line)

This section documents mypy’s command line interface. You can view a quick summary of the available flags by running [`mypy --help`](#cmdoption-mypy-h).

Note Command line flags are liable to change between releases.

#### Specifying what to type check[#](#specifying-what-to-type-check)

By default, you can specify what code you want mypy to type check by passing in the paths to what you want to have type checked:

```
$ mypy foo.py bar.py some_directory
```

Note that directories are checked recursively.

Mypy also lets you specify what code to type check in several other ways. A short summary of the relevant flags is included below: for full details, see [Running mypy and managing imports](index.html#running-mypy).

-m MODULE, --module MODULE[#](#cmdoption-mypy-m)

Asks mypy to type check the provided module. This flag may be repeated multiple times. Mypy *will not* recursively type check any submodules of the provided module.

-p PACKAGE, --package PACKAGE[#](#cmdoption-mypy-p)

Asks mypy to type check the provided package. This flag may be repeated multiple times. Mypy *will* recursively type check any submodules of the provided package. This flag is identical to [`--module`](#cmdoption-mypy-m) apart from this behavior.

-c PROGRAM_TEXT, --command PROGRAM_TEXT[#](#cmdoption-mypy-c)

Asks mypy to type check the provided string as a program.

--exclude[#](#cmdoption-mypy-exclude)

A regular expression that matches file names, directory names and paths which mypy should ignore while recursively discovering files to check. Use forward slashes on all platforms.
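Conceptually, the pattern is applied as a regular expression search against the relative path of each candidate, with directories matched using a trailing slash. A rough sketch of this filtering (not mypy's actual code; `is_excluded` is an invented helper):

```python
import re

def is_excluded(path: str, patterns: list[str], is_dir: bool = False) -> bool:
    # Directories are matched with a trailing slash, so a pattern like
    # `/build/` can anchor on a directory name.
    candidate = path.rstrip("/") + "/" if is_dir else path
    return any(re.search(pattern, candidate) for pattern in patterns)

patterns = [r"/setup\.py$", r"/build/"]
print(is_excluded("project/setup.py", patterns))            # True
print(is_excluded("project/build", patterns, is_dir=True))  # True
print(is_excluded("project/main.py", patterns))             # False
```

This also shows why the dot in `setup\.py` should be escaped: an unescaped `.` would match any character, excluding e.g. a file named `setup_py`.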
For instance, to avoid discovering any files named setup.py you could pass `--exclude '/setup\.py$'`. Similarly, you can ignore discovering directories with a given name by e.g. `--exclude /build/` or those matching a subpath with `--exclude /project/vendor/`. To ignore multiple files / directories / paths, you can provide the `--exclude` flag more than once, e.g. `--exclude '/setup\.py$' --exclude '/build/'`.

Note that this flag only affects recursive directory tree discovery, that is, when mypy is discovering files within a directory tree or submodules of a package to check. If you pass a file or module explicitly it will still be checked. For instance, `mypy --exclude '/setup\.py$' but_still_check/setup.py`.

In particular, `--exclude` does not affect mypy’s [import following](index.html#follow-imports). You can use a per-module [`follow_imports`](index.html#confval-follow_imports) config option to additionally avoid mypy from following imports and checking code you do not wish to be checked.

Note that mypy will never recursively discover files and directories named “site-packages”, “node_modules” or “__pycache__”, or those whose name starts with a period, exactly as `--exclude '/(site-packages|node_modules|__pycache__|\..*)/$'` would. Mypy will also never recursively discover files with extensions other than `.py` or `.pyi`.

#### Optional arguments[#](#optional-arguments)

-h, --help[#](#cmdoption-mypy-h)

Show help message and exit.

-v, --verbose[#](#cmdoption-mypy-v)

More verbose messages.

-V, --version[#](#cmdoption-mypy-V)

Show program’s version number and exit.

#### Config file[#](#config-file)

--config-file CONFIG_FILE[#](#cmdoption-mypy-config-file)

This flag makes mypy read configuration settings from the given file. By default settings are read from `mypy.ini`, `.mypy.ini`, `pyproject.toml`, or `setup.cfg` in the current directory. Settings override mypy’s built-in defaults and command line flags can override settings.
Specifying [`--config-file=`](#cmdoption-mypy-config-file) (with no filename) will ignore *all* config files. See [The mypy configuration file](index.html#config-file) for the syntax of configuration files.

--warn-unused-configs[#](#cmdoption-mypy-warn-unused-configs)

This flag makes mypy warn about unused `[mypy-<pattern>]` config file sections. (This requires turning off incremental mode using [`--no-incremental`](#cmdoption-mypy-no-incremental).)

#### Import discovery[#](#import-discovery)

The following flags customize how exactly mypy discovers and follows imports.

--explicit-package-bases[#](#cmdoption-mypy-explicit-package-bases)

This flag tells mypy that top-level packages will be based in either the current directory, or a member of the `MYPYPATH` environment variable or [`mypy_path`](index.html#confval-mypy_path) config option. This option is only useful in the absence of `__init__.py` files. See [Mapping file paths to modules](index.html#mapping-paths-to-modules) for details.

--ignore-missing-imports[#](#cmdoption-mypy-ignore-missing-imports)

This flag makes mypy ignore all missing imports. It is equivalent to adding `# type: ignore` comments to all unresolved imports within your codebase. Note that this flag does *not* suppress errors about missing names in successfully resolved modules. For example, if one has the following files:

```
package/__init__.py
package/mod.py
```

Then mypy will generate the following errors with [`--ignore-missing-imports`](#cmdoption-mypy-ignore-missing-imports):

```
import package.unknown  # No error, ignored
x = package.unknown.func()  # OK. 'func' is assumed to be of type 'Any'

from package import unknown  # No error, ignored
from package.mod import NonExisting  # Error: Module has no attribute 'NonExisting'
```

For more details, see [Missing imports](index.html#ignore-missing-imports).
--follow-imports {normal,silent,skip,error}[#](#cmdoption-mypy-follow-imports)

This flag adjusts how mypy follows imported modules that were not explicitly passed in via the command line. The default option is `normal`: mypy will follow and type check all modules. For more information on what the other options do, see [Following imports](index.html#follow-imports).

--python-executable EXECUTABLE[#](#cmdoption-mypy-python-executable)

This flag will have mypy collect type information from [**PEP 561**](https://peps.python.org/pep-0561/) compliant packages installed for the Python executable `EXECUTABLE`. If not provided, mypy will use PEP 561 compliant packages installed for the Python executable running mypy. See [Using installed packages](index.html#installed-packages) for more on making PEP 561 compliant packages.

--no-site-packages[#](#cmdoption-mypy-no-site-packages)

This flag will disable searching for [**PEP 561**](https://peps.python.org/pep-0561/) compliant packages. This will also disable searching for a usable Python executable. Use this flag if mypy cannot find a Python executable for the version of Python being checked, and you don’t need to use PEP 561 typed packages. Otherwise, use [`--python-executable`](#cmdoption-mypy-python-executable).

--no-silence-site-packages[#](#cmdoption-mypy-no-silence-site-packages)

By default, mypy will suppress any error messages generated within [**PEP 561**](https://peps.python.org/pep-0561/) compliant packages. Adding this flag will disable this behavior.

--fast-module-lookup[#](#cmdoption-mypy-fast-module-lookup)

The default logic used to scan through search paths to resolve imports has a quadratic worst-case behavior in some cases, which is for instance triggered by a large number of folders sharing a top-level namespace as in:

```
foo/
  company/
    foo/
      a.py
bar/
  company/
    bar/
      b.py
baz/
  company/
    baz/
      c.py
...
```

If you are in this situation, you can enable an experimental fast path by setting the [`--fast-module-lookup`](#cmdoption-mypy-fast-module-lookup) option.

--no-namespace-packages[#](#cmdoption-mypy-no-namespace-packages)

This flag disables import discovery of namespace packages (see [**PEP 420**](https://peps.python.org/pep-0420/)). In particular, this prevents discovery of packages that don’t have an `__init__.py` (or `__init__.pyi`) file. This flag affects how mypy finds modules and packages explicitly passed on the command line. It also affects how mypy determines fully qualified module names for files passed on the command line. See [Mapping file paths to modules](index.html#mapping-paths-to-modules) for details.

#### Platform configuration[#](#platform-configuration)

By default, mypy will assume that you intend to run your code using the same operating system and Python version you are using to run mypy itself. The following flags let you modify this behavior. For more information on how to use these flags, see [Python version and system platform checks](index.html#version-and-platform-checks).

--python-version X.Y[#](#cmdoption-mypy-python-version)

This flag will make mypy type check your code as if it were run under Python version X.Y. Without this option, mypy will default to using whatever version of Python is running mypy. This flag will attempt to find a Python executable of the corresponding version to search for [**PEP 561**](https://peps.python.org/pep-0561/) compliant packages. If you’d like to disable this, use the [`--no-site-packages`](#cmdoption-mypy-no-site-packages) flag (see [Import discovery](#import-discovery) for more details).

--platform PLATFORM[#](#cmdoption-mypy-platform)

This flag will make mypy type check your code as if it were run under the given operating system. Without this option, mypy will default to using whatever operating system you are currently using.
The `PLATFORM` parameter may be any string supported by [`sys.platform`](https://docs.python.org/3/library/sys.html#sys.platform).

--always-true NAME[#](#cmdoption-mypy-always-true)

This flag will treat all variables named `NAME` as compile-time constants that are always true. This flag may be repeated.

--always-false NAME[#](#cmdoption-mypy-always-false)

This flag will treat all variables named `NAME` as compile-time constants that are always false. This flag may be repeated.

#### Disallow dynamic typing[#](#disallow-dynamic-typing)

The `Any` type is used to represent a value that has a [dynamic type](index.html#dynamic-typing). The `--disallow-any` family of flags will disallow various uses of the `Any` type in a module – this lets us strategically disallow the use of dynamic typing in a controlled way.

The following options are available:

--disallow-any-unimported[#](#cmdoption-mypy-disallow-any-unimported)

This flag disallows usage of types that come from unfollowed imports (such types become aliases for `Any`). Unfollowed imports occur either when the imported module does not exist or when [`--follow-imports=skip`](#cmdoption-mypy-follow-imports) is set.

--disallow-any-expr[#](#cmdoption-mypy-disallow-any-expr)

This flag disallows all expressions in the module that have type `Any`. If an expression of type `Any` appears anywhere in the module mypy will output an error unless the expression is immediately used as an argument to [`cast()`](https://docs.python.org/3/library/typing.html#typing.cast) or assigned to a variable with an explicit type annotation. In addition, declaring a variable of type `Any` or casting to type `Any` is not allowed. Note that calling functions that take parameters of type `Any` is still allowed.

--disallow-any-decorated[#](#cmdoption-mypy-disallow-any-decorated)

This flag disallows functions that have `Any` in their signature after decorator transformation.
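To illustrate how a signature can end up containing `Any` after decoration: a decorator that is itself unannotated erases the decorated function's types. This is a hypothetical sketch (`plain_decorator` and `add` are invented names), showing the kind of code `--disallow-any-decorated` would flag:

```python
def plain_decorator(func):  # unannotated: mypy infers the result as Any
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@plain_decorator
def add(x: int, y: int) -> int:
    # The function body is fully annotated, but after decoration mypy sees
    # an Any signature, so --disallow-any-decorated reports this function.
    return x + y

print(add(2, 3))  # 5
```

Annotating the decorator (e.g. with a `TypeVar`-based `Callable` signature) preserves the decorated function's types and silences the error.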
--disallow-any-explicit[#](#cmdoption-mypy-disallow-any-explicit)

This flag disallows explicit `Any` in type positions such as type annotations and generic type parameters.

--disallow-any-generics[#](#cmdoption-mypy-disallow-any-generics)

This flag disallows usage of generic types that do not specify explicit type parameters. For example, you can’t use a bare `x: list`. Instead, you must always write something like `x: list[int]`.

--disallow-subclassing-any[#](#cmdoption-mypy-disallow-subclassing-any)

This flag reports an error whenever a class subclasses a value of type `Any`. This may occur when the base class is imported from a module that doesn’t exist (when using [`--ignore-missing-imports`](#cmdoption-mypy-ignore-missing-imports)) or is ignored due to [`--follow-imports=skip`](#cmdoption-mypy-follow-imports) or a `# type: ignore` comment on the `import` statement. Since the module is silenced, the imported class is given a type of `Any`. By default mypy will assume that the subclass correctly inherited the base class even though that may not actually be the case. This flag makes mypy raise an error instead.

#### Untyped definitions and calls[#](#untyped-definitions-and-calls)

The following flags configure how mypy handles untyped function definitions or calls.

--disallow-untyped-calls[#](#cmdoption-mypy-disallow-untyped-calls)

This flag reports an error whenever a function with type annotations calls a function defined without annotations.

--untyped-calls-exclude[#](#cmdoption-mypy-untyped-calls-exclude)

This flag allows you to selectively disable [`--disallow-untyped-calls`](#cmdoption-mypy-disallow-untyped-calls) for functions and methods defined in specific packages, modules, or classes. Note that each exclude entry acts as a prefix.
For example (assuming there are no type annotations for `third_party_lib` available):

```
# mypy --disallow-untyped-calls
#      --untyped-calls-exclude=third_party_lib.module_a
#      --untyped-calls-exclude=foo.A

from third_party_lib.module_a import some_func
from third_party_lib.module_b import other_func
import foo

some_func()  # OK, function comes from module `third_party_lib.module_a`
other_func()  # E: Call to untyped function "other_func" in typed context

foo.A().meth()  # OK, method was defined in class `foo.A`
foo.B().meth()  # E: Call to untyped function "meth" in typed context

# file foo.py
class A:
    def meth(self): pass

class B:
    def meth(self): pass
```

--disallow-untyped-defs[#](#cmdoption-mypy-disallow-untyped-defs)

This flag reports an error whenever it encounters a function definition without type annotations or with incomplete type annotations (a superset of [`--disallow-incomplete-defs`](#cmdoption-mypy-disallow-incomplete-defs)). For example, it would report an error for `def f(a, b)` and `def f(a: int, b)`.

--disallow-incomplete-defs[#](#cmdoption-mypy-disallow-incomplete-defs)

This flag reports an error whenever it encounters a partly annotated function definition, while still allowing entirely unannotated definitions. For example, it would report an error for `def f(a: int, b)` but not `def f(a, b)`.

--check-untyped-defs[#](#cmdoption-mypy-check-untyped-defs)

This flag is less severe than the previous two options – it type checks the body of every function, regardless of whether it has type annotations. (By default the bodies of functions without annotations are not type checked.) It will assume all arguments have type `Any` and always infer `Any` as the return type.

--disallow-untyped-decorators[#](#cmdoption-mypy-disallow-untyped-decorators)

This flag reports an error whenever a function with type annotations is decorated with a decorator without annotations.
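To contrast the definition-related flags above, a small sketch (the function names `untyped`, `partial`, and `complete` are invented for illustration):

```python
def untyped(a, b):
    # Flagged by --disallow-untyped-defs; its body is only
    # type checked if --check-untyped-defs is enabled.
    return a + b

def partial(a: int, b):
    # Flagged by both --disallow-untyped-defs and --disallow-incomplete-defs.
    return a + b

def complete(a: int, b: int) -> int:
    # Fully annotated: accepted by all of the flags above.
    return a + b

print(complete(1, 2))  # 3
```

All three functions run identically at runtime; the flags only change what mypy reports at check time.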
#### None and Optional handling[#](#none-and-optional-handling) The following flags adjust how mypy handles values of type `None`. For more details, see [Disabling strict optional checking](index.html#no-strict-optional). --implicit-optional[#](#cmdoption-mypy-implicit-optional) This flag causes mypy to treat arguments with a `None` default value as having an implicit [`Optional`](https://docs.python.org/3/library/typing.html#typing.Optional) type. For example, if this flag is set, mypy would assume that the `x` parameter is actually of type `Optional[int]` in the code snippet below since the default parameter is `None`: ``` def foo(x: int = None) -> None: print(x) ``` **Note:** This was disabled by default starting in mypy 0.980. --no-strict-optional[#](#cmdoption-mypy-no-strict-optional) This flag disables strict checking of [`Optional`](https://docs.python.org/3/library/typing.html#typing.Optional) types and `None` values. With this option, mypy doesn’t generally check the use of `None` values – they are valid everywhere. See [Disabling strict optional checking](index.html#no-strict-optional) for more about this feature. **Note:** Strict optional checking was enabled by default starting in mypy 0.600, and in previous versions it had to be explicitly enabled using `--strict-optional` (which is still accepted). #### Configuring warnings[#](#configuring-warnings) The following flags enable warnings for code that is sound but is potentially problematic or redundant in some way. --warn-redundant-casts[#](#cmdoption-mypy-warn-redundant-casts) This flag will make mypy report an error whenever your code uses an unnecessary cast that can safely be removed. --warn-unused-ignores[#](#cmdoption-mypy-warn-unused-ignores) This flag will make mypy report an error whenever your code uses a `# type: ignore` comment on a line that is not actually generating an error message. 
This flag and the [`--warn-redundant-casts`](#cmdoption-mypy-warn-redundant-casts) flag are both particularly useful when you are upgrading mypy. Previously, you may have needed to add casts or `# type: ignore` annotations to work around bugs in mypy or missing stubs for 3rd party libraries. These two flags let you discover cases where these workarounds are no longer necessary. --no-warn-no-return[#](#cmdoption-mypy-no-warn-no-return) By default, mypy will generate errors when a function is missing return statements in some execution paths. The only exceptions are when: * The function has a `None` or `Any` return type * The function has an empty body and is marked as an abstract method, is in a protocol class, or is in a stub file * The execution path can never return; for example, if an exception is always raised Passing in [`--no-warn-no-return`](#cmdoption-mypy-no-warn-no-return) will disable these error messages in all cases. --warn-return-any[#](#cmdoption-mypy-warn-return-any) This flag causes mypy to generate a warning when returning a value with type `Any` from a function declared with a non-`Any` return type. --warn-unreachable[#](#cmdoption-mypy-warn-unreachable) This flag will make mypy report an error whenever it encounters code determined to be unreachable or redundant after performing type analysis. This can be a helpful way of detecting certain kinds of bugs in your code. For example, enabling this flag will make mypy report that the `x > 7` check is redundant and that the `else` block below is unreachable. ``` def process(x: int) -> None: # Error: Right operand of "or" is never evaluated if isinstance(x, int) or x > 7: # Error: Unsupported operand types for + ("int" and "str") print(x + "bad") else: # Error: 'Statement is unreachable' error print(x + "bad") ``` To help prevent mypy from generating spurious warnings, the “Statement is unreachable” warning will be silenced in exactly two cases: 1.
When the unreachable statement is a `raise` statement, is an `assert False` statement, or calls a function that has the [`NoReturn`](https://docs.python.org/3/library/typing.html#typing.NoReturn) return type hint. In other words, when the unreachable statement throws an error or terminates the program in some way. 2. When the unreachable statement was *intentionally* marked as unreachable using [Python version and system platform checks](index.html#version-and-platform-checks). Note Mypy currently cannot detect and report unreachable or redundant code inside any functions using [Type variables with value restriction](index.html#type-variable-value-restriction). This limitation will be removed in future releases of mypy. #### Miscellaneous strictness flags[#](#miscellaneous-strictness-flags) This section documents any other flags that do not neatly fall under any of the above sections. --allow-untyped-globals[#](#cmdoption-mypy-allow-untyped-globals) This flag causes mypy to suppress errors caused by not being able to fully infer the types of global and class variables. --allow-redefinition[#](#cmdoption-mypy-allow-redefinition) By default, mypy won’t allow a variable to be redefined with an unrelated type. This flag enables redefinition of a variable with an arbitrary type *in some contexts*: only redefinitions within the same block and nesting depth as the original definition are allowed. 
Example where this can be useful: ``` def process(items: list[str]) -> None: # 'items' has type list[str] items = [item.split() for item in items] # 'items' now has type list[list[str]] ``` The variable must be used before it can be redefined: ``` def process(items: list[str]) -> None: items = "mypy" # invalid redefinition to str because the variable hasn't been used yet print(items) items = "100" # valid, items now has type str items = int(items) # valid, items now has type int ``` --local-partial-types[#](#cmdoption-mypy-local-partial-types) In mypy, the most common cases for partial types are variables initialized using `None`, but without explicit `Optional` annotations. By default, mypy won’t check partial types spanning module top level or class top level. This flag changes the behavior to only allow partial types at the local level; it therefore disallows inferring a variable’s type from `None` assignments in two different scopes. For example: ``` from typing import Optional a = None # Need type annotation here if using --local-partial-types b: Optional[int] = None class Foo: bar = None # Need type annotation here if using --local-partial-types baz: Optional[int] = None def __init__(self) -> None: self.bar = 1 reveal_type(Foo().bar) # Union[int, None] without --local-partial-types ``` Note: this option is always implicitly enabled in the mypy daemon and will become enabled by default for mypy in a future release. --no-implicit-reexport[#](#cmdoption-mypy-no-implicit-reexport) By default, values imported into a module are treated as exported and mypy allows other modules to import them. This flag changes the behavior to not re-export unless the item is imported using from-as or is included in `__all__`. Note this is always treated as enabled for stub files.
For example: ``` # This won't re-export the value from foo import bar # Neither will this from foo import bar as bang # This will re-export it as bar and allow other modules to import it from foo import bar as bar # This will also re-export bar from foo import bar __all__ = ['bar'] ``` --strict-equality[#](#cmdoption-mypy-strict-equality) By default, mypy allows always-false comparisons like `42 == 'no'`. Use this flag to prohibit such comparisons of non-overlapping types, and similar identity and container checks: ``` from typing import Text items: list[int] if 'some string' in items: # Error: non-overlapping container check! ... text: Text if text != b'other bytes': # Error: non-overlapping equality check! ... assert text is not None # OK, check against None is allowed as a special case. ``` --extra-checks[#](#cmdoption-mypy-extra-checks) This flag enables additional checks that are technically correct but may be impractical in real code. In particular, it prohibits partial overlap in `TypedDict` updates, and makes arguments prepended via `Concatenate` positional-only. For example: ``` from typing import TypedDict class Foo(TypedDict): a: int class Bar(TypedDict): a: int b: int def test(foo: Foo, bar: Bar) -> None: # This is technically unsafe since foo can have a subtype of Foo at # runtime, where type of key "b" is incompatible with int, see below bar.update(foo) class Bad(Foo): b: str bad: Bad = {"a": 0, "b": "no"} test(bad, bar) ``` --strict[#](#cmdoption-mypy-strict) This flag enables all optional error checking flags. You can see the list of flags enabled by strict mode in the full [`mypy --help`](#cmdoption-mypy-h) output. Note: the exact list of flags enabled by running [`--strict`](#cmdoption-mypy-strict) may change over time. --disable-error-code[#](#cmdoption-mypy-disable-error-code) This flag allows disabling one or multiple error codes globally. See [Error codes](index.html#error-codes) for more information.
``` # no flag x = 'a string' x.trim() # error: "str" has no attribute "trim" [attr-defined] # When using --disable-error-code attr-defined x = 'a string' x.trim() ``` --enable-error-code[#](#cmdoption-mypy-enable-error-code) This flag allows enabling one or multiple error codes globally. See [Error codes](index.html#error-codes) for more information. Note: This flag will override disabled error codes from the [`--disable-error-code`](#cmdoption-mypy-disable-error-code) flag. ``` # When using --disable-error-code attr-defined x = 'a string' x.trim() # --disable-error-code attr-defined --enable-error-code attr-defined x = 'a string' x.trim() # error: "str" has no attribute "trim" [attr-defined] ``` #### Configuring error messages[#](#configuring-error-messages) The following flags let you adjust how much detail mypy displays in error messages. --show-error-context[#](#cmdoption-mypy-show-error-context) This flag will precede all errors with “note” messages explaining the context of the error. For example, consider the following program: ``` class Test: def foo(self, x: int) -> int: return x + "bar" ``` Mypy normally displays an error message that looks like this: ``` main.py:3: error: Unsupported operand types for + ("int" and "str") ``` If we enable this flag, the error message now looks like this: ``` main.py: note: In member "foo" of class "Test": main.py:3: error: Unsupported operand types for + ("int" and "str") ``` --show-column-numbers[#](#cmdoption-mypy-show-column-numbers) This flag will add column offsets to error messages. For example, the following indicates an error in line 12, column 9 (note that column offsets are 0-based): ``` main.py:12:9: error: Unsupported operand types for / ("int" and "str") ``` --show-error-end[#](#cmdoption-mypy-show-error-end) This flag will make mypy show not just the start position where an error was detected, but also the end position of the relevant expression.
This way various tools can easily highlight the whole error span. The format is `file:line:column:end_line:end_column`. This option implies `--show-column-numbers`. --hide-error-codes[#](#cmdoption-mypy-hide-error-codes) This flag will hide the error code `[<code>]` from error messages. By default, the error code is shown after each error message: ``` prog.py:1: error: "str" has no attribute "trim" [attr-defined] ``` See [Error codes](index.html#error-codes) for more information. --pretty[#](#cmdoption-mypy-pretty) Use visually nicer output in error messages: use soft word wrap, show source code snippets, and show error location markers. --no-color-output[#](#cmdoption-mypy-no-color-output) This flag will disable color output in error messages, enabled by default. --no-error-summary[#](#cmdoption-mypy-no-error-summary) This flag will disable the error summary. By default, mypy shows a summary line including total number of errors, number of files with errors, and number of files checked. --show-absolute-path[#](#cmdoption-mypy-show-absolute-path) Show absolute paths to files. --soft-error-limit N[#](#cmdoption-mypy-soft-error-limit) This flag will adjust the limit after which mypy will (sometimes) disable reporting most additional errors. The limit only applies if it seems likely that most of the remaining errors will not be useful or they may be overly noisy. If `N` is negative, there is no limit. The default limit is 200. #### Incremental mode[#](#incremental-mode) By default, mypy will store type information into a cache. Mypy will use this information to avoid unnecessary recomputation when it type checks your code again. This can help speed up the type checking process, especially when most parts of your program have not changed since the previous mypy run. If you want to reduce how long it takes to recheck your code beyond what incremental mode can offer, try running mypy in [daemon mode](index.html#mypy-daemon).
--no-incremental[#](#cmdoption-mypy-no-incremental) This flag disables incremental mode: mypy will no longer reference the cache when re-run. Note that mypy will still write out to the cache even when incremental mode is disabled: see the [`--cache-dir`](#cmdoption-mypy-cache-dir) flag below for more details. --cache-dir DIR[#](#cmdoption-mypy-cache-dir) By default, mypy stores all cache data inside of a folder named `.mypy_cache` in the current directory. This flag lets you change this folder. This flag can also be useful for controlling cache use when using [remote caching](index.html#remote-cache). This setting will override the `MYPY_CACHE_DIR` environment variable if it is set. Mypy will also always write to the cache even when incremental mode is disabled so it can “warm up” the cache. To disable writing to the cache, use `--cache-dir=/dev/null` (UNIX) or `--cache-dir=nul` (Windows). --sqlite-cache[#](#cmdoption-mypy-sqlite-cache) Use an [SQLite](https://www.sqlite.org/) database to store the cache. --cache-fine-grained[#](#cmdoption-mypy-cache-fine-grained) Include fine-grained dependency information in the cache for the mypy daemon. --skip-version-check[#](#cmdoption-mypy-skip-version-check) By default, mypy will ignore cache data generated by a different version of mypy. This flag disables that behavior. --skip-cache-mtime-checks[#](#cmdoption-mypy-skip-cache-mtime-checks) Skip cache internal consistency checks based on mtime. #### Advanced options[#](#advanced-options) The following flags are useful mostly for people who are interested in developing or debugging mypy internals. --pdb[#](#cmdoption-mypy-pdb) This flag will invoke the Python debugger when mypy encounters a fatal error. --show-traceback, --tb[#](#cmdoption-mypy-show-traceback) If set, this flag will display a full traceback when mypy encounters a fatal error. --raise-exceptions[#](#cmdoption-mypy-raise-exceptions) Raise exception on fatal error. 
--custom-typing-module MODULE[#](#cmdoption-mypy-custom-typing-module) This flag lets you use a custom module as a substitute for the [`typing`](https://docs.python.org/3/library/typing.html#module-typing) module. --custom-typeshed-dir DIR[#](#cmdoption-mypy-custom-typeshed-dir) This flag specifies the directory where mypy looks for standard library typeshed stubs, instead of the typeshed that ships with mypy. This is primarily intended to make it easier to test typeshed changes before submitting them upstream, but also allows you to use a forked version of typeshed. Note that this doesn’t affect third-party library stubs. To test third-party stubs, for example try `MYPYPATH=stubs/six mypy ...`. --warn-incomplete-stub[#](#cmdoption-mypy-warn-incomplete-stub) This flag modifies both the [`--disallow-untyped-defs`](#cmdoption-mypy-disallow-untyped-defs) and [`--disallow-incomplete-defs`](#cmdoption-mypy-disallow-incomplete-defs) flags so they also report errors if stubs in typeshed are missing type annotations or have incomplete annotations. If both flags are missing, [`--warn-incomplete-stub`](#cmdoption-mypy-warn-incomplete-stub) also does nothing. This flag is mainly intended to be used by people who want to contribute to typeshed and would like a convenient way to find gaps and omissions. If you want mypy to report an error when your codebase *uses* an untyped function, whether that function is defined in typeshed or not, use the [`--disallow-untyped-calls`](#cmdoption-mypy-disallow-untyped-calls) flag. See [Untyped definitions and calls](#untyped-definitions-and-calls) for more details. --shadow-file SOURCE_FILE SHADOW_FILE[#](#cmdoption-mypy-shadow-file) When mypy is asked to type check `SOURCE_FILE`, this flag makes mypy read from and type check the contents of `SHADOW_FILE` instead. However, diagnostics will continue to refer to `SOURCE_FILE`.
Specifying this argument multiple times (`--shadow-file X1 Y1 --shadow-file X2 Y2`) will allow mypy to perform multiple substitutions. This allows tooling to create temporary files with helpful modifications without having to change the source file in place. For example, suppose we have a pipeline that adds `reveal_type` for certain variables. This pipeline is run on `original.py` to produce `temp.py`. Running `mypy --shadow-file original.py temp.py original.py` will then cause mypy to type check the contents of `temp.py` instead of `original.py`, but error messages will still reference `original.py`. #### Report generation[#](#report-generation) If these flags are set, mypy will generate a report in the specified format into the specified directory. --any-exprs-report DIR[#](#cmdoption-mypy-any-exprs-report) Causes mypy to generate a text file report documenting how many expressions of type `Any` are present within your codebase. --cobertura-xml-report DIR[#](#cmdoption-mypy-cobertura-xml-report) Causes mypy to generate a Cobertura XML type checking coverage report. To generate this report, you must either manually install the [lxml](https://pypi.org/project/lxml/) library or specify mypy installation with the setuptools extra `mypy[reports]`. --html-report / --xslt-html-report DIR[#](#cmdoption-mypy-html-report) Causes mypy to generate an HTML type checking coverage report. To generate this report, you must either manually install the [lxml](https://pypi.org/project/lxml/) library or specify mypy installation with the setuptools extra `mypy[reports]`. --linecount-report DIR[#](#cmdoption-mypy-linecount-report) Causes mypy to generate a text file report documenting the functions and lines that are typed and untyped within your codebase. --linecoverage-report DIR[#](#cmdoption-mypy-linecoverage-report) Causes mypy to generate a JSON file that maps each source file’s absolute filename to a list of line numbers that belong to typed functions in that file. 
--lineprecision-report DIR[#](#cmdoption-mypy-lineprecision-report) Causes mypy to generate a flat text file report with per-module statistics of how many lines are typechecked etc. --txt-report / --xslt-txt-report DIR[#](#cmdoption-mypy-txt-report) Causes mypy to generate a text file type checking coverage report. To generate this report, you must either manually install the [lxml](https://pypi.org/project/lxml/) library or specify mypy installation with the setuptools extra `mypy[reports]`. --xml-report DIR[#](#cmdoption-mypy-xml-report) Causes mypy to generate an XML type checking coverage report. To generate this report, you must either manually install the [lxml](https://pypi.org/project/lxml/) library or specify mypy installation with the setuptools extra `mypy[reports]`. #### Miscellaneous[#](#miscellaneous) --install-types[#](#cmdoption-mypy-install-types) This flag causes mypy to install known missing stub packages for third-party libraries using pip. It will display the pip command that will be run, and expects a confirmation before installing anything. For security reasons, these stubs are limited to only a small subset of manually selected packages that have been verified by the typeshed team. These packages include only stub files and no executable code. If you use this option without providing any files or modules to type check, mypy will install stub packages suggested during the previous mypy run. If there are files or modules to type check, mypy first type checks those, and proposes to install missing stubs at the end of the run, but only if any missing modules were detected. Note This is new in mypy 0.900. Previous mypy versions included a selection of third-party package stubs, instead of having them installed separately. 
--non-interactive[#](#cmdoption-mypy-non-interactive) When used together with [`--install-types`](#cmdoption-mypy-install-types), this causes mypy to install all suggested stub packages using pip without asking for confirmation, and then continues to perform type checking using the installed stubs, if some files or modules are provided to type check. This is implemented as up to two mypy runs internally. The first run is used to find missing stub packages, and output is shown from this run only if no missing stub packages were found. If missing stub packages were found, they are installed and then another run is performed. --junit-xml JUNIT_XML[#](#cmdoption-mypy-junit-xml) Causes mypy to generate a JUnit XML test result document with type checking results. This can make it easier to integrate mypy with continuous integration (CI) tools. --find-occurrences CLASS.MEMBER[#](#cmdoption-mypy-find-occurrences) This flag will make mypy print out all usages of a class member based on static type information. This feature is experimental. --scripts-are-modules[#](#cmdoption-mypy-scripts-are-modules) This flag will give command line arguments that appear to be scripts (i.e. files whose name does not end in `.py`) a module name derived from the script name rather than the fixed name [`__main__`](https://docs.python.org/3/library/__main__.html#module-__main__). This lets you check more than one script in a single mypy invocation. (The default [`__main__`](https://docs.python.org/3/library/__main__.html#module-__main__) is technically more correct, but if you have many scripts that import a large package, the behavior enabled by this flag is often more convenient.) ### The mypy configuration file[#](#the-mypy-configuration-file) Mypy supports reading configuration settings from a file with the following precedence order: > 1. `./mypy.ini` > 2. `./.mypy.ini` > 3. `./pyproject.toml` > 4. `./setup.cfg` > 5. `$XDG_CONFIG_HOME/mypy/config` > 6. `~/.config/mypy/config` > 7. 
`~/.mypy.ini` It is important to understand that there is no merging of configuration files, as it would lead to ambiguity. The [`--config-file`](index.html#cmdoption-mypy-config-file) command-line flag has the highest precedence and must be correct; otherwise mypy will report an error and exit. Without the command line option, mypy will look for configuration files in the precedence order above. Most flags correspond closely to [command-line flags](index.html#command-line) but there are some differences in flag names and some flags may take a different value based on the module being processed. Some flags support user home directory and environment variable expansion. To refer to the user home directory, use `~` at the beginning of the path. To expand environment variables use `$VARNAME` or `${VARNAME}`. #### Config file format[#](#config-file-format) The configuration file format is the usual [ini file](https://docs.python.org/3/library/configparser.html) format. It should contain section names in square brackets and flag settings of the form NAME = VALUE. Comments start with `#` characters. * A section named `[mypy]` must be present. This specifies the global flags. * Additional sections named `[mypy-PATTERN1,PATTERN2,...]` may be present, where `PATTERN1`, `PATTERN2`, etc., are comma-separated patterns of fully-qualified module names, with some components optionally replaced by the ‘*’ character (e.g. `foo.bar`, `foo.bar.*`, `foo.*.baz`). These sections specify additional flags that only apply to *modules* whose name matches at least one of the patterns. A pattern of the form `qualified_module_name` matches only the named module, while `dotted_module_name.*` matches `dotted_module_name` and any submodules (so `foo.bar.*` would match all of `foo.bar`, `foo.bar.baz`, and `foo.bar.baz.quux`). Patterns may also be “unstructured” wildcards, in which stars may appear in the middle of a name (e.g `site.*.migrations.*`). 
Stars match zero or more module components (so `site.*.migrations.*` can match `site.migrations`). When options conflict, the precedence order for configuration is: > 1. [Inline configuration](index.html#inline-config) in the source file > 2. Sections with concrete module names (`foo.bar`) > 3. Sections with “unstructured” wildcard patterns (`foo.*.baz`), > with sections later in the configuration file overriding > sections earlier. > 4. Sections with “well-structured” wildcard patterns > (`foo.bar.*`), with more specific overriding more general. > 5. Command line options. > 6. Top-level configuration file options. The difference in precedence order between “structured” patterns (by specificity) and “unstructured” patterns (by order in the file) is unfortunate, and is subject to change in future versions. Note The [`warn_unused_configs`](#confval-warn_unused_configs) flag may be useful to debug misspelled section names. Note Configuration flags are liable to change between releases. #### Per-module and global options[#](#per-module-and-global-options) Some of the config options may be set either globally (in the `[mypy]` section) or on a per-module basis (in sections like `[mypy-foo.bar]`). If you set an option both globally and for a specific module, the module configuration options take precedence. This lets you set global defaults and override them on a module-by-module basis. If multiple pattern sections match a module, [the options from the most specific section are used where they disagree](#config-precedence). Some other options, as specified in their description, may only be set in the global section (`[mypy]`). #### Inverting option values[#](#inverting-option-values) Options that take a boolean value may be inverted by adding `no_` to their name or by (when applicable) swapping their prefix from `disallow` to `allow` (and vice versa). #### Example `mypy.ini`[#](#example-mypy-ini) Here is an example of a `mypy.ini` file. 
To use this config file, place it at the root of your repo and run mypy. ``` # Global options: [mypy] warn_return_any = True warn_unused_configs = True # Per-module options: [mypy-mycode.foo.*] disallow_untyped_defs = True [mypy-mycode.bar] warn_return_any = False [mypy-somelibrary] ignore_missing_imports = True ``` This config file specifies two global options in the `[mypy]` section. These two options will: 1. Report an error whenever a function returns a value that is inferred to have type `Any`. 2. Report any config options that are unused by mypy. (This will help us catch typos when making changes to our config file). Next, this config file specifies three per-module options. The first two options change how mypy type checks code in `mycode.foo.*` and `mycode.bar`, which we assume here are two modules that you wrote. The final config option changes how mypy type checks `somelibrary`, which we assume here is some 3rd party library you’ve installed and are importing. These options will: 1. Selectively disallow untyped function definitions only within the `mycode.foo` package – that is, only for function definitions defined in the `mycode/foo` directory. 2. Selectively *disable* the “function is returning any” warnings within `mycode.bar` only. This overrides the global default we set earlier. 3. Suppress any error messages generated when your codebase tries importing the module `somelibrary`. This is useful if `somelibrary` is some 3rd party library missing type hints. #### Import discovery[#](#import-discovery) For more information, see the [Import discovery](index.html#import-discovery) section of the command line docs. mypy_path[#](#confval-mypy_path) Type string Specifies the paths to use, after trying the paths from the `MYPYPATH` environment variable. Useful if you’d like to keep stubs in your repo, along with the config file. Multiple paths are always separated with a `:` or `,` regardless of the platform.
User home directory and environment variables will be expanded. Relative paths are treated relative to the working directory of the mypy command, not the config file. Use the `MYPY_CONFIG_FILE_DIR` environment variable to refer to paths relative to the config file (e.g. `mypy_path = $MYPY_CONFIG_FILE_DIR/src`). This option may only be set in the global section (`[mypy]`). **Note:** On Windows, use UNC paths to avoid using `:` (e.g. `\\127.0.0.1\X$\MyDir` where `X` is the drive letter). files[#](#confval-files) Type comma-separated list of strings A comma-separated list of paths which should be checked by mypy if none are given on the command line. Supports recursive file globbing using [`glob`](https://docs.python.org/3/library/glob.html#module-glob), where `*` (e.g. `*.py`) matches files in the current directory and `**/` (e.g. `**/*.py`) matches files in any directories below the current one. User home directory and environment variables will be expanded. This option may only be set in the global section (`[mypy]`). modules[#](#confval-modules) Type comma-separated list of strings A comma-separated list of packages which should be checked by mypy if none are given on the command line. Mypy *will not* recursively type check any submodules of the provided module. This option may only be set in the global section (`[mypy]`). packages[#](#confval-packages) Type comma-separated list of strings A comma-separated list of packages which should be checked by mypy if none are given on the command line. Mypy *will* recursively type check any submodules of the provided package. This flag is identical to [`modules`](#confval-modules) apart from this behavior. This option may only be set in the global section (`[mypy]`). exclude[#](#confval-exclude) Type regular expression A regular expression that matches file names, directory names and paths which mypy should ignore while recursively discovering files to check. 
Use forward slashes (`/`) as directory separators on all platforms. ``` [mypy] exclude = (?x)( ^one\.py$ # files named "one.py" | two\.pyi$ # or files ending with "two.pyi" | ^three\. # or files starting with "three." ) ``` Crafting a single regular expression that excludes multiple files while remaining human-readable can be a challenge. The above example demonstrates one approach. `(?x)` enables the `VERBOSE` flag for the subsequent regular expression, which [ignores most whitespace and supports comments](https://docs.python.org/3/library/re.html#re.X). The above is equivalent to: `(^one\.py$|two\.pyi$|^three\.)`. For more details, see [`--exclude`](index.html#cmdoption-mypy-exclude). This option may only be set in the global section (`[mypy]`). Note Note that the TOML equivalent differs slightly. It can be either a single string (including a multi-line string) – which is treated as a single regular expression – or an array of such strings. The following TOML examples are equivalent to the above INI example. Array of strings: ``` [tool.mypy] exclude = [ "^one\\.py$", # TOML's double-quoted strings require escaping backslashes 'two\.pyi$', # but TOML's single-quoted strings do not '^three\.', ] ``` A single, multi-line string: ``` [tool.mypy] exclude = '''(?x)( ^one\.py$ # files named "one.py" | two\.pyi$ # or files ending with "two.pyi" | ^three\. # or files starting with "three." )''' # TOML's single-quoted strings do not require escaping backslashes ``` See [Using a pyproject.toml file](#using-a-pyproject-toml). namespace_packages[#](#confval-namespace_packages) Type boolean Default True Enables [**PEP 420**](https://peps.python.org/pep-0420/) style namespace packages. See the corresponding flag [`--no-namespace-packages`](index.html#cmdoption-mypy-no-namespace-packages) for more information. This option may only be set in the global section (`[mypy]`). 
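Since `exclude` is an ordinary regular expression matched against paths, one way to sanity-check a pattern before handing it to mypy is to try it with Python's `re` module directly. A minimal sketch, reusing the verbose pattern from the `exclude` example above (the sample paths are hypothetical, and this only exercises the regular expression itself, not mypy's file discovery):

```python
import re

# The same verbose pattern as in the mypy.ini exclude example.
pattern = re.compile(r"""(?x)(
    ^one\.py$    # files named "one.py"
    | two\.pyi$  # or files ending with "two.pyi"
    | ^three\.   # or files starting with "three."
)""")

for path in ["one.py", "src/two.pyi", "three.settings", "src/one.py"]:
    excluded = pattern.search(path) is not None
    print(path, "excluded" if excluded else "kept")
```

Note that `src/one.py` is kept: the `^` anchor means `^one\.py$` only matches the bare path `one.py`, while the unanchored `two\.pyi$` also matches paths in subdirectories.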
explicit_package_bases[#](#confval-explicit_package_bases) Type boolean Default False This flag tells mypy that top-level packages will be based in either the current directory, or a member of the `MYPYPATH` environment variable or [`mypy_path`](#confval-mypy_path) config option. This option is only useful in the absence of `__init__.py`. See [Mapping file paths to modules](index.html#mapping-paths-to-modules) for details. This option may only be set in the global section (`[mypy]`). ignore_missing_imports[#](#confval-ignore_missing_imports) Type boolean Default False Suppresses error messages about imports that cannot be resolved. If this option is used in a per-module section, the module name should match the name of the *imported* module, not the module containing the import statement. follow_imports[#](#confval-follow_imports) Type string Default `normal` Directs what to do with imports when the imported module is found as a `.py` file and not part of the files, modules, and packages provided on the command line. The four possible values are `normal`, `silent`, `skip`, and `error`. For explanations see the discussion for the [`--follow-imports`](index.html#cmdoption-mypy-follow-imports) command line flag. Using this option in a per-module section (potentially with a wildcard, as described at the top of this page) is a good way to prevent mypy from checking portions of your code. If this option is used in a per-module section, the module name should match the name of the *imported* module, not the module containing the import statement. follow_imports_for_stubs[#](#confval-follow_imports_for_stubs) Type boolean Default False Determines whether to respect the [`follow_imports`](#confval-follow_imports) setting even for stub (`.pyi`) files. Used in conjunction with [`follow_imports=skip`](#confval-follow_imports), this can be used to suppress the import of a module from `typeshed`, replacing it with `Any`.
Used in conjunction with [`follow_imports=error`](#confval-follow_imports), this can be used to make any use of a particular `typeshed` module an error. Note This is not supported by the mypy daemon. python_executable[#](#confval-python_executable) Type string Specifies the path to the Python executable to inspect to collect a list of available [PEP 561 packages](index.html#installed-packages). User home directory and environment variables will be expanded. Defaults to the executable used to run mypy. This option may only be set in the global section (`[mypy]`). no_site_packages[#](#confval-no_site_packages) Type boolean Default False Disables using type information in installed packages (see [**PEP 561**](https://peps.python.org/pep-0561/)). This will also disable searching for a usable Python executable. This acts the same as [`--no-site-packages`](index.html#cmdoption-mypy-no-site-packages) command line flag. no_silence_site_packages[#](#confval-no_silence_site_packages) Type boolean Default False Enables reporting error messages generated within installed packages (see [**PEP 561**](https://peps.python.org/pep-0561/) for more details on distributing type information). Those error messages are suppressed by default, since you are usually not able to control errors in 3rd party code. This option may only be set in the global section (`[mypy]`). #### Platform configuration[#](#platform-configuration) python_version[#](#confval-python_version) Type string Specifies the Python version used to parse and check the target program. The string should be in the format `MAJOR.MINOR` – for example `2.7`. The default is the version of the Python interpreter used to run mypy. This option may only be set in the global section (`[mypy]`). platform[#](#confval-platform) Type string Specifies the OS platform for the target program, for example `darwin` or `win32` (meaning OS X or Windows, respectively). 
The default is the current platform as revealed by Python’s [`sys.platform`](https://docs.python.org/3/library/sys.html#sys.platform) variable. This option may only be set in the global section (`[mypy]`). always_true[#](#confval-always_true) Type comma-separated list of strings Specifies a list of variables that mypy will treat as compile-time constants that are always true. always_false[#](#confval-always_false) Type comma-separated list of strings Specifies a list of variables that mypy will treat as compile-time constants that are always false. #### Disallow dynamic typing[#](#disallow-dynamic-typing) For more information, see the [Disallow dynamic typing](index.html#disallow-dynamic-typing) section of the command line docs. disallow_any_unimported[#](#confval-disallow_any_unimported) Type boolean Default False Disallows usage of types that come from unfollowed imports (anything imported from an unfollowed import is automatically given a type of `Any`). disallow_any_expr[#](#confval-disallow_any_expr) Type boolean Default False Disallows all expressions in the module that have type `Any`. disallow_any_decorated[#](#confval-disallow_any_decorated) Type boolean Default False Disallows functions that have `Any` in their signature after decorator transformation. disallow_any_explicit[#](#confval-disallow_any_explicit) Type boolean Default False Disallows explicit `Any` in type positions such as type annotations and generic type parameters. disallow_any_generics[#](#confval-disallow_any_generics) Type boolean Default False Disallows usage of generic types that do not specify explicit type parameters. disallow_subclassing_any[#](#confval-disallow_subclassing_any) Type boolean Default False Disallows subclassing a value of type `Any`. #### Untyped definitions and calls[#](#untyped-definitions-and-calls) For more information, see the [Untyped definitions and calls](index.html#untyped-definitions-and-calls) section of the command line docs. 
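To make the effect of `disallow_any_generics` concrete, here is a small sketch: the first function below uses a bare generic type (implicitly `list[Any]`) and would be flagged, while the second specifies an explicit type parameter and is accepted:

```
def first_untyped(items: list) -> object:
    # bare `list` is implicitly list[Any]; flagged by disallow_any_generics
    return items[0]

def first_typed(items: list[int]) -> int:
    # explicit type parameter; accepted
    return items[0]
```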
disallow_untyped_calls[#](#confval-disallow_untyped_calls) Type boolean Default False Disallows calling functions without type annotations from functions with type annotations. Note that when used in per-module options, it enables/disables this check **inside** the module(s) specified, not for functions imported from those modules. For example, a config like this:

```
[mypy]
disallow_untyped_calls = True

[mypy-some.library.*]
disallow_untyped_calls = False
```

will disable this check inside `some.library`, not for your code that imports `some.library`. If you want to selectively disable this check for all your code that imports `some.library` you should instead use [`untyped_calls_exclude`](#confval-untyped_calls_exclude), for example:

```
[mypy]
disallow_untyped_calls = True
untyped_calls_exclude = some.library
```

untyped_calls_exclude[#](#confval-untyped_calls_exclude) Type comma-separated list of strings Selectively excludes functions and methods defined in specific packages, modules, and classes from the action of [`disallow_untyped_calls`](#confval-disallow_untyped_calls). This also applies to all submodules of packages (i.e. everything inside a given prefix). Note that this option does not support per-file configuration; the exclusion list is defined globally for all your code. disallow_untyped_defs[#](#confval-disallow_untyped_defs) Type boolean Default False Disallows defining functions without type annotations or with incomplete type annotations (a superset of [`disallow_incomplete_defs`](#confval-disallow_incomplete_defs)). For example, it would report an error for `def f(a, b)` and `def f(a: int, b)`. disallow_incomplete_defs[#](#confval-disallow_incomplete_defs) Type boolean Default False Disallows defining functions with incomplete type annotations, while still allowing entirely unannotated definitions. For example, it would report an error for `def f(a: int, b)` but not `def f(a, b)`.
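The distinction between the two flags can be sketched with three runnable definitions (names are illustrative):

```
def add(a: int, b: int) -> int:
    # fully annotated: accepted by both flags
    return a + b

def sub(a: int, b):
    # incomplete annotations: flagged by disallow_incomplete_defs
    # (and by disallow_untyped_defs, which is a superset)
    return a - b

def mul(a, b):
    # entirely unannotated: flagged by disallow_untyped_defs only
    return a * b
```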
check_untyped_defs[#](#confval-check_untyped_defs) Type boolean Default False Type-checks the interior of functions without type annotations. disallow_untyped_decorators[#](#confval-disallow_untyped_decorators) Type boolean Default False Reports an error whenever a function with type annotations is decorated with a decorator without annotations. #### None and Optional handling[#](#none-and-optional-handling) For more information, see the [None and Optional handling](index.html#none-and-optional-handling) section of the command line docs. implicit_optional[#](#confval-implicit_optional) Type boolean Default False Causes mypy to treat arguments with a `None` default value as having an implicit [`Optional`](https://docs.python.org/3/library/typing.html#typing.Optional) type. **Note:** This was True by default in mypy versions 0.980 and earlier. strict_optional[#](#confval-strict_optional) Type boolean Default True Enables or disables strict Optional checks. If False, mypy treats `None` as compatible with every type. **Note:** This was False by default in mypy versions earlier than 0.600. #### Configuring warnings[#](#configuring-warnings) For more information, see the [Configuring warnings](index.html#configuring-warnings) section of the command line docs. warn_redundant_casts[#](#confval-warn_redundant_casts) Type boolean Default False Warns about casting an expression to its inferred type. This option may only be set in the global section (`[mypy]`). warn_unused_ignores[#](#confval-warn_unused_ignores) Type boolean Default False Warns about unneeded `# type: ignore` comments. warn_no_return[#](#confval-warn_no_return) Type boolean Default True Shows errors for missing return statements on some execution paths. warn_return_any[#](#confval-warn_return_any) Type boolean Default False Shows a warning when returning a value with type `Any` from a function declared with a non- `Any` return type. 
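As an illustration of `warn_return_any`, the function below would be reported when the flag is enabled, because `json.loads` returns `Any` while the declared return type is `str`:

```
import json

def get_name(raw: str) -> str:
    # json.loads returns Any; returning one of its items from a function
    # declared to return str is reported by warn_return_any
    return json.loads(raw)["name"]
```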
warn_unreachable[#](#confval-warn_unreachable) Type boolean Default False Shows a warning when encountering any code inferred to be unreachable or redundant after performing type analysis. #### Suppressing errors[#](#suppressing-errors) Note: these configuration options are available in the config file only. There is no analog available via the command line options. ignore_errors[#](#confval-ignore_errors) Type boolean Default False Ignores all non-fatal errors. #### Miscellaneous strictness flags[#](#miscellaneous-strictness-flags) For more information, see the [Miscellaneous strictness flags](index.html#miscellaneous-strictness-flags) section of the command line docs. allow_untyped_globals[#](#confval-allow_untyped_globals) Type boolean Default False Causes mypy to suppress errors caused by not being able to fully infer the types of global and class variables. allow_redefinition[#](#confval-allow_redefinition) Type boolean Default False Allows variables to be redefined with an arbitrary type, as long as the redefinition is in the same block and nesting level as the original definition. Example where this can be useful:

```
def process(items: list[str]) -> None:
    # 'items' has type list[str]
    items = [item.split() for item in items]
    # 'items' now has type list[list[str]]
```

The variable must be used before it can be redefined:

```
def process(items: list[str]) -> None:
    items = "mypy"      # invalid redefinition to str because the variable hasn't been used yet
    print(items)
    items = "100"       # valid, items now has type str
    items = int(items)  # valid, items now has type int
```

local_partial_types[#](#confval-local_partial_types) Type boolean Default False Disallows inferring variable type for `None` from two assignments in different scopes. This is always implicitly enabled when using the [mypy daemon](index.html#mypy-daemon). disable_error_code[#](#confval-disable_error_code) Type comma-separated list of strings Allows disabling one or multiple error codes globally.
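For example, to disable two error codes globally (both are real mypy error codes):

```
[mypy]
disable_error_code = attr-defined, union-attr
```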
enable_error_code[#](#confval-enable_error_code) Type comma-separated list of strings Allows enabling one or multiple error codes globally. Note: This option will override disabled error codes from the disable_error_code option. implicit_reexport[#](#confval-implicit_reexport) Type boolean Default True By default, values imported into a module are treated as exported and mypy allows other modules to import them. When false, mypy will not re-export unless the item is imported using from-as or is included in `__all__`. Note that mypy treats stub files as if this is always disabled. For example:

```
# This won't re-export the value
from foo import bar

# This will re-export it as bar and allow other modules to import it
from foo import bar as bar

# This will also re-export bar
from foo import bar
__all__ = ['bar']
```

strict_concatenate[#](#confval-strict_concatenate) Type boolean Default False Make arguments prepended via `Concatenate` be truly positional-only. strict_equality[#](#confval-strict_equality) Type boolean Default False Prohibit equality checks, identity checks, and container checks between non-overlapping types. strict[#](#confval-strict) Type boolean Default False Enable all optional error checking flags. You can see the list of flags enabled by strict mode in the full [`mypy --help`](index.html#cmdoption-mypy-h) output. Note: the exact list of flags enabled by [`strict`](#confval-strict) may change over time. #### Configuring error messages[#](#configuring-error-messages) For more information, see the [Configuring error messages](index.html#configuring-error-messages) section of the command line docs. These options may only be set in the global section (`[mypy]`). show_error_context[#](#confval-show_error_context) Type boolean Default False Prefixes each error with the relevant context. show_column_numbers[#](#confval-show_column_numbers) Type boolean Default False Shows column numbers in error messages.
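To illustrate `strict_equality` (described above): the comparison below is between the non-overlapping types `bytes` and `str`, always evaluates to `False` at runtime, and is reported when the flag is enabled:

```
def is_magic(header: bytes) -> bool:
    # bytes and str never compare equal; strict_equality reports this check
    return header == "magic"
```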
hide_error_codes[#](#confval-hide_error_codes) Type boolean Default False Hides error codes in error messages. See [Error codes](index.html#error-codes) for more information. pretty[#](#confval-pretty) Type boolean Default False Use visually nicer output in error messages: use soft word wrap, show source code snippets, and show error location markers. color_output[#](#confval-color_output) Type boolean Default True Shows error messages with color enabled. error_summary[#](#confval-error_summary) Type boolean Default True Shows a short summary line after error messages. show_absolute_path[#](#confval-show_absolute_path) Type boolean Default False Show absolute paths to files. #### Incremental mode[#](#incremental-mode) These options may only be set in the global section (`[mypy]`). incremental[#](#confval-incremental) Type boolean Default True Enables [incremental mode](index.html#incremental). cache_dir[#](#confval-cache_dir) Type string Default `.mypy_cache` Specifies the location where mypy stores incremental cache info. User home directory and environment variables will be expanded. This setting will be overridden by the `MYPY_CACHE_DIR` environment variable. Note that the cache is only read when incremental mode is enabled but is always written to, unless the value is set to `/dev/null` (UNIX) or `nul` (Windows). sqlite_cache[#](#confval-sqlite_cache) Type boolean Default False Use an [SQLite](https://www.sqlite.org/) database to store the cache. cache_fine_grained[#](#confval-cache_fine_grained) Type boolean Default False Include fine-grained dependency information in the cache for the mypy daemon. skip_version_check[#](#confval-skip_version_check) Type boolean Default False Makes mypy use incremental cache data even if it was generated by a different version of mypy. (By default, mypy will perform a version check and regenerate the cache if it was written by older versions of mypy.) 
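For example, a global section that stores the cache as an SQLite database in a custom location (the path is illustrative) might read:

```
[mypy]
cache_dir = .cache/mypy
sqlite_cache = True
```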
skip_cache_mtime_checks[#](#confval-skip_cache_mtime_checks) Type boolean Default False Skip cache internal consistency checks based on mtime. #### Advanced options[#](#advanced-options) These options may only be set in the global section (`[mypy]`). plugins[#](#confval-plugins) Type comma-separated list of strings A comma-separated list of mypy plugins. See [Extending mypy using plugins](index.html#extending-mypy-using-plugins). pdb[#](#confval-pdb) Type boolean Default False Invokes [`pdb`](https://docs.python.org/3/library/pdb.html#module-pdb) on fatal error. show_traceback[#](#confval-show_traceback) Type boolean Default False Shows traceback on fatal error. raise_exceptions[#](#confval-raise_exceptions) Type boolean Default False Raise exception on fatal error. custom_typing_module[#](#confval-custom_typing_module) Type string Specifies a custom module to use as a substitute for the [`typing`](https://docs.python.org/3/library/typing.html#module-typing) module. custom_typeshed_dir[#](#confval-custom_typeshed_dir) Type string This specifies the directory where mypy looks for standard library typeshed stubs, instead of the typeshed that ships with mypy. This is primarily intended to make it easier to test typeshed changes before submitting them upstream, but also allows you to use a forked version of typeshed. User home directory and environment variables will be expanded. Note that this doesn’t affect third-party library stubs. To test third-party stubs, for example try `MYPYPATH=stubs/six mypy ...`. warn_incomplete_stub[#](#confval-warn_incomplete_stub) Type boolean Default False Warns about missing type annotations in typeshed. This is only relevant in combination with [`disallow_untyped_defs`](#confval-disallow_untyped_defs) or [`disallow_incomplete_defs`](#confval-disallow_incomplete_defs). #### Report generation[#](#report-generation) If these options are set, mypy will generate a report in the specified format into the specified directory. 
Warning Generating reports disables incremental mode and can significantly slow down your workflow. It is recommended to enable reporting only for specific runs (e.g. in CI). any_exprs_report[#](#confval-any_exprs_report) Type string Causes mypy to generate a text file report documenting how many expressions of type `Any` are present within your codebase. cobertura_xml_report[#](#confval-cobertura_xml_report) Type string Causes mypy to generate a Cobertura XML type checking coverage report. To generate this report, you must either manually install the [lxml](https://pypi.org/project/lxml/) library or specify mypy installation with the setuptools extra `mypy[reports]`. html_report / xslt_html_report[#](#confval-html_report-xslt_html_report) Type string Causes mypy to generate an HTML type checking coverage report. To generate this report, you must either manually install the [lxml](https://pypi.org/project/lxml/) library or specify mypy installation with the setuptools extra `mypy[reports]`. linecount_report[#](#confval-linecount_report) Type string Causes mypy to generate a text file report documenting the functions and lines that are typed and untyped within your codebase. linecoverage_report[#](#confval-linecoverage_report) Type string Causes mypy to generate a JSON file that maps each source file’s absolute filename to a list of line numbers that belong to typed functions in that file. lineprecision_report[#](#confval-lineprecision_report) Type string Causes mypy to generate a flat text file report with per-module statistics of how many lines are typechecked etc. txt_report / xslt_txt_report[#](#confval-txt_report-xslt_txt_report) Type string Causes mypy to generate a text file type checking coverage report. To generate this report, you must either manually install the [lxml](https://pypi.org/project/lxml/) library or specify mypy installation with the setuptools extra `mypy[reports]`. 
xml_report[#](#confval-xml_report) Type string Causes mypy to generate an XML type checking coverage report. To generate this report, you must either manually install the [lxml](https://pypi.org/project/lxml/) library or specify mypy installation with the setuptools extra `mypy[reports]`. #### Miscellaneous[#](#miscellaneous) These options may only be set in the global section (`[mypy]`). junit_xml[#](#confval-junit_xml) Type string Causes mypy to generate a JUnit XML test result document with type checking results. This can make it easier to integrate mypy with continuous integration (CI) tools. scripts_are_modules[#](#confval-scripts_are_modules) Type boolean Default False Makes script `x` become module `x` instead of `__main__`. This is useful when checking multiple scripts in a single run. warn_unused_configs[#](#confval-warn_unused_configs) Type boolean Default False Warns about per-module sections in the config file that do not match any files processed when invoking mypy. (This requires turning off incremental mode using [`incremental = False`](#confval-incremental).) verbosity[#](#confval-verbosity) Type integer Default 0 Controls how much debug output will be generated. Higher numbers are more verbose. #### Using a pyproject.toml file[#](#using-a-pyproject-toml-file) Instead of using a `mypy.ini` file, a `pyproject.toml` file (as specified by [PEP 518](https://www.python.org/dev/peps/pep-0518/)) may be used. A few notes on doing so: * The `[mypy]` section should have `tool.` prepended to its name: + I.e., `[mypy]` would become `[tool.mypy]` * The module-specific sections should be moved into `[[tool.mypy.overrides]]` sections: + For example, `[mypy-packagename]` would become:

```
[[tool.mypy.overrides]]
module = 'packagename'
...
```

* Multi-module specific sections can be moved into a single `[[tool.mypy.overrides]]` section with a module property set to an array of modules: + For example, `[mypy-packagename,packagename2]` would become:

```
[[tool.mypy.overrides]]
module = [
    'packagename',
    'packagename2'
]
...
```

* Some care should be given to values in `pyproject.toml` files as compared to `ini` files: + Strings must be wrapped in double quotes, or single quotes if the string contains special characters + Boolean values should be all lower case Please see the [TOML Documentation](https://toml.io/) for more details and information on what is allowed in a `toml` file. See [PEP 518](https://www.python.org/dev/peps/pep-0518/) for more information on the layout and structure of the `pyproject.toml` file. #### Example `pyproject.toml`[#](#example-pyproject-toml) Here is an example of a `pyproject.toml` file. To use this config file, place it at the root of your repo (or append it to the end of an existing `pyproject.toml` file) and run mypy.

```
# mypy global options:

[tool.mypy]
python_version = "2.7"
warn_return_any = true
warn_unused_configs = true
exclude = [
    '^file1\.py$',   # TOML literal string (single-quotes, no escaping necessary)
    "^file2\\.py$",  # TOML basic string (double-quotes, backslash and other characters need escaping)
]

# mypy per-module options:

[[tool.mypy.overrides]]
module = "mycode.foo.*"
disallow_untyped_defs = true

[[tool.mypy.overrides]]
module = "mycode.bar"
warn_return_any = false

[[tool.mypy.overrides]]
module = [
    "somelibrary",
    "some_other_library"
]
ignore_missing_imports = true
```

### Inline configuration[#](#inline-configuration) Mypy supports setting per-file configuration options inside files themselves using `# mypy:` comments. For example:

```
# mypy: disallow-any-generics
```

Inline configuration comments take precedence over all other configuration mechanisms.
#### Configuration comment format[#](#configuration-comment-format) Flags correspond to [config file flags](index.html#config-file) but allow hyphens to be substituted for underscores. Values are specified using `=`, but `= True` may be omitted:

```
# mypy: disallow-any-generics
# mypy: always-true=FOO
```

Multiple flags can be separated by commas or placed on separate lines. To include a comma as part of an option’s value, place the value inside quotes:

```
# mypy: disallow-untyped-defs, always-false="FOO,BAR"
```

Like in the configuration file, options that take a boolean value may be inverted by adding `no-` to their name or by (when applicable) swapping their prefix from `disallow` to `allow` (and vice versa):

```
# mypy: allow-untyped-defs, no-strict-optional
```

### Mypy daemon (mypy server)[#](#mypy-daemon-mypy-server) Instead of running mypy as a command-line tool, you can also run it as a long-running daemon (server) process and use a command-line client to send type-checking requests to the server. This way mypy can perform type checking much faster, since program state cached from previous runs is kept in memory and doesn’t have to be read from the file system on each run. The server also uses finer-grained dependency tracking to reduce the amount of work that needs to be done. If you have a large codebase to check, running mypy using the mypy daemon can be *10 or more times faster* than the regular command-line `mypy` tool, especially if your workflow involves running mypy repeatedly after small edits – which is often a good idea, as this way you’ll find errors sooner. Note The command-line interface of mypy daemon may change in future mypy releases. Note Each mypy daemon process supports one user and one set of source files, and it can only process one type checking request at a time. You can run multiple mypy daemon processes to type check multiple repositories.
#### Basic usage[#](#basic-usage) The client utility `dmypy` is used to control the mypy daemon. Use `dmypy run -- <flags> <files>` to type check a set of files (or directories). This will launch the daemon if it is not running. You can use almost arbitrary mypy flags after `--`. The daemon will always run on the current host. Example:

```
dmypy run -- prog.py pkg/*.py
```

`dmypy run` will automatically restart the daemon if the configuration or mypy version changes. The initial run will process all the code and may take a while to finish, but subsequent runs will be quick, especially if you’ve only changed a few files. (You can use [remote caching](index.html#remote-cache) to speed up the initial run. The speedup can be significant if you have a large codebase.) Note Mypy 0.780 added support for following imports in dmypy (enabled by default). This functionality is still experimental. You can use `--follow-imports=skip` or `--follow-imports=error` to fall back to the stable functionality. See [Following imports](index.html#follow-imports) for details on how these work. Note The mypy daemon automatically enables `--local-partial-types` by default. #### Daemon client commands[#](#daemon-client-commands) While `dmypy run` is sufficient for most uses, some workflows (ones using [remote caching](index.html#remote-cache), perhaps) require more precise control over the lifetime of the daemon process: * `dmypy stop` stops the daemon. * `dmypy start -- <flags>` starts the daemon but does not check any files. You can use almost arbitrary mypy flags after `--`. * `dmypy restart -- <flags>` restarts the daemon. The flags are the same as with `dmypy start`. This is equivalent to a stop command followed by a start. * Use `dmypy run --timeout SECONDS -- <flags>` (or `start` or `restart`) to automatically shut down the daemon after inactivity. By default, the daemon runs until it’s explicitly stopped. * `dmypy check <files>` checks a set of files using an already running daemon.
* `dmypy recheck` checks the same set of files as the most recent `check` or `recheck` command. (You can also use the [`--update`](#cmdoption-dmypy-update) and [`--remove`](#cmdoption-dmypy-remove) options to alter the set of files, and to define which files should be processed.) * `dmypy status` checks whether a daemon is running. It prints a diagnostic and exits with `0` if there is a running daemon. Use `dmypy --help` for help on additional commands and command-line options not discussed here, and `dmypy <command> --help` for help on command-specific options. #### Additional daemon flags[#](#additional-daemon-flags) --status-file FILE[#](#cmdoption-dmypy-status-file) Use `FILE` as the status file for storing daemon runtime state. This is normally a JSON file that contains information about the daemon process and connection. The default path is `.dmypy.json` in the current working directory. --log-file FILE[#](#cmdoption-dmypy-log-file) Direct daemon stdout/stderr to `FILE`. This is useful for debugging daemon crashes, since the server traceback is not always printed by the client. This is available for the `start`, `restart`, and `run` commands. --timeout TIMEOUT[#](#cmdoption-dmypy-timeout) Automatically shut down the server after `TIMEOUT` seconds of inactivity. This is available for the `start`, `restart`, and `run` commands. --update FILE[#](#cmdoption-dmypy-update) Re-check `FILE`, or add it to the set of files being checked (and check it). This option may be repeated, and it’s only available for the `recheck` command. By default, mypy finds and checks all files changed since the previous run and files that depend on them. However, if you use this option (and/or [`--remove`](#cmdoption-dmypy-remove)), mypy assumes that only the explicitly specified files have changed.
This is only useful to speed up mypy if you type check a very large number of files, and use an external, fast file system watcher, such as [watchman](https://facebook.github.io/watchman/) or [watchdog](https://pypi.org/project/watchdog/), to determine which files got edited or deleted. *Note:* This option is never required and is only available for performance tuning. --remove FILE[#](#cmdoption-dmypy-remove) Remove `FILE` from the set of files being checked. This option may be repeated. This is only available for the `recheck` command. See [`--update`](#cmdoption-dmypy-update) above for when this may be useful. *Note:* This option is never required and is only available for performance tuning. --fswatcher-dump-file FILE[#](#cmdoption-dmypy-fswatcher-dump-file) Collect information about the current internal file state. This is only available for the `status` command. This will dump JSON to `FILE` in the format `{path: [modification_time, size, content_hash]}`. This is useful for debugging the built-in file system watcher. *Note:* This is an internal flag and the format may change. --perf-stats-file FILE[#](#cmdoption-dmypy-perf-stats-file) Write performance profiling information to `FILE`. This is only available for the `check`, `recheck`, and `run` commands. --export-types[#](#cmdoption-dmypy-export-types) Store all expression types in memory for future use. This is useful to speed up future calls to `dmypy inspect` (but uses more memory). Only valid for the `check`, `recheck`, and `run` commands.
This is a low-level feature intended to be used by editor integrations, IDEs, and other tools (for example, the [mypy plugin for PyCharm](https://github.com/dropbox/mypy-PyCharm-plugin)), to automatically add annotations to source files, or to propose function signatures. In this example, the function `format_id()` has no annotation:

```
def format_id(user):
    return f"User: {user}"

root = format_id(0)
```

`dmypy suggest` uses call sites, return statements, and other heuristics (such as looking for signatures in base classes) to infer that `format_id()` accepts an `int` argument and returns a `str`. Use `dmypy suggest module.format_id` to print the suggested signature for the function. More generally, the target function may be specified in two ways: * By its fully qualified name, i.e. `[package.]module.[class.]function`. * By its location in a source file, i.e. `/path/to/file.py:line`. The path can be absolute or relative, and `line` can refer to any line number within the function body. This command can also be used to find a more precise alternative for an existing, imprecise annotation with some `Any` types. The following flags customize various aspects of the `dmypy suggest` command. --json[#](#cmdoption-dmypy-json) Output the signature as JSON, so that [PyAnnotate](https://github.com/dropbox/pyannotate) can read it and add the signature to the source file. Here is what the JSON looks like:

```
[{"func_name": "example.format_id", "line": 1, "path": "/absolute/path/to/example.py", "samples": 0, "signature": {"arg_types": ["int"], "return_type": "str"}}]
```

--no-errors[#](#cmdoption-dmypy-no-errors) Only produce suggestions that cause no errors in the checked code. By default, mypy will try to find the most precise type, even if it causes some type errors. --no-any[#](#cmdoption-dmypy-no-any) Only produce suggestions that don’t contain `Any` types. By default mypy proposes the most precise signature found, even if it contains `Any` types.
--flex-any FRACTION[#](#cmdoption-dmypy-flex-any) Only allow some fraction of types in the suggested signature to be `Any` types. The fraction ranges from `0` (same as `--no-any`) to `1`. --callsites[#](#cmdoption-dmypy-callsites) Only find call sites for a given function instead of suggesting a type. This will produce a list with line numbers and types of actual arguments for each call: `/path/to/file.py:line: (arg_type_1, arg_type_2, ...)`. --use-fixme NAME[#](#cmdoption-dmypy-use-fixme) Use a dummy name instead of plain `Any` for types that cannot be inferred. This may be useful to emphasize to a user that a given type couldn’t be inferred and needs to be entered manually. --max-guesses NUMBER[#](#cmdoption-dmypy-max-guesses) Set the maximum number of types to try for a function (default: `64`). #### Statically inspect expressions[#](#statically-inspect-expressions) The daemon allows you to get the declared or inferred type of an expression (or other information about an expression, such as known attributes or definition location) using the `dmypy inspect LOCATION` command. The location of the expression should be specified in the format `path/to/file.py:line:column[:end_line:end_column]`. Both line and column are 1-based. Both start and end positions are inclusive. These rules match how mypy prints the error location in error messages. If a span is given (i.e. all 4 numbers), then only an exactly matching expression is inspected. If only a position is given (i.e. 2 numbers, line and column), mypy will inspect all *expressions* that include this position, starting from the innermost one. Consider this Python code snippet: ``` def foo(x: int, longer_name: str) -> None: x longer_name ``` Here, to find the type of `x`, one needs to call `dmypy inspect src.py:2:5:2:5` or `dmypy inspect src.py:2:5`. While for `longer_name` one needs to call `dmypy inspect src.py:3:5:3:15` or, for example, `dmypy inspect src.py:3:10`.
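To make the location format concrete, here is a small illustrative parser (a hypothetical helper, not part of dmypy) that splits a `LOCATION` string into a path and its 1-based position numbers:

```python
def parse_location(location: str):
    """Split a dmypy inspect LOCATION of the form
    path/to/file.py:line:column[:end_line:end_column] into its parts.

    Line/column numbers are 1-based and spans are inclusive, matching
    how mypy prints error locations. Illustrative only, not dmypy code.
    """
    parts = location.split(":")
    numbers = []
    # Peel trailing numeric components off the right, so paths that
    # happen to contain ':' are still handled.
    while parts and parts[-1].isdigit() and len(numbers) < 4:
        numbers.append(int(parts.pop()))
    numbers.reverse()
    if len(numbers) not in (2, 4):
        raise ValueError(f"expected 2 or 4 position numbers: {location!r}")
    return ":".join(parts), tuple(numbers)
```

A two-number result corresponds to a position query (innermost-first inspection), while a four-number result corresponds to an exact span query.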
Please note that this command is only valid after the daemon has had a successful type check (without parse errors), so that types are populated, e.g. using `dmypy check`. In cases where multiple expressions match the provided location, their types are returned separated by a newline. Important note: it is recommended to check files with [`--export-types`](#cmdoption-dmypy-export-types) since otherwise most inspections will not work without [`--force-reload`](#cmdoption-dmypy-force-reload). --show INSPECTION[#](#cmdoption-dmypy-show) What kind of inspection to run for expression(s) found. Currently the supported inspections are: * `type` (default): Show the best known type of a given expression. * `attrs`: Show which attributes are valid for an expression (e.g. for auto-completion). Format is `{"Base1": ["name_1", "name_2", ...]; "Base2": ...}`. Names are sorted by method resolution order. If the expression refers to a module, then module attributes will be under a key like `"<full.module.name>"`. * `definition` (experimental): Show the definition location for a name expression or member expression. Format is `path/to/file.py:line:column:Symbol`. If multiple definitions are found (e.g. for a Union attribute), they are separated by a comma. --verbose[#](#cmdoption-dmypy-verbose) Increase verbosity of the type string representation (can be repeated). For example, this will print fully qualified names of instance types (like `"builtins.str"`), instead of just a short name (like `"str"`). --limit NUM[#](#cmdoption-dmypy-limit) If the location is given as `line:column`, this will cause the daemon to return at most `NUM` inspections of innermost expressions. A value of 0 means no limit (this is the default). For example, if one calls `dmypy inspect src.py:4:10 --limit=1` with this code ``` def foo(x: int) -> str: ... def bar(x: str) -> None: ... baz: int bar(foo(baz)) ``` This will output just one type `"int"` (for the `baz` name expression).
While without the limit option, it would output all three types: `"int"`, `"str"`, and `"None"`. --include-span[#](#cmdoption-dmypy-include-span) With this option on, the daemon will prepend each inspection result with the full span of the corresponding expression, formatted as `1:2:1:4 -> "int"`. This may be useful in case multiple expressions match a location. --include-kind[#](#cmdoption-dmypy-include-kind) With this option on, the daemon will prepend each inspection result with the kind of the corresponding expression, formatted as `NameExpr -> "int"`. If both this option and [`--include-span`](#cmdoption-dmypy-include-span) are on, the kind will appear first, for example `NameExpr:1:2:1:4 -> "int"`. --include-object-attrs[#](#cmdoption-dmypy-include-object-attrs) This will make the daemon include attributes of `object` (excluded by default) in the case of an `attrs` inspection. --union-attrs[#](#cmdoption-dmypy-union-attrs) Include attributes valid for some of the possible expression types (by default an intersection is returned). This is useful for union types of type variables with values. For example, with this code: ``` from typing import Union class A: x: int z: int class B: y: int z: int var: Union[A, B] var ``` The command `dmypy inspect --show attrs src.py:10:1` will return `{"A": ["z"], "B": ["z"]}`, while with `--union-attrs` it will return `{"A": ["x", "z"], "B": ["y", "z"]}`. --force-reload[#](#cmdoption-dmypy-force-reload) Force re-parsing and re-type-checking of the file before inspection. By default this is done only when needed (for example, the file was not loaded from cache or the daemon was initially run without the `--export-types` mypy option), since reloading may be slow (up to a few seconds for very large files). ### Using installed packages[#](#using-installed-packages) Packages installed with pip can declare that they support type checking. For example, the [aiohttp](https://docs.aiohttp.org/en/stable/) package has built-in support for type checking.
Packages can also provide stubs for a library. For example, `types-requests` is a stub-only package that provides stubs for the [requests](https://requests.readthedocs.io/en/master/) package. Stub packages are usually published from [typeshed](https://github.com/python/typeshed), a shared repository for Python library stubs, and have a name of the form `types-<library>`. Note that many stub packages are not maintained by the original maintainers of the package. The sections below explain how mypy can use these packages, and how you can create such packages. Note [**PEP 561**](https://peps.python.org/pep-0561/) specifies how a package can declare that it supports type checking. Note New versions of stub packages often use type system features not supported by older, and even fairly recent, mypy versions. If you pin to an older version of mypy (using `requirements.txt`, for example), it is recommended that you also pin the versions of all your stub package dependencies. Note Starting in mypy 0.900, most third-party package stubs must be installed explicitly. This decouples mypy and stub versioning, allowing stubs to be updated without updating mypy. This also allows stubs not originally included with mypy to be installed. Earlier mypy versions included a fixed set of stubs for third-party packages. #### Using installed packages with mypy (PEP 561)[#](#using-installed-packages-with-mypy-pep-561) Typically mypy will automatically find and use installed packages that support type checking or provide stubs. This requires that you install the packages in the Python environment that you use to run mypy. As many packages don’t support type checking yet, you may also have to install a separate stub package, usually named `types-<library>`. (See [Missing imports](index.html#fix-missing-imports) for how to deal with libraries that don’t support type checking and are also missing stubs.)
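A package advertises inline type support by shipping a `py.typed` marker file next to its code. As an illustration of that mechanism, here is a small heuristic check (the helper name is hypothetical; this is not a mypy API, and it only covers the inline-annotations case, not stub-only packages):

```python
import os
from importlib import util


def ships_type_info(package: str) -> bool:
    """Heuristic, illustrative check: does an installed package advertise
    inline type information via a py.typed marker file?

    Returns False for packages that cannot be found, and does not detect
    stub-only packages (``<package>-stubs``).
    """
    spec = util.find_spec(package)
    if spec is None or spec.origin is None:
        return False
    package_dir = os.path.dirname(spec.origin)
    return os.path.exists(os.path.join(package_dir, "py.typed"))
```

For example, `ships_type_info("json")` is `False` because the standard library ships no `py.typed` markers, while a PEP 561 compliant package such as a recent `requests` release would return `True` when installed.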
If you have installed typed packages in another Python installation or environment, mypy won’t automatically find them. One option is to install another copy of those packages in the environment in which you installed mypy. Alternatively, you can use the [`--python-executable`](index.html#cmdoption-mypy-python-executable) flag to point to the Python executable for another environment, and mypy will find packages installed for that Python executable. Note that mypy does not support some more advanced import features, such as zip imports and custom import hooks. If you don’t want to use installed packages that provide type information at all, use the [`--no-site-packages`](index.html#cmdoption-mypy-no-site-packages) flag to disable searching for installed packages. Note that stub-only packages cannot be used with `MYPYPATH`. If you want mypy to find the package, it must be installed. For a package `foo`, the name of the stub-only package (`foo-stubs`) is not a legal package name, so mypy will not find it, unless it is installed (see [**PEP 561: Stub-only Packages**](https://peps.python.org/pep-0561/#stub-only-packages) for more information). #### Creating PEP 561 compatible packages[#](#creating-pep-561-compatible-packages) Note You can generally ignore this section unless you maintain a package on PyPI, or want to publish type information for an existing PyPI package. [**PEP 561**](https://peps.python.org/pep-0561/) describes three main ways to distribute type information: 1. A package has inline type annotations in the Python implementation. 2. A package ships [stub files](index.html#stub-files) with type information alongside the Python implementation. 3. A package ships type information for another package separately as stub files (also known as a “stub-only package”). 
If you want to create a stub-only package for an existing library, the simplest way is to contribute stubs to the [typeshed](https://github.com/python/typeshed) repository, and a stub package will automatically be uploaded to PyPI. If you would like to publish a library package to a package repository yourself (e.g. on PyPI) for either internal or external use in type checking, packages that supply type information via type comments or annotations in the code should put a `py.typed` file in their package directory. For example, here is a typical directory structure: ``` setup.py package_a/ __init__.py lib.py py.typed ``` The `setup.py` file could look like this: ``` from setuptools import setup setup( name="SuperPackageA", author="Me", version="0.1", package_data={"package_a": ["py.typed"]}, packages=["package_a"] ) ``` Some packages have a mix of stub files and runtime files. These packages also require a `py.typed` file. An example can be seen below: ``` setup.py package_b/ __init__.py lib.py lib.pyi py.typed ``` The `setup.py` file might look like this: ``` from setuptools import setup setup( name="SuperPackageB", author="Me", version="0.1", package_data={"package_b": ["py.typed", "lib.pyi"]}, packages=["package_b"] ) ``` In this example, both `lib.py` and the `lib.pyi` stub file exist. At runtime, the Python interpreter will use `lib.py`, but mypy will use `lib.pyi` instead. If the package is stub-only (not imported at runtime), the package should have a prefix of the runtime package name and a suffix of `-stubs`. A `py.typed` file is not needed for stub-only packages. 
For example, if we had stubs for `package_c`, we might do the following: ``` setup.py package_c-stubs/ __init__.pyi lib.pyi ``` The `setup.py` might look like this: ``` from setuptools import setup setup( name="SuperPackageC", author="Me", version="0.1", package_data={"package_c-stubs": ["__init__.pyi", "lib.pyi"]}, packages=["package_c-stubs"] ) ``` The instructions above are enough to ensure that the built wheels contain the appropriate files. However, to ensure inclusion inside the `sdist` (`.tar.gz` archive), you may also need to modify the inclusion rules in your `MANIFEST.in`: ``` global-include *.pyi global-include *.typed ``` ### Extending and integrating mypy[#](#extending-and-integrating-mypy) #### Integrating mypy into another Python application[#](#integrating-mypy-into-another-python-application) It is possible to integrate mypy into another Python 3 application by importing `mypy.api` and calling the `run` function with a parameter of type `list[str]`, containing what normally would have been the command line arguments to mypy. Function `run` returns a `tuple[str, str, int]`, namely `(<normal_report>, <error_report>, <exit_status>)`, in which `<normal_report>` is what mypy normally writes to [`sys.stdout`](https://docs.python.org/3/library/sys.html#sys.stdout), `<error_report>` is what mypy normally writes to [`sys.stderr`](https://docs.python.org/3/library/sys.html#sys.stderr) and `exit_status` is the exit status mypy normally returns to the operating system. A trivial example of using the api is the following ``` import sys from mypy import api result = api.run(sys.argv[1:]) if result[0]: print('\nType checking report:\n') print(result[0]) # stdout if result[1]: print('\nError report:\n') print(result[1]) # stderr print('\nExit status:', result[2]) ``` #### Extending mypy using plugins[#](#extending-mypy-using-plugins) Python is a highly dynamic language and has extensive metaprogramming capabilities. 
Many popular libraries use these to create APIs that may be more flexible and/or natural for humans, but are hard to express using static types. Extending the [**PEP 484**](https://peps.python.org/pep-0484/) type system to accommodate all existing dynamic patterns is impractical and often just impossible. Mypy supports a plugin system that lets you customize the way mypy type checks code. This can be useful if you want to extend mypy so it can type check code that uses a library that is difficult to express using just [**PEP 484**](https://peps.python.org/pep-0484/) types. The plugin system is focused on improving mypy’s understanding of *semantics* of third party frameworks. There is currently no way to define new first class kinds of types. Note The plugin system is experimental and prone to change. If you want to write a mypy plugin, we recommend you start by contacting the mypy core developers on [gitter](https://gitter.im/python/typing). In particular, there are no guarantees about backwards compatibility. Backwards incompatible changes may be made without a deprecation period, but we will announce them in [the plugin API changes announcement issue](https://github.com/python/mypy/issues/6617). #### Configuring mypy to use plugins[#](#configuring-mypy-to-use-plugins) Plugins are Python files that can be specified in a mypy [config file](index.html#config-file) using the [`plugins`](index.html#confval-plugins) option and one of the two formats: relative or absolute path to the plugin file, or a module name (if the plugin is installed using `pip install` in the same virtual environment where mypy is running). The two formats can be mixed, for example: ``` [mypy] plugins = /one/plugin.py, other.plugin ``` Mypy will try to import the plugins and will look for an entry point function named `plugin`. 
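For illustration, the comma-separated `plugins` value shown above can be split and classified using the same distinction the text describes (a sketch with a hypothetical helper name, not mypy's actual config parsing; the heuristic assumed here is that file paths end in `.py`):

```python
def parse_plugins(value: str):
    """Split a comma-separated plugins setting and classify each entry
    as a plugin file path or an installed module name (illustrative)."""
    entries = [entry.strip() for entry in value.split(",") if entry.strip()]
    return [
        (entry, "path" if entry.endswith(".py") else "module")
        for entry in entries
    ]
```

Applied to the example above, `parse_plugins("/one/plugin.py, other.plugin")` classifies the first entry as a path and the second as a module name.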
If the plugin entry point function has a different name, it can be specified after a colon: ``` [mypy] plugins = custom_plugin:custom_entry_point ``` In the following sections we describe the basics of the plugin system with some examples. For more technical details, please read the docstrings in [mypy/plugin.py](https://github.com/python/mypy/blob/master/mypy/plugin.py) in the mypy source code. You can also find good examples in the bundled plugins located in [mypy/plugins](https://github.com/python/mypy/tree/master/mypy/plugins). #### High-level overview[#](#high-level-overview) Every entry point function should accept a single string argument that is a full mypy version and return a subclass of `mypy.plugin.Plugin`: ``` from mypy.plugin import Plugin class CustomPlugin(Plugin): def get_type_analyze_hook(self, fullname: str): # see explanation below ... def plugin(version: str): # ignore version argument if the plugin works with all mypy versions. return CustomPlugin ``` During different phases of analyzing the code (first in semantic analysis, and then in type checking) mypy calls plugin methods such as `get_type_analyze_hook()` on user plugins. This particular method, for example, can return a callback that mypy will use to analyze unbound types with the given full name. See the full plugin hook method list [below](#plugin-hooks). Mypy maintains a list of plugins it gets from the config file plus the default (built-in) plugin that is always enabled. Mypy calls a method once for each plugin in the list until one of the methods returns a non-`None` value. This callback will then be used to customize the corresponding aspect of analyzing/checking the current abstract syntax tree node. The callback returned by the `get_xxx` method will be given a detailed current context and an API to create new nodes, new types, emit error messages, etc., and the result will be used for further processing.
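The first-non-`None` dispatch over the plugin list can be sketched in plain Python. The classes and hook results below are illustrative stand-ins, not mypy internals; real hooks receive rich context objects rather than strings:

```python
class NoOpPlugin:
    """A plugin that declines every hook request by returning None."""

    def get_type_analyze_hook(self, fullname: str):
        return None


class VectorPlugin:
    """A plugin that only provides a callback for one full name."""

    def get_type_analyze_hook(self, fullname: str):
        if fullname == "lib.Vector":
            return lambda ctx: f"analyzed {fullname} with {ctx}"
        return None


def first_hook(plugins, fullname):
    """Ask each plugin in order; the first non-None callback wins."""
    for plugin in plugins:
        hook = plugin.get_type_analyze_hook(fullname)
        if hook is not None:
            return hook
    return None
```

If no plugin (including the built-in one) returns a callback, mypy simply falls back to its default handling of the node.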
Plugin developers should ensure that their plugins work well in incremental and daemon modes. In particular, plugins should not hold global state due to caching of plugin hook results. #### Current list of plugin hooks[#](#current-list-of-plugin-hooks) **get_type_analyze_hook()** customizes behaviour of the type analyzer. For example, [**PEP 484**](https://peps.python.org/pep-0484/) doesn’t support defining variadic generic types: ``` from lib import Vector a: Vector[int, int] b: Vector[int, int, int] ``` When analyzing this code, mypy will call `get_type_analyze_hook("lib.Vector")`, so the plugin can return some valid type for each variable. **get_function_hook()** is used to adjust the return type of a function call. This hook will also be called for instantiation of classes. This is a good choice if the return type is too complex to be expressed by regular Python typing. **get_function_signature_hook()** is used to adjust the signature of a function. **get_method_hook()** is the same as `get_function_hook()` but for methods instead of module level functions. **get_method_signature_hook()** is used to adjust the signature of a method. This includes special Python methods except [`__init__()`](https://docs.python.org/3/reference/datamodel.html#object.__init__) and [`__new__()`](https://docs.python.org/3/reference/datamodel.html#object.__new__). For example in this code: ``` from ctypes import Array, c_int x: Array[c_int] x[0] = 42 ``` mypy will call `get_method_signature_hook("ctypes.Array.__setitem__")` so that the plugin can mimic the [`ctypes`](https://docs.python.org/3/library/ctypes.html#module-ctypes) auto-convert behavior. **get_attribute_hook()** overrides instance member field lookups and property access (not assignments, and not method calls). This hook is only called for fields which already exist on the class.
*Exception:* if [`__getattr__`](https://docs.python.org/3/reference/datamodel.html#object.__getattr__) or [`__getattribute__`](https://docs.python.org/3/reference/datamodel.html#object.__getattribute__) is a method on the class, the hook is called for all fields which do not refer to methods. **get_class_attribute_hook()** is similar to above, but for attributes on classes rather than instances. Unlike above, this does not have special casing for [`__getattr__`](https://docs.python.org/3/reference/datamodel.html#object.__getattr__) or [`__getattribute__`](https://docs.python.org/3/reference/datamodel.html#object.__getattribute__). **get_class_decorator_hook()** can be used to update the class definition for the given class decorators. For example, you can add some attributes to the class to match runtime behaviour: ``` from dataclasses import dataclass @dataclass # built-in plugin adds `__init__` method here class User: name: str user = User(name='example') # mypy can understand this using a plugin ``` **get_metaclass_hook()** is similar to above, but for metaclasses. **get_base_class_hook()** is similar to above, but for base classes. **get_dynamic_class_hook()** can be used to allow dynamic class definitions in mypy. This plugin hook is called for every assignment to a simple name where the right-hand side is a function call: ``` from lib import dynamic_class X = dynamic_class('X', []) ``` For such a definition, mypy will call `get_dynamic_class_hook("lib.dynamic_class")`. The plugin should create the corresponding `mypy.nodes.TypeInfo` object, and place it into a relevant symbol table. (Instances of this class represent classes in mypy and hold essential information such as qualified name, method resolution order, etc.) **get_customize_class_mro_hook()** can be used to modify the class MRO (for example insert some entries there) before the class body is analyzed. **get_additional_deps()** can be used to add new dependencies for a module. It is called before semantic analysis.
For example, this can be used if a library has dependencies that are dynamically loaded based on configuration information. **report_config_data()** can be used if the plugin has some sort of per-module configuration that can affect typechecking. In that case, when the configuration for a module changes, we want to invalidate mypy’s cache for that module so that it can be rechecked. This hook should be used to report to mypy any relevant configuration data, so that mypy knows to recheck the module if the configuration changes. The hooks should return data encodable as JSON. ### Automatic stub generation (stubgen)[#](#automatic-stub-generation-stubgen) A stub file (see [**PEP 484**](https://peps.python.org/pep-0484/)) contains only type hints for the public interface of a module, with empty function bodies. Mypy can use a stub file instead of the real implementation to provide type information for the module. Stub files are useful for third-party modules whose authors have not yet added type hints (and when no stubs are available in typeshed) and for C extension modules (which mypy can’t directly process). Mypy includes the `stubgen` tool that can automatically generate stub files (`.pyi` files) for Python modules and C extension modules. For example, consider this source file: ``` from other_module import dynamic BORDER_WIDTH = 15 class Window: parent = dynamic() def __init__(self, width, height): self.width = width self.height = height def create_empty() -> Window: return Window(0, 0) ``` Stubgen can generate this stub file based on the above file: ``` from typing import Any BORDER_WIDTH: int = ... class Window: parent: Any = ... width: Any = ... height: Any = ... def __init__(self, width, height) -> None: ... def create_empty() -> Window: ... ``` Stubgen generates *draft* stubs. The auto-generated stub files often require some manual updates, and most types will default to `Any`.
The stubs will be much more useful if you add more precise type annotations, at least for the most commonly used functionality. The rest of this section documents the command line interface of stubgen. Run [`stubgen --help`](#cmdoption-stubgen-h) for a quick summary of options. Note The command-line flags may change between releases. #### Specifying what to stub[#](#specifying-what-to-stub) You can give stubgen paths of the source files for which you want to generate stubs: ``` $ stubgen foo.py bar.py ``` This generates stubs `out/foo.pyi` and `out/bar.pyi`. The default output directory `out` can be overridden with [`-o DIR`](#cmdoption-stubgen-o). You can also pass directories, and stubgen will recursively search them for any `.py` files and generate stubs for all of them: ``` $ stubgen my_pkg_dir ``` Alternatively, you can give module or package names using the [`-m`](#cmdoption-stubgen-m) or [`-p`](#cmdoption-stubgen-p) options: ``` $ stubgen -m foo -m bar -p my_pkg_dir ``` Details of the options: -m MODULE, --module MODULE[#](#cmdoption-stubgen-m) Generate a stub file for the given module. This flag may be repeated multiple times. Stubgen *will not* recursively generate stubs for any submodules of the provided module. -p PACKAGE, --package PACKAGE[#](#cmdoption-stubgen-p) Generate stubs for the given package. This flag may be repeated multiple times. Stubgen *will* recursively generate stubs for all submodules of the provided package. This flag is identical to [`--module`](#cmdoption-stubgen-m) apart from this behavior. Note You can’t mix paths and [`-m`](#cmdoption-stubgen-m)/[`-p`](#cmdoption-stubgen-p) options in the same stubgen invocation. Stubgen applies heuristics to avoid generating stubs for submodules that include tests or vendored third-party packages. #### Specifying how to generate stubs[#](#specifying-how-to-generate-stubs) By default stubgen will try to import the target modules and packages.
This allows stubgen to use runtime introspection to generate stubs for C extension modules and to improve the quality of the generated stubs. By default, stubgen will also use mypy to perform light-weight semantic analysis of any Python modules. Use the following flags to alter the default behavior: --no-import[#](#cmdoption-stubgen-no-import) Don’t try to import modules. Instead only use mypy’s normal search mechanism to find sources. This does not support C extension modules. This flag also disables runtime introspection functionality, which mypy uses to find the value of `__all__`. As a result, the set of exported imported names in stubs may be incomplete. This flag is generally only useful when importing a module causes unwanted side effects, such as the running of tests. Stubgen tries to skip test modules even without this option, but this does not always work. --parse-only[#](#cmdoption-stubgen-parse-only) Don’t perform semantic analysis of source files. This may generate worse stubs – in particular, some module, class, and function aliases may be represented as variables with the `Any` type. This is generally only useful if semantic analysis causes a critical mypy error. --doc-dir PATH[#](#cmdoption-stubgen-doc-dir) Try to infer better signatures by parsing .rst documentation in `PATH`. This may result in better stubs, but currently it only works for C extension modules. #### Additional flags[#](#additional-flags) -h, --help[#](#cmdoption-stubgen-h) Show help message and exit. --ignore-errors[#](#cmdoption-stubgen-ignore-errors) If an exception was raised during stub generation, continue to process any remaining modules instead of immediately failing with an error. --include-private[#](#cmdoption-stubgen-include-private) Include definitions that are considered private in stubs (with names such as `_foo` with a single leading underscore and no trailing underscores).
--export-less[#](#cmdoption-stubgen-export-less) Don’t export all names imported from other modules within the same package. Instead, only export imported names that are not referenced in the module that contains the import. --include-docstrings[#](#cmdoption-stubgen-include-docstrings) Include docstrings in stubs. This will add docstrings to Python function and class stubs and to C extension function stubs. --search-path PATH[#](#cmdoption-stubgen-search-path) Specify module search directories, separated by colons (only used if [`--no-import`](#cmdoption-stubgen-no-import) is given). -o PATH, --output PATH[#](#cmdoption-stubgen-o) Change the output directory. By default the stubs are written in the `./out` directory. The output directory will be created if it doesn’t exist. Existing stubs in the output directory will be overwritten without warning. -v, --verbose[#](#cmdoption-stubgen-v) Produce more verbose output. -q, --quiet[#](#cmdoption-stubgen-q) Produce less verbose output. ### Automatic stub testing (stubtest)[#](#automatic-stub-testing-stubtest) Stub files are files containing type annotations. See [PEP 484](https://www.python.org/dev/peps/pep-0484/#stub-files) for more motivation and details. A common problem with stub files is that they tend to diverge from the actual implementation. Mypy includes the `stubtest` tool that can automatically check for discrepancies between the stubs and the implementation at runtime. #### What stubtest does and does not do[#](#what-stubtest-does-and-does-not-do) Stubtest will import your code and introspect your code objects at runtime, for example, by using the capabilities of the [`inspect`](https://docs.python.org/3/library/inspect.html#module-inspect) module. Stubtest will then analyse the stub files, and compare the two, pointing out things that differ between stubs and the implementation at runtime. It’s important to be aware of the limitations of this comparison.
Stubtest will not make any attempt to statically analyse your actual code and relies only on dynamic runtime introspection (in particular, this approach means stubtest works well with extension modules). However, this means that stubtest has limited visibility; for instance, it cannot tell if a return type of a function is accurately typed in the stubs. For clarity, here are some additional things stubtest can’t do: * Type check your code – use `mypy` instead * Generate stubs – use `stubgen` or `pyright --createstub` instead * Generate stubs based on running your application or test suite – use `monkeytype` instead * Apply stubs to code to produce inline types – use `retype` or `libcst` instead In summary, stubtest works very well for ensuring basic consistency between stubs and implementation or to check for stub completeness. It’s used to test Python’s official collection of library stubs, [typeshed](https://github.com/python/typeshed). Warning stubtest will import and execute Python code from the packages it checks. #### Example[#](#example) Here’s a quick example of what stubtest can do: ``` $ python3 -m pip install mypy $ cat library.py x = "hello, stubtest" def foo(x=None): print(x) $ cat library.pyi x: int def foo(x: int) -> None: ... $ python3 -m mypy.stubtest library error: library.foo is inconsistent, runtime argument "x" has a default value but stub argument does not Stub: at line 3 def (x: builtins.int) Runtime: in file ~/library.py:3 def (x=None) error: library.x variable differs from runtime type Literal['hello, stubtest'] Stub: at line 1 builtins.int Runtime: 'hello, stubtest' ``` #### Usage[#](#usage) Running stubtest can be as simple as `stubtest module_to_check`. Run [`stubtest --help`](#cmdoption-stubtest-help) for a quick summary of options. Stubtest must be able to import the code to be checked, so make sure that mypy is installed in the same environment as the library to be tested. 
In some cases, setting `PYTHONPATH` can help stubtest find the code to import. Similarly, stubtest must be able to find the stubs to be checked. Stubtest respects the `MYPYPATH` environment variable – consider using this if you receive a complaint along the lines of “failed to find stubs”. Note that stubtest requires mypy to be able to analyse stubs. If mypy is unable to analyse stubs, you may get an error along the lines of “not checking stubs due to mypy build errors”. In this case, you will need to mitigate those errors before stubtest will run. Despite potential overlap in errors here, stubtest is not intended as a substitute for running mypy directly. If you wish to ignore some of stubtest’s complaints, stubtest supports a pretty handy allowlist system. The rest of this section documents the command line interface of stubtest. --concise[#](#cmdoption-stubtest-concise) Makes stubtest’s output more concise, one line per error --ignore-missing-stub[#](#cmdoption-stubtest-ignore-missing-stub) Ignore errors for stub missing things that are present at runtime --ignore-positional-only[#](#cmdoption-stubtest-ignore-positional-only) Ignore errors for whether an argument should or shouldn’t be positional-only --allowlist FILE[#](#cmdoption-stubtest-allowlist) Use file as an allowlist. Can be passed multiple times to combine multiple allowlists. Allowlists can be created with `--generate-allowlist`. Allowlists support regular expressions. The presence of an entry in the allowlist means stubtest will not generate any errors for the corresponding definition. --generate-allowlist[#](#cmdoption-stubtest-generate-allowlist) Print an allowlist (to stdout) to be used with `--allowlist`. When introducing stubtest to an existing project, this is an easy way to silence all existing errors.
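Since allowlist entries are regular expressions, whether an entry suppresses an error comes down to a full regex match against the definition's name. A minimal sketch of that matching rule (assuming full-match semantics; this is illustrative, not stubtest's exact code):

```python
import re


def is_allowlisted(entry: str, definition: str) -> bool:
    """An allowlist entry suppresses errors for a definition when the
    entry, treated as a regex, matches the whole definition name."""
    return re.fullmatch(entry, definition) is not None
```

So a literal-looking entry like `library\.foo` suppresses errors only for `library.foo` itself, while `library\..*` covers everything in the module; an unescaped `library.foo` would also match `libraryXfoo`, which is why dots are typically escaped.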
--ignore-unused-allowlist[#](#cmdoption-stubtest-ignore-unused-allowlist)

Ignore unused allowlist entries. Without this option enabled, the default is for stubtest to complain if an allowlist entry is not necessary for stubtest to pass successfully. Note that if an allowlist entry is a regex that matches the empty string, stubtest will never consider it unused. For example, to get `--ignore-unused-allowlist` behaviour for a single allowlist entry like `foo.bar` you could add an allowlist entry `(foo\.bar)?`. This can be useful when an error only occurs on a specific platform.

--mypy-config-file FILE[#](#cmdoption-stubtest-mypy-config-file)

Use specified mypy config file to determine mypy plugins and mypy path.

--custom-typeshed-dir DIR[#](#cmdoption-stubtest-custom-typeshed-dir)

Use the custom typeshed in DIR.

--check-typeshed[#](#cmdoption-stubtest-check-typeshed)

Check all stdlib modules in typeshed.

--help[#](#cmdoption-stubtest-help)

Show a help message :-)

### Common issues and solutions[#](#common-issues-and-solutions)

This section has examples of cases when you need to update your code to use static typing, and ideas for working around issues if mypy doesn’t work as expected. Statically typed code is often identical to normal Python code (except for type annotations), but sometimes you need to do things slightly differently.

#### No errors reported for obviously wrong code[#](#no-errors-reported-for-obviously-wrong-code)

There are several common reasons why obviously wrong code is not flagged as an error.

**The function containing the error is not annotated.** Functions that do not have any annotations (neither for any argument nor for the return type) are not type-checked, and even the most blatant type errors (e.g. `2 + 'a'`) pass silently. The solution is to add annotations. Where that isn’t possible, functions without annotations can be checked using [`--check-untyped-defs`](index.html#cmdoption-mypy-check-untyped-defs).
Example:

```
def foo(a):
    return '(' + a.split() + ')'  # No error!
```

This gives no error even though `a.split()` is “obviously” a list (the author probably meant `a.strip()`). The error is reported once you add annotations:

```
def foo(a: str) -> str:
    return '(' + a.split() + ')'
    # error: Unsupported operand types for + ("str" and List[str])
```

If you don’t know what types to add, you can use `Any`, but beware:

**One of the values involved has type ‘Any’.** Extending the above example, if we were to leave out the annotation for `a`, we’d get no error:

```
def foo(a) -> str:
    return '(' + a.split() + ')'  # No error!
```

The reason is that if the type of `a` is unknown, the type of `a.split()` is also unknown, so it is inferred as having type `Any`, and it is no error to add a string to an `Any`. If you’re having trouble debugging such situations, [reveal_type()](#reveal-type) might come in handy.

Note that sometimes library stubs with imprecise type information can be a source of `Any` values.

[`__init__`](https://docs.python.org/3/reference/datamodel.html#object.__init__) **method has no annotated arguments and no return type annotation.** This is basically a combination of the two cases above, in that `__init__` without annotations can cause `Any` types to leak into instance variables:

```
class Bad:
    def __init__(self):
        self.value = "asdf"
        1 + "asdf"  # No error!

bad = Bad()
bad.value + 1  # No error!
reveal_type(bad)  # Revealed type is "__main__.Bad"
reveal_type(bad.value)  # Revealed type is "Any"

class Good:
    def __init__(self) -> None:  # Explicitly return None
        self.value = "asdf"
```

**Some imports may be silently ignored**. A common source of unexpected `Any` values is the [`--ignore-missing-imports`](index.html#cmdoption-mypy-ignore-missing-imports) flag. When you use [`--ignore-missing-imports`](index.html#cmdoption-mypy-ignore-missing-imports), any imported module that cannot be found is silently replaced with `Any`.
To help debug this, simply leave out [`--ignore-missing-imports`](index.html#cmdoption-mypy-ignore-missing-imports). As mentioned in [Missing imports](index.html#fix-missing-imports), setting `ignore_missing_imports=True` on a per-module basis will make bad surprises less likely and is highly encouraged.

Use of the [`--follow-imports=skip`](index.html#cmdoption-mypy-follow-imports) flag can also cause problems. Use of this flag is strongly discouraged and only required in relatively niche situations. See [Following imports](index.html#follow-imports) for more information.

**mypy considers some of your code unreachable**. See [Unreachable code](#unreachable) for more information.

**A function annotated as returning a non-optional type returns ‘None’ and mypy doesn’t complain**.

```
def foo() -> str:
    return None  # No error!
```

You may have disabled strict optional checking (see [Disabling strict optional checking](index.html#no-strict-optional) for more).

#### Spurious errors and locally silencing the checker[#](#spurious-errors-and-locally-silencing-the-checker)

You can use a `# type: ignore` comment to silence the type checker on a particular line. For example, let’s say our code is using the C extension module `frobnicate`, and there’s no stub available. Mypy will complain about this, as it has no information about the module:

```
import frobnicate  # Error: No module "frobnicate"
frobnicate.start()
```

You can add a `# type: ignore` comment to tell mypy to ignore this error:

```
import frobnicate  # type: ignore
frobnicate.start()  # Okay!
```

The second line is now fine, since the ignore comment causes the name `frobnicate` to get an implicit `Any` type.

Note

You can use the form `# type: ignore[<code>]` to only ignore specific errors on the line. This way you are less likely to silence unexpected errors that are not safe to ignore, and this will also document what the purpose of the comment is. See [Error codes](index.html#error-codes) for more information.
Note

The `# type: ignore` comment will only assign the implicit `Any` type if mypy cannot find information about that particular module. So, if we did have a stub available for `frobnicate` then mypy would ignore the `# type: ignore` comment and typecheck the stub as usual.

Another option is to explicitly annotate values with type `Any` – mypy will let you perform arbitrary operations on `Any` values. Sometimes there is no more precise type you can use for a particular value, especially if you use dynamic Python features such as [`__getattr__`](https://docs.python.org/3/reference/datamodel.html#object.__getattr__):

```
from typing import Any

class Wrapper:
    ...
    def __getattr__(self, a: str) -> Any:
        return getattr(self._wrapped, a)
```

Finally, you can create a stub file (`.pyi`) for a file that generates spurious errors. Mypy will only look at the stub file and ignore the implementation, since stub files take precedence over `.py` files.

#### Ignoring a whole file[#](#ignoring-a-whole-file)

* To only ignore errors, use a top-level `# mypy: ignore-errors` comment instead.
* To only ignore errors with a specific error code, use a top-level `# mypy: disable-error-code="..."` comment. Example: `# mypy: disable-error-code="truthy-bool, ignore-without-code"`
* To replace the contents of a module with `Any`, use a per-module `follow_imports = skip`. See [Following imports](index.html#follow-imports) for details.

Note that a `# type: ignore` comment at the top of a module (before any statements, including imports or docstrings) has the effect of ignoring the entire contents of the module. This behaviour can be surprising and result in “Module … has no attribute … [attr-defined]” errors.

#### Issues with code at runtime[#](#issues-with-code-at-runtime)

Idiomatic use of type annotations can sometimes run up against what a given version of Python considers legal code.
These can result in some of the following errors when trying to run your code:

* `ImportError` from circular imports
* `NameError: name "X" is not defined` from forward references
* `TypeError: 'type' object is not subscriptable` from types that are not generic at runtime
* `ImportError` or `ModuleNotFoundError` from use of stub definitions not available at runtime
* `TypeError: unsupported operand type(s) for |: 'type' and 'type'` from use of new syntax

For dealing with these, see [Annotation issues at runtime](index.html#runtime-troubles).

#### Mypy runs are slow[#](#mypy-runs-are-slow)

If your mypy runs feel slow, you should probably use the [mypy daemon](index.html#mypy-daemon), which can speed up incremental mypy runtimes by a factor of 10 or more. [Remote caching](index.html#remote-cache) can make cold mypy runs several times faster.

#### Types of empty collections[#](#types-of-empty-collections)

You often need to specify the type when you assign an empty list or dict to a new variable, as mentioned earlier:

```
a: List[int] = []
```

Without the annotation mypy can’t always figure out the precise type of `a`. You can use a simple empty list literal in a dynamically typed function (as the type of `a` would be implicitly `Any` and need not be inferred), if the type of the variable has been declared or inferred before, or if you perform a simple modification operation in the same scope (such as `append` for a list):

```
a = []  # Okay because followed by append, inferred type List[int]
for i in range(n):
    a.append(i * i)
```

However, in more complex cases an explicit type annotation can be required (mypy will tell you this). Often the annotation can make your code easier to understand, so it doesn’t only help mypy but everybody who is reading the code!

#### Redefinitions with incompatible types[#](#redefinitions-with-incompatible-types)

Each name within a function only has a single ‘declared’ type.
You can reuse for loop indices etc., but if you want to use a variable with multiple types within a single function, you may need to instead use multiple variables (or maybe declare the variable with an `Any` type).

```
def f() -> None:
    n = 1
    ...
    n = 'x'  # error: Incompatible types in assignment (expression has type "str", variable has type "int")
```

Note

Using the [`--allow-redefinition`](index.html#cmdoption-mypy-allow-redefinition) flag can suppress this error in several cases.

Note that you can redefine a variable with a more *precise* or a more concrete type. For example, you can redefine a sequence (which does not support `sort()`) as a list and sort it in-place:

```
from typing import Sequence

def f(x: Sequence[int]) -> None:
    # Type of x is Sequence[int] here; we don't know the concrete type.
    x = list(x)
    # Type of x is List[int] here.
    x.sort()  # Okay!
```

See [Type narrowing](index.html#type-narrowing) for more information.

#### Invariance vs covariance[#](#invariance-vs-covariance)

Most mutable generic collections are invariant, and mypy considers all user-defined generic classes invariant by default (see [Variance of generic types](index.html#variance-of-generics) for motivation). This could lead to some unexpected errors when combined with type inference. For example:

```
class A: ...
class B(A): ...
lst = [A(), A()]  # Inferred type is List[A]
new_lst = [B(), B()]  # inferred type is List[B]
lst = new_lst  # mypy will complain about this, because List is invariant
```

Possible strategies in such situations are:

* Use an explicit type annotation:

```
new_lst: List[A] = [B(), B()]
lst = new_lst  # OK
```

* Make a copy of the right hand side:

```
lst = list(new_lst)  # Also OK
```

* Use immutable collections as annotations whenever possible:

```
def f_bad(x: List[A]) -> A:
    return x[0]
f_bad(new_lst)  # Fails

def f_good(x: Sequence[A]) -> A:
    return x[0]
f_good(new_lst)  # OK
```

#### Declaring a supertype as variable type[#](#declaring-a-supertype-as-variable-type)

Sometimes the inferred type is a subtype (subclass) of the desired type. The type inference uses the first assignment to infer the type of a name:

```
class Shape: ...
class Circle(Shape): ...
class Triangle(Shape): ...

shape = Circle()  # mypy infers the type of shape to be Circle
shape = Triangle()  # error: Incompatible types in assignment (expression has type "Triangle", variable has type "Circle")
```

You can just give an explicit type for the variable in cases such as the above example:

```
shape: Shape = Circle()  # The variable shape can be any Shape, not just Circle
shape = Triangle()  # OK
```

#### Complex type tests[#](#complex-type-tests)

Mypy can usually infer the types correctly when using [`isinstance`](https://docs.python.org/3/library/functions.html#isinstance), [`issubclass`](https://docs.python.org/3/library/functions.html#issubclass), or `type(obj) is some_class` type tests, and even [user-defined type guards](index.html#type-guards), but for other kinds of checks you may need to add an explicit type cast:

```
from typing import Sequence, cast

def find_first_str(a: Sequence[object]) -> str:
    index = next((i for i, s in enumerate(a) if isinstance(s, str)), -1)
    if index < 0:
        raise ValueError('No str found')
    found = a[index]  # Has type "object", despite the fact that we know it is "str"
    return cast(str,
found)  # We need an explicit cast to make mypy happy
```

Alternatively, you can use an `assert` statement together with some of the supported type inference techniques:

```
def find_first_str(a: Sequence[object]) -> str:
    index = next((i for i, s in enumerate(a) if isinstance(s, str)), -1)
    if index < 0:
        raise ValueError('No str found')
    found = a[index]  # Has type "object", despite the fact that we know it is "str"
    assert isinstance(found, str)  # Now, "found" will be narrowed to "str"
    return found  # No need for the explicit "cast()" anymore
```

Note

Note that the [`object`](https://docs.python.org/3/library/functions.html#object) type used in the above example is similar to `Object` in Java: it only supports operations defined for *all* objects, such as equality and [`isinstance()`](https://docs.python.org/3/library/functions.html#isinstance). The type `Any`, in contrast, supports all operations, even if they may fail at runtime. The cast above would have been unnecessary if the type of `found` was `Any`.

Note

You can read more about type narrowing techniques [here](index.html#type-narrowing).

Type inference in Mypy is designed to work well in common cases, to be predictable and to let the type checker give useful error messages. More powerful type inference strategies often have complex and difficult-to-predict failure modes and could result in very confusing error messages. The tradeoff is that you as a programmer sometimes have to give the type checker a little help.

#### Python version and system platform checks[#](#python-version-and-system-platform-checks)

Mypy supports the ability to perform Python version checks and platform checks (e.g. Windows vs Posix), ignoring code paths that won’t be run on the targeted Python version or platform. This allows you to more effectively typecheck code that supports multiple versions of Python or multiple operating systems.
More specifically, mypy will understand the use of [`sys.version_info`](https://docs.python.org/3/library/sys.html#sys.version_info) and [`sys.platform`](https://docs.python.org/3/library/sys.html#sys.platform) checks within `if/elif/else` statements. For example:

```
import sys

# Distinguishing between different versions of Python:
if sys.version_info >= (3, 8):
    # Python 3.8+ specific definitions and imports
else:
    # Other definitions and imports

# Distinguishing between different operating systems:
if sys.platform.startswith("linux"):
    # Linux-specific code
elif sys.platform == "darwin":
    # Mac-specific code
elif sys.platform == "win32":
    # Windows-specific code
else:
    # Other systems
```

As a special case, you can also use one of these checks in a top-level (unindented) `assert`; this makes mypy skip the rest of the file. Example:

```
import sys

assert sys.platform != 'win32'

# The rest of this file doesn't apply to Windows.
```

Some other expressions exhibit similar behavior; in particular, [`TYPE_CHECKING`](https://docs.python.org/3/library/typing.html#typing.TYPE_CHECKING), variables named `MYPY`, and any variable whose name is passed to [`--always-true`](index.html#cmdoption-mypy-always-true) or [`--always-false`](index.html#cmdoption-mypy-always-false). (However, `True` and `False` are not treated specially!)

Note

Mypy currently does not support more complex checks, and does not assign any special meaning when assigning a [`sys.version_info`](https://docs.python.org/3/library/sys.html#sys.version_info) or [`sys.platform`](https://docs.python.org/3/library/sys.html#sys.platform) check to a variable. This may change in future versions of mypy.

By default, mypy will use your current version of Python and your current operating system as default values for [`sys.version_info`](https://docs.python.org/3/library/sys.html#sys.version_info) and [`sys.platform`](https://docs.python.org/3/library/sys.html#sys.platform).
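As a runnable sketch of the pattern above (the helper name `null_device` is ours, purely illustrative), the same platform check both tells mypy which branch to analyse and selects the implementation at runtime:

```python
import sys

# mypy analyses only the branch matching the targeted platform
# (--platform); at runtime the identical check picks the implementation.
if sys.platform == "win32":
    def null_device() -> str:
        return "NUL"        # Windows null device
else:
    def null_device() -> str:
        return "/dev/null"  # POSIX null device

print(null_device())
```

Running mypy with `--platform win32` would check only the first branch, while a normal run on Linux or macOS checks the second.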
To target a different Python version, use the [`--python-version X.Y`](index.html#cmdoption-mypy-python-version) flag. For example, to verify your code typechecks as if it were run using Python 3.8, pass in [`--python-version 3.8`](index.html#cmdoption-mypy-python-version) from the command line. Note that you do not need to have Python 3.8 installed to perform this check.

To target a different operating system, use the [`--platform PLATFORM`](index.html#cmdoption-mypy-platform) flag. For example, to verify your code typechecks as if it were run on Windows, pass in [`--platform win32`](index.html#cmdoption-mypy-platform). See the documentation for [`sys.platform`](https://docs.python.org/3/library/sys.html#sys.platform) for examples of valid platform parameters.

#### Displaying the type of an expression[#](#displaying-the-type-of-an-expression)

You can use `reveal_type(expr)` to ask mypy to display the inferred static type of an expression. This can be useful when you don’t quite understand how mypy handles a particular piece of code. Example:

```
reveal_type((1, 'hello'))  # Revealed type is "Tuple[builtins.int, builtins.str]"
```

You can also use `reveal_locals()` at any line in a file to see the types of all local variables at once. Example:

```
a = 1
b = 'one'
reveal_locals()
# Revealed local types are:
#     a: builtins.int
#     b: builtins.str
```

Note

`reveal_type` and `reveal_locals` are only understood by mypy and don’t exist in Python. If you try to run your program, you’ll have to remove any `reveal_type` and `reveal_locals` calls before you can run your code. Both are always available and you don’t need to import them.

#### Silencing linters[#](#silencing-linters)

In some cases, linters will complain about unused imports or code.
In these cases, you can silence them with a comment after type comments, or on the same line as the import:

```
# to silence complaints about unused imports
from typing import List  # noqa
a = None  # type: List[int]
```

To silence the linter on the same line as a type comment put the linter comment *after* the type comment:

```
a = some_complex_thing()  # type: ignore  # noqa
```

#### Covariant subtyping of mutable protocol members is rejected[#](#covariant-subtyping-of-mutable-protocol-members-is-rejected)

Mypy rejects this because this is potentially unsafe. Consider this example:

```
from typing_extensions import Protocol

class P(Protocol):
    x: float

def fun(arg: P) -> None:
    arg.x = 3.14

class C:
    x = 42

c = C()
fun(c)  # This is not safe
c.x << 5  # Since this will fail!
```

To work around this problem consider whether “mutating” is actually part of a protocol. If not, then one can use a [`@property`](https://docs.python.org/3/library/functions.html#property) in the protocol definition:

```
from typing_extensions import Protocol

class P(Protocol):
    @property
    def x(self) -> float: pass

def fun(arg: P) -> None: ...

class C:
    x = 42

fun(C())  # OK
```

#### Dealing with conflicting names[#](#dealing-with-conflicting-names)

Suppose you have a class with a method whose name is the same as an imported (or built-in) type, and you want to use the type in another method signature. E.g.:

```
class Message:
    def bytes(self):
        ...
    def register(self, path: bytes):  # error: Invalid type "mod.Message.bytes"
        ...
```

The third line elicits an error because mypy sees the argument type `bytes` as a reference to the method by that name. Other than renaming the method, a workaround is to use an alias:

```
bytes_ = bytes

class Message:
    def bytes(self):
        ...
    def register(self, path: bytes_):
        ...
```

#### Using a development mypy build[#](#using-a-development-mypy-build)

You can install the latest development version of mypy from source.
Clone the [mypy repository on GitHub](https://github.com/python/mypy), and then run `pip install` locally:

```
git clone https://github.com/python/mypy.git
cd mypy
python3 -m pip install --upgrade .
```

To install a development version of mypy that is mypyc-compiled, see the instructions at the [mypyc wheels repo](https://github.com/mypyc/mypy_mypyc-wheels).

#### Variables vs type aliases[#](#variables-vs-type-aliases)

Mypy has both *type aliases* and variables with types like `Type[...]`. These are subtly different, and it’s important to understand how they differ to avoid pitfalls.

1. A variable with type `Type[...]` is defined using an assignment with an explicit type annotation:

```
class A: ...
tp: Type[A] = A
```

2. You can define a type alias using an assignment without an explicit type annotation at the top level of a module:

```
class A: ...
Alias = A
```

You can also use `TypeAlias` ([**PEP 613**](https://peps.python.org/pep-0613/)) to define an *explicit type alias*:

```
from typing import TypeAlias  # "from typing_extensions" in Python 3.9 and earlier

class A: ...
Alias: TypeAlias = A
```

You should always use `TypeAlias` to define a type alias in a class body or inside a function.

The main difference is that the target of an alias is precisely known statically, and this means that they can be used in type annotations and other *type contexts*. Type aliases can’t be defined conditionally (unless using [supported Python version and platform checks](#version-and-platform-checks)):

> ```
> class A: ...
> class B: ...
>
> if random() > 0.5:
>     Alias = A
> else:
>     # error: Cannot assign multiple types to name "Alias" without an
>     # explicit "Type[...]" annotation
>     Alias = B
>
> tp: Type[object]  # "tp" is a variable with a type object value
> if random() > 0.5:
>     tp = A
> else:
>     tp = B  # This is OK
>
> def fun1(x: Alias) -> None: ...  # OK
> def fun2(x: tp) -> None: ...
> # Error: "tp" is not valid as a type
> ```

#### Incompatible overrides[#](#incompatible-overrides)

It’s unsafe to override a method with a more specific argument type, as it violates the [Liskov substitution principle](https://stackoverflow.com/questions/56860/what-is-an-example-of-the-liskov-substitution-principle). For return types, it’s unsafe to override a method with a more general return type.

Other incompatible signature changes in method overrides, such as adding an extra required parameter, or removing an optional parameter, will also generate errors. The signature of a method in a subclass should accept all valid calls to the base class method. Mypy treats a subclass as a subtype of the base class. An instance of a subclass is valid everywhere where an instance of the base class is valid.

This example demonstrates both safe and unsafe overrides:

```
from typing import Sequence, List, Iterable

class A:
    def test(self, t: Sequence[int]) -> Sequence[str]:
        ...

class GeneralizedArgument(A):
    # A more general argument type is okay
    def test(self, t: Iterable[int]) -> Sequence[str]:  # OK
        ...

class NarrowerArgument(A):
    # A more specific argument type isn't accepted
    def test(self, t: List[int]) -> Sequence[str]:  # Error
        ...

class NarrowerReturn(A):
    # A more specific return type is fine
    def test(self, t: Sequence[int]) -> List[str]:  # OK
        ...

class GeneralizedReturn(A):
    # A more general return type is an error
    def test(self, t: Sequence[int]) -> Iterable[str]:  # Error
        ...
```

You can use `# type: ignore[override]` to silence the error. Add it to the line that generates the error, if you decide that type safety is not necessary:

```
class NarrowerArgument(A):
    def test(self, t: List[int]) -> Sequence[str]:  # type: ignore[override]
        ...
```

#### Unreachable code[#](#unreachable-code)

Mypy may consider some code as *unreachable*, even if it might not be immediately obvious why. It’s important to note that mypy will *not* type check such code.
Consider this example:

```
class Foo:
    bar: str = ''

def bar() -> None:
    foo: Foo = Foo()
    return
    x: int = 'abc'  # Unreachable -- no error
```

It’s easy to see that any statement after `return` is unreachable, and hence mypy will not complain about the mis-typed code below it. For a more subtle example, consider this code:

```
class Foo:
    bar: str = ''

def bar() -> None:
    foo: Foo = Foo()
    assert foo.bar is None
    x: int = 'abc'  # Unreachable -- no error
```

Again, mypy will not report any errors. The type of `foo.bar` is `str`, and mypy reasons that it can never be `None`. Hence the `assert` statement will always fail and the statement below will never be executed. (Note that in Python, `None` is not an empty reference but an object of type `None`.)

In this example mypy will go on to check the last line and report an error, since mypy thinks that the condition could be either True or False:

```
class Foo:
    bar: str = ''

def bar() -> None:
    foo: Foo = Foo()
    if not foo.bar:
        return
    x: int = 'abc'  # Reachable -- error
```

If you use the [`--warn-unreachable`](index.html#cmdoption-mypy-warn-unreachable) flag, mypy will generate an error about each unreachable code block.

#### Narrowing and inner functions[#](#narrowing-and-inner-functions)

Because closures in Python are late-binding (<https://docs.python-guide.org/writing/gotchas/#late-binding-closures>), mypy will not narrow the type of a captured variable in an inner function. This is best understood via an example:

```
from typing import Callable, Optional

def foo(x: Optional[int]) -> Callable[[], int]:
    if x is None:
        x = 5
    print(x + 1)  # mypy correctly deduces x must be an int here

    def inner() -> int:
        return x + 1  # but (correctly) complains about this line

    x = None  # because x could later be assigned None
    return inner

inner = foo(5)
inner()  # this will raise an error when called
```

To get this code to type check, you could assign `y = x` after `x` has been narrowed, and use `y` in the inner function, or add an assert in the inner function.
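The `y = x` workaround described above can be sketched as a runnable example (same function names as the snippet above, for comparison):

```python
from typing import Callable, Optional

def foo(x: Optional[int]) -> Callable[[], int]:
    if x is None:
        x = 5
    y = x  # y is an int here and is never reassigned afterwards

    def inner() -> int:
        return y + 1  # OK: mypy can rely on y staying an int

    return inner

print(foo(5)())     # 6
print(foo(None)())  # 6
```

Because `y` is never assigned again, mypy can safely assume its narrowed type holds inside the closure, and the code also behaves correctly at runtime.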
### Supported Python features[#](#supported-python-features)

A list of unsupported Python features is maintained in the mypy wiki:

* [Unsupported Python features](https://github.com/python/mypy/wiki/Unsupported-Python-Features)

#### Runtime definition of methods and functions[#](#runtime-definition-of-methods-and-functions)

By default, mypy will complain if you add a function to a class or module outside its definition – but only if this is visible to the type checker. This only affects static checking, as mypy performs no additional type checking at runtime. You can easily work around this. For example, you can use dynamically typed code or values with `Any` types, or you can use [`setattr()`](https://docs.python.org/3/library/functions.html#setattr) or other introspection features. However, you need to be careful if you decide to do this. If used indiscriminately, you may have difficulty using static typing effectively, since the type checker cannot see functions defined at runtime.

### Error codes[#](#error-codes)

Mypy can optionally display an error code such as `[attr-defined]` after each error message. Error codes serve two purposes:

1. It’s possible to silence specific error codes on a line using `# type: ignore[code]`. This way you won’t accidentally ignore other, potentially more serious errors.
2. The error code can be used to find documentation about the error. The next two topics ([Error codes enabled by default](index.html#error-code-list) and [Error codes for optional checks](index.html#error-codes-optional)) document the various error codes mypy can report.

Most error codes are shared between multiple related error messages. Error codes may change in future mypy releases.

#### Displaying error codes[#](#displaying-error-codes)

Error codes are displayed by default. Use [`--hide-error-codes`](index.html#cmdoption-mypy-hide-error-codes) or config `hide_error_codes = True` to hide error codes.
Error codes are shown inside square brackets:

```
$ mypy prog.py
prog.py:1: error: "str" has no attribute "trim"  [attr-defined]
```

It’s also possible to require error codes for `type: ignore` comments. See [ignore-without-code](index.html#code-ignore-without-code) for more information.

#### Silencing errors based on error codes[#](#silencing-errors-based-on-error-codes)

You can use a special comment `# type: ignore[code, ...]` to only ignore errors with a specific error code (or codes) on a particular line. This can be used even if you have not configured mypy to show error codes.

This example shows how to ignore an error about an imported name mypy thinks is undefined:

```
# 'foo' is defined in 'foolib', even though mypy can't see the
# definition.
from foolib import foo  # type: ignore[attr-defined]
```

#### Enabling/disabling specific error codes globally[#](#enabling-disabling-specific-error-codes-globally)

There are command-line flags and config file settings for enabling certain optional error codes, such as [`--disallow-untyped-defs`](index.html#cmdoption-mypy-disallow-untyped-defs), which enables the `no-untyped-def` error code.

You can use [`--enable-error-code`](index.html#cmdoption-mypy-enable-error-code) and [`--disable-error-code`](index.html#cmdoption-mypy-disable-error-code) to enable or disable specific error codes that don’t have a dedicated command-line flag or config file setting.

#### Per-module enabling/disabling error codes[#](#per-module-enabling-disabling-error-codes)

You can use [configuration file](index.html#config-file) sections to enable or disable specific error codes only in some modules.
For example, this `mypy.ini` config will enable non-annotated empty containers in tests, while keeping other parts of code checked in strict mode:

```
[mypy]
strict = True

[mypy-tests.*]
allow_untyped_defs = True
allow_untyped_calls = True
disable_error_code = var-annotated, has-type
```

Note that per-module enabling/disabling acts as an override over the global options, so you don’t need to repeat the error code lists for each module if you have them in the global config section. For example:

```
[mypy]
enable_error_code = truthy-bool, ignore-without-code, unused-awaitable

[mypy-extensions.*]
disable_error_code = unused-awaitable
```

The above config will allow unused awaitables in extension modules, but will still keep the other two error codes enabled. The overall logic is as follows:

* Command line and/or config main section set global error codes
* Individual config sections *adjust* them per glob/module
* Inline `# mypy: disable-error-code="..."` comments can further *adjust* them for a specific module. For example: `# mypy: disable-error-code="truthy-bool, ignore-without-code"`

So one can e.g. enable some code globally, disable it for all tests in the corresponding config section, and then re-enable it with an inline comment in some specific test.

#### Subcodes of error codes[#](#subcodes-of-error-codes)

In some cases, mostly for backwards compatibility reasons, an error code may be covered also by another, wider error code. For example, an error with code `[method-assign]` can be ignored by `# type: ignore[assignment]`. Similar logic works for disabling error codes globally. If a given error code is a subcode of another one, it will be mentioned in the documentation for the narrower code. This hierarchy is not nested: there cannot be subcodes of other subcodes.

### Error codes enabled by default[#](#error-codes-enabled-by-default)

This section documents various error codes that mypy can generate with default options.
See [Error codes](index.html#error-codes) for general documentation about error codes. [Error codes for optional checks](index.html#error-codes-optional) documents additional error codes that you can enable.

#### Check that attribute exists [attr-defined][#](#check-that-attribute-exists-attr-defined)

Mypy checks that an attribute is defined in the target class or module when using the dot operator. This applies to both getting and setting an attribute. New attributes are defined by assignments in the class body, or assignments to `self.x` in methods. These assignments don’t generate `attr-defined` errors.

Example:

```
class Resource:
    def __init__(self, name: str) -> None:
        self.name = name

r = Resource('x')
print(r.name)  # OK
print(r.id)  # Error: "Resource" has no attribute "id"  [attr-defined]
r.id = 5  # Error: "Resource" has no attribute "id"  [attr-defined]
```

This error code is also generated if an imported name is not defined in the module in a `from ... import` statement (as long as the target module can be found):

```
# Error: Module "os" has no attribute "non_existent"  [attr-defined]
from os import non_existent
```

A reference to a missing attribute is given the `Any` type. In the above example, the type of `non_existent` will be `Any`, which can be important if you silence the error.

#### Check that attribute exists in each union item [union-attr][#](#check-that-attribute-exists-in-each-union-item-union-attr)

If you access the attribute of a value with a union type, mypy checks that the attribute is defined for *every* type in that union. Otherwise the operation can fail at runtime. This also applies to optional types.

Example:

```
from typing import Union

class Cat:
    def sleep(self) -> None: ...
    def miaow(self) -> None: ...

class Dog:
    def sleep(self) -> None: ...
    def follow_me(self) -> None: ...
def func(animal: Union[Cat, Dog]) -> None: # OK: 'sleep' is defined for both Cat and Dog animal.sleep() # Error: Item "Cat" of "Union[Cat, Dog]" has no attribute "follow_me" [union-attr] animal.follow_me() ``` You can often work around these errors by using `assert isinstance(obj, ClassName)` or `assert obj is not None` to tell mypy that you know that the type is more specific than what mypy thinks. #### Check that name is defined [name-defined][#](#check-that-name-is-defined-name-defined) Mypy expects that all references to names have a corresponding definition in an active scope, such as an assignment, function definition or an import. This can catch missing definitions, missing imports, and typos. This example accidentally calls `sort()` instead of [`sorted()`](https://docs.python.org/3/library/functions.html#sorted): ``` x = sort([3, 2, 4]) # Error: Name "sort" is not defined [name-defined] ``` #### Check that a variable is not used before it’s defined [used-before-def][#](#check-that-a-variable-is-not-used-before-it-s-defined-used-before-def) Mypy will generate an error if a name is used before it’s defined. While the name-defined check will catch issues with names that are undefined, it will not flag a variable that is used and then defined later in the scope. The used-before-def check will catch such cases. Example: ``` print(x) # Error: Name "x" is used before definition [used-before-def] x = 123 ``` #### Check arguments in calls [call-arg][#](#check-arguments-in-calls-call-arg) Mypy expects that the number and names of arguments match the called function. Note that argument type checks have a separate error code `arg-type`. 
Example: ``` from typing import Sequence def greet(name: str) -> None: print('hello', name) greet('jack') # OK greet('jill', 'jack') # Error: Too many arguments for "greet" [call-arg] ``` #### Check argument types [arg-type][#](#check-argument-types-arg-type) Mypy checks that argument types in a call match the declared argument types in the signature of the called function (if one exists). Example: ``` from typing import Optional def first(x: list[int]) -> Optional[int]: return x[0] if x else 0 t = (5, 4) # Error: Argument 1 to "first" has incompatible type "tuple[int, int]"; # expected "list[int]" [arg-type] print(first(t)) ``` #### Check calls to overloaded functions [call-overload][#](#check-calls-to-overloaded-functions-call-overload) When you call an overloaded function, mypy checks that at least one of the signatures of the overload items matches the argument types in the call. Example: ``` from typing import overload, Optional @overload def inc_maybe(x: None) -> None: ... @overload def inc_maybe(x: int) -> int: ... def inc_maybe(x: Optional[int]) -> Optional[int]: if x is None: return None else: return x + 1 inc_maybe(None) # OK inc_maybe(5) # OK # Error: No overload variant of "inc_maybe" matches argument type "float" [call-overload] inc_maybe(1.2) ``` #### Check validity of types [valid-type][#](#check-validity-of-types-valid-type) Mypy checks that each type annotation and any expression that represents a type is a valid type. Examples of valid types include classes, union types, callable types, type aliases, and literal types. Examples of invalid types include bare integer literals, functions, variables, and modules. 
This example incorrectly uses the function `log` as a type: ``` def log(x: object) -> None: print('log:', repr(x)) # Error: Function "t.log" is not valid as a type [valid-type] def log_all(objs: list[object], f: log) -> None: for x in objs: f(x) ``` You can use [`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable) as the type for callable objects: ``` from typing import Callable # OK def log_all(objs: list[object], f: Callable[[object], None]) -> None: for x in objs: f(x) ``` #### Require annotation if variable type is unclear [var-annotated][#](#require-annotation-if-variable-type-is-unclear-var-annotated) In some cases mypy can’t infer the type of a variable without an explicit annotation. Mypy treats this as an error. This typically happens when you initialize a variable with an empty collection or `None`. If mypy can’t infer the collection item type, mypy replaces any parts of the type it couldn’t infer with `Any` and generates an error. Example with an error: ``` class Bundle: def __init__(self) -> None: # Error: Need type annotation for "items" # (hint: "items: list[<type>] = ...") [var-annotated] self.items = [] reveal_type(Bundle().items) # list[Any] ``` To address this, we add an explicit annotation: ``` class Bundle: def __init__(self) -> None: self.items: list[str] = [] # OK reveal_type(Bundle().items) # list[str] ``` #### Check validity of overrides [override][#](#check-validity-of-overrides-override) Mypy checks that an overridden method or attribute is compatible with the base class. A method in a subclass must accept all arguments that the base class method accepts, and the return type must conform to the return type in the base class (Liskov substitution principle). Argument types can be more general in a subclass (i.e., they can vary contravariantly). The return type can be narrowed in a subclass (i.e., it can vary covariantly). 
It’s okay to define additional arguments in a subclass method, as long as all extra arguments have default values or can be left out (`*args`, for example). Example: ``` from typing import Optional, Union class Base: def method(self, arg: int) -> Optional[int]: ... class Derived(Base): def method(self, arg: Union[int, str]) -> int: # OK ... class DerivedBad(Base): # Error: Argument 1 of "method" is incompatible with "Base" [override] def method(self, arg: bool) -> int: ... ``` #### Check that function returns a value [return][#](#check-that-function-returns-a-value-return) If a function has a non-`None` return type, mypy expects that the function always explicitly returns a value (or raises an exception). Execution should not fall off the end of the function, since this is often a bug. Example: ``` # Error: Missing return statement [return] def show(x: int) -> int: print(x) # Error: Missing return statement [return] def pred1(x: int) -> int: if x > 0: return x - 1 # OK def pred2(x: int) -> int: if x > 0: return x - 1 else: raise ValueError('not defined for zero') ``` #### Check that functions don’t have empty bodies outside stubs [empty-body][#](#check-that-functions-don-t-have-empty-bodies-outside-stubs-empty-body) This error code is similar to the `[return]` code but is emitted specifically for functions and methods with empty bodies (if they are annotated with a non-trivial return type). Such a distinction exists because in some contexts an empty body can be valid, for example for an abstract method or in a stub file. Also, old versions of mypy used to unconditionally allow functions with empty bodies, so having a dedicated error code simplifies cross-version compatibility. 
Note that empty bodies are allowed for methods in *protocols*, and such methods are considered implicitly abstract: ``` from abc import abstractmethod from typing import Protocol class RegularABC: @abstractmethod def foo(self) -> int: pass # OK def bar(self) -> int: pass # Error: Missing return statement [empty-body] class Proto(Protocol): def bar(self) -> int: pass # OK ``` #### Check that return value is compatible [return-value][#](#check-that-return-value-is-compatible-return-value) Mypy checks that the returned value is compatible with the type signature of the function. Example: ``` def func(x: int) -> str: # Error: Incompatible return value type (got "int", expected "str") [return-value] return x + 1 ``` #### Check types in assignment statement [assignment][#](#check-types-in-assignment-statement-assignment) Mypy checks that the assigned expression is compatible with the assignment target (or targets). Example: ``` class Resource: def __init__(self, name: str) -> None: self.name = name r = Resource('A') r.name = 'B' # OK # Error: Incompatible types in assignment (expression has type "int", # variable has type "str") [assignment] r.name = 5 ``` #### Check that assignment target is not a method [method-assign][#](#check-that-assignment-target-is-not-a-method-method-assign) In general, assigning to a method on a class object or instance (a.k.a. monkey-patching) is ambiguous in terms of types, since Python’s static type system cannot express the difference between bound and unbound callable types. Consider this example: ``` class A: def f(self) -> None: pass def g(self) -> None: pass def h(self: A) -> None: pass A.f = h # Type of h is Callable[[A], None] A().f() # This works A.f = A().g # Type of A().g is Callable[[], None] A().f() # ...but this also works at runtime ``` To prevent the ambiguity, mypy will flag both assignments by default. 
If this error code is disabled, mypy will treat the assigned value in all method assignments as unbound, so only the second assignment will still generate an error. Note This error code is a subcode of the more general `[assignment]` code. #### Check type variable values [type-var][#](#check-type-variable-values-type-var) Mypy checks that the value of a type variable is compatible with a value restriction or the upper bound type. Example: ``` from typing import TypeVar T1 = TypeVar('T1', int, float) def add(x: T1, y: T1) -> T1: return x + y add(4, 5.5) # OK # Error: Value of type variable "T1" of "add" cannot be "str" [type-var] add('x', 'y') ``` #### Check uses of various operators [operator][#](#check-uses-of-various-operators-operator) Mypy checks that operands support a binary or unary operation, such as `+` or `~`. Indexing operations are so common that they have their own error code `index` (see below). Example: ``` # Error: Unsupported operand types for + ("int" and "str") [operator] 1 + 'x' ``` #### Check indexing operations [index][#](#check-indexing-operations-index) Mypy checks that the indexed value in an indexing operation such as `x[y]` supports indexing, and that the index expression has a valid type. Example: ``` a = {'x': 1, 'y': 2} a['x'] # OK # Error: Invalid index type "int" for "dict[str, int]"; expected type "str" [index] print(a[1]) # Error: Invalid index type "bytes" for "dict[str, int]"; expected type "str" [index] a[b'x'] = 4 ``` #### Check list items [list-item][#](#check-list-items-list-item) When constructing a list using `[item, ...]`, mypy checks that each item is compatible with the list type that is inferred from the surrounding context. 
Example: ``` # Error: List item 0 has incompatible type "int"; expected "str" [list-item] a: list[str] = [0] ``` #### Check dict items [dict-item][#](#check-dict-items-dict-item) When constructing a dictionary using `{key: value, ...}` or `dict(key=value, ...)`, mypy checks that each key and value is compatible with the dictionary type that is inferred from the surrounding context. Example: ``` # Error: Dict entry 0 has incompatible type "str": "str"; expected "str": "int" [dict-item] d: dict[str, int] = {'key': 'value'} ``` #### Check TypedDict items [typeddict-item][#](#check-typeddict-items-typeddict-item) When constructing a TypedDict object, mypy checks that each key and value is compatible with the TypedDict type that is inferred from the surrounding context. When getting a TypedDict item, mypy checks that the key exists. When assigning to a TypedDict, mypy checks that both the key and the value are valid. Example: ``` from typing_extensions import TypedDict class Point(TypedDict): x: int y: int # Error: Incompatible types (expression has type "float", # TypedDict item "x" has type "int") [typeddict-item] p: Point = {'x': 1.2, 'y': 4} ``` #### Check TypedDict Keys [typeddict-unknown-key][#](#check-typeddict-keys-typeddict-unknown-key) When constructing a TypedDict object, mypy checks whether the definition contains unknown keys, to catch invalid keys and misspellings. On the other hand, mypy will not generate an error when a previously constructed TypedDict value with extra keys is passed to a function as an argument, since TypedDict values support structural subtyping (“static duck typing”) and the keys are assumed to have been validated at the point of construction. 
Example: ``` from typing_extensions import TypedDict class Point(TypedDict): x: int y: int class Point3D(Point): z: int def add_x_coordinates(a: Point, b: Point) -> int: return a["x"] + b["x"] a: Point = {"x": 1, "y": 4} b: Point3D = {"x": 2, "y": 5, "z": 6} add_x_coordinates(a, b) # OK # Error: Extra key "z" for TypedDict "Point" [typeddict-unknown-key] add_x_coordinates(a, {"x": 1, "y": 4, "z": 5}) ``` Setting a TypedDict item using an unknown key will also generate this error, since it could be a misspelling: ``` a: Point = {"x": 1, "y": 2} # Error: Extra key "z" for TypedDict "Point" [typeddict-unknown-key] a["z"] = 3 ``` Reading an unknown key will generate the more general (and serious) `typeddict-item` error, which is likely to result in an exception at runtime: ``` a: Point = {"x": 1, "y": 2} # Error: TypedDict "Point" has no key "z" [typeddict-item] _ = a["z"] ``` Note This error code is a subcode of the wider `[typeddict-item]` code. #### Check that type of target is known [has-type][#](#check-that-type-of-target-is-known-has-type) Mypy sometimes generates an error when it hasn’t inferred any type for a variable being referenced. This can happen for references to variables that are initialized later in the source file, and for references across modules that form an import cycle. When this happens, the reference gets an implicit `Any` type. In this example the definitions of `x` and `y` are circular: ``` class Problem: def set_x(self) -> None: # Error: Cannot determine type of "y" [has-type] self.x = self.y def set_y(self) -> None: self.y = self.x ``` To work around this error, you can add an explicit type annotation to the target variable or attribute. Sometimes you can also reorganize the code so that the definition of the variable is placed earlier than the reference to the variable in a source file. Untangling cyclic imports may also help. 
We add an explicit annotation to the `y` attribute to work around the issue: ``` class Problem: def set_x(self) -> None: self.x = self.y # OK def set_y(self) -> None: self.y: int = self.x # Added annotation here ``` #### Check for an issue with imports [import][#](#check-for-an-issue-with-imports-import) Mypy generates an error if it can’t resolve an import statement. This is a parent error code of `import-not-found` and `import-untyped`. See [Missing imports](index.html#ignore-missing-imports) for how to work around these errors. #### Check that import target can be found [import-not-found][#](#check-that-import-target-can-be-found-import-not-found) Mypy generates an error if it can’t find the source code or a stub file for an imported module. Example: ``` # Error: Cannot find implementation or library stub for module named "m0dule_with_typo" [import-not-found] import m0dule_with_typo ``` See [Missing imports](index.html#ignore-missing-imports) for how to work around these errors. #### Check that import target can be found [import-untyped][#](#check-that-import-target-can-be-found-import-untyped) Mypy generates an error if it can find the source code for an imported module, but that module does not provide type annotations (via [PEP 561](index.html#installed-packages)). Example: ``` # Error: Library stubs not installed for "bs4" [import-untyped] import bs4 # Error: Skipping analyzing "no_py_typed": module is installed, but missing library stubs or py.typed marker [import-untyped] import no_py_typed ``` In some cases, these errors can be fixed by installing an appropriate stub package. See [Missing imports](index.html#ignore-missing-imports) for more details. #### Check that each name is defined once [no-redef][#](#check-that-each-name-is-defined-once-no-redef) Mypy may generate an error if you have multiple definitions for a name in the same namespace. The reason is that this is often an error, as the second definition may overwrite the first one. 
Also, mypy often isn’t able to determine whether references point to the first or the second definition, which would compromise type checking. If you silence this error, all references to the defined name refer to the *first* definition. Example: ``` class A: def __init__(self, x: int) -> None: ... class A: # Error: Name "A" already defined on line 1 [no-redef] def __init__(self, x: str) -> None: ... # Error: Argument 1 to "A" has incompatible type "str"; expected "int" # (the first definition wins!) A('x') ``` #### Check that called function returns a value [func-returns-value][#](#check-that-called-function-returns-a-value-func-returns-value) Mypy reports an error if you call a function with a `None` return type and don’t ignore the return value, as this is usually (but not always) a programming error. In this example, the `if f()` check is always false since `f` returns `None`: ``` def f() -> None: ... # OK: we don't do anything with the return value f() # Error: "f" does not return a value [func-returns-value] if f(): print("not false") ``` #### Check instantiation of abstract classes [abstract][#](#check-instantiation-of-abstract-classes-abstract) Mypy generates an error if you try to instantiate an abstract base class (ABC). An abstract base class is a class with at least one abstract method or attribute. (See also [`abc`](https://docs.python.org/3/library/abc.html#module-abc) module documentation) Sometimes a class is made accidentally abstract, often due to an unimplemented abstract method. In a case like this you need to provide an implementation for the method to make the class concrete (non-abstract). Example: ``` from abc import ABCMeta, abstractmethod class Persistent(metaclass=ABCMeta): @abstractmethod def save(self) -> None: ... class Thing(Persistent): def __init__(self) -> None: ... ... 
# No "save" method # Error: Cannot instantiate abstract class "Thing" with abstract attribute "save" [abstract] t = Thing() ``` #### Safe handling of abstract type object types [type-abstract][#](#safe-handling-of-abstract-type-object-types-type-abstract) Mypy always allows instantiating (calling) type objects typed as `Type[t]`, even if it is not known that `t` is non-abstract, since it is a common pattern to create functions that act as object factories (custom constructors). Therefore, to prevent issues described in the above section, when an abstract type object is passed where `Type[t]` is expected, mypy will give an error. Example: ``` from abc import ABCMeta, abstractmethod from typing import List, Type, TypeVar class Config(metaclass=ABCMeta): @abstractmethod def get_value(self, attr: str) -> str: ... T = TypeVar("T") def make_many(typ: Type[T], n: int) -> List[T]: return [typ() for _ in range(n)] # This will raise if typ is abstract # Error: Only concrete class can be given where "Type[Config]" is expected [type-abstract] make_many(Config, 5) ``` #### Check that call to an abstract method via super is valid [safe-super][#](#check-that-call-to-an-abstract-method-via-super-is-valid-safe-super) Abstract methods often don’t have any default implementation, i.e. their bodies are just empty. Calling such methods in subclasses via `super()` will cause runtime errors, so mypy prevents you from doing so: ``` from abc import abstractmethod class Base: @abstractmethod def foo(self) -> int: ... class Sub(Base): def foo(self) -> int: return super().foo() + 1 # error: Call to abstract method "foo" of "Base" with # trivial body via super() is unsafe [safe-super] Sub().foo() # This will crash at runtime. ``` Mypy considers the following as trivial bodies: a `pass` statement, a literal ellipsis `...`, a docstring, and a `raise NotImplementedError` statement. 
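To make that list concrete, here is a small sketch (the class and method names are illustrative, not taken from mypy’s documentation) showing each body form mypy treats as trivial, plus one non-trivial body that remains safe to call via `super()`:

```python
from abc import ABC, abstractmethod

class Base(ABC):
    @abstractmethod
    def a(self) -> int: ...        # trivial: literal ellipsis

    @abstractmethod
    def b(self) -> int:
        pass                       # trivial: pass statement

    @abstractmethod
    def c(self) -> int:
        """Only a docstring."""    # trivial: docstring alone

    @abstractmethod
    def d(self) -> int:
        raise NotImplementedError  # trivial: raise NotImplementedError

    @abstractmethod
    def e(self) -> int:
        return 0                   # non-trivial: has a real implementation

class Sub(Base):
    def a(self) -> int: return 1
    def b(self) -> int: return 2
    def c(self) -> int: return 3
    def d(self) -> int: return 4
    def e(self) -> int:
        # OK: the base body is non-trivial, so super().e() is safe;
        # doing the same with a..d would be flagged with [safe-super]
        return super().e() + 1
```

Calling `super()` on any of `a` through `d` from `Sub` would trigger the `[safe-super]` error, while `super().e()` is accepted.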
#### Check the target of NewType [valid-newtype][#](#check-the-target-of-newtype-valid-newtype) The target of a [`NewType`](https://docs.python.org/3/library/typing.html#typing.NewType) definition must be a class type. It can’t be a union type, `Any`, or various other special types. You can also get this error if the target has been imported from a module whose source mypy cannot find, since any such definitions are treated by mypy as values with `Any` types. Example: ``` from typing import NewType # The source for "acme" is not available for mypy from acme import Entity # type: ignore # Error: Argument 2 to NewType(...) must be subclassable (got "Any") [valid-newtype] UserEntity = NewType('UserEntity', Entity) ``` To work around the issue, you can either give mypy access to the sources for `acme` or create a stub file for the module. See [Missing imports](index.html#ignore-missing-imports) for more information. #### Check the return type of __exit__ [exit-return][#](#check-the-return-type-of-exit-exit-return) If mypy can determine that [`__exit__`](https://docs.python.org/3/reference/datamodel.html#object.__exit__) always returns `False`, mypy checks that the return type is *not* `bool`. The boolean value of the return type affects which lines mypy thinks are reachable after a `with` statement, since any [`__exit__`](https://docs.python.org/3/reference/datamodel.html#object.__exit__) method that can return `True` may swallow exceptions. An imprecise return type can result in mysterious errors reported near `with` statements. To fix this, use either `typing_extensions.Literal[False]` or `None` as the return type. Returning `None` is equivalent to returning `False` in this context, since both are treated as false values. Example: ``` class MyContext: ... 
def __exit__(self, exc, value, tb) -> bool: # Error print('exit') return False ``` This produces the following output from mypy: ``` example.py:3: error: "bool" is invalid as return type for "__exit__" that always returns False example.py:3: note: Use "typing_extensions.Literal[False]" as the return type or change it to "None" example.py:3: note: If return type of "__exit__" implies that it may return True, the context manager may swallow exceptions ``` You can use `Literal[False]` to fix the error: ``` from typing_extensions import Literal class MyContext: ... def __exit__(self, exc, value, tb) -> Literal[False]: # OK print('exit') return False ``` You can also use `None`: ``` class MyContext: ... def __exit__(self, exc, value, tb) -> None: # Also OK print('exit') ``` #### Check that naming is consistent [name-match][#](#check-that-naming-is-consistent-name-match) The definition of a named tuple or a TypedDict must be named consistently when using the call-based syntax. Example: ``` from typing import NamedTuple # Error: First argument to namedtuple() should be "Point2D", not "Point" Point2D = NamedTuple("Point", [("x", int), ("y", int)]) ``` #### Check that literal is used where expected [literal-required][#](#check-that-literal-is-used-where-expected-literal-required) There are some places where only a (string) literal value is expected for the purposes of static type checking, for example a `TypedDict` key, or a `__match_args__` item. Providing a `str`-valued variable in such contexts will result in an error. Note that in many cases you can also use `Final` or `Literal` variables. 
Example: ``` from typing import Final, Literal, TypedDict class Point(TypedDict): x: int y: int def test(p: Point) -> None: X: Final = "x" p[X] # OK Y: Literal["y"] = "y" p[Y] # OK key = "x" # Inferred type of key is `str` # Error: TypedDict key must be a string literal; # expected one of ("x", "y") [literal-required] p[key] ``` #### Check that overloaded functions have an implementation [no-overload-impl][#](#check-that-overloaded-functions-have-an-implementation-no-overload-impl) Overloaded functions outside of stub files must be followed by a non-overloaded implementation. ``` from typing import overload @overload def func(value: int) -> int: ... @overload def func(value: str) -> str: ... # presence of required function below is checked def func(value): pass # actual implementation ``` #### Check that coroutine return value is used [unused-coroutine][#](#check-that-coroutine-return-value-is-used-unused-coroutine) Mypy ensures that return values of `async def` functions are not ignored, since ignoring one is usually a programming error: the coroutine won’t be executed at the call site. ``` async def f() -> None: ... async def g() -> None: f() # Error: missing await await f() # OK ``` You can work around this error by assigning the result to a temporary, otherwise unused variable: ``` _ = f() # No error ``` #### Warn about top level await expressions [top-level-await][#](#warn-about-top-level-await-expressions-top-level-await) This error code is separate from the general `[syntax]` errors, because in some environments (e.g. IPython) a top level `await` is allowed. In such environments a user may want to use `--disable-error-code=top-level-await`, which still reports errors for other improper uses of `await`, for example: ``` async def f() -> None: ... 
top = await f() # Error: "await" outside function [top-level-await] ``` #### Warn about await expressions used outside of coroutines [await-not-async][#](#warn-about-await-expressions-used-outside-of-coroutines-await-not-async) `await` must be used inside a coroutine. ``` async def f() -> None: ... def g() -> None: await f() # Error: "await" outside coroutine ("async def") [await-not-async] ``` #### Check types in assert_type [assert-type][#](#check-types-in-assert-type-assert-type) The inferred type for an expression passed to `assert_type` must match the provided type. ``` from typing_extensions import assert_type assert_type([1], list[int]) # OK assert_type([1], list[str]) # Error ``` #### Check that function isn’t used in boolean context [truthy-function][#](#check-that-function-isn-t-used-in-boolean-context-truthy-function) Functions will always evaluate to true in boolean contexts. ``` def f(): ... if f: # Error: Function "Callable[[], Any]" could always be true in boolean context [truthy-function] pass ``` #### Check that string formatting/interpolation is type-safe [str-format][#](#check-that-string-formatting-interpolation-is-type-safe-str-format) Mypy will check that f-strings, `str.format()` calls, and `%` interpolations are valid (when the corresponding template is a literal string). 
This includes checking the number and types of replacements, for example: ``` # Error: Cannot find replacement for positional format specifier 1 [str-format] "{} and {}".format("spam") "{} and {}".format("spam", "eggs") # OK # Error: Not all arguments converted during string formatting [str-format] "{} and {}".format("spam", "eggs", "cheese") # Error: Incompatible types in string interpolation # (expression has type "float", placeholder has type "int") [str-format] "{:d}".format(3.14) ``` #### Check for implicit bytes coercions [str-bytes-safe][#](#check-for-implicit-bytes-coercions-str-bytes-safe) Warn about cases where a bytes object may be converted to a string in an unexpected manner. ``` b = b"abc" # Error: If x = b'abc' then f"{x}" or "{}".format(x) produces "b'abc'", not "abc". # If this is desired behavior, use f"{x!r}" or "{!r}".format(x). # Otherwise, decode the bytes [str-bytes-safe] print(f"The alphabet starts with {b}") # Okay print(f"The alphabet starts with {b!r}") # The alphabet starts with b'abc' print(f"The alphabet starts with {b.decode('utf-8')}") # The alphabet starts with abc ``` #### Notify about an annotation in an unchecked function [annotation-unchecked][#](#notify-about-an-annotation-in-an-unchecked-function-annotation-unchecked) Sometimes a user may accidentally omit an annotation for a function, and mypy will not check the body of this function (unless one uses [`--check-untyped-defs`](index.html#cmdoption-mypy-check-untyped-defs) or [`--disallow-untyped-defs`](index.html#cmdoption-mypy-disallow-untyped-defs)). 
To prevent such situations from going unnoticed, mypy will show a note if there are any type annotations in an unchecked function: ``` def test_assignment(): # "-> None" return annotation is missing # Note: By default the bodies of untyped functions are not checked, # consider using --check-untyped-defs [annotation-unchecked] x: int = "no way" ``` Note that mypy will still exit with return code `0`, since such behaviour is specified by [**PEP 484**](https://peps.python.org/pep-0484/). #### Report syntax errors [syntax][#](#report-syntax-errors-syntax) If the code being checked is not syntactically valid, mypy issues a syntax error. Most, but not all, syntax errors are *blocking errors*: they can’t be ignored with a `# type: ignore` comment. #### Miscellaneous checks [misc][#](#miscellaneous-checks-misc) Mypy performs numerous other, less commonly failing checks that don’t have specific error codes. These use the `misc` error code. Other than being used for multiple unrelated errors, the `misc` error code is not special. For example, you can ignore all errors in this category by using a `# type: ignore[misc]` comment. Since these errors are not expected to be common, it’s unlikely that you’ll see two *different* errors with the `misc` code on a single line – though this can certainly happen once in a while. Note Future mypy versions will likely add new error codes for some errors that currently use the `misc` error code. ### Error codes for optional checks[#](#error-codes-for-optional-checks) This section documents various error codes that mypy generates only if you enable certain options. See [Error codes](index.html#error-codes) for general documentation about error codes. [Error codes enabled by default](index.html#error-code-list) documents error codes that are enabled by default. Note The examples in this section use [inline configuration](index.html#inline-config) to specify mypy options. 
You can also set the same options by using a [configuration file](index.html#config-file) or [command-line options](index.html#command-line). #### Check that type arguments exist [type-arg][#](#check-that-type-arguments-exist-type-arg) If you use [`--disallow-any-generics`](index.html#cmdoption-mypy-disallow-any-generics), mypy requires that each generic type has values for each type argument. For example, the types `list` or `dict` would be rejected. You should instead use types like `list[int]` or `dict[str, int]`. Any omitted generic type arguments get implicit `Any` values. The type `list` is equivalent to `list[Any]`, and so on. Example: ``` # mypy: disallow-any-generics # Error: Missing type parameters for generic type "list" [type-arg] def remove_dups(items: list) -> list: ... ``` #### Check that every function has an annotation [no-untyped-def][#](#check-that-every-function-has-an-annotation-no-untyped-def) If you use [`--disallow-untyped-defs`](index.html#cmdoption-mypy-disallow-untyped-defs), mypy requires that all functions have annotations (either a Python 3 annotation or a type comment). Example: ``` # mypy: disallow-untyped-defs def inc(x): # Error: Function is missing a type annotation [no-untyped-def] return x + 1 def inc_ok(x: int) -> int: # OK return x + 1 class Counter: # Error: Function is missing a type annotation [no-untyped-def] def __init__(self): self.value = 0 class CounterOk: # OK: An explicit "-> None" is needed if "__init__" takes no arguments def __init__(self) -> None: self.value = 0 ``` #### Check that cast is not redundant [redundant-cast][#](#check-that-cast-is-not-redundant-redundant-cast) If you use [`--warn-redundant-casts`](index.html#cmdoption-mypy-warn-redundant-casts), mypy will generate an error if the source type of a cast is the same as the target type. 
Example: ``` # mypy: warn-redundant-casts from typing import cast Count = int def example(x: Count) -> int: # Error: Redundant cast to "int" [redundant-cast] return cast(int, x) ``` #### Check that methods do not have redundant Self annotations [redundant-self][#](#check-that-methods-do-not-have-redundant-self-annotations-redundant-self) If a method uses the `Self` type in the return type or the type of a non-self argument, there is no need to annotate the `self` argument explicitly. Such annotations are allowed by [**PEP 673**](https://peps.python.org/pep-0673/) but are redundant. If you enable this error code, mypy will generate an error if there is a redundant `Self` type. Example: ``` # mypy: enable-error-code="redundant-self" from typing import Self class C: # Error: Redundant "Self" annotation for the first method argument def copy(self: Self) -> Self: return type(self)() ``` #### Check that comparisons are overlapping [comparison-overlap][#](#check-that-comparisons-are-overlapping-comparison-overlap) If you use [`--strict-equality`](index.html#cmdoption-mypy-strict-equality), mypy will generate an error if it thinks that a comparison operation is always true or false. These are often bugs. Sometimes mypy is too picky and the comparison can actually be useful. Instead of disabling strict equality checking everywhere, you can use `# type: ignore[comparison-overlap]` to ignore the issue on a particular line only. 
Example: ``` # mypy: strict-equality def is_magic(x: bytes) -> bool: # Error: Non-overlapping equality check (left operand type: "bytes", # right operand type: "str") [comparison-overlap] return x == 'magic' ``` We can fix the error by changing the string literal to a bytes literal: ``` # mypy: strict-equality def is_magic(x: bytes) -> bool: return x == b'magic' # OK ``` #### Check that no untyped functions are called [no-untyped-call][#](#check-that-no-untyped-functions-are-called-no-untyped-call) If you use [`--disallow-untyped-calls`](index.html#cmdoption-mypy-disallow-untyped-calls), mypy generates an error when you call an unannotated function in an annotated function. Example: ``` # mypy: disallow-untyped-calls def do_it() -> None: # Error: Call to untyped function "bad" in typed context [no-untyped-call] bad() def bad(): ... ``` #### Check that function does not return Any value [no-any-return][#](#check-that-function-does-not-return-any-value-no-any-return) If you use [`--warn-return-any`](index.html#cmdoption-mypy-warn-return-any), mypy generates an error if you return a value with an `Any` type in a function that is annotated to return a non-`Any` value. Example: ``` # mypy: warn-return-any def fields(s): return s.split(',') def first_field(x: str) -> str: # Error: Returning Any from function declared to return "str" [no-any-return] return fields(x)[0] ``` #### Check that types have no Any components due to missing imports [no-any-unimported][#](#check-that-types-have-no-any-components-due-to-missing-imports-no-any-unimported) If you use [`--disallow-any-unimported`](index.html#cmdoption-mypy-disallow-any-unimported), mypy generates an error if a component of a type becomes `Any` because mypy couldn’t resolve an import. These “stealth” `Any` types can be surprising and accidentally cause imprecise type checking. 
In this example, we assume that mypy can’t find the module `animals`, which means that `Cat` falls back to `Any` in a type annotation: ``` # mypy: disallow-any-unimported from animals import Cat # type: ignore # Error: Argument 1 to "feed" becomes "Any" due to an unfollowed import [no-any-unimported] def feed(cat: Cat) -> None: ... ``` #### Check that statement or expression is unreachable [unreachable][#](#check-that-statement-or-expression-is-unreachable-unreachable) If you use [`--warn-unreachable`](index.html#cmdoption-mypy-warn-unreachable), mypy generates an error if it thinks that a statement or expression will never be executed. In most cases, this is due to incorrect control flow or conditional checks that are accidentally always true or false. ``` # mypy: warn-unreachable def example(x: int) -> None: # Error: Right operand of "or" is never evaluated [unreachable] assert isinstance(x, int) or x == 'unused' return # Error: Statement is unreachable [unreachable] print('unreachable') ``` #### Check that expression is redundant [redundant-expr][#](#check-that-expression-is-redundant-redundant-expr) If you use [`--enable-error-code redundant-expr`](index.html#cmdoption-mypy-enable-error-code), mypy generates an error if it thinks that an expression is redundant. ``` # Use "mypy --enable-error-code redundant-expr ..." 
def example(x: int) -> None: # Error: Left operand of "and" is always true [redundant-expr] if isinstance(x, int) and x > 0: pass # Error: If condition is always true [redundant-expr] 1 if isinstance(x, int) else 0 # Error: If condition in comprehension is always true [redundant-expr] [i for i in range(x) if isinstance(i, int)] ``` #### Warn about variables that are defined only in some execution paths [possibly-undefined][#](#warn-about-variables-that-are-defined-only-in-some-execution-paths-possibly-undefined) If you use [`--enable-error-code possibly-undefined`](index.html#cmdoption-mypy-enable-error-code), mypy generates an error if it cannot verify that a variable will be defined in all execution paths. This includes situations when a variable definition appears in a loop, in a conditional branch, in an except handler, etc. For example: ``` # Use "mypy --enable-error-code possibly-undefined ..." from typing import Iterable def test(values: Iterable[int], flag: bool) -> None: if flag: a = 1 z = a + 1 # Error: Name "a" may be undefined [possibly-undefined] for v in values: b = v z = b + 1 # Error: Name "b" may be undefined [possibly-undefined] ``` #### Check that expression is not implicitly true in boolean context [truthy-bool][#](#check-that-expression-is-not-implicitly-true-in-boolean-context-truthy-bool) Warn when the type of an expression in a boolean context does not implement `__bool__` or `__len__`. Unless one of these is implemented by a subtype, the expression will always be considered true, and there may be a bug in the condition. As an exception, the `object` type is allowed in a boolean context. Using an iterable value in a boolean context has a separate error code (see below). ``` # Use "mypy --enable-error-code truthy-bool ..." class Foo: pass foo = Foo() # Error: "foo" has type "Foo" which does not implement __bool__ or __len__ so it could always be true in boolean context if foo: ... 
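# A possible follow-up fix (a sketch, not part of the original example):
# implement __bool__ so that truth-testing the object is meaningful and
# the truthy-bool error no longer applies.
class FixedFoo:
    def __init__(self) -> None:
        self.ready = False

    def __bool__(self) -> bool:
        return self.ready

fixed = FixedFoo()
if fixed:  # OK: FixedFoo implements __bool__
    ...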
``` #### Check that iterable is not implicitly true in boolean context [truthy-iterable][#](#check-that-iterable-is-not-implicitly-true-in-boolean-context-truthy-iterable) Generate an error if a value of type `Iterable` is used as a boolean condition, since `Iterable` does not implement `__len__` or `__bool__`. Example: ``` from typing import Iterable def transform(items: Iterable[int]) -> list[int]: # Error: "items" has type "Iterable[int]" which can always be true in boolean context. Consider using "Collection[int]" instead. [truthy-iterable] if not items: return [42] return [x + 1 for x in items] ``` If `transform` is called with a `Generator` argument, such as `(int(x) for x in [])`, this function would not return `[42]`, unlike what might be intended. Of course, it’s possible that `transform` is only called with `list` or other container objects, and the `if not items` check is actually valid. If that is the case, it is recommended to annotate `items` as `Collection[int]` instead of `Iterable[int]`. #### Check that `# type: ignore` includes an error code [ignore-without-code][#](#check-that-type-ignore-include-an-error-code) Warn when a `# type: ignore` comment does not specify any error codes. This clarifies the intent of the ignore and ensures that only the expected errors are silenced. Example: ``` # Use "mypy --enable-error-code ignore-without-code ..." class Foo: def __init__(self, name: str) -> None: self.name = name f = Foo('foo') # This line has a typo that mypy can't help with as both: # - the expected error 'assignment', and # - the unexpected error 'attr-defined' # are silenced. # Error: "type: ignore" comment without error code (consider "type: ignore[attr-defined]" instead) f.nme = 42 # type: ignore # This line warns correctly about the typo in the attribute name # Error: "Foo" has no attribute "nme"; maybe "name"? 
f.nme = 42 # type: ignore[assignment] ``` #### Check that awaitable return value is used [unused-awaitable][#](#check-that-awaitable-return-value-is-used-unused-awaitable) If you use [`--enable-error-code unused-awaitable`](index.html#cmdoption-mypy-enable-error-code), mypy generates an error if you don’t use a returned value that defines `__await__`. Example: ``` # Use "mypy --enable-error-code unused-awaitable ..." import asyncio async def f() -> int: ... async def g() -> None: # Error: Value of type "Task[int]" must be used # Are you missing an await? asyncio.create_task(f()) ``` You can assign the value to a temporary, otherwise unused variable to silence the error: ``` async def g() -> None: _ = asyncio.create_task(f()) # No error ``` #### Check that `# type: ignore` comment is used [unused-ignore][#](#check-that-type-ignore-comment-is-used-unused-ignore) If you use [`--enable-error-code unused-ignore`](index.html#cmdoption-mypy-enable-error-code) or [`--warn-unused-ignores`](index.html#cmdoption-mypy-warn-unused-ignores), mypy generates an error if you don’t use a `# type: ignore` comment, i.e. if there is a comment, but there would be no error generated by mypy on this line anyway. Example: ``` # Use "mypy --warn-unused-ignores ..." def add(a: int, b: int) -> int: # Error: unused "type: ignore" comment return a + b # type: ignore ``` Note that due to the specific nature of this comment, the only way to selectively silence it is to include the error code explicitly. Also note that this error is not shown if the `# type: ignore` is not used due to code being statically unreachable (e.g. due to platform or version checks). Example: ``` # Use "mypy --warn-unused-ignores ..." 
import sys try: # The "[unused-ignore]" is needed to get a clean mypy run # on both Python 3.8, and 3.9 where this module was added import graphlib # type: ignore[import,unused-ignore] except ImportError: pass if sys.version_info >= (3, 9): # The following will not generate an error on either # Python 3.8, or Python 3.9 42 + "testing..." # type: ignore ``` #### Check that `@override` is used when overriding a base class method [explicit-override][#](#check-that-override-is-used-when-overriding-a-base-class-method-explicit-override) If you use [`--enable-error-code explicit-override`](index.html#cmdoption-mypy-enable-error-code), mypy generates an error if you override a base class method without using the `@override` decorator. An error will not be emitted for overrides of `__init__` or `__new__`. See [PEP 698](https://peps.python.org/pep-0698/#strict-enforcement-per-project). Note Starting with Python 3.12, the `@override` decorator can be imported from `typing`. To use it with older Python versions, import it from `typing_extensions` instead. Example: ``` # Use "mypy --enable-error-code explicit-override ..." from typing import override class Parent: def f(self, x: int) -> None: pass def g(self, y: int) -> None: pass class Child(Parent): def f(self, x: int) -> None: # Error: Missing @override decorator pass @override def g(self, y: int) -> None: pass ``` ### Additional features[#](#additional-features) This section discusses various features that did not fit naturally into one of the previous sections. #### Dataclasses[#](#dataclasses) The [`dataclasses`](https://docs.python.org/3/library/dataclasses.html#module-dataclasses) module allows defining and customizing simple boilerplate-free classes. 
They can be defined using the [`@dataclasses.dataclass`](https://docs.python.org/3/library/dataclasses.html#dataclasses.dataclass) decorator: ``` from dataclasses import dataclass, field @dataclass class Application: name: str plugins: list[str] = field(default_factory=list) test = Application("Testing...") # OK bad = Application("Testing...", "with plugin") # Error: list[str] expected ``` Mypy will detect special methods (such as [`__lt__`](https://docs.python.org/3/reference/datamodel.html#object.__lt__)) depending on the flags used to define dataclasses. For example: ``` from dataclasses import dataclass @dataclass(order=True) class OrderedPoint: x: int y: int @dataclass(order=False) class UnorderedPoint: x: int y: int OrderedPoint(1, 2) < OrderedPoint(3, 4) # OK UnorderedPoint(1, 2) < UnorderedPoint(3, 4) # Error: Unsupported operand types ``` Dataclasses can be generic and can be used in any other way a normal class can be used: ``` from dataclasses import dataclass from typing import Generic, TypeVar T = TypeVar('T') @dataclass class BoxedData(Generic[T]): data: T label: str def unbox(bd: BoxedData[T]) -> T: ... val = unbox(BoxedData(42, "<important>")) # OK, inferred type is int ``` For more information see [official docs](https://docs.python.org/3/library/dataclasses.html) and [**PEP 557**](https://peps.python.org/pep-0557/). ##### Caveats/Known Issues[#](#caveats-known-issues) Some functions in the [`dataclasses`](https://docs.python.org/3/library/dataclasses.html#module-dataclasses) module, such as [`replace()`](https://docs.python.org/3/library/dataclasses.html#dataclasses.replace) and [`asdict()`](https://docs.python.org/3/library/dataclasses.html#dataclasses.asdict), have imprecise (too permissive) types. This will be fixed in future releases. Mypy does not yet recognize aliases of [`dataclasses.dataclass`](https://docs.python.org/3/library/dataclasses.html#dataclasses.dataclass), and will probably never recognize dynamically computed decorators. 
The following examples do **not** work: ``` from dataclasses import dataclass dataclass_alias = dataclass def dataclass_wrapper(cls): return dataclass(cls) @dataclass_alias class AliasDecorated: """ Mypy doesn't recognize this as a dataclass because it is decorated by an alias of `dataclass` rather than by `dataclass` itself. """ attribute: int @dataclass_wrapper class DynamicallyDecorated: """ Mypy doesn't recognize this as a dataclass because it is decorated by a function returning `dataclass` rather than by `dataclass` itself. """ attribute: int AliasDecorated(attribute=1) # error: Unexpected keyword argument DynamicallyDecorated(attribute=1) # error: Unexpected keyword argument ``` #### The attrs package[#](#the-attrs-package) [attrs](https://www.attrs.org/en/stable/index.html) is a package that lets you define classes without writing boilerplate code. Mypy can detect uses of the package and will generate the necessary method definitions for decorated classes using the type annotations it finds. Type annotations can be added as follows: ``` import attrs @attrs.define class A: one: int two: int = 7 three: int = attrs.field(8) ``` If you’re using `auto_attribs=False`, you must use `attrs.field`: ``` import attrs @attrs.define class A: one: int = attrs.field() # Variable annotation (Python 3.6+) two = attrs.field() # type: int # Type comment three = attrs.field(type=int) # type= argument ``` Typeshed has a couple of “white lie” annotations to make type checking easier. [`attrs.field()`](https://www.attrs.org/en/stable/api.html#attrs.field) and [`attrs.Factory`](https://www.attrs.org/en/stable/api.html#attrs.Factory) actually return objects, but the annotation says these return the types that they expect to be assigned to. 
That enables this to work: ``` import attrs @attrs.define class A: one: int = attrs.field(8) two: dict[str, str] = attrs.Factory(dict) bad: str = attrs.field(16) # Error: can't assign int to str ``` ##### Caveats/Known Issues[#](#id1) * The detection of attr classes and attributes works by function name only. This means that if you have your own helper functions that, for example, `return attrs.field()`, mypy will not see them. * All boolean arguments that mypy cares about must be literal `True` or `False`. e.g. the following will not work: ``` import attrs YES = True @attrs.define(init=YES) class A: ... ``` * Currently, `converter` only supports named functions. If mypy finds something else it will complain about not understanding the argument and the type annotation in [`__init__`](https://docs.python.org/3/reference/datamodel.html#object.__init__) will be replaced by `Any`. * [Validator decorators](https://www.attrs.org/en/stable/examples.html#examples-validators) and [default decorators](https://www.attrs.org/en/stable/examples.html#defaults) are not type-checked against the attribute they are setting/validating. * Method definitions added by mypy currently overwrite any existing method definitions. #### Using a remote cache to speed up mypy runs[#](#using-a-remote-cache-to-speed-up-mypy-runs) Mypy performs type checking *incrementally*, reusing results from previous runs to speed up successive runs. If you are type checking a large codebase, mypy can still sometimes be slower than desirable. For example, if you create a new branch based on a much more recent commit than the target of the previous mypy run, mypy may have to process almost every file, as a large fraction of source files may have changed. This can also happen after you’ve rebased a local branch. Mypy supports using a *remote cache* to improve performance in cases such as the above. In a large codebase, remote caching can sometimes speed up mypy runs by a factor of 10, or more. 
Mypy doesn’t include all components needed to set this up – generally you will have to perform some simple integration with your Continuous Integration (CI) or build system to configure mypy to use a remote cache. This discussion assumes you have a CI system set up for the mypy build you want to speed up, and that you are using a central git repository. Generalizing to different environments should not be difficult. Here are the main components needed: * A shared repository for storing mypy cache files for all landed commits. * CI build that uploads mypy incremental cache files to the shared repository for each commit for which the CI build runs. * A wrapper script around mypy that developers use to run mypy with remote caching enabled. Below we discuss each of these components in some detail. ##### Shared repository for cache files[#](#shared-repository-for-cache-files) You need a repository that allows you to upload mypy cache files from your CI build and make the cache files available for download based on a commit id. A simple approach would be to produce an archive of the `.mypy_cache` directory (which contains the mypy cache data) as a downloadable *build artifact* from your CI build (depending on the capabilities of your CI system). Alternatively, you could upload the data to a web server or to S3, for example. ##### Continuous Integration build[#](#continuous-integration-build) The CI build would run a regular mypy build and create an archive containing the `.mypy_cache` directory produced by the build. Finally, it will produce the cache as a build artifact or upload it to a repository where it is accessible by the mypy wrapper script. Your CI script might work like this: * Run mypy normally. This will generate cache data under the `.mypy_cache` directory. * Create a tarball from the `.mypy_cache` directory. * Determine the current git master branch commit id (say, using `git rev-parse HEAD`). 
* Upload the tarball to the shared repository with a name derived from the commit id. ##### Mypy wrapper script[#](#mypy-wrapper-script) The wrapper script is used by developers to run mypy locally during development instead of invoking mypy directly. The wrapper first populates the local `.mypy_cache` directory from the shared repository and then runs a normal incremental build. The wrapper script needs some logic to determine the most recent central repository commit (by convention, the `origin/master` branch for git) the local development branch is based on. In a typical git setup you can do it like this: ``` git merge-base HEAD origin/master ``` The next step is to download the cache data (contents of the `.mypy_cache` directory) from the shared repository based on the commit id of the merge base produced by the git command above. The script will decompress the data so that mypy will start with a fresh `.mypy_cache`. Finally, the script runs mypy normally. And that’s all! ##### Caching with mypy daemon[#](#caching-with-mypy-daemon) You can also use remote caching with the [mypy daemon](index.html#mypy-daemon). The remote cache will significantly speed up the first `dmypy check` run after starting or restarting the daemon. The mypy daemon requires extra fine-grained dependency data in the cache files which aren’t included by default. To use caching with the mypy daemon, use the [`--cache-fine-grained`](index.html#cmdoption-mypy-cache-fine-grained) option in your CI build: ``` $ mypy --cache-fine-grained <args...> ``` This flag adds extra information for the daemon to the cache. In order to use this extra information, you will also need to use the `--use-fine-grained-cache` option with `dmypy start` or `dmypy restart`. Example: ``` $ dmypy start -- --use-fine-grained-cache <options...> ``` Now your first `dmypy check` run should be much faster, as it can use cache information to avoid processing the whole program. 
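The CI and wrapper-script flow described above can be sketched in Python. This is only an illustration of the steps, not part of mypy: the shared-repository URL (`CACHE_URL`), the artifact naming scheme (`mypy-cache-<commit>.tar.gz`), the `src/` target, and all helper names are hypothetical assumptions.

```python
import subprocess
import tarfile
import urllib.request

# Hypothetical base URL of the shared cache repository (an assumption).
CACHE_URL = "https://example.com/mypy-cache"


def merge_base() -> str:
    """Commit id the local branch is based on (origin/master by convention)."""
    result = subprocess.run(
        ["git", "merge-base", "HEAD", "origin/master"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


def cache_artifact_name(commit: str) -> str:
    """Artifact name derived from the commit id (naming scheme is made up)."""
    return f"mypy-cache-{commit}.tar.gz"


def run_wrapper() -> None:
    """Populate the local .mypy_cache from the shared repository, then run mypy."""
    name = cache_artifact_name(merge_base())
    # Download the tarball that the CI build uploaded for the merge-base commit.
    urllib.request.urlretrieve(f"{CACHE_URL}/{name}", name)
    # Unpack it so mypy starts with fresh incremental cache data.
    with tarfile.open(name) as tar:
        tar.extractall(".")
    # Finally, run a normal incremental build ("src/" is a placeholder target).
    subprocess.run(["mypy", "src/"], check=False)
```

A real script would add error handling, for example falling back to a plain incremental build when the cache download fails, as suggested under Refinements.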
##### Refinements[#](#refinements) There are several optional refinements that may improve things further, at least if your codebase is hundreds of thousands of lines or more: * If the wrapper script determines that the merge base hasn’t changed from a previous run, there’s no need to download the cache data and it’s better to instead reuse the existing local cache data. * If you use the mypy daemon, you may want to restart the daemon each time after the merge base or local branch has changed to avoid processing a potentially large number of changes in an incremental build, as this can be much slower than downloading cache data and restarting the daemon. * If the current local branch is based on a very recent master commit, the remote cache data may not yet be available for that commit, as there will necessarily be some latency to build the cache files. It may be a good idea to look for cache data for, say, the 5 latest master commits and use the most recent data that is available. * If the remote cache is not accessible for some reason (say, from a public network), the script can still fall back to a normal incremental build. * You can have multiple local cache directories for different local branches using the [`--cache-dir`](index.html#cmdoption-mypy-cache-dir) option. If the user switches to an existing branch where downloaded cache data is already available, you can continue to use the existing cache data instead of redownloading the data. * You can set up your CI build to use a remote cache to speed up the CI build. This would be particularly useful if each CI build starts from a fresh state without access to cache files from previous builds. It’s still recommended to run a full, non-incremental mypy build to create the cache data, as repeatedly updating cache data incrementally could result in drift over a long time period (due to a mypy caching issue, perhaps). #### Extended Callable types[#](#extended-callable-types) Note This feature is deprecated. 
You can use [callback protocols](index.html#callback-protocols) as a replacement. As an experimental mypy extension, you can specify [`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable) types that support keyword arguments, optional arguments, and more. When you specify the arguments of a [`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable), you can choose to supply just the type of a nameless positional argument, or an “argument specifier” representing a more complicated form of argument. This allows one to more closely emulate the full range of possibilities given by the `def` statement in Python. As an example, here’s a complicated function definition and the corresponding [`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable): ``` from typing import Callable from mypy_extensions import (Arg, DefaultArg, NamedArg, DefaultNamedArg, VarArg, KwArg) def func(__a: int, # This convention is for nameless arguments b: int, c: int = 0, *args: int, d: int, e: int = 0, **kwargs: int) -> int: ... F = Callable[[int, # Or Arg(int) Arg(int, 'b'), DefaultArg(int, 'c'), VarArg(int), NamedArg(int, 'd'), DefaultNamedArg(int, 'e'), KwArg(int)], int] f: F = func ``` Argument specifiers are special function calls that can specify the following aspects of an argument: * its type (the only thing that the basic format supports) * its name (if it has one) * whether it may be omitted * whether it may or must be passed using a keyword * whether it is a `*args` argument (representing the remaining positional arguments) * whether it is a `**kwargs` argument (representing the remaining keyword arguments) The following functions are available in `mypy_extensions` for this purpose: ``` def Arg(type=Any, name=None): # A normal, mandatory, positional argument. # If the name is specified it may be passed as a keyword. def DefaultArg(type=Any, name=None): # An optional positional argument (i.e. with a default value). 
# If the name is specified it may be passed as a keyword. def NamedArg(type=Any, name=None): # A mandatory keyword-only argument. def DefaultNamedArg(type=Any, name=None): # An optional keyword-only argument (i.e. with a default value). def VarArg(type=Any): # A *args-style variadic positional argument. # A single VarArg() specifier represents all remaining # positional arguments. def KwArg(type=Any): # A **kwargs-style variadic keyword argument. # A single KwArg() specifier represents all remaining # keyword arguments. ``` In all cases, the `type` argument defaults to `Any`, and if the `name` argument is omitted the argument has no name (the name is required for `NamedArg` and `DefaultNamedArg`). A basic [`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable) such as ``` MyFunc = Callable[[int, str, int], float] ``` is equivalent to the following: ``` MyFunc = Callable[[Arg(int), Arg(str), Arg(int)], float] ``` A [`Callable`](https://docs.python.org/3/library/typing.html#typing.Callable) with unspecified argument types, such as ``` MyOtherFunc = Callable[..., int] ``` is (roughly) equivalent to ``` MyOtherFunc = Callable[[VarArg(), KwArg()], int] ``` Note Each of the functions above currently just returns its `type` argument at runtime, so the information contained in the argument specifiers is not available at runtime. This limitation is necessary for backwards compatibility with the existing `typing.py` module as present in the Python 3.5+ standard library and distributed via PyPI. ### Frequently Asked Questions[#](#frequently-asked-questions) #### Why have both dynamic and static typing?[#](#why-have-both-dynamic-and-static-typing) Dynamic typing can be flexible, powerful, convenient and easy. But it’s not always the best approach; there are good reasons why many developers choose to use statically typed languages or static typing for Python. 
Here are some potential benefits of mypy-style static typing: * Static typing can make programs easier to understand and maintain. Type declarations can serve as machine-checked documentation. This is important as code is typically read much more often than modified, and this is especially important for large and complex programs. * Static typing can help you find bugs earlier and with less testing and debugging. Especially in large and complex projects this can be a major time-saver. * Static typing can help you find difficult-to-find bugs before your code goes into production. This can improve reliability and reduce the number of security issues. * Static typing makes it practical to build very useful development tools that can improve programming productivity or software quality, including IDEs with precise and reliable code completion, static analysis tools, etc. * You can get the benefits of both dynamic and static typing in a single language. Dynamic typing can be perfect for a small project or for writing the UI of your program, for example. As your program grows, you can adapt tricky application logic to static typing to help maintenance. See also the [front page](https://www.mypy-lang.org) of the mypy web site. #### Would my project benefit from static typing?[#](#would-my-project-benefit-from-static-typing) For many projects dynamic typing is perfectly fine (we think that Python is a great language). But sometimes your projects demand bigger guns, and that’s when mypy may come in handy. If some of these ring true for your projects, mypy (and static typing) may be useful: * Your project is large or complex. * Your codebase must be maintained for a long time. * Multiple developers are working on the same code. * Running tests takes a lot of time or work (type checking helps you find errors quickly early in development, reducing the number of testing iterations). 
* Some project members (devs or management) don’t like dynamic typing, but others prefer dynamic typing and Python syntax. Mypy could be a solution that everybody finds easy to accept. * You want to future-proof your project even if currently none of the above really apply. The earlier you start, the easier it will be to adopt static typing. #### Can I use mypy to type check my existing Python code?[#](#can-i-use-mypy-to-type-check-my-existing-python-code) Mypy supports most Python features and idioms, and many large Python projects are using mypy successfully. Code that uses complex introspection or metaprogramming may be impractical to type check, but it should still be possible to use static typing in other parts of a codebase that are less dynamic. #### Will static typing make my programs run faster?[#](#will-static-typing-make-my-programs-run-faster) Mypy only does static type checking and it does not improve performance. It has a minimal performance impact. In the future, there could be other tools that can compile statically typed mypy code to C modules or to efficient JVM bytecode, for example, but this is outside the scope of the mypy project. #### Is mypy free?[#](#is-mypy-free) Yes. Mypy is free software, and it can also be used for commercial and proprietary projects. Mypy is available under the MIT license. #### Can I use duck typing with mypy?[#](#can-i-use-duck-typing-with-mypy) Mypy provides support for both [nominal subtyping](https://en.wikipedia.org/wiki/Nominative_type_system) and [structural subtyping](https://en.wikipedia.org/wiki/Structural_type_system). Structural subtyping can be thought of as “static duck typing”. Some argue that structural subtyping is better suited for languages with duck typing such as Python. 
Mypy however primarily uses nominal subtyping, leaving structural subtyping mostly opt-in (except for built-in protocols such as [`Iterable`](https://docs.python.org/3/library/typing.html#typing.Iterable) that always support structural subtyping). Here are some reasons why: 1. It is easy to generate short and informative error messages when using a nominal type system. This is especially important when using type inference. 2. Python provides built-in support for nominal [`isinstance()`](https://docs.python.org/3/library/functions.html#isinstance) tests and they are widely used in programs. Only limited support for structural [`isinstance()`](https://docs.python.org/3/library/functions.html#isinstance) is available, and it’s less type safe than nominal type tests. 3. Many programmers are already familiar with static, nominal subtyping and it has been successfully used in languages such as Java, C++ and C#. Fewer languages use structural subtyping. However, structural subtyping can also be useful. For example, a “public API” may be more flexible if it is typed with protocols. Also, using protocol types removes the necessity to explicitly declare implementations of ABCs. As a rule of thumb, we recommend using nominal classes where possible, and protocols where necessary. For more details about protocol types and structural subtyping see [Protocols and structural subtyping](index.html#protocol-types) and [**PEP 544**](https://peps.python.org/pep-0544/). #### I like Python and I have no need for static typing[#](#i-like-python-and-i-have-no-need-for-static-typing) The aim of mypy is not to convince everybody to write statically typed Python – static typing is entirely optional, now and in the future. The goal is to give more options for Python programmers, to make Python a more competitive alternative to other statically typed languages in large projects, to improve programmer productivity, and to improve software quality. 
#### How are mypy programs different from normal Python?[#](#how-are-mypy-programs-different-from-normal-python) Since you use a vanilla Python implementation to run mypy programs, mypy programs are also Python programs. The type checker may give warnings for some valid Python code, but the code is still always runnable. Also, some Python features and syntax are still not supported by mypy, but this is gradually improving. The obvious difference is the availability of static type checking. The section [Common issues and solutions](index.html#common-issues) mentions some modifications to Python code that may be required to make code type check without errors. Also, your code must make attributes explicit. Mypy supports modular, efficient type checking, and this seems to rule out type checking some language features, such as arbitrary monkey patching of methods. #### How is mypy different from Cython?[#](#how-is-mypy-different-from-cython) [Cython](https://docs.cython.org/en/latest/index.html) is a variant of Python that supports compilation to CPython C modules. It can give major speedups to certain classes of programs compared to CPython, and it provides static typing (though this is different from mypy). Mypy differs in the following aspects, among others: * Cython is much more focused on performance than mypy. Mypy is only about static type checking, and increasing performance is not a direct goal. * The mypy syntax is arguably simpler and more “Pythonic” (no cdef/cpdef, etc.) for statically typed code. * The mypy syntax is compatible with Python. Mypy programs are normal Python programs that can be run using any Python implementation. Cython has many incompatible extensions to Python syntax, and Cython programs generally cannot be run without first compiling them to CPython extension modules via C. Cython also has a pure Python mode, but it seems to support only a subset of Cython functionality, and the syntax is quite verbose. 
* Mypy has a different set of type system features. For example, mypy has genericity (parametric polymorphism), function types and bidirectional type inference, which are not supported by Cython. (Cython has fused types that are different but related to mypy generics. Mypy also has a similar feature as an extension of generics.) * The mypy type checker knows about the static types of many Python stdlib modules and can effectively type check code that uses them. * Cython supports accessing C functions directly and many features are defined in terms of translating them to C or C++. Mypy just uses Python semantics, and mypy does not deal with accessing C library functionality. #### Does it run on PyPy?[#](#does-it-run-on-pypy) Somewhat. With PyPy 3.8, mypy is at least able to type check itself. With older versions of PyPy, mypy relies on [typed-ast](https://github.com/python/typed_ast), which uses several APIs that PyPy does not support (including some internal CPython APIs). #### Mypy is a cool project. Can I help?[#](#mypy-is-a-cool-project-can-i-help) Any help is much appreciated! [Contact](https://www.mypy-lang.org/contact.html) the developers if you would like to contribute. Any help related to development, design, publicity, documentation, testing, web site maintenance, financing, etc. can be helpful. You can learn a lot by contributing, and anybody can help, even beginners! However, some knowledge of compilers and/or type systems is essential if you want to work on mypy internals. 
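To illustrate the genericity and bidirectional type inference mentioned in the comparison above, here is a small sketch (the `first` function is an illustrative example, not mypy API):

```python
from typing import TypeVar

T = TypeVar("T")

def first(items: list[T]) -> T:
    # mypy infers T separately at each call site:
    # first([1, 2]) has type int, first(["a", "b"]) has type str.
    return items[0]

x = first([1, 2, 3])    # inferred type: int
y = first(["a", "b"])   # inferred type: str
```

Cython's fused types serve a related purpose (one definition specialized for several concrete types), but they are resolved at compile time for performance rather than checked as part of a general static type system.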
String2AdjMatrix (CRAN, R)
Package ‘String2AdjMatrix’

October 12, 2022

Type Package
Title Creates an Adjacency Matrix from a List of Strings
Version 0.1.0
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Takes a list of character strings and forms an adjacency matrix counting the times the specified characters appear together in the strings provided. For use in social network analysis and data wrangling. Simple package, comprised of three functions.
License GPL-3
Encoding UTF-8
Depends stringr
LazyData true
RoxygenNote 6.0.1
NeedsCompilation no
Repository CRAN
Date/Publication 2018-01-30 10:24:44 UTC

R topics documented: generate_adj_matrix, string_2_matrix

generate_adj_matrix

Description

Generates a blank adjacency matrix from a specified string.

Usage

generate_adj_matrix(string_data, data_separator = ",", remove_spaces = F)

Arguments

string_data The string from which the unique values and matrix will be generated.
data_separator The character separating the specified substrings in the given string. Default is ‘,‘.
remove_spaces Removes spaces from the header values (thus disrupting the search unless all spaces are also removed from the given string in later steps). This is useful for separating strings with an irregular number of spaces between the same substrings.

Details

Generates an adjacency matrix from a given string. Detects unique values and generates a blank matrix with colnames and rownames of each unique value in the supplied string. Data must be provided as a character string.
Author(s)

<NAME>

Examples

library(String2AdjMatrix)

# Start with a character string to generate an adjacency matrix from
string_in = c('apples, pears, bananas', 'apples, bananas', 'apples, pears')

# Generate a new blank matrix
blank_matrix = generate_adj_matrix(string_in)

# Now fill the matrix
string_2_matrix(blank_matrix, string_in)

string_2_matrix

Description

Creates an adjacency matrix.

Usage

string_2_matrix(new_matrix, supplied_string, self = 0)

Arguments

new_matrix Either the matrix generated by generate_adj_matrix() or an empty data matrix with an equal number of rows and columns, with the unique values specified as the row names and column names.
supplied_string The string in which the search is to be performed, e.g. list = c('apples, pears, bananas', 'apples, bananas', 'apples, pears').
self Specifies how to handle the case where the specified object is found alone within a string. Default is 0, i.e. the adjacency matrix does not count a substring found on its own, only when it is found in combination with another unique substring.

Value

An adjacency matrix.

Note

Generating large matrices is computationally intensive and may take a while.

Author(s)

<NAME>

Examples

library(String2AdjMatrix)

# Start with a character string to generate an adjacency matrix from
string_in = c('apples, pears, bananas', 'apples, bananas', 'apples, pears')

# Generate a new blank matrix
blank_matrix = generate_adj_matrix(string_in)

# Now fill the matrix
string_2_matrix(blank_matrix, string_in)
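For readers who want to verify the expected counts, the package's two-step workflow (generate a blank matrix, then fill it with co-occurrence counts) can be mirrored in Python. The `co_occurrence_matrix` function below is an illustrative re-implementation of the same logic with `self = 0`, not part of the CRAN package:

```python
from itertools import combinations

def co_occurrence_matrix(strings, sep=","):
    """Build a co-occurrence (adjacency) count matrix from delimited strings.

    Mirrors the documented behaviour with self = 0: a term is counted only
    when it appears together with a *different* term in the same string.
    """
    # Split each string into its trimmed substrings
    rows = [[part.strip() for part in line.split(sep)] for line in strings]
    # Unique terms become the row/column names, in sorted order
    terms = sorted({t for row in rows for t in row})
    index = {t: i for i, t in enumerate(terms)}
    matrix = [[0] * len(terms) for _ in terms]
    for row in rows:
        # Count each unordered pair of distinct terms once per string
        for a, b in combinations(sorted(set(row)), 2):
            matrix[index[a]][index[b]] += 1
            matrix[index[b]][index[a]] += 1
    return terms, matrix

terms, m = co_occurrence_matrix(
    ["apples, pears, bananas", "apples, bananas", "apples, pears"]
)
# apples co-occurs with bananas in two strings and with pears in two;
# pears and bananas co-occur in one string; the diagonal stays 0.
```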
@syncfusion/ej2-inputs (npm, JavaScript)
[JavaScript Inputs Controls](#javascript-inputs-controls) === A package of JavaScript Inputs controls. It comes with a collection of form components that are useful for getting different kinds of input from users, such as text, numbers, patterns, colors, and files. [What's Included in the JavaScript Inputs Package](#whats-included-in-the-javascript-inputs-package) --- The JavaScript Inputs package includes the following list of components. ### [JavaScript ColorPicker](#javascript-colorpicker) The [JavaScript ColorPicker](https://www.syncfusion.com/javascript-ui-controls/js-color-picker?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm) control is a user interface that is used to select and adjust color values. [Getting Started](https://ej2.syncfusion.com/documentation/color-picker/getting-started/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm) . [Online demos](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm#/material/color-picker/default.html) . [Learn more](https://www.syncfusion.com/javascript-ui-controls/js-color-picker?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm) #### [Key features](#key-features) * [Color specification](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=color-picker#/material/color-picker/default.html) - Supports `Red Green Blue`, `Hue Saturation Value` and `Hex` codes. * [Mode](https://ej2.syncfusion.com/documentation/color-picker/mode-and-value#mode-and-value) - Supports `Picker` and `Palette` modes. * [Inline](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=color-picker#/material/color-picker/inline.html) - Supports inline rendering of the color picker. * [Custom palettes](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=color-picker#/material/color-picker/custom.html) - Lets you customize palettes and supports rendering multiple palette groups. 
* [Opacity](https://ej2.syncfusion.com/documentation/color-picker/mode-and-value#color-value) - Lets you set and change the `opacity` of the selected color. * [Accessibility](https://ej2.syncfusion.com/documentation/color-picker/accessibility#accessibility) - Built-in accessibility features to access the color picker using the keyboard, screen readers, or other assistive technology devices. ### [JavaScript Form Validator](#javascript-form-validator) The [JavaScript Form Validator](https://www.syncfusion.com/javascript-ui-controls/js-form-validation?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm) control is used to validate form elements before they are submitted to the server. [Getting Started](https://ej2.syncfusion.com/documentation/form-validator/validation-rules/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm) . [Online demos](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm#/material/form-validator/default.html) . [Learn more](https://www.syncfusion.com/javascript-ui-controls/js-form-validation?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm) ### [JavaScript TextBox](#javascript-textbox) The [JavaScript TextBox](https://www.syncfusion.com/javascript-ui-controls/js-textbox?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm) control is an extended version of the HTML input control that is used to edit or display text input on a form. [Getting Started](https://ej2.syncfusion.com/documentation/textbox/getting-started/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm) . [Online demos](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm#/material/textbox/default.html) . 
[Learn more](https://www.syncfusion.com/javascript-ui-controls/js-textbox?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm)

#### [Key features](#key-features-1)

* [Floating label](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=textbox#/material/textbox/default.html) - Floats the placeholder text on focus.
* [Input group](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=textbox#/material/textbox/default.html) - Groups icons and buttons with the textbox.
* [Validation states](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=textbox#/material/textbox/default.html) - Provides styles for success, error, and warning states.
* [Multiline](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=textbox#/material/textbox/default.html) - Handles multiline input with placeholder text.

### [JavaScript Masked TextBox](#javascript-masked-textbox)

The [JavaScript Masked TextBox](https://www.syncfusion.com/javascript-ui-controls/js-input-mask?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm) control allows the user to enter only valid input, based on the provided mask.

[Getting Started](https://ej2.syncfusion.com/documentation/maskedtextbox/getting-started/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm) . [Online demos](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm#/material/maskedtextbox/default.html) . [Learn more](https://www.syncfusion.com/javascript-ui-controls/js-input-mask?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm)

#### [Key features](#key-features-2)

* [Custom characters](https://ej2.syncfusion.com/demos/?lang=typescript&utm_source=npm&utm_campaign=maskedtextbox/#/material/maskedtextbox/custom-mask.html) - Allows you to use your own characters as the mask elements.
* [Regular expression](https://ej2.syncfusion.com/documentation/maskedtextbox/mask-configuration#regular-expression?lang=typescript&utm_source=npm&utm_campaign=maskedtextbox#regular-expression) - Can be used as a mask element for each character of the MaskedTextBox.
* [Accessibility](https://ej2.syncfusion.com/documentation/maskedtextbox/accessibility?lang=typescript&utm_source=npm&utm_campaign=maskedtextbox) - Provides built-in accessibility support that helps access all MaskedTextBox features through the keyboard, screen readers, or other assistive technology devices.

### [JavaScript Numeric TextBox](#javascript-numeric-textbox)

The [JavaScript Numeric TextBox](https://www.syncfusion.com/javascript-ui-controls/js-numeric-textbox?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm) control is used to get number input from the user. The input values can be incremented or decremented by a predefined step value.

[Getting Started](https://ej2.syncfusion.com/documentation/numerictextbox/getting-started/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm) . [Online demos](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm#/material/numerictextbox/default.html) . [Learn more](https://www.syncfusion.com/javascript-ui-controls/js-numeric-textbox?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm)

#### [Key features](#key-features-3)

* [Range validation](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=numerictextbox/#/material/numerictextbox/range-validation.html) - Allows setting the minimum and maximum range of values in the NumericTextBox.
* [Number formats](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=numerictextbox/#/material/numerictextbox/custom-format.html) - Supports number display formatting with MSDN standard and custom number formats.
* [Precision of numbers](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=numerictextbox/#/material/numerictextbox/restrict-decimals.html) - Allows restricting the number precision as the user enters a value.
* [Keyboard interaction](https://ej2.syncfusion.com/documentation/numerictextbox/accessibility#keyboard-interaction/?lang=typescript&utm_source=npm&utm_campaign=numerictextbox#keyboard-interaction) - Allows users to interact with the NumericTextBox using the keyboard.
* [Accessibility](https://ej2.syncfusion.com/documentation/numerictextbox/accessibility/?lang=typescript&utm_source=npm&utm_campaign=numerictextbox) - Provides built-in accessibility support that helps access all NumericTextBox features through the keyboard, screen readers, or other assistive technology devices.
* [Internationalization](https://ej2.syncfusion.com/documentation/numerictextbox/globalization#internationalization/?lang=typescript&utm_source=npm&utm_campaign=numerictextbox) - Provides support for formatting and parsing numbers using the official Unicode CLDR JSON data.
* [Localization](https://ej2.syncfusion.com/documentation/numerictextbox/globalization#internationalization/?lang=typescript&utm_source=npm&utm_campaign=numerictextbox#localization) - Supports localizing the spin up/down button tooltips for different cultures.

### [JavaScript Signature](#javascript-signature)

The [JavaScript Signature](https://www.syncfusion.com/javascript-ui-controls/js-signature?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm) control allows users to draw smooth signatures as a vector outline of strokes using variable-width Bezier curve interpolation. The signature can be saved as an image.

[Getting Started](https://ej2.syncfusion.com/documentation/signature/getting-started/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm) .
[Online demos](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm#/material/signature/default.html) . [Learn more](https://www.syncfusion.com/javascript-ui-controls/js-signature?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm)

#### [Key features](#key-features-4)

* [Customization](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=slider#/material/signature/default.html) - Supports various customization options such as background color, background image, stroke color, stroke width, save with background, undo, redo, clear, readonly, and disabled.
* [Save](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=slider#/material/signature/default.html) - Supports saving the signature as an image in PNG, JPEG, or SVG format.
* [Load](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=slider#/material/signature/default.html) - Supports loading a signature from a base64 URL of an image.
* [Draw](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=slider#/material/signature/default.html) - Supports drawing text with different font families and font sizes.

### [JavaScript Slider](#javascript-slider)

The [JavaScript Slider](https://www.syncfusion.com/javascript-ui-controls/js-range-slider?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm) control allows you to select a value or range of values between the min and max values.

[Getting Started](https://ej2.syncfusion.com/documentation/range-slider/getting-started/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm) . [Online demos](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm#/material/slider/default.html) .
[Learn more](https://www.syncfusion.com/javascript-ui-controls/js-range-slider?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm)

#### [Key features](#key-features-5)

* [Types](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=slider#/material/slider/default.html) - Provides three types of Slider.
* [Orientation](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=slider#/material/slider/orientation.html) - Displays the Slider in a horizontal or vertical orientation.
* [Buttons](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=slider#/material/slider/tooltip.html) - Provides built-in support to render buttons at both edges of the Slider.
* [Tooltip](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=slider#/material/slider/tooltip.html) - Displays a tooltip to show the currently selected value.
* [Ticks](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=slider#/material/slider/ticks.html) - Displays a scale with small and big ticks.
* [Format](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=slider#/material/slider/format.html) - Customizes the slider values into various formats.
* [Limits](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=slider#/material/slider/limits.html) - Restricts slider thumb movement, with interval dragging in the range slider.
* [Accessibility](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=slider#/material/slider/default.html) - Built-in compliance with the [WAI-ARIA](http://www.w3.org/WAI/PF/aria-practices/) specifications.
* [Keyboard interaction](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=slider#/material/slider/api.html) - The Slider can be interacted with through the keyboard.
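As a rough illustration of what the Ticks and Limits features above imply for slider values, the sketch below snaps an arbitrary value to the nearest step and clamps it into the allowed range. This is plain JavaScript, not the Syncfusion API; the function name and parameters are hypothetical.

```javascript
// Snap a raw value to the nearest tick/step, then clamp it into [min, max].
// Conceptual sketch only -- the Slider control does this internally.
function snapToStep(value, min, max, step) {
  const snapped = min + Math.round((value - min) / step) * step;
  return Math.min(Math.max(snapped, min), max);
}

console.log(snapToStep(37, 0, 100, 10)); // 40
console.log(snapToStep(137, 0, 100, 10)); // clamped to 100
```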
### [JavaScript File Upload](#javascript-file-upload)

The [JavaScript File Upload](https://www.syncfusion.com/javascript-ui-controls/js-file-upload?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm) control is an extended version of the HTML5 upload control which is used to upload images, documents, and other files to a server.

[Getting Started](https://ej2.syncfusion.com/documentation/uploader/getting-started/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm) . [Online demos](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm#/material/uploader/default.html) . [Learn more](https://www.syncfusion.com/javascript-ui-controls/js-file-upload?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm)

#### [Key features](#key-features-6)

* [Chunk upload](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-file-upload-npm#/material/uploader/chunk-upload.html) - Uploads large files in chunks.
* [Drag and drop](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-file-upload-npm/#/material/uploader/custom-drop-area.html) - Drag files and drop them into the component to upload.
* [Template](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-file-upload-npm/#/material/uploader/custom-file-list.html) - The file list and buttons can be customized using templates.
* [Validation](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-file-upload-npm/#/material/uploader/file-validation.html) - Validates the extension and size of uploaded files.
* [Auto upload](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-file-upload-npm#/material/uploader/default.html) - Processes the file upload without user interaction.
* [Preload files](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_medium=listing&utm_campaign=javascript-file-upload-npm/#/material/uploader/preload-files.html) - View and manipulate previously uploaded files.

[Setup](#setup)
---

To install `inputs` and its dependent packages, use the following command:

```
npm install @syncfusion/ej2-inputs
```

[Supported frameworks](#supported-frameworks)
---

Input controls are also offered in the following frameworks.

| [Angular](https://www.syncfusion.com/angular-ui-components?utm_medium=listing&utm_source=github) | [React](https://www.syncfusion.com/react-ui-components?utm_medium=listing&utm_source=github) | [Vue](https://www.syncfusion.com/vue-ui-components?utm_medium=listing&utm_source=github) | [ASP.NET Core](https://www.syncfusion.com/aspnet-core-ui-controls?utm_medium=listing&utm_source=github) | [ASP.NET MVC](https://www.syncfusion.com/aspnet-mvc-ui-controls?utm_medium=listing&utm_source=github) |
| --- | --- | --- | --- | --- |

[Showcase samples](#showcase-samples)
---

* Expense Tracker - [Source](https://github.com/syncfusion/ej2-sample-ts-expensetracker), [Live Demo](https://ej2.syncfusion.com/showcase/typescript/expensetracker/?utm_source=npm&utm_campaign=numerictextbox#/expense)
* Loan Calculator - [Source](https://github.com/syncfusion/ej2-sample-ts-loancalculator), [Live Demo](https://ej2.syncfusion.com/showcase/typescript/loancalculator/?utm_source=npm&utm_campaign=slider)
* Cloud Pricing - [Live Demo](https://ej2.syncfusion.com/demos/?utm_source=npm&utm_campaign=slider#/material/slider/azure-pricing.html)

[Support](#support)
---

Product support is available through the following mediums.
* [Support ticket](https://support.syncfusion.com/support/tickets/create) - Guaranteed response in 24 hours | Unlimited tickets | Holiday support
* [Community forum](https://www.syncfusion.com/forums/essential-js2?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm)
* [GitHub issues](https://github.com/syncfusion/ej2-javascript-ui-controls/issues/new)
* [Request feature or report bug](https://www.syncfusion.com/feedback/javascript?utm_source=npm&utm_medium=listing&utm_campaign=javascript-inputs-npm)
* Live chat

[Changelog](#changelog)
---

Check the changelog [here](https://github.com/syncfusion/ej2-javascript-ui-controls/blob/master/controls/inputs/CHANGELOG.md/?utm_source=npm&utm_campaign=input). Get minor improvements and bug fixes every week to stay up to date with frequent updates.

[License and copyright](#license-and-copyright)
---

> This is a commercial product and requires a paid license for possession or use. Syncfusion’s licensed software, including this component, is subject to the terms and conditions of Syncfusion's [EULA](https://www.syncfusion.com/eula/es/). To acquire a license for 80+ [JavaScript UI controls](https://www.syncfusion.com/javascript-ui-controls), you can [purchase](https://www.syncfusion.com/sales/products) or [start a free 30-day trial](https://www.syncfusion.com/account/manage-trials/start-trials).

> A [free community license](https://www.syncfusion.com/products/communitylicense) is also available for companies and individuals whose organizations have less than $1 million USD in annual gross revenue and five or fewer developers.

See [LICENSE FILE](https://github.com/syncfusion/ej2-javascript-ui-controls/blob/master/license/?utm_source=npm&utm_campaign=input) for more info.

© Copyright 2023 Syncfusion, Inc. All Rights Reserved. The Syncfusion Essential Studio license and copyright applies to this distribution.
Readme --- ### Keywords * ej2 * syncfusion * web-components * ej2-inputs * input box * textbox * html5 textbox * floating input * floating label * form controls * input controls * color * color picker * colorpicker * picker * palette * hsv colorpicker * alpha colorpicker * color palette * custom palette * ej2 colorpicker * color chooser * validator * form * form validator * masked textbox * masked input * input mask * date mask * mask format * numeric textbox * percent textbox * percentage textbox * currency textbox * numeric spinner * numeric up-down * number input * slider * range slider * minrange * slider limits * localization slider * format slider * slider with tooltip * vertical slider * mobile slider * upload * upload-box * input-file * floating-label * chunk-upload
opencpu (CRAN, R)
Package ‘opencpu’
August 7, 2023

Title Producing and Reproducing Results
Version 2.2.11
License Apache License 2.0
Encoding UTF-8
URL https://www.opencpu.org https://opencpu.r-universe.dev/opencpu
BugReports https://github.com/opencpu/opencpu/issues
Depends R (>= 3.0.0)
Imports evaluate (>= 0.12), httpuv (>= 1.3), knitr (>= 1.6), jsonlite (>= 1.4), remotes (>= 2.0.2), sys (>= 2.1), webutils (>= 0.6), curl (>= 4.0), rappdirs, rlang, vctrs, methods, zip, mime, protolite, brew, openssl
Suggests arrow, unix (>= 1.5.3), haven, pander, R.rsp, svglite
SystemRequirements pandoc, apparmor (optional)
VignetteBuilder knitr, R.rsp
Description A system for embedded scientific computing and reproducible research with R. The OpenCPU server exposes a simple but powerful HTTP API for RPC and data interchange with R. This provides a reliable and scalable foundation for statistical services or building R web applications. The OpenCPU server runs either as a single-user development server within the interactive R session, or as a multi-user Linux stack based on Apache2. The entire system is fully open source and permissively licensed. The OpenCPU website has detailed documentation and example apps.
LazyData yes
RoxygenNote 7.1.1
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-4035-0289>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-08-07 12:50:02 UTC

R topics documented:
apps
ocpu-server

apps  OpenCPU Application

Description
Manage installed OpenCPU applications. These applications can be started locally using ocpu_start_app or deployed online on ocpu.io.

Usage
install_apps(repo, ...)
remove_apps(repo)
installed_apps()
available_apps()
update_apps(...)

Arguments
repo  a github repository such as user/repo, see install_github.
...   additional options for install_github

Details
OpenCPU apps are simply R packages. For regular users, apps get installed in a user-specific app library which is persistent between R sessions.
This is used for locally running or developing web applications. When running these functions as the opencpu user on an OpenCPU cloud server, apps will be installed in the global opencpu server app library; the same library as used by the OpenCPU GitHub webhook.

See Also
Other ocpu: ocpu-server

Examples
## Not run:
# List available demo apps
available_apps()

# Run application from: https://github.com/rwebapps/nabel
ocpu_start_app("rwebapps/nabel")

# Run application from: https://github.com/rwebapps/markdownapp
ocpu_start_app("rwebapps/markdownapp")

# Run application from: https://github.com/rwebapps/stockapp
ocpu_start_app("rwebapps/stockapp")

# Run application from: https://github.com/rwebapps/appdemo
ocpu_start_app("rwebapps/appdemo")

# Show currently installed apps
installed_apps()
## End(Not run)

ocpu-server  OpenCPU Single-User Server

Description
Starts the OpenCPU single-user server for developing and running apps locally. To deploy your apps on a cloud server or ocpu.io, simply push them to github and install the opencpu webhook. Some example apps are available from github::rwebapps/.

Usage
ocpu_start_server(
  port = 5656,
  root = "/ocpu",
  workers = 2,
  preload = NULL,
  on_startup = NULL,
  no_cache = FALSE
)

ocpu_start_app(app, update = TRUE, ...)

Arguments
port  port number
root  base of the URL where to host the OpenCPU API
workers  number of worker processes
preload  character vector of packages to preload in the workers. This speeds up requests to those packages.
on_startup  function to call once the server has started (e.g. utils::browseURL)
no_cache  sets Cache-Control: no-cache for all responses to disable browser caching. Useful for development when files change frequently. You might still need to manually flush the browser cache for resources cached previously. Try pressing CTRL+R or go incognito if your browser is showing old content.
app  either the name of a locally installed package, or a github remote (see install_apps)
update  checks if the app is up-to-date (if possible) before running
...  extra parameters passed to ocpu_start_server

See Also
Other ocpu: apps

Examples
## Not run:
# List available demo apps
available_apps()

# Run application from: https://github.com/rwebapps/nabel
ocpu_start_app("rwebapps/nabel")

# Run application from: https://github.com/rwebapps/markdownapp
ocpu_start_app("rwebapps/markdownapp")

# Run application from: https://github.com/rwebapps/stockapp
ocpu_start_app("rwebapps/stockapp")

# Run application from: https://github.com/rwebapps/appdemo
ocpu_start_app("rwebapps/appdemo")

# Show currently installed apps
installed_apps()
## End(Not run)
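The package Description mentions the HTTP API for RPC and data interchange. As a hedged sketch of what a call looks like once `ocpu_start_server()` is running locally on the default port (the endpoint follows OpenCPU's documented `/ocpu/library/{package}/R/{function}` pattern; this is a usage fragment that requires a live server, so no output is shown):

```shell
# Call stats::rnorm through the OpenCPU HTTP API and get the result as JSON.
# Assumes an OpenCPU single-user server is running on port 5656.
curl http://localhost:5656/ocpu/library/stats/R/rnorm/json --data n=3
```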
@aws-sdk/client-dynamodb (npm, JavaScript)
[@aws-sdk/client-dynamodb](#aws-sdkclient-dynamodb)
===

[Description](#description)
---

AWS SDK for JavaScript DynamoDB Client for Node.js, Browser and React Native.

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.

With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of request traffic. You can scale up or scale down your tables' throughput capacity without downtime or performance degradation, and use the Amazon Web Services Management Console to monitor resource utilization and performance metrics.

DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid state disks (SSDs) and automatically replicated across multiple Availability Zones in an Amazon Web Services Region, providing built-in high availability and data durability.

[Installing](#installing)
---

To install this package, simply add or install @aws-sdk/client-dynamodb using your favorite package manager:

* `npm install @aws-sdk/client-dynamodb`
* `yarn add @aws-sdk/client-dynamodb`
* `pnpm add @aws-sdk/client-dynamodb`

[Getting Started](#getting-started)
---

### [Import](#import)

The AWS SDK is modularized by clients and commands.
To send a request, you only need to import the `DynamoDBClient` and the commands you need, for example `ListBackupsCommand`:

```
// ES5 example
const { DynamoDBClient, ListBackupsCommand } = require("@aws-sdk/client-dynamodb");
```

```
// ES6+ example
import { DynamoDBClient, ListBackupsCommand } from "@aws-sdk/client-dynamodb";
```

### [Usage](#usage)

To send a request, you:

* Initiate the client with configuration (e.g. credentials, region).
* Initiate the command with input parameters.
* Call the `send` operation on the client with the command object as input.
* If you are using a custom http handler, you may call `destroy()` to close open connections.

```
// a client can be shared by different commands.
const client = new DynamoDBClient({ region: "REGION" });

const params = { /** input parameters */ };
const command = new ListBackupsCommand(params);
```

#### [Async/await](#asyncawait)

We recommend using the [await](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/await) operator to wait for the promise returned by the send operation as follows:

```
// async/await.
try {
  const data = await client.send(command);
  // process data.
} catch (error) {
  // error handling.
} finally {
  // finally.
}
```

Async/await is clean, concise, intuitive, easy to debug, and has better error handling compared to Promise chains or callbacks.

#### [Promises](#promises)

You can also use [Promise chaining](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Using_promises#chaining) to execute the send operation.

```
client.send(command).then(
  (data) => {
    // process data.
  },
  (error) => {
    // error handling.
  }
);
```

Promises can also be called using `.catch()` and `.finally()` as follows:

```
client
  .send(command)
  .then((data) => {
    // process data.
  })
  .catch((error) => {
    // error handling.
  })
  .finally(() => {
    // finally.
  });
```

#### [Callbacks](#callbacks)

We do not recommend using callbacks because of [callback hell](http://callbackhell.com/), but they are supported by the send operation.

```
// callbacks.
client.send(command, (err, data) => {
  // process err and data.
});
```

#### [v2 compatible style](#v2-compatible-style)

The client can also send requests using the v2 compatible style. However, it results in a bigger bundle size and may be dropped in the next major version. More details in the blog post on [modular packages in AWS SDK for JavaScript](https://aws.amazon.com/blogs/developer/modular-packages-in-aws-sdk-for-javascript/)

```
import * as AWS from "@aws-sdk/client-dynamodb";
const client = new AWS.DynamoDB({ region: "REGION" });

// async/await.
try {
  const data = await client.listBackups(params);
  // process data.
} catch (error) {
  // error handling.
}

// Promises.
client
  .listBackups(params)
  .then((data) => {
    // process data.
  })
  .catch((error) => {
    // error handling.
  });

// callbacks.
client.listBackups(params, (err, data) => {
  // process err and data.
});
```

### [Troubleshooting](#troubleshooting)

When the service returns an exception, the error will include the exception information, as well as response metadata (e.g. request id).

```
try {
  const data = await client.send(command);
  // process data.
} catch (error) {
  const { requestId, cfId, extendedRequestId } = error.$metadata;
  console.log({ requestId, cfId, extendedRequestId });
  /**
   * The keys within exceptions are also parsed.
   * You can access them by specifying exception names:
   * if (error.name === 'SomeServiceException') {
   *   const value = error.specialKeyInException;
   * }
   */
}
```

[Getting Help](#getting-help)
---

Please use these community resources for getting help. We use the GitHub issues for tracking bugs and feature requests, but have limited bandwidth to address them.
* Visit [Developer Guide](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/welcome.html) or [API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/index.html).
* Check out the blog posts tagged with [`aws-sdk-js`](https://aws.amazon.com/blogs/developer/tag/aws-sdk-js/) on the AWS Developer Blog.
* Ask a question on [StackOverflow](https://stackoverflow.com/questions/tagged/aws-sdk-js) and tag it with `aws-sdk-js`.
* Join the AWS JavaScript community on [gitter](https://gitter.im/aws/aws-sdk-js-v3).
* If it turns out that you may have found a bug, please [open an issue](https://github.com/aws/aws-sdk-js-v3/issues/new/choose).

To test your universal JavaScript code in Node.js, browser and react-native environments, visit our [code samples repo](https://github.com/aws-samples/aws-sdk-js-tests).

[Contributing](#contributing)
---

This client code is generated automatically. Any modifications will be overwritten the next time the `@aws-sdk/client-dynamodb` package is updated. To contribute to the client, you can check our [generate clients scripts](https://github.com/aws/aws-sdk-js-v3/tree/main/scripts/generate-clients).

[License](#license)
---

This SDK is distributed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0), see LICENSE for more information.
[Client Commands (Operations List)](#client-commands-operations-list) --- BatchExecuteStatement [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/batchexecutestatementcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/batchexecutestatementcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/batchexecutestatementcommandoutput.html) BatchGetItem [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/batchgetitemcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/batchgetitemcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/batchgetitemcommandoutput.html) BatchWriteItem [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/batchwriteitemcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/batchwriteitemcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/batchwriteitemcommandoutput.html) CreateBackup [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/createbackupcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/createbackupcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/createbackupcommandoutput.html) CreateGlobalTable [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/createglobaltablecommand.html) / 
[Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/createglobaltablecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/createglobaltablecommandoutput.html) CreateTable [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/createtablecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/createtablecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/createtablecommandoutput.html) DeleteBackup [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/deletebackupcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/deletebackupcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/deletebackupcommandoutput.html) DeleteItem [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/deleteitemcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/deleteitemcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/deleteitemcommandoutput.html) DeleteTable [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/deletetablecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/deletetablecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/deletetablecommandoutput.html) DescribeBackup [Command API 
Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/describebackupcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describebackupcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describebackupcommandoutput.html) DescribeContinuousBackups [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/describecontinuousbackupscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describecontinuousbackupscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describecontinuousbackupscommandoutput.html) DescribeContributorInsights [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/describecontributorinsightscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describecontributorinsightscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describecontributorinsightscommandoutput.html) DescribeEndpoints [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/describeendpointscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describeendpointscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describeendpointscommandoutput.html) DescribeExport [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/describeexportcommand.html) / 
[Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describeexportcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describeexportcommandoutput.html) DescribeGlobalTable [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/describeglobaltablecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describeglobaltablecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describeglobaltablecommandoutput.html) DescribeGlobalTableSettings [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/describeglobaltablesettingscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describeglobaltablesettingscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describeglobaltablesettingscommandoutput.html) DescribeImport [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/describeimportcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describeimportcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describeimportcommandoutput.html) DescribeKinesisStreamingDestination [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/describekinesisstreamingdestinationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describekinesisstreamingdestinationcommandinput.html) / 
[Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describekinesisstreamingdestinationcommandoutput.html) DescribeLimits [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/describelimitscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describelimitscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describelimitscommandoutput.html) DescribeTable [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/describetablecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describetablecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describetablecommandoutput.html) DescribeTableReplicaAutoScaling [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/describetablereplicaautoscalingcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describetablereplicaautoscalingcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describetablereplicaautoscalingcommandoutput.html) DescribeTimeToLive [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/describetimetolivecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describetimetolivecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/describetimetolivecommandoutput.html) DisableKinesisStreamingDestination [Command API 
Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/disablekinesisstreamingdestinationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/disablekinesisstreamingdestinationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/disablekinesisstreamingdestinationcommandoutput.html) EnableKinesisStreamingDestination [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/enablekinesisstreamingdestinationcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/enablekinesisstreamingdestinationcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/enablekinesisstreamingdestinationcommandoutput.html) ExecuteStatement [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/executestatementcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/executestatementcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/executestatementcommandoutput.html) ExecuteTransaction [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/executetransactioncommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/executetransactioncommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/executetransactioncommandoutput.html) ExportTableToPointInTime [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/exporttabletopointintimecommand.html) / 
[Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/exporttabletopointintimecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/exporttabletopointintimecommandoutput.html) GetItem [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/getitemcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/getitemcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/getitemcommandoutput.html) ImportTable [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/importtablecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/importtablecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/importtablecommandoutput.html) ListBackups [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/listbackupscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/listbackupscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/listbackupscommandoutput.html) ListContributorInsights [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/listcontributorinsightscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/listcontributorinsightscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/listcontributorinsightscommandoutput.html) ListExports [Command API 
Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/listexportscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/listexportscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/listexportscommandoutput.html) ListGlobalTables [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/listglobaltablescommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/listglobaltablescommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/listglobaltablescommandoutput.html) ListImports [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/listimportscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/listimportscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/listimportscommandoutput.html) ListTables [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/listtablescommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/listtablescommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/listtablescommandoutput.html) ListTagsOfResource [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/listtagsofresourcecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/listtagsofresourcecommandinput.html) / 
[Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/listtagsofresourcecommandoutput.html) PutItem [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/putitemcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/putitemcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/putitemcommandoutput.html) Query [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/querycommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/querycommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/querycommandoutput.html) RestoreTableFromBackup [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/restoretablefrombackupcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/restoretablefrombackupcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/restoretablefrombackupcommandoutput.html) RestoreTableToPointInTime [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/restoretabletopointintimecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/restoretabletopointintimecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/restoretabletopointintimecommandoutput.html) Scan [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/scancommand.html) / 
[Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/scancommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/scancommandoutput.html) TagResource [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/tagresourcecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/tagresourcecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/tagresourcecommandoutput.html) TransactGetItems [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/transactgetitemscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/transactgetitemscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/transactgetitemscommandoutput.html) TransactWriteItems [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/transactwriteitemscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/transactwriteitemscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/transactwriteitemscommandoutput.html) UntagResource [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/untagresourcecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/untagresourcecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/untagresourcecommandoutput.html) UpdateContinuousBackups [Command API 
Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/updatecontinuousbackupscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/updatecontinuousbackupscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/updatecontinuousbackupscommandoutput.html) UpdateContributorInsights [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/updatecontributorinsightscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/updatecontributorinsightscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/updatecontributorinsightscommandoutput.html) UpdateGlobalTable [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/updateglobaltablecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/updateglobaltablecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/updateglobaltablecommandoutput.html) UpdateGlobalTableSettings [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/updateglobaltablesettingscommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/updateglobaltablesettingscommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/updateglobaltablesettingscommandoutput.html) UpdateItem [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/updateitemcommand.html) / 
[Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/updateitemcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/updateitemcommandoutput.html) UpdateTable [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/updatetablecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/updatetablecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/updatetablecommandoutput.html) UpdateTableReplicaAutoScaling [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/updatetablereplicaautoscalingcommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/updatetablereplicaautoscalingcommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/updatetablereplicaautoscalingcommandoutput.html) UpdateTimeToLive [Command API Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/updatetimetolivecommand.html) / [Input](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/updatetimetolivecommandinput.html) / [Output](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/interfaces/updatetimetolivecommandoutput.html)
airspeed velocity Documentation
Release 0.6.1
<NAME>
Oct 03, 2023

Contents

1 Installing airspeed velocity
2 Using airspeed velocity
3 Writing benchmarks
4 Tuning timing measurements
5 Reference
6 Developer Docs
7 Changelog
8 Credits
Bibliography
Python Module Index
Index

airspeed velocity (asv) is a tool for benchmarking Python packages over their lifetime. Runtime, memory consumption, and even custom-computed values may be tracked. The results are displayed in an interactive web frontend that requires only a basic static webserver to host.

See examples of airspeed velocity websites: astropy, numpy, scipy.

License: BSD three-clause license.
Releases: https://pypi.python.org/pypi/asv
Development: https://github.com/airspeed-velocity/asv

CHAPTER 1
Installing airspeed velocity

airspeed velocity is known to work on Linux, Mac OS X, and Windows, with Python 3.7 and higher. It also works with PyPy.

airspeed velocity is a standard Python package, and the latest released version may be installed in the standard way from PyPI:

    pip install asv

The development version can be installed by cloning the source repository and running pip install . inside it, or by pip install git+https://github.com/airspeed-velocity/asv.

The requirements should be installed automatically. If they aren't installed automatically, for example due to networking restrictions, the Python requirements are listed in pyproject.toml.

For managing the environments, one of the following packages is required:

• libmambapy, which is typically part of mamba
• virtualenv, which is required since venv is not compatible with other versions of Python
• An anaconda or miniconda installation, with the conda command available on your path

Note: libmambapy is the fastest for situations where non-Python dependencies are required.
Anaconda or miniconda is slower but still preferred if the project involves a lot of compiled C/C++ extensions that are available in the conda repository, since conda will be able to fetch precompiled binaries for these dependencies in many cases. Using virtualenv, dependencies without precompiled wheels usually have to be compiled every time the environments are set up.

1.1 Optional optimization

If the project being benchmarked contains C, C++, Objective-C, or Cython, consider installing ccache. ccache is a compiler cache that speeds up compilation time when the same objects are repeatedly compiled. In airspeed velocity, the project being benchmarked is recompiled at many different points in its history, often with only minor changes to the source code, so ccache can help speed up the total benchmarking time considerably.

1.2 Running the self-tests

The self-tests are based on pytest. If you don't have it installed, and you have a connection to the Internet, it will be installed automatically. To run airspeed velocity's self-tests:

    pytest

CHAPTER 2
Using airspeed velocity

airspeed velocity is designed to benchmark a single project over its lifetime using a given set of benchmarks. Below, we use the phrase "project" to refer to the project being benchmarked, and "benchmark suite" to refer to the set of benchmarks (i.e., little snippets of code that are timed) being run against the project.

The benchmark suite may live inside the project's repository, or it may reside in a separate repository; the choice is up to you and is primarily a matter of style or policy. Note also that the result data is stored in JSON files alongside the benchmark suite and may grow quite large, so you may want to plan where to store it.

You can interact with airspeed velocity through the asv command. Like git, the asv command has a number of "subcommands" for performing various actions on your benchmarking project.
2.1 Setting up a new benchmarking project

The first thing to do is to set up an airspeed velocity benchmark suite for your project. It must contain, at a minimum, a single configuration file, asv.conf.json, and a directory tree of Python files containing benchmarks.

The asv quickstart command can be used to create a new benchmarking suite. Change to the directory where you would like your new benchmarking suite to be created and run:

    $ asv quickstart
    · Setting up new Airspeed Velocity benchmark suite.

    Which of the following template layouts to use:
    (1) benchmark suite at the top level of the project repository
    (2) benchmark suite in a separate repository

    Layout to use? [1/2] 1

    · Edit asv.conf.json to get started.

Answer '1' if you want a default configuration suitable for putting the benchmark suite at the top level of the same repository as your project, or '2' to get a default configuration for putting it in a separate repository.

Now that you have the bare bones of a benchmarking suite, let's edit the configuration file, asv.conf.json. Like most files that airspeed velocity uses and generates, it is a JSON file. There are comments in the file describing what each of the elements do, and there is also an asv.conf.json reference with more details. The values that will most likely need to be changed for any benchmarking suite are:

• project: The Python package name of the project being benchmarked.
• project_url: The project's homepage.
• repo: The URL or path to the DVCS repository for the project. This should be a read-only URL so that anyone, even those without commit rights to the repository, can run the benchmarks. For a project on GitHub, for example, the URL would look like https://github.com/airspeed-velocity/asv.git. The value can also be a path, relative to the location of the configuration file.
For example, if the benchmarks are stored in the same repository as the project itself, and the configuration file is located at benchmarks/asv.conf.json inside the repository, you can set "repo": ".." to use the local repository.
• show_commit_url: The base of URLs used to display commits for the project. This allows users to click on a commit in the web interface and have it display the contents of that commit. For a GitHub project, the URL is of the form http://github.com/$OWNER/$REPO/commit/.
• environment_type: The tool used to create environments. May be conda, virtualenv, or mamba. If conda supports the dependencies you need, that is the recommended method. Mamba is faster but needs a newer Python version (3.8 or greater). See Environments for more information.
• matrix: Dependencies you want to preinstall into the environment where benchmarks are run.

The rest of the values can usually be left at their defaults, unless you want to benchmark against multiple versions of Python or multiple versions of third-party dependencies, or if your package needs nonstandard installation commands.

Once you've set up the project's configuration, you'll need to write some benchmarks. The benchmarks live in Python files in the benchmarks directory. The quickstart command has already created a single example benchmark file in benchmarks/benchmarks.py:

    # Write the benchmarking functions here.
    # See "Writing benchmarks" in the asv docs for more information.

    class TimeSuite:
        """
        An example benchmark that times the performance of various kinds
        of iterating over dictionaries in Python.
        """
        def setup(self):
            self.d = {}
            for x in range(500):
                self.d[x] = None

        def time_keys(self):
            for key in self.d.keys():
                pass

        def time_values(self):
            for value in self.d.values():
                pass

        def time_range(self):
            d = self.d
            for key in range(500):
                d[key]


    class MemSuite:
        def mem_list(self):
            return [0] * 256

You'll want to replace these benchmarks with your own. See Writing benchmarks for more information.

2.2 Running benchmarks

Benchmarks are run using the asv run subcommand. Let's start by just benchmarking the latest commit on the current main branch of the project:

    $ asv run

2.2.1 Machine information

If this is the first time using asv run on a given machine (which it probably is, if you're following along), you will be prompted for information about the machine, such as its platform, CPU, and memory. airspeed velocity will try to make reasonable guesses, so it's usually OK to just press Enter to accept each default value. This information is stored in the ~/.asv-machine.json file in your home directory:

    I will now ask you some questions about this machine to identify it in the benchmarks.

    1. machine: A unique name to identify this machine in the results. May be anything, as long as it is unique across all the machines used to benchmark this project.
       NOTE: If changed from the default, it will no longer match the hostname of this machine, and you may need to explicitly use the --machine argument to asv.
       machine [cheetah]:
    2. os: The OS type and version of this machine. For example, 'Macintosh OS-X 10.8'.
       os [Linux 3.17.6-300.fc21.x86_64]:
    3. arch: The generic CPU architecture of this machine. For example, 'i386' or 'x86_64'.
       arch [x86_64]:
    4. cpu: A specific description of the CPU of this machine, including its speed and class. For example, 'Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz (4 cores)'.
       cpu [Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz]:
    5.
ram: The amount of physical RAM on this machine. For example, '4GB'.
       ram [8055476]:

Note: If you ever need to update the machine information later, you can run asv machine.

Note: By default, the name of the machine is determined from your hostname. If you have a hostname that frequently changes, and your ~/.asv-machine.json file contains more than one entry, you will need to use the --machine argument to asv run and similar commands.

2.2.2 Environments

Next, the Python environments to run the benchmarks are set up. asv always runs its benchmarks in environments that it creates, in order not to change any of your existing Python environments. One environment will be set up for each combination of Python version and the matrix of project dependencies, if any. The first time this is run, it may take some time, as many files are copied over and dependencies are installed into the environments. The environments are stored in the env directory, so that the next time the benchmarks are run, things will start much faster.

Environments can be created using different tools. By default, asv ships with support for anaconda, mamba, and virtualenv, though plugins may be installed to support other environment tools. The environment_type key in asv.conf.json is used to select the tool used to create environments.

When using virtualenv, asv does not build Python interpreters for you; it expects to find the Python versions specified in the asv.conf.json file available on the PATH. For example, if the asv.conf.json file has:

    "pythons": ["2.7", "3.6"]

then it will use the executables named python2.7 and python3.6 on the path. There are many ways to get multiple versions of Python installed: your package manager, apt-get, yum, MacPorts, or homebrew probably has them, or you can also use pyenv.

The virtualenv environment also supports PyPy. You can specify "pypy" or "pypy3" as a Python version number in the "pythons" list.
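Pulling the configuration values discussed above together, a minimal asv.conf.json for a virtualenv setup might look like the sketch below. The project name, URLs, and dependency versions are illustrative placeholders, not defaults; as noted earlier, the file may contain comments even though it is JSON.

```json
{
    // Config file format version.
    "version": 1,

    // Hypothetical project being benchmarked.
    "project": "myproject",
    "project_url": "https://example.org/myproject/",

    // Read-only clone URL, or a path relative to this file.
    "repo": "https://github.com/example/myproject.git",
    "show_commit_url": "https://github.com/example/myproject/commit/",

    // Tool used to create benchmark environments.
    "environment_type": "virtualenv",

    // Interpreters to benchmark against; "pypy3" is also accepted here.
    "pythons": ["3.8", "pypy3"],

    // Dependencies to preinstall; an empty string means the latest release.
    "matrix": {
        "numpy": [""]
    }
}
```
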
Note that PyPy must also be installed and available on your PATH.

2.2.3 Benchmarking

Finally, the benchmarks are run:

    $ asv run
    · Cloning project.
    · Fetching recent changes
    · Creating environments......
    · Discovering benchmarks
    ·· Uninstalling from virtualenv-py2.7
    ·· Building 4238c44d <main> for virtualenv-py2.7
    ·· Installing into virtualenv-py2.7.
    · Running 10 total benchmarks (1 commits * 2 environments * 5 benchmarks)
    [  0.00%] · For project commit 4238c44d <main>:
    [  0.00%] ·· Building for virtualenv-py2.7.
    [  0.00%] ·· Benchmarking virtualenv-py2.7
    [ 10.00%] ··· Running (benchmarks.TimeSuite.time_iterkeys--)....
    [ 35.00%] ··· benchmarks.TimeSuite.time_iterkeys    11.1±0.01μs
    [ 40.00%] ··· benchmarks.TimeSuite.time_keys        11.2±0.01μs
    [ 45.00%] ··· benchmarks.TimeSuite.time_range       32.9±0.01μs
    [ 50.00%] ··· benchmarks.TimeSuite.time_xrange      30.3±0.01μs
    [ 50.00%] ·· Building for virtualenv-py3.6..
    [ 50.00%] ·· Benchmarking virtualenv-py3.6
    [ 60.00%] ··· Running (benchmarks.TimeSuite.time_iterkeys--)....
    [ 85.00%] ··· benchmarks.TimeSuite.time_iterkeys    failed
    [ 90.00%] ··· benchmarks.TimeSuite.time_keys        9.07±0.5μs
    [ 95.00%] ··· benchmarks.TimeSuite.time_range       35.5±0.01μs

To improve reproducibility, each benchmark is run in its own process.

The results of each benchmark are displayed in the output and also recorded on disk. For timing benchmarks, the median and interquartile range of the collected measurements are displayed. Note that the results may vary on slow time scales due to CPU frequency scaling, heat management, and system load, and this variability is not necessarily captured by a single run. How to deal with this is discussed in Tuning timing measurements.

The killer feature of airspeed velocity is that it can track the benchmark performance of your project over time. The range argument to asv run specifies a range of commits that should be benchmarked.
The value of this argument is passed directly to either git log or to the Mercurial log command to get the set of commits, so it actually has a very powerful syntax, defined in the gitrevisions manpage or, for Mercurial, in the revsets help section. For example, in a git repository, one can test a range of commits on a particular branch since branching off main:

    asv run main..mybranch

Or, to benchmark all of the commits since a particular tag (v0.1):

    asv run v0.1..main

To benchmark a single commit or tag, use ^! (git):

    asv run v0.1^!

Corresponding examples for Mercurial using the revsets specification are also possible.

In many cases, this may result in more commits than you are able to benchmark in a reasonable amount of time. In that case, the --steps argument is helpful. It specifies the maximum number of commits you want to test, and it will evenly space them over the specified range.

You can benchmark all commits in the repository by using:

    asv run ALL

You may also want to benchmark every commit that has already been benchmarked on all the other machines. For that, use:

    asv run EXISTING

You can benchmark all commits since the last one that was benchmarked on this machine. This is useful for running in nightly cron jobs:

    asv run NEW

You can also benchmark a specific set of commits listed explicitly in a file (one commit hash per line):

    asv run HASHFILE:hashestobenchmark.txt

Finally, you can also benchmark all commits that have not yet been benchmarked for this machine:

    asv run --skip-existing-commits ALL

Note: There is a special version of asv run that is useful when developing benchmarks, called asv dev. See Writing benchmarks for more information.

You can also do a validity check of the benchmark suite without running benchmarks, using asv check.

The results are stored as JSON files in the directory results/$MACHINE, where $MACHINE is the unique machine name that was set up in your ~/.asv-machine.json file.
To combine results from multiple machines, you can, for example, store the results in separate repositories (such as git submodules) alongside the results from other machines. These results are then collated and "published" altogether into a single interactive website for viewing (see Viewing the results).

You can also continue to generate benchmark results for other commits, or for new benchmarks, and continue to throw them in the results directory. airspeed velocity is designed from the ground up to handle missing data where certain benchmarks have yet to be performed; it's entirely up to you how often you want to generate results, and on which commits and in which configurations.

2.3 Viewing the results

You can use the asv show command to display results from previous runs on the command line:

$ asv show main
Commit: 4238c44d <main>

benchmarks.MemSuite.mem_list [mymachine/virtualenv-py2.7]
  2.42k
  started: 2018-08-19 18:46:47, duration: 1.00s

benchmarks.TimeSuite.time_iterkeys [mymachine/virtualenv-py2.7]
  11.1±0.06𝜇s
  started: 2018-08-19 18:46:47, duration: 1.00s
...

To collate a set of results into a viewable website, run:

asv publish

This will put a tree of files in the html directory. This website cannot be viewed directly from the local filesystem, since web browsers do not support AJAX requests to the local filesystem.

Instead, airspeed velocity provides a simple static webserver that can be used to preview the website. Just run:

asv preview

and open the URL that is displayed at the console. Press Ctrl+C to stop serving.

To share the website on the open internet, simply put the files in the html directory on any webserver that can serve static content. Github Pages works quite well, for example. For using Github Pages, asv includes the convenience command asv gh-pages to put the results on the gh-pages branch and push them to Github. See asv gh-pages --help for details.
2.4 Managing the results database

The asv rm command can be used to remove benchmarks from the database. The command takes an arbitrary number of key=value entries that are "and"ed together to determine which benchmarks to remove.

The keys may be one of:

• benchmark: A benchmark name
• python: The version of python
• commit_hash: The commit hash
• machine-related: machine, arch, cpu, os, ram
• environment-related: a name of a dependency, e.g. numpy

The values are glob patterns, as supported by the Python standard library module fnmatch. So, for example, to remove all benchmarks in the time_units module:

asv rm "benchmark=time_units.*"

Note the double quotes around the entry to prevent the shell from expanding the * itself.

The asv rm command will prompt before performing any operations. Passing the -y option will skip the prompt.

Here is a more complex example, to remove all of the benchmarks on Python 2.7 and the machine named giraffe:

asv rm python=2.7 machine=giraffe

2.5 Finding a commit that produces a large regression

airspeed velocity detects statistically significant decreases of performance automatically, based on the available data, when you run asv publish. The results can be inspected via the web interface by clicking the "Regressions" tab on the web site. The results include links to each benchmark graph deemed to contain a decrease in performance, the commits where the regressions were estimated to occur, and other potentially useful information.

However, since benchmarking can be rather time consuming, it's likely that you're only benchmarking a subset of all commits in the repository. When you discover from the graph that the runtime between commit A and commit B suddenly doubles, you don't necessarily know which particular commit in that range is the likely culprit. asv find can be used to find a commit within that range that produced a large regression, using a binary search.
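The bisection idea behind asv find can be sketched in a few lines. This is an illustrative model, not asv's implementation, and it assumes what the search requires anyway: runtimes that form a fast plateau followed by a slow one.

```python
def find_regression(runtimes):
    """Binary search for the first index where runtime jumps.

    Assumes runtimes are more-or-less monotonic: a fast plateau
    followed by a slow plateau. Illustrative sketch only, not
    asv's actual code.
    """
    lo, hi = 0, len(runtimes) - 1
    # Anything above the midpoint of the endpoints counts as "slow".
    threshold = (runtimes[0] + runtimes[-1]) / 2
    while lo < hi:
        mid = (lo + hi) // 2
        if runtimes[mid] < threshold:
            lo = mid + 1  # regression is to the right
        else:
            hi = mid      # regression is here or to the left
    return lo

# Runtime doubles at index 5:
assert find_regression([1.0, 1.0, 1.1, 1.0, 1.1, 2.0, 2.1, 2.0]) == 5
```

As the note below explains, if the runtimes are not monotonic this kind of search can land on a local jump rather than the largest one.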
You can select a range of commits easily from the web interface by dragging a box around the commits in question. The commit hashes associated with that range are then displayed in the "commits" section of the sidebar. We'll copy and paste this commit range into the command-line arguments of the asv find command, along with the name of a single benchmark to use. The output below is truncated to show how the search progresses:

$ asv find 05d4f83d..b96fcc53 time_coordinates.time_latitude
- Running approximately 10 benchmarks within 1156 commits
- Testing <----------------------------O----------------------------->
- Testing <-------------O-------------->------------------------------
- Testing --------------<-------O------>------------------------------
- Testing --------------<---O--->-------------------------------------
- Testing --------------<-O->-----------------------------------------
- Testing --------------<O>-------------------------------------------
- Testing --------------<>--------------------------------------------
- Greatest regression found: 2918f61e

The result, 2918f61e, is the commit found with the largest regression, using the binary search.

Note: The binary search used by asv find will only be effective when the runtimes over the range are more-or-less monotonic. If there is a lot of variation within that range, it may find only a local maximum, rather than the global maximum. For best results, use a reasonably small commit range.

2.6 Running a benchmark in the profiler

airspeed velocity can oftentimes tell you if something got slower, but it can't really tell you why it got slower. That's where a profiler comes in. airspeed velocity has features to easily run a given benchmark in the Python standard library's cProfile profiler, and then open the profiling data in the tool of your choice.

The asv profile command profiles a given benchmark on a given revision of the project.

Note: You can also pass the --profile option to asv run.
In addition to running the benchmarks as usual, it also runs them again in the cProfile profiler and saves the results. asv profile will use this data, if found, rather than needing to profile the benchmark each time. However, it's important to note that profiler data contains absolute paths to the source code, so it is generally not portable between machines.

asv profile takes as arguments the name of the benchmark and the hash, tag, or branch of the project to run it in. Below is a real-world example of testing the astropy project. By default, a simple table summary of the profiling results is displayed:

> asv profile time_units.time_very_simple_unit_parse 10fc29cb

         8700042 function calls in 6.844 seconds

   Ordered by: cumulative time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    6.844    6.844 asv/benchmark.py:171(method_caller)
        1    0.000    0.000    6.844    6.844 asv/benchmark.py:197(run)
        1    0.000    0.000    6.844    6.844 /usr/lib64/python2.7/timeit.py:201(repeat)
        3    0.000    0.000    6.844    2.281 /usr/lib64/python2.7/timeit.py:178(timeit)
        3    0.104    0.035    6.844    2.281 /usr/lib64/python2.7/timeit.py:96(inner)
   300000    0.398    0.000    6.740    0.000 benchmarks/time_units.py:20(time_very_simple_unit_parse)
   300000    1.550    0.000    6.342    0.000 astropy/units/core.py:1673(__call__)
   300000    0.495    0.000    2.416    0.000 astropy/units/format/generic.py:361(parse)
   300000    1.023    0.000    1.841    0.000 astropy/units/format/__init__.py:31(get_format)
   300000    0.168    0.000    1.283    0.000 astropy/units/format/generic.py:374(_do_parse)
   300000    0.986    0.000    1.115    0.000 astropy/units/format/generic.py:345(_parse_unit)
  3000002    0.735    0.000    0.735    0.000 {isinstance}
   300000    0.403    0.000    0.403    0.000 {method 'decode' of 'str' objects}
   300000    0.216    0.000    0.216    0.000 astropy/units/format/generic.py:32(__init__)
   300000    0.152    0.000    0.188    0.000 /usr/lib64/python2.7/inspect.py:59(isclass)
   900000    0.170    0.000    0.170    0.000 {method 'lower' of 'unicode' objects}
   300000    0.133    0.000    0.133    0.000 {method 'count' of 'unicode' objects}
   300000    0.078    0.000    0.078    0.000 astropy/units/core.py:272(get_current_unit_registry)
   300000    0.076    0.000    0.076    0.000 {issubclass}
   300000    0.052    0.000    0.052    0.000 astropy/units/core.py:131(registry)
   300000    0.038    0.000    0.038    0.000 {method 'strip' of 'str' objects}
   300003    0.037    0.000    0.037    0.000 {globals}
   300000    0.033    0.000    0.033    0.000 {len}
        3    0.000    0.000    0.000    0.000 /usr/lib64/python2.7/timeit.py:143(setup)
        1    0.000    0.000    0.000    0.000 /usr/lib64/python2.7/timeit.py:121(__init__)
        6    0.000    0.000    0.000    0.000 {time.time}
        1    0.000    0.000    0.000    0.000 {min}
        1    0.000    0.000    0.000    0.000 {range}
        1    0.000    0.000    0.000    0.000 {hasattr}
        1    0.000    0.000    0.000    0.000 /usr/lib64/python2.7/timeit.py:94(_template_func)
        3    0.000    0.000    0.000    0.000 {gc.enable}
        3    0.000    0.000    0.000    0.000 {method 'append' of 'list' objects}
        3    0.000    0.000    0.000    0.000 {gc.disable}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
        3    0.000    0.000    0.000    0.000 {gc.isenabled}
        1    0.000    0.000    0.000    0.000 <string>:1(<module>)

Navigating these sorts of results can be tricky, and generally you want to open the results in a GUI tool, such as RunSnakeRun or snakeviz. For example, by passing --gui=runsnake to asv profile, the profile is collected (or extracted) and opened in the RunSnakeRun tool.

Note: To make sure the line numbers in the profiling data correctly match the source files being viewed, the correct revision of the project is checked out before opening it in the external GUI tool.

You can also get the raw profiling data by using the --output argument to asv profile.

Note: Since the method name is passed as a regex, parentheses need to be escaped, e.g.

asv profile 'benchmarks.MyBench.time_sort\(500\)' HEAD --gui snakeviz

See asv profile for more options.
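The escaping shown in the note above can also be produced mechanically with the standard library, rather than by hand:

```python
import re

# Parameterized benchmark names contain parentheses, which are regex
# metacharacters; re.escape turns the name into a literal pattern.
name = "benchmarks.MyBench.time_sort(500)"
pattern = re.escape(name)

# The escaped pattern matches the name exactly.
assert re.fullmatch(pattern, name)
```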
To extract information from --profile runs of asv programmatically, start a Python interpreter and run:

import asv

results_asv = asv.results.iter_results(".asv")
res_objects = list(results_asv)
prof_data = res_objects[0].get_profile_stats('benchmarks.MyBench.time_sort')
prof_data.strip_dirs()  # Remove machine specific info
prof_data.sort_stats('cumulative').print_stats()

where any profiled benchmark name may be used. A specific JSON file may also be loaded directly with asv.results.Results.load(<json_path>), after which get_profile_stats can be used.

2.7 Comparing the benchmarking results for two revisions

In some cases, you may want to directly compare the results for two specific revisions of the project. You can do so with the compare command:

$ asv compare v0.1 v0.2

All benchmarks:

       before           after         ratio
     [3bfda9c6]       [bf719488]
     <v0.1>           <v0.2>
          40.4m            40.4m     1.00  benchmarks.MemSuite.mem_list [amulet.localdomain/virtualenv-py2.7-numpy]
         failed            35.2m      n/a  benchmarks.MemSuite.mem_list [amulet.localdomain/virtualenv-py3.6-numpy]
    11.5±0.08𝜇s         11.0±0𝜇s     0.96  benchmarks.TimeSuite.time_iterkeys [amulet.localdomain/virtualenv-py2.7-numpy]
         failed           failed      n/a  benchmarks.TimeSuite.time_iterkeys [amulet.localdomain/virtualenv-py3.6-numpy]
       11.5±1𝜇s      11.2±0.02𝜇s     0.97  benchmarks.TimeSuite.time_keys [amulet.localdomain/virtualenv-py2.7-numpy]
         failed      8.40±0.02𝜇s      n/a  benchmarks.TimeSuite.time_keys [amulet.localdomain/virtualenv-py3.6-numpy]
    34.6±0.09𝜇s      32.9±0.01𝜇s     0.95  benchmarks.TimeSuite.time_range [amulet.localdomain/virtualenv-py2.7-numpy]
         failed      35.6±0.05𝜇s      n/a  benchmarks.TimeSuite.time_range [amulet.localdomain/virtualenv-py3.6-numpy]
     31.6±0.1𝜇s      30.2±0.02𝜇s     0.95  benchmarks.TimeSuite.time_xrange [amulet.localdomain/virtualenv-py2.7-numpy]
         failed           failed      n/a  benchmarks.TimeSuite.time_xrange [amulet.localdomain/virtualenv-py3.6-numpy]

This will show the times for each benchmark for the first and second revision, and the ratio of the second to the first.
In addition, the benchmarks will be color coded green and red if the benchmark improves or worsens by more than a certain threshold factor, which defaults to 1.1 (that is, benchmarks that improve by more than 10% or worsen by more than 10% are color coded). The threshold can be set with the --factor=value option. Finally, the benchmarks can be split into ones that have improved, stayed the same, and worsened, using the same threshold, with the --split option. See asv compare for more.

CHAPTER 3

Writing benchmarks

Benchmarks are stored in a Python package, i.e. a collection of .py files in the benchmark suite's benchmark directory (as defined by benchmark_dir in the asv.conf.json file). The package may contain arbitrarily nested subpackages, the contents of which will also be used, regardless of the file names.

Within each .py file, each benchmark is a function or method. The name of the function must have a special prefix, depending on the type of benchmark. asv understands how to handle the prefix in either CamelCase or lowercase with underscores. For example, to create a timing benchmark, the following are equivalent:

def time_range():
    for i in range(1000):
        pass

def TimeRange():
    for i in range(1000):
        pass

Benchmarks may be organized into methods of classes if desired:

class Suite:
    def time_range(self):
        for i in range(1000):
            pass

    def time_xrange(self):
        for i in xrange(1000):
            pass

3.1 Running benchmarks during development

There are some options to asv run that may be useful when writing benchmarks.

You may find that asv run spends a lot of time setting up the environment each time. You can have asv run use an existing Python environment that already has the benchmarked project and all of its dependencies installed.
Use the --python argument to specify a Python environment to use:

asv run --python=python

If you don't care about getting accurate timings, but just want to ensure the code is running, you can add the --quick argument, which will run each benchmark only once:

asv run --quick

In order to display the standard error output (this includes exception tracebacks) that your benchmarks may produce, pass the --show-stderr flag:

asv run --show-stderr

Finally, there is a special command, asv dev, that uses all of these features and is equivalent to:

asv run --python=same --quick --show-stderr --dry-run

You may also want to only do a basic check whether the benchmark suite is well-formatted, without actually running any benchmarks:

asv check --python=same

3.2 Setup and teardown functions

If initialization needs to be performed that should not be included in the timing of the benchmark, include that code in a setup method on the class, or add an attribute called setup to a free function.

For example:

class Suite:
    def setup(self):
        # load data from a file
        with open("/usr/share/words.txt", "r") as fd:
            self.words = fd.readlines()

    def time_upper(self):
        for word in self.words:
            word.upper()

# or equivalently...

words = []
def my_setup():
    global words
    with open("/usr/share/words.txt", "r") as fd:
        words = fd.readlines()

def time_upper():
    for word in words:
        word.upper()
time_upper.setup = my_setup

You can also include a module-level setup function, which will be run for every benchmark within the module, prior to any setup assigned specifically to each function.

Similarly, benchmarks can also have a teardown function that is run after the benchmark. This is useful if, for example, you need to clean up any changes made to the filesystem.

Note that although different benchmarks run in separate processes, for a given benchmark repeated measurement (cf. the repeat attribute) and profiling occur within the same process. For these cases, the setup and teardown routines are run multiple times in the same process.

If setup raises a NotImplementedError, the benchmark is marked as skipped.

Note: For asv versions before 0.5 it was possible to raise NotImplementedError from any existing benchmark during its execution, and the benchmark would be marked as skipped. This behavior was deprecated from 0.5 onwards.

Changed in version 0.6.0: To keep compatibility with earlier versions, it is possible to raise asv_runner.benchmark.mark.SkipNotImplemented anywhere within a benchmark, though users are advised to use the skip decorators instead, as they are faster and do not execute the setup function. See Skipping benchmarks for more details.

The setup method is run multiple times, for each benchmark and for each repeat. If the setup is especially expensive, the setup_cache method may be used instead, which only performs the setup calculation once and then caches the result to disk. Unlike setup, it is run only once even for repeated benchmarks and profiling.

setup_cache can persist the data for the benchmarks it applies to in two ways:

• Returning a data structure, which asv pickles to disk, and then loads and passes as the first argument to each benchmark.
• Saving files to the current working directory (which is a temporary directory managed by asv), which are then explicitly loaded in each benchmark process. It is probably best to load the data in a setup method so the loading time is not included in the timing of the benchmark.

A separate cache is used for each environment and each commit of the project being tested, and it is thrown out between benchmark runs.
For example, caching data in a pickle:

class Suite:
    def setup_cache(self):
        fib = [1, 1]
        for i in range(100):
            fib.append(fib[-2] + fib[-1])
        return fib

    def track_fib(self, fib):
        return fib[-1]

As another example, explicitly saving data in a file:

class Suite:
    def setup_cache(self):
        # Open in text mode so the formatted strings can be written
        # directly (binary mode would require bytes in Python 3).
        with open("test.dat", "w") as fd:
            for i in range(100):
                fd.write('{0}\n'.format(i))

    def setup(self):
        with open("test.dat", "r") as fd:
            self.data = [int(x) for x in fd.readlines()]

    def track_numbers(self):
        return len(self.data)

The setup_cache timeout can be specified by setting the .timeout attribute of the setup_cache function. The default value is the maximum of the timeouts of the benchmarks using it.

Note: Changed in version 0.6.0: The configuration option default_benchmark_timeout can also be set for a project-wide timeout.

3.3 Benchmark attributes

Each benchmark can have a number of arbitrary attributes assigned to it. The attributes that asv understands depend on the type of benchmark and are defined below. For free functions, just assign the attribute to the function. For methods, include the attribute at the class level. For example, the following are equivalent:

def time_range():
    for i in range(1000):
        pass
time_range.timeout = 120.0

class Suite:
    timeout = 120.0

    def time_range(self):
        for i in range(1000):
            pass

For the list of attributes, see Benchmark types and attributes.

3.4 Parameterized benchmarks

You might want to run a single benchmark for multiple values of some parameter.
This can be done by adding a params attribute to the benchmark object:

def time_range(n):
    for i in range(n):
        pass
time_range.params = [0, 10, 20, 30]

This will also make the setup and teardown functions parameterized:

class Suite:
    params = [0, 10, 20]

    def setup(self, n):
        self.obj = range(n)

    def teardown(self, n):
        del self.obj

    def time_range_iter(self, n):
        for i in self.obj:
            pass

If setup raises a NotImplementedError, the benchmark is marked as skipped for the parameter values in question.

The parameter values can be any Python objects. However, it is often best to use only strings or numbers, because these have simple unambiguous text representations. In the event the repr() output is non-unique, the representations will be made unique by suffixing an integer identifier corresponding to the order of appearance.

When you have multiple parameters, the test is run for all combinations of them:

def time_ranges(n, func_name):
    f = {'range': range, 'arange': numpy.arange}[func_name]
    for i in f(n):
        pass
time_ranges.params = ([10, 1000], ['range', 'arange'])

The test will be run for the parameters (10, 'range'), (10, 'arange'), (1000, 'range'), (1000, 'arange').

You can also provide informative names for the parameters:

time_ranges.param_names = ['n', 'function']

These will appear in the test output; if not provided, you get default names such as "param1", "param2".

Note that setup_cache is not parameterized.

3.5 Skipping benchmarks

Note: This section is only applicable from version 0.6.0 onwards.

Conversely, it is possible (typically due to high setup times) that one might want to skip some benchmarks altogether, or just for some sets of parameters.
This is accomplished by an attribute skip_params, which can be used with the decorator @skip_for_params as:

from asv_runner.benchmarks.mark import skip_for_params

@skip_for_params([(10, 'arange'), (1000, 'range')])
def time_ranges(n, func_name):
    f = {'range': range, 'arange': np.arange}[func_name]
    for i in f(n):
        pass

Benchmarks may also be conditionally skipped based on a boolean with @skip_benchmark_if:

from asv_runner.benchmarks.mark import skip_benchmark_if
import datetime

# Skip if not before midday
@skip_benchmark_if(
    datetime.datetime.now(datetime.timezone.utc).hour >= 12
)
def time_ranges(n, func_name):
    f = {'range': range, 'arange': np.arange}[func_name]
    for i in f(n):
        pass

Similarly, for parameters we have @skip_params_if:

from asv_runner.benchmarks.mark import skip_params_if
import datetime

class TimeSuite:
    params = [100, 200, 300, 400, 500]
    param_names = ["size"]

    def setup(self, size):
        self.d = {}
        for x in range(size):
            self.d[x] = None

    # Skip benchmarking when size is either 100 or 200
    # and the current hour is 12 or later.
    @skip_params_if(
        [(100,), (200,)],
        datetime.datetime.now(datetime.timezone.utc).hour >= 12
    )
    def time_dict_update(self, size):
        d = self.d
        for i in range(size):
            d[i] = i

Warning: The skips discussed so far, using the decorators, will skip both the benchmark and its setup function; setup_cache, however, will not be affected.

If the onus of preparing the exact parameter sets for skip_for_params is too complicated and the setup function is not too expensive, or if a benchmark needs to be skipped conditionally but skip_*_if are not the right choice, there is also the SkipNotImplemented exception, which can be raised anywhere during a benchmark run for it to be marked as skipped (n/a in the output table).
This may be used as:

from asv_runner.benchmarks.mark import SkipNotImplemented

class SimpleSlow:
    params = ([False, True])
    param_names = ["ok"]

    def time_failure(self, ok):
        if ok:
            x = 34.2**4.2
        else:
            raise SkipNotImplemented(f"{ok} is skipped")

3.6 Benchmark types

3.6.1 Timing

Timing benchmarks have the prefix time.

How asv runs benchmarks is as follows (pseudocode for the main idea):

for round in range(`rounds`):
    for benchmark in benchmarks:
        with new process:
            <calibrate `number` if not manually set>
            for j in range(`repeat`):
                <setup `benchmark`>
                sample = timing_function(<run benchmark `number` times>) / `number`
                <teardown `benchmark`>

where the actual rounds, repeat, and number are attributes of the benchmark.

The default timing function is timeit.default_timer, which uses the highest resolution clock available on a given platform to measure the elapsed wall time. This has the consequence of being more susceptible to noise from other processes, but the increase in resolution is more significant for shorter duration tests (particularly on Windows).

Process timing is provided by the function time.process_time (POSIX CLOCK_PROCESS_CPUTIME), which measures the CPU time used only by the current process. You can change the timer by setting the benchmark's timer attribute, for example to time.process_time to measure process time.

Note: One consequence of using time.process_time is that the time spent in child processes of the benchmark is not included. Multithreaded benchmarks also return the total CPU time counting all CPUs. In these cases you may want to measure the wall clock time instead, by setting the timer = timeit.default_timer benchmark attribute.
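The timer attribute described above can be illustrated with a small sketch; the suite names and loop bodies here are placeholders:

```python
import time
import timeit

class ProcessTimeSuite:
    # Measure CPU time of the current process only; this excludes
    # time spent in child processes and noise from other processes.
    timer = time.process_time

    def time_busy_loop(self):
        for _ in range(10000):
            pass

class WallClockSuite:
    # The default: wall-clock time from the highest-resolution clock.
    timer = timeit.default_timer

    def time_busy_loop(self):
        for _ in range(10000):
            pass
```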
For best results, the benchmark function should contain as little as possible, with as much extraneous setup as possible moved to a setup function:

class Suite:
    def setup(self):
        # load data from a file
        with open("/usr/share/words.txt", "r") as fd:
            self.words = fd.readlines()

    def time_upper(self):
        for word in self.words:
            word.upper()

How setup and teardown behave for timing benchmarks is similar to the Python timeit module, and the behavior is controlled by the number and repeat attributes.

For the list of benchmark attributes, see Benchmark types and attributes.

3.6.2 Memory

Memory benchmarks have the prefix mem.

Memory benchmarks track the size of Python objects. To write a memory benchmark, write a function that returns the object you want to track:

def mem_list():
    return [0] * 256

The asizeof module is used to determine the size of Python objects. Since asizeof includes the memory of all of an object's dependencies (including the modules in which their classes are defined), a memory benchmark instead calculates the incremental memory of a copy of the object, which in most cases is probably a more useful indicator of how much space each additional object will use. If you need to do something more specific, a generic Tracking (Generic) benchmark can be used instead.

For details, see Benchmark types and attributes.

Note: The memory benchmarking feature is still experimental. asizeof may not be the most appropriate metric to use.

Note: The memory benchmarks are not supported on PyPy.

3.6.3 Peak Memory

Peak memory benchmarks have the prefix peakmem.

A peak memory benchmark tracks the maximum resident size (in bytes) of the process in memory. This does not necessarily count memory paged out to disk, or memory used by memory-mapped files.
To write a peak memory benchmark, write a function that performs the operation whose maximum memory usage you want to track:

def peakmem_list():
    [0] * 165536

Note: The peak memory benchmark also counts memory usage during the setup routine, which may confound the benchmark results. One way to avoid this is to use setup_cache instead.

For details, see Benchmark types and attributes.

3.6.4 Raw timing benchmarks

For some timing benchmarks, for example measuring the time it takes to import a module, it is important that they are run separately in a new Python process. Measuring execution time for benchmarks run once in a new Python process can be done with timeraw_* timing benchmarks:

def timeraw_import_inspect():
    return """
    import inspect
    """

Note that these benchmark functions should return a string, corresponding to the code that will be run.

Importing a module takes a meaningful amount of time only the first time it is executed, therefore a fresh interpreter is used for each iteration of the benchmark. The string returned by the benchmark function is executed in a subprocess.

Note that setup and setup_cache are performed in the base benchmark process, so the setup done by them is not available in the benchmark code. To perform setup in the benchmark itself, you can return a second string:

def timeraw_import_inspect():
    code = "import inspect"
    setup = "import ast"
    return code, setup

The raw timing benchmarks have the same parameters as ordinary timing benchmarks, but number is 1 by default, and timer is ignored.

Note: Timing standard library modules is possible as long as they are not built-in or brought in by importing the timeit module (which further imports gc, sys, time, and itertools).

3.6.5 Imports

You can use raw timing benchmarks to measure import times.

3.6.6 Tracking (Generic)

It is also possible to use asv to track any arbitrary numerical value.
"Tracking" benchmarks can be used for this purpose and use the prefix track. These functions simply need to return a numeric value. For example, to track the number of objects known to the garbage collector at a given state:

import gc

def track_num_objects():
    return len(gc.get_objects())
track_num_objects.unit = "objects"

For details, see Benchmark types and attributes.

3.7 Benchmark versioning

When you edit a benchmark's code in the benchmark suite, this often changes what is measured, and previously measured results should be discarded. Airspeed Velocity records a "version number" with each benchmark measurement. By default, it is computed by hashing the benchmark source code text, including any setup and setup_cache routines. If there are changes in the source code of the benchmark in the benchmark suite, the version number changes, and asv will ignore results whose version number is different from the current one.

It is also possible to control the versioning of benchmark results manually, by setting the .version attribute for the benchmark. The version number, i.e. the content of the attribute, can be any Python string. asv only checks whether the version recorded with a measurement matches the current version, so you can use any versioning scheme.

See Benchmark types and attributes for reference documentation.

CHAPTER 4

Tuning timing measurements

The results from timing benchmarks are generally variable. Performance variations occur on different time scales. For timing benchmarks repeated immediately after each other, there is always some jitter in the results, due to operating system scheduling and other sources. For timing benchmarks run at more widely separated times, systematic differences changing on long time scales can appear, for example from changes in the background system load or built-in CPU mechanisms for power and heat management.
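The short-time jitter just described can be observed directly with the standard library's timeit module; this is a quick sketch, and the absolute numbers depend entirely on the machine:

```python
import timeit

# Time the same statement several times; the samples differ slightly
# due to scheduling and other short-time noise sources.
samples = timeit.repeat("sum(range(1000))", repeat=5, number=1000)
spread = max(samples) - min(samples)
```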
Airspeed Velocity has mechanisms to deal with these variations. For dealing with short-time variations, you can use the sample_time, number, and repeat attributes of timing benchmarks to control how results are sampled and averaged. For long-time variations, you can use the rounds attribute and the --interleave-rounds, --append-samples, and -a rounds=4 command line options to run timing benchmarks at more widely spaced times, in order to average over long-time performance variations.

If you are planning to capture historical benchmark data for most commits, very accurate timings are not necessary. The detection of regressions in historical benchmark data used in asv is designed to be statistically robust, and it tolerates fair amounts of noise. However, if you are planning to use asv continuous and asv compare, accurate results are more important.

4.1 Library settings

If your code uses 3rd party libraries, you may want to check their settings before benchmarking. In particular, such libraries may use automatic multithreading, which may affect runtime performance in surprising ways.

If you are using libraries such as OpenBLAS, Intel MKL, or OpenMP, benchmark results may become easier to understand by forcing single-threaded operation. For these three, this can typically be done by setting environment variables:

OPENBLAS_NUM_THREADS=1
MKL_NUM_THREADS=1
OMP_NUM_THREADS=1

4.2 Tuning machines for benchmarking

Especially if you are using a laptop computer, for which heat and power management is an issue, getting reliable results may require impractically long averaging times. To improve the situation, it is possible to optimize the usage and settings of your machine to minimize the variability in timing benchmarks.

Generally, while running benchmarks there should not be other applications actively using the CPU, or you can run asv pinned to a CPU core not used by other processes.
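The thread-count environment variables listed earlier can also be set from within Python, provided this happens before the numerical libraries are first imported; which variables actually matter depends on the libraries you use:

```python
import os

# Force single-threaded operation for common numerical backends.
# This must run before numpy/scipy (and hence the BLAS/OpenMP
# runtimes) are imported for the first time.
for var in ("OPENBLAS_NUM_THREADS", "MKL_NUM_THREADS", "OMP_NUM_THREADS"):
    os.environ[var] = "1"
```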
You should also force the CPU frequency or power level settings to a fixed value.

The pyperf project has documentation on how to tune machines for benchmarking. The simplest way to apply basic tuning on Linux using pyperf is to run:

sudo python -mpyperf system tune

This will modify system settings that can only be changed as root, and you should read the pyperf documentation on what it precisely does. This system tuning also improves results for asv. To achieve CPU affinity pinning with asv (e.g. to an isolated CPU), you should use the --cpu-affinity option.

It is also useful to note that configuration changes and operating system upgrades on the benchmarking machine can change the baseline performance of the machine. For the absolute best results, you may then want to use a dedicated benchmarking machine that is not used for anything else. You may also want to carefully select a long-term supported operating system, such that you can choose to install only security upgrades.

CHAPTER 5

Reference

5.1 Benchmark types and attributes

Contents

• Benchmark types and attributes
  – Benchmark types
  – Benchmark attributes
    * Timing benchmarks
    * Tracking benchmarks
  – Environment variables

Warning: Changed in version 0.6.0: The code for these has now been moved to asv_runner, and the rest of the documentation may be outdated.

5.1.1 Benchmark types

The following benchmark types are recognized:

• def time_*(): measure the time taken by the function. See Timing.
• def timeraw_*(): measure the time taken by the function, after interpreter start. See Raw timing benchmarks.
• def mem_*(): measure the memory size of the object returned. See Memory.
• def peakmem_*(): measure the peak memory size of the process when calling the function. See Peak Memory.
• def track_*(): use the returned numerical value as the benchmark result. See Tracking (Generic).

Note: New in version 0.6.0: Users may define their own benchmark types; see asv_runner for examples.
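One minimal benchmark of each recognized type, distinguished only by its prefix, might look like this; the bodies are trivial placeholders:

```python
import gc

def time_sum():                # timing: how long does the call take?
    sum(range(1000))

def timeraw_import_json():     # raw timing: returns code run in a fresh process
    return "import json"

def mem_list():                # memory: size of the returned object
    return [0] * 256

def peakmem_grow():            # peak memory: max resident size during the call
    [0] * 165536

def track_gc_objects():        # tracking: any numeric value
    return len(gc.get_objects())
```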
5.1.2 Benchmark attributes Benchmark attributes can either be applied directly to the benchmark function: def time_something(): pass time_something.timeout = 123 or appear as class attributes: class SomeBenchmarks: timeout = 123 def time_something(self): pass Different benchmark types have their own sets of applicable attributes. Moreover, the following attributes are applicable to all benchmark types: • timeout: The amount of time, in seconds, to give the benchmark to run before forcibly killing it. Defaults to 60 seconds. • benchmark_name: If given, used as the benchmark function name instead of the generated one, <module>.<class>.<function>. • pretty_name: If given, used to display the benchmark name instead of the benchmark function name. • pretty_source: If given, used to display a custom version of the benchmark source. • version: Used to determine when to invalidate old benchmark results. Benchmark results produced with a different value of the version than the current value will be ignored. The value can be any Python string (or other object, str() will be taken). Default (if version=None or not given): hash of the source code of the benchmark function and setup and setup_cache methods. If the source code of any of these changes, old results become invalidated. • setup: function to be called as a setup function for the benchmark. See Setup and teardown functions for discussion. • teardown: function to be called as a teardown function for the benchmark. See Setup and teardown functions for discussion. • setup_cache: function to be called as a cache setup function. See Setup and teardown functions for discussion. • param_names: list of parameter names. See Parameterized benchmarks for discussion. • params: list of lists of parameter values. If there is only a single parameter, may also be a list of parameter values. See Parameterized benchmarks for discussion.
Example: def setup_func(n, func): print(n, func) def teardown_func(n, func): print(n, func) def time_ranges(n, func): for i in func(n): pass time_ranges.setup = setup_func time_ranges.param_names = ['n', 'func'] time_ranges.params = ([10, 1000], [range, numpy.arange]) The benchmark will be run for parameters (10, range), (10, numpy.arange), (1000, range), (1000, numpy.arange). The setup and teardown functions will also obtain these parameters. Note that setup_cache is not parameterized. For the purposes of identifying benchmarks in the UI, repr() is called on the elements of params. In the event these strings contain memory addresses, those addresses are stripped to allow comparison across runs. Additionally, if this results in a non-unique mapping, each duplicated element will be suffixed with a distinct integer identifier corresponding to order of appearance. Timing benchmarks • warmup_time: asv will spend this time (in seconds) in calling the benchmarked function repeatedly, before starting to run the actual benchmark. If not specified, warmup_time defaults to 0.1 seconds (on PyPy, the default is 1.0 sec). • rounds: How many rounds to run the benchmark in (default: 2). The rounds run different timing benchmarks in an interleaved order, allowing sampling over longer periods of background performance variations (e.g. CPU power levels). • repeat: The number of measurement samples to collect per round. Each sample consists of running the benchmark number times. The median time from all samples collected in all rounds is used as the final measurement result. repeat can be a tuple (min_repeat, max_repeat, max_time). In this case, the measurement first collects at least min_repeat samples, and continues until either max_repeat samples are collected or the collection time exceeds max_time.
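The timing attributes above can be combined on a benchmark class; a sketch with illustrative (not recommended) values:

```python
class TimingSuite:
    # Call the function repeatedly for 0.1 s before measuring
    warmup_time = 0.1
    # Run 4 interleaved rounds instead of the default 2
    rounds = 4
    # (min_repeat, max_repeat, max_time): collect at least 1 sample per
    # round, at most 10, stopping once collection time exceeds 20 seconds
    repeat = (1, 10, 20.0)

    def time_sort(self):
        sorted(range(1000, 0, -1))
```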
When not provided (repeat set to 0), the default value is (1, 10, 20.0) if rounds==1 and (1, 5, 10.0) otherwise. • number: Manually choose the number of iterations in each sample. If number is specified, sample_time is ignored. Note that setup and teardown are not run between iterations: setup runs first, then the timed benchmark routine is called number times, and after that teardown runs. • sample_time: asv will automatically select number so that each sample takes approximately sample_time seconds. If not specified, sample_time defaults to 10 milliseconds. • min_run_count: the function is run at least this many times during the benchmark. Default: 2 • timer: The timing function to use, which can be any source of monotonically increasing numbers, such as time.clock, time.time or time.process_time. If it’s not provided, it defaults to timeit.default_timer, but other useful values are process_time, for which asv provides a backported version for versions of Python prior to 3.3. Changed in version 0.4: Previously, the default timer measured process time, which was chosen to minimize noise from other processes. However, on Windows, this is only available at a resolution of 15.6ms, which is greater than the recommended benchmark runtime of 10ms. Therefore, we default to the highest resolution clock on any platform. The sample_time, number, repeat, and timer attributes can be adjusted in the setup() routine, which can be useful for parameterized benchmarks. Tracking benchmarks • unit: The unit of the values returned by the benchmark. Used for display in the web interface. 5.1.3 Environment variables When asv runs benchmarks, several environment variables are defined, see ASV environment variables. 5.2 asv.conf.json reference The asv.conf.json file contains information about a particular benchmarking project. The following describes each of the keys in this file and their expected values.
Contents • asv.conf.json reference – project – project_url – repo – repo_subdir – build_command, install_command, uninstall_command – branches – show_commit_url – pythons – conda_environment_file – conda_channels – matrix – exclude – include – benchmark_dir – environment_type – env_dir – results_dir – html_dir – hash_length – plugins – build_cache_size – regressions_first_commits – regressions_thresholds 5.2.1 project The name of the project being benchmarked. 5.2.2 project_url The URL to the homepage of the project. This can point to anywhere, really, as it’s only used for the link at the top of the benchmark results page back to your project. 5.2.3 repo The URL to the repository for the project. The value can also be a path, relative to the location of the configuration file. For example, if the benchmarks are stored in the same repository as the project itself, and the configuration file is located at benchmarks/asv.conf.json inside the repository, you can set "repo": ".." to use the local repository. Currently, only git and hg repositories are supported, so this must be a URL that git or hg know how to clone from, for example: • [email protected]:airspeed-velocity/asv.git • https://github.com/airspeed-velocity/asv.git • ssh://[email protected]/yt_analysis/yt • hg+https://bitbucket.org/yt_analysis/yt The repository may be read-only. 5.2.4 repo_subdir The relative path to your Python project inside the repository. This is where its setup.py file is located. If empty or omitted, the project is assumed to be located at the root of the repository. 5.2.5 build_command, install_command, uninstall_command Airspeed Velocity rebuilds the project as needed, using these commands.
The defaults are: "install_command": ["in-dir={env_dir} python -mpip install {wheel_file}"], "uninstall_command": ["return-code=any python -mpip uninstall -y {project}"], "build_command": ["python setup.py build", "PIP_NO_BUILD_ISOLATION=false python -mpip wheel --no-deps --no-index -w {build_cache_dir} {build_dir}"], The install commands should install the project in the active Python environment (virtualenv/conda), so that it can be used by the benchmark code. The uninstall commands should uninstall the project from the environment. Note: Changed in version 0.6.0: If a build command is not specified in the asv.conf.json, the default assumes the build system requirements are defined in a setup.py file. However, the asv.conf.json template also includes as a comment the commands to build the project using a pyproject.toml file. pyproject.toml is the preferred file format to define the build system requirements of Python projects (PEP518), and this approach will be the default from asv v0.6.0 onwards. The build commands can optionally be used to cache build results in the cache directory {build_cache_dir}, which is commit- and environment-specific. If the cache directory contains any files after build_command finishes with exit code 0, asv assumes it contains a cached build. When a cached build is available, asv will only call install_command but not build_command. (The number of cached builds retained at any time is determined by the build_cache_size configuration option.) The install_command and build_command are by default launched in {build_dir}. The uninstall_command is launched in the environment root directory. The commands are specified in typical POSIX shell syntax (Python shlex), but are not run in a shell, so that e.g. cd has no effect on subsequent commands, and wildcard or environment variable expansion is not done. The substituted variables {variable_name} do not need to be quoted.
The commands may contain environment variable specifications in the form VARNAME=value at the beginning. In addition, valid return codes can be specified via return-code=0,1,2 and return-code=any. The in-dir=somedir specification changes the working directory for the command. The commands can be supplied with the arguments: • {project}: the project name from the configuration file • {env_name}: name of the currently active environment • {env_type}: type of the currently active environment • {env_dir}: full path to the currently active environment root • {conf_dir}: full path to the directory where asv.conf.json is • {build_dir}: full path to the build directory (checked-out source path + repo_subdir) • {build_cache_dir}: full path to the build cache directory • {commit}: commit hash of the currently installed project • {wheel_file}: absolute path to a *.whl file in {build_cache_dir} (defined only if there is exactly one existing wheel file in the directory). Several environment variables are also defined. 5.2.6 branches Branches to generate benchmark results for. This controls how the benchmark results are displayed, and what benchmarks asv run ALL and asv run NEW run. If not provided, “master” (Git) or “default” (Mercurial) is chosen. 5.2.7 show_commit_url The base URL to show information about a particular commit. The commit hash will be added to the end of this URL and then opened in a new tab when a data point is clicked on in the web interface. For example, if using Github to host your repository, the show_commit_url should be: http://github.com/owner/project/commit/ 5.2.8 pythons The versions of Python to run the benchmarks in. If not provided, it will default to the version of Python that the asv command (master) is being run under. If provided, it should be a list of strings. It may be one of the following: • a Python version string, e.g.
"3.7", in which case: – if conda is found, conda will be used to create an environment for that version of Python via a temporary environment.yml file – if virtualenv is installed, asv will search for that version of Python on the PATH and create a new virtual environment based on it. asv does not handle downloading and installing different versions of Python for you. They must already be installed and on the path. Depending on your platform, you can install multiple versions of Python using your package manager or using pyenv. • an executable name on the PATH or an absolute path to an executable. In this case, the environment is assumed to be already fully loaded and read-only. Thus, the benchmarked project must already be installed, and it will not be possible to benchmark multiple revisions of the project. 5.2.9 conda_environment_file A path to a conda environment file to use as source for the dependencies. For example: "conda_environment_file": "environment.yml" The environment file should generally install wheel and pip, since those are required by the default asv build commands. If there are packages present in matrix, an additional conda env update call is used to install them after the environment is created. Note: Changed in version 0.6.0: If an environment.yml file is present where asv is run, it will be used. To turn off this behavior, conda_environment_file can be set to IGNORE. This option will cause asv to ignore the Python version in the environment creation, which is then assumed to be fixed by the environment file. airspeed velocity Documentation, Release 0.6.1 5.2.10 conda_channels A list of conda channel names (strings) to use in the provided order as the source channels for the dependencies. For example: "conda_channels": ["conda-forge", "defaults"] The channels will be parsed by asv to populate the channels section of a temporary environment.yml file used to build the benchmarking environment. 
5.2.11 matrix Defines a matrix of third-party dependencies and environment variables to run the benchmarks with. If provided, it must be a dictionary, containing some of the keys “req”, “env”, “env_nobuild”. For example: "matrix": { "req": { "numpy": ["1.7", "1.8"], "Cython": [], "six": ["", null] }, "env": { "FOO": "bar" } } The keys of the "req" are the names of dependencies, and the values are lists of versions (as strings) of that dependency. An empty string means the “latest” version of that dependency available on PyPI. A value of null means the package will not be installed. If the list is empty, it is equivalent to [""], in other words, the “latest” version. For example, the following will test with two different versions of Numpy, the latest version of Cython, and six installed as the latest version and not installed at all: "matrix": { "req": { "numpy": ["1.7", "1.8"], "Cython": [], "six": ["", null] } } The matrix dependencies are installed before any dependencies that the project being benchmarked may specify in its setup.py file. Note: At present, this functionality only supports dependencies that are installable via pip or conda or mamba (depending on which environment is used). If conda/mamba is specified as environment_type and you wish to install the package via pip, then preface the package name with pip+. For example, emcee is only available from pip, so the package name to be used is pip+emcee. The env and env_nobuild dictionaries can also be used to set environment variables: "matrix": { "env": { "ENV_VAR_1": ["val1", "val2"], "ENV_VAR_2": ["val3", null] }, "env_nobuild": { "ENV_VAR_3": ["val4", "val5"] } } Variables in “env_nobuild” will be passed to every environment during the test phase, but will not trigger a new build. A value of null means that the variable will not be set for the current combination.
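The way such an env/env_nobuild matrix expands into combinations can be sketched in Python as follows (a simplified model for illustration, not asv's actual implementation; None plays the role of JSON null):

```python
from itertools import product

def expand(matrix):
    """Expand {name: [values]} into all combinations of (name, value)
    pairs; a None value means the variable is not set and is dropped."""
    names = list(matrix)
    combos = []
    for values in product(*(matrix[n] for n in names)):
        combos.append([(n, v) for n, v in zip(names, values) if v is not None])
    return combos

env = {"ENV_VAR_1": ["val1", "val2"], "ENV_VAR_2": ["val3", None]}
builds = expand(env)  # 4 distinct builds

nobuild = {"ENV_VAR_3": ["val4", "val5"]}
# each build is tested with every env_nobuild combination: 8 test envs
test_envs = [b + e for b in builds for e in expand(nobuild)]
```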
The above matrix will result in 4 different builds with the following additional environment variables and values: • [(“ENV_VAR_1”, “val1”), (“ENV_VAR_2”, “val3”)] • [(“ENV_VAR_1”, “val1”)] • [(“ENV_VAR_1”, “val2”), (“ENV_VAR_2”, “val3”)] • [(“ENV_VAR_1”, “val2”)] It will generate 8 different test environments based on those 4 builds with the following environment variables and values: • [(“ENV_VAR_1”, “val1”), (“ENV_VAR_2”, “val3”), (“ENV_VAR_3”, “val4”)] • [(“ENV_VAR_1”, “val1”), (“ENV_VAR_2”, “val3”), (“ENV_VAR_3”, “val5”)] • [(“ENV_VAR_1”, “val1”), (“ENV_VAR_3”, “val4”)] • [(“ENV_VAR_1”, “val1”), (“ENV_VAR_3”, “val5”)] • [(“ENV_VAR_1”, “val2”), (“ENV_VAR_2”, “val3”), (“ENV_VAR_3”, “val4”)] • [(“ENV_VAR_1”, “val2”), (“ENV_VAR_2”, “val3”), (“ENV_VAR_3”, “val5”)] • [(“ENV_VAR_1”, “val2”), (“ENV_VAR_3”, “val4”)] • [(“ENV_VAR_1”, “val2”), (“ENV_VAR_3”, “val5”)] 5.2.12 exclude Combinations of libraries, Python versions, or platforms to be excluded from the combination matrix. If provided, must be a list of dictionaries, each specifying an exclude rule. An exclude rule consists of key-value pairs, specifying matching rules matrix[key] ~ value. The values are strings containing regular expressions that should match whole strings. The exclude rule matches if all of the items in it match. Each exclude rule can contain the following keys: • python: Python version (from pythons) • sys_platform: Current platform, as in sys.platform. Common values are: linux2, win32, cygwin, darwin. • environment_type: The environment type in use (from environment_type). • req: dictionary of rules vs. the requirements • env: dictionary of rules vs. environment variables • env_nobuild: dictionary of rules vs.
the non-build environment variables For example: "pythons": ["3.8", "3.9"], "matrix": { "req": { "numpy": ["1.7", "1.8"], "Cython": ["", null], "colorama": ["", null] }, "env": {"FOO": ["1", "2"]} }, "exclude": [ {"python": "3.8", "req": {"numpy": "1.7"}}, {"sys_platform": "(?!win32).*", "req": {"colorama": ""}}, {"sys_platform": "win32", "req": {"colorama": null}}, {"env": {"FOO": "1"}} ] This will generate all combinations of Python version and items in the matrix, except those matched by the exclude rules (for example, Python 3.8 with Numpy 1.7, or any combination with FOO=1). In other words, the remaining combinations are: python==3.8 numpy==1.8 Cython==latest (colorama==latest) FOO=2 python==3.8 numpy==1.8 (colorama==latest) FOO=2 python==3.9 numpy==1.7 Cython==latest (colorama==latest) FOO=2 python==3.9 numpy==1.7 (colorama==latest) FOO=2 python==3.9 numpy==1.8 Cython==latest (colorama==latest) FOO=2 python==3.9 numpy==1.8 (colorama==latest) FOO=2 The colorama package will be installed only if the current platform is Windows. 5.2.13 include Additional package combinations to be included as environments. If specified, must be a list of dictionaries, indicating the versions of packages and other environment configuration to be installed. The dictionary must also include a python key specifying the Python version. As with the matrix, the "req", "env" and "env_nobuild" entries specify dictionaries containing requirements and environment variables. In contrast to the matrix, the values are not lists, but a single value only. In addition, the following keys can be present: sys_platform, environment_type. If present, the include rule is active only if the values match, using the same matching rules as explained for exclude above. The exclude rules are not applied to includes. For example: "include": [ {"python": "3.9", "req": {"numpy": "1.8.2"}, "env": {"FOO": "true"}}, {"sys_platform": "win32", "environment_type": "conda", "req": {"python": "2.7", "libpython": ""}} ] This corresponds to two additional environments.
One runs on Python 3.9 and includes the specified version of Numpy. The second is active only for Conda on Windows, and installs the latest version of libpython. 5.2.14 benchmark_dir The directory, relative to the current directory, that benchmarks are stored in. Should rarely need to be overridden. If not provided, defaults to "benchmarks". 5.2.15 environment_type Specifies the tool to use to create environments. May be “conda”, “virtualenv”, “mamba” or another value depending on the plugins in use. If missing or the empty string, the tool will be automatically determined by looking for tools on the PATH environment variable. 5.2.16 env_dir The directory, relative to the current directory, to cache the Python environments in. If not provided, defaults to "env". 5.2.17 results_dir The directory, relative to the current directory, that the raw results are stored in. If not provided, defaults to "results". 5.2.18 html_dir The directory, relative to the current directory, to save the website content in. If not provided, defaults to "html". 5.2.19 hash_length The number of characters to retain in the commit hashes when displayed in the web interface. The default value of 8 should be more than enough for most projects, but projects with extremely large history may need to increase this value. This does not affect the storage of results, where the full commit hash is always retained. 5.2.20 plugins A list of modules to import containing asv plugins. 5.2.21 build_cache_size The number of builds to cache for each environment. 5.2.22 regressions_first_commits The commits after which the regression search in asv publish should start looking for regressions. The value is a dictionary mapping benchmark identifier regexps to commits after which to look for regressions. The benchmark identifiers are of the form benchmark_name(parameters)@branch, where (parameters) is present only for parameterized benchmarks.
If the commit identifier is null, regression detection for the matching benchmark is skipped. The default is to start from the first commit with results. Example: "regressions_first_commits": { ".*": "v0.1.0", "benchmark_1": "80fca08d", "benchmark_2@main": null, } In this case, regressions are detected only for commits after tag v0.1.0 for all benchmarks. For benchmark_1, regression detection is further limited to commits after the commit given, and for benchmark_2, regression detection is skipped completely in the main branch. 5.2.23 regressions_thresholds The minimum relative change required before asv publish reports a regression. The value is a dictionary, similar to regressions_first_commits. If multiple entries match, the largest threshold is taken. If no entry matches, the default threshold is 0.05 (i.e. 5%). Example: "regressions_thresholds": { ".*": 0.01, "benchmark_1": 0.2, } In this case, the reporting threshold is 1% for all benchmarks, except benchmark_1 which uses a threshold of 20%.
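The selection rule above (largest matching threshold wins, 0.05 as fallback) can be sketched in Python as follows (a simplified model for illustration, not asv's actual implementation):

```python
import re

def threshold_for(benchmark_id, thresholds):
    """Return the reporting threshold for a benchmark identifier:
    the largest threshold among matching regexp entries, else 0.05."""
    matches = [t for pattern, t in thresholds.items()
               if re.match(pattern, benchmark_id)]
    return max(matches) if matches else 0.05

thresholds = {".*": 0.01, "benchmark_1": 0.2}
threshold_for("benchmark_1", thresholds)  # both entries match; 0.2 is larger
threshold_for("benchmark_2", thresholds)  # only ".*" matches; 0.01
```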
5.3 Commands Contents • Commands – asv help – asv quickstart – asv machine – asv setup – asv run – asv dev – asv continuous – asv find – asv rm – asv publish – asv preview – asv profile – asv update – asv show – asv compare – asv check – asv gh-pages 5.3.1 asv help usage: asv help [-h] options: -h, --help show this help message and exit 5.3.2 asv quickstart usage: asv quickstart [-h] [--dest DEST] [--top-level | --no-top-level] [--verbose] [--config CONFIG] [--version] Creates a new benchmarking suite options: -h, --help show this help message and exit --dest DEST, -d DEST The destination directory for the new benchmarking suite --top-level Use layout suitable for putting the benchmark suite on the top level of the project's repository --no-top-level Use layout suitable for putting the benchmark suite in a separate repository --verbose, -v Increase verbosity --config CONFIG Benchmark configuration file --version Print program version 5.3.3 asv machine usage: asv machine [-h] [--machine MACHINE] [--os OS] [--arch ARCH] [--cpu CPU] [--num_cpu NUM_CPU] [--ram RAM] [--yes] [--verbose] [--config CONFIG] [--version] Defines information about this machine. If no arguments are provided, an interactive console session will be used to ask questions about the machine. options: -h, --help show this help message and exit --machine MACHINE A unique name to identify this machine in the results. May be anything, as long as it is unique across all the machines used to benchmark this project. NOTE: If changed from the default, it will no longer match the hostname of this machine, and you may need to explicitly use the --machine argument to asv. --os OS The OS type and version of this machine. For example, 'Macintosh OS-X 10.8'. --arch ARCH The generic CPU architecture of this machine. For example, 'i386' or 'x86_64'.
--cpu CPU A specific description of the CPU of this machine, including its speed and class. For example, 'Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz (4 cores)'. --num_cpu NUM_CPU The number of CPUs in the system. For example, '4'. --ram RAM The amount of physical RAM on this machine. For example, '4GB'. --yes Accept all questions --verbose, -v Increase verbosity --config CONFIG Benchmark configuration file --version Print program version 5.3.4 asv setup usage: asv setup [-h] [--parallel [PARALLEL]] [-E ENV_SPEC] [--python PYTHON] [--verbose] [--config CONFIG] [--version] Setup virtual environments for each combination of Python version and third-party requirement. This is called by the ``run`` command implicitly, and isn't generally required to be run on its own. options: -h, --help show this help message and exit --parallel [PARALLEL], -j [PARALLEL] Build (but don't benchmark) in parallel. The value is the number of CPUs to use, or if no number provided, use the number of cores on this machine. -E ENV_SPEC, --environment ENV_SPEC Specify the environment and Python versions for running the benchmarks. String of the format 'environment_type:python_version', for example 'conda:2.7'. If the Python version is not specified, all those listed in the configuration file are run. The special environment type 'existing:/path/to/python' runs the benchmarks using the given Python interpreter; if the path is omitted, the Python running asv is used. For 'existing', the benchmarked project must be already installed, including all dependencies. By default, uses the values specified in the configuration file.
--python PYTHON Same as --environment=:PYTHON --verbose, -v Increase verbosity --config CONFIG Benchmark configuration file --version Print program version 5.3.5 asv run usage: asv run [-h] [--date-period DATE_PERIOD] [--steps STEPS] [--bench BENCH] [--attribute ATTRIBUTE] [--cpu-affinity ATTRIBUTE] [--profile] [--parallel [PARALLEL]] [--show-stderr] [--durations [N]] [--quick] [-E ENV_SPEC] [--python PYTHON] [--set-commit-hash SET_COMMIT_HASH] [--launch-method {auto,spawn,forkserver}] [--dry-run] [--machine MACHINE] [--skip-existing-successful] [--skip-existing-failed] [--skip-existing-commits] [--skip-existing] [--record-samples] [--append-samples] [--interleave-rounds] [--no-interleave-rounds] [--no-pull] [--verbose] [--config CONFIG] [--version] [range] Run a benchmark suite. examples: asv run master run for one branch asv run master^! run for one commit (git) asv run "--merges master" run for only merge commits (git) positional arguments: range Range of commits to benchmark. For a git repository, this is passed as the first argument to ``git rev-list``; or Mercurial log command. See 'specifying ranges' section of the `gitrevisions` manpage, or 'hg help revisions', for more info. Also accepts the special values 'NEW', 'ALL', 'EXISTING', and 'HASHFILE:xxx'. 'NEW' will benchmark all commits since the latest benchmarked on this machine. 'ALL' will benchmark all commits in the project. 'EXISTING' will benchmark against all commits for which there are existing benchmarks on any machine. 'HASHFILE:xxx' will benchmark only a specific set of hashes given in the file named 'xxx' ('-' means stdin), which must have one hash per line. By default, will benchmark the head of each of the configured branches. options: -h, --help show this help message and exit --date-period DATE_PERIOD Pick only one commit in each given time period.
For example: 1d (daily), 1w (weekly), 1y (yearly). --steps STEPS, -s STEPS Maximum number of steps to benchmark. This is used to subsample the commits determined by range to a reasonable number. --bench BENCH, -b BENCH Regular expression(s) for benchmark to run. When not provided, all benchmarks are run. --attribute ATTRIBUTE, -a ATTRIBUTE Override a benchmark attribute, e.g. `-a repeat=10`. --cpu-affinity ATTRIBUTE Set CPU affinity for running the benchmark, in format: 0 or 0,1,2 or 0-3. Default: not set --profile, -p In addition to timing, run the benchmarks through the `cProfile` profiler and store the results. --parallel [PARALLEL], -j [PARALLEL] Build (but don't benchmark) in parallel. The value is the number of CPUs to use, or if no number provided, use the number of cores on this machine. --show-stderr, -e Display the stderr output from the benchmarks. --durations [N] Display total duration for N (or 'all') slowest benchmarks --quick, -q Do a "quick" run, where each benchmark function is run only once. This is useful to find basic errors in the benchmark functions faster. The results are unlikely to be useful, and thus are not saved. -E ENV_SPEC, --environment ENV_SPEC Specify the environment and Python versions for running the benchmarks. String of the format 'environment_type:python_version', for example 'conda:2.7'. If the Python version is not specified, all those listed in the configuration file are run. The special environment type 'existing:/path/to/python' runs the benchmarks using the given Python interpreter; if the path is omitted, the Python running asv is used. For 'existing', the benchmarked project must be already installed, including all dependencies. By default, uses the values specified in the configuration file.
--python PYTHON Same as --environment=:PYTHON --set-commit-hash SET_COMMIT_HASH Set the commit hash to use when recording benchmark results. This causes results to be saved also when using an existing environment. --launch-method {auto,spawn,forkserver} How to launch benchmarks. Choices: auto, spawn, forkserver --dry-run, -n Do not save any results to disk. --machine MACHINE, -m MACHINE Use the given name to retrieve machine information. If not provided, the hostname is used. If no entry with that name is found, and there is only one entry in ~/.asv-machine.json, that one entry will be used. --skip-existing-successful Skip running benchmarks that have previous successful results --skip-existing-failed Skip running benchmarks that have previous failed results --skip-existing-commits Skip running benchmarks for commits that have existing results --skip-existing, -k Skip running benchmarks that have previous successful or failed results --record-samples Store raw measurement samples, not only statistics --append-samples Combine new measurement samples with previous results, instead of discarding old results. Implies --record-samples. The previous run must also have been run with --record/append-samples. --interleave-rounds Interleave benchmarks with multiple rounds across commits. This can avoid measurement biases from commit ordering, but can take longer.
--no-interleave-rounds --no-pull Do not pull the repository --verbose, -v Increase verbosity --config CONFIG Benchmark configuration file --version Print program version 5.3.6 asv dev usage: asv dev [-h] [--date-period DATE_PERIOD] [--steps STEPS] [--bench BENCH] [--attribute ATTRIBUTE] [--cpu-affinity ATTRIBUTE] [--profile] [--parallel [PARALLEL]] [--show-stderr] [--durations [N]] [--quick] [-E ENV_SPEC] [--python PYTHON] [--set-commit-hash SET_COMMIT_HASH] [--launch-method {auto,spawn,forkserver}] [--dry-run] [--machine MACHINE] [--skip-existing-successful] [--skip-existing-failed] [--skip-existing-commits] [--skip-existing] [--record-samples] [--append-samples] [--interleave-rounds] [--no-interleave-rounds] [--no-pull] [--verbose] [--config CONFIG] [--version] [range] This runs a benchmark suite in a mode that is useful during development. It is equivalent to ``asv run --python=same`` positional arguments: range Range of commits to benchmark. For a git repository, this is passed as the first argument to ``git rev-list``; or Mercurial log command. See 'specifying ranges' section of the `gitrevisions` manpage, or 'hg help revisions', for more info. Also accepts the special values 'NEW', 'ALL', 'EXISTING', and 'HASHFILE:xxx'. 'NEW' will benchmark all commits since the latest benchmarked on this machine. 'ALL' will benchmark all commits in the project. 'EXISTING' will benchmark against all commits for which there are existing benchmarks on any machine. 'HASHFILE:xxx' will benchmark only a specific set of hashes given in the file named 'xxx' ('-' means stdin), which must have one hash per line. By default, will benchmark the head of each of the configured branches. options: -h, --help show this help message and exit --date-period DATE_PERIOD Pick only one commit in each given time period. For example: 1d (daily), 1w (weekly), 1y (yearly).
--steps STEPS, -s STEPS  Maximum number of steps to benchmark. This is used to subsample the commits determined by range to a reasonable number.
--bench BENCH, -b BENCH  Regular expression(s) for benchmark to run. When not provided, all benchmarks are run.
--attribute ATTRIBUTE, -a ATTRIBUTE  Override a benchmark attribute, e.g. `-a repeat=10`.
--cpu-affinity ATTRIBUTE  Set CPU affinity for running the benchmark, in format: 0 or 0,1,2 or 0-3. Default: not set
--profile, -p  In addition to timing, run the benchmarks through the `cProfile` profiler and store the results.
--parallel [PARALLEL], -j [PARALLEL]  Build (but don't benchmark) in parallel. The value is the number of CPUs to use, or if no number provided, use the number of cores on this machine.
--show-stderr, -e  Display the stderr output from the benchmarks.
--durations [N]  Display total duration for N (or 'all') slowest benchmarks
--quick, -q  Do a "quick" run, where each benchmark function is run only once. This is useful to find basic errors in the benchmark functions faster. The results are unlikely to be useful, and thus are not saved.
-E ENV_SPEC, --environment ENV_SPEC  Specify the environment and Python versions for running the benchmarks. String of the format 'environment_type:python_version', for example 'conda:2.7'. If the Python version is not specified, all those listed in the configuration file are run. The special environment type 'existing:/path/to/python' runs the benchmarks using the given Python interpreter; if the path is omitted, the Python running asv is used. For 'existing', the benchmarked project must be already installed, including all dependencies. The default value is 'existing:same'
--python PYTHON  Same as --environment=:PYTHON
--set-commit-hash SET_COMMIT_HASH  Set the commit hash to use when recording benchmark results.
This makes results be saved even when using an existing environment.
--launch-method {auto,spawn,forkserver}  How to launch benchmarks. Choices: auto, spawn, forkserver
--dry-run, -n  Do not save any results to disk.
--machine MACHINE, -m MACHINE  Use the given name to retrieve machine information. If not provided, the hostname is used. If no entry with that name is found, and there is only one entry in ~/.asv-machine.json, that one entry will be used.
--skip-existing-successful  Skip running benchmarks that have previous successful results
--skip-existing-failed  Skip running benchmarks that have previous failed results
--skip-existing-commits  Skip running benchmarks for commits that have existing results
--skip-existing, -k  Skip running benchmarks that have previous successful or failed results
--record-samples  Store raw measurement samples, not only statistics
--append-samples  Combine new measurement samples with previous results, instead of discarding old results. Implies --record-samples. The previous run must also have been run with --record/append-samples.
--interleave-rounds  Interleave benchmarks with multiple rounds across commits. This can avoid measurement biases from commit ordering, but can take longer.
--no-interleave-rounds
--no-pull  Do not pull the repository
--verbose, -v  Increase verbosity
--config CONFIG  Benchmark configuration file
--version  Print program version

5.3.7 asv continuous

usage: asv continuous [-h] [--no-record-samples] [--append-samples] [--quick]
                      [--interleave-rounds] [--no-interleave-rounds]
                      [--factor FACTOR] [--no-stats] [--split] [--only-changed]
                      [--no-only-changed] [--sort {name,ratio,default}]
                      [--show-stderr] [--bench BENCH] [--attribute ATTRIBUTE]
                      [--cpu-affinity ATTRIBUTE] [--machine MACHINE]
                      [-E ENV_SPEC] [--python PYTHON]
                      [--launch-method {auto,spawn,forkserver}] [--verbose]
                      [--config CONFIG] [--version]
                      [base] branch

Run a side-by-side comparison of two commits for continuous integration.

positional arguments:
base  The commit/branch to compare against. By default, the parent of the tested commit.
branch  The commit/branch to test. By default, the first configured branch.

options:
-h, --help  show this help message and exit
--no-record-samples  Do not store raw measurement samples, but only statistics
--append-samples  Combine new measurement samples with previous results, instead of discarding old results. Implies --record-samples. The previous run must also have been run with --record/append-samples.
--quick, -q  Do a "quick" run, where each benchmark function is run only once. This is useful to find basic errors in the benchmark functions faster. The results are unlikely to be useful, and thus are not saved.
--interleave-rounds  Interleave benchmarks with multiple rounds across commits. This can avoid measurement biases from commit ordering, but can take longer.
--no-interleave-rounds
--factor FACTOR, -f FACTOR  The factor above or below which a result is considered problematic. For example, with a factor of 1.1 (the default value), if a benchmark gets 10% slower or faster, it will be displayed in the results list.
--no-stats  Do not use result statistics in comparisons, only `factor` and the median result.
--split, -s  Split the output into a table of benchmarks that have improved, stayed the same, and gotten worse.
--only-changed  Whether to show only changed results.
--no-only-changed
--sort {name,ratio,default}  Sort order
--show-stderr, -e  Display the stderr output from the benchmarks.
--bench BENCH, -b BENCH  Regular expression(s) for benchmark to run. When not provided, all benchmarks are run.
--attribute ATTRIBUTE, -a ATTRIBUTE  Override a benchmark attribute, e.g. `-a repeat=10`.
--cpu-affinity ATTRIBUTE  Set CPU affinity for running the benchmark, in format: 0 or 0,1,2 or 0-3. Default: not set
--machine MACHINE, -m MACHINE  Use the given name to retrieve machine information. If not provided, the hostname is used. If no entry with that name is found, and there is only one entry in ~/.asv-machine.json, that one entry will be used.
-E ENV_SPEC, --environment ENV_SPEC  Specify the environment and Python versions for running the benchmarks. String of the format 'environment_type:python_version', for example 'conda:2.7'. If the Python version is not specified, all those listed in the configuration file are run. The special environment type 'existing:/path/to/python' runs the benchmarks using the given Python interpreter; if the path is omitted, the Python running asv is used. For 'existing', the benchmarked project must be already installed, including all dependencies. By default, uses the values specified in the configuration file.
--python PYTHON  Same as --environment=:PYTHON
--launch-method {auto,spawn,forkserver}  How to launch benchmarks.
Choices: auto, spawn, forkserver
--verbose, -v  Increase verbosity
--config CONFIG  Benchmark configuration file
--version  Print program version

5.3.8 asv find

usage: asv find [-h] [--invert] [--skip-save] [--parallel [PARALLEL]]
                [--show-stderr] [--machine MACHINE] [-E ENV_SPEC]
                [--python PYTHON] [--launch-method {auto,spawn,forkserver}]
                [--verbose] [--config CONFIG] [--version]
                from..to benchmark_name

Adaptively searches a range of commits for one that produces a large regression. This only works well when the regression in the range is mostly monotonic.

positional arguments:
from..to  Range of commits to search. For a git repository, this is passed as the first argument to ``git log``. See 'specifying ranges' section of the `gitrevisions` manpage for more info.
benchmark_name  Name of benchmark to use in search.

options:
-h, --help  show this help message and exit
--invert, -i  Search for a decrease in the benchmark value, rather than an increase.
--skip-save  Do not save intermediate results from the search
--parallel [PARALLEL], -j [PARALLEL]  Build (but don't benchmark) in parallel. The value is the number of CPUs to use, or if no number provided, use the number of cores on this machine.
--show-stderr, -e  Display the stderr output from the benchmarks.
--machine MACHINE, -m MACHINE  Use the given name to retrieve machine information. If not provided, the hostname is used. If no entry with that name is found, and there is only one entry in ~/.asv-machine.json, that one entry will be used.
-E ENV_SPEC, --environment ENV_SPEC  Specify the environment and Python versions for running the benchmarks. String of the format 'environment_type:python_version', for example 'conda:2.7'. If the Python version is not specified, all those listed in the configuration file are run.
The special environment type 'existing:/path/to/python' runs the benchmarks using the given Python interpreter; if the path is omitted, the Python running asv is used. For 'existing', the benchmarked project must be already installed, including all dependencies. By default, uses the values specified in the configuration file.
--python PYTHON  Same as --environment=:PYTHON
--launch-method {auto,spawn,forkserver}  How to launch benchmarks. Choices: auto, spawn, forkserver
--verbose, -v  Increase verbosity
--config CONFIG  Benchmark configuration file
--version  Print program version

5.3.9 asv rm

usage: asv rm [-h] [-y] [--verbose] [--config CONFIG] [--version]
              patterns [patterns ...]

Removes entries from the results database.

positional arguments:
patterns  Pattern(s) to match, each of the form X=Y. X may be one of "benchmark", "commit_hash", "python" or any of the machine or environment params. Y is a case-sensitive glob pattern.

options:
-h, --help  show this help message and exit
-y  Don't prompt for confirmation.
--verbose, -v  Increase verbosity
--config CONFIG  Benchmark configuration file
--version  Print program version

5.3.10 asv publish

usage: asv publish [-h] [--no-pull] [--html-dir HTML_DIR] [--verbose]
                   [--config CONFIG] [--version]
                   [range]

Collate all results into a website. This website will be written to the ``html_dir`` given in the ``asv.conf.json`` file, and may be served using any static web server.

positional arguments:
range  Optional commit range to consider

options:
-h, --help  show this help message and exit
--no-pull  Do not pull the repository
--html-dir HTML_DIR, -o HTML_DIR  Optional output directory.
Default is 'html_dir' from asv config
--verbose, -v  Increase verbosity
--config CONFIG  Benchmark configuration file
--version  Print program version

5.3.11 asv preview

usage: asv preview [-h] [--port PORT] [--browser] [--html-dir HTML_DIR]
                   [--verbose] [--config CONFIG] [--version]

Preview the results using a local web server

options:
-h, --help  show this help message and exit
--port PORT, -p PORT  Port to run webserver on. [8080]
--browser, -b  Open in webbrowser
--html-dir HTML_DIR, -o HTML_DIR  Optional output directory. Default is 'html_dir' from asv config
--verbose, -v  Increase verbosity
--config CONFIG  Benchmark configuration file
--version  Print program version

5.3.12 asv profile

usage: asv profile [-h] [--gui GUI] [--output OUTPUT] [--force] [-E ENV_SPEC]
                   [--python PYTHON] [--launch-method {auto,spawn,forkserver}]
                   [--verbose] [--config CONFIG] [--version]
                   benchmark [revision]

Profile a benchmark

positional arguments:
benchmark  The benchmark to profile. Must be a fully-specified benchmark name. For a parameterized benchmark, it must include the parameter combination to use, e.g.: benchmark_name\(param0, param1, ...\)
revision  The revision of the project to profile. May be a commit hash, or a tag or branch name.

options:
-h, --help  show this help message and exit
--gui GUI, -g GUI  Display the profile in the given gui. Use --gui=list to list available guis.
--output OUTPUT, -o OUTPUT  Save the profiling information to the given file. This file is in the format written by the `cProfile` standard library module. If not provided, prints a simple text-based profiling report to the console.
--force, -f  Forcibly re-run the profile, even if the data already exists in the results database.
-E ENV_SPEC, --environment ENV_SPEC  Specify the environment and Python versions for running the benchmarks.
String of the format 'environment_type:python_version', for example 'conda:2.7'. If the Python version is not specified, all those listed in the configuration file are run. The special environment type 'existing:/path/to/python' runs the benchmarks using the given Python interpreter; if the path is omitted, the Python running asv is used. For 'existing', the benchmarked project must be already installed, including all dependencies. By default, uses the values specified in the configuration file.
--python PYTHON  Same as --environment=:PYTHON
--launch-method {auto,spawn,forkserver}  How to launch benchmarks. Choices: auto, spawn, forkserver
--verbose, -v  Increase verbosity
--config CONFIG  Benchmark configuration file
--version  Print program version

5.3.13 asv update

usage: asv update [-h] [--verbose] [--config CONFIG] [--version]

Update the results and config files to the current version

options:
-h, --help  show this help message and exit
--verbose, -v  Increase verbosity
--config CONFIG  Benchmark configuration file
--version  Print program version

5.3.14 asv show

usage: asv show [-h] [--details] [--durations] [--bench BENCH]
                [--attribute ATTRIBUTE] [--cpu-affinity ATTRIBUTE]
                [--machine MACHINE] [-E ENV_SPEC] [--python PYTHON]
                [--verbose] [--config CONFIG] [--version]
                [commit]

Print saved benchmark results.

positional arguments:
commit  The commit to show data for

options:
-h, --help  show this help message and exit
--details  Show all result details
--durations  Show only run durations
--bench BENCH, -b BENCH  Regular expression(s) for benchmark to run. When not provided, all benchmarks are run.
--attribute ATTRIBUTE, -a ATTRIBUTE  Override a benchmark attribute, e.g. `-a repeat=10`.
--cpu-affinity ATTRIBUTE  Set CPU affinity for running the benchmark, in format: 0 or 0,1,2 or 0-3. Default: not set
--machine MACHINE, -m MACHINE  Use the given name to retrieve machine information. If not provided, the hostname is used.
If no entry with that name is found, and there is only one entry in ~/.asv-machine.json, that one entry will be used.
-E ENV_SPEC, --environment ENV_SPEC  Specify the environment and Python versions for running the benchmarks. String of the format 'environment_type:python_version', for example 'conda:2.7'. If the Python version is not specified, all those listed in the configuration file are run. The special environment type 'existing:/path/to/python' runs the benchmarks using the given Python interpreter; if the path is omitted, the Python running asv is used. For 'existing', the benchmarked project must be already installed, including all dependencies. By default, uses the values specified in the configuration file.
--python PYTHON  Same as --environment=:PYTHON
--verbose, -v  Increase verbosity
--config CONFIG  Benchmark configuration file
--version  Print program version

5.3.15 asv compare

usage: asv compare [-h] [--factor FACTOR] [--no-stats] [--split]
                   [--only-changed] [--no-only-changed]
                   [--sort {name,ratio,default}] [--machine MACHINE]
                   [-E ENV_SPEC] [--python PYTHON] [--verbose]
                   [--config CONFIG] [--version]
                   revision1 revision2

Compare two sets of results

positional arguments:
revision1  The reference revision.
revision2  The revision being compared.

options:
-h, --help  show this help message and exit
--factor FACTOR, -f FACTOR  The factor above or below which a result is considered problematic. For example, with a factor of 1.1 (the default value), if a benchmark gets 10% slower or faster, it will be displayed in the results list.
--no-stats  Do not use result statistics in comparisons, only `factor` and the median result.
--split, -s  Split the output into a table of benchmarks that have improved, stayed the same, and gotten worse.
--only-changed  Whether to show only changed results.
--no-only-changed
--sort {name,ratio,default}  Sort order
--machine MACHINE, -m MACHINE  The machine to compare the revisions for.
-E ENV_SPEC, --environment ENV_SPEC  Specify the environment and Python versions for running the benchmarks. String of the format 'environment_type:python_version', for example 'conda:2.7'. If the Python version is not specified, all those listed in the configuration file are run. The special environment type 'existing:/path/to/python' runs the benchmarks using the given Python interpreter; if the path is omitted, the Python running asv is used. For 'existing', the benchmarked project must be already installed, including all dependencies. By default, uses the values specified in the configuration file.
--python PYTHON  Same as --environment=:PYTHON
--verbose, -v  Increase verbosity
--config CONFIG  Benchmark configuration file
--version  Print program version

5.3.16 asv check

usage: asv check [-h] [-E ENV_SPEC] [--python PYTHON] [--verbose]
                 [--config CONFIG] [--version]

This imports and checks basic validity of the benchmark suite, but does not run the benchmark target code

options:
-h, --help  show this help message and exit
-E ENV_SPEC, --environment ENV_SPEC  Specify the environment and Python versions for running the benchmarks. String of the format 'environment_type:python_version', for example 'conda:2.7'. If the Python version is not specified, all those listed in the configuration file are run. The special environment type 'existing:/path/to/python' runs the benchmarks using the given Python interpreter; if the path is omitted, the Python running asv is used. For 'existing', the benchmarked project must be already installed, including all dependencies. By default, uses the values specified in the configuration file.
--python PYTHON  Same as --environment=:PYTHON
--verbose, -v  Increase verbosity
--config CONFIG  Benchmark configuration file
--version  Print program version

5.3.17 asv gh-pages

usage: asv gh-pages [-h] [--no-push] [--rewrite] [--verbose] [--config CONFIG]
                    [--version]

Publish the results to github pages

Updates the 'gh-pages' branch in the current repository, and pushes it to 'origin'.

options:
-h, --help  show this help message and exit
--no-push  Update local gh-pages branch but don't push
--rewrite  Rewrite gh-pages branch to contain only a single commit, instead of adding a new commit
--verbose, -v  Increase verbosity
--config CONFIG  Benchmark configuration file
--version  Print program version

5.4 ASV environment variables

Benchmarking and build commands are run with the following environment variables available:
• ASV: true
• ASV_PROJECT: the project name from the configuration file
• ASV_ENV_NAME: name of the currently active environment
• ASV_ENV_TYPE: type of the currently active environment
• ASV_ENV_DIR: full path to the currently active environment root
• ASV_CONF_DIR: full path to the directory where asv.conf.json is
• ASV_BUILD_DIR: full path to the build directory (checked-out source path + repo_subdir)
• ASV_BUILD_CACHE_DIR: full path to the build cache directory
• ASV_COMMIT: commit hash of currently installed project
If there is no asv-managed environment, build, or cache directory, or commit hash, those environment variables are unset.
The following environment variables controlling Python and other behavior are also set:
• PATH: environment-specific binary directories prepended
• PIP_USER: false
• PYTHONNOUSERSITE: True (for conda environments only)
• PYTHONPATH: unset (if really needed, can be overridden by setting ASV_PYTHONPATH)

5.4.1 Custom environment variables

You can send custom environment variables to build and benchmarking commands by configuring the matrix setting in asv.conf.json.
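These variables can be read from benchmark or build code like any other environment variable. A minimal sketch; the helper name `running_under_asv` is ours, not part of asv:

```python
import os

def running_under_asv() -> bool:
    """Return True when executed by an asv benchmark or build command.

    asv sets ASV=true (plus ASV_PROJECT, ASV_COMMIT, ...) in the
    environment of benchmarking and build commands; none of these
    are set outside an asv run.
    """
    return os.environ.get("ASV") == "true"

# Example: only report build context when actually running under asv.
if running_under_asv():
    project = os.environ.get("ASV_PROJECT", "<unknown>")
    print(f"benchmarking {project} at {os.environ.get('ASV_COMMIT')}")
```
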
CHAPTER 6

Developer Docs

This section describes some things that may be of interest to developers and other people interested in the internals of asv.

Note: From version 0.6.0 onwards, functionality in asv has been split into the section needed by the code being benchmarked (asv_runner) and the rest of asv. This means that the asv documentation covers setting up environments, loading plugins, and collecting the results of the benchmarks run with asv_runner.

Contents
• Developer Docs
– Development setup
– Separation of concerns
– Benchmark suite layout and file formats
– Full-stack testing
– Step detection
* Bayesian information
* Overfitting
* Autocorrelated noise
* Postprocessing
* Making use of measured variance

6.1 Development setup

The packages required to run the full asv test suite are listed in requirements-dev.txt. The minimal set of packages required for testing is: pytest, virtualenv, filelock, six, pip, setuptools, wheel.

6.2 Separation of concerns

asv consists of the following steps:
• Setting up an environment for the project
• Building the project within the environment
• Running the benchmarks
• Collecting results and visualizing them after analysis for regressions

Note: Conceptually there are two separate parts to this process. There is the main process which orchestrates the environment creation. This is followed by a subprocess which essentially runs the project benchmarks. This subprocess must have only minimal dependencies, ideally nothing beyond the minimum Python version needed to run asv along with the dependencies of the project itself.

Changed in version 0.6.0: To clarify this, starting from v0.6.0, asv has been split into asv_runner, which is responsible for loading benchmark types, discovering them and running them within an environment, while the asv repository handles the remaining tasks.
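The two-process split described above can be caricatured in a few lines: a parent process orchestrates, and a child process with nothing but the standard library runs a "benchmark" and reports results over a pipe. Everything here (the toy benchmark, the JSON shape) is illustrative, not asv's actual protocol:

```python
import json
import subprocess
import sys
import textwrap

# Child: runs a toy benchmark using only the standard library and
# writes its result as JSON to stdout.
child_code = textwrap.dedent("""
    import json, sys, timeit
    duration = timeit.timeit("sum(range(1000))", number=100)
    json.dump({"benchmark": "toy_sum", "duration": duration}, sys.stdout)
""")

# Parent: launches the child and collects the structured result.
proc = subprocess.run(
    [sys.executable, "-c", child_code],
    capture_output=True, text=True, check=True,
)
result = json.loads(proc.stdout)
print(sorted(result))  # ['benchmark', 'duration']
```

The point of the split is visible even in this toy: the child's imports are minimal, so the benchmarked project's environment never needs asv itself installed.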
6.3 Benchmark suite layout and file formats

A benchmark suite directory has the following layout. The $-prefixed variables refer to values in the asv.conf.json file.
• asv.conf.json: The configuration file. See asv.conf.json reference.
• $benchmark_dir: Contains the benchmark code, created by the user. Each subdirectory needs an __init__.py.
• $project/: A clone of the project being benchmarked. Information about the history is grabbed from here, but the actual building happens in the environment-specific clones described below.
• $env_dir/: Contains the environments used for building and benchmarking. There is one environment in here for each specific combination of Python version and library dependency. Generally, the dependencies are only installed once, and then reused on subsequent runs of asv, but the project itself needs to be rebuilt for each commit being benchmarked.
– $ENVIRONMENT_HASH/: The directory name of each environment is the md5 hash of the list of dependencies and the Python version. This is not very user friendly, but this keeps the filename within reasonable limits.
* asv-env-info.json: Contains information about the environment, mainly the Python version, dependencies and build environment variables used.
* project/: An environment-specific clone of the project repository. Each environment has its own clone so that builds can be run in parallel without fear of clobbering (particularly for projects that generate source files outside of the build/ directory). These clones are created from the main $project/ directory using the --shared option to git clone so that the repository history is stored in one place to save on disk space. The project is built in this directory with the standard python setup.py build command. This means repeated builds happen in the same place and ccache is able to cache and reuse many of the build products.
* wheels/: If build_cache_size in asv.conf.json is set to something other than 0, this contains wheels of the last N project builds for this environment. In this way, if a build for a particular commit has already been performed and cached, it can be restored much more quickly. Each subdirectory is a commit hash, containing one .whl file and a timestamp.
* usr/, lib/, bin/ etc.: These are the virtualenv or Conda environment directories that we install the project into and then run benchmarks from.
• $results_dir/: This is the “database” of results from benchmark runs.
– benchmarks.json: Contains metadata about all of the benchmarks in the suite. It is a dictionary from benchmark names (a fully-qualified dot-separated path) to dictionaries containing information about that benchmark. Useful keys include:
* code: The Python code of the benchmark
* params: List of lists describing parameter values of a parameterized benchmark. If the benchmark is not parameterized, an empty list. Otherwise, the n-th entry of the list is a list of the Python repr() strings for the values the n-th parameter should loop over.
* param_names: Names for the parameters of a parameterized benchmark. Must be of the same length as the params list.
* version: An arbitrary string identifying the benchmark version. The default value is a hash of code, but the user can override it.
Other keys are specific to the kind of benchmark, and correspond to Benchmark attributes.
– MACHINE/: Within the results directory is a directory for each machine. Putting results from different machines in separate directories makes the results trivial to merge, which is useful when benchmarking across different platforms or architectures.
* HASH-pythonX.X-depA-depB.json: Each JSON file within a particular machine directory represents a run of benchmarks for a particular project commit in a particular environment. Contains the keys:
· version: the value 2.
· commit_hash: The project commit that the benchmarks were run on.
· env_name: Name of the environment the benchmarks were run in.
· date: A JavaScript date stamp of the date of the commit (not when the benchmarks were run).
· params: Information about the machine the benchmarks were run on.
· python: Python version of the environment.
· requirements: Requirements dictionary of the environment.
· env_vars: Environment variable dictionary of the environment.
· durations: Duration information for build and setup-cache timings.
· result_columns: List of column names for the results dictionary. It is currently ["result", "params", "version", "started_at", "duration", "stats_ci_99_a", "stats_ci_99_b", "stats_q_25", "stats_q_75", "stats_number", "stats_repeat", "samples", "profile"].
· results: A dictionary from benchmark names to benchmark results. The keys are benchmark names, and values are lists such that dict(zip(result_columns, results[benchmark_name])) pairs the appropriate keys with the values; in particular, trailing columns with missing values can be dropped. Some items, marked with “(param-list)” below, are lists with items corresponding to results from a parametrized benchmark (see params below). Non-parametrized benchmarks then have lists with a single item. Values except params can be null, indicating missing data. Floating-point numbers in stats_* and duration are truncated to 5 significant base-10 digits when saving, in order to produce smaller JSON files.
· result: (param-list) contains the summarized result value(s) of the benchmark. The values are float, NaN or null. The values are either numbers indicating the result from a successful run, null indicating a failed benchmark, or NaN indicating a benchmark explicitly skipped by the benchmark suite.
· params: contains a copy of the parameter values of the benchmark, as described above. If the user has modified the benchmark after the benchmark was run, these may differ from the current values.
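Decoding one row of the results dictionary can be sketched with invented data: pair result_columns with the row via dict(zip(...)), then map a parametrized result list to its parameter combinations in cartesian product order (last parameter varying fastest), as detailed below. The row and parameter values here are made up:

```python
import itertools

result_columns = [
    "result", "params", "version", "started_at", "duration",
    "stats_ci_99_a", "stats_ci_99_b", "stats_q_25", "stats_q_75",
    "stats_number", "stats_repeat", "samples", "profile",
]

# An invented row; trailing columns with missing values are dropped in
# the JSON file, and zip() simply stops at the shorter sequence.
params = [["'a'", "'b'"], ["10", "100"]]
row = [[1.0, 2.0, 3.0, 4.0], params, "v1", 1700000000000, 0.25]
decoded = dict(zip(result_columns, row))
assert "samples" not in decoded  # a dropped trailing column

# The n-th result corresponds to the n-th element of the cartesian
# product of the parameter value lists.
for combo, value in zip(itertools.product(*decoded["params"]),
                        decoded["result"]):
    print(combo, value)
```
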
The result value is a list of results. Each entry corresponds to one combination of the parameter values. The n-th entry in the list corresponds to the parameter combination itertools.product(*params)[n], i.e., the results appear in cartesian product order, with the last parameters varying fastest. For non-parametrized benchmarks, [].
· version: string, a benchmark version identifier. Results whose version is not equal to the current version of the benchmark are ignored. If the value is missing, no version comparisons are done (backward compatibility).
· started_at: Javascript timestamp indicating the start time of the latest benchmark run.
· duration: float, indicating the duration of a benchmark run in seconds.
· stats_*: (param-list) dictionary containing various statistical indicators. Possible * are ci_99_a, ci_99_b (confidence interval estimate lower/upper values), q_25 (lower quartile), q_75 (upper quartile), repeat, and number.
· profile: string, zlib-compressed and base64-encoded Python profile dump.
· samples: (param-list) List of samples obtained for a benchmark. The samples are in the order they were measured in.
• $html_dir/: The output of asv publish, that turns the raw results in $results_dir/ into something viewable in a web browser. It is an important feature of asv that the results can be shared on a static web server, so there is no server-side component, and the result data is accessed through AJAX calls from JavaScript. Most of the files at the root of $html_dir/ are completely static and are just copied verbatim from asv/www/ in the source tree.
– index.json: Contains an index into the benchmark data, describing what is available. Important keys include:
* benchmarks: A dictionary of benchmarks. At the moment, this is identical to the content in $results_dir/benchmarks.json.
* revision_to_hash: A dictionary mapping revision number to commit hash. This allows showing a commits tooltip in the graph, and the commits involved in a regression.
* revision_to_date: A dictionary mapping JavaScript date stamps to revisions (including tags). This allows the x-scale of a plot to be scaled by date.
* machines: Describes the machines used for testing.
* params: A dictionary of parameters against which benchmark results can be selected. Each entry is a list of valid values for that parameter.
* tags: A dictionary of git tags and their revisions, so this information can be displayed in the plot.
– graphs/: This is a nested tree of directories where each level is a parameter from the params dictionary, in asciibetical order. The web interface, given a set of parameters that are set, can easily grab the associated graph.
* BENCHMARK_NAME.json: At the leaves of this tree are the actual benchmark graphs. Each contains a list of pairs, where each pair is of the form (timestamp, result_value). For parameterized benchmarks, result_value is a list of results, corresponding to itertools.product iteration over the parameter combinations, similarly as in the result files. For non-parameterized benchmarks, it is directly the result. Missing values (e.g. failed and skipped benchmarks) are represented by null.

6.4 Full-stack testing

For full-stack testing, we use Selenium WebDriver and its Python bindings. Additional documentation for Selenium Python bindings is here. The browser back-end can be selected via:

pytest --webdriver=PhantomJS

The allowed values include None (default), PhantomJS, Chrome, Firefox, ChromeHeadless, FirefoxHeadless, or arbitrary Python code initializing a Selenium webdriver instance. To use them, at least one of the following needs to be installed:
• Firefox GeckoDriver: Firefox-based controllable browser.
• ChromeDriver: Chrome-based controllable browser. On Ubuntu, install via apt-get install chromium-chromedriver, on Fedora via dnf install chromedriver.
• PhantomJS: Headless web browser (discontinued, prefer using Firefox or Chrome).
For other options regarding the webdriver to use, see py.test --help. 6.5 Step detection Regression detection in ASV is based on detecting stepwise changes in the graphs. The assumptions on the data are as follows: the curves are piecewise constant plus random noise. We don't know the scaling of the data or the amplitude of the noise, but assume the relative weight of the noise amplitude is known for each data point. ASV measures the noise amplitude of each data point, based on a number of samples. We use this information for weighting the different data points: i.e., we assume the uncertainty in each measurement point is proportional to the estimated confidence interval for each data point. Their inverses are taken as the relative weights w_j. If w_j = 0 or undefined, we replace it with the median weight, or with 1 if all are undefined. The step detection algorithm determines the absolute noise amplitude itself based on all available data, which is more robust than relying on the individual measurements. Step detection is a well-studied problem. In this implementation, we mainly follow a variant of the approach outlined in [Friedrich2008] and elsewhere. This provides a fast algorithm for solving the piecewise weighted fitting problem

$$\mathop{\mathrm{argmin}}_{k,\{j\},\{\mu\}}\; \gamma k + \sum_{r=1}^{k} \sum_{i=j_{r-1}+1}^{j_r} w_i |y_i - \mu_r| \qquad (6.1)$$

The differences are: as we do not need exact solutions, we add additional heuristics to work around the O(n²) scaling, which is too harsh for pure-Python code. For details, see asv.step_detect.solve_potts_approx. Moreover, we follow a slightly different approach on obtaining a suitable number of intervals, by selecting an optimal value for γ, based on a variant of the information criterion problem discussed in [Yao1988]. 6.5.1 Bayesian information To proceed, we need an argument by which to select a suitable γ in (6.1). Some of the literature on step detection, e.g.
[Yao1988], suggests results based on Schwarz information criteria,

$$\mathrm{SC} = \frac{m}{2} \ln \sigma^2 + k \ln m = \text{min!} \qquad (6.2)$$

where σ² is the maximum likelihood variance estimator (if the noise is gaussian). For the implementation, see asv.step_detect.solve_potts_autogamma. What follows is a handwaving plausibility argument for why such an objective function makes sense, and how to end up with ℓ1 rather than gaussians. Better approaches are probably to be found in the step detection literature. If you have a better formulation, contributions/corrections are welcome! We assume a Bayesian model:

$$P(\{y_i\}_{i=1}^m \,|\, \sigma, k, \{\mu_i\}_{i=1}^k, \{j_i\}_{i=1}^{k-1}) = N \sigma^{-m} \exp\Big(-\sigma^{-1} \sum_{r=1}^{k} \sum_{i=j_{r-1}+1}^{j_r} w_i |y_i - \mu_r|\Big) \qquad (6.3)$$

Here, y_i are the m data points at hand, k is the number of intervals, μ_i are the values of the function at the intervals, and j_i are the interval breakpoints; j_0 = 0, j_k = m, j_{r-1} < j_r. The noise is assumed Laplace rather than gaussian, which results in the more robust ℓ1-norm fitting rather than ℓ2. The noise amplitude σ is not known. N is a normalization constant that depends on m but not on the other parameters. The optimal k comes from Bayesian reasoning: k̂ = argmax_k P(k|{y}), where

$$P(k|\{y\}) = \frac{\pi(k)}{\pi(\{y\})} \sum_{\{j\}} \int d\sigma\, (d\mu)^k\, P(\{y\}|\sigma,k,\{\mu\},\{j\})\, \pi(\sigma,\{\mu\},\{j\}|k) \qquad (6.4)$$

The prior π({y}) does not matter for k̂; the other priors are assumed flat. We would need to estimate the behavior of the integral in the limit m → ∞. We do not succeed in doing this rigorously here, although it might be done in the literature. Consider first saddle-point integration over {μ}, expanding around the max-likelihood values μ*_r. The max-likelihood estimates are the weighted medians of the data points in each interval. The change in the exponent when μ is perturbed is

$$\Delta = -\sigma^{-1} \sum_{r=1}^{k} \sum_{i=j_{r-1}+1}^{j_r} w_i \big[\, |y_i - \mu^*_r - \delta\mu_r| - |y_i - \mu^*_r| \,\big] \qquad (6.5)$$

Note that $\sum_{i=j_{r-1}+1}^{j_r} w_i \,\mathrm{sgn}(y_i - \mu^*_r) = 0$, so that the response to small variations δμ_r is m-independent.
For larger variations, we have

$$\Delta = -\sigma^{-1} \sum_{r=1}^{k} N_r(\delta\mu_r)\, |\delta\mu_r| \qquad (6.6)$$

where $N_r(\delta\mu) = \sum_i w_i s_i$, with s_i = ±1 depending on whether y_i is above or below the perturbed median. Let us assume that in a typical case, $N_r(\delta\mu) \sim m_r \bar{W}_r^2 \delta\mu/\sigma$, where $\bar{W}_r = \frac{1}{m_r}\sum_i w_i$ is the average weight of the interval and m_r the number of points in the interval. This recovers a result we would have obtained in the gaussian noise case

$$\Delta \sim -\sigma^{-2} \sum_r m_r \bar{W}_r^2 |\delta\mu_r|^2 \qquad (6.7)$$

For the gaussian case, this would not have required any questionable assumptions. After integration over {δμ} we are left with

$$\int (\ldots) \propto \int d\sigma \sum_{\{j\}} (2\pi)^{k/2} \sigma^k \,[\bar{W}_1 \cdots \bar{W}_k]^{-1}\, [m_1 \cdots m_k]^{-1/2}\, P(\{y\}|\sigma,k,\{\mu^*\},\{j\})\, \pi(\sigma,\{j\}|k) \qquad (6.8)$$

We now approximate the rest of the integrals/sums with only the max-likelihood terms, and assume $m^*_j \sim m/k$. Then,

$$\ln P(k|\{y\}) \simeq C_1(m) + C_2(k) + \tfrac{k}{2}\ln(2\pi) - k \ln(m/k) - k \ln \bar{W} + \ln P(\{y\}|\sigma^*, k, \{\mu^*\}, \{j^*\}) \approx \tilde{C}_1(m) + \tilde{C}_2(k) - \tfrac{k}{2} \ln m + \ln P(\{y\}|\sigma^*, k, \{\mu^*\}, \{j^*\}) \qquad (6.9)$$

where we neglect terms that do not affect the asymptotics for m → ∞, and the C are constants that do not depend on both m and k. The result is of course the Schwarz criterion for k free model parameters. We can suspect that the factor k/2 should be replaced by a different number, since we have 2k parameters. If the other integrals/sums can also be approximated in the same way as the {μ} ones, we should obtain the missing part. Substituting in the max-likelihood value

$$\sigma^* = \frac{1}{m} \sum_{r=1}^{k} \sum_{i=j^*_{r-1}+1}^{j^*_r} w_i |y_i - \mu^*_r| \qquad (6.10)$$

we get

$$\ln P(k|\{y\}) \sim C - \frac{k}{2} \ln m - m \ln \sum_{r=1}^{k} \sum_{i=j^*_{r-1}+1}^{j^*_r} w_i |y_i - \mu^*_r| \qquad (6.11)$$

This is now similar to (6.2), apart from numerical prefactors. The final fitting problem then becomes

$$\mathop{\mathrm{argmin}}_{k,\{j\},\{\mu\}}\; r(m)\, k + \ln \sum_{r=1}^{k} \sum_{i=j_{r-1}+1}^{j_r} w_i |y_i - \mu_r| \qquad (6.12)$$

with $r(m) = \frac{\ln m}{2m}$. Note that it is invariant under rescaling of the weights $w_i \mapsto \alpha w_i$, i.e., the scale invariance of the original problem is retained.
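To make the criterion concrete, a candidate segmentation can be scored directly with the objective of (6.12). The sketch below is illustrative only (asv's actual solver lives in asv.step_detect.solve_potts_approx); it uses 0-based interval edges and made-up data:

```python
import math

def objective(y, w, breakpoints, mu):
    """Score a piecewise-constant fit with r(m)*k + ln(sum of weighted
    absolute residuals), where r(m) = ln(m) / (2*m) as in the text.

    `breakpoints` are 0-based interval edges [0, j_1, ..., len(y)] and
    `mu` holds one level per interval.
    """
    m, k = len(y), len(mu)
    resid = sum(w[i] * abs(y[i] - mu[r])
                for r in range(k)
                for i in range(breakpoints[r], breakpoints[r + 1]))
    return math.log(m) / (2 * m) * k + math.log(resid)

# A two-interval fit of noisy stepped data beats a single flat interval:
y, w = [1.0, 1.2, 5.0, 5.2], [1.0] * 4
two_steps = objective(y, w, [0, 2, 4], [1.1, 5.1])
one_step = objective(y, w, [0, 4], [3.1])
assert two_steps < one_step
```

The γ-penalized problem (6.1) replaces the logarithm with a linear penalty on k, which is what makes the fast approximate solver applicable.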
As we know this function r(m) is not necessarily completely correct, and since doing the calculation rigorously seems to require more effort than can be justified by the requirements of the application, we now take a pragmatic view and fudge the function to $r(m) = \beta \frac{\ln m}{m}$, with β chosen so that things appear to work in practice for the problem at hand. According to [Friedrich2008], problem (6.12) can be solved in O(n³) time. This is too slow, however. We can however approach this on the basis of the easier problem (6.1). It produces a family of solutions [k*(γ), {μ*(γ)}, {j*(γ)}]. We now evaluate (6.12) restricted to the curve parameterized by γ. In particular, [{μ*(γ)}, {j*(γ)}] solves (6.12) under the constraint k = k*(γ). If k*(γ) obtains all values in the set {1, ..., m} when γ is varied, the original problem is solved completely. This probably is not a far-fetched assumption; in practice it appears such a Bayesian information criterion provides a reasonable way of selecting a suitable γ. 6.5.2 Overfitting It is possible to fit any data perfectly by choosing size-1 intervals, one per data point. For such a fit, the logarithm in (6.12) gives −∞, which then always minimizes SC. This artifact of the model above needs special handling. Indeed, for σ → 0, (6.3) reduces to

$$P(\{y_i\}_{i=1}^m \,|\, \sigma, k, \{\mu_i\}_{i=1}^k, \{j_i\}_{i=1}^{k-1}) = \prod_{r=1}^{k} \prod_{i=j_{r-1}+1}^{j_r} \delta(y_i - \mu_r) \qquad (6.13)$$

which in (6.4) gives a contribution (assuming no repeated y-values)

$$P(k|\{y\}) = \delta_{n,k} \frac{\pi(n)}{\pi(\{y\})} \int d\sigma\, \pi(\sigma, \{y\}, \{i\}|n)\, f(\sigma) + \ldots \qquad (6.14)$$

with f(σ) → 1 for σ → 0. A similar situation also occurs in other cases where perfect fitting occurs (repeated y-values). With the flat, scale-free prior π(...) ∝ 1/σ used above, the result is undefined. A simple fix is to give up complete scale-freeness of the results, i.e., to fix a minimal noise level $\pi(\sigma,\{\mu\},\{j\}|k) \propto \theta(\sigma - \sigma_0)/\sigma$ with some $\sigma_0(\{\mu\},\{j\},k) > 0$.
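One concrete way to pick such a floor is the guess quoted further down in these docs: 0.1 · w0 · the smallest jump between adjacent interval levels, falling back to 0.001 · w0 · |μ| when there is only a single interval, with w0 the median weight. A minimal sketch of that rule:

```python
from statistics import median

def sigma0_floor(mu, weights):
    """Minimal noise level sigma_0, following the guess described in the
    text: 0.1 * w0 * min jump between adjacent interval levels, or
    0.001 * w0 * |mu| for a single-interval fit; w0 is the median weight."""
    w0 = median(weights)
    if len(mu) == 1:
        return 0.001 * w0 * abs(mu[0])
    return 0.1 * w0 * min(abs(b - a) for a, b in zip(mu, mu[1:]))

assert sigma0_floor([10.0], [1.0, 2.0, 3.0]) == 0.001 * 2.0 * 10.0
assert abs(sigma0_floor([1.0, 3.0, 2.5], [1.0, 1.0]) - 0.05) < 1e-12
```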
The effect in the σ integral is to cut off the log-divergence, so that with sufficient accuracy we can in (6.12) replace

$$\ln \sum_{r=1}^{k} \sum_{i=j_{r-1}+1}^{j_r} w_i |y_i - \mu_r| \;\mapsto\; \ln\Big[\sigma_0 + \sum_{r=1}^{k} \sum_{i=j_{r-1}+1}^{j_r} w_i |y_i - \mu_r|\Big] \qquad (6.15)$$

Here, we fix a measurement accuracy floor with the following guess: sigma_0 = 0.1 * w0 * min(abs(diff(mu))), and sigma_0 = 0.001 * w0 * abs(mu) when there is only a single interval. Here, w0 is the median weight. 6.5.3 Autocorrelated noise Practical experience shows that the noise in the benchmark results can be correlated. Often benchmarks are run for multiple commits at once, for example the new commits at a given time, and the benchmark machine does something else between the runs. Alternatively, the background load from other processes on the machine varies with time. To give a basic model for the noise correlations, we include AR(1) Laplace noise in (6.3),

$$P(\{y_i\}_{i=1}^m \,|\, \sigma, \rho, k, \{\mu_i\}_{i=1}^k, \{j_i\}_{i=1}^{k-1}) = N \sigma^{-m} \exp\Big(-\sigma^{-1} \sum_{r=1}^{k} \sum_{i=j_{r-1}+1}^{j_r} |\epsilon_{i,r} - \rho\, \epsilon_{i-1,r}|\Big) \qquad (6.16)$$

where ε_{i,r} = y_i − μ_r with ε_{j_{r−1},r} = y_{j_{r−1}} − μ_{r−1} and ε_{j_0,1} = 0 are the deviations from the stepwise model. The correlation measure ρ is unknown, but assumed to be constant in (−1, 1). Since the parameter ρ is global, it does not change the parameter-counting part of the Schwarz criterion. The maximum likelihood term however does depend on ρ, so that the problem becomes:

$$\mathop{\mathrm{argmin}}_{k,\rho,\{j\},\{\mu\}}\; r(m)\, k + \ln \sum_{r=1}^{k} \sum_{i=j_{r-1}+1}^{j_r} |\epsilon_{i,r} - \rho\, \epsilon_{i-1,r}| \qquad (6.17)$$

To save computation time, we do not solve this optimization problem exactly. Instead, we again minimize along the μ*_r(γ), j*_r(γ) curve provided by the solution to (6.1), and use (6.17) only in selecting the optimal value of the γ parameter. The minimization vs. ρ can be done numerically for given μ*_r(γ), j*_r(γ). This minimization step is computationally cheap compared to the piecewise fit, so including it does not significantly change the runtime of the total algorithm. 6.5.4 Postprocessing For the purposes of regression detection, we do not report all steps the above approach provides.
For details, see asv.step_detect.detect_regressions. 6.5.5 Making use of measured variance asv also measures the variance in the timings. This information is currently used to provide relative data weighting (see above). CHAPTER 7 Changelog 7.1 0.6.1 (2023-09-11) 7.1.1 New Features 7.1.2 API Changes 7.1.3 Bug Fixes • pip dependencies in environment.yml files for the mamba plugin are handled correctly (#1326) • asv.config.json matrix requirements no longer need pip+ set explicitly for calling the pip solver for virtualenv • asv will now use conda_environment_file if it exists (#1325) 7.1.4 Other Changes and Additions • asv timestamps via datetime are now Python 3.12 compatible (#1331) • asv now provides asv[virtualenv] as an installable target • asv now uses GitHub Actions exclusively for Windows and Linux 7.2 0.6.0 (2023-08-20) 7.2.1 New Features • asv_runner is now used internally, making the addition of custom benchmark types viable (#1287) • Benchmarks can be skipped, both wholly and in part, using the new decorators skip_benchmark_if and skip_params_if (#1309) • Benchmarks can be skipped during their execution (after setup) by raising SkipNotImplemented (#1307) • Added default_benchmark_timeout to the configuration object; can also be passed via -a timeout=NUMBER (#1308) • ASV_RUNNER_PATH can be set from the terminal to test newer versions of asv_runner (#1312) 7.2.2 API Changes • Removed asv dev in favor of using asv run with the right arguments (#1200) • asv run and asv continuous don't implement the --strict option anymore, and they will always return a non-zero (i.e. 2) exit status if any benchmark fails. 7.2.3 Bug Fixes • Fixed install_timeout for conda (#1310) • Fixed handling of local pip matrix (#1312) • Fixed the deadlock when mamba is used with an environment file.
(#1300) • Fixed environment file usage with mamba; the default environment.yml is now recognized. (#1303) 7.2.4 Other Changes and Additions • mamba and conda use environment.yml if it exists • virtualenv now requires packaging due to distutils deprecations (#1240) • Wheels are now built for CPython 3.8, 3.9, 3.10, 3.11 7.3 0.5.1 (2021-02-06) 7.3.1 Bug Fixes • Packaging requirements-dev.txt file, used in setup.py. (#1013) 7.4 0.5 (2021-02-05) 7.4.1 New Features • Adding environment variables to build and benchmark commands. (#809, #833) • Added --strict option to asv run to set exit code on failure. (#865) • Added --no-stats option to asv compare and asv continuous. (#879) • Added --durations option to asv run and asv show for displaying benchmark run durations. (#838) • Added --date-period option to asv run for running benchmarks for commits separated by a constant time interval. (#835) • Web UI button to group regressions by benchmark. (#869) • Space-saving v2 file format for storing results. (#847) • timeraw_* benchmarks for measuring e.g. import times. (#832) • Support for using conda environment files for env setup. (#793) 7.4.2 API Changes • Results file format change requires asv update to update old data to the v2 format. • The configuration syntax for “matrix”, “exclude”, and “include” in asv.conf.json has changed. The old syntax is still supported, unless you are installing packages named req, env, env_nobuild.
7.4.3 Bug Fixes • When an asv find step fails due to timeout, assume runtime equal to the timeout to allow bisection to proceed (#768) • Minor fixes and improvements (#897, #896, #888, #881, #877, #876, #875, #861, #870, #868, #867, #866, #864, #863, #857, #786, #854, #855, #852, #850, #844, #843, #842, #839, #841, #840, #837, #836, #834, #831, #830, #829, #828, #826, #825, #824) 7.4.4 Other Changes and Additions • Uniqueness of repr() for param objects is now guaranteed by suffixing a unique identifier corresponding to the order of appearance. (#771) • Memory addresses are now stripped from the repr() of param elements, allowing comparison across multiple runs. (#771) • asv dev is now equivalent to asv run with --python=same default. (#874) • asv continuous by default now records measurement samples, for better comparison statistics. (#878) • ASV now uses PEP 518 pyproject.toml in packaging. (#853) 7.5 0.4.1 (2019-05-30) • Change wheel installation default command to chdir away from build directory instead of --force-install. (#823) 7.6 0.4 (2019-05-26) 7.6.1 New Features • asv check command for a quick check of benchmark suite validity. (#782) • asv run HASHFILE:filename can read commit hashes to run from file or stdin (#768) • --set-commit-hash option to asv run, which allows recording results from runs in “existing” environments not managed by asv (#794) • --cpu-affinity option to asv run and others, to set CPU affinity (#769) • “Hide legend” option in web UI (#807) • pretty_source benchmark attribute for customizing source code shown (#810) • Record number of cores in machine information (#761) 7.6.2 API Changes • Default timer changed from process_time() to timeit.default_timer() to fix resolution issues on Windows.
Old behavior can be restored by setting Benchmark.timer = time.process_time (#780) 7.6.3 Bug Fixes • Fix pip command line in install_command (#806) • Python 3.8 compatibility (#814) • Minor fixes and improvements (#759, #764, #767, #772, #779, #783, #784, #787, #790, #795, #799, #804, #812, #813, #815, #816, #817, #818, #820) 7.6.4 Other Changes and Additions • In case of significant changes, the asv continuous message now reports whether performance decreased or increased. 7.7 0.3.1 (2018-10-20) Minor bugfixes and improvements. • Use measured uncertainties to weigh step detection. (#753) • Detect also single-commit regressions, if significant. (#745) • Use proper two-sample test when raw results available. (#754) • Use a better regression “badness” measure. (#744) • Display verbose command output immediately, not when command completes. (#747) • Fix handling of benchmark suite import failures in forkserver and benchmark discovery. (#743, #742) • Fix forkserver child process handling. • In asv test suite, use dummy conda packages. (#738) • Other minor fixes (#756, #750, #749, #746) 7.8 0.3 (2018-09-09) Major release with several new features. 7.8.1 New Features • Revised timing benchmarking. asv will display and record the median and interquartile ranges of timing measurement results. The information is also used by asv compare and asv continuous in determining what changes are significant. The asv run command has new options for collecting samples. Timing benchmarks have new benchmarking parameters for controlling how timing works, including a processes attribute for collecting data by running benchmarks in different sequential processes. The defaults are adjusted to obtain faster benchmarking. (#707, #698, #695, #689, #683, #665, #652, #575, #503, #493) • Interleaved benchmark running.
Timing benchmarks can be run in interleaved order via asv run --interleave-processes, to obtain better sampling over long-time background performance variations. (#697, #694, #647) • Customization of build/install/uninstall commands. (#699) • Launching benchmarks via a fork server (on Unix-based systems). Reduces the import time overheads in launching new benchmarks. Default on Linux. (#666, #709, #730) • Benchmark versioning. Invalidate old benchmark results when benchmarks change, via a benchmark version attribute. User-configurable, by default based on source code. (#509) • Setting benchmark attributes on the command line, via --attribute. (#647) • asv show command for displaying results on the command line. (#711) • Support for Conda channels. (#539) • Provide ASV-specific environment variables to launched commands. (#624) • Show branch/tag names in addition to commit hashes. (#705) • Support for projects in repository subdirectories. (#611) • Way to run specific parametrized benchmarks. (#593) • Group benchmarks in the web benchmark grid (#557) • Make the web interface URL addresses more copypasteable. (#608, #605, #580) • Allow customizing benchmark display names (#484) • Don't reinstall project if it is already installed (#708) 7.8.2 API Changes • The goal_time attribute in timing benchmarks is removed (and now ignored). See the documentation on how to tune timing benchmarks now. • asv publish may ask you to run asv update once after upgrading, to regenerate benchmarks.json if asv run was not yet run. • If you are using asv plugins, check their compatibility. The internal APIs in asv are not guaranteed to be backward compatible. 7.8.3 Bug Fixes • Fixes in 0.2.1 and 0.2.2 are also included in 0.3.
• Make asv compare accept named commits (#704) • Fix asv profile --python=same (#702) • Make asv compare behave correctly with multiple machines/envs (#687) • Avoid making too long result file names (#675) • Fix saving profile data (#680) • Ignore missing branches during benchmark discovery (#674) • Perform benchmark discovery only when necessary (#568) • Fix benchmark skipping to operate on a per-environment basis (#603) • Allow putting asv.conf.json in the benchmark suite directory (#717) • Miscellaneous minor fixes (#735, #734, #733, #729, #728, #727, #726, #723, #721, #719, #718, #716, #715, #714, #713, #706, #701, #691, #688, #684, #682, #660, #634, #615, #600, #573, #556) 7.8.4 Other Changes and Additions • www: display regressions separately, one per commit (#720) • Internal changes. (#712, #700, #681, #663, #662, #637, #613, #606, #572) • CI/etc changes. (#585, #570) • Added internal debugging command asv.benchmarks (#685) • Make tests not require a network connection, except with Conda (#696) • Drop support for end-of-lifed Python versions 2.6 & 3.2 & 3.3 (#548) 7.9 0.3b1 (2018-08-29) Prerelease. Same as 0.3rc1, minus #721– 7.10 0.2.2 (2018-07-14) Bugfix release with minor feature additions.
7.10.1 New Features • Add a --no-pull option to asv publish and asv run (#592) • Add a --rewrite option to asv gh-pages and fix bugs (#578, #529) • Add a --html-dir option to asv publish (#545) • Add a --yes option to asv machine (#540) • Enable running via python -masv (#538) 7.10.2 Bug Fixes • Fix support for mercurial >= 4.5 (#643) • Fix detection of git subrepositories (#642) • Find conda executable in the “official” way (#646) • Hide tracebacks in testing functions (#601) • Launch virtualenv in a more sensible way (#555) • Disable user site directory also when using conda (#553) • Set PIP_USER to false when running an executable (#524) • Set PATH for commands launched inside environments (#541) • os.environ can only contain bytes on Win/py2 (#528) • Fix hglib encoding issues on Python 3 (#508) • Set GIT_CEILING_DIRECTORIES for Git (#636) • Run pip via python -mpip to avoid shebang limits (#569) • Always use https URLs (#583) • Add a min-height on graphs to avoid a flot traceback (#596) • Escape label html text in plot legends (#614) • Disable pip build isolation in wheel_cache (#670) • Fix up CI, test, etc. issues (#616, #552, #601, #586, #554, #549, #571, #527, #560, #565) 7.11 0.2.2rc1 (2018-07-09) Same as 0.2.2, minus #670. 7.12 0.2.1 (2017-06-22) 7.12.1 Bug Fixes • Use process groups on Windows (#489) • Sanitize html filenames (#498) • Fix incorrect date formatting + default sort order in web ui (#504) 7.13 0.2 (2016-10-22) 7.13.1 New Features • Automatic detection and listing of performance regressions. (#236) • Support for Windows. (#282) • New setup_cache method. (#277) • Exclude/include rules in configuration matrix. (#329) • Command-line option for selecting environments. (#352) • Possibility to include packages via pip in conda environments. (#373) • The pretty_name attribute can be used to change the display name of benchmarks.
(#425) • Git submodules are supported. (#426) • The time when benchmarks were run is tracked. (#428) • New summary web page showing a list of benchmarks. (#437) • Atom feed for regressions. (#447) • PyPy support. (#452) 7.13.2 API Changes • The parent directory of the benchmark suite is no longer inserted into sys.path. (#307) • Repository mirrors are no longer created for local repositories. (#314) • In the asv.conf.json matrix, null previously meant (undocumented) the latest version. Now it means that the package is not to be installed. (#329) • Previously, the setup and teardown methods were run only once even when the benchmark method was run multiple times, for example due to repeat > 1 being present in timing benchmarks. This is now changed so that they are also run multiple times. (#316) • The default branch for Mercurial is now default, not tip. (#394) • Benchmark results are now by default ordered by commit, not by date. (#429) • When asv run and other commands are called without specifying revisions, the default values are taken from the branches in asv.conf.json. (#430) • The default value for --factor in asv continuous and asv compare was changed from 2.0 to 1.1 (#469). 7.13.3 Bug Fixes • Output will display on non-Unicode consoles. (#313, #318, #336) • Longer default install timeout. (#342) • Many other bugfixes and minor improvements. 7.14 0.2rc2 (2016-10-17) Same as 0.2. 7.15 0.1.1 (2015-05-05) First full release. 7.16 0.1rc3 (2015-05-01) 7.16.1 Bug Fixes Include pip_requirements.txt. Display version correctly in docs. 7.17 0.1rc2 (2015-05-01) 7.18 0.1rc1 (2015-05-01) CHAPTER 8 Credits • <NAME> (founder) • <NAME> The rest of the contributors are listed in alphabetical order.
• <NAME> • @afragner • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • @DWesl • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • @jbrockmendel • jeremie du boisberranger • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> (Xarthisius) • <NAME> • @Leenkiz • <NAME> • <NAME> • @mariamadronah • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • @pawel • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • serge-sans-paille • Sourcery AI • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • <NAME> • @Warbo • <NAME> • <NAME> • <NAME> Bibliography [Friedrich2008] <NAME> et al., “Complexity Penalized M-Estimation: Fast Computation”, Journal of Computational and Graphical Statistics 17.1, 201-224 (2008). http://dx.doi.org/10.1198/106186008X285591 [Yao1988] <NAME>, “Estimating the number of change-points via Schwarz criterion”, Statistics & Probability Letters 6, 181-189 (1988). http://dx.doi.org/10.1016/0167-7152(88)90118-6
Curtail v2.0.0 API Reference === Modules --- [Curtail](Curtail.html) An HTML-safe string truncator [Curtail.Html](Curtail.Html.html) Helper methods for [`Curtail`](Curtail.html) [Curtail.Options](Curtail.Options.html) Curtail v2.0.0 Curtail === An HTML-safe string truncator Usage --- ``` Curtail.truncate("<p>Truncate me</p>", options) ``` Summary === Functions --- [truncate(string, opts \\ [])](#truncate/2) Safely truncates a string that contains HTML tags Functions === truncate(string, opts \\ []) Safely truncates a string that contains HTML tags. Options --- * length (default: 100) * omission (default: “…”) * word_boundary (default: ~r/\S/) * break_token (default: nil) Examples --- ``` iex> Curtail.truncate("<p>Truncate me!</p>") "<p>Truncate me!</p>" iex> Curtail.truncate("<p>Truncate me!</p>", length: 12) "<p>Truncate...</p>" ``` Truncate without omission string: ``` iex> Curtail.truncate("<p>Truncate me!</p>", omission: "", length: 8) "<p>Truncate</p>" ``` Truncate with custom word_boundary: ``` iex> Curtail.truncate("<p>Truncate. Me!</p>", word_boundary: ~r/\S[.]/, length: 12, omission: "") "<p>Truncate.</p>" ``` Truncate without word boundary: ``` iex> Curtail.truncate("<p>Truncate me</p>", word_boundary: false, length: 7) "<p>Trun...</p>" ``` Truncate with custom break_token: ``` iex> Curtail.truncate("<p>This should be truncated here<break_here>!!</p>", length: 49, break_token: "<break_here>") "<p>This should be truncated here</p>" ``` Curtail v2.0.0 Curtail.Html === Helper methods for [`Curtail`](Curtail.html).
Summary === Functions --- comment?(token) matching_close_tag(token) matching_close_tag?(open_tag, close_tag) open_tag?(token) tag?(token) Curtail v2.0.0 Curtail.Options === Summary === Functions --- new(opts \\ [])
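For readers outside Elixir, the core idea behind an HTML-safe truncator like `Curtail.truncate/2` can be sketched in Python. This is a simplified, hypothetical re-implementation, not Curtail's algorithm: only visible text counts toward the length, tags left open at the cut are re-closed, and comments, void elements, and word-boundary options are ignored, so its edge-case behaviour differs from the examples above.

```python
import re

TOKEN = re.compile(r"<[^>]+>|[^<]+")  # naive split into tags and text runs

def truncate_html(html, length=100, omission="..."):
    out, open_tags, remaining = [], [], length
    for tok in TOKEN.findall(html):
        if tok.startswith("<"):
            name = re.match(r"</?([a-zA-Z0-9]+)", tok).group(1)
            if tok.startswith("</"):
                if open_tags and open_tags[-1] == name:
                    open_tags.pop()
            else:
                open_tags.append(name)  # treats <br/> as open: a simplification
            out.append(tok)
        elif len(tok) > remaining:
            keep = max(remaining - len(omission), 0)
            out.append(tok[:keep].rstrip() + omission)
            break
        else:
            out.append(tok)
            remaining -= len(tok)
    # re-close anything still open at the cut point
    return "".join(out) + "".join("</%s>" % t for t in reversed(open_tags))

assert truncate_html("<p>Hello world</p>", length=8) == "<p>Hello...</p>"
assert truncate_html("<p>Hi</p>", length=8) == "<p>Hi</p>"
```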
TelemetryMetricsStatsd === [`Telemetry.Metrics`](https://hexdocs.pm/telemetry_metrics/0.6.1/Telemetry.Metrics.html) reporter for StatsD-compatible metric servers. To use it, start the reporter with the [`start_link/1`](#start_link/1) function, providing it a list of [`Telemetry.Metrics`](https://hexdocs.pm/telemetry_metrics/0.6.1/Telemetry.Metrics.html) metric definitions: ``` import Telemetry.Metrics TelemetryMetricsStatsd.start_link( metrics: [ counter("http.request.count"), sum("http.request.payload_size"), last_value("vm.memory.total") ] ) ``` > Note that in a real project the reporter should be started under a supervisor, e.g. the main > supervisor of your application. By default the reporter sends metrics to 127.0.0.1:8125 - both the hostname and the port number can be configured using the `:host` and `:port` options. ``` TelemetryMetricsStatsd.start_link( metrics: metrics, host: "statsd", port: 1234 ) ``` Alternatively, a Unix domain socket path can be provided using the `:socket_path` option. ``` TelemetryMetricsStatsd.start_link( metrics: metrics, socket_path: "/var/run/statsd.sock" ) ``` If the `:socket_path` option is provided, the `:host` and `:port` parameters are ignored and the connection is established exclusively via the Unix domain socket. Note that the reporter doesn't aggregate metrics in-process - it sends metric updates to StatsD whenever a relevant Telemetry event is emitted. By default, the reporter sends metrics through a single socket. To reduce contention when there are many metrics to be sent, more sockets can be configured to be opened through the `pool_size` option. ``` TelemetryMetricsStatsd.start_link( metrics: metrics, pool_size: 10 ) ``` When the `pool_size` is bigger than 1, the sockets are randomly selected out of the pool each time they need to be used. Translation between Telemetry.Metrics and StatsD --- In this section we walk through how the Telemetry.Metrics metric definitions are mapped to StatsD metrics and their types at runtime.
Telemetry.Metrics metric names are translated as follows: * if the metric name was provided as a string, e.g. `"http.request.count"`, it is sent to the StatsD server as-is * if the metric name was provided as a list of atoms, e.g. `[:http, :request, :count]`, it is first converted to a string by joining the segments with dots. In this example, the StatsD metric name would be `"http.request.count"` as well Since there are multiple implementations of StatsD and each of them provides a slightly different set of features, other aspects of metric translation are controlled by the formatters. The formatter can be selected using the `:formatter` option. Currently only two formats are supported - `:standard` and `:datadog`. The following table shows how [`Telemetry.Metrics`](https://hexdocs.pm/telemetry_metrics/0.6.1/Telemetry.Metrics.html) metrics map to standard StatsD metrics: | Telemetry.Metrics | StatsD | | --- | --- | | `last_value` | `gauge` | | `counter` | `counter` | | `sum` | `gauge` or `counter` | | `summary` | `timer` | | `distribution` | `timer` | [DataDog](https://docs.datadoghq.com/developers/metrics/types/?tab=count#metric-types) provides a richer set of metric types: | Telemetry.Metrics | DogStatsD | | --- | --- | | `last_value` | `gauge` | | `counter` | `counter` | | `sum` | `gauge` or `counter` | | `summary` | `histogram` | | `distribution` | `distribution` | ### The standard StatsD formatter The `:standard` formatter is compatible with the [Etsy implementation](https://github.com/statsd/statsd/blob/master/docs/metric_types.md) of StatsD. Since this particular implementation doesn't support explicit tags, tag values are appended as consecutive segments of the metric name. For example, given the definition ``` counter("db.query.count", tags: [:table, :operation]) ``` and the event ``` :telemetry.execute([:db, :query], %{}, %{table: "users", operation: "select"}) ``` the StatsD metric name would be `"db.query.count.users.select"`.
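This name-mangling can be sketched as follows. The helper is hypothetical (it is not telemetry_metrics_statsd's API), shown here in Python for illustration; it also applies the standard formatter's rule that measurements are sent as integers:

```python
def format_standard(name, value, metric_type, tags=None, declared_tags=()):
    """Build one standard-formatter line: tag values are appended to the
    metric name in declaration order, and the measurement is sent as an
    integer.  (Hypothetical sketch, not the library's API.)"""
    segments = [name] + [str(tags[t]) for t in declared_tags]
    return "%s:%d|%s" % (".".join(segments), round(value), metric_type)

assert format_standard("http.request.count", 1, "c") == "http.request.count:1|c"
assert format_standard(
    "db.query.count", 1, "c",
    tags={"table": "users", "operation": "select"},
    declared_tags=("table", "operation"),
) == "db.query.count.users.select:1|c"
```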
Note that the tag values are appended to the base metric name in the order they were declared in the metric definition. Another important aspect of the standard formatter is that all measurements are converted to integers, i.e. no floats are ever sent to the StatsD daemon. Now to the metric types! #### Counter The Telemetry.Metrics counter is simply represented as a StatsD counter. Each event the metric is based on increments the counter by 1. To be more concrete, given the metric definition ``` counter("http.request.count") ``` and the event ``` :telemetry.execute([:http, :request], %{duration: 120}) ``` the following line would be sent to StatsD ``` "http.request.count:1|c" ``` Note that the counter was bumped by 1, regardless of the measurements included in the event (a careful reader will notice that the `:count` measurement we chose for the metric wasn't present in the map of measurements at all!). Such behaviour conforms to the specification of the counter as defined by the [`Telemetry.Metrics`](https://hexdocs.pm/telemetry_metrics/0.6.1/Telemetry.Metrics.html) package - a counter should be incremented by 1 every time a given event is dispatched. #### Last value The last value metric is represented as a StatsD gauge, whose values are always set to the value of the measurement from the most recent event. With the following metric definition ``` last_value("vm.memory.total") ``` and the event ``` :telemetry.execute([:vm, :memory], %{total: 1024}) ``` the following metric update would be sent to StatsD ``` "vm.memory.total:1024|g" ``` #### Sum The sum metric is also represented as a gauge - the difference is that it always changes relatively and is never set to an absolute value.
Given the metric definition below ``` sum("http.request.payload_size") ``` and the event ``` :telemetry.execute([:http, :request], %{payload_size: 1076}) ``` the following line would be sent to StatsD ``` "http.request.payload_size:+1076|g" ``` When the measurement is negative, the StatsD gauge is decreased accordingly. When the `report_as: :counter` reporter option is passed, the sum metric is reported as a counter and increased by the value provided. Only positive values are allowed; negative measurements are discarded and logged. Given the metric definition ``` sum("kafka.consume.batch_size", reporter_options: [report_as: :counter]) ``` and the event ``` :telemetry.execute([:kafka, :consume], %{batch_size: 200}) ``` the following would be sent to StatsD ``` "kafka.consume.batch_size:200|c" ``` #### Summary The summary is simply represented as a StatsD timer, since it should generate statistics about the gathered measurements. Given the metric definition below ``` summary("http.request.duration") ``` and the event ``` :telemetry.execute([:http, :request], %{duration: 120}) ``` the following line would be sent to StatsD ``` "http.request.duration:120|ms" ``` #### Distribution There is no metric in the original StatsD implementation equivalent to the Telemetry.Metrics distribution. However, histograms can be enabled for selected timer metrics in the [StatsD daemon configuration](https://github.com/statsd/statsd/blob/master/docs/metric_types.md#timing). Because of that, the distribution is also reported as a timer. For example, given the following metric definition ``` distribution("http.request.duration") ``` and the event ``` :telemetry.execute([:http, :request], %{duration: 120}) ``` the following line would be sent to StatsD ``` "http.request.duration:120|ms" ``` ### The DataDog formatter The DataDog formatter is compatible with [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/), the DataDog StatsD service bundled with its agent.
#### Tags The main difference from the standard formatter is that DataDog supports explicit tagging in its protocol. Using the same example as with the standard formatter, given the following definition ``` counter("db.query.count", tags: [:table, :operation]) ``` and the event ``` :telemetry.execute([:db, :query], %{}, %{table: "users", operation: "select"}) ``` the metric update packet sent to StatsD would be `db.query.count:1|c|#table:users,operation:select`. #### Metric types There is no difference in how the counter and last value metrics are handled between the standard and DataDog formatters. The sum metric is reported as a DataDog counter, which is transformed into a rate metric in DataDog: <https://docs.datadoghq.com/developers/metrics/dogstatsd_metrics_submission/#count>. To be able to observe the actual sum of measurements, make sure to use the [`as_count()`](https://docs.datadoghq.com/developers/metrics/type_modifiers/?tab=rate#in-application-modifiers) modifier in your DataDog dashboard. The `report_as: :counter` option does not have any effect with the DataDog formatter. The summary metric is reported as a [DataDog histogram](https://docs.datadoghq.com/developers/metrics/types/?tab=histogram), as that is the metric that provides a set of statistics about the gathered measurements on the DataDog side. The distribution is flushed as a [DataDog distribution](https://docs.datadoghq.com/developers/metrics/types/?tab=distribution) metric, which provides statistically correct aggregations of data gathered from multiple services or DogStatsD agents. Also note that DataDog allows measurements to be floats; that's why no rounding is performed when formatting the metric. Global tags --- The library provides an option to specify a set of global tag values, which are available to all metrics running under the reporter.
For example, if you're running your application in multiple deployment environments (staging, production, etc.), you might set the environment as a global tag: ``` TelemetryMetricsStatsd.start_link( metrics: [ counter("http.request.count", tags: [:env]) ], global_tags: [env: "prod"] ) ``` Note that if the global tag is to be sent with the metric, the metric needs to have it listed under the `:tags` option, just like any other tag. Also, if the same key is configured as a global tag and emitted as part of the event metadata or returned by the `:tag_values` function, the metadata/`:tag_values` take precedence and override the global tag value. Prefixing metric names --- Sometimes it's convenient to prefix all metric names with a particular value, to group them by the name of the service, the host, or something else. You can use the `:prefix` option to provide a prefix which will be prepended to all metrics published by the reporter (regardless of the formatter used). Maximum datagram size --- Metrics are sent to StatsD over UDP, so it's important that the size of the datagram does not exceed the Maximum Transmission Unit, or MTU, of the link, so that no data is lost on the way. By default the reporter breaks up the datagrams at 512 bytes, but this is configurable via the `:mtu` option. Sampling data --- It's not always convenient to capture every piece of data, such as in the case of high-traffic applications. In those cases, you may want to capture a "sample" of the data. You can do this by passing `[sampling_rate: <rate>]` as an option to `:reporter_options`, where `rate` is a value between 0.0 and 1.0. The default `:sampling_rate` is 1.0, which means that all the measurements are captured.
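The sampling decision itself amounts to a simple probabilistic filter. The sketch below is an illustration of the idea, not the library's implementation; the `keep` helper is hypothetical:

```python
import random

def keep(sampling_rate, rng=random.random):
    """Report a measurement with probability sampling_rate.
    With sampling_rate = 1.0 every measurement is kept, since
    rng() is drawn from [0.0, 1.0)."""
    return rng() < sampling_rate

random.seed(0)  # fixed seed so the sketch is reproducible
kept = sum(keep(0.1) for _ in range(10_000))  # roughly 1,000 of 10,000
```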
### Example ``` TelemetryMetricsStatsd.start_link( metrics: [ counter("http.request.count"), summary("http.request.duration", reporter_options: [sampling_rate: 0.1]), distribution("http.request.duration", reporter_options: [sampling_rate: 0.1]) ] ) ``` In this example, we are capturing 100% of the measurements for the counter, but only 10% for both the summary and the distribution. Summary === Types --- [host()](#t:host/0) [option()](#t:option/0) [options()](#t:options/0) [prefix()](#t:prefix/0) Functions --- [child_spec(init_arg)](#child_spec/1) Reporter's child spec. [start_link(options)](#start_link/1) Starts a reporter and links it to the calling process. Modules --- [TelemetryMetricsStatsd](TelemetryMetricsStatsd.html) [`Telemetry.Metrics`](https://hexdocs.pm/telemetry_metrics/0.6.1/Telemetry.Metrics.html) reporter for StatsD-compatible metric servers.
DESIGN AND ANALYSIS OF ALGORITHMS
Lecture notes for the course "Algoritmi e Strutture Dati"
Degree programme in Computer Science
Internal Report n. 230-98
Dipartimento di Scienze dell'Informazione, Università degli Studi di Milano
September 2004
<NAME> <NAME>

Contents

1 Introduction
1.1 The notion of algorithm
1.2 The complexity of an algorithm
1.3 Orders of magnitude of time complexity

2 Preliminary notions
2.1 Basic notation
2.2 Elements of combinatorics
2.3 Asymptotic expressions
2.4 Estimating sums (geometric series; sums of powers of integers; estimation by integrals)

3 Models of computation
3.1 The random access machine (RAM): its programming language and the computational complexity of RAM programs
3.2 The RASP machine
3.3 Computability and effective computability
3.4 A high-level language: AG

4 Elementary data structures
4.1 Arrays and records
4.2 Lists and their implementations
4.3 Stacks
4.4 Queues
4.5 Graphs
4.6 Trees (rooted trees; ordered trees; binary trees)
4.7 Example: breadth-first graph traversal

5 Recursive procedures
5.1 Analysis of recursion
5.2 Tail recursion; binary search
5.3 Tree traversal
5.4 Graph traversal; depth-first search

6 Recurrence equations
6.1 Analysis of recursive procedures
6.2 Upper bounds
6.3 The method of summing factors
6.4 Divide-and-conquer equations; integer parts
6.5 Linear equations with constant coefficients (homogeneous and non-homogeneous)
6.6 Change of variable; the Quicksort equation

7 Generating functions
7.1 Definitions
7.2 Generating functions and recurrence equations
7.3 Computing generating functions (operations on numeric sequences and on generating functions)
7.4 Applications (counting binary trees; average-case analysis of Quicksort)
7.5 Estimating the coefficients of a generating function (rational functions; logarithmic functions)

8 Sorting algorithms
8.1 General characteristics
8.2 Minimum number of comparisons
8.3 Insertion sort
8.4 Heapsort (building a heap; description of the algorithm)
8.5 Quicksort (analysis; specification; memory optimization)
8.6 Order statistics
8.7 Bucketsort

9 Data structures and search algorithms
9.1 Heterogeneous algebras
9.2 Abstract programs and their implementations
9.3 Implementing dictionaries by hashing
9.4 Binary search trees
9.5 2-3 trees
9.6 B-trees
9.7 The UNION and FIND operations (forests with balancing; path compression)

10 The divide-and-conquer method
10.1 General scheme
10.2 Computing the maximum and minimum of a sequence
10.3 Mergesort
10.4 Integer multiplication
10.5 Strassen's algorithm
10.6 The discrete Fourier transform (the discrete transform and its inverse; the fast Fourier transform; polynomial multiplication; integer multiplication)

11 Dynamic programming
11.1 A simple example
11.2 The general method
11.3 Multiplying n matrices
11.4 Transitive closure
11.5 Shortest paths

12 Greedy algorithms
12.1 Optimization problems
12.2 Analysis of greedy procedures
12.3 Matroids and Rado's theorem
12.4 Kruskal's algorithm
12.5 Prim's algorithm
12.6 Dijkstra's algorithm
12.7 Huffman codes (binary codes; description of the algorithm; correctness)

13 NP-complete problems
13.1 Intractable problems
13.2 The class P
13.3 The class NP
13.4 Polynomial reducibility and NP-complete problems (polynomial reduction from SODD to CLIQUE; from SODD to 3-SODD)
13.5 Cook's theorem (Turing machines; proof)

Chapter 1
Introduction

The activity of programming can be roughly divided into two distinct areas. The first is called programming in the large and concerns the computational solution of large-scale problems (think of the development of the information system of a multinational company). The second may be called programming in the small and consists of finding a good algorithmic solution to specific, well-formalized problems (think of sorting algorithms). The aim of this course is to provide an introduction to the basic notions and methods underlying this second kind of activity, devoted to the study of the representation and manipulation of information with the aid of the theory of algorithms and of data organization.
This is a typical cross-cutting activity that finds application in all disciplinary areas of computer science, while having its own methods and its own autonomy, to the point of constituting one of the nine branches into which the ACM (Association for Computing Machinery) divides Computer Science: Algorithms and data structures, Programming languages, Computer architecture, Operating systems, Software engineering, Numerical and symbolic computation, Databases and information retrieval systems, Artificial intelligence, Vision and robotics.

1.1 The notion of algorithm

Informally, an algorithm is a procedure consisting of a finite sequence of elementary operations that transforms one or more input values into one or more output values. An algorithm therefore implicitly defines a function from the set of inputs to the set of outputs, and at the same time describes an effective procedure that makes it possible to determine, for every possible input, the corresponding output values. Given an algorithm A, we denote by $f_A$ the function that associates with each input x of A the corresponding output $f_A(x)$. This correspondence between inputs and outputs represents the problem solved by the algorithm. Formally, a problem is a function $f : D_I \to D_S$, defined on a set $D_I$ of elements called instances, with values in a set $D_S$ of solutions. To highlight the two sets and the corresponding mapping, a problem will in general be described using the following representation:

Problem NAME
Instance: $x \in D_I$
Solution: $f(x) \in D_S$

We say that an algorithm A solves a problem f if $f(x) = f_A(x)$ for every instance x.

Executing an algorithm on a given input requires the consumption of a certain amount of resources; these can be represented by the computation time used, by the memory space occupied, or by the number and variety of the computing devices employed. It is in general important to be able to evaluate the amount of resources consumed, precisely because excessive consumption can compromise the very usability of an algorithm. A traditional way to carry out this evaluation is to fix a precise model of computation and to define with respect to it the very notion of algorithm and the resources it consumes. In this course we refer to models of computation with a single processor; in particular we introduce the RAM model, thus treating only the theory of sequential algorithms. In this setting the main resources we consider are computation time and memory space. We can thus group the issues concerning the study of algorithms into three main areas:

1. Synthesis (also called design): given a problem f, construct an algorithm A that solves f, i.e. such that $f = f_A$. In this course we study some synthesis methods, such as recursion, the divide-and-conquer technique, dynamic programming and greedy techniques.

2. Analysis: given an algorithm A and a problem f, prove that A solves f, i.e. that $f = f_A$ (correctness), and evaluate the amount of resources used by A (concrete complexity). The algorithms presented in the course will be supported by sketches of correctness proofs, and mathematical techniques will be developed to enable the analysis of concrete complexity. Among these we mention in particular the study of recurrence relations by means of generating functions.

3. Classification (or structural complexity): given an amount T of resources, identify the class of problems solvable by algorithms that use at most that amount. In this course we consider the classes P and NP, with some details on the theory of NP-completeness.

1.2 The complexity of an algorithm

Two reasonable measures for sequential computing systems are the values $T_A(x)$ and $S_A(x)$, which represent respectively the computation time and the memory space required by an algorithm A on input x. We may regard $T_A(x)$ and $S_A(x)$ as positive integers given, respectively, by the number of elementary operations executed and by the number of memory cells used during the execution of A on the instance x. Describing the functions $T_A(x)$ and $S_A(x)$ can be very complicated, since the variable x ranges over the set of all inputs. A solution that provides good information about $T_A$ and $S_A$ consists in introducing the notion of the size of an instance, thereby grouping together all inputs of the same size: the size (or length) function associates with every input a natural number that intuitively represents the amount of information contained in the datum considered. For example, the natural size of a positive integer n is $1 + \lfloor \log_2 n \rfloor$, i.e. the number of digits needed to represent n in binary notation. Similarly, the size of a vector is usually the number of its components, while the size of a graph is given by the number of its nodes. In what follows, for every instance x we denote its size by |x|. The following problem now arises: given an algorithm A on a set of inputs I, two instances $x, x' \in I$ of equal size, i.e. with $|x| = |x'|$, may give rise to different execution times, i.e. $T_A(x) \ne T_A(x')$; how, then, should the running time of A be defined as a function of the size alone?
One possible solution is to consider the worst time over all inputs of a fixed size n; a second is to consider the average time. We can then give the following definitions:

1. we call worst-case complexity the function $T_A^p : \mathbb{N} \to \mathbb{N}$ such that, for every $n \in \mathbb{N}$,
$$T_A^p(n) = \max\{T_A(x) \mid |x| = n\};$$

2. we call average-case complexity the function $T_A^m : \mathbb{N} \to \mathbb{R}$ such that, for every $n \in \mathbb{N}$,
$$T_A^m(n) = \frac{\sum_{|x|=n} T_A(x)}{I_n}$$
where $I_n$ is the number of instances $x \in I$ of size n.

In an entirely analogous way we can define the space complexity in the worst case, $S_A^p(n)$, and in the average case, $S_A^m(n)$. In this way time or space complexity becomes a function T(n) defined on the positive integers, with all the advantages that the simplicity of this notion brings. Particularly significant is the so-called asymptotic complexity, i.e. the behaviour of the function T(n) for large values of n; in this respect, asymptotic notation and the mathematical techniques enabling these evaluations are of great help. It is natural to ask whether the worst-case or the average-case complexity provides more information. One can reasonably observe that the evaluations obtained in the two cases should be suitably combined, since both measures have advantages and drawbacks. For instance, worst-case complexity often yields an overly pessimistic evaluation; conversely, average-case complexity assumes a uniform distribution over the instances, an assumption that is questionable in many applications.

1.3 Orders of magnitude of time complexity

The main criterion usually adopted to evaluate the behaviour of an algorithm is based on the asymptotic analysis of its time complexity (in the worst or in the average case).
In particular, the order of magnitude of this quantity, as the parameter n tends to $+\infty$, provides an evaluation of how quickly the computation time grows as the size of the problem increases. This evaluation is usually sufficient to establish whether an algorithm is usable and to compare the performance of different procedures. The criterion is obviously meaningful for determining the behaviour of an algorithm on inputs of large size, while it is of little relevance if we are interested in its performance on inputs of small size. It should however be kept in mind that even a small difference in the order of magnitude of the complexity of two procedures can entail enormous differences in their performance. An excessively high order of magnitude can even make a procedure completely unusable, even on inputs that are small by the usual standards. The two following tables give a more precise idea of the actual time corresponding to typical complexity functions often encountered in the analysis of algorithms. In the first we compare the computation times required on instances of various sizes by six algorithms whose time complexities are respectively $n$, $n \log_2 n$, $n^2$, $n^3$, $2^n$ and $3^n$, assuming that one elementary operation can be executed in one microsecond, i.e. $10^{-6}$ seconds. Moreover, we use the following notation for the units of time: µs = microseconds, ms = milliseconds, s = seconds, mn = minutes, h = hours, g = days, a = years and c = centuries; when the time becomes too long to be meaningful, we use the symbol ∞ to indicate a period in any case longer than a millennium.

Complexity   n=10     n=20     n=50      n=100      n=10^3    n=10^4   n=10^5   n=10^6
n            10µs     20µs     50µs      0.1ms      1ms       10ms     0.1s     1s
n log2 n     33.2µs   86.4µs   0.28ms    0.6ms      9.9ms     0.1s     1.6s     19.9s
n^2          0.1ms    0.4ms    2.5ms     10ms       1s        100s     2.7h     11.5g
n^3          1ms      8ms      125ms     1s         16.6mn    11.5g    31.7a    300c
2^n          1ms      1s       35.7a     10^14 c    ∞         ∞        ∞        ∞
3^n          59ms     58mn     10^8 c    ∞          ∞         ∞        ∞        ∞

In the second table we report the maximum input sizes that can be processed in one minute by the same algorithms.

Time complexity   Maximum size
n                 6 × 10^7
n log2 n          28 × 10^5
n^2               77 × 10^2
n^3               390
2^n               25

Examining the two tables immediately shows that algorithms with linear, or slightly more than linear (n log n), time complexity can be used efficiently even on inputs of large size. For this reason, one of the first goals generally pursued in designing an algorithm for a given problem is precisely to find a procedure of linear, or at most n log n, complexity. Algorithms whose complexity is of order $n^k$, for $k \ge 2$, are applicable only when the input size is not too large. In particular, if $2 \le k < 3$, instances of moderate size can be processed in reasonable time, while for $k \ge 3$ that size shrinks drastically, and the times needed to process long inputs become unacceptable. Finally, note that algorithms with exponential complexity (for example $2^n$ or $3^n$) exhibit prohibitive computation times even for bounded input sizes. For this reason, algorithms whose time complexity is of order $a^n$ for some $a > 1$ are generally considered inefficient. They are usually employed only on particularly small inputs, in the absence of more efficient algorithms, or when the leading constants, neglected in the asymptotic analysis, are so small as to allow an application to inputs of suitable size.
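The second table can be reproduced numerically. The sketch below (an illustration added here, not part of the original notes) searches for the largest n whose operation count fits in one minute at one elementary operation per microsecond:

```python
import math

def max_size(cost, budget):
    """Largest n with cost(n) <= budget, found by doubling then binary search."""
    if cost(1) > budget:
        return 0
    hi = 1
    while cost(2 * hi) <= budget:
        hi *= 2
    lo, hi = hi, 2 * hi  # invariant: cost(lo) <= budget < cost(hi)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cost(mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

BUDGET = 60 * 10**6  # operations in one minute, at one operation per microsecond

table = {
    "n":        max_size(lambda n: n, BUDGET),            # 6 * 10^7
    "n log2 n": max_size(lambda n: n * math.log2(n), BUDGET),
    "n^2":      max_size(lambda n: n**2, BUDGET),         # about 77 * 10^2
    "n^3":      max_size(lambda n: n**3, BUDGET),         # about 390
    "2^n":      max_size(lambda n: 2**n, BUDGET),         # 25
}
```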
One might think that the general evaluations reported above depend on the current level of technology and are bound to be superseded by the advent of more sophisticated technology producing significantly faster computing devices. This opinion is easily refuted by considering the increase, due to greater speed in executing the basic operations, in the maximum input sizes that can be handled within a fixed time. Suppose we have two computers, which we call C1 and C2, and assume that C2 is M times faster than C1, where M is a parameter greater than 1. Thus, if C1 performs a certain computation in time t, C2 performs the same computation in time t/M. The following table shows how the maximum input size that can be handled in a fixed time by algorithms of various time complexities grows in passing from C1 to C2.

Time complexity   Max size on C1   Max size on C2
n                 d1               M · d1
n lg n            d2               ≈ M · d2   (for d2 >> 0)
n^2               d3               √M · d3
2^n               d4               d4 + lg M

As the table shows, linear (n) or almost linear (n lg n) algorithms take full advantage of the move to the more powerful technology; for polynomial algorithms (n^2) the advantage is evident but damped, while for exponential algorithms (2^n) the technological change is almost irrelevant.

Chapter 2
Preliminary notions

In this chapter we recall the basic mathematical concepts, and the corresponding notation, in common use in the design and analysis of algorithms. We review the elementary notions of combinatorics and the fundamental concepts for studying the asymptotic behaviour of numerical sequences.
2.1 Basic notation

We first present the notation used in this and the following chapters for the traditional number sets: $\mathbb{N}$ denotes the set of natural numbers; $\mathbb{Z}$ the set of integers; $\mathbb{Q}$ the set of rational numbers; $\mathbb{R}$ the set of real numbers; $\mathbb{R}^+$ the set of real numbers greater than or equal to 0; $\mathbb{C}$ the set of complex numbers. As is well known, from the algebraic point of view $\mathbb{N}$ forms a commutative semiring with respect to the usual operations of sum and product; similarly, $\mathbb{Z}$ forms a commutative ring, while $\mathbb{Q}$, $\mathbb{R}$ and $\mathbb{C}$ form fields. Other symbols we use in the sequel are the following: for every $x \in \mathbb{R}$, |x| denotes the absolute value of x; $\lfloor x \rfloor$ denotes the floor of x, i.e. the largest integer less than or equal to x; $\lceil x \rceil$ denotes the ceiling of x, i.e. the smallest integer greater than or equal to x; log x denotes the logarithm of x to base e. The following properties can be derived from the definitions just given:

for every real x,
$$x - 1 < \lfloor x \rfloor \le x \le \lceil x \rceil < x + 1;$$

for every integer n,
$$\lfloor n/2 \rfloor + \lceil n/2 \rceil = n;$$

for all nonzero $n, a, b \in \mathbb{N}$,
$$\lfloor \lfloor n/a \rfloor / b \rfloor = \lfloor n/ab \rfloor, \qquad \lceil \lceil n/a \rceil / b \rceil = \lceil n/ab \rceil;$$

for every real x and every integer $a > 1$,
$$\lfloor \log_a \lfloor x \rfloor \rfloor = \lfloor \log_a x \rfloor, \qquad \lceil \log_a \lceil x \rceil \rceil = \lceil \log_a x \rceil.$$

2.2 Elements of combinatorics

The notions of permutation and combination of a finite set are fundamental tools for solving many problems of enumeration and manipulation of combinatorial objects on which the analysis of classical algorithms is based. We therefore review these concepts, usually studied in a discrete mathematics course, presenting only the definitions and the fundamental properties. Given a positive integer n and a finite set S of k elements, $S = \{e_1, e_2, \ldots, e_k\}$, we call an n-arrangement of S any function $f : \{1, 2, \ldots, n\} \to S$. If this function is injective, f is called an n-arrangement without repetitions.
If f is bijective, an n-arrangement without repetitions is called a permutation of the set S (in this case n = k). In what follows, an n-arrangement will also be called an arrangement of size n. An n-arrangement f is usually represented by the alignment of its elements

$$f(1)\, f(2) \cdots f(n)$$

For this reason an n-arrangement of a set S is sometimes also called a word (or string) of length n over the alphabet S, or a vector of size n with components in S. For example, a 4-arrangement of {a, b, c, d} is the function $f : \{1, 2, 3, 4\} \to \{a, b, c, d\}$ with f(1) = b, f(2) = d, f(3) = c, f(4) = a. It is represented by the alignment, or word, bdca; this arrangement is also a permutation. If the function $f : \{1, 2, \ldots, n\} \to S$ is injective, then the corresponding alignment $f(1)f(2)\cdots f(n)$ contains no repetitions. Clearly in this case we must have $n \le k$.

Example 2.1 The 2-arrangements without repetitions of the set {a, b, c} are represented by the following words: ab, ac, ba, bc, ca, cb.

Let us now denote by $D_{n,k}$ the number of n-arrangements of a set of k elements; similarly, let $R_{n,k}$ be the number of n-arrangements without repetitions of a set of k elements. The following equalities hold:

1. $D_{1,k} = R_{1,k} = k$;
2. since an n-arrangement is an element of S followed by an arbitrary arrangement of S of size $n-1$, we have $D_{n,k} = k \cdot D_{n-1,k}$;
3. since an n-arrangement without repetitions of a set S of k elements is an element of S followed by an arrangement of size $n-1$ of a set of $k-1$ elements, we have $R_{n,k} = k \cdot R_{n-1,k-1}$.

These relations prove the following property:

Proposition 2.1 For every pair of positive integers n, k we have
$$D_{n,k} = k^n, \qquad R_{n,k} = k(k-1)\cdots(k-n+1) \quad (\text{if } n \le k).$$

Observe in particular that the number of permutations of a set of n elements is $R_{n,n} = n(n-1)\cdots 2 \cdot 1$. It is therefore given by the so-called factorial function, written $n! = 1 \cdot 2 \cdots (n-1) \cdot n$. Recall that the notion of factorial is usually extended by setting $0! = 1$.

We are now interested in counting the n-arrangements of a set $S = \{e_1, e_2, \ldots, e_k\}$ containing $q_1$ repetitions of $e_1$, $q_2$ repetitions of $e_2$, ..., $q_k$ repetitions of $e_k$ (hence $q_1 + q_2 + \cdots + q_k = n$). In this respect the following proposition holds:

Proposition 2.2 The number $N(n; q_1, \ldots, q_k)$ of n-arrangements of a set $S = \{e_1, e_2, \ldots, e_k\}$ containing $q_1$ repetitions of $e_1$, $q_2$ repetitions of $e_2$, ..., $q_k$ repetitions of $e_k$ is
$$\frac{n!}{q_1!\, q_2! \cdots q_k!}$$

Proof. Label the $q_1$ repetitions of $e_1$ differently by attaching a distinct index to each element; do the same with the $q_2$ repetitions of $e_2$, with the $q_3$ repetitions of $e_3$, ..., with the $q_k$ repetitions of $e_k$. In this way we obtain n distinct objects. Taking all possible permutations of these n elements we obtain $n!$ permutations. Each of them determines one of the original n-arrangements, obtained by deleting the indices just added; every arrangement obtained in this way is thus determined by $q_1!\, q_2! \cdots q_k!$ distinct permutations. Consequently we can write $N(n; q_1, \ldots, q_k) \cdot q_1!\, q_2! \cdots q_k! = n!$, which proves the claim.

For every $n \in \mathbb{N}$ and every k-tuple of integers $q_1, q_2, \ldots, q_k \in \mathbb{N}$ such that $n = q_1 + q_2 + \cdots + q_k$, we call the expression
$$\binom{n}{q_1\, q_2\, \cdots\, q_k} = \frac{n!}{q_1!\, q_2! \cdots q_k!}$$
a multinomial coefficient of degree n. In the particular case k = 2 we obtain the traditional binomial coefficient, usually written
$$\binom{n}{j} = \frac{n!}{j!\,(n-j)!},$$
where $n, j \in \mathbb{N}$ and $0 \le j \le n$. Observe that $\binom{n}{j}$ can also be seen as the number of words of length n, over an alphabet of two symbols, in which the first symbol occurs j times and the second $n-j$ times. The fundamental property of these coefficients, from which their name derives, concerns the computation of powers of polynomials: for every $n \in \mathbb{N}$ and every pair of numbers u, v,
$$(u+v)^n = \sum_{k=0}^{n} \binom{n}{k} u^k v^{n-k};$$
moreover, for every k-tuple of numbers $u_1, u_2, \ldots, u_k$,
$$(u_1 + u_2 + \cdots + u_k)^n = \sum_{q_1 + q_2 + \cdots + q_k = n} \binom{n}{q_1\, q_2\, \cdots\, q_k}\, u_1^{q_1} u_2^{q_2} \cdots u_k^{q_k},$$
where the last sum ranges over all k-tuples $q_1, q_2, \ldots, q_k \in \mathbb{N}$ such that $q_1 + q_2 + \cdots + q_k = n$.

Given two integers k, n with $0 \le k \le n$, we call a simple combination of class k, or k-combination, of n distinct objects a subset of k elements chosen among the n given ones. Here we adopt the convention that each element may be chosen at most once. Note that a combination is a set of objects, not an alignment; hence the order in which the elements are drawn from the given set does not matter, and two combinations are distinct only when they differ in at least one of the objects they contain. For example, the combinations of class 3 of the set {a, b, c, d} are {a, b, c}, {a, b, d}, {a, c, d}, {b, c, d}.

Proposition 2.3 For every pair of integers $n, k \in \mathbb{N}$ with $0 \le k \le n$, the number of simple combinations of class k of a set of n elements is given by the binomial coefficient
$$\binom{n}{k} = \frac{n!}{k!\,(n-k)!}.$$

Proof. It suffices to show that the subsets of $\{1, 2, \ldots, n\}$ with k elements are $\binom{n}{k}$ in number. To this end, observe that every subset A of $\{1, 2, \ldots, n\}$ is determined by its characteristic function:
$$\chi_A(x) = \begin{cases} 1 & \text{if } x \in A \\ 0 & \text{if } x \notin A \end{cases}$$
This characteristic function is represented by an n-arrangement of the set {0, 1} containing k repetitions of 1 and $n-k$ repetitions of 0. By the previous proposition the number of such arrangements is $\frac{n!}{k!(n-k)!} = \binom{n}{k}$.

Given two positive integers n, k, consider a set S of n distinct objects; we call a combination with repetition of class k a collection of k elements chosen from S, with the convention that each element may be chosen more than once. Note that in this case too we disregard the order in which the objects are chosen.
Inoltre, poiché ogni elemento può essere scelto più volte, si può verificare k > n. Per esempio, se consideriamo l'insieme S = {a, b}, le combinazioni con ripetizione di classe 3 sono date da: {a, a, a}, {a, a, b}, {a, b, b}, {b, b, b}.

Proposizione 2.4 Il numero di combinazioni con ripetizione di classe k estratte da un insieme di n elementi è dato dal coefficiente binomiale

    \binom{n + k − 1}{k}.

Dimostrazione. Dato un insieme di n elementi, siano s_1, s_2, ..., s_n i suoi oggetti allineati secondo un ordine qualsiasi. Sia C una combinazione con ripetizione di classe k di tale insieme. Essa può essere rappresentata da una stringa di simboli ottenuta nel modo seguente a partire dalla sequenza s_1 s_2 · · · s_n:
- per ogni i = 1, 2, ..., n, affianca a s_i tanti simboli ⋆ quanti sono gli elementi s_i che compaiono nella combinazione considerata;
- togli dalla sequenza ottenuta il primo simbolo s_1 e tutti gli indici dai simboli rimanenti.

In questo modo abbiamo costruito una disposizione di dimensione n + k − 1 nella quale compaiono k occorrenze di ⋆ e n − 1 occorrenze di s. Per esempio se n = 5, k = 7 e C = {s_1, s_1, s_2, s_5, s_5, s_4, s_5}, la stringa ottenuta è ⋆⋆ s ⋆ s s ⋆ s ⋆⋆⋆. Viceversa, è facile verificare che ogni parola di questo tipo corrisponde a una combinazione con ripetizione di n oggetti di classe k. Esiste quindi una corrispondenza biunivoca tra questi due insiemi di strutture combinatorie. Poiché il numero di disposizioni di dimensione n + k − 1 contenenti k occorrenze di un dato elemento e n − 1 occorrenze di un altro diverso dal precedente è \binom{n+k−1}{k}, la proposizione è dimostrata.

Esercizi
1) Dimostrare che per ogni n ∈ IN vale l'uguaglianza

    \sum_{k=0}^{n} \binom{n}{k} = 2^n

2) Consideriamo un'urna contenente N palline di cui H bianche e le altre nere (H ≤ N). Supponiamo di eseguire n estrazioni con sostituzione (ovvero, ad ogni estrazione la pallina scelta viene reinserita nell'urna). Qual è la probabilità di estrarre esattamente k palline bianche?
3) Consideriamo un mazzo di carte tradizionale formato da 13 carte per ognuno dei quattro semi. Scegliamo nel mazzo 10 carte a caso (estrazione senza sostituzione). Qual è la probabilità che tra le carte scelte ve ne siano 5 di cuori?

2.3 Espressioni asintotiche

Come abbiamo visto nel capitolo precedente, l'analisi asintotica di un algoritmo può essere ridotta alla valutazione del comportamento asintotico di una sequenza di interi {T(n)} dove, per ogni n ∈ IN, T(n) rappresenta la quantità di una certa risorsa consumata su un input di dimensione n (nel caso peggiore o in quello medio). Lo studio del comportamento asintotico può essere fortemente agevolato introducendo alcune relazioni tra sequenze numeriche che sono divenute di uso corrente in questo ambito. Siano f e g due funzioni definite su IN a valori in IR+.

1. Diciamo che f(n) è o grande di g(n), in simboli f(n) = O(g(n)), se esistono c > 0, n_0 ∈ IN tali che, per ogni n > n_0, f(n) ≤ c · g(n); si dice anche che f(n) ha ordine di grandezza minore o uguale a quello di g(n). Per esempio, applicando la definizione e le tradizionali proprietà dei limiti, si verificano facilmente le seguenti relazioni: 5n² + n = O(n²), 3n⁴ = O(n⁵), n log n = O(n²), log^k n = O(n) e n^k = O(e^n) per ogni k ∈ IN.

2. Diciamo che f(n) è omega grande di g(n), in simboli f(n) = Ω(g(n)), se esistono c > 0, n_0 ∈ IN tali che, per ogni n > n_0, f(n) ≥ c · g(n); si dice anche che f(n) ha ordine di grandezza maggiore o uguale a quello di g(n). Per esempio, si verificano facilmente le seguenti relazioni: 10n² log n = Ω(n²), n^{1/k} = Ω(log n) per ogni intero k > 0, e e^n = Ω(n).

3. Diciamo infine che f(n) e g(n) hanno lo stesso ordine di grandezza, e poniamo f(n) = Θ(g(n)), se esistono due costanti c, d > 0 e un intero n_0 ∈ IN tali che, per ogni n > n_0, c · g(n) ≤ f(n) ≤ d · g(n). Per esempio, è facile verificare le seguenti relazioni:

    5n² + n = Θ(n²),  100 n log² n = Θ(n log² n),  e^{n+50} = Θ(e^n),  log(1 + 2/n) = Θ(1/n),
    ⌊log n⌋ = Θ(log n),  ⌈n²/2⌉ = Θ(n²),  n(2 + sin n) = Θ(n),  √(n + 5) = Θ(√n).

Dalle definizioni si deducono subito le seguenti proprietà:
- f(n) = O(g(n)) se e solo se g(n) = Ω(f(n));
- f(n) = Θ(g(n)) se e solo se f(n) = O(g(n)) e f(n) = Ω(g(n)).

Inoltre, f è definitivamente minore o uguale a una costante p ∈ IR+ se e solo se f(n) = O(1). Analogamente, f è definitivamente maggiore o uguale a una costante p ∈ IR+ se e solo se f(n) = Ω(1). Ovviamente le relazioni O e Ω godono della proprietà riflessiva e transitiva, ma non di quella simmetrica. Invece Θ gode delle proprietà riflessiva, simmetrica e transitiva e quindi definisce una relazione di equivalenza sull'insieme delle funzioni che abbiamo considerato. Questo significa che Θ ripartisce tale insieme in classi di equivalenza, ciascuna delle quali è costituita da tutte e sole le funzioni che hanno lo stesso ordine di grandezza. È possibile inoltre definire una modesta aritmetica per le notazioni sopra introdotte:
- se f(n) = O(g(n)) allora c · f(n) = O(g(n)) per ogni c > 0;
- se f_1(n) = O(g_1(n)) e f_2(n) = O(g_2(n)) allora f_1(n) + f_2(n) = O(g_1(n) + g_2(n)) e f_1(n) · f_2(n) = O(g_1(n) · g_2(n)), mentre non vale f_1(n) − f_2(n) = O(g_1(n) − g_2(n)).

Le stesse proprietà valgono per Ω e Θ. Si possono introdurre ulteriori relazioni basate sulla nozione di limite. Consideriamo due funzioni f e g definite come sopra e supponiamo che g(n) sia maggiore di 0 definitivamente.

4) Diciamo che f(n) è asintotica a g(n), in simboli f(n) ∼ g(n), se

    lim_{n→+∞} f(n)/g(n) = 1.

Per esempio: 3n² + n ∼ 3n², 2n log n − 4n ∼ 2n log n, log(1 + 3/n) ∼ 3/n.

5) Diciamo che f(n) è o piccolo di g(n), in simboli f(n) = o(g(n)), se

    lim_{n→+∞} f(n)/g(n) = 0;

diremo anche che f(n) ha un ordine di grandezza inferiore a quello di g(n). Per esempio: 10n log n = o(n²), ⌈n²/log n⌉ = o(n²), log^k n = o(n^ε) per ogni k, ε > 0.
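Le relazioni appena definite si possono esplorare numericamente osservando il rapporto f(n)/g(n) per valori crescenti di n: tende a una costante positiva quando f(n) = Θ(g(n)) (a 1 quando f(n) ∼ g(n)) e a 0 quando f(n) = o(g(n)). Un abbozzo in Python (funzioni di comodo nostre):

```python
import math

def rapporti(f, g, ns=(10, 100, 1000, 10_000)):
    """Successione dei rapporti f(n)/g(n) per n crescente."""
    return [f(n) / g(n) for n in ns]

# 5n^2 + n = Theta(n^2): il rapporto tende alla costante 5.
r1 = rapporti(lambda n: 5 * n * n + n, lambda n: n * n)
assert abs(r1[-1] - 5) < 0.01

# 10 n log n = o(n^2): il rapporto decresce verso 0.
r2 = rapporti(lambda n: 10 * n * math.log(n), lambda n: n * n)
assert r2[-1] < r2[0] and r2[-1] < 0.1
```

Naturalmente questo è solo un controllo empirico: la verifica rigorosa passa per le definizioni con c, d e n_0 viste sopra.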
Le seguenti proprietà si deducono facilmente dalle definizioni:
- f(n) ∼ g(n) se e solo se |f(n) − g(n)| = o(g(n));
- f(n) ∼ g(n) implica f(n) = Θ(g(n)), ma il viceversa non è vero;
- f(n) = o(g(n)) implica f(n) = O(g(n)), ma il viceversa non è vero.

Inoltre osserviamo che anche ∼ definisce una relazione di equivalenza sull'insieme delle funzioni considerate; questa suddivide l'insieme in classi ciascuna delle quali contiene esattamente tutte le funzioni asintotiche a una funzione data. Come ultimo esempio ricordiamo la nota formula di Stirling che fornisce l'espressione asintotica del fattoriale di un intero naturale:

    n! = \sqrt{2π} · n^{n + 1/2} · e^{−n} · (1 + O(1/n))

e quindi

    log n! = n log n − n + (1/2) log n + log \sqrt{2π} + O(1/n).

Esercizi
1) Mostrare mediante controesempi che le relazioni O e Ω non sono simmetriche.
2) Determinare due funzioni f(n) e g(n) tali che f(n) = Θ(g(n)) e il limite lim_{n→+∞} f(n)/g(n) non esiste.
3) Mostrare che, se f(n) ∼ c · g(n) per qualche c > 0, allora f(n) = Θ(g(n)).

2.4 Stima di somme

Data una funzione f : IN → IR+, l'espressione \sum_{k=0}^{n} f(k) rappresenta la somma f(0) + f(1) + · · · + f(n). Essa definisce chiaramente una nuova funzione S : IN → IR+ che associa a ogni n ∈ IN il valore S(n) = \sum_{k=0}^{n} f(k). L'analisi di semplici algoritmi richiede spesso la valutazione di somme di questo tipo; ad esempio, una stima del tempo di calcolo richiesto dall'istruzione

    for i = 0 to n do C

per un comando C qualsiasi, è data da \sum_{k=0}^{n} c(k), dove c(k) è il tempo di calcolo del comando C quando la variabile i assume il valore k. Osserviamo subito che l'ordine di grandezza di una somma può essere dedotto dall'ordine di grandezza dei suoi addendi.

Proposizione 2.5 Siano f e g due funzioni definite su IN a valori in IR+ e siano F e G le loro funzioni somma, cioè F(n) = \sum_{k=0}^{n} f(k) e G(n) = \sum_{k=0}^{n} g(k) per ogni n ∈ IN. Allora f(n) = Θ(g(n)) implica F(n) = Θ(G(n)).

Dimostrazione. La proprietà è una semplice conseguenza della definizione di Θ.
Infatti, per l'ipotesi, esistono due costanti positive c, d tali che c · g(k) ≤ f(k) ≤ d · g(k) per ogni k abbastanza grande. Sostituendo questi valori nelle rispettive sommatorie otteniamo

    C \sum_{k=0}^{n} g(k) ≤ \sum_{k=0}^{n} f(k) ≤ D \sum_{k=0}^{n} g(k)

per due costanti C, D fissate e ogni n sufficientemente grande.

Esempio 2.2 Vogliamo valutare l'ordine di grandezza della somma

    \sum_{k=1}^{n} k log(1 + 3/k).

Poiché k log(1 + 3/k) = Θ(1), applicando la proposizione precedente otteniamo

    \sum_{k=1}^{n} k log(1 + 3/k) = Θ(\sum_{k=1}^{n} 1) = Θ(n).

2.4.1 Serie geometrica

Alcune sommatorie ricorrono con particolare frequenza nell'analisi di algoritmi; in molti casi il loro valore può essere calcolato direttamente. Una delle espressioni più comuni è proprio la somma parziale della nota serie geometrica che qui consideriamo nel campo dei numeri reali. Osserva che la proprietà seguente vale per un campo qualsiasi.

Proposizione 2.6 Per ogni numero reale α,

    \sum_{k=0}^{n} α^k = n + 1                      se α = 1,
    \sum_{k=0}^{n} α^k = (α^{n+1} − 1) / (α − 1)    se α ≠ 1.

Dimostrazione. Se α = 1 la proprietà è ovvia. Altrimenti, basta osservare che per ogni n ∈ IN,

    (α − 1)(α^n + α^{n−1} + · · · + α + 1) = α^{n+1} − 1.

La proposizione implica che la serie geometrica \sum_{k=0}^{+∞} α^k è convergente se e solo se −1 < α < 1; essa consente inoltre di derivare il valore esatto di altre somme di uso frequente.

Esempio 2.3 Supponiamo di voler valutare la sommatoria \sum_{k=0}^{n} k 2^k. Consideriamo allora la funzione

    t_n(x) = \sum_{k=0}^{n} x^k

e osserviamo che la sua derivata è data da

    t'_n(x) = \sum_{k=0}^{n} k x^{k−1}.

Questo significa che 2 t'_n(2) = \sum_{k=0}^{n} k 2^k e quindi il nostro problema si riduce a valutare la derivata di t_n(x) in 2. Poiché t_n(x) = (x^{n+1} − 1)/(x − 1) per ogni x ≠ 1, otteniamo

    t'_n(x) = ((n + 1) x^n (x − 1) − x^{n+1} + 1) / (x − 1)²

e di conseguenza

    \sum_{k=0}^{n} k 2^k = (n − 1) 2^{n+1} + 2.

Esercizi
1) Determinare, per n → +∞, l'espressione asintotica di \sum_{k=0}^{n} k x^k per ogni x > 1.
2) Determinare il valore esatto della sommatoria \sum_{k=0}^{n} k² 3^k.
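La Proposizione 2.6 e il risultato dell'Esempio 2.3 si verificano immediatamente per piccoli valori di n; un abbozzo in Python (nome della funzione nostro):

```python
def somma_geometrica(a, n):
    """Somma parziale sum_{k=0}^{n} a^k (Proposizione 2.6), con a intero."""
    if a == 1:
        return n + 1
    return (a**(n + 1) - 1) // (a - 1)   # per a intero la divisione e' esatta

for n in range(20):
    # Proposizione 2.6 con a = 2 e a = 3.
    assert somma_geometrica(2, n) == sum(2**k for k in range(n + 1))
    assert somma_geometrica(3, n) == sum(3**k for k in range(n + 1))
    # Esempio 2.3: sum_{k=0}^{n} k*2^k = (n - 1)*2^(n+1) + 2.
    assert sum(k * 2**k for k in range(n + 1)) == (n - 1) * 2**(n + 1) + 2
```

Il ciclo controlla le identità in forma chiusa contro il calcolo diretto della somma.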
2.4.2 Somme di potenze di interi

Un'altra somma che occorre frequentemente è data da \sum_{k=0}^{n} k^i dove i ∈ IN. Nel caso i = 1 si ottiene facilmente l'espressione esplicita della somma.

Proposizione 2.7 Per ogni n ∈ IN

    \sum_{k=0}^{n} k = n(n + 1)/2

Dimostrazione. Ragioniamo per induzione su n. Se n = 0 la proprietà è banalmente verificata. Supponiamola vera per n ∈ IN fissato; allora otteniamo

    \sum_{k=0}^{n+1} k = n + 1 + \sum_{k=0}^{n} k = n + 1 + n(n + 1)/2 = (n + 1)(n + 2)/2.

L'uguaglianza è quindi vera anche per n + 1 e la proposizione risulta pertanto dimostrata.

Esempio 2.4 Somme di quadrati
Possiamo ottenere un risultato analogo per la somma dei primi n quadrati, cioè \sum_{k=0}^{n} k². A tale scopo presentiamo una dimostrazione basata sul metodo di perturbazione della somma che consente di ricavare l'espressione esatta di \sum_{k=0}^{n} k^i per ogni intero i > 1. Definiamo g(n) = \sum_{k=0}^{n} k³. È chiaro che g(n + 1) può essere espresso nelle due forme seguenti:

    g(n + 1) = \sum_{k=0}^{n} k³ + (n + 1)³
    g(n + 1) = \sum_{k=0}^{n} (k + 1)³ = \sum_{k=0}^{n} (k³ + 3k² + 3k + 1).

Uguagliando la parte destra delle due relazioni si ottiene

    \sum_{k=0}^{n} k³ + (n + 1)³ = \sum_{k=0}^{n} k³ + 3 \sum_{k=0}^{n} k² + 3 \sum_{k=0}^{n} k + n + 1.

Possiamo ora semplificare e applicare la proposizione precedente ottenendo

    3 \sum_{k=0}^{n} k² = (n + 1)³ − 3 n(n + 1)/2 − n − 1

da cui, svolgendo semplici calcoli, si ricava

    \sum_{k=0}^{n} k² = n(n + 1)(2n + 1)/6.

Questa uguaglianza consente di ottenere la seguente espressione asintotica

    \sum_{k=0}^{n} k² = n³/3 + Θ(n²).

Esercizi
1) Applicando lo stesso metodo usato nella dimostrazione precedente, provare che, per ogni i ∈ IN,

    \sum_{k=0}^{n} k^i = n^{i+1}/(i + 1) + Θ(n^i).

2) Per ogni i, n ∈ IN, sia g_i(n) = \sum_{k=0}^{n} k^i. Esprimere il valore esatto di g_i(n) come funzione di i, n e di tutti i g_j(n) tali che 0 ≤ j ≤ i − 1. Dedurre quindi una procedura generale per calcolare g_i(n) su input i e n.
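Il metodo di perturbazione dell'Esempio 2.4 si generalizza nello schema suggerito dall'esercizio 2: sommando l'identità (k+1)^{i+1} − k^{i+1} = \sum_{j=0}^{i} \binom{i+1}{j} k^j per k = 0, ..., n si ottiene (n+1)^{i+1} = \sum_{j=0}^{i} \binom{i+1}{j} g_j(n), da cui si ricava g_i(n) in funzione dei g_j(n) con j < i. Una possibile traduzione in Python (implementazione indicativa nostra):

```python
from math import comb

def somma_potenze(i, n):
    """g_i(n) = sum_{k=0}^{n} k^i, via il metodo di perturbazione:
    (n+1)^(i+1) = sum_{j=0}^{i} C(i+1, j) * g_j(n)."""
    if i == 0:
        return n + 1
    resto = sum(comb(i + 1, j) * somma_potenze(j, n) for j in range(i))
    return ((n + 1)**(i + 1) - resto) // (i + 1)   # C(i+1, i) = i + 1

# Proposizione 2.7 ed Esempio 2.4 come casi particolari.
assert somma_potenze(1, 100) == 100 * 101 // 2
assert somma_potenze(2, 100) == 100 * 101 * 201 // 6
assert somma_potenze(3, 10) == sum(k**3 for k in range(11))
```

La ricorsione ricalcola i g_j più volte; per i grandi conviene memorizzare i valori già ottenuti, ma per illustrare l'identità questa forma è sufficiente.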
2.4.3 Stima mediante integrali

Nelle sezioni precedenti abbiamo presentato alcune tecniche per ottenere il valore esatto delle somme più comuni. Descriviamo ora un semplice metodo, più generale dei precedenti, che in molti casi permette di ottenere una buona stima asintotica di una somma senza calcolarne il valore esatto. Si tratta sostanzialmente di approssimare la sommatoria mediante un integrale definito.

Proposizione 2.8 Sia f : IR+ → IR+ una funzione monotona non decrescente. Allora, per ogni a ∈ IN e ogni intero n ≥ a, abbiamo

    f(a) + \int_a^n f(x) dx ≤ \sum_{k=a}^{n} f(k) ≤ \int_a^n f(x) dx + f(n)

Dimostrazione. Se n = a la proprietà è banale. Supponiamo allora n > a. Osserviamo che la funzione è integrabile in ogni intervallo chiuso e limitato di IR+ e inoltre, per ogni k ∈ IN,

    f(k) ≤ \int_k^{k+1} f(x) dx ≤ f(k + 1).

Sommando per k = a, a + 1, ..., n − 1, otteniamo dalla prima disuguaglianza

    \sum_{k=a}^{n−1} f(k) ≤ \sum_{k=a}^{n−1} \int_k^{k+1} f(x) dx = \int_a^n f(x) dx,

mentre dalla seconda

    \int_a^n f(x) dx = \sum_{k=a}^{n−1} \int_k^{k+1} f(x) dx ≤ \sum_{k=a}^{n−1} f(k + 1).

Aggiungendo ora f(a) e f(n) alle due somme precedenti si ottiene l'enunciato.

[Figura: grafico di una f(x) non decrescente tra a e n + 1; la somma è approssimata dall'area sottesa dalla curva.]

È di particolare utilità la seguente semplice conseguenza:

Corollario 2.9 Assumendo le stesse ipotesi della proposizione precedente, se f(n) = o(\int_a^n f(x) dx), allora

    \sum_{k=a}^{n} f(k) ∼ \int_a^n f(x) dx.

Esempio 2.5 Applicando il metodo appena illustrato è facile verificare che, per ogni numero reale p > 0,

    \sum_{k=1}^{n} k^p ∼ \int_0^n x^p dx = n^{p+1}/(p + 1).

In maniera del tutto analoga si dimostra un risultato equivalente per le funzioni monotone non crescenti.

Proposizione 2.10 Sia f : IR+ → IR+ una funzione monotona non crescente. Allora, per ogni a ∈ IN e ogni intero n ≥ a, abbiamo

    \int_a^n f(x) dx + f(n) ≤ \sum_{k=a}^{n} f(k) ≤ f(a) + \int_a^n f(x) dx.

Esempio 2.6 Consideriamo la sommatoria H_n = \sum_{k=1}^{n} 1/k e applichiamo la proposizione precedente; si ricava

    log_e n + 1/n ≤ H_n ≤ log_e n + 1

e quindi H_n ∼ log_e n.
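Le disuguaglianze delle Proposizioni 2.8 e 2.10 si possono controllare numericamente; un abbozzo in Python con f(x) = √x (non decrescente) e f(x) = 1/x (non crescente, il caso dei numeri armonici appena visto):

```python
import math

# Proposizione 2.8 con f(x) = sqrt(x): f(a) + I <= S <= I + f(n),
# dove I e' l'integrale di sqrt(x) su [a, n], cioe' (2/3)(n^1.5 - a^1.5).
a, n = 1, 10_000
S = sum(math.sqrt(k) for k in range(a, n + 1))
I = (2 / 3) * (n**1.5 - a**1.5)
assert math.sqrt(a) + I <= S <= I + math.sqrt(n)

# Proposizione 2.10 con f(x) = 1/x: log n + 1/n <= H_n <= log n + 1.
H = sum(1 / k for k in range(1, n + 1))
assert math.log(n) + 1 / n <= H <= math.log(n) + 1
```

Si noti quanto i due estremi siano vicini rispetto alla grandezza della somma: è esattamente ciò che rende utile il Corollario 2.9.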
I valori H_n, per n > 0, sono chiamati numeri armonici e la loro valutazione compare nell'analisi di classici algoritmi. Ricordiamo che usando metodi più complicati si può ottenere la seguente espressione

    H_n = log_e n + γ + 1/(2n) + o(1/n)

dove γ = 0,57721... è una costante nota chiamata costante di Eulero.

Concludiamo osservando che la tecnica appena presentata non permette in generale di ottenere approssimazioni asintotiche per funzioni a crescita esponenziale. Per esempio, consideriamo la sommatoria valutata nell'Esempio 2.3. La crescita della funzione x·2^x è esponenziale e il metodo di approssimazione mediante integrali non consente di ottenere l'espressione asintotica della somma. Infatti, integrando per parti si verifica facilmente che

    \int_0^n x 2^x dx = (2^n (n ln 2 − 1) + 1) / ln² 2 = Θ(n 2^n),

quindi applicando la proposizione precedente riusciamo solo a determinare l'ordine di grandezza dell'espressione considerata

    \sum_{k=0}^{n} k 2^k = Θ(n 2^n).

Esercizio Determinare l'espressione asintotica delle seguenti sommatorie al crescere di n a +∞:

    \sum_{k=0}^{n} k^{3/2},    \sum_{k=1}^{n} log_2 k,    \sum_{k=1}^{n} k log_2 k.

Capitolo 3

Modelli di calcolo

Obiettivo di questo corso è lo studio di algoritmi eseguibili su macchine: il significato di un algoritmo (detto anche semantica operazionale) e la valutazione del suo costo computazionale non possono prescindere da una descrizione (implicita o esplicita) del modello su cui l'algoritmo viene eseguito. Il modello RAM che presentiamo in questo capitolo è uno strumento classico, ampiamente discusso in vari testi (vedi [1, 12, 15]) e generalmente accettato (spesso sottinteso) quale modello di base per l'analisi delle procedure sequenziali. L'analisi degli algoritmi che presenteremo nei capitoli successivi sarà sempre riferita a questo modello a meno di esplicito avvertimento.
Il modello qui presentato è caratterizzato da una memoria ad accesso casuale formata da celle che possono contenere un intero qualsiasi; le istruzioni sono quelle di un elementare linguaggio macchina che consente di eseguire istruzioni di input e output, svolgere operazioni aritmetiche, accedere e modificare il contenuto della memoria, eseguire semplici comandi di salto condizionato. La richiesta che ogni registro possa contenere un intero arbitrario è ovviamente irrealistica. Per quanto riguarda l'analisi di complessità è però possibile ovviare a tale inconveniente introducendo un criterio di costo logaritmico nel quale il tempo e lo spazio richiesti dalle varie istruzioni dipendono dalle dimensioni degli operandi coinvolti. La semplicità e trasparenza del modello consentono di comprendere rapidamente come procedure scritte mediante linguaggi ad alto livello possano essere implementate ed eseguite su macchina RAM. Questo permette di valutare direttamente il tempo e lo spazio richiesti dall'esecuzione di procedure scritte ad alto livello senza farne una esplicita traduzione in linguaggio RAM. Fra i limiti del modello segnaliamo che non è presente una gerarchia di memoria (memoria tampone, memoria di massa) e le istruzioni sono eseguite una alla volta da un unico processore. Questo modello si presta quindi all'analisi solo di algoritmi sequenziali processati in memoria centrale.

3.1 Macchina ad accesso casuale (RAM)

Il modello di calcolo che descriviamo in questa sezione si chiama Macchina ad accesso casuale (detta anche RAM, acronimo di Random Access Machine) ed è costituito da un nastro di ingresso, un nastro di uscita, un programma rappresentato da una sequenza finita di istruzioni, un contatore lc che indica l'istruzione corrente da eseguire, e una memoria formata da infiniti registri R0, R1, ..., Rk, .... In questo modello si assumono inoltre le seguenti ipotesi:

1.
Ciascuno dei due nastri è rappresentato da infinite celle, numerate a partire dalla prima, ognuna delle quali può contenere un numero intero. Il nastro di ingresso è dotato di una testina di sola lettura mentre quello di uscita dispone di una testina di sola scrittura. Le due testine si muovono sempre verso destra e all'inizio del calcolo sono posizionate sulla prima cella. Inizialmente tutte le celle del nastro di uscita sono vuote mentre il nastro di ingresso contiene l'input della macchina; questo è formato da un vettore di n interi x_1, x_2, ..., x_n, disposti ordinatamente nelle prime n celle del nastro.

2. Il programma è fissato e non può essere modificato durante l'esecuzione. Ciascuna istruzione è etichettata e il registro lc (location counter) contiene l'etichetta dell'istruzione da eseguire. Le istruzioni sono molto semplici e ricordano quelle di un linguaggio assembler: si possono eseguire operazioni di lettura e scrittura sui due nastri, caricamento dei dati in memoria e salto condizionato, oltre alle tradizionali operazioni aritmetiche sugli interi.

3. Ogni registro Rk, k ∈ IN, può contenere un arbitrario intero relativo (il modello è realistico solo quando gli interi usati nel calcolo hanno dimensione inferiore a quella della parola). L'indirizzo del registro Rk è l'intero k. Il registro R0 è chiamato accumulatore ed è l'unico sul quale si possono svolgere operazioni aritmetiche.

[Figura: schema della macchina RAM: il programma, tramite il contatore lc, opera sulla memoria R0, R1, R2, ..., Rk, ... e sui due nastri.]

3.1.1 Linguaggio di programmazione della macchina RAM

Il programma di una macchina RAM è una sequenza finita di istruzioni

    P = ist_1; ist_2; ...; ist_m

ciascuna delle quali è una coppia formata da un codice di operazione e da un indirizzo. Un indirizzo a sua volta può essere un operando oppure una etichetta.
Nella tabella seguente elenchiamo i 13 codici di operazione previsti nel nostro modello e specifichiamo per ciascuno di questi il tipo di indirizzo corrispondente.

    Codice di operazione                               Indirizzo
    LOAD, STORE, ADD, SUB, MULT, DIV, READ, WRITE      operando
    JUMP, JGTZ, JZERO, JBLANK                          etichetta
    HALT                                               (nessuno)

Come definiremo meglio in seguito, le prime due istruzioni LOAD e STORE servono per spostare i dati fra i registri della memoria; le istruzioni ADD, SUB, MULT e DIV eseguono invece operazioni aritmetiche; vi sono poi due istruzioni di lettura e scrittura (READ e WRITE) e quattro di salto condizionato (JUMP, JGTZ, JZERO e JBLANK); infine l'istruzione HALT, che non possiede indirizzo, serve per arrestare la computazione. Le etichette sono associate solo a comandi di salto e servono per indicare le istruzioni del programma cui passare eventualmente il controllo; quindi ogni istruzione può anche essere dotata di etichetta iniziale (solitamente un numero intero). Un operando invece può assumere tre forme diverse:

    =i    indica l'intero i ∈ ZZ;
    i     indica il contenuto del registro Ri e in questo caso i ∈ IN;
    ∗i    indica il contenuto del registro Rj dove j è il contenuto del registro Ri (e qui entrambi i, j appartengono a IN).

Osserva che ∗i rappresenta l'usuale modalità di indirizzamento indiretto. Il valore di un operando dipende dal contenuto dei registri. Chiamiamo quindi stato della macchina la legge che associa ad ogni registro il proprio contenuto e alle testine di lettura/scrittura le loro posizioni sul nastro. Formalmente uno stato è una funzione

    S : {r, w, lc, 0, 1, ..., k, ...} → ZZ,

che interpretiamo nel modo seguente: S(r) indica (eventualmente) la posizione della testina sul nastro di ingresso, nel senso che se S(r) = j e j > 0 allora la testina legge la j-esima cella del nastro; S(w) indica in modo analogo la posizione della testina sul nastro di uscita; S(lc) è il contenuto del registro lc; S(k) è il contenuto del registro Rk per ogni k ∈ IN.
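Le tre forme di operando, valutate rispetto a uno stato S, possono essere abbozzate così in Python (rappresentiamo S con un dizionario registro → contenuto; convenzioni nostre, con None a fare le veci del valore indefinito):

```python
def valore_operando(S, modo, i):
    """Valuta un operando: modo "=" (immediato), "" (diretto), "*" (indiretto)."""
    if modo == "=":                 # =i : l'intero i stesso
        return i
    if modo == "":                  # i  : contenuto del registro Ri
        return S.get(i, 0)          # registri non toccati valgono 0
    if modo == "*":                 # *i : contenuto di Rj con j = S(i)
        j = S.get(i, 0)
        return S.get(j, 0) if j >= 0 else None
    raise ValueError(modo)

S = {1: 4, 4: 99}
assert valore_operando(S, "=", 7) == 7     # =7  -> 7
assert valore_operando(S, "", 1) == 4      # 1   -> S(1)
assert valore_operando(S, "*", 1) == 99    # *1  -> S(S(1)) = S(4)
```

Questa piccola funzione anticipa la definizione formale di V_S(op) data più avanti.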
Uno stato particolare è lo stato iniziale S_0, nel quale S_0(r) = S_0(w) = S_0(lc) = 1 e S_0(k) = 0 per ogni k ∈ IN. Nello stato iniziale quindi tutti i registri della memoria sono azzerati, le testine di lettura e scrittura sono posizionate sulla prima cella e il contatore lc indica la prima istruzione del programma. Il valore di un operando op in uno stato S, denotato da V_S(op), è così definito:

    V_S(op) = i         se op è =i, dove i ∈ ZZ
    V_S(op) = S(i)      se op è i, dove i ∈ IN
    V_S(op) = S(S(i))   se op è ∗i, dove i ∈ IN e S(i) ≥ 0
    V_S(op) = ⊥         altrimenti

Possiamo allora descrivere l'esecuzione di un programma P su un input x_1, x_2, ..., x_n, dove x_i ∈ ZZ per ogni i, nel modo seguente:

1. Configura la macchina nello stato iniziale e inserisci i dati di ingresso nel nastro di lettura, collocando ciascun intero x_i nella i-esima cella, per ogni i = 1, 2, ..., n, e inserendo nella n + 1-esima un simbolo speciale che chiamiamo blank.

2. Finché il contatore lc non indica l'istruzione HALT esegui
   (a) Individua l'istruzione da eseguire mediante il contenuto di lc.
   (b) Esegui l'istruzione cambiando lo stato secondo le regole elencate nella tabella seguente.

Le regole di cambiamento di stato sono elencate con ovvio significato nella seguente tabella nella quale S indica lo stato corrente, e := denota l'assegnamento di nuovi valori alla funzione S. Si suppone inoltre che il contatore lc venga incrementato di 1 nell'esecuzione di tutte le istruzioni salvo quelle di salto JUMP, JGTZ, JZERO, JBLANK.
    Istruzione    Significato
    LOAD a        S(0) := V_S(a)
    STORE i       S(i) := S(0)
    STORE ∗i      S(S(i)) := S(0)
    ADD a         S(0) := S(0) + V_S(a)
    SUB a         S(0) := S(0) − V_S(a)
    MULT a        S(0) := S(0) · V_S(a)
    DIV a         S(0) := S(0) ÷ V_S(a)
    READ i        S(i) := x_{S(r)} e S(r) := S(r) + 1
    READ ∗i       S(S(i)) := x_{S(r)} e S(r) := S(r) + 1
    WRITE a       stampa V_S(a) nella cella S(w) del nastro di scrittura e poni S(w) := S(w) + 1
    JUMP b        S(lc) := b
    JGTZ b        se S(0) > 0 allora S(lc) := b altrimenti S(lc) := S(lc) + 1
    JZERO b       se S(0) = 0 allora S(lc) := b altrimenti S(lc) := S(lc) + 1
    JBLANK b      se la cella S(r) contiene blank allora S(lc) := b altrimenti S(lc) := S(lc) + 1
    HALT          arresta la computazione

Per semplicità supponiamo che una istruzione non venga eseguita se i parametri sono mal definiti (ad esempio quando S(lc) ≤ 0, oppure V_S(a) = ⊥). In questo caso la macchina si arresta nello stato corrente. Possiamo così considerare la computazione di un programma P su un dato input come una sequenza (finita o infinita) di stati

    S_0, S_1, ..., S_i, ...

nella quale S_0 è lo stato iniziale e, per ogni i, S_{i+1} si ottiene eseguendo nello stato S_i l'istruzione di indice S_i(lc) del programma P (ammettendo l'ingresso dato). Se la sequenza è finita e S_m è l'ultimo suo elemento, allora S_m(lc) indica l'istruzione HALT oppure un'istruzione che non può essere eseguita. Se invece la sequenza è infinita diciamo che il programma P sull'input dato non si ferma, o anche che la computazione non si arresta. A questo punto possiamo definire la semantica del linguaggio RAM associando ad ogni programma P la funzione parziale F_P calcolata da P. Formalmente tale funzione è della forma

    F_P : \bigcup_{n=0}^{+∞} ZZ^n → \bigcup_{n=0}^{+∞} ZZ^n ∪ {⊥}

dove denotiamo con ZZ^n, n > 0, l'insieme dei vettori di interi a n componenti, con ZZ^0 l'insieme contenente il vettore vuoto e con ⊥ il simbolo di indefinito.
Per ogni n ∈ IN e ogni x ∈ ZZ^n, se il programma P su input x si arresta, allora F_P(x) è il vettore di interi che si trova stampato sul nastro di uscita al termine della computazione; viceversa, se la computazione non si arresta, allora F_P(x) = ⊥.

Esempio 3.1 Il seguente programma RAM riceve in input n interi, n ∈ IN qualsiasi, e calcola il massimo tra questi valori; la procedura mantiene nel registro R1 il massimo dei valori precedenti e confronta con esso ciascun nuovo intero, letto nel registro R2.

          READ   1
    2     JBLANK 10
          LOAD   1
          READ   2
          SUB    2
          JGTZ   2
          LOAD   2
          STORE  1
          JUMP   2
    10    WRITE  1
          HALT

Esercizi
1) Definire un programma RAM per il calcolo della somma di n interi.
2) Definire un programma RAM per memorizzare una sequenza di n interi nei registri R1, R2, ..., Rn, assumendo n ≥ 1 variabile.

3.1.2 Complessità computazionale di programmi RAM

In questa sezione vogliamo definire la quantità di tempo e di spazio consumate dall'esecuzione di un programma RAM su un dato input. Vi sono essenzialmente due criteri usati per determinare tali quantità. Il primo è il criterio di costo uniforme secondo il quale l'esecuzione di ogni istruzione del programma richiede una unità di tempo indipendentemente dalla grandezza degli operandi. Analogamente, lo spazio richiesto per l'utilizzo di un registro della memoria è di una unità, indipendentemente dalla dimensione dell'intero contenuto.

Definizione 3.1 Un programma RAM P su input x richiede tempo di calcolo t e spazio di memoria s, secondo il criterio uniforme, se la computazione di P su x esegue t istruzioni e utilizza s registri della macchina RAM, con la convenzione che t = +∞ se la computazione non termina e s = +∞ se si utilizza un numero illimitato di registri. Nel seguito denotiamo con T_P(x) e con S_P(x) rispettivamente il tempo di calcolo e lo spazio di memoria richiesti dal programma P su input x secondo il criterio di costo uniforme.
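Per fissare le idee, un piccolo interprete in Python (bozza nostra, limitata alle istruzioni e al solo indirizzamento diretto usati nell'Esempio 3.1) permette di eseguire il programma per il massimo e di contare le istruzioni eseguite, ritrovando su una sequenza strettamente decrescente il valore T_P(x) = 5(n − 1) + 4 discusso subito sotto:

```python
def esegui_ram(prog, etichette, ingresso):
    """Interprete minimale: prog e' una lista (opcode, argomento),
    etichette mappa etichetta -> indice; conta le istruzioni eseguite."""
    R, pos, uscita, lc, passi = {}, 0, [], 0, 0
    while True:
        op, arg = prog[lc]
        passi += 1
        if op == "HALT":
            return uscita, passi
        elif op == "READ":
            R[arg] = ingresso[pos]; pos += 1; lc += 1
        elif op == "JBLANK":                  # salta se l'input e' esaurito
            lc = etichette[arg] if pos >= len(ingresso) else lc + 1
        elif op == "LOAD":
            R[0] = R.get(arg, 0); lc += 1
        elif op == "SUB":
            R[0] = R.get(0, 0) - R.get(arg, 0); lc += 1
        elif op == "STORE":
            R[arg] = R.get(0, 0); lc += 1
        elif op == "JGTZ":
            lc = etichette[arg] if R.get(0, 0) > 0 else lc + 1
        elif op == "JUMP":
            lc = etichette[arg]
        elif op == "WRITE":
            uscita.append(R.get(arg, 0)); lc += 1

# Il programma dell'Esempio 3.1 (massimo di n interi).
massimo = [("READ", 1), ("JBLANK", 10), ("LOAD", 1), ("READ", 2),
           ("SUB", 2), ("JGTZ", 2), ("LOAD", 2), ("STORE", 1),
           ("JUMP", 2), ("WRITE", 1), ("HALT", None)]
etich = {2: 1, 10: 9}       # etichetta -> indice dell'istruzione

out, t = esegui_ram(massimo, etich, [5, 4, 3, 2, 1])
assert out == [5] and t == 5 * (5 - 1) + 4   # sequenza decrescente, n = 5
out, _ = esegui_ram(massimo, etich, [3, 9, 1, 7])
assert out == [9]
```

La rappresentazione dei registri con un dizionario riproduce la convenzione dello stato iniziale: ogni registro non ancora toccato vale 0.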
Esempio 3.2 Consideriamo il programma P per il calcolo del massimo tra n interi, definito nell'esempio 3.1. È facile verificare che, per ogni input x di dimensione non nulla, S_P(x) = 3. Se invece x forma una sequenza strettamente decrescente di n interi, allora T_P(x) = 5(n − 1) + 4.

Osserviamo che se un programma RAM P non utilizza l'indirizzamento indiretto (cioè non contiene istruzioni con operandi della forma ∗k) allora, per ogni input x, S_P(x) è minore o uguale a una costante prefissata, dipendente solo dal programma. Poiché i registri nel nostro modello possono contenere interi arbitrariamente grandi, la precedente misura spesso risulta poco significativa rispetto a modelli di calcolo reali. È evidente che se le dimensioni degli interi contenuti nei registri diventano molto grandi rispetto alle dimensioni dell'ingresso, risulta arbitrario considerare costante il costo di ciascuna istruzione. Per questo motivo il criterio di costo uniforme è considerato un metodo di valutazione realistico solo per quegli algoritmi che non incrementano troppo la dimensione degli interi calcolati. Questo vale ad esempio per gli algoritmi di ordinamento e per quelli di ricerca.

Una misura più realistica di valutazione del tempo e dello spazio consumati da un programma RAM può essere ottenuta attribuendo ad ogni istruzione un costo di esecuzione che dipende dalla dimensione dell'operando. Considereremo qui il criterio di costo logaritmico, così chiamato perché il tempo di calcolo richiesto da ogni istruzione dipende dal numero di bit necessari per rappresentare gli operandi. Per ogni intero k > 0, denotiamo con l(k) la lunghezza della sua rappresentazione binaria, ovvero l(k) = ⌊log_2 k⌋ + 1. Estendiamo inoltre questa definizione a tutti gli interi, ponendo l(0) = 1 e l(k) = ⌊log_2 |k|⌋ + 1 per ogni k < 0. Chiameremo il valore l(k) lunghezza dell'intero k; questa funzione è una buona approssimazione intera del logaritmo in base 2: per k abbastanza grande, l(k) ∼ log_2 k. Definiamo allora mediante la seguente tabella il costo logaritmico di un operando a quando la macchina si trova in uno stato S e lo denotiamo con t_S(a).

    Operando a    Costo t_S(a)
    =k            l(k)
    k             l(k) + l(S(k))
    ∗k            l(k) + l(S(k)) + l(S(S(k)))

La seguente tabella definisce invece il costo logaritmico delle varie istruzioni RAM, quando la macchina si trova nello stato S. Nota che il costo di ogni operazione è dato dalla somma delle lunghezze degli interi necessari per eseguire l'istruzione.
Definiamo allora mediante la seguente tabella il costo logaritmico di un operando a quando la macchina si trova in uno stato S e lo denotiamo con tS (a). Operando a =k k k Costo tS (a) l(k) l(k) + l(S(k)) l(k) + l(S(k)) + l(S(S(k))) La seguente tabella definisce invece il costo logaritmico delle varie istruzioni RAM, quando la macchina si trova nello stato S. Nota che il costo di ogni operazione e dato dalla somma delle lunghezze degli interi necessari per eseguire listruzione. 28 CAPITOLO 3. MODELLI DI CALCOLO Istruzione LOAD a STORE k STORE k ADD a SUB a MULT a DIV a READ k READ k WRITE a JUMP b JGTZ b JZERO b JBLANK b HALT Costo tS (a) l(S(0)) + l(k) l(S(0)) + l(k) + l(S(k)) l(S(0)) + tS (a) l(S(0)) + tS (a) l(S(0)) + tS (a) l(S(0)) + tS (a) l(xS(r) ) + l(k) l(xS(r) ) + l(k) + l(S(k)) tS (a) 1 l(S(0)) l(S(0)) 1 1 Per esempio, il tempo di esecuzione (con costo logaritmico) di STORE k e dato dalla lunghezza dei tre interi coinvolti nellistruzione: il contenuto dellaccumulatore (S(0)), lindirizzo del registro (k) e il suo contenuto (S(k)). Definizione 3.2 Il tempo di calcolo TPl (x) richiesto dal programma P su ingresso x secondo il criterio di costo logaritmico e la somma dei costi logaritmici delle istruzioni eseguite nella computazione di P su input x. E evidente che, per ogni programma P , TP (x) TPl (x), per ogni input x. Per certi programmi tuttavia i valori TP (x) e TPl (x) possono differire drasticamente portando a valutazioni diverse sullefficienza di un algoritmo. Esempio 3.3 n Consideriamo per esempio la seguente procedura Algol-like che calcola la funzione z = 32 , su input n IN, per quadrati successivi: read x y := 3 while x> 0 do y := y y x := x 1 write y k La correttezza e provata osservando che dopo la k-esima esecuzione del ciclo while la variabile y assume il valore 32 . Il programma RAM corrispondente, che denotiamo con , e definito dalla seguente procedura: while READ LOAD STORE LOAD JZERO LOAD MULT STORE 1 =3 2 1 endwhile 2 2 2 29 CAPITOLO 3. 
MODELLI DI CALCOLO endwhile LOAD SUB STORE JUMP WRITE HALT 1 =1 1 while 2 Si verifica immediatamente che il ciclo while viene percorso n volte, e che quindi T (n) = 8n + 7. k Poiche dopo la k-esima iterazione del ciclo while R2 contiene lintero 32 , il costo logaritmico di LOAD 2, MULT 2k k 2, STORE 2 sara dellordine di l(3 ) 2 log2 3. Di conseguenza: Tl (n) = n1 X k 2 ! = (2n ) k=0 Quindi, mentre T (n) = (n), il valore di Tl (n) e una funzione esponenziale in n. In questo caso la misura T risulta (per macchine sequenziali) assolutamente irrealistica. In modo analogo possiamo definire, secondo il criterio logaritmico, la quantita di spazio di memoria consumata da un certo programma su un dato input. Infatti, consideriamo la computazione di un programma P su un input x; questa puo essere vista come una sequenza di stati, quelli raggiunti dalla macchina dopo lesecuzione di ogni istruzione a partire dallo stato iniziale. Lo spazio occupato in un certo stato della computazione e la somma delle lunghezze degli interi contenuti nei registri utilizzati dal programma in quellistante. Lo spazio complessivo richiesto, secondo il criterio logaritmico, e quindi il massimo di questi valori al variare degli stati raggiunti dalla macchina durante la computazione. Denoteremo questa quantita con SPl (x). Esercizi 1) Sia P il programma definito nellesempio 3.1. Qual e nel caso peggiore il valore di TP (x) tra tutti i vettori x di n interi? 2) Supponiamo che il programma definito nellesempio 3.1 riceva in ingresso un vettore di n interi compresi tra 1 e k. Determinare lordine di grandezza del suo tempo di calcolo secondo il criterio logaritmico al crescere di n e k. 3) Scrivere un programma RAM per il calcolo della somma di n interi. Assumendo il criterio di costo uniforme, determinare lordine di grandezza, al crescere di n, del tempo di calcolo e dello spazio di memoria richiesti. 
4) Do the previous exercise under the logarithmic cost criterion, assuming the n input integers have length n.

3.2 The RASP machine

The instructions of a RAM program are not stored in the machine's registers and therefore cannot be modified during execution. In this section we present instead the RASP (Random Access Stored Program) model, which keeps the program in a portion of memory and thus makes it possible to change instructions while the program runs. The instruction set of a RASP machine is identical to that of the RAM machine, with the single exception that indirect addressing (which we denoted by *k) is not allowed. The program of a RASP machine is loaded into memory by assigning two consecutive registers to each instruction: the first contains an integer encoding the operation code of the instruction; the second holds the address. Moreover, the contents of the first register also specify the kind of address kept in the second, i.e. whether the following operand is of the form =k or k; in this way the second register stores only the value k. To execute an instruction, the location counter must point to the first of the two registers. A possible encoding of the instructions is given by the following table:

  Instruction | Code      Instruction | Code      Instruction | Code
  LOAD k      | 1         SUB =k      | 7         WRITE k     | 13
  LOAD =k     | 2         MULT k      | 8         WRITE =k    | 14
  STORE k     | 3         MULT =k     | 9         JUMP k      | 15
  ADD k       | 4         DIV k       | 10        JGTZ k      | 16
  ADD =k      | 5         DIV =k      | 11        JZERO k     | 17
  SUB k       | 6         READ k      | 12        JBLANK k    | 18
                                                  HALT        | 19

Example 3.4
For STORE 17 the first register contains 3, the second 17. For ADD =8 the first register contains 5, the second 8. For ADD 8 the first register contains 4, the second 8.
The notions of state, computation, function computed by a program, and time and space (uniform or logarithmic) are defined as for RAM machines, with a few small variations: for instance, except for the jump instructions (JUMP, JGTZ, JZERO, JBLANK), the register lc is incremented by 2, since each instruction occupies two consecutive registers. We remark that in the initial state the registers are not all set to 0, as was the case in the RAM model, because the program must be stored in memory; we also stress that the program can modify itself in the course of its own execution. As for RAM machines, given a RASP program Pi and an input I, we denote by F_Pi(I) the function computed by Pi and by T_Pi(I), T_Pi^l(I), S_Pi(I), S_Pi^l(I) respectively the uniform time, the logarithmic time, the uniform space and the logarithmic space consumed by the program on the given input.

We now address the problem of simulating RAM machines with RASP machines and vice versa. A first result is the following:

Theorem 3.1 For every RAM program Pi there exists a RASP program Pi' that computes the same function (i.e. F_Pi = F_Pi') and such that T_Pi'(I) <= 6 * T_Pi(I).

Proof. Let |Pi| be the number of instructions of the RAM program Pi. The RASP program Pi' we construct will be contained in the registers from R_2 to R_r, where r = 12|Pi| + 1; register R_1 will be used by the RASP as a temporary accumulator, and in the simulation the contents of address k on the RAM machine (k >= 1) will be stored at address r + k on the RASP machine. The program Pi' is obtained from Pi by replacing each RAM instruction in Pi with a sequence of RASP instructions performing the same computation. Every RAM instruction that does not require indirect addressing is easily replaced by the corresponding RASP instructions (with addresses suitably incremented).
We now show that every RAM instruction requiring indirect addressing can be replaced by 6 RASP instructions; hence the running time of Pi' is at most 6 times that required by Pi, and the program Pi' occupies at most 12|Pi| registers, justifying the choice of r. We prove the property only for MULT *k, since the argument carries over easily to the other instructions. The simulation of MULT *k is given by the following sequence of RASP instructions, which we suppose are placed in registers R_M through R_{M+11}:

  Address    | Contents | Meaning     | Comment
  M,   M+1   | 3, 1     | STORE 1     | Save the contents of the accumulator in register R_1
  M+2, M+3   | 1, r+k   | LOAD r+k    | Load into the accumulator the contents Y of the register at address r+k
  M+4, M+5   | 5, r     | ADD =r      | Compute r + Y in the accumulator
  M+6, M+7   | 3, M+11  | STORE M+11  | Store r + Y in the register at address M+11
  M+8, M+9   | 1, 1     | LOAD 1      | Reload the old contents of the accumulator
  M+10, M+11 | 8, -     | MULT r+Y    | Multiply the contents of the accumulator by those of register r + Y

As for the logarithmic cost criterion, the same technique together with a careful analysis of the costs yields a property analogous to the previous one.

Theorem 3.2 For every RAM program Pi there exist a RASP program Pi' and an integer constant C > 0 such that, for every input I, F_Pi = F_Pi' and T_Pi'^l(I) <= C * T_Pi^l(I).

Indirect addressing makes it possible to simulate RASP programs on RAM machines. We state the following result without proof:

Theorem 3.3 For every RASP program Pi' there exists a RAM program Pi that computes the same function (i.e. F_Pi = F_Pi') and two positive constants C_1, C_2 such that T_Pi(I) <= C_1 * T_Pi'(I) and T_Pi^l(I) <= C_2 * T_Pi'^l(I) for every input I.

3.3 Computability and effective computability

A consequence of the preceding results is that the class of functions computable by RAM programs coincides with the class of functions computable by RASP programs.
Again by simulation techniques, one could show that this class coincides with the class of functions computable in various formalisms (PASCAL, C, Turing machines, lambda-calculus, PROLOG, etc.); this independence from the formalism makes this class of functions, called partial recursive functions, extremely robust, so much so that some authors propose to identify the (intuitive) notion of problem solvable by automatic means with the (technically well-defined) class of partial recursive functions (Church-Turing thesis). A second conclusion is that the class of functions computable in (worst-case) time O(f(n)) on RAM machines coincides with the class of functions computable in (worst-case) time O(f(n)) on RASP machines. This result, however, cannot be extended to the other formalisms mentioned above. But if we call P the class of problems solvable by RAM machines, under the logarithmic criterion, in time bounded by a polynomial (which happens when the time on inputs of size n is O(n^k) for a suitable k), this class remains unchanged when passing to other formalisms, always under logarithmic cost. This remarkable invariance property makes the class P particularly interesting, so much so that some authors have proposed to identify it with the class of problems practically solvable by automatic means (extended Church thesis).

3.4 A high-level language: AG

As we have seen, any algorithm can be described by a program for the RAM machine, and this makes it possible to define the time and space required by its execution. On the other hand, RAM programs are hard to understand; it is therefore important to describe algorithms in a language that on one hand is concise enough to be easily understood, and on the other is precise enough that every program can be transparently translated into a RAM program.
In fact we want to be able to write understandable programs and at the same time to evaluate their complexity, understood as the complexity of the corresponding translated RAM program, without carrying out the translation explicitly. We now give an informal description of a procedural language that we call AG. Type declarations will be avoided, at least when the types are clear from the context. Every AG program makes use of variables; a variable is an identifier X associated with a fixed set U of possible values (which intuitively define the type of the variable). The set U may consist, for example, of numbers, words, or data structures such as arrays, stacks, lists, etc. (which will be treated in Chapters 4 and 9). It defines the set of values that X may assume during the execution of a program. Indeed, as we shall see, the language provides suitable assignment commands that allow a given value to be attributed to a variable. Thus, during the execution of a program, each variable always has a current value. On the RAM machine, on the other hand, the variable X is represented by one or more registers whose contents, in a given state, represent the current value of X. Modifying the value of X therefore means, on the RAM, changing the contents of the corresponding registers. An expression is a term denoting the application of operation symbols to variables or constant values. For example, if X and Y are integer-valued variables, (X + Y)^2 is an expression in which + and * are symbols denoting the usual operations of addition and multiplication. In the next chapter we will introduce data structures with their operations, and it will then be possible to define the corresponding expressions (for example, PUSH(PILA, X) is an expression in which X and PILA are variables, the first taking values in a set U and the second over the stacks defined on U).
During the execution of a procedure, expressions too have a current value. The value of an expression in a computation state is the result of applying the corresponding operations to the values of the variables. A condition is a predicate symbol applied to one or more expressions; for example, X * Y > Z and A = B are conditions in which the symbols > and = appear, with the obvious meaning. The value of a condition in a given state is true if the predicate applied to the values of the expressions holds, and false otherwise. In what follows we will often denote true by 1 and false by 0. We now describe, concisely and informally, the syntax and semantics of AG programs, also providing an evaluation of the corresponding execution times. An AG program is a command of one of the following kinds:

1. Assignment command, of the form V := E, where V is a variable and E an expression. The effect of executing an assignment command is to give the variable the value of the expression; its time complexity is the sum of the time needed to evaluate the expression and the time needed to assign the new value to the variable.

2. if-then-else command, of the form if P then C_1 else C_2, where P is a condition and C_1, C_2 are commands. The effect is to execute C_1 if P is true in the current computation state, and C_2 otherwise. The time is the sum of the time needed to evaluate P and the time required by C_1 or C_2, depending on whether P is true or false.

3. for command, of the form for k := 1 to n do C, where k is an integer variable and C a command. The effect is to execute the command C repeatedly, with the variable k taking the values 1, 2, ..., n. The computation time is the sum of the times required by the n executions of C.

4. while command, of the form while P do C, where P is a condition and C a command.
If the condition P is true, the command C is executed; this is repeated until the condition becomes false. The computation time is the sum of the times needed to evaluate the condition in the various states and of the execution times of C in the various states.

5. Compound command, of the form begin C_1; ...; C_m end, where C_1, ..., C_m are commands. The effect is to apply the commands C_1, C_2, ..., C_m in order, and the time is the sum of the execution times of C_1, C_2, ..., C_m.

6. Labeled command, of the form e: C, where e is a label and C a command. The effect is to execute C, with the computation time of C.

7. goto command, of the form goto e, where e is a label. The effect is to transfer execution to a command with label e.

In AG it is possible to declare subprograms and then call them from a main program. They can be used in two ways:

a) the subprogram computes a function explicitly used by the main program;

b) the subprogram modifies the state, i.e. the contents of the variables, of the main program.

In the first case the subprogram is described in the form

  Procedura Nome(F) C;

where Nome is an identifier of the subprogram, F = [f_1, ..., f_m] is a list of parameters called formal parameters, and C is a command containing return instructions of the form return E, with E an expression. The main program contains commands of the form A := Nome(B), where A is a variable and B = [B_1, ..., B_m] is a list of variables placed in one-to-one correspondence with the formal parameters; they are called actual parameters. Executing the command A := Nome(B) in the main program requires initializing the subprogram by giving the formal parameters the value of the actual parameters (call by value) or their address (call by address).
Control is then passed to the subprogram: when an instruction of the form return E is executed, the value of E is assigned to A in the main program, which resumes control. Consider for example the subprogram:

  Procedura MAX(x, y)
    if x > y then return x
             else return y

Executing A := MAX(V[I], V[J]) in a main program, in a state where V[I] has value 4 and V[J] has value 7, assigns to A the value 7. In the second case too the subprogram is described in the form Procedura Nome(F) C, where Nome is an identifier of the subprogram and F = [f_1, ..., f_m] the list of formal parameters; here, however, the command C is not required to contain instructions of the form return E. The procedure can be called from the main program by a procedure-call command of the form Nome(B), where B is a list of actual parameters in one-to-one correspondence with the formal parameters. In this case too the called procedure is initialized by giving the formal parameters the value of the actual parameters (call by value) or their address (call by address). An example is given by the following procedure:

  Procedura SCAMBIA(x, y)
    begin t := x; x := y; y := t end

If the calls are by address, the procedure call SCAMBIA(A[k], A[s]) in the main program has the effect of exchanging components k and s of the array A. As for call by value, observe that modifications of the value of a formal parameter during the execution of a subprogram are not reflected in corresponding modifications of the associated actual parameter; conversely, if the call is by address, every modification of the formal parameter translates into the analogous modification of the corresponding actual parameter in the calling program. The cost of calling a subprogram (call by address) is the cost of executing the command associated with the subprogram. A procedure may call other procedures, and possibly itself.
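The difference between the two calling conventions can be illustrated outside AG as well. In the Python sketch below (used only as a vehicle; the function names are ours, modeled on SCAMBIA), an immutable argument behaves like call by value, while passing a mutable container and swapping its components reproduces the effect of call by address:

```python
def scambia_by_value(x, y):
    # rebinding the local names does not affect the caller:
    # like SCAMBIA with call by value
    x, y = y, x

def scambia_by_address(v, k, s):
    # mutating the shared container is visible to the caller:
    # like SCAMBIA(A[k], A[s]) with call by address
    v[k], v[s] = v[s], v[k]

a = [10, 20, 30]
scambia_by_value(a[0], a[2])   # a is unchanged
scambia_by_address(a, 0, 2)    # a becomes [30, 20, 10]
```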
We will discuss later the cost of the RAM implementation in this important case. We conclude the description of the language by recalling the notion of pointer. A pointer is a variable X whose current value is the RAM address of another variable, which in our language is denoted *X. The variable *X is also called the variable pointed to by X; during the execution of a program it may be undefined, in which case X takes the conventional value nil (thus the set of possible values of X is given by nil together with the memory addresses of the RAM). A pointer is therefore an object defined within the syntax of the language AG, whose meaning, however, is closely tied to the RAM machine; in particular, a pointer can easily be simulated in a RAM program simply by using indirect addressing. Observe that the variable *X may represent arrays, matrices, or more complex data structures; in this case, on the RAM machine, X will be represented by a register containing the address of the first cell representing *X. In what follows we will often use the usual graphical representation of a pointer: a cell labeled X holding an arrow to the cell of *X, or holding nil when *X is undefined. During the execution of an AG program, suitable assignment instructions can modify the values of a pointer and of the variable it points to. The following table describes the meaning of the assignment commands involving two pointers X and Y; they are in all respects part of the assignment commands of the language AG.

  Command   | Meaning
  *X := *Y  | Assigns to the variable pointed to by X the value of the variable pointed to by Y.
  *X := Z   | Assigns the value of the variable Z to the variable pointed to by X.
  Z := *X   | Assigns the value of the variable pointed to by X to the variable Z.
  X := Y    | Assigns the value of Y (i.e. the address of *Y) to X (after the execution, X and Y point to the same variable *Y).
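The distinction between copying through pointers (*X := *Y) and copying the pointers themselves (X := Y) has a direct analogue in any language with references. A hedged sketch in Python, where `copy.deepcopy` plays the role of the component-wise transfer and plain assignment that of the O(1) pointer copy (the variable names are ours):

```python
import copy

n = 3
Y = [[i * n + j for j in range(n)] for i in range(n)]  # an n x n matrix

X_deep = copy.deepcopy(Y)   # like *X := *Y: work proportional to n*n, separate storage
X_alias = Y                 # like X := Y: O(1), X and Y now share one matrix

Y[0][0] = 99
# the alias sees the update; the deep copy does not
```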
For example, if *X and *Y contain n x n matrices, the command *X := *Y transfers into *X the n x n matrix contained in *Y; any reasonable implementation on the RAM machine will require Theta(n^2) steps for this transfer. Conversely, the command X := Y makes the variable X point to the variable *Y (i.e. to the variable Y points to) in O(1) steps (under the uniform criterion); naturally, in this case a single matrix is stored at the common address contained in X and Y.

Exercises

1) Using the language AG, describe an algorithm for computing the product of n integers (for arbitrary n in N), and one for determining their maximum and minimum values.
2) Assuming the uniform cost criterion, determine the order of magnitude of the computation time and of the memory space required by the algorithms considered in the previous exercise.
3) Do the previous exercise under the logarithmic cost criterion, assuming the n input integers have at most n bits.

Chapter 4

Elementary data structures

To keep a collection of information in memory and allow its efficient manipulation, it is essential to organize the data precisely, making explicit the relations and dependencies existing among the items and defining the functions that allow the information to be modified. We do not intend to give here a general definition of data structure (which is deferred to a later chapter), but more simply to describe its intuitive meaning and to present some examples that will be used later. Informally, a data structure consists of one or more sets together with operations defined on their elements. This notion is therefore abstract, independent of the concrete representation of the structure on the RAM model of computation. The traditional structures of vector, record, and list, already encountered in programming courses, can thus be defined at an abstract level as sets of elements equipped with suitable operations.
The implementation of a given structure, on the other hand, describes the criterion by which the various elements are stored in the machine's registers and defines the programs that carry out the operations. Clearly, every data structure in general admits several implementations, each of which has a cost in space, for keeping the data in memory, and one in time, for executing the programs associated with the operations. In designing and analyzing an algorithm, however, it is important to consider data structures independently of their implementation. This often makes it possible to represent part of an algorithm's instructions as operations on structures, highlighting the method the procedure adopts to solve the problem and leaving implementation details aside. All this considerably eases the understanding of how the algorithm works and the analysis of its costs. The next step may be to study an algorithm at different levels of abstraction, depending on the detail with which the implementations of the data structures involved are specified. In our setting we can thus identify at least three levels of abstraction: the first is that of the RAM machine introduced in the previous chapter; the second is the one defined by the language AG; and the third (described in effect in this chapter) is the one in which data structures and their operations are used explicitly in the description of algorithms.

4.1 Vectors and records

Let n be a positive integer and let U be a set of values (for example integers, reals, or words over a given alphabet). A vector of dimension n over U is an n-tuple (a_1, a_2, ..., a_n) such that a_i belongs to U for every i in {1, 2, ..., n}; we also say that a_i is the i-th component of the vector.
The set of all vectors of dimension n over U is thus the Cartesian product

  U^n = U x U x ... x U   (n times)

The operations defined on vectors are projection and substitution of components, defined as follows. For every integer i, 1 <= i <= n, the i-th projection is the function pi_i : U^n -> U such that, for every A = (a_1, a_2, ..., a_n) in U^n, pi_i(A) = a_i. The substitution of the i-th component is instead defined by the function sigma_i : U^n x U -> U^n which associates with every A = (a_1, a_2, ..., a_n) in U^n and every value u in U the vector B = (b_1, b_2, ..., b_n) in U^n such that

  b_j = u    if j = i
  b_j = a_j  otherwise

We now consider the implementation of vectors on the RAM machine. Clearly, if every element of U can be represented by the contents of one memory cell, a vector A = (a_1, a_2, ..., a_n) in U^n can be represented by the contents of n consecutive cells: the component a_i is represented by the contents of the i-th cell, as shown schematically below.

  A: | a_1 | a_2 | ... | a_n |

Obviously, more complex implementations of a vector are possible, depending on the representation of the elements of U on the RAM machine. If, for example, k cells are needed to represent a value of U, we can implement a vector A in U^n by n consecutive blocks, each consisting of k registers. As for the implementation of the projection and substitution operations, observe that a vector may be the value of a variable, and the same holds for its components. In particular, if X represents a variable over U^n and i is in {1, 2, ..., n}, we can denote by X[i] the variable representing the i-th component of X. This makes it possible to define the implementation of the operations pi_i and sigma_i directly as assignments of values to variables.
Thus in our language AG we may use the instructions X[i] := e or Y := X[j], where X and Y are variables, the first taking values in U^n and the second in U, while e is an expression with values in U. Their meaning (understood as the implementation of the operation on the RAM machine) is obvious. In an entirely analogous way we can define matrices, viewed as two-dimensional vectors. Given two positive integers p and q, a matrix of dimension p x q over the set U is a collection of elements m_ij, where i is in {1, 2, ..., p} and j is in {1, 2, ..., q}, each belonging to U. Such a matrix is usually written in the form [m_ij], and the elements m_ij are also called components of the matrix. The set of matrices of this form is often denoted by U^(p x q). In this case too the associated operations are projection and substitution, this time indexed by pairs: for every M = [m_ij] in U^(p x q) and every t in {1, 2, ..., p}, s in {1, 2, ..., q}, we define pi_ts(M) = m_ts and sigma_ts(M, u) = [r_ij], where

  r_ij = u     if i = t and j = s
  r_ij = m_ij  otherwise

for every u in U. Suppose every element of U can be represented on the RAM machine by a single register. Then the natural implementation of a matrix M = [m_ij] in U^(p x q) consists in associating its components with the contents of p * q consecutive registers starting from a fixed register R_k; in this way the component m_ij is represented by register R_(k+(i-1)q+j-1). Note that to fetch the value of this cell a certain number of arithmetic operations must be performed just to determine its address. Clearly, if k is a fixed integer independent of p and q, the access cost is O(log(pq)) under the logarithmic criterion. More complex implementations are needed when the elements of U require several registers. Finally, the operations defined on matrices can be implemented in a way entirely similar to those defined on vectors.
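The addressing rule above is just row-major layout. A small sketch (the function name is ours) maps the pair (i, j), with 1 <= i <= p and 1 <= j <= q, to the register index k + (i-1)q + (j-1) and checks it against a nested representation:

```python
def flat_address(k, q, i, j):
    """Index of the register holding m_ij when a p x q matrix is
    stored row by row starting at register k (i, j are 1-based)."""
    return k + (i - 1) * q + (j - 1)

p, q, k = 3, 4, 10
flat = [None] * (k + p * q)          # a toy memory
for i in range(1, p + 1):
    for j in range(1, q + 1):
        flat[flat_address(k, q, i, j)] = (i, j)
# each component (i, j) now sits at its computed register
```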
A structure entirely analogous to the vector is given by the notion of record. Given n sets of values U_1, U_2, ..., U_n, we call a record any n-tuple of elements R = (x_1, x_2, ..., x_n) such that x_i belongs to U_i for every i in {1, 2, ..., n}. Since we do not want to tie the components of a record to indices, we introduce n names el_1, el_2, ..., el_n to denote respectively the sets U_1, U_2, ..., U_n; we call them the fields of the record R. We say that the record R contains x_i in field el_i, and we represent this element by the expression R.el_i. The operations associated with a record R are again projection and substitution, this time indexed by the fields of the record: pi_el_i(R) = R.el_i, and sigma_el_i(R, u) = S, where S is obtained from R by replacing R.el_i with u. The implementation of a record is entirely similar to that of a vector. Note that the number of components of a vector or a record is fixed by its dimension, while it is often useful to consider vectors or records of variable size. To this end we introduce the notion of table. A table T of dimension k in N is a record with two fields: the first contains an integer m such that 1 <= m <= k, while the second contains a vector of m components. It can be implemented by k consecutive cells, keeping the values of the vector in the first m and marking its size m in a suitable way (for example by a pointer, or by storing m in a given register). This implementation, for a table T containing a vector (e_1, e_2, ..., e_m), will be depicted in what follows as a row of cells:

  T: | e_1 | e_2 | ... | e_m |

4.2 Lists

As we saw in the previous section, vectors and records are characterized by operations that allow direct access to the individual components but do not allow the size of the structure to be changed.
In this section and the following ones we define instead structures whose size can be modified by adding or removing elements, but in which access to a component is not always the result of a single operation and may require, in the implementation, a number of steps proportional to the size of the structure itself. A typical data structure into which elements can easily be inserted, and from which they can easily be removed, is the list. Given a set of values U, we call a list any finite sequence L of elements of U. In particular L may be the empty list, containing no elements, in which case we denote L by the symbol Lambda; otherwise we write L in the form

  L = <a_1, a_2, ..., a_n>,

where n >= 1 and a_i belongs to U for every i = 1, 2, ..., n. In this case we also say that a_i is the i-th element of L. Lists are further equipped with the operations IS_EMPTY, ELEMENTO, INSERISCI and TOGLI, which we define below. These allow us to check whether a list is empty, to retrieve its elements, or to modify it by inserting a new object or removing one at an arbitrary position. It is important to stress that the insertion and removal operations depend on the position of the elements in the list and not on their value. For every list L, every k in N and every u in U, the operations just mentioned are defined as follows:

  IS_EMPTY(L) = 1 if L = Lambda; 0 otherwise.

  ELEMENTO(L, k) = a_k if L = <a_1, a_2, ..., a_n> and 1 <= k <= n; Lambda otherwise.

  INSERISCI(L, k, u) = <u> if L = Lambda and k = 1;
                       <a_1, ..., a_(k-1), u, a_k, ..., a_n> if L = <a_1, a_2, ..., a_n> and 1 <= k <= n+1;
                       L otherwise.

  TOGLI(L, k) = <a_1, ..., a_(k-1), a_(k+1), ..., a_n> if L = <a_1, a_2, ..., a_n> and 1 <= k <= n;
                L otherwise.

By composing these operations we can compute other functions for manipulating the elements of a list; among them we recall those that retrieve or modify the first element of a list:

  TESTA(L) = a_1 if L = <a_1, a_2, ..., a_n>; Lambda if L = Lambda.
  INSERISCI_IN_TESTA(L, u) = <u, a_1, a_2, ..., a_n> if L = <a_1, a_2, ..., a_n>; <u> if L = Lambda.
  TOGLI_IN_TESTA(L) = <a_2, ..., a_n> if L = <a_1, a_2, ..., a_n>; Lambda if L = Lambda.

Indeed, it is clear that TESTA(L) = ELEMENTO(L, 1), INSERISCI_IN_TESTA(L, u) = INSERISCI(L, 1, u), and TOGLI_IN_TESTA(L) = TOGLI(L, 1). By the same criterion we can define the operation computing the length of a list and the one replacing its k-th element:

  LUNGHEZZA(L) = 0 if L = Lambda; 1 + LUNGHEZZA(TOGLI_IN_TESTA(L)) otherwise.
  CAMBIA(L, k, u) = TOGLI(INSERISCI(L, k, u), k + 1).

Other frequently used operations can be defined analogously. We list some of them below, omitting the formal definition and giving only the intuitive meaning:

  APPARTIENE(L, u) = 1 if u occurs in L; 0 otherwise.
  CODA(<a_1, ..., a_m>) = a_m
  INSERISCI_IN_CODA(<a_1, ..., a_m>, x) = <a_1, ..., a_m, x>
  TOGLI_IN_CODA(<a_1, ..., a_m>) = <a_1, ..., a_(m-1)>
  CONCATENA(<a_1, ..., a_m>, <b_1, ..., b_s>) = <a_1, ..., a_m, b_1, ..., b_s>

A computation frequently carried out by algorithms that manipulate a list consists in scanning its elements from first to last, performing certain operations on each of them. A procedure of this kind can be carried out by suitably applying the operations defined on lists. For brevity, and with some abuse of notation, in what follows we will denote by the instruction for b in L do Op(b) the procedure that performs the predefined operation Op on each element b of the list L, in the order in which those objects appear in L.
Per esempio, per determinare quante volte un elemento a compare in una lista L, possiamo eseguire la seguente procedura: begin n:=0 for b L do if b = a then n:=n+1 return n end Tale procedura puo essere considerata come limplementazione della seguente operazione definita utilizzando le operazioni fondamentali: 0 se L = 1 + NUMERO(a, TOGLI IN TESTA(L)) se L 6= e a = TESTA(L)) NUMERO(a, L) = NUMERO(a, TOGLI IN TESTA(L)) se L 6= e a 6= TESTA(L)) 4.2.1 Implementazioni In questa sezione consideriamo tre possibili implementazioni della struttura dati appena definita. La piu semplice consiste nel rappresentare una lista L = ha1 , a2 , . . . , an i mediante una tabella T di dimensione m > n che mantiene nelle prime n componenti i valori a1 , a2 , . . . , an nel loro ordine. In questo modo il calcolo del k-esimo elemento di L (ovvero lesecuzione delloperazione ELEMENTO(L, k)) e abbastanza semplice poiche e sufficiente eseguire una proiezione sulla k-esima componente di T ; possiamo assumere che nella maggior parte dei casi questo richieda un tempo O(1) secondo il criterio uniforme. Viceversa linserimento o la cancellazione di un elemento in posizione k-esima richiede lo spostamento di tutti gli elementi successivi. Inoltre, prima di inserire un elemento, bisogna assicurarsi che la dimensione della lista non diventi superiore a quella della tabella, nel qual caso occorre incrementare opportunamente la dimensione di questultima ed eventualmente riposizionarla in unaltra area di memoria. Lesecuzione di queste operazioni puo quindi essere costosa in termini di tempo e spazio e questo rende poco adatta tale implementazione quando occorre eseguire di frequente linserimento o la cancellazione di elementi. Una seconda implementazione, che per molte applicazioni appare piu naturale della precedente, e basata sulluso dei puntatori ed e chiamata lista concatenata. In questo caso una lista L = ha1 , a2 , . . . , an i viene rappresentata mediante n record R1 , R2 , . . . 
, Rn, one per position, each consisting of two fields that we call el and punt. Each Ri stores in field el the value ai and in field punt a pointer to the next record. Moreover, a special pointer P points to the first record of the list.

[Figure: the linked list P → R1 → ⋯ → Rn, where the pointer field of Rn is nil.]

This representation also makes it easy to implement (in pseudocode, and hence on a RAM machine) the operations defined above. As examples, we report a few particular cases.

IS EMPTY(P)
  if P = nil then return 1
             else return 0

INSERISCI IN TESTA(P, a)
  create a pointer X to a variable of type record
  X→el := a
  X→punt := P
  P := X

Figure 4.1: Execution steps of the INSERISCI IN TESTA procedure.

APPARTIENE(P, a)
  if P = nil then return 0
  else begin
    R := P
    while R→el ≠ a and R→punt ≠ nil do R := R→punt
    if R→el = a then return 1 else return 0
  end

Note the difference between the implementations of IS EMPTY and APPARTIENE and that of INSERISCI IN TESTA: in the first two cases the procedure returns a value, while in the third it directly modifies the list received as input. In many cases this latter kind of implementation is the one actually required when manipulating a data structure. As for the complexity of these procedures, if we assume that testing equality between elements takes time O(1), we easily see that the computation time of IS EMPTY and INSERISCI IN TESTA is O(1), while APPARTIENE requires O(n) steps in the worst case, where n is the number of elements in the list. A very interesting aspect is that the space complexity turns out to be Θ(n), which allows an efficient dynamic management of lists.
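The linked (pointer-based) representation can be sketched in Python with a small Record class whose fields mirror el and punt (a minimal sketch; the head pointer P is simply a reference to the first record, with None playing the role of nil):

```python
class Record:
    def __init__(self, el, punt=None):
        self.el = el      # field "el": the stored value
        self.punt = punt  # field "punt": reference to the next record (None = nil)

def is_empty(P):
    return 1 if P is None else 0

def inserisci_in_testa(P, a):
    # Create a new record pointing to the old head and return it as the new head.
    return Record(a, P)

def appartiene(P, a):
    # Scan the chain of records, following the punt field.
    R = P
    while R is not None:
        if R.el == a:
            return 1
        R = R.punt
    return 0
```

Note that inserisci_in_testa returns the new head rather than mutating P in place: in Python the rebinding of the caller's variable must be done by the caller.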
A linked list can be traversed in one direction only; this implementation turns out to be inadequate whenever it is useful to traverse the list from tail to head. A representation of the list that is symmetric with respect to the traversal direction is the so-called doubly linked list. It is obtained from a sequence of records with 3 fields. In each record the first field (prec) contains a pointer to the previous record, the third field (succ) contains a pointer to the next record, while the element to be stored is kept in the second one (el), as shown in figure 4.2.

Figure 4.2: Doubly linked list.

Another possible implementation of a list can be obtained by means of two tables called Nome and Succ. If I is an index of the table, Nome[I] is an element of the list L, while Succ[I] is the index of the element that follows Nome[I] in L. An index I ≠ 0 identifies the list if Succ[I] is defined but Nome[I] is not, while by convention the end of the list is marked by 0. For example, the following table stores at position 3 the list ⟨F, O, C, A⟩.

        1  2  3  4  5
Nome    C  O  –  A  F
Succ    4  1  5  0  2

Observe that the order in which elements are inserted into the list does not necessarily coincide with the order of insertion into the table. This representation is flexible and interesting, since it allows several lists to be stored in consecutive memory cells. For example, the next table stores at position 3 the list ⟨B, A, S, S, A⟩, at position 8 the list ⟨A, L, T, A⟩ and at position 10 the list ⟨D, U, E⟩.

        1   2   3  4  5  6  7  8  9  10  11  12  13  14  15
Nome    S   T   –  B  A  A  A  –  L  –   S   A   E   D   U
Succ    11  12  4  5  1  0  9  7  2  14  6   0   0   15  13

In this representation, too, the implementation of the various operations is straightforward. Let us see an example.
APPARTIENE(I, a)
  S := Succ[I]
  if S = 0 then return 0
  else begin
    while Nome[S] ≠ a and Succ[S] ≠ 0 do S := Succ[S]
    if Nome[S] = a then return 1 else return 0
  end

Doubly linked lists admit an analogous representation, provided we add a third table Prec, where Prec[I] is the index of the element preceding Nome[I] in the list under consideration.

4.3 Stacks

Stacks can be viewed as lists in which the operations of projection, insertion and deletion may be applied only to the last element. They are therefore a sort of restriction of the notion of list, and the difference between the two structures lies entirely in their operations. Formally, given a set U of values, a stack is a finite sequence S of elements of U; again we denote by Λ the empty stack, while a nonempty stack S will be written in the form S = (a1, a2, …, an), where n ≥ 1 and ai ∈ U for every i = 1, 2, …, n. On stacks one can perform the operations IS EMPTY, TOP, POP and PUSH, defined as follows:

IS EMPTY(S) = 1 if S = Λ, 0 otherwise.

TOP(S) = an if S = (a1, a2, …, an); Λ if S = Λ.

POP(S) = (a1, a2, …, an−1) if S = (a1, a2, …, an); Λ if S = Λ.

Moreover, for every a ∈ U:

PUSH(S, a) = (a1, a2, …, an, a) if S = (a1, a2, …, an); (a) if S = Λ.

The operations just defined realize a storage-and-retrieval discipline called LIFO (Last In First Out): the first item that can be removed from the stack coincides with the last element inserted. Note that in this structure, in order to access the i-th element one must first remove from the stack all the later ones. The natural implementation of a stack S = (a1, a2, …, an) consists of a table of size k ≥ n that contains the elements a1, a2, …, an in its first n components, and of a pointer top(S) to the component an.
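This table-based implementation can be sketched in Python with a fixed-size list plus a top index (a minimal sketch; the class and method names are illustrative, and top = −1 encodes the empty stack):

```python
class Pila:
    def __init__(self, k):
        self.T = [None] * k   # table of fixed size k
        self.top = -1         # index of the top element; -1 means empty

    def is_empty(self):
        return 1 if self.top < 0 else 0

    def push(self, x):
        # The caller must guarantee that the stack never outgrows the table.
        assert self.top + 1 < len(self.T), "table overflow"
        self.top += 1
        self.T[self.top] = x

    def top_el(self):
        # TOP: the last element pushed, if any
        if self.top >= 0:
            return self.T[self.top]

    def pop(self):
        # POP: discard the top element by moving the index back
        if self.top >= 0:
            self.top -= 1
```

For example, after push(1) and push(2), top_el() returns 2, and a pop() exposes 1 again, reflecting the LIFO discipline.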
[Figure: a table holding a1, a2, …, an, with the pointer top(S) at component an.]

In this case, to implement the stack operations correctly we must guarantee that the size n of S always stays less than or equal to the size k of the table. In many applications this condition is satisfied; otherwise it is better to implement the stack as a linked list. In this case every element ai of S is represented by a record with two fields: the first contains a pointer to the record of component ai−1, while the second contains the value ai. This representation is depicted as follows:

[Figure: records R1, …, Rn, each pointing back to its predecessor, with top(S) pointing to Rn.]

It is easy to define the programs that perform the operations TOP, POP and PUSH on the implementations just considered. Below we present the procedures for the implementation of a stack by a table, which we assume stored starting from a cell of address k.

Procedure IS EMPTY(S)
  if top(S) < k then return 1
                else return 0

Procedure TOP(S)
  if top(S) ≥ k then return the content of the cell pointed to by top(S)

Procedure PUSH(S, x)
  top(S) := top(S) + 1
  write x into the cell pointed to by top(S)

Procedure POP(S)
  if top(S) ≥ k then top(S) := top(S) − 1

4.4 Queues

A queue is a structure that realizes an insertion-and-deletion discipline called FIFO (First In First Out). In this case the first element that can be deleted coincides with the first element inserted, that is, the oldest among those present in the structure. A queue, too, can be viewed as a restriction of the notion of list. We can then define a queue Q over a set of values U as a finite sequence of elements of U. Again, Q may be the empty queue Λ, or else it will be written as a sequence (a1, a2, …, an), where n ≥ 1 and ai ∈ U for every i. On a queue we can perform the operations
IS EMPTY, FRONT, DEQUEUE and ENQUEUE, defined as follows:

IS EMPTY(Q) = 1 if Q = Λ, 0 otherwise.

FRONT(Q) = a1 if Q = (a1, a2, …, an); Λ if Q = Λ.

DEQUEUE(Q) = (a2, …, an) if Q = (a1, a2, …, an) with n ≥ 2; Λ if Q = (a1) or Q = Λ.

Moreover, for every b ∈ U,

ENQUEUE(Q, b) = (a1, a2, …, an, b) if Q = (a1, a2, …, an); (b) if Q = Λ.

A queue Q = (a1, a2, …, an) can also be implemented by a table of k ≥ n elements together with two pointers (front and rear) that indicate the first and the last element of the queue.

[Figure: a table holding a1, …, an, with front(Q) at a1 and rear(Q) at an.]

In an analogous way we can define the implementation by a linked list; this representation can be described quickly by the following picture:

[Figure: records R1 → R2 → ⋯ → Rn, with front(Q) pointing to R1 and rear(Q) to Rn.]

Exercises

1) Given a finite alphabet Σ, recall that a word over Σ is a sequence a1 a2 ⋯ an with ai ∈ Σ for every i, while its reversal is the word an an−1 ⋯ a1. Using the appropriate data structure and the corresponding operations, define a procedure for each of the following problems:
- compute the reversal of a given word;
- check whether a word is a palindrome, i.e. whether it equals its reversal.

2) Simulate the behaviour of a stack by means of two queues, also defining the procedures that perform the corresponding operations. How many steps are needed in the worst case to perform one operation on a stack of n elements? Define analogously the behaviour of a queue by means of two stacks.

3) Using the operations defined on lists, describe a procedure that removes from a given list all elements occurring more than once, and another that deletes all elements occurring exactly once. Moreover determine, in both cases, the number of operations performed on a list of n elements in the worst case.
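The queue operations IS EMPTY, FRONT, DEQUEUE and ENQUEUE defined above can be transcribed as pure functions on Python tuples (a minimal sketch; the empty tuple plays the role of Λ, and each operation returns a new queue, exactly as in the definitions):

```python
def is_empty(Q):
    return 1 if Q == () else 0

def front(Q):
    # FRONT: the oldest element still in the queue
    return Q[0] if Q else None

def dequeue(Q):
    # DEQUEUE: drop the oldest element
    return Q[1:]

def enqueue(Q, b):
    # ENQUEUE: append b at the rear
    return Q + (b,)
```

A short run illustrates the FIFO discipline: after enqueuing 'a' and then 'b', FRONT yields 'a', and only after one DEQUEUE does it yield 'b'.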
4.5 Graphs

One of the most flexible structures in which data can be organized to ease the management of information is the graph. Like the previous structures, this one too could be defined by sets of elements and various manipulation operations. We prefer, however, to give the classical definition of a graph, here understood as a combinatorial object. This allows us to introduce the main notions and concepts in a simple and direct way. From the various implementations it will then be easy to define the main manipulation operations.

Given a set V, we denote by V^s the set of ordered s-tuples of elements of V, and by V^(s) the family of subsets of V containing s elements (that is, of the s-combinations of V). For example, if we consider the set V = {1, 2, 3}, then

{1, 2, 3}^2 = {(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3), (3, 1), (3, 2), (3, 3)},
{1, 2, 3}^(2) = {{1, 2}, {1, 3}, {2, 3}}.

As we know from section 2.2, if V contains n elements then V^s contains n^s of them, while V^(s) contains C(n, s) = n!/(s!(n−s)!).

Definition 4.1 A directed (or oriented) graph G is a pair G = ⟨V, E⟩, where V is a (finite) set whose elements are called vertices (or nodes) and E is a subset of V^2 whose elements are called arcs; arcs of the form (x, x) are called self-loops. An undirected graph G is a pair ⟨V, E⟩, where V is a (finite) set whose elements are called nodes (or vertices) and E is a subset of V^(2), whose elements are called edges (or arcs).

The following figure gives a graphical representation of the directed graph

G1 = ⟨{1, 2, 3, 4, 5}, {(1, 2), (2, 2), (2, 3), (3, 2), (3, 1), (4, 5), (5, 4), (5, 5)}⟩

and of the undirected graph

G2 = ⟨{1, 2, 3, 4, 5}, {{1, 2}, {2, 3}, {1, 3}, {4, 5}}⟩.

[Figure: drawings of G1 and G2.]

Note that an undirected graph has no self-loops. A subgraph of a graph G = ⟨V, E⟩ is a graph G′ = ⟨V′, E′⟩ such that V′ ⊆ V and E′ ⊆ E.
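For instance, the directed graph G1 can be stored in Python as a dictionary mapping each vertex to the list of its successors, from which a 0/1 adjacency matrix is easily built (a minimal sketch; it assumes, as for G1, that the vertices are 1, …, n):

```python
# Adjacency lists of the directed graph G1 of the text.
G1 = {1: [2], 2: [2, 3], 3: [2, 1], 4: [5], 5: [5, 4]}

def matrice_di_adiacenza(G):
    """Build the adjacency matrix A with A[x-1][y-1] = 1 iff (x, y) is an arc.
    Assumes the vertices of G are exactly 1..n."""
    n = len(G)
    A = [[0] * n for _ in range(n)]
    for x, vicini in G.items():
        for y in vicini:
            A[x - 1][y - 1] = 1
    return A
```

The list representation occupies space proportional to the number of vertices plus arcs, while the matrix always occupies n² entries, which is the trade-off discussed in the text.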
Given an arc (x, y) of a directed graph, we say that x is the tail of the arc and y its head; we also say that y is adjacent to x. The adjacency of x in a graph G = ⟨V, E⟩ is the set of vertices y such that (x, y) ∈ E, that is:

Adiacenza(x) = {y | (x, y) ∈ E}

A graph G = ⟨V, E⟩ can alternatively be represented by the family of the adjacencies of all its vertices, that is:

G = {(x, Adiacenza(x)) | x ∈ V}

For example, the graph G1 can be represented by means of its adjacencies as follows:

G1 = {(1, {2}), (2, {2, 3}), (3, {2, 1}), (4, {5}), (5, {5, 4})}

A graph G can also be described by its adjacency matrix. This is the matrix A_G, indexed by the vertices of the graph and with entries in {0, 1}, such that:

A_G[x, y] = 1 if (x, y) ∈ E, 0 otherwise.

Alongside we give the adjacency matrix of the graph G1:

0 1 0 0 0
0 1 1 0 0
1 1 0 0 0
0 0 0 0 1
0 0 0 1 1

If a graph G has n vertices, any reasonable implementation of the adjacency matrix will require memory space O(n²); if G has a number of edges considerably smaller than n², it may be convenient to represent the graph as a list of vertices, each of which points to the list of its adjacency:

[Figure: each vertex V1, …, Vn points to the list Adiacenza(Vi).]

Here is a possible representation of G1:

[Figure: vertex 1 points to the list ⟨2⟩, vertex 2 to ⟨2, 3⟩, vertex 3 to ⟨2, 1⟩, vertex 4 to ⟨5⟩ and vertex 5 to ⟨5, 4⟩.]

If the graph G has n nodes and e arcs, the above representation occupies memory O(n + e). Naturally, the lists may themselves be described by tables. In this case we can consider a pair of tables of n + e components each: the first n components indicate the head of the adjacency list of each node, while the remaining ones define all the lists. An example is provided by the following figure, which shows the tables for the graph G1.
        1  2  3   4   5   6  7  8  9  10  11  12  13
Testa   –  –  –   –   –   2  2  1  3  5   5   2   4
Succ    7  6  12  11  13  9  0  0  0  0   0   8   10

A path from x to y in a graph G = ⟨V, E⟩ is a sequence (v1, v2, …, vm) of vertices such that v1 = x, vm = y and (vk, vk+1) is an arc of G for every k = 1, 2, …, m − 1. The length of this path is m − 1. We say that a path is simple if all its vertices are distinct, except possibly the first and the last. A simple path whose first and last vertices coincide is called a cycle. These notions extend easily to undirected graphs; in that case, however, cycles are required to have length at least 3.

A graph is said to be connected when for every pair of vertices x and y there exists a path from x to y. An undirected graph G = ⟨V, E⟩ may fail to be connected; it can, however, always be expressed as a union of connected components. Indeed, it suffices to observe that the relation R on V defined by xRy if there exists a path from x to y or x = y is an equivalence relation. Its equivalence classes define a partition {V1, …, Vm} of the vertex set V. For every i = 1, 2, …, m the graph ⟨Vi, Ei⟩, where Ei = {{u, v} | u, v ∈ Vi, {u, v} ∈ E}, is connected and is called a connected component of the graph G; vertices belonging to two distinct equivalence classes, on the other hand, cannot be joined by any path. For example, the graph G2 consists of two connected components: the first is the triangle on the vertices 1, 2, 3, while the second is the edge joining 4 and 5.

4.6 Trees

An undirected graph without cycles is called a forest; if moreover it is connected, it is called a tree. Clearly every forest is a union of trees, since every connected component of a forest is a tree. The graph shown below is a forest composed of 2 trees:

[Figure: a forest with two trees on the vertices 1, …, 9.]

A tree is thus an (undirected) graph that is connected and acyclic, i.e. free of cycles.
We now observe that, for every pair of vertices x and y of any tree, there exists a unique simple path joining them. Indeed, since a tree is connected, there is at least one simple path joining the two nodes; if there were two distinct ones, a cycle would arise (see the following figure).

[Figure: two distinct simple paths between x and y would form a cycle.]

Given an undirected graph G = ⟨V, E⟩, we call a spanning tree of G a tree T = ⟨V′, E′⟩ such that V′ = V and E′ ⊆ E. If G is not connected, such a tree does not exist. In this case we call a spanning forest of G the subgraph formed by a family of trees, one for each connected component of G, each of which is a spanning tree of the corresponding connected component.

4.6.1 Rooted trees

A rooted tree is a pair ⟨T, r⟩, where T is a tree and r is one of its vertices, called the root.

[Figure: the same tree drawn as a plain tree and as a tree rooted at vertex 4.]

A rooted tree is therefore a tree in which one vertex (the root) is singled out. Now, given a vertex x in a tree with root r, there is a unique simple path (r, V2, …, Vm, x) from r to x (if r ≠ x); the vertex Vm preceding x on the path is called the parent of x: every vertex of a rooted tree, except the root, thus has exactly one parent. A rooted tree can then be conveniently represented by the table realizing the parent function:

Child    …   x             …
Parent   …   parent of x   …

Continuing the family analogy, if y is the parent of x we also say that x is a child of y; we further say that z1 and z2 are siblings when they are children of the same parent. Continuing the botanical analogy, we say that a vertex V without children is a leaf. A node that has at least one child, on the other hand, is called an internal node.
Finally, we say that x is a descendant (or successor) of y if y belongs to the simple path going from the root to x; in that case we also say that y is an ancestor (or predecessor) of x. In a rooted tree, orienting every edge {x, y} from the parent towards the child yields a directed-graph structure.

[Figure: the tree rooted at 4 with its edges oriented from parent to child.]

With respect to this structure, we call the height of a vertex x the length of the longest path from x to a leaf, and the depth of x the length of the path from the root to x. The height of a tree is the height of its root.

Rooted trees are used in many applications to distribute information among the various nodes; the associated algorithms often carry out computations in which one walks the path from the root down to a given node or, conversely, climbs from a fixed node up to the root. The height of the tree thus becomes an important parameter in evaluating the running time of such procedures, since in many cases the latter turns out to be proportional (in the worst case) precisely to the maximum distance of the nodes from the root. For this reason one often tries to use trees whose height is small with respect to the number of nodes. The term balanced is used precisely to indicate, intuitively, a tree in which the nodes are fairly close to the root. In many cases an acceptable height is one less than or equal to the logarithm of the number of nodes (possibly multiplied by a constant). More formally, we can give the following definition: a sequence of rooted trees {Tn}, n ∈ IN, where each Tn has exactly n nodes, is said to be balanced if, denoting by hn the height of Tn, we have hn = O(log n).

4.6.2 Ordered trees

An ordered tree (also called a plane tree) is a rooted tree in which the children of every vertex are totally ordered.
The following two trees coincide as rooted trees but are distinct as ordered trees:

[Figure: two drawings of the tree rooted at 4, with the subtrees in different left-to-right order.]

A classical in-memory representation of an ordered tree uses three vectors. If the tree has n nodes, represented by the first n integers, the three vectors P, F, S have size n and are defined as follows, for every node i:

P[i] = j if j is the parent of i; 0 if i is the root.
F[i] = j if j is the first child of i; 0 if i is a leaf.
S[i] = j if j is the next sibling of i; 0 if i has no next sibling.

For example, in the ordered tree rooted at 4, whose children are 2 and 5 in this order, where node 2 has children 1 and 3 and node 5 has children 6, 7 and 8, the three vectors are defined by

     1  2  3  4  5  6  7  8
P    2  4  2  0  4  5  5  5
F    0  1  0  2  6  0  0  0
S    3  5  0  0  0  7  8  0

Observe that in an ordered tree a child of the root, together with its descendants, forms in turn an ordered tree. This fact allows us to give the following inductive definition of an ordered tree, of great interest for the design and the correctness proofs of algorithms on trees:

1) the tree consisting of a single vertex r is an ordered tree;
2) if T1, T2, …, Tm are ordered trees (defined over pairwise disjoint vertex sets) and r is a node different from the nodes of T1, T2, …, Tm, then the sequence ⟨r, T1, …, Tm⟩ is an ordered tree.

In both cases we say that r is the root of the tree.

Consider, for example, the problem of traversing trees, that is, of visiting the vertices of a tree in some order. Two classical traversal methods are the pre-order and the post-order traversal, described by the two following schemes.

Pre-order traversal of the ordered tree T
If T consists of a single node r, then visit r; otherwise, if T = ⟨r, T1, …
, Tm⟩ then:
1) visit the root r of T;
2) traverse in pre-order the trees T1, T2, …, Tm.

Post-order traversal of the ordered tree T
If T consists of a single node r, then visit r; otherwise, if T = ⟨r, T1, …, Tm⟩ then:
1) traverse in post-order the trees T1, T2, …, Tm;
2) visit the root r of T.

The correctness of these methods is immediately proved by induction. In the following figure we highlight the order in which the nodes of an ordered tree are visited under the two traversal criteria.

[Figure: the same ordered tree with its nodes numbered 1–9 in pre-order and in post-order.]

4.6.3 Binary trees

A binary tree is a rooted tree in which every internal node has at most two children; each child is distinguished as a left child or a right child. The following two trees coincide as ordered trees but are distinct as binary trees:

[Figure: two trees with root 1 and children 2 and 3, in which node 4 is attached to node 3 as a left child in one and as a right child in the other.]

Given a vertex x of a binary tree T, the subtree rooted at the left child of x (resp. at the right child of x), if it exists, is called the left subtree of x (resp. the right subtree of x). A binary tree can be conveniently represented by two tables sin and des, which associate with every vertex x its left child and its right child respectively (or 0 if it does not exist).

A complete tree of height h is a tree in which all leaves have depth h and every other vertex has two children. The number of vertices of a complete binary tree is then

n = Σ_{j=0}^{h} 2^j = 2^{h+1} − 1

and hence h ≈ lg₂ n (as n → +∞). Complete (or almost complete) trees thus contain a large number of nodes while having small height. Conversely, the binary tree with n nodes of maximum height is the one in which every internal node has exactly one child. In this case the height of the tree is clearly h = n − 1.
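The pre-order and post-order schemes above, applied to the inductive representation ⟨r, T1, …, Tm⟩, can be sketched in Python, encoding an ordered tree as a pair (root, list of subtrees) (a minimal sketch; visita is any callback applied to each visited node):

```python
def pre_ordine(T, visita):
    # Visit the root first, then each subtree in order.
    r, figli = T
    visita(r)
    for Ti in figli:
        pre_ordine(Ti, visita)

def post_ordine(T, visita):
    # Visit each subtree in order first, then the root.
    r, figli = T
    for Ti in figli:
        post_ordine(Ti, visita)
    visita(r)
```

On the tree rooted at 4 with ordered subtrees ⟨2, 1, 3⟩ and ⟨5, 6, 7, 8⟩, pre-order visits 4, 2, 1, 3, 5, 6, 7, 8, while post-order visits 1, 3, 2, 6, 7, 8, 5, 4.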
Binary trees, too, can be traversed in pre-order or post-order. However, a typical method for visiting the nodes of a binary tree is the in-order (symmetric) traversal: first visit the left subtree, then the root, then the right subtree. The corresponding scheme can be described as follows.

In-order traversal of the binary tree B
Let r be the root of B;
1) if r has a left child then traverse the left subtree of r;
2) visit r;
3) if r has a right child then traverse the right subtree of r.

Anticipating the topic of the next chapter, we present a recursive procedure that visits the nodes of a binary tree, assigning to each vertex its ordinal number in the in-order traversal. The input of the algorithm is a binary tree with n vertices, represented by a pair of vectors sin and des of n components, defined as above, and by an integer r representing the root. The output is the vector N of n components in which N[v] is the ordinal number of node v. The algorithm consists of a main program and of the procedure IN ORDINE, in which the parameters c, sin, des and N are global variables.

begin
  c := 1
  IN ORDINE(r)
end

Procedura IN ORDINE(x)
  if sin[x] ≠ 0 then IN ORDINE(sin[x])
  N[x] := c
  c := c + 1
  if des[x] ≠ 0 then IN ORDINE(des[x])

Exercises

1) Prove that every tree with n nodes has n − 1 edges.
2) Show that the height h of every binary tree with n nodes satisfies the relation ⌊log₂ n⌋ ≤ h ≤ n − 1. Exhibit a binary tree with n nodes in which h = ⌊log₂ n⌋ and one in which h = n − 1.
3) A binary tree is called locally complete if every internal node has two children. Prove that every locally complete binary tree has an odd number of nodes. Can such a tree have an even number of leaves?
4) Prove that if a binary tree of height h is locally complete and has m leaves, then h + 1 ≤ m ≤ 2^h.
In which case do we have m = h + 1, and in which m = 2^h?

5) In a rooted tree the path length is defined as the sum of the depths of the nodes. Show that in every binary tree with n nodes the path length L satisfies the relation

Σ_{k=1}^{n} ⌊log₂ k⌋ ≤ L ≤ n(n − 1)/2.

6) Consider the pre-order numbering of a complete binary tree and suppose that an internal node v has ordinal number i. What is the ordinal number of the left child of v, and what is that of the right child?

7) We have seen how an ordered tree can be represented by a family of adjacency lists L(v), one for each node v, or by the vectors P, F, S giving respectively the parent, the first child and the next sibling of each vertex. Using the operations defined on lists and vectors, describe a procedure that converts the first representation into the second, and vice versa.

8) Assuming the uniform cost criterion, determine the order of magnitude of the computation time required by the algorithm described in the previous exercise as the number n of nodes grows.

9) We say that two ordered trees T1 = ⟨V1, E1⟩ and T2 = ⟨V2, E2⟩ are isomorphic, and write T1 ≡ T2, if there exists a bijection f : V1 → V2 such that, for every pair of nodes v, w ∈ V1:
- (v, w) ∈ E1 ⇔ (f(v), f(w)) ∈ E2;
- w is the j-th child of v in T1 ⇔ f(w) is the j-th child of f(v) in T2.
Prove that ≡ is an equivalence relation on the set of ordered trees.

10) Continuing the previous exercise, we call an unlabeled ordered tree an equivalence class of the isomorphism relation defined above. Graphically it can be represented as an ordered tree without node names (two unlabeled ordered trees are therefore distinguished only by the shape of the tree). Draw all the unlabeled ordered trees with at most 4 nodes.
Give an analogous definition for unlabeled binary trees and draw all trees of this kind with at most 3 nodes.

11) Prove that for every n ∈ IN the number of unlabeled binary trees with n nodes equals the number of unlabeled ordered trees with n + 1 nodes.

4.7 Example: breadth-first graph traversal

We now show with an example how an algorithm can be described using some of the data structures introduced in the previous sections, together with the corresponding operations. The use of these tools makes it possible to design and describe a procedure in a simple and concise way, highlighting the organization of the data and the method adopted to solve the problem. In this way an algorithm can be described at a high level, neglecting implementation details and bringing out its general scheme, which often reduces to the execution of a sequence of operations on data structures. Once this scheme is fixed, one can then choose how to implement the data structures used and the procedures performing the corresponding operations.

As an example we present a classical graph-exploration algorithm based on the use of a queue. The problem is defined as follows. Given a connected undirected graph, we want to visit its nodes, one after the other, starting from an assigned node that we call the source. The order of visit depends on the distance from the source: once the source has been visited, we first visit all adjacent nodes, then the nodes at distance 2, and so on, until all the vertices of the graph have been considered. This kind of visit is called breadth-first traversal and is one of the two fundamental methods for exploring a graph (the other, called depth-first traversal, will be presented in the next chapter). It can easily be extended to directed graphs and, with obvious modifications, to graphs that are not connected.
A natural way to carry out a breadth-first traversal is to keep a sequence of nodes in a queue, visiting each time the vertex at the front and inserting at the opposite end the adjacent nodes that have not yet been reached. It will therefore be necessary to mark the nodes of the graph so as to record those that have already been inserted into the queue; initially the source node is the only vertex in the queue, and hence the only one marked. During the visit the algorithm naturally determines a spanning tree of the graph, obtained by considering, for every node v (other than the source), the edge through which v was reached for the first time. It is thus a tree rooted at the source, formed by the edges that, leaving a node just visited, lead to the new vertices to be inserted into the queue.

Formally, let G = ⟨V, E⟩ be a connected undirected graph and let s ∈ V be an arbitrary node (the source). For simplicity we represent G by a family of adjacency lists; for every v ∈ V, we denote by L(v) the list of the vertices adjacent to v. The algorithm visits the nodes of G in breadth-first order and outputs the list U containing the edges of the spanning tree produced. We denote by Q the queue maintained by the algorithm; the vertices are initially marked new, and the mark is changed as soon as they are inserted into Q. The marking can easily be implemented by means of a vector.
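The queue-based method just described can be sketched in Python (a minimal sketch; collections.deque plays the role of the queue Q, the dictionary nuovo implements the marking vector, and the function returns both the visit order and the tree edges U):

```python
from collections import deque

def ampiezza(L, s):
    """Breadth-first traversal from source s.
    L maps each node to its adjacency list; returns (visit order, tree edges U)."""
    nuovo = {v: True for v in L}   # marking: True = still "new"
    nuovo[s] = False               # the source is marked as soon as it enters Q
    Q = deque([s])
    visita, U = [], []
    while Q:                       # while IS EMPTY(Q) = 0
        v = Q.popleft()            # FRONT followed by DEQUEUE
        visita.append(v)           # visit v
        for w in L[v]:
            if nuovo[w]:
                nuovo[w] = False
                U.append((v, w))   # edge of the spanning tree
                Q.append(w)        # ENQUEUE
    return visita, U
```

On a small undirected graph stored with both directions of each edge, e.g. a–b, a–c, b–c, c–d, the traversal from a visits a, b, c, d and returns the tree edges (a, b), (a, c), (c, d).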
The algorithm is described by the following procedure:

Procedure Ampiezza(G, s)
begin
  U := Λ
  mark s as old
  Q := ENQUEUE(Λ, s)
  for v ∈ V do if v ≠ s then mark v as new
  while IS EMPTY(Q) = 0 do
    begin
      v := FRONT(Q)
      Q := DEQUEUE(Q)
      visit v
      for w ∈ L(v) do
        if w is marked new then
          begin
            mark w as old
            U := INSERISCI IN TESTA(U, {v, w})
            Q := ENQUEUE(Q, w)
          end
    end
end

Note that during the execution of the algorithm every node v ∈ V can be in one of the following conditions:
- v has already been visited; in this case it does not belong to Q and is marked old;
- v has not been visited but is adjacent to a visited node; in this case it belongs to Q and is marked old;
- v satisfies neither of the previous conditions and is therefore still marked new.

Example 4.1 Let us apply the algorithm just described to the graph defined by the following figure, assuming that a is the source node:
Exercises
1) For which graphs with n nodes does the breadth-first traversal algorithm require the maximum number of memory cells to maintain the queue Q, and for which the minimum?
2) Describe a breadth-first traversal algorithm for directed graphs. On which directed graphs does the algorithm produce a breadth-first spanning tree?
3) Describe an algorithm to determine the distances of all nodes from a source in a connected undirected graph. Carry out the analysis of the running time and of the memory space used.

Chapter 5

Recursive procedures

The use of recursive procedures often makes it possible to describe an algorithm in a simple and concise way, highlighting the technique adopted to solve the problem and thus easing the design phase. Moreover, the analysis is in many cases simplified, since the evaluation of the running time reduces to the solution of recurrence equations. This chapter is devoted to the analysis of procedures of this kind; first of all we want to show how, in general, they can be transformed into iterative programs, i.e. programs that perform no recursive calls and are directly implementable on RASP (or RAM) machines. A particular merit of this translation is that it makes explicit the amount of memory space required by recursion. Among the algorithms that can easily be described by recursive procedures, binary search and the algorithms for exploring trees and graphs are of particular interest.

5.1  Analysis of recursion

A procedure that calls itself, directly or indirectly, is said to be recursive. Consider for example the following problem: determine the maximum number of regions into which n straight lines divide the plane. Denoting this number by p(n), the design of a recursive algorithm to compute p(n) is based on the following properties:
1. one line divides the plane into two parts, i.e. p(1) = 2;
2.
Let p(n) be the number of parts into which n lines divide the plane. Adding a new line, it is easy to see that it is intersected by the previous ones in at most n points, creating at most n + 1 new parts, as can be observed in Figure 5.1. Hence:

    p(n + 1) = p(n) + n + 1

We thus obtain the following recursive procedure:

Procedura P(n)
    if n = 1 then return 2
    else
        x := P(n − 1)
        return (x + n)

This design technique therefore tries to express the value of a function on an input in terms of the values of the same function on smaller inputs.

[Figure 5.1: New parts of the plane created by a line r crossing the lines r1, r2, r3, …, rn.]

Many problems lend themselves naturally to being solved with recursive procedures, yielding solution algorithms that are in general simple and clear. One difficulty that arises is tied to the fact that the RAM (or RASP) machine model is not able to execute recursive algorithms directly. It follows that it is of great interest to find techniques for translating recursive procedures into RAM code and to develop methods allowing the complexity measures of the translated code's execution to be evaluated simply by analysing the recursive procedures themselves. Here we outline a simple and direct technique for implementing recursion on RASP machines, showing at the same time that the problem of estimating the running time of the resulting program can be reduced to the solution of recurrence equations. This method of translating recursive procedures into purely iterative programs (in which, that is, recursion is not allowed) is completely general and is, in broad outline, the same process applied by compilers, which indeed translate programs written in a high-level language into machine language.
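The recursive procedure P can be transcribed directly, for instance in Python (a sketch of ours, not the book's code; unrolling the recurrence gives the closed form p(n) = n(n + 1)/2 + 1):

```python
def p(n):
    """Maximum number of regions into which n lines divide the plane:
    p(1) = 2 and p(n) = p(n - 1) + n, mirroring the procedure P above."""
    if n == 1:
        return 2
    x = p(n - 1)
    return x + n
```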
Let us begin by observing that the call of a procedure B by A (here A may be the main program, or a procedure, or the procedure B itself) consists essentially of two operations:
1. passing the parameters from A to B;
2. transferring control from A to B, so as to allow the execution of B to begin while keeping in memory the return point, i.e. the address of the instruction to be executed in procedure A once the execution of B has ended.

There are now two possibilities:
1. the execution of B may terminate; at this point the value of the function computed by B (if B is a function-computing procedure) is passed to A, and the execution of A resumes from the return point;
2. B in turn calls a new procedure.

As a consequence, the last procedure called is the first to finish its execution: this justifies the use of a stack to store the data of all the procedure calls that have not yet finished executing. The elements of the stack are called activation records and consist of blocks of consecutive registers; each procedure call uses an activation record to store the relevant non-global data.

    Activation record: call of A
    ...
    Activation record: main program

Suppose that, as in the previous diagram, procedure A is currently executing and calls procedure B. In this case the execution of B involves the following phases:
1. A new activation record for the call of B, of suitable size, is placed on top of the stack; this record contains:
   (a) pointers to the actual parameters, which are located in A;
   (b) space for the local variables of B;
   (c) the address of the instruction of A to be executed when B finishes its execution (return point);
   (d) if B computes a function, a pointer to a variable of A in which the value of the function computed by B will be stored.
2.
Control is passed to the first instruction of B, thus beginning the execution of B.
3. When B finishes its execution, control returns to A through the following steps:
   (a) if B is a procedure computing a function, its execution ends with a command return E; the value of E is computed and passed to the appropriate variable in the activation record of the call of A;
   (b) the return point is obtained from the activation record of B;
   (c) the activation record of B is removed from the top of the stack; the execution of A can continue.

It is therefore possible to associate with a recursive algorithm an algorithm executable on a RASP (or RAM) that computes the same function. This algorithm will represent the RASP operational semantics of the recursive algorithm. In order to sketch the algorithm, we first describe the activation record of a procedure A of the form:

Procedura A
    ...
    z := B(a)
(1) Instruction
    ...

If, in the course of its execution, A calls B, the activation record of B is as shown in Figure 5.2.

[Figure 5.2: Example of an activation record. On top of the stack lies the record of the call of B, containing the local variables of B, a pointer from the formal parameter to the actual parameter in A, the return point (1), and a pointer for the result; below it lies the activation record of the call of A with its local variables.]
The core of the iterative simulation of a recursive procedure is then outlined in the following algorithm scheme:

P := name of the main program
R := activation record of P
I := address of the first instruction of P
Pila := PUSH(∅, R)
repeat
    N := name of the procedure whose activation record is in TOP(Pila)
    ist := instruction of N with address I
    while ist is neither a halt instruction nor a procedure call do
        begin
            execute ist
            ist := next instruction
        end
    if ist is a call of A then
        R := activation record of the call of A
        I := address of the first instruction of A
        Pila := PUSH(Pila, R)
    if ist is a halt instruction then
        evaluate the result, sending it to the calling program
        I := return address
        Pila := POP(Pila)
until Pila = ∅

As an example, consider the program:

    read(m)
    z := FIB(m)
(W) write(z)

This program calls the procedure FIB:

Procedura FIB(n)
    if n ≤ 1 then return n
    else
        a := n − 1
        x := FIB(a)
(A)     b := n − 2
        y := FIB(b)
(B)     return (x + y)

The procedure FIB contains two possible calls to itself (x := FIB(a), y := FIB(b)). The labels (W), (A), (B) are the possible return points. It is easy to see that on input m the main program prints the m-th Fibonacci number f(m), where f(0) = 0, f(1) = 1 and f(n) = f(n − 1) + f(n − 2) for n > 1.

[Figure 5.3: Stack contents for the procedure FIB on input 3 during the first 4 execution steps: the activation record of the main program (variables m, z) lies at the bottom, with the records of the successive calls of FIB (each holding n, a, b, x, y and its return point) stacked above it.]

Let us now tackle the problem of analysing recursive algorithms: given a recursive algorithm, estimate the running time of its execution on a RASP (or RAM) machine.
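To make the simulation concrete, here is a hedged Python sketch (our own encoding, not the book's: each activation record is a small list holding the argument n, the saved local x, and a return point — 0 for "just called", 1 for "back from FIB(n−1)", 2 for "back from FIB(n−2)"):

```python
def fib_iterative(m):
    """Iterative simulation of the recursive FIB procedure using an
    explicit stack of activation records."""
    stack = [[m, 0, 0]]   # activation record of the initial call
    result = None
    while stack:
        rec = stack[-1]
        n, x, point = rec
        if n <= 1:
            result = n                    # base case: return n
            stack.pop()
        elif point == 0:
            rec[2] = 1                    # resume at label (A)
            stack.append([n - 1, 0, 0])   # call FIB(n-1)
        elif point == 1:
            rec[1] = result               # x := FIB(n-1)
            rec[2] = 2                    # resume at label (B)
            stack.append([n - 2, 0, 0])   # call FIB(n-2)
        else:
            result = rec[1] + result      # return x + y
            stack.pop()
    return result
```

Popping a record and passing `result` to the record below it mirrors steps 3(a)–(c) of the call mechanism described above.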
This estimate can easily be carried out without going into the details of the iterative execution, simply by analysing the recursive procedure itself. One proceeds as follows, assuming the algorithm is described by M procedures P1, P2, …, PM, including the main program:
1. With every index k, 1 ≤ k ≤ M, one associates the (unknown) function Tk(n), denoting the running time of Pk as a function of some parameter n of the input.
2. One expresses Tk(n) as a function of the times of the procedures called by Pk, evaluated at the appropriate parameter values; in this respect, observe that the execution time of the instruction z := Pj(a) is the sum of the execution time of procedure Pj on input a, of the time for the call of Pj (needed to set up the activation record) and of the return time.

In this way one obtains a system of M recurrence equations which, once solved, allows Tk(n) to be estimated for all k (1 ≤ k ≤ M), and in particular for the main program. The next chapter will be devoted to the study of recurrence equations.

As an example, let us estimate the running time of the procedure FIB(n) under the uniform cost criterion. For simplicity, we assume the machine can perform a call in one time unit (plus one more unit to receive the result). Let T_FIB(n) be the running time (on a RAM!) of the recursive procedure FIB on input n; this value can be obtained by analysing the times required by the individual instructions:

Procedura FIB(n)
    if n ≤ 1                    (time: 2)
    then return n               (time: 1)
    else
        a := n − 1              (time: 3)
        x := FIB(a)             (time: 2 + T_FIB(n − 1))
(A)     b := n − 2              (time: 3)
        y := FIB(b)             (time: 2 + T_FIB(n − 2))
(B)     return (x + y)          (time: 4)

The following recurrence equation then holds:

    T_FIB(0) = T_FIB(1) = 3,
    T_FIB(n) = 16 + T_FIB(n − 1) + T_FIB(n − 2)    for every n ≥ 2.
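The recurrence itself can be evaluated mechanically; the following small Python check (ours, not part of the text) makes explicit that T_FIB satisfies the same additive structure as the Fibonacci numbers and hence grows exponentially:

```python
def t_fib(n):
    """T_FIB(n) from the recurrence derived above:
    T_FIB(0) = T_FIB(1) = 3, and
    T_FIB(n) = 16 + T_FIB(n-1) + T_FIB(n-2) for n >= 2."""
    if n <= 1:
        return 3
    return 16 + t_fib(n - 1) + t_fib(n - 2)
```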
Exercises
1) Evaluate the order of magnitude of the memory space required by the procedure FIB on input n, assuming the uniform cost criterion.
2) Carry out the previous exercise assuming the logarithmic cost criterion.

5.2  Tail recursion

The general method for the iterative translation of recursion described in the previous section can be simplified and made more efficient when the call to a procedure is the last instruction executed by the calling program. In this case, in fact, once the execution of the called procedure has ended, there is no need to return control to the calling one. To describe the iterative translation of this form of recursion, denote by A and B the calling and the called procedure respectively, and suppose the call of B is the last instruction of program A. We can then perform the call to B simply by replacing the activation record of A with that of B on the stack, suitably updating the return address to the procedure that called A; control will thus pass to the latter once the execution of B has ended. In this way the number of activation records kept on the stack is reduced, saving running time and memory space and hence making the implementation more efficient.

This kind of recursion is called tail recursion. A particularly simple case occurs when the algorithm consists of a single procedure that calls itself in its last instruction. In this situation there is no need even to maintain a stack to implement the recursion, since it is never necessary to reactivate the calling program once the called one has terminated. The following procedure scheme is a typical example of this case.
Consider a procedure F, depending on a parameter x, defined by the following program:

Procedura F(x)
    if C(x) then D
    else
        begin
            E
            y := g(x)
            F(y)
        end

Here C(x) is a condition depending on the value of x, while E and D are suitable blocks of instructions. The function g(x) determines a new value of the input parameter for F, of smaller size than that of x. Then, if a is any value for x, the call F(a) is equivalent to the following procedure:

begin
    x := a
    while ¬C(x) do
        begin
            E
            x := g(x)
        end
    D
end

5.2.1  Binary search

Consider the following search problem:

Instance: a sorted array B of n integers and an integer a;
Solution: an integer k ∈ {1, 2, …, n} such that B[k] = a, if such an integer exists, and 0 otherwise.

The problem can be solved by applying the well-known binary search procedure. This consists in comparing a with the element of the array B of index k = ⌊(n + 1)/2⌋. If the two elements are equal, the integer k is returned; otherwise, the search continues in the left subarray, (B[1], …, B[k − 1]), or in the right one, (B[k + 1], …, B[n]), according to whether a < B[k] or a > B[k]. The process is naturally defined by the recursive procedure Ricercabin(i, j) described below. It performs the search among the components of B lying between the indices i and j, 1 ≤ i ≤ j ≤ n, treating the parameters a and B as global variables.

Procedura Ricercabin(i, j)
    if j < i then return 0
    else
        begin
            k := ⌊(i + j)/2⌋
            if a = B[k] then return k
            else if a < B[k] then return Ricercabin(i, k − 1)
            else return Ricercabin(k + 1, j)
        end

The algorithm solving the problem thus reduces to the call Ricercabin(1, n). Its analysis is very simple since, in the worst case, at every recursive call the size of the array on which the search takes place is halved.
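A direct Python transcription of Ricercabin (a sketch of ours; we keep the text's 1-based indexing by padding position 0 of the array):

```python
def ricercabin(B, a, i, j):
    """Recursive binary search over B[i..j] (1-based, B sorted).
    Returns an index k with B[k] = a, or 0 if a does not occur."""
    if j < i:
        return 0
    k = (i + j) // 2
    if a == B[k]:
        return k
    if a < B[k]:
        return ricercabin(B, a, i, k - 1)   # search the left subarray
    return ricercabin(B, a, k + 1, j)       # search the right subarray
```

Both recursive calls are in tail position, which is what makes the stack-free iterative version below possible.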
Assuming the uniform cost criterion, the algorithm therefore terminates in O(log n) steps. Let us now describe an iterative version of the same process. Observe that the program just described performs tail recursion, and this is the only recursive call the procedure executes. Applying the method defined above, we then obtain the following iterative program, in which no stack is used.

begin
    i := 1
    j := n
    out := 0
    while i ≤ j ∧ out = 0 do
        begin
            k := ⌊(i + j)/2⌋
            if a = B[k] then out := k
            else if a < B[k] then j := k − 1
            else i := k + 1
        end
    return out
end

Observe that the memory space required by this last procedure, excluding that needed to store the input array B, is O(1) under the uniform criterion.

5.3  Tree traversal

The procedures for visiting the nodes of an ordered tree described in the previous chapter are easily defined by recursive procedures admitting simple iterative translations. These algorithms have an intrinsic importance of their own, since they are used in numerous applications, given the wide use of trees to represent hierarchically organised sets of data.

Suppose we want to visit in preorder the nodes of an ordered tree T with root r, defined by a family of adjacency lists L(v), one for each vertex v of T. Suppose moreover that T has n nodes, represented by the first n positive integers. We want to associate with each node v the ordinal number of v according to the preorder numbering (i.e. the number of nodes visited before v, increased by 1). The algorithm, in its recursive version, is defined by the initialisation of a global variable c, indicating the ordinal number of the current node to be visited, and by the call of the procedure Visita(r).
begin
    c := 1
    Visita(r)
end

The procedure Visita is given by the following recursive program which uses, as a global variable, the array N of n components and, for each node v, computes in N[v] the ordinal number of v.

Procedura Visita(v)
begin
    N[v] := c
    c := c + 1
    for w ∈ L(v) do Visita(w)
end

We now describe the iterative translation of the algorithm, applying the method illustrated in the previous section. The process is based on the management of the stack S, which simply stores a list of nodes to keep track of the recursive calls. In this case, in fact, the only information contained in each activation record is the name of the visited node.

begin
    v := r
    c := 1
    S := ∅
(1) N[v] := c
    c := c + 1
(2) if IS EMPTY(L(v)) = 0 then
        begin
            w := TESTA(L(v))
            L(v) := TOGLI IN TESTA(L(v))
            S := PUSH(S, v)
            v := w
            go to (1)
        end
    else if IS EMPTY(S) = 0 then
        begin
            v := TOP(S)
            S := POP(S)
            go to (2)
        end
end

Note that the instruction with label (2) represents the return point of each recursive call, while the one with label (1) corresponds to the beginning of the recursive procedure Visita(v).

This iterative version can however be improved by taking tail calls into account. Indeed, in the recursive procedure the call Visita(w), when w is the last child of node v, is the last instruction of the program. We can thus modify the iterative version of the algorithm, obtaining the following program (in which the go to commands have also been eliminated).

begin
    v := r
    N[v] := 1
    c := 2
    S := ∅
    out := 0
    repeat
        while IS EMPTY(L(v)) = 0 do
            begin
                w := TESTA(L(v))
                L(v) := TOGLI IN TESTA(L(v))
                if IS EMPTY(L(v)) = 0 then S := PUSH(S, v)
                v := w
                N[v] := c
                c := c + 1
            end
        if IS EMPTY(S) = 0 then
            begin
                v := TOP(S)
                S := POP(S)
            end
        else out := 1
    until out = 1
end

It is easy to verify that the previous algorithms allow an ordered tree of n nodes to be visited in Θ(n) steps, assuming the uniform criterion.
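The stack-based preorder numbering can be sketched in Python as follows (a sketch of ours; each "activation record" is encoded as a pair holding a node and the index of its next child to be considered, which plays the role of the return point):

```python
def preorder_numbers(L, r):
    """Preorder numbering of an ordered tree given by adjacency lists
    L (children of each node, in order) with root r. Returns a dict N
    mapping each node to its preorder number, using an explicit stack
    instead of recursion."""
    N = {r: 1}
    c = 2
    stack = [(r, 0)]   # (node, index of next child to visit)
    while stack:
        v, i = stack.pop()
        if i < len(L[v]):
            w = L[v][i]
            stack.append((v, i + 1))   # return point within v's list
            N[w] = c                   # number w on first visit
            c += 1
            stack.append((w, 0))       # descend into w
    return N
```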
In the worst case the space required by the stack S is also Θ(n). In the best case, instead, applying the procedure that implements tail recursion, the stack S remains empty and thus requires no space at all. This happens when the input tree consists of a simple path.

Exercises
1) Applying suitable traversal procedures, define an algorithm for each of the following problems having an ordered tree T as instance:
- compute the number of descendants of every node of T;
- compute the height of every node of T;
- compute the depth of every node of T.
2) Consider the following problem:
Instance: an ordered tree T of n nodes and an integer k, 1 ≤ k ≤ n;
Solution: the node v of T representing the k-th vertex of T according to the postorder numbering.
Define a recursive procedure for its solution that does not perform a complete traversal of the input tree.
3) Define a non-recursive procedure to solve the problem defined in the previous exercise.
4) Define a recursive procedure that traverses a binary tree visiting every internal node both before and after having visited its children, if any.
5) Describe a non-recursive procedure to solve the problem posed in the previous exercise.
6) Taking into account the algorithm defined in Section 4.6.3, define a non-recursive procedure to solve the following problem:
Instance: a binary tree T of n nodes, represented by two arrays sin and des of size n;
Solution: for every node v of T, the ordinal number of v according to the inorder numbering.
On which inputs does the stack managed by the algorithm remain empty?
7) Assuming the nodes of T are represented by the first n positive integers, determine the order of magnitude of the running time and of the memory space required by the previous procedure under the logarithmic cost criterion.
8) Describe a procedure to check whether two ordered trees are isomorphic.

5.4  Graph traversal

Many classical graph algorithms are based on processes that visit all the nodes one after the other. Two main strategies exist to perform this visit, called respectively depth-first search and breadth-first search. They give rise to very common basic procedures that are of considerable importance in a variety of applications. In the previous chapter we already described a procedure for the breadth-first traversal of a graph. We now describe a procedure for depth-first traversal. From a methodological point of view this process can be expressed naturally by a recursive procedure, which we can therefore analyse by applying the methods presented in this chapter.

5.4.1  Depth-first search

Intuitively, in a depth-first traversal each connected component of the graph is visited starting from a node s and following a path, as long as possible, until a vertex is reached all of whose adjacent nodes have already been visited; at this point one goes back up the path to the first node having an adjacent vertex not yet visited, and the visiting process starts again along a new path with the same criterion. The traversal ends when all the paths set aside in the various phases of the visit have been considered. Note that the union of the edges traversed in this way forms a rooted tree connecting all the nodes of the connected component considered, whose edges are also edges of the original graph. Thus the algorithm automatically builds a spanning forest of the input graph, which we call a depth-first spanning forest. If the graph is connected, this reduces to a single tree and we then speak of a depth-first spanning tree.
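Before the formal description, the strategy just outlined can be sketched in Python (a hedged sketch of ours; the names and the dictionary-of-lists graph encoding are not the book's):

```python
def dfs_forest(adj):
    """Depth-first traversal of an undirected graph given as adjacency
    lists; returns the visit order and the edges U of the depth-first
    spanning forest, one tree per connected component."""
    old = set()
    order, U = [], []

    def profondita(v):
        order.append(v)          # visit v
        old.add(v)               # mark v as old
        for w in adj[v]:
            if w not in old:     # w still marked new
                U.append((v, w)) # tree edge of the forest
                profondita(w)

    for v in adj:                # main program: one call per component
        if v not in old:
            profondita(v)
    return order, U
```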
The algorithm can be formally described as follows. Consider an undirected graph G = (V, E), represented by adjacency lists. Using the usual notation, denote by L(v) the adjacency list of node v, for each v ∈ V. The algorithm visits the nodes of the graph according to the criterion described above and builds the corresponding spanning forest, outputting the list U of its edges. The process can be described by a main program that calls a recursive procedure to visit the nodes of each connected component and determine the edges of the corresponding spanning tree. Initially all the nodes are suitably marked so that the vertices not yet visited can later be recognised.

begin
    U := ∅
    for v ∈ V do mark v as new
    for v ∈ V do
        if v is marked new then Profondita(v)
end

Procedure Profondita(v)
begin
    visit node v
    mark v as old
    for w ∈ L(v) do
        if w is marked new then
            begin
                U := INSERISCI IN TESTA({v, w}, U)
                Profondita(w)
            end
end

The algorithm thus partitions the edge set E of the graph G into two subsets: the edges that end up in the output list U, and hence belong to the spanning forest built, and those that do not. It is easy to verify that every edge not in U at the end of the procedure must join two nodes one of which is a descendant of the other in some tree of the forest (that is, the two vertices must lie on the same path from the root to one of the two nodes).

Example 5.1  Let us apply the algorithm to the graph G described in the following figure.

[Figure: an undirected graph on the vertices a, b, c, d, e, f, g, h, i, l, m.]

Suppose that in the main program the nodes are considered in alphabetical order, and that the vertices in each adjacency list are arranged in the same order.
Then the spanning forest obtained has the nodes a and d as roots and consists of the following trees, to which we have added (dashed) the edges of G that are not part of the computed forest.

[Figure: the depth-first spanning forest, with one tree rooted at a and one rooted at d; the edges of G not belonging to the forest are drawn dashed.]

The analysis of the algorithm is simple. Let us again assume the uniform criterion and suppose the visit of each node takes constant time. If the input graph has n nodes and m edges, then Θ(n + m) steps are executed. Indeed, the cost of the algorithm is given by the initial markings of the nodes, by the calls to the procedure Profondita performed by the main program, and by the overall cost of each of these. Clearly the first two quantities are Θ(n). The third is instead determined by the sum of the lengths of the lists L(v), since every call Profondita(v) performs a constant number of operations for each element of L(v); this sum being equal to twice the number of edges, we obtain a cost Θ(m). Observe that for sparse graphs, i.e. graphs with a small number of edges, for which we may assume m = O(n), the running time turns out to be linear in the number of nodes.

As for memory space, observe that, beyond the O(n + m) space needed to store the input graph, a certain number of cells must be reserved for the stack implementing the recursion. The latter, in the worst case, can reach a length proportional to the number of nodes, and hence an amount O(n).

We now describe the iterative version of the previous algorithm, in which the stack S implementing the recursion appears explicitly; S in fact stores (in the appropriate order) the nodes already visited whose adjacent vertices have not yet all been considered.
The main program is entirely similar to the previous one and is obtained simply by replacing the call to the procedure Profondita(v) with a call to the new procedure, which we shall call Profondita_it(v).

Procedure Profondita_it(v)
begin
    visit node v
    mark v as old
    S := PUSH(∅, v)
    repeat
        while L(v) ≠ ∅ do
            begin
                w := TESTA(L(v))
                L(v) := TOGLI IN TESTA(L(v))
                if w is marked new then
                    begin
                        visit node w
                        mark w as old
                        U := INSERISCI IN TESTA({v, w}, U)
                        S := PUSH(S, w)
                        v := w
                    end
            end
        S := POP(S)
        if S ≠ ∅ then v := TOP(S)
    until S = ∅
end

Exercises
1) Define an algorithm for the depth-first traversal of directed graphs. In which case does the algorithm produce a spanning tree?
2) Define an iterative version of the depth-first traversal algorithm that takes tail recursion into account. For which inputs (graphs of n nodes) is the space occupied by the stack O(1)?

Chapter 6

Recurrence equations

Many classical algorithms can be described by recursive procedures. Consequently, the analysis of their running times reduces to the solution of one or more recurrence equations, in which the n-th term of a sequence is expressed as a function of the preceding ones. This chapter and the next are devoted to the presentation of the main techniques used to solve equations of this kind, or at least to obtain an approximate solution.

6.1  Analysis of recursive procedures

Suppose we have to analyse, from the complexity standpoint, an algorithm defined by a set of procedures P1, P2, …, Pm that call one another recursively. The goal of the analysis is to estimate, for every i = 1, 2, …, m, the function Ti(n) representing the running time of the i-th procedure on inputs of size n. If every procedure calls the others on inputs of smaller size, it will be possible to express Ti(n) as a function of the values Tj(k) such that j ∈ {1, 2, …
, m} and k < n. To fix ideas, suppose we have a single procedure P that calls itself on inputs of smaller size. Let T(n) be the running time required by P on inputs of size n (under the worst-case or the average-case hypothesis). In general it will be possible to determine suitable functions f1, f2, …, fn, …, in 1, 2, …, n, … variables respectively, such that

    T(n) = f_n(T(n − 1), …, T(2), T(1), T(0))        (1)

or at least such that

    T(n) ≤ f_n(T(n − 1), …, T(2), T(1), T(0))        (2)

for every n ≥ 1. Relations of the above kind are called recurrence relations and, in particular, those of type (1) are called recurrence equations. Observe that, given the boundary condition T(0) = a, there exists a unique function T(n) satisfying (1). The analysis of a recursive algorithm thus involves two phases:
1. deriving recurrence relations containing as unknown the function T(n) to be estimated;
2. solving those recurrence relations.

Consider for example the problem of evaluating the running time of the following procedures, assuming the uniform cost criterion:

Procedure B(n)
begin
    S := 0;
    for i = 0, 1, …, n do S := S + A(i);
    return S;
end

Procedure A(n)
    if n = 0 then return 0;
    else
        u := n − 1;
        b := n + A(u);
        return b;

First observe that procedure B calls A, while A calls itself. For simplicity, assume the execution time of every high-level instruction equal to c, and denote by T_B(n) and T_A(n) respectively the running times of B and A on input n. We then obtain the following equations:

    T_B(n) = c + Σ_{i=0}^{n} (c + T_A(i)) + c

    T_A(n) = 2c                       if n = 0
    T_A(n) = c + (c + T_A(n − 1))     if n ≥ 1

The analysis problem is thus reduced to the solution of the recurrence equation for the values T_A(n), n ∈ ℕ.
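These equations can be checked numerically; the following Python fragment (ours, not part of the text) evaluates T_A and T_B for c = 1 and confirms the closed form T_A(n) = 2c(n + 1) obtained by unrolling the recurrence:

```python
def t_a(n, c=1):
    """T_A(n) from the recurrence above (uniform cost criterion,
    each high-level instruction costing c)."""
    if n == 0:
        return 2 * c
    return c + (c + t_a(n - 1, c))

def t_b(n, c=1):
    """T_B(n) = c + sum_{i=0}^{n} (c + T_A(i)) + c."""
    return c + sum(c + t_a(i, c) for i in range(n + 1)) + c
```

Since T_A(n) grows linearly, T_B(n) is a sum of linearly growing terms and hence grows quadratically in n.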
The development of techniques for solving recurrence equations or relations is therefore an important preliminary tool for the analysis of algorithms, and the next sections are devoted to the presentation of the main methods used.

Exercise
Write the recurrence equations for the running times of the procedures A and B defined above, assuming the logarithmic cost criterion.

6.2  Upper bounds

Let us begin by presenting a simple technique for tackling the following problem: given a recurrence relation

    T(n) ≤ f_n(T(n − 1), …, T(2), T(1), T(0))    (n ≥ 1)
    T(0) = a

and given a function g : ℕ → ℝ+, decide whether T(n) ≤ g(n) for every n ∈ ℕ. A partial answer can be obtained from the following property.

Proposition 6.1  Consider a function T(n) satisfying the relation

    T(n) ≤ f_n(T(n − 1), …, T(2), T(1), T(0))    (n ≥ 1)
    T(0) = a

where, for every n ≥ 1, the function f_n(x1, x2, …, xn) is monotone non-decreasing in every variable. Suppose moreover that, for a suitable function g : ℕ → ℝ+, we have f_n(g(n − 1), …, g(0)) ≤ g(n) for every n ≥ 1 and g(0) = a. Then T(n) ≤ g(n).

Proof.  We argue by induction on n ∈ ℕ. For n = 0 the property holds by hypothesis. Suppose T(k) ≤ g(k) for every k < n; we show that T(n) ≤ g(n). Indeed, by the monotonicity of f_n:

    T(n) ≤ f_n(T(n − 1), …, T(0)) ≤ f_n(g(n − 1), …, g(0)) ≤ g(n)

Observe that if T(n) is a function satisfying the recurrence relation defined in the previous proposition then, for every n ∈ ℕ, T(n) ≤ X(n), where X(n) is defined by the associated recurrence equation:

    X(n) = a                             if n = 0
    X(n) = f_n(X(n − 1), …, X(0))        if n ≥ 1

In this way we can reduce the study of recurrence relations to that of recurrence equations. A further application of Proposition 6.1 is given by the following corollary, which will be used in the analysis of median-finding procedures (Section 8.6).
Corollary 6.2. Given two constants $\alpha$ and $\beta$ such that $0 < \alpha + \beta < 1$, let $T(n)$ be a function satisfying the relation

$$\begin{cases} T(n) \le T(\lfloor \alpha n \rfloor) + T(\lfloor \beta n \rfloor) + n & \text{if } n \ge 1 \\ T(0) = 0 & \text{if } n = 0 \end{cases}$$

Then there exists a constant $c$ such that $T(n) \le c\,n$.

Proof. Since $c \cdot 0 = 0$ for every $c$, in order to apply the previous proposition it suffices to find a constant $c$ such that

$$c\,\lfloor \alpha n \rfloor + c\,\lfloor \beta n \rfloor + n \le c\,n \qquad \text{for every } n \ge 1$$

This inequality is satisfied whenever

$$c \ge \frac{1}{1 - \alpha - \beta}$$

Note that in this way we have proved that $T(n) = O(n)$.

6.3 The summing-factors method

With this section we begin the study of the recurrence equations most common in algorithm analysis. In general our goal is an asymptotic evaluation of the solution or, more modestly, an estimate of its order of growth. Nevertheless, various methods developed in the literature make it possible to derive the exact solution of a recurrence. In some cases the recurrence equation is particularly simple and the exact solution can be obtained by iterating the equality directly.

Consider for example the recurrence equation

$$T(n) = \begin{cases} 0 & \text{if } n = 0 \\ T(n-1) + 2n & \text{if } n \ge 1 \end{cases}$$

Since $T(n-1) = T(n-2) + 2(n-1)$ for every $n \ge 2$, substituting this expression into the previous equation yields $T(n) = 2n + 2(n-1) + T(n-2)$. Iterating this substitution for $T(n-2)$, $T(n-3)$ and so on, we obtain

$$T(n) = 2n + 2(n-1) + \cdots + 2 \cdot 1 + T(0) = 2 \sum_{i=1}^{n} i = n(n+1)$$

Example 6.1. We want to determine the exact number of comparisons performed by the binary search procedure described in Section 5.2.1. For simplicity, assume that a comparison between two numbers can yield three outcomes, according to whether the two elements are equal, the first is smaller than the second, or vice versa the second is smaller than the first. Let $n$ be the size of the vector being searched and let $T(n)$ be the number of comparisons performed in the worst case.
The worst case occurs when the procedure performs one comparison and calls itself on a subvector of length $\lfloor n/2 \rfloor$, and this event repeats at every subsequent call. Then $T(n)$ satisfies the equation

$$T(n) = \begin{cases} 1 & \text{if } n = 1 \\ T(\lfloor n/2 \rfloor) + 1 & \text{if } n \ge 2 \end{cases}$$

Recalling the properties of integer parts defined in Chapter 2 and applying the iterative procedure described above, for every integer $n$ such that $2^k \le n < 2^{k+1}$ we obtain

$$T(n) = 1 + T(\lfloor n/2 \rfloor) = 2 + T(\lfloor n/2^2 \rfloor) = \cdots = k + T(\lfloor n/2^k \rfloor) = k + 1$$

Since $k = \lfloor \log_2 n \rfloor$ by definition, we get $T(n) = \lfloor \log_2 n \rfloor + 1$.

Example 6.2. Let

$$T(n) = \begin{cases} 1 & \text{if } n = 0 \\ \dfrac{n+k-1}{n}\, T(n-1) & \text{if } n \ge 1 \end{cases}$$

Unfolding the equation by the method just described we obtain

$$T(n) = \frac{n+k-1}{n}\, T(n-1) = \frac{n+k-1}{n} \cdot \frac{n+k-2}{n-1}\, T(n-2) = \cdots = \frac{n+k-1}{n} \cdot \frac{n+k-2}{n-1} \cdots \frac{k}{1}\, T(0).$$

Hence the solution obtained is

$$T(n) = \binom{n+k-1}{n}$$

In the previous examples we have in fact applied a general method for solving a class of (first-order linear) equations, known as the summing-factors method. It can be described in general as follows. Given two sequences $\{a_n\}$ and $\{b_n\}$, consider the equation

$$T(n) = \begin{cases} b_0 & \text{if } n = 0 \\ a_n T(n-1) + b_n & \text{if } n \ge 1 \end{cases}$$

Unfolding the equation we obtain

$$T(n) = b_n + a_n(b_{n-1} + a_{n-1}T(n-2)) = b_n + a_n b_{n-1} + a_n a_{n-1}(b_{n-2} + a_{n-2}T(n-3)) = \cdots$$
$$= b_n + a_n b_{n-1} + a_n a_{n-1} b_{n-2} + \cdots + \Big(\prod_{j=2}^{n} a_j\Big) b_1 + \Big(\prod_{j=1}^{n} a_j\Big) b_0 = \sum_{i=0}^{n} b_i \prod_{j=i+1}^{n} a_j.$$

The last equality then gives the explicit expression of the solution.

Exercises
1) Recalling Example 2.3, find the solution of the equation

$$T(n) = \begin{cases} 0 & \text{if } n = 0 \\ 2T(n-1) + 2^n & \text{if } n \ge 1 \end{cases}$$

2) Let $T(n)$ be the number of nodes of a complete binary tree of height $n \in \mathbb{N}$. Express $T(n)$ by a recurrence equation and solve it with the method just illustrated.

6.4 Divide-and-conquer equations

An important class of recurrence equations arises in the analysis of divide-and-conquer algorithms, treated in Chapter 10.
Recall that an algorithm of this kind splits a generic input of size $n$ into a certain number $m$ of subinstances of the same problem, each of size (roughly) $n/a$ for some $a > 1$; it then calls itself recursively on these reduced instances and finally recombines the partial results to obtain the desired solution. The running time of such an algorithm is therefore the solution of a recurrence equation of the form

$$T(n) = m\,T\!\left(\frac{n}{a}\right) + g(n)$$

where $g(n)$ is the time needed to recombine the partial results into a single solution. This section is devoted to solving recurrence equations of this form for the most common functions $g(n)$.

Theorem 6.3. Let $m, a, b$ and $c$ be positive reals and suppose $a > 1$. For every $n$ a power of $a$, let $T(n)$ be defined by the equation

$$T(n) = \begin{cases} b & \text{if } n = 1 \\ m\,T(n/a) + b\,n^c & \text{if } n > 1 \end{cases}$$

Then $T(n)$ satisfies the relations

$$T(n) = \begin{cases} \Theta(n^c) & \text{if } m < a^c \\ \Theta(n^c \log n) & \text{if } m = a^c \\ \Theta(n^{\log_a m}) & \text{if } m > a^c \end{cases}$$

Proof. Let $n = a^k$ for a suitable $k \in \mathbb{N}$. Unfolding the recurrence equation we obtain

$$T(n) = b n^c + m\,T\!\left(\frac{n}{a}\right) = b n^c + m b \left(\frac{n}{a}\right)^{\!c} + m^2\,T\!\left(\frac{n}{a^2}\right) = \cdots$$
$$= b n^c \left(1 + \frac{m}{a^c} + \frac{m^2}{a^{2c}} + \cdots + \frac{m^{k-1}}{a^{(k-1)c}}\right) + m^k\,T(1) = b\,n^c \sum_{j=0}^{\log_a n} \left(\frac{m}{a^c}\right)^{\!j}$$

Clearly, if $m < a^c$ the series $\sum_{j=0}^{+\infty} (m/a^c)^j$ converges and hence $T(n) = \Theta(n^c)$. If instead $m = a^c$, the sum above reduces to $\log_a n + 1$ and hence $T(n) = \Theta(n^c \log n)$. Finally, if $m > a^c$, we have

$$\sum_{j=0}^{\log_a n} \left(\frac{m}{a^c}\right)^{\!j} = \frac{(m/a^c)^{\log_a n + 1} - 1}{\frac{m}{a^c} - 1} = \Theta(n^{\log_a m - c})$$

and therefore $T(n) = \Theta(n^{\log_a m})$.

Note that this proof does not use the hypothesis that $n$ is an integer. The result therefore also holds for functions of a real variable $n$, provided they are defined on the powers of $a$.
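The exact unfolded sum in the proof can be compared against the recurrence itself. The sketch below (values $a = 2$, $b = 3$, $c = 2$ chosen arbitrarily; helper names are hypothetical) checks the identity $T(n) = b\,n^c \sum_{j=0}^{\log_a n} (m/a^c)^j$ in all three regimes $m < a^c$, $m = a^c$, $m > a^c$:

```python
import math

def t_rec(n, m, a, b, c):
    # T(1) = b;  T(n) = m*T(n/a) + b*n**c  (n a power of a)
    if n == 1:
        return b
    return m * t_rec(n // a, m, a, b, c) + b * n**c

def t_sum(n, m, a, b, c):
    # unfolded form from the proof: T(n) = b * n**c * sum_{j=0}^{log_a n} (m/a**c)**j
    k = round(math.log(n, a))
    return b * n**c * sum((m / a**c) ** j for j in range(k + 1))

for m in (2, 4, 8):              # the three regimes with a = 2, c = 2 (a**c = 4)
    for k in range(1, 10):
        n = 2 ** k
        assert math.isclose(t_rec(n, m, 2, 3, 2), t_sum(n, m, 2, 3, 2))
```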
The asymptotic expression for $T(n)$ obtained in the previous theorem, together with the corresponding big-O constant, can easily be computed by considering the exact value of the sum

$$\sum_{j=0}^{\log_a n} \left(\frac{m}{a^c}\right)^{\!j}$$

appearing in the proof just presented.

6.4.1 Integer parts

In the previous theorem we solved the recurrence equations only for values of $n$ that are powers of some $a > 1$. We now want to estimate the solutions for an arbitrary integer $n$. In this case the recurrence equations involve the expressions $\lceil \cdot \rceil$ and $\lfloor \cdot \rfloor$, denoting the integer parts of real numbers defined in Chapter 2. Consider for example the Mergesort algorithm described in Section 10.3. It sorts a vector of $n$ elements by splitting it into two parts of size $\lfloor n/2 \rfloor$ and $\lceil n/2 \rceil$ respectively; it then calls itself on the two subvectors and merges the two solutions. One can thus verify that the number $M(n)$ of comparisons performed to sort $n$ elements, for any positive integer $n$, satisfies the recurrence

$$M(n) = \begin{cases} 0 & \text{if } n = 1 \\ M(\lfloor n/2 \rfloor) + M(\lceil n/2 \rceil) + n - 1 & \text{if } n > 1 \end{cases}$$

In general, divide-and-conquer equations involving integer parts can be handled using the following result, which extends the analogous asymptotic evaluation obtained above.

Theorem 6.4. Let $a, b$ and $c$ be positive reals and suppose $a > 1$. Moreover, consider two integers $m_1, m_2 \in \mathbb{N}$ such that $m_1 + m_2 > 0$ and, for every integer $n > 0$, define $T(n)$ by the equation

$$T(n) = \begin{cases} b & \text{if } n = 1 \\ m_1\,T(\lfloor n/a \rfloor) + m_2\,T(\lceil n/a \rceil) + b\,n^c & \text{if } n > 1 \end{cases}$$

Then, setting $m = m_1 + m_2$, $T(n)$ satisfies the relations

$$T(n) = \begin{cases} \Theta(n^c) & \text{if } m < a^c \\ \Theta(n^c \log n) & \text{if } m = a^c \\ \Theta(n^{\log_a m}) & \text{if } m > a^c \end{cases}$$

Proof. We prove the result in the case $m_1 = 0$, $m_2 = m > 0$, thus obtaining an upper bound on the value of $T(n)$. (In the same way one proves the result in the case $m_2 = 0$, $m_1 = m > 0$, obtaining a lower bound.)
Assume then $m_1 = 0$ and let $k = \lfloor \log_a n \rfloor$. Clearly $a^k \le n < a^{k+1}$. Since $\{T(n)\}$ is a monotone nondecreasing sequence, we know that $T(a^k) \le T(n) \le T(a^{k+1})$. The values $T(a^k)$ and $T(a^{k+1})$ can be evaluated using Theorem 6.3: in the case $m < a^c$ there exist two constants $c_1$ and $c_2$ such that

$$T(a^k) \ge c_1 a^{kc} + o(a^{kc}) \ge \frac{c_1}{a^c}\,n^c + o(n^c),$$
$$T(a^{k+1}) \le c_2 a^{(k+1)c} + o(a^{(k+1)c}) \le c_2 a^c n^c + o(n^c).$$

Substituting the values obtained into the previous inequality yields $T(n) = \Theta(n^c)$. The cases $m = a^c$ and $m > a^c$ are handled analogously.

Exercises
1) Consider the sequence $\{A(n)\}$ defined by

$$A(n) = \begin{cases} 1 & \text{if } n = 1 \\ 3A(\lceil n/2 \rceil) + n & \text{if } n > 1 \end{cases}$$

Compute the value of $A(n)$ for every integer $n$ a power of 2. Determine the order of growth of $A(n)$ as $n$ tends to $+\infty$.

2) Consider the sequence $\{B(n)\}$ defined by the equation

$$B(n) = \begin{cases} 1 & \text{if } n = 1 \\ 2B(\lfloor n/2 \rfloor) + n\sqrt{n} & \text{if } n > 1 \end{cases}$$

Determine the value of $B(n)$ for every $n$ a power of 2. Estimate the order of growth of $B(n)$ as $n$ grows.

3) Let $m, a$ be positive reals and suppose $a > 1$. For every $x \in \mathbb{R}$, define $C(x)$ by the equation

$$C(x) = \begin{cases} 0 & \text{if } x \le 1 \\ m\,C(x/a) + x^2\sqrt{x} & \text{if } x > 1 \end{cases}$$

Determine, as the constants $a$ and $m$ vary, the order of growth of $C(x)$ as $x$ tends to $+\infty$.

4) Let $\{D(n)\}$ be a sequence of integers defined by the equation

$$D(n) = \begin{cases} 1 & \text{if } n \le 1 \\ 2D(\lfloor n/2 \rfloor) + n \log n & \text{if } n > 1 \end{cases}$$

Determine the order of growth of $D(n)$ as $n$ grows to $+\infty$.

6.5 Linear equations with constant coefficients

Another family of recurrence equations that frequently appears in the analysis of algorithms is that of linear equations with constant coefficients. These are defined by equalities of the form

$$a_0 t_n + a_1 t_{n-1} + \cdots + a_k t_{n-k} = g_n \qquad (6.1)$$

where $\{t_n\}$ is the unknown sequence (which for simplicity we write in place of the function $T(n)$), $k, a_0, a_1, \ldots, a_k$ are constants, and $\{g_n\}$ is an arbitrary sequence of numbers. A sequence of numbers $\{t_n\}$ is called a solution of the equation if (6.1) is satisfied for every $n \ge k$.
Clearly, different solutions differ in the values of their first $k$ terms. An equation of this kind is called homogeneous if $g_n = 0$ for every $n \in \mathbb{N}$. In this case there is a well-known general rule that yields all solutions of the equation explicitly. Moreover, even in the non-homogeneous case, for the most common sequences $\{g_n\}$ it is possible to define a method for computing the family of solutions.

6.5.1 Homogeneous equations

To illustrate the solution rule in the homogeneous case, consider the specific example given by the equation

$$t_n - 7t_{n-1} + 10t_{n-2} = 0 \qquad (6.2)$$

We first look for solutions of the form $t_n = r^n$ for some constant $r$. Substituting these values, the equation becomes

$$r^{n-2}(r^2 - 7r + 10) = 0$$

which is satisfied by the roots of the polynomial $r^2 - 7r + 10$, that is, by $r = 5$ and $r = 2$. The equation $r^2 - 7r + 10 = 0$ is called the characteristic equation of the recurrence (6.2). It follows that the sequences $\{5^n\}$ and $\{2^n\}$ are solutions of (6.2) and hence, as is easily verified, so is the linear combination $\{\alpha 5^n + \beta 2^n\}$ for every pair of constants $\alpha$ and $\beta$.

We now show that every nonzero solution $\{c_n\}$ of (6.2) is a linear combination of $\{5^n\}$ and $\{2^n\}$. Indeed, for every $n \ge 2$ we may consider the system of linear equations

$$5^{n-1}x + 2^{n-1}y = c_{n-1}$$
$$5^{n-2}x + 2^{n-2}y = c_{n-2}$$

in the unknowns $x, y$. It admits a unique solution $x = \alpha$, $y = \beta$, since the determinant of the coefficients is nonzero. We thus obtain expressions for $c_{n-1}$ and $c_{n-2}$ in terms of $5^n$ and $2^n$. Consequently, substituting these into $c_n - 7c_{n-1} + 10c_{n-2} = 0$ and recalling that $\{5^n\}$ and $\{2^n\}$ are themselves solutions of (6.2), we obtain $c_n = \alpha 5^n + \beta 2^n$. It is easy to verify that the values of $\alpha$ and $\beta$ so obtained do not depend on $n$ and can be recovered by considering the system with $n = 2$. The relation above therefore holds for all $n \in \mathbb{N}$.

As we have seen, the solutions of the recurrence equation considered are obtained from the roots of the associated characteristic equation. If the roots are all distinct this property is completely general and can be extended to every linear homogeneous recurrence equation with constant coefficients, that is, to every equation of the form

$$a_0 t_n + a_1 t_{n-1} + \cdots + a_k t_{n-k} = 0, \qquad (6.3)$$

where $k, a_0, a_1, \ldots, a_k$ are constants. Clearly, the characteristic equation of relation (6.3) is

$$a_0 x^k + a_1 x^{k-1} + \cdots + a_k = 0.$$

Theorem 6.5. Let $a_0 t_n + a_1 t_{n-1} + \cdots + a_k t_{n-k} = 0$ be a linear homogeneous recurrence equation with constant coefficients, and suppose its characteristic equation has $k$ distinct roots $r_1, r_2, \ldots, r_k$. Then the solutions of the given equation are exactly the sequences $\{t_n\}$ such that, for every $n \in \mathbb{N}$,

$$t_n = \alpha_1 r_1^n + \alpha_2 r_2^n + \cdots + \alpha_k r_k^n$$

where $\alpha_1, \alpha_2, \ldots, \alpha_k$ are arbitrary constants.

Proof. The theorem can be proved by the same reasoning presented in the example above. One first shows that the set of solutions of the equation forms a vector space of dimension $k$, and then proves that the $k$ solutions $\{r_1^n\}, \{r_2^n\}, \ldots, \{r_k^n\}$ are linearly independent and hence form a basis of that space.

Example 6.3 (Fibonacci numbers). Consider the sequence $\{f_n\}$ of Fibonacci numbers, defined by the equation

$$f_n = \begin{cases} 0 & \text{if } n = 0 \\ 1 & \text{if } n = 1 \\ f_{n-1} + f_{n-2} & \text{if } n \ge 2 \end{cases}$$

The corresponding characteristic equation is $x^2 - x - 1 = 0$, whose roots are

$$\phi = \frac{1+\sqrt{5}}{2}, \qquad \hat\phi = \frac{1-\sqrt{5}}{2}$$

Then every solution $\{c_n\}$ of the equation has the form $c_n = \alpha \phi^n + \beta \hat\phi^n$, with $\alpha$ and $\beta$ constants. Imposing the initial conditions $c_0 = 0$, $c_1 = 1$, we obtain the system

$$\alpha + \beta = 0, \qquad \alpha\,\frac{1+\sqrt{5}}{2} + \beta\,\frac{1-\sqrt{5}}{2} = 1$$

which gives the solutions $\alpha = \frac{1}{\sqrt{5}}$, $\beta = -\frac{1}{\sqrt{5}}$. Hence, for every $n \in \mathbb{N}$,

$$f_n = \frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^{\!n} - \left(\frac{1-\sqrt{5}}{2}\right)^{\!n}\right].$$

Example 6.4. We now want to compute the sequence $\{g_n\}$ defined by the equation

$$g_n = \begin{cases} n & \text{if } 0 \le n \le 2 \\ 3g_{n-1} + 4g_{n-2} - 12g_{n-3} & \text{if } n \ge 3 \end{cases}$$

The characteristic equation of the recurrence is $x^3 - 3x^2 - 4x + 12 = 0$, whose roots are $3$, $-2$ and $2$.
It follows that $g_n$ has the form $g_n = \alpha\,3^n + \beta\,(-2)^n + \gamma\,2^n$. Imposing the initial conditions we obtain the system

$$\begin{cases} \alpha + \beta + \gamma = 0 \\ 3\alpha - 2\beta + 2\gamma = 1 \\ 9\alpha + 4\beta + 4\gamma = 2 \end{cases}$$

which gives the solution $\alpha = \frac{2}{5}$, $\beta = -\frac{3}{20}$, $\gamma = -\frac{1}{4}$. The sequence we are looking for is therefore given by the values

$$g_n = \frac{2}{5}\,3^n - \frac{3}{20}\,(-2)^n - \frac{1}{4}\,2^n$$

So far we have only considered recurrences whose characteristic equation has simple roots. The situation is only slightly more complicated when multiple roots occur. Indeed, we know that the set of solutions of equation (6.3) forms a vector space of dimension $k$, so the only problem is to find $k$ linearly independent solutions. The following theorem, whose proof we omit, presents the general solution.

Theorem 6.6. Given the recurrence equation $a_0 t_n + a_1 t_{n-1} + \cdots + a_k t_{n-k} = 0$, suppose its characteristic equation has $h\ (\le k)$ distinct roots $r_1, r_2, \ldots, r_h$, each $r_i$ with multiplicity $m_i$. Then the solutions of the given recurrence equation are exactly the linear combinations of the sequences $\{n^j r_i^n\}$ with $j \in \{0, 1, \ldots, m_i - 1\}$ and $i \in \{1, 2, \ldots, h\}$.

Example 6.5. We want to compute the sequence $\{h_n\}$ defined by

$$h_n = \begin{cases} 0 & \text{if } n = 0, 1 \\ 1 & \text{if } n = 2 \\ 7h_{n-1} - 15h_{n-2} + 9h_{n-3} & \text{if } n \ge 3 \end{cases}$$

In this case the characteristic equation is $x^3 - 7x^2 + 15x - 9 = 0$; it has the simple root $x = 1$ and the root $x = 3$ with multiplicity 2. Hence $h_n$ has the form $h_n = \alpha\,n\,3^n + \beta\,3^n + \gamma$. Imposing the initial conditions we obtain the system

$$\begin{cases} \beta + \gamma = 0 \\ 3\alpha + 3\beta + \gamma = 0 \\ 18\alpha + 9\beta + \gamma = 1 \end{cases}$$

which gives the solution $\alpha = \frac{1}{6}$, $\beta = -\frac{1}{4}$, $\gamma = \frac{1}{4}$. The sequence we are looking for is therefore given by the values

$$h_n = \frac{n\,3^n}{6} - \frac{3^n}{4} + \frac{1}{4}$$

6.5.2 Non-homogeneous equations

Consider now a non-homogeneous linear recurrence equation with constant coefficients, that is, a relation of the form

$$a_0 t_n + a_1 t_{n-1} + \cdots + a_k t_{n-k} = g_n \qquad (6.4)$$

where $\{t_n\}$ is the unknown sequence, $k, a_0, a_1, \ldots, a_k$ are constants, and $\{g_n\}$ is any sequence other than the identically zero one.
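Before tackling the non-homogeneous case, the closed forms obtained in Examples 6.4 and 6.5 are easy to check numerically against their recurrences. This is a minimal sketch (hypothetical helper names), using exact rational arithmetic so that no rounding is involved:

```python
from fractions import Fraction as F

def g_closed(n):
    # Example 6.4: g_n = (2/5)*3^n - (3/20)*(-2)^n - (1/4)*2^n
    return F(2, 5) * 3**n - F(3, 20) * (-2)**n - F(1, 4) * 2**n

def h_closed(n):
    # Example 6.5 (double root 3): h_n = n*3^n/6 - 3^n/4 + 1/4
    return F(n * 3**n, 6) - F(3**n, 4) + F(1, 4)

g = [0, 1, 2]   # initial conditions g_n = n for n = 0, 1, 2
h = [0, 0, 1]   # initial conditions h_0 = h_1 = 0, h_2 = 1
for n in range(3, 15):
    g.append(3 * g[-1] + 4 * g[-2] - 12 * g[-3])
    h.append(7 * h[-1] - 15 * h[-2] + 9 * h[-3])
for n in range(15):
    assert g_closed(n) == g[n] and h_closed(n) == h[n]
```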
Let $\{u_n\}$ and $\{v_n\}$ be two solutions of (6.4). This means that, for every $n \ge k$,

$$a_0 u_n + a_1 u_{n-1} + \cdots + a_k u_{n-k} = g_n$$
$$a_0 v_n + a_1 v_{n-1} + \cdots + a_k v_{n-k} = g_n$$

Subtracting the terms of the two equalities we obtain

$$a_0(u_n - v_n) + a_1(u_{n-1} - v_{n-1}) + \cdots + a_k(u_{n-k} - v_{n-k}) = 0$$

so the sequence $\{u_n - v_n\}$ is a solution of the homogeneous equation associated with (6.4). Conversely, if $\{u_n\}$ is a solution of (6.4) and $\{w_n\}$ is a solution of the homogeneous equation $a_0 t_n + a_1 t_{n-1} + \cdots + a_k t_{n-k} = 0$, then their sum $\{u_n + w_n\}$ is also a solution of (6.4). We have thus shown that all solutions of (6.4) are obtained by adding one particular solution to all the solutions of the associated homogeneous equation. This means that to solve (6.4) we can carry out the following steps:

1. find all solutions of the associated homogeneous equation by applying the characteristic-equation method described in the previous section;
2. find one particular solution of the given equation and add it to the previous ones.

The problem is that there is no general method for finding a particular solution of a non-homogeneous equation. There are only specific techniques that depend on the value of the right-hand side $g_n$. In some cases, however, finding the particular solution is quite simple.

Example 6.6. We want to find the solutions of the equation

$$t_n - 2t_{n-1} + 3 = 0$$

The characteristic equation of the associated homogeneous equation is $x - 2 = 0$, so its general solution is $\{\alpha\,2^n\}$, with $\alpha$ an arbitrary constant. Moreover it is easy to verify that the sequence $\{u_n\}$ with $u_n = 3$ for every $n \in \mathbb{N}$ is a solution of the original equation. The solutions are therefore exactly the sequences of the form $\{3 + \alpha\,2^n\}$ with $\alpha$ constant.

The method of undetermined coefficients

One of the most common techniques for finding a particular solution of a non-homogeneous equation is known as the method of undetermined coefficients.
It consists in replacing the terms $t_n$ of equation (6.4) by those of a particular sequence in which some constants are unknown, and then determining the values of these constants by identification. One can show that if the right-hand side $g_n$ of (6.4) has the form

$$g_n = \sum_{i=1}^{h} b_i^n\,P_i(n)$$

where, for every $i = 1, 2, \ldots, h$, $b_i$ is a constant and $P_i$ a polynomial in $n$, then a particular solution must be of the type

$$u_n = \sum_{i=1}^{h} b_i^n\,Q_i(n)$$

where the $Q_i(n)$ are polynomials satisfying the following properties:

1. if $b_i$ is not a root of the characteristic equation of the homogeneous equation associated with (6.4), then the degree of $Q_i(n)$ equals that of $P_i(n)$;
2. if $b_i$ is a root of multiplicity $m_i$ of the characteristic equation of the homogeneous equation associated with (6.4), then the degree of $Q_i(n)$ is the sum of $m_i$ and the degree of $P_i(n)$.

Example 6.7. Let us find all solutions of the equation

$$t_n - 3t_{n-1} + t_{n-2} = n + 3^n.$$

The characteristic equation of the associated homogeneous equation is $x^2 - 3x + 1 = 0$, with roots $\frac{3+\sqrt{5}}{2}$ and $\frac{3-\sqrt{5}}{2}$. The general solution of the associated homogeneous equation is therefore

$$\alpha\left(\frac{3+\sqrt{5}}{2}\right)^{\!n} + \beta\left(\frac{3-\sqrt{5}}{2}\right)^{\!n}$$

where $\alpha$ and $\beta$ are constants. We now look for a particular solution by applying the method of undetermined coefficients. In our case $b_1 = 1$, $b_2 = 3$, $P_1 = n$, $P_2 = 1$; consequently a candidate solution is $u_n = (an + b) + c\,3^n$, for suitable constants $a, b, c$. To determine their values we substitute $u_n$ into the equation, obtaining

$$u_n - 3u_{n-1} + u_{n-2} = n + 3^n$$

that is, after a simple computation,

$$(-a - 1)\,n + (a - b) + \left(\frac{c}{9} - 1\right)3^n = 0$$

which is satisfied for every $n \in \mathbb{N}$ if and only if $a = -1$, $b = -1$, $c = 9$. A particular solution is therefore $u_n = 3^{n+2} - n - 1$, and consequently the solutions of the original equation are

$$\alpha\left(\frac{3+\sqrt{5}}{2}\right)^{\!n} + \beta\left(\frac{3-\sqrt{5}}{2}\right)^{\!n} + 3^{n+2} - n - 1$$

as the constants $\alpha$ and $\beta$ vary.

Exercises
1) Describe a method for computing a particular solution of an equation of the form (6.4) whose right-hand side $g_n$ is constant.
2) Consider the following procedure:

Procedure Fun(n)
if n ≤ 1 then return n
else begin
  x = Fun(n - 1)
  y = Fun(n - 2)
  return 3x - y
end

Let $D(n)$ be the result of the computation performed by the procedure on input $n$.
a) Compute the exact value of $D(n)$ for every $n \in \mathbb{N}$.
b) Determine the number of arithmetic operations performed by the procedure on input $n \in \mathbb{N}$.
c) Determine the order of growth of the running time of the procedure on input $n \in \mathbb{N}$ under the logarithmic cost criterion.

3) Consider the following procedure, which computes the value $B(n) \in \mathbb{N}$ on input $n \in \mathbb{N}$, $n > 0$:

begin
  read n
  a = 2
  for k = 1, ..., n do
    a = 2 + k·a
  output a
end

a) Express the value of $B(n)$ in closed form (by means of a summation).
b) Determine the order of growth of the running time and of the memory space required by the procedure under the logarithmic cost criterion.

6.6 Variable substitution

Many equations that are not linear with constant coefficients can be reduced to that form (or to some other solvable form) by simple substitutions. An important example is given by the divide-and-conquer equations:

$$T(n) = \begin{cases} b & \text{if } n = 1 \\ m\,T(n/a) + b\,n^c & \text{if } n > 1 \end{cases}$$

Substituting $n = a^k$ and setting $H(k) = T(a^k)$, we obtain the equation

$$H(k) = \begin{cases} b & \text{if } k = 0 \\ m\,H(k-1) + b\,a^{kc} & \text{if } k > 0 \end{cases}$$

This can be solved by the technique illustrated in the previous section (or by the summing-factors method), finally recovering $T(n) = H(\log_a n)$.

Exercise. Prove Theorem 6.3 using the procedure just illustrated.

Another example of substitution is provided by the equation $t_n = a\,(t_{n-1})^b$, where $a$ and $b$ are positive constants, $b \ne 1$, with the initial condition $t_0 = 1$. In this case we can set $u_n = \log_a t_n$, obtaining

$$u_n = b \log_a t_{n-1} + 1 = b\,u_{n-1} + 1;$$

this can be solved easily by the summing-factors method:

$$u_n = \frac{b^n - 1}{b - 1}.$$

Applying the substitution again we recover

$$t_n = a^{\frac{b^n - 1}{b - 1}}.$$
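The closed form just derived can be verified for concrete constants. The sketch below (hypothetical helper name; $a = b = 2$, so that the closed form becomes $t_n = 2^{2^n - 1}$) iterates the original equation directly:

```python
def t_rec(n, a=2, b=2):
    # t_0 = 1;  t_n = a * t_{n-1}**b
    t = 1
    for _ in range(n):
        t = a * t**b
    return t

# closed form obtained via the substitution u_n = log_a t_n:
#   t_n = a**((b**n - 1) / (b - 1)),  here 2**(2**n - 1)
for n in range(8):
    assert t_rec(n) == 2 ** (2**n - 1)
```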
Example 6.8. Consider the equation

$$t_n = t_{n-1}\left(\frac{t_{n-1}}{t_{n-2}} + 1\right)$$

with the initial conditions $t_0 = t_1 = 1$. Dividing by $t_{n-1}$ we obtain

$$\frac{t_n}{t_{n-1}} = \frac{t_{n-1}}{t_{n-2}} + 1.$$

Hence, with the substitution $v_n = \frac{t_n}{t_{n-1}}$, we obtain the equation $v_n = v_{n-1} + 1$ with the initial condition $v_1 = 1$. Clearly $v_n = n$, and therefore

$$t_n = \prod_{i=1}^{n} v_i = n!.$$

Exercises
1) Let $T(n)$ be the number of bits needed to represent the positive integer $n$. Find a recurrence equation for $T(n)$ and solve it.
2) Consider the sequence $\{T(n)\}$ defined by

$$T(n) = \begin{cases} 0 & \text{if } n = 1 \\ 1 + T(\lfloor \sqrt{n} \rfloor) & \text{if } n > 1 \end{cases}$$

Prove that $T(n) = \lfloor \log_2 \log_2 n \rfloor + 1$ for every integer $n > 1$.

6.6.1 The Quicksort equation

As we know, Quicksort is one of the most important algorithms used to sort a sequence of elements (see Section 8.5). The analysis of its average-case running time reduces to solving the recurrence equation

$$t_n = n - 1 + \frac{2}{n}\sum_{k=0}^{n-1} t_k$$

with the initial condition $t_0 = 0$. This equation can be solved with a suitable change of variable; the procedure has some generality, since it makes it possible to handle equations containing summations of the form $\sum_{k=0}^{n} t_k$. First of all, the recurrence can be written in the form

$$n\,t_n = n(n-1) + 2\sum_{k=0}^{n-1} t_k,$$

and hence, referring to the value $n-1$ (thus for every $n > 1$),

$$(n-1)\,t_{n-1} = (n-1)(n-2) + 2\sum_{k=0}^{n-2} t_k.$$

Subtracting the two equalities term by term eliminates the summation, yielding

$$n\,t_n = (n+1)\,t_{n-1} + 2(n-1).$$

Note that this last relation holds for every $n \ge 1$. Dividing then by $n(n+1)$ and setting $u_n = \frac{t_n}{n+1}$, we obtain

$$u_n = u_{n-1} + 2\left(\frac{2}{n+1} - \frac{1}{n}\right).$$

We can now apply the summing-factors method, obtaining

$$u_n = 2\sum_{k=1}^{n}\left(\frac{2}{k+1} - \frac{1}{k}\right) = 2\left\{\sum_{k=1}^{n}\frac{1}{k} - 2 + \frac{2}{n+1}\right\};$$

from which we obtain the expression for $t_n$:

$$t_n = (n+1)\,u_n = 2(n+1)\sum_{k=1}^{n}\frac{1}{k} - 4n = 2n\log n + O(n).$$

Exercises
1) Find the solution of the equation

$$t_n = \frac{n}{n+1}\,t_{n-1} + 1$$

with the initial condition $t_0 = 0$.
2) Consider the sequence $\{t_n\}$ defined by the equation

$$t_n = n\,(t_{n-1})^2$$

with the initial condition $t_1 = 1$. Prove that $t_n = \Omega\!\left(2^{2^{n-1}}\right)$ and $t_n = O\!\left(n^{2^n}\right)$.

Chapter 7. Generating functions

Generating functions are a classical tool, originally introduced to solve counting problems in various areas of mathematics, which has acquired particular importance in the analysis of algorithms. They have made it possible to develop a remarkable number of methods and techniques used both for solving recurrence equations and for studying the properties of the combinatorial structures commonly employed in algorithm design. One of their major applications is the possibility of using well-established analytic techniques to derive asymptotic expansions.

The underlying idea of these methods is to represent a sequence of numbers by an analytic function, and to make operations on sequences correspond to analogous operations between functions. In this way, we can formulate an enumeration problem by one or more relations defined on analytic functions and, once the solution has been found in that setting, return to the original context by computing the sequence(s) associated with the functions obtained. In this sense generating functions can be viewed as an example of a transform. For historical reasons, and above all in texts oriented toward engineering applications, generating functions are also called z-transforms.

7.1 Definitions

Consider a sequence of real numbers $\{a_0, a_1, \ldots, a_n, \ldots\}$, which in the following we denote by $\{a_n\}_{n \ge 0}$ or, more simply, by $\{a_n\}$. Suppose that the power series

$$\sum_{n=0}^{+\infty} a_n z^n$$

has radius of convergence $R$ greater than 0 (this happens in most cases of interest for the analysis of algorithms, and in this chapter we consider only sequences with this property). We then call the generating function of $\{a_n\}$ the function of a real variable

$$A(z) = \sum_{n=0}^{+\infty} a_n z^n \qquad (7.1)$$

defined on the interval $(-R, R)$.¹

¹ Note that in fact $A(z)$ can be interpreted as a function of a complex variable defined on the open disk with center 0 and radius $R$.
Chiamiamo allora funzione generatrice di {an } la funzione di variabile reale A(z) = + X an z n (7.1) n=0 definita sullintervallo (R, R) 1 . 1 Nota che in realta A(z) puo essere interpretata come funzione di variabile complessa definita nel cerchio aperto di centro 0 e raggio R. 86 87 CAPITOLO 7. FUNZIONI GENERATRICI Viceversa, data una funzione A(z), sviluppabile in serie di potenze con centro nello 0, diremo che {an }n0 e la sequenza associata alla funzione A(z) se luguaglianza 7.1 vale per ogni z in un opportuno intorno aperto di 0. Questultima definizione e ben posta: se {an } e {bn } sono sequenze distinte, e le serie di potenze + X + X an z n , n=0 bn z n n=0 hanno raggio di convergenza positivo, allora le corrispondenti funzioni generatrici sono distinte. Abbiamo cos costruito una corrispondenza biunivoca tra la famiglia delle sequenze considerate e linsieme delle funzioni sviluppabili in serie di potenze con centro in 0. Un esempio particolarmente semplice si verifica quando la sequenza {an } e definitivamente nulla; in questo caso la funzione generatrice corrispondente e un polinomio. Cos per esempio, la funzione generatrice della sequenza {1, 0, 0, . . . , 0, . . .} e la funzione costante A(z) = 1. In generale, per passare da una funzione generatrice A(z) alla sequenza associata {an } sara sufficiente determinare lo sviluppo in serie di Taylor di A(z) con centro in 0: A(z) = A(0) + A0 (0)z + A00 (z) 2 A(n) (0) n z + + z + , 2 n! dove con A(n) (0) indichiamo la derivata n-esima di A(z) valutata in 0. Ne segue allora che an = A(n) (0) n! per ogni n IN. Cos, ricordando lo sviluppo in serie di Taylor delle funzioni tradizionali, possiamo determinare immediatamente le sequenze associate in diversi casi particolari: m 1. per ogni m IN, la funzione (1 + z)m e la funzione generatrice della sequenza { m 0 , 1 , m . . . , m , 0, 0 . . . , 0 . . .} poiche  m (1 + z) = m X n=0 ! m n z n (z IR). 1 2. 
Per ogni b IR, la funzione 1bz e la funzione generatrice della sequenza {bn }n0 poiche + X 1 = bn z n 1 bz n=0 (|z| < |1/b|). 1 3. La funzione ez e la funzione generatrice della sequenza { n! }n0 poiche ez = + X zn n! n=0 (z IR). 1 4. La funzione log 1z e la funzione generatrice della sequenza {0, 1, 21 , . . . , n1 , . . .} poiche + X zn 1 log = 1 z n=1 n (|z| < 1).  88 CAPITOLO 7. FUNZIONI GENERATRICI m+n1 1 }n0 5. Per ogni m IN, la funzione (1z) m e la funzione generatrice della sequenza { n poiche ! + X m+n1 1 = zn (|z| < 1). (1 z)m n=0 n  6. Generalizzando la nozione di coefficiente binomiale possiamo definire, per ogni numero reale e ogni n IN, il coefficiente n nel modo seguente: n ! ( = 1 (1)(n+1) n! se n = 0 se n 1 Cos e facile verificare che (1 + z) e la funzione generatrice di { n }n0 poiche  (1 + z) = + X n=0 ! n z n (|z| < 1) n n+1 per ogni n IN. Nota che n = (1) n   Esercizi 1) La sequenza {n!}n0 ammette funzione generatrice? 2) Determinare le sequenze associate alle seguenti funzioni generatrici (con b, IR, b, 6= 0): ebz 1 , z 1 1 log , z 1 bz (1 + bz) 1 z  1 3) Dimostrare che la funzione 14z e funzione generatrice della sequenza { 2n }n0 . n 7.2 Funzioni generatrici ed equazioni di ricorrenza Le funzioni generatrici forniscono un metodo generale per determinare o per approssimare le soluzioni di equazioni di ricorrenza. Infatti in molti casi risulta piu facile determinare equazioni per calcolare la funzione generatrice di una sequenza {an } piuttosto che risolvere direttamente equazioni di ricorrenza per {an }. Esistono inoltre consolidate tecniche analitiche che permettono di determinare una stima asintotica di {an } una volta nota la sua funzione generatrice. Questi vantaggi suggeriscono il seguente schema generale per risolvere una equazione di ricorrenza di una data sequenza {an }: 1. trasformare la ricorrenza in una equazione tra funzioni generatrici che ha per incognita la funzione generatrice A(z) di {an }; 2. 
risolvere questultima calcolando una espressione esplicita per A(z); 3. determinare {an } sviluppando A(z) in serie di Taylor con centro in 0 oppure calcolarne lespressione asintotica conoscendo i punti di singolarita di A(z). 89 CAPITOLO 7. FUNZIONI GENERATRICI Per illustrare questo approccio presentiamo un esempio semplice che consente di risolvere unequazione gia considerata nel capitolo precedente. Si tratta dellequazione descritta nellesempio (6.3) che definisce i numeri di Fibonacci. Vogliamo calcolare i termini della sequenza {fn } definiti dalla seguente equazione: se n = 0 0 1 se n = 1 fn = f n1 + fn2 se n 2 n In questo caso possiamo calcolare la funzione generatrice F (z) = + n=0 fn z con il metodo che segue. Per ogni n 2 sappiamo che fn = fn1 + fn2 ; quindi moltiplicando a destra e a sinistra per z n e sommando su tutti gli n 2 si ottiene P + X fn z n = n=2 + X fn1 z n + n=2 + X fn2 z n . n=2 Tenendo conto delle condizioni iniziali f0 = 0, f1 = 1, lequazione puo essere scritta nella forma F (z) z = zF (z) + z 2 F (z), z ovvero F (z) = 1zz 2 . Dobbiamo quindi determinare lo sviluppo in serie di Taylor della funzione z . 1 z z2 Per fare questo consideriamo il polinomio 1 z z 2 ; le sue radici sono = Calcoliamo ora le costanti reali A e B tali che 51 e = 2 A B z z + z = 1 1 1 z z2 Questa equazione corrisponde al sistema A+B =0 A + B = che fornisce i valori A = 15 e B = 15 e di conseguenza otteniamo 1 F (z) = 5 ( 1 1 z 1 1 z ) Ricordando lo sviluppo delle serie geometriche si ricava 1 F (z) = 5 ( + X zn + X n  ( zn n n=0 n n=0 ) e quindi, per ogni n 0, abbiamo 1 fn = 5  2 51 n 2 5+1  1 = 5 !n 1+ 5 2 !n ) 1 5 . 2 5+1 2 . 90 CAPITOLO 7. 
Exercise. Using generating functions, find the solution of the following equations:

$$a_n = \begin{cases} 1 & \text{if } n = 0 \\ 2a_{n-1} + 1 & \text{if } n \ge 1 \end{cases} \qquad b_n = \begin{cases} 0 & \text{if } n = 0 \\ 1 & \text{if } n = 1 \\ 3b_{n-1} - b_{n-2} & \text{if } n \ge 2 \end{cases} \qquad c_n = \begin{cases} 0 & \text{if } n = 0 \\ 1 & \text{if } n = 1 \\ 4c_{n-1} - 4c_{n-2} & \text{if } n \ge 2 \end{cases}$$

7.3 Computing generating functions

The previous example shows how the recurrence equation defining a sequence $\{f_n\}$ can be transformed into an equation for the corresponding generating function $F(z)$. In this regard we now present a general discussion of this transformation, showing the correspondence that exists between operations on sequences and corresponding operations on generating functions. In many cases this correspondence makes it possible to turn recurrences (or, more generally, relations) between numerical sequences into equations for the associated generating functions.

7.3.1 Operations on numerical sequences

We first define some operations on sequences. Let $\{f_n\}_{n \ge 0}$ and $\{g_n\}_{n \ge 0}$ be two sequences and let $c \in \mathbb{R}$ and $k \in \mathbb{N}$ be two constants. We define:

Multiplication by a constant: $c \cdot \{f_n\} = \{c f_n\}$
Sum: $\{f_n\} + \{g_n\} = \{f_n + g_n\}$
Convolution: $\{f_n\} \otimes \{g_n\} = \{\sum_{k=0}^{n} f_k g_{n-k}\}$
Shift: $E^k\{f_n\} = \{f_{k+n}\}_{n \ge 0}$
Multiplication by $n$: $n \cdot \{f_n\} = \{n f_n\}$
Division by $n+1$: $\frac{1}{n+1}\{f_n\} = \{\frac{f_n}{n+1}\}_{n \ge 0}$

Observe here that the sum $\sum_{k=0}^{n} f_k$ can be obtained by convolution:

$$\{1\} \otimes \{f_n\} = \left\{\sum_{k=0}^{n} f_k\right\}$$

where $\{1\}$ denotes the sequence $\{b_n\}_{n \ge 0}$ with $b_n = 1$ for every $n \in \mathbb{N}$. A large class of recurrence equations is obtained by applying the operations just defined.

Example 7.1. The sequence of Fibonacci numbers defined in the previous section,

$$f_n = \begin{cases} n & \text{if } n \le 1 \\ f_{n-1} + f_{n-2} & \text{if } n \ge 2 \end{cases}$$

can be written in the form $f_{n+2} = f_{n+1} + f_n$, that is,

$$E^2\{f_n\} = E^1\{f_n\} + \{f_n\}$$

together with the initial conditions $f_n = n$ for $n \le 1$.
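The operations above are straightforward to implement on (truncated) sequences represented as lists. A minimal sketch of convolution and shift (hypothetical helper names), checking in particular that $\{1\} \otimes \{f_n\}$ yields the partial sums:

```python
def convolution(f, g):
    # ({f} ⊗ {g})_n = sum_{k=0}^{n} f_k * g_{n-k}
    n = min(len(f), len(g))
    return [sum(f[k] * g[i - k] for k in range(i + 1)) for i in range(n)]

def shift(f, k):
    # (E^k {f})_n = f_{n+k}
    return f[k:]

f = [2, 7, 1, 8, 2, 8]
ones = [1] * len(f)
assert convolution(ones, f) == [2, 9, 10, 18, 20, 28]  # partial sums of f
assert shift(f, 2) == [1, 8, 2, 8]
```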
Example 7.2  Consider the recurrence

    f_n = 1                               if n = 0
    f_n = Σ_{k=0}^{n-1} f_k f_{n-1-k}     if n >= 1.

It can be rewritten in the form

    E^1 {f_n} = {f_n} · {f_n}

together with the initial condition f_0 = 1.

Example 7.3  The recurrence equation

    f_n = 1                           if n = 0
    f_n = (1/n) Σ_{k=0}^{n-1} f_k     if n >= 1

can be rewritten in the form

    n E^1 {f_n} + E^1 {f_n} = {1} · {f_n}

together with the initial condition f_0 = 1.

7.3.2 Operations on generating functions

Since the correspondence between sequences and their generating functions is one-to-one, every operation on sequences corresponds in principle to a precise operation on generating functions. We now describe the operations on generating functions corresponding to the operations on sequences defined in the previous section. Denoting by F(z) and G(z) the generating functions of the sequences {f_n}_{n>=0} and {g_n}_{n>=0} respectively, the following table collects some immediately verified correspondences. The first column shows the n-th term of the sequence, the second the corresponding generating function; c denotes an arbitrary real constant.

    c f_n                      c F(z)
    f_n + g_n                  F(z) + G(z)
    Σ_{k=0}^{n} f_k g_{n-k}    F(z) G(z)
    f_{k+n}                    ( F(z) - f_0 - f_1 z - ... - f_{k-1} z^{k-1} ) / z^k
    n f_n                      z F'(z)
    f_n / (n+1)                (1/z) ∫_0^z F(t) dt

Concerning the use of the derivative and of the integral in the table above, recall that if F(z) is the generating function of a sequence {f_n}, its derivative F'(z) can also be expanded in a power series centered at 0; moreover, this expansion has the form

    F'(z) = Σ_{n>=1} n f_n z^{n-1} = Σ_{n>=0} (n+1) f_{n+1} z^n.

This means that F'(z) is the generating function of the sequence {(n+1) f_{n+1}}_{n>=0}.

Example 7.4  Let us compute the generating function of {n+1}_{n>=0}. Since 1/(1-z) is the generating function of {1}, the required function is simply the derivative of 1/(1-z):
    Σ_{n>=0} (n+1) z^n = d/dz [ 1/(1-z) ] = 1/(1-z)^2.

A similar remark holds for the integral function I(z) = ∫_0^z F(t) dt: I(z) too can be expanded in a power series centered at 0, and its expansion has the form

    I(z) = ∫_0^z F(t) dt = Σ_{n>=1} (f_{n-1}/n) z^n.

Consequently I(z) is the generating function of the sequence {0, f_0/1, f_1/2, ..., f_{n-1}/n, ...}.

Example 7.5  Let us compute the generating function of the sequence {0, 1, 1/2, ..., 1/n, ...} (which in the following we denote more simply by {1/n}). It can be obtained by integrating the geometric series:

    Σ_{n>=1} z^n / n = ∫_0^z 1/(1-t) dt = log ( 1/(1-z) ).

Let us now see some applications of the correspondences collected in the table above. A first example follows immediately from the cases just examined.

Example 7.6  Let us compute the generating function of the sequence {H_n} of the harmonic numbers. From their definition we know that

    H_n = Σ_{k=1}^{n} 1/k

and hence {H_n} = {1/n} · {1} = {1} · {1/n}. Consequently the generating function of {H_n} is given by the product

    ( 1/(1-z) ) log ( 1/(1-z) ).

More generally, we are now able to transform a recurrence equation into an equation between generating functions, and we can therefore fully apply the method described in Section 7.2. In particular we can solve the recurrence equations appearing in Examples 7.1, 7.2 and 7.3. Let us see explicitly how the first case (Example 7.1) is solved. The equation

    f_n = n                  if n <= 1
    f_n = f_{n-1} + f_{n-2}  if n >= 2

is rewritten, by means of shift operators and boundary conditions, in the form f_{n+2} = f_{n+1} + f_n, f_0 = 0, f_1 = 1. Applying rules 2) and 4) of the table above we obtain

    ( F(z) - z ) / z^2 = F(z)/z + F(z).
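Returning to Example 7.6, the claim that (1/(1-z)) log(1/(1-z)) generates the harmonic numbers can be verified numerically by convolving coefficient prefixes; this check is ours, not part of the text.

```python
# Convolve the coefficients of log(1/(1-z)), namely 0, 1, 1/2, 1/3, ...,
# with those of 1/(1-z), namely all ones, and compare with H_n exactly.
from fractions import Fraction

N = 12
log_coeffs = [Fraction(0)] + [Fraction(1, n) for n in range(1, N)]
ones = [Fraction(1)] * N
product = [sum(log_coeffs[k] * ones[n - k] for k in range(n + 1))
           for n in range(N)]

harmonic = [sum(Fraction(1, k) for k in range(1, n + 1)) for n in range(N)]
assert product == harmonic
```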
Applying a similar argument to the other two examples, we obtain the following equations on the generating functions:

    ( F(z) - 1 ) / z = F^2(z)                                               (Example 7.2)

    z · d/dz [ ( F(z) - 1 ) / z ] + ( F(z) - 1 ) / z = F(z) · 1/(1-z)       (Example 7.3)

Consequently, determining F(z) is here reduced, respectively, to the solution of a first-degree equation (Example 7.1), of a second-degree equation (Example 7.2), and of a first-order linear differential equation (Example 7.3). We have already shown in Section 7.2 how to handle first-degree equations; an analysis of the other two cases is presented in the next section.

Exercises

1) Determine the generating function of the following sequences:

    {n}_{n>=0},  {n-1}_{n>=0},  {n 2^n}_{n>=0},  {n^2}_{n>=0},  {1/(n+2)}_{n>=0},  { Σ_{k=0}^{n} (k+1)(n-k+1) }_{n>=0}.

2) Recalling Exercise 3 of Section 7.1, prove that for every n ∈ N

    Σ_{k=0}^{n} C(2k, k) C(2n-2k, n-k) = 4^n.

3) Determine the generating function F(z) of the sequence {f_n} where

    f_{n+2} - 2 f_{n+1} + f_n = n,    f_0 = f_1 = 0.

7.4 Applications

As a first example we apply the methods illustrated above to determine the solution of equations of divide-and-conquer type. Let T(n) be defined by

    T(n) = b                  if n = 1
    T(n) = m T(n/a) + g(n)    if n >= 2

for n ∈ N a power of a, where we assume m, b > 0 and a > 1. With the substitution n = a^k, setting f_k = T(a^k) and h_k = g(a^{k+1}), we have

    f_{k+1} = m f_k + h_k.

Denoting by F(z) and H(z) the generating functions of {f_k} and {h_k} respectively, we obtain

    ( F(z) - F(0) ) / z = m F(z) + H(z).

This is a first-degree equation in F(z) that can be solved directly once the initial value F(0) = b and the function H(z) are known.

Exercise  For all n ∈ N powers of 2 solve the equation

    T(n) = 1                     if n = 1
    T(n) = 3 T(n/2) + n log n    if n >= 2.

7.4.1 Counting binary trees

A classical enumeration problem consists in determining the number of unlabeled binary trees with n nodes, for arbitrary n ∈ N. (For the definition of binary tree see Section 4.6.3.)
Intuitively, an unlabeled binary tree is a binary tree from which the names of the nodes have been removed; the nodes are thus indistinguishable, and two such trees are different only if the corresponding graphical representations, stripped of the node names, are distinct. [Figure: the five unlabeled binary trees with three nodes.]

An inductive definition is the following: an unlabeled binary tree is either the empty tree, which we denote by Λ, or it consists of a node called the root together with two unlabeled binary trees T_1, T_2 (representing the left subtree and the right subtree respectively). Let b_n denote the number of unlabeled binary trees with n nodes, n ∈ N. From the definition we know that b_0 = 1; moreover, if n > 0, a binary tree with n+1 nodes has, besides the root, k nodes in its left subtree and n-k in its right subtree, for some integer k, 0 <= k <= n. Hence b_n satisfies the following recurrence:

    b_0 = 1,    b_{n+1} = Σ_{k=0}^{n} b_k b_{n-k}.

Passing to generating functions and denoting by B(z) the generating function of {b_n}, the equation translates into

    ( B(z) - 1 ) / z = B^2(z),    that is,    B(z) = 1 + z B(z)^2.

Solving this equation we obtain the two solutions

    B_1(z) = ( 1 + √(1-4z) ) / (2z),    B_2(z) = ( 1 - √(1-4z) ) / (2z).

The function B_1(z) must be discarded, since lim_{z→0} B_1(z) = ∞ and hence B_1 cannot be expanded in a Taylor series centered at 0. It follows that B_2 is the generating function of the sequence {b_n} (in particular one verifies that lim_{z→0} B_2(z) = 1 = b_0). We must now expand (1 - √(1-4z)) / (2z) in a Taylor series centered at 0. Applying the expansion of functions of the form (1+z)^α given in Section 7.1, we obtain

    √(1-4z) = Σ_{n>=0} C(1/2, n) (-4)^n z^n

where C(1/2, 0) = 1, while for every n > 0 we have

    C(1/2, n) = [ (1/2)(1/2 - 1) ··· (1/2 - n + 1) ] / n!
              = (-1)^{n-1} [ 1 · 3 · 5 ··· (2n-3) ] / (2^n n!)
              = (-1)^{n-1} (2n-2)! / ( 2^n n! · (2 · 4 ··· (2n-2)) )
              = ( 2 (-1)^{n-1} / (n 4^n) ) C(2n-2, n-1).

It follows that

    ( 1 - √(1-4z) ) / (2z) = (1/(2z)) Σ_{n>=1} (2/n) C(2n-2, n-1) z^n
                           = Σ_{n>=1} (1/n) C(2n-2, n-1) z^{n-1}
                           = Σ_{n>=0} ( 1/(n+1) ) C(2n, n) z^n

and consequently b_n = (1/(n+1)) C(2n, n). Recall that the integers (1/(n+1)) C(2n, n) are known in the literature as the Catalan numbers.

7.4.2 Average-case analysis of Quicksort

In Section 6.6.1 we already studied the recurrence equation for the average running time of the Quicksort algorithm. We now show how to solve the same equation using generating functions. The equation is given by the equality

    T_n = n - 1 + (2/n) Σ_{k=0}^{n-1} T_k                                   (7.2)

with the initial condition T_0 = 0. Multiplying both sides by n z^{n-1} we obtain

    n T_n z^{n-1} = n(n-1) z^{n-1} + 2 Σ_{k=0}^{n-1} T_k z^{n-1}.

Summing both sides of this equality over all n >= 1, we derive the following equation:

    T'(z) = 2z/(1-z)^3 + ( 2/(1-z) ) T(z),

where T(z) and T'(z) denote the generating function of {T_n}_{n>=0} and its derivative respectively. This is a first-order linear differential equation that can be solved by classical methods. The homogeneous differential equation associated with it,

    T'(z) = ( 2/(1-z) ) T(z),

admits the solution (1-z)^{-2}; hence the general integral, evaluated for T(0) = 0, is

    T(z) = (1-z)^{-2} ∫_0^z 2t/(1-t) dt = ( 2/(1-z)^2 ) ( log(1/(1-z)) - z ).

We must now expand the function thus obtained in a Taylor series, applying the properties of the operations on generating functions. Recall that 1/(1-z)^2 is the generating function of the sequence {n+1}, while log(1/(1-z)) is the generating function of {1/n}. Consequently the sequence associated with the product (1/(1-z)^2) log(1/(1-z)) is given by the convolution of the sequences {n+1} and {1/n}. This allows us to compute directly the terms of the required sequence:

    T_n = 2 Σ_{k=1}^{n} (1/k)(n+1-k) - 2n = 2(n+1) Σ_{k=1}^{n} 1/k - 4n.
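The closed form just obtained can be checked symbolically against recurrence (7.2) with exact rational arithmetic; this check is ours, not part of the text.

```python
# Verify that T_n = 2(n+1)H_n - 4n satisfies
# T_n = n - 1 + (2/n) * sum_{k<n} T_k with T_0 = 0.
from fractions import Fraction

N = 40
T = [Fraction(0)]
for n in range(1, N + 1):
    T.append(n - 1 + Fraction(2, n) * sum(T))   # recurrence (7.2)

H = Fraction(0)
for n in range(1, N + 1):
    H += Fraction(1, n)                          # H_n
    assert T[n] == 2 * (n + 1) * H - 4 * n
```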
Exercise  Applying Stirling's formula, determine the asymptotic expression of the Catalan numbers (1/(n+1)) C(2n, n) for n → +∞.

7.5 Estimating the coefficients of a generating function

Powerful techniques exist for determining an asymptotic estimate of a sequence f_n from its generating function F(z), given either in closed form or implicitly. Indeed, it is these methods that give generating functions their importance. We do not wish to enter here into the general study of this problem, which requires preliminary notions from the theory of analytic functions; we limit ourselves to evaluating the asymptotic behaviour of f_n for some particular functions F(z).

7.5.1 Rational functions

Rational functions in one variable are those that can be represented in the form P(z)/Q(z), where P(z) and Q(z) are polynomials in z. These functions are relevant in our context because one can easily prove that a sequence {f_n} is the solution of a linear homogeneous recurrence equation with constant coefficients (see Section 6.5.1) if and only if its generating function is rational. As we know, such equations appear frequently in the analysis of algorithms. Consider then the generating function F(z) of a sequence {f_n} and suppose that

    F(z) = P(z)/Q(z),

where P(z) and Q(z) are relatively prime polynomials in the variable z. Without loss of generality we may assume that the degree of P(z) is less than the degree of Q(z). Moreover, it is clear that the roots of Q(z) are different from 0, since otherwise F(z) would not be continuous and differentiable at 0 (and hence {f_n} would not be defined). For simplicity, suppose that Q(z) has m distinct roots z_1, z_2, ..., z_m, each of multiplicity 1, and that exactly one among them has minimum modulus. We can then determine the partial fraction decomposition of F(z):

    P(z)/Q(z) = Σ_{k=1}^{m} A_k / (z_k - z).
Under the hypotheses made, such a decomposition always exists and, by l'Hôpital's rule, each constant A_k, for k = 1, 2, ..., m, satisfies the relation

    A_k = lim_{z→z_k} (z_k - z) P(z)/Q(z) = - P(z_k)/Q'(z_k).

Since F(z) = Σ_{k=1}^{m} (A_k/z_k) · 1/(1 - z/z_k), and recalling that 1/(1 - z/z_k) is the generating function of {1/z_k^n}_{n>=0}, we may conclude that

    f_n = Σ_{k=1}^{m} A_k / z_k^{n+1}.

If we are interested in the asymptotic behaviour, it suffices to observe that the dominant term of the sum is the one corresponding to the root of minimum modulus. This property is general and holds also when the other roots have multiplicity greater than 1. We have thus proved the following proposition.

Proposition 7.1  Consider a rational function F(z) = P(z)/Q(z), where P(z) and Q(z) are relatively prime polynomials with Q(0) ≠ 0; suppose that Q(z) has a unique root z̄ of minimum modulus and that this root has multiplicity 1. Then, for n → +∞, the sequence {f_n} associated with the function F(z) satisfies

    f_n ~ - P(z̄) / ( Q'(z̄) z̄^{n+1} ).

With similar techniques one obtains the following result, valid for roots of arbitrary multiplicity:

Proposition 7.2  Let F(z) be a rational function F(z) = P(z)/Q(z), where P(z) and Q(z) are relatively prime polynomials with Q(0) ≠ 0; suppose moreover that Q(z) admits a unique root z̄ of minimum modulus. If z̄ has multiplicity m, then the sequence {f_n} associated with F(z) satisfies

    f_n = Θ( n^{m-1} (1/z̄)^n ).

7.5.2 Logarithmic functions

We now consider a class of non-rational functions and study the asymptotic behaviour of the associated sequences.

Proposition 7.3  For every integer α >= 1 the function

    ( 1/(1-z)^α ) log ( 1/(1-z) )

is the generating function of a sequence {f_n} such that f_n = Θ( n^{α-1} log n ).

Proof.  We argue by induction on α. In the case α = 1 the result has already been proved in the previous sections; indeed we know that

    ( 1/(1-z) ) log ( 1/(1-z) ) = Σ_{n>=1} H_n z^n
where H_n = Σ_{k=1}^{n} 1/k ~ log n. Suppose now the property true for a fixed α >= 1 and let us prove it for α + 1. Denote by f_n^{(α+1)} the sequence associated with the function (1/(1-z)^{α+1}) log(1/(1-z)). One immediately verifies that {f_n^{(α+1)}} is the convolution of the sequences {f_n^{(α)}} and {1}, since its generating function is the product of the corresponding generating functions. Consequently we may write

    f_n^{(α+1)} = Σ_{k=0}^{n} f_k^{(α)}
                = Θ( Σ_{k=1}^{n} k^{α-1} log k )    (by the induction hypothesis, applying Proposition 2.5)
                = Θ( ∫_1^n x^{α-1} log x dx )       (by Proposition 2.8)
                = Θ( n^α log n )                     (integrating by parts).

We conclude by observing that the preceding proposition can be extended to the case α ∈ R, provided α ≠ 0, -1, -2, ..., -n, ...

Exercises

1) Consider the following procedures F and G, which compute the values F(n) and G(n) on input n ∈ N:

    Procedure G(n)                    Procedure F(n)
    begin                             if n = 0 then return 1
      S := 0                          else return F(n-1) + G(n-1)
      for i = 0, 1, 2, ..., n do
        S := S + F(i)
      return S
    end

a) Prove that, on input n, the two procedures require Θ(a^n) arithmetic operations for some a > 1.
b) Compute the generating functions of the sequences {F(n)} and {G(n)} and determine their asymptotic expression for n → +∞.
c) Define an algorithm computing F(n) and G(n) that performs a number of arithmetic operations polynomial in n.

2) Consider the following procedures F and G, which compute the values F(n) and G(n) on input n ∈ N:

    Procedure G(n)                    Procedure F(n)
    begin                             if n = 0 then return 0
      S := 0                          else return n + 2F(n-1)
      for i = 0, 1, 2, ..., n do
        S := S + F(n-i)
      return S
    end

a) Determine the asymptotic expression of the sequence {G(n)}_n for n → +∞.
b) Assuming the uniform cost criterion, determine the order of magnitude of the running time and of the memory space required by the execution of procedure G on input n.
c) Carry out the previous computation assuming the logarithmic cost criterion.
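To close the chapter, here is a numeric illustration (ours, not part of the text) of Proposition 7.1 applied to the Fibonacci generating function F(z) = z/(1 - z - z^2): here P(z) = z, Q(z) = 1 - z - z^2, Q'(z) = -1 - 2z, and the minimum-modulus root of Q is z̄ = (√5 - 1)/2, so the proposition predicts f_n ~ -P(z̄)/(Q'(z̄) z̄^{n+1}), which simplifies to φ^n/√5.

```python
# Compare Fibonacci numbers with the dominant-root estimate of Proposition 7.1.
from math import sqrt

zbar = (sqrt(5) - 1) / 2
def predict(n):
    return -zbar / ((-1 - 2 * zbar) * zbar ** (n + 1))

fib = [0, 1]
while len(fib) < 40:
    fib.append(fib[-1] + fib[-2])

# The ratio f_n / estimate tends to 1 very quickly.
assert abs(fib[30] / predict(30) - 1) < 1e-6
```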
Chapter 8

Sorting algorithms

The efficiency of systems that manipulate sets of data kept in memory depends to a large extent on the criterion used to store the keys of the records. One of the simplest and most widely used methods is to keep the keys ordered with respect to a fixed order relation. Sorting a sequence of values is therefore an operation that occurs frequently in systems management and in applications of various kinds; sorting procedures are thus often used in the solution of more general problems, and their efficiency can in fact condition the effectiveness of the methods adopted.

8.1 General characteristics

To define the problem formally, recall first of all that a (partial) order relation R on a set U is a binary relation that is reflexive, transitive and antisymmetric, i.e.:

- for every a ∈ U, aRa;
- for every a, b, c ∈ U, if aRb and bRc then aRc;
- for every a, b ∈ U, if aRb and bRa then a = b.

Classical examples are the less-than-or-equal relation on the real numbers and inclusion among the subsets of a given set. We say that an order relation R on U is total if for every a, b ∈ U either aRb or bRa holds. In this case we also say that R defines a linear order on U. The sorting problem for a set U, equipped with a total order relation ≤, is defined as follows:

    Instance: a vector A = (A[1], A[2], ..., A[n]) such that n > 1 and A[i] ∈ U for every i ∈ {1, 2, ..., n}.
    Solution: a vector B = (B[1], B[2], ..., B[n]), obtained by permuting the elements of A, such that B[i] ≤ B[i+1] for every i = 1, 2, ..., n-1.

The methods adopted to solve the problem fall into two main groups, called internal and external sorting respectively. Internal sorting algorithms assume that the input vector is entirely contained in the RAM memory of the machine.
In this case access to the value of any of its components takes the same time for all of them. Here we shall be mainly concerned with algorithms of this type. On the contrary, algorithms operating on data held mostly in mass storage (disks or tapes) are called external sorting algorithms. In this case access times are no longer uniform but depend on the kind of memory in which the data reside, and one of the goals of the procedures used is precisely to reduce the number of accesses to mass storage. Sorting algorithms are also classified according to the generality of the set U over which the input is defined. A first group consists of the procedures based on comparisons between the elements of the input vector. In this case one assumes that the comparison of two elements of the set U with respect to the fixed order relation can always be carried out in a constant number of steps. The algorithm can thus be applied to any totally ordered set, since it exploits no specific feature of its elements. As we shall see, under these hypotheses Ω(n log n) comparisons are necessary to sort a vector of n elements, and several optimal procedures can be described, i.e. procedures able to perform the computation in time O(n log n). A second class of algorithms consists instead of procedures specifically designed to sort a sequence of strings defined over a finite alphabet, for example a binary one. In this case one can design algorithms that inspect the individual bits of the strings, directly exploiting the binary representation of the integers. Classical examples of algorithms of this kind are those that sort a sequence of words over a given alphabet according to the lexicographic order.
As we shall see, under suitable hypotheses algorithms of this kind with linear time complexity can be defined. In the analysis of the sorting algorithms presented in the following we assume as computation model a Random Access Machine (RAM) with the uniform cost criterion. The cost of each arithmetic operation, in terms of running time and memory space, is therefore constant and does not depend on the size of the operands. We further assume that our RAM can hold an element of the input vector in each memory cell and can compare any two such elements in constant time.

8.2 Minimum number of comparisons

In this section we consider comparison-based sorting algorithms and present a general result on the minimum number of steps that procedures of this kind must execute to complete the computation. To this end we use a property of binary trees that will prove useful on other occasions as well.

Lemma 8.1  Every binary tree with k leaves has height greater than or equal to ⌈log_2 k⌉.

Proof.  We proceed by induction on the number k of leaves of the tree under consideration. If k = 1 the property is trivially verified. Suppose the property true for every j such that 1 <= j < k and consider a binary tree T with k leaves and minimum height. Let T_1 and T_2 be the two subtrees rooted at the children of the root of T. Observe that each of them has fewer than k leaves and one of the two has at least ⌈k/2⌉ of them. Then, by the induction hypothesis, the height of the latter is at least ⌈log_2 (k/2)⌉; hence the height of T is certainly at least 1 + ⌈log_2 (k/2)⌉ = ⌈log_2 k⌉.

Proposition 8.2  Every comparison-based sorting algorithm requires, in the worst case, at least n log_2 n - (log_2 e) n + (1/2) log_2 n + O(1) comparisons to sort a sequence of n elements.

Proof.  Consider any comparison-based algorithm operating on sequences of distinct objects drawn from a totally ordered set U. Its overall behaviour on inputs formed by n elements can be represented by a decision tree, i.e. a binary tree in which every internal node is labeled by a comparison of the form a_i ≤ a_j, with i, j ∈ {1, 2, ..., n}. The computation executed by the algorithm on a specific input of length n, A = (A[1], A[2], ..., A[n]), identifies a
Consideriamo un qualsiasi algoritmo basato sui confronti che opera su sequenze di oggetti distinti estratti da un insieme U totalmente ordinato. Il funzionamento generale, su input formati da n elementi, puo essere rappresentato da un albero di decisione, cioe un albero binario, nel quale ogni nodo interno e etichettato mediante un confronto del tipo ai aj , dove i, j {1, 2, . . . , n}. Il calcolo eseguito dallalgoritmo su uno specifico input di lunghezza n, A = (A[1], A[2], . . . , A[n]), identifica un 102 CAPITOLO 8. ALGORITMI DI ORDINAMENTO cammino dalla radice a una foglia dellalbero: attraversando un nodo interno etichettato da un confronto ai aj , il cammino prosegue lungo il lato di sinistra o di destra a seconda se A[i] A[j] oppure A[i] > A[j]. Laltezza dellalbero rappresenta quindi il massimo numero di confronti eseguiti dallalgoritmo su un input di lunghezza n. Osserviamo che il risultato di un procedimento di ordinamento di n elementi e dato da una delle n! permutazioni della sequenza di input. Ne segue che lalbero di decisione deve contenere almeno n! foglie perche ciascuna di queste identifica un possibile output distinto. Per il lemma precedente, possiamo allora affermare che il numero di confronti richiesti nel caso peggiore e almeno dlog2 n!e. Applicando ora la formula di Stirling sappiamo che log2 n! = n log2 n (log2 e)n + log2 (n) + 1 + o(1) 2 e quindi la proposizione e dimostrata. 8.3 Ordinamento per inserimento Il primo algoritmo che consideriamo e basato sul metodo solitamente usato nel gioco delle carte per ordinare una sequenza di elementi. Si tratta di inserire uno dopo laltro ciascun oggetto nella sequenza ordinata degli elementi che lo precedono. In altre parole, supponendo di aver ordinato le prime i 1 componenti del vettore, inseriamo lelemento i-esimo nella posizione corretta rispetto ai precedenti. Lalgoritmo e descritto in dettaglio dal seguente programma: Procedura Inserimento Input: un vettore A = (A[1], A[2], . . . 
..., A[n]) such that n > 1 and A[i] ∈ U for every i ∈ {1, 2, ..., n};
    begin
      for i = 2, ..., n do
        begin
          a := A[i]
          j := i - 1
          while j >= 1 ∧ a < A[j] do
            begin
              A[j+1] := A[j]
              j := j - 1
            end
          A[j+1] := a
        end
    end

Observe that the implementation of the algorithm on a RAM executes at most a constant number of steps for each comparison between elements of the input vector. We may therefore state that the running time has the same order of magnitude as the number of comparisons performed. The worst case, the one with the maximum number of comparisons, occurs when A[1] > A[2] > ··· > A[n]: in this case the procedure executes Σ_{i=1}^{n-1} i = n(n-1)/2 comparisons. Consequently the running time required by the algorithm on an input of length n is Θ(n^2) in the worst case. In the best case, on the other hand, when the vector A is already sorted, the procedure performs only n-1 comparisons, and the running time is therefore linear. Note however that the best case is not representative. Indeed, assuming the input is a uniformly distributed random permutation of distinct elements, it has been proved that the average number of comparisons is n(n-1)/4. Hence the running time remains quadratic in the average case as well.

8.4 Heapsort

The sorting algorithm presented in this section requires O(n log n) comparisons to sort a sequence of n elements and is therefore optimal up to a constant factor. The procedure is based on an important data structure, the heap, which is often used to maintain a set of keys from which the maximum element can easily be extracted.

Definition 8.1  Given a totally ordered set U, a heap is a vector A = (A[1], A[2], ..., A[n]), where A[i] ∈ U for every i, satisfying the following property: A[i] >= A[2i] and A[i] >= A[2i+1] for every integer i such that 1 <= i < n/2, and moreover, if n is even, A[n/2] >= A[n]. This means that A[1] >= A[2], A[1] >= A[3], A[2] >= A[4], A[2] >= A[5], and so on.
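The condition of Definition 8.1 can be checked mechanically; the following helper is ours (illustrative, not from the text), keeping the 1-based indexing of the definition.

```python
# Check the heap property A[i] >= A[2i], A[i] >= A[2i+1] on a vector.

def is_heap(a):
    """a[0] plays the role of A[1]; every parent must dominate its children."""
    n = len(a)
    for i in range(1, n + 1):
        for child in (2 * i, 2 * i + 1):
            if child <= n and a[i - 1] < a[child - 1]:
                return False
    return True

assert is_heap([7, 4, 6, 3, 4, 5, 1, 2])   # the example heap used below
assert not is_heap([1, 2, 3])
```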
A heap A = (A[1], A[2], ..., A[n]) can be represented by a binary tree T with n nodes, denoted by the integers 1, 2, ..., n, in which node 1 is the root, every node i is labeled by the value A[i], and the following properties hold:

- if 1 <= i <= n/2, then 2i is the left child of i;
- if 1 <= i < n/2, then 2i+1 is the right child of i.

In this way, for every child j of i, A[i] >= A[j], and hence the label of each node is greater than or equal to those of its descendants. In particular the root contains the maximum value of the sequence. For example, the heap defined by the sequence (7, 4, 6, 3, 4, 5, 1, 2) is represented by the binary tree with root 7, whose children are labeled 4 and 6; the node labeled 4 has children 3 and 4, the node labeled 6 has children 5 and 1, and the node labeled 3 has the single child 2. [Figure: tree representation of the heap (7, 4, 6, 3, 4, 5, 1, 2), with the labels shown inside the nodes.]

8.4.1 Building a heap

We now want to define a procedure able to transform a vector into a heap: given as input a vector A = (A[1], A[2], ..., A[n]), the algorithm must rearrange the elements of A so as to obtain a heap. The procedure is based on the following definition. Given a pair of integers i, j with 1 <= i <= j <= n, we say that the vector (A[i], ..., A[j]) satisfies the heap property if, for every ℓ ∈ {i, ..., j} and every ℓ' ∈ {2ℓ, 2ℓ+1} such that ℓ' <= j, the relation A[ℓ] >= A[ℓ'] holds. Suppose now that, for a given pair 1 <= i < j <= n, the vector (A[i+1], ..., A[j]) satisfies the heap property; we can then define a procedure that compares the element A[i] with its descendants in the tree and performs the appropriate exchanges so that the vector (A[i], A[i+1], ..., A[j]) also satisfies the heap property. The computation is carried out by the following procedure, applicable to every pair of indices i, j with 1 <= i < j <= n, in which the vector A is a global variable.

    Procedure Costruisciheap(i, j)
    begin
      u := i
      T := A[u]
      esci := 0
      while 2u <= j ∧ esci = 0 do
        begin
          k := 2u
          if k < j ∧ A[k] < A[k+1] then k := k + 1
          if T < A[k] then
            begin
              A[u] := A[k]
              u := k
            end
          else esci := 1
        end
      A[u] := T
    end

The heap construction algorithm is then defined by the following procedure, whose input is a vector A = (A[1], A[2], ..., A[n]) such that n > 1 and A[i] ∈ U for every i ∈ {1, 2, ..., n}.

    Procedure Creaheap(A)
      for i = ⌊n/2⌋, ⌊n/2⌋ - 1, ..., 1 do Costruisciheap(i, n)

The correctness of the algorithm is a consequence of the following observation: if i+1, ..., n are roots of heaps then, after the execution of Costruisciheap(i, n), the nodes i, i+1, ..., n are still roots of heaps. As for the running time of the algorithm, we can state the following proposition.

Theorem 8.3  On an input of size n, the procedure Creaheap executes at most 4n + O(log n) comparisons between elements of the input vector, and hence requires time Θ(n) in the worst case.

Proof.  Since we assume the uniform cost criterion, it is clear that the running time of the algorithm on a given input is proportional to the number of comparisons performed; we therefore estimate this quantity in the worst case. It is not hard to verify that the execution of Costruisciheap(i, n) requires a number of operations at most proportional to the height of node i in the heap, i.e. the maximum distance of i from a leaf. In particular, the number of comparisons executed is at most twice the height of node i. Hence, in the worst case, the total number of comparisons is twice the sum of the heights of the nodes i ∈ {1, 2, ..., n-1}.
Procedura Costruisciheap(i, j) begin u := i T := A[u] esci := 0 while 2u j esci = 0 do begin k := 2u if k < j A[k] < A[k ( + 1] then k := k + 1 A[u] := A[k] if T < A[k] then u := k else esci := 1 end A[u] := T end Lalgoritmo di costruzione dello heap e allora definito dalla seguente procedura che ha come input un vettore A = (A[1], A[2], . . . , A[n]) tale che n > 1 e A[i] U per ogni i {1, 2, . . . , n}. Procedura Creaheap(A) for i = b n2 c, b n2 c 1, . . . , 1 do Costruisciheap(i, n) La correttezza dellalgoritmo e una conseguenza della seguente osservazione: se i + 1, . . . , n sono radici di heap allora, dopo lesecuzione di Costruisciheap(i, n), i nodi i, i+1, . . . , n sono ancora radici di heap. Per quanto riguarda invece il tempo di calcolo dellalgoritmo possiamo enunciare la seguente proposizione. Teorema 8.3 La procedura Creaheap, su un input di dimensione n, esegue al piu 4n+O(log n) confronti tra elementi del vettore di ingresso, richiedendo quindi tempo (n) nel caso peggiore. Dimostrazione. Poiche assumiamo il criterio uniforme, e chiaro che il tempo di calcolo richiesto dallalgoritmo su un dato input e proporzionale al numero di confronti eseguiti. Valutiamo quindi tale quantita nel caso peggiore. Non e difficile verificare che lesecuzione della procedura Costruisciheap(i, n) richiede un numero di operazioni al piu proporzionale allaltezza del nodo i nello heap, ovvero la massima distanza di i da una foglia. In particolare il numero di confronti eseguiti e al piu 2 volte laltezza del nodo i. Quindi, nel caso peggiore, il numero totale di confronti e due volte la somma delle altezze dei nodi i {1, 2, . . . , n 1}. CAPITOLO 8. ALGORITMI DI ORDINAMENTO 105 Ora, posto k = blog2 nc, abbiamo 2k n < 2k+1 . Si puo quindi verificare che nello heap ci sono al piu un nodo di altezza k, due nodi di altezza k1, . . ., 2j nodi di altezza kj per ogni j = 0, 1, . . . , k1. 
Quindi la somma delle altezze dei nodi dello heap e minore o uguale a k1 X (k j)2j j=0 Questa somma puo essere calcolata come nellesempio 2.3, ottenendo k k1 X j=0 2j k1 X j2j = k(2k 1) (k 2)2k 2 = 2k+1 k 2. j=0 Di conseguenza, il numero totale di confronti richiesti dallalgoritmo su un input di dimensione n e minore o uguale a 4n + O(logn) e il tempo di calcolo complessivo risulta quindi (n). 8.4.2 Descrizione dellalgoritmo Siamo ora in grado di definire lalgoritmo di ordinamento mediante heap. Procedura Heapsort(A) Input: un vettore A = (A[1], A[2], . . . , A[n]) tale che n > 1 e A[i] U per ogni i {1, 2, . . . , n}; begin Creaheap(A) for j = n, n 1, . . . , 2 do begin Scambia(A[1], A[j]) Costruisciheap(1, j 1) end end La correttezza si ottiene provando per induzione che, dopo k iterazioni del ciclo for, il vettore (A[n k + 1], . . . , A[n]) contiene, ordinati, i k elementi maggiori del vettore di ingresso, mentre il vettore (A[1], . . . , A[n k]) forma uno heap. Per quanto riguarda la valutazione di complessita osserviamo che ogni chiamata Costruisciheap(1, j) richiede al piu O(log n) passi e che Heapsort richiama tale procedura n 1 volte. Poiche il tempo richiesto da Creaheap e lineare il numero totale di passi risulta O(n log n). Concludiamo osservando che Costruisciheap(1, j) esegue al piu 2blog2 jc confronti fra elementi del vettore A. Di conseguenza, nel caso peggiore, il numero totale di confronti eseguiti da Heapsort e minore o uguale a 4n + O(log n) + 2 n1 X blog2 jc = 2n log2 n + O(n). j=1 Quindi anche il suo tempo di calcolo nel caso peggiore risulta (n log n). Esercizio Esegui Heapsort sulla sequenza (1, 3, 2, 5, 6, 7, 9, 4, 2). CAPITOLO 8. ALGORITMI DI ORDINAMENTO 8.5 106 Quicksort Lalgoritmo Quicksort e una classica procedura di ordinamento, basata su un metodo di partizione della sequenza di input, che puo essere facilmente implementata e offre buone prestazioni nel caso medio. 
Per questo motivo e utilizzata come routine di ordinamento in molte applicazioni e viene spesso preferita ad altre procedure che pur avendo complessita ottimale nel caso peggiore non offrono gli stessi vantaggi in media. La versione che presentiamo in questa sezione e randomizzata, ovvero prevede lesecuzione di alcuni passi casuali. Lidea dellalgoritmo e semplice. La procedura sceglie casualmente un elemento nella sequenza di input e quindi suddivide questultima in due parti, creando il vettore degli elementi minori o uguali ad e quello degli elementi maggiori di . In seguito lalgoritmo richiama ricorsivamente se stesso sui due vettori ottenuti concatenando poi le sequenze ordinate. Ad alto livello lalgoritmo puo essere descritto dalla seguente procedura nella quale gli elementi del vettore di ingresso sono estratti da un insieme U totalmente ordinato e assumono valori non necessariamente distinti. Denotiamo qui con il simbolo loperazione di concatenazione tra vettori. Procedure Quicksort(A) Input: A = (a1 , a2 , . . . , an ) tale che ai U per ogni i {1, 2, . . . , n}. begin if n 1 then return A else begin scegli a caso un intero k in {1, 2, . . . , n} calcola il vettore A1 degli elementi ai di A tali che i 6= k e ai ak calcola il vettore A2 degli elementi aj di A tali che aj > ak A1 := Quicksort(A1 ) A2 := Quicksort(A2 ) return A1 (ak ) A2 end end La correttezza della procedura puo essere dimostrata facilmente per induzione sulla lunghezza del vettore di input. Per valutare il tempo di calcolo richiesto supponiamo di poter generare in tempo O(n) un numero intero casuale uniformemente distribuito nellinsieme {1, 2, . . . , n}. In queste ipotesi lordine di grandezza del tempo di calcolo e determinato dal numero di confronti eseguiti fra elementi del vettore di ingresso. 8.5.1 Analisi dellalgoritmo Denotiamo con Tw (n) il massimo numero di confronti tra elementi del vettore di ingresso eseguiti dallalgoritmo su un input A di lunghezza n. 
Clearly the vectors A_1 and A_2 of the partition can be computed with n - 1 comparisons. Moreover, the sizes of A_1 and A_2 are respectively k and n - k - 1, for some k \in {0, 1, ..., n-1}. This implies that, for every n >= 1,

    T_w(n) = n - 1 + \max_{0 <= k <= n-1} \{ T_w(k) + T_w(n-k-1) \},

while T_w(0) = 0.

We now want to determine the exact value of T_w(n). As we shall see, this value occurs when every random extraction picks the maximum or the minimum element of the sample. Indeed, since \max_{0 <= k <= n-1} \{ T_w(k) + T_w(n-k-1) \} >= T_w(n-1), we have T_w(n) >= n - 1 + T_w(n-1) and hence, for every n \in N,

    T_w(n) >= \sum_{k=0}^{n-1} k = n(n-1)/2.        (8.1)

We now prove by induction that T_w(n) <= n(n-1)/2. The inequality is trivially true for n = 0. Suppose that T_w(k) <= k(k-1)/2 for every integer k with 0 <= k < n; substituting these values into the recurrence and performing simple computations, we obtain

    T_w(n) <= n - 1 + (1/2) \max_{0 <= k <= n-1} \{ 2k(k - n + 1) + n^2 - 3n + 2 \}.

The study of the function f(x) = 2x(x - n + 1) shows that the maximum value taken by f on the interval [0, n-1] is 0. Consequently

    T_w(n) <= n - 1 + (n^2 - 3n + 2)/2 = n(n-1)/2.

Together with relation (8.1), this last inequality implies

    T_w(n) = n(n-1)/2.

Hence, in the worst case, the Quicksort algorithm requires time \Theta(n^2).

We now evaluate the average number of comparisons between elements of the input vector performed by the algorithm. Clearly this value also determines the order of magnitude of the average time needed to execute the procedure on a RAM machine. For simplicity, we assume that all elements of the input vector A are distinct and that, for every n, the random choice of the integer k in the set {1, 2, ..., n} is uniform. In other words, every element of the sample has probability 1/n of being chosen. We further assume that the outcome of each random choice is independent of the others.
Let E(n) denote the average number of comparisons performed by the algorithm on an input of length n and, for every k \in {0, 1, ..., n-1}, let E(n | |A_1| = k) denote the average number of comparisons performed given that the vector A_1 produced by the partition process has k components. We can then write

    E(n) = \sum_{k=0}^{n-1} Pr\{|A_1| = k\} E(n | |A_1| = k) = \sum_{k=1}^{n} (1/n) \{ n - 1 + E(k-1) + E(n-k) \}.

Through simple manipulations we obtain the following recurrence:

    E(n) = 0                                         if n = 0, 1,
    E(n) = n - 1 + (2/n) \sum_{k=0}^{n-1} E(k)       if n >= 2.        (8.2)

This recurrence can be solved by applying the substitution method discussed in Section 6.6.1, or by passing to generating functions (see Section 7.4.2). We can thus obtain the exact value of E(n) for every n >= 1:

    E(n) = 2(n+1) \sum_{k=1}^{n} 1/k - 4n = 2n log n + O(n).

Consequently, the Quicksort algorithm runs in time \Theta(n log n) in the average case.

Exercise: Compare the bound on the average number of comparisons just obtained with the worst-case bound for Heapsort.

8.5.2 Specification of the algorithm

We now want to give a more detailed version of the algorithm, specifying the data structure used and the partition process. Our goal is to implement the procedure in place, by a method that computes the sorted sequence through direct exchanges between the values of its components, without using auxiliary vectors to hold partial results of the computation. In this way, the memory space used is essentially reduced to the cells needed to hold the input vector and to implement the recursion.

We represent the input sequence by the vector A = (A[1], A[2], ..., A[n]) of n > 1 components. For every pair of integers p, q such that 1 <= p < q <= n, we denote by A_{p,q} the subvector between the components of index p and q, that is, A_{p,q} = (A[p], A[p+1], ..., A[q]).
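Before detailing the partition process, it is instructive to check the average-case analysis of Section 8.5.1 numerically: recurrence (8.2) can be evaluated exactly with rational arithmetic and compared against the closed form E(n) = 2(n+1) H_n - 4n, where H_n is the n-th harmonic number. This is a verification sketch, not part of the algorithm itself.

```python
from fractions import Fraction

def avg_comparisons(n_max):
    # Evaluate recurrence (8.2) exactly:
    #   E(0) = E(1) = 0,
    #   E(n) = n - 1 + (2/n) * sum_{k=0}^{n-1} E(k)   for n >= 2.
    E = [Fraction(0)] * (n_max + 1)
    running = E[0] + E[1]            # running sum of E(0), ..., E(n-1)
    for n in range(2, n_max + 1):
        E[n] = n - 1 + Fraction(2, n) * running
        running += E[n]
    return E
```

For example, E(2) = 1 and E(3) = 8/3, and both agree with 2(n+1) H_n - 4n.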
The core of the algorithm is the function Partition(p, q), which partitions the elements of the vector A_{p,q} with respect to the value a of its first component A[p]; this function therefore modifies the values of the components of A_{p,q} and returns an index \ell \in {p, p+1, ..., q} with the following properties:

- A[\ell] takes the value a;
- A_{p,\ell-1} contains the values less than or equal to a originally contained in A_{p+1,q};
- A_{\ell+1,q} contains the values greater than a originally contained in A_{p+1,q}.

The function Partition can be computed by the following procedure, in which two pointers scan the vector from right to left and vice versa, comparing the components with the randomly chosen element. To prevent one of the two pointers from leaving the set of admissible values, we add a sentinel to the vector A, that is, a component A[n+1] holding a conventional value greater than that of all elements of A (for example +\infty). We also assume that A is a global variable; for this reason, the only formal parameters of the procedure are p and q, the indices of the subvector on which the partition operates (we always assume 1 <= p < q <= n). The other variables appearing in the procedure are local.

    Function Partition(p, q)
    begin
      i := p + 1
      j := q
      while i <= j do
        begin
          while A[j] > A[p] do j := j - 1
          while A[i] <= A[p] and i <= j do i := i + 1
          if i < j then
            begin
              Scambia(A[i], A[j])
              i := i + 1
              j := j - 1
            end
        end
      Scambia(A[p], A[j])
      return j
    end

We now define the procedure Quicksort(p, q), which sorts the vector A_{p,q} using the function Partition defined above; here too A is a global variable and the only formal parameters are the indices p and q defining the subvector to be sorted.

    Procedure Quicksort(p, q)
    begin
      (1) choose at random an integer k in {p, p+1, ..., q}
      (2) Scambia(A[p], A[k])
      (3) \ell := Partition(p, q)
      (4) if p < \ell - 1 then Quicksort(p, \ell - 1)
      (5) if \ell + 1 < q then Quicksort(\ell + 1, q)
    end

The overall algorithm is then given by the simple call Quicksort(1, n), together with the declaration of A as a global variable.

Exercises
1) Depending on the values of the input vector A, the procedure call Partition(p, q) may perform n - 1, n or n + 1 comparisons between elements of A. Give an example for each of these cases.
2) Prove that, in the worst case, the procedure Quicksort(1, n) performs (n+1)(n+2)/2 - 3 comparisons between elements of the input vector.
3) Estimate the average number of comparisons performed by the procedure Quicksort(1, n), under the hypotheses presented in Section 8.5.1.

8.5.3 Optimizing memory

We now want to evaluate the memory space needed to implement the procedure defined in Section 8.5.2 on a RAM machine. Besides the n cells needed to hold the input vector, a certain amount of space must be used to maintain the stack implementing the recursion. If we apply the scheme presented in Section 5.1, the iterative translation on a RAM of the procedure Quicksort(1, n) uses, in the worst case, O(n) memory space for the stack. Indeed, if the largest element of the sample is always extracted, the stack must store the parameters of up to n - 1 recursive calls. In this section we introduce some further modifications to the algorithm and describe a different management of the stack, partly based on the tail recursion described in Section 5.2, which reduces the space required by the stack to O(log n). First observe that the procedure Quicksort described in Section 8.5.2 can be improved by modifying the order of the recursive calls.
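Before that refinement, the in-place version just specified (Partition plus steps (1)-(5)) can be sketched as follows. This is a 0-based transcription with names of my own; the explicit i <= j guard in the inner loop plays the role of the sentinel A[n+1].

```python
import random

def partition(A, p, q):
    # Mirrors Function Partition(p, q): the pivot is A[p]; on return the
    # pivot sits at index l, with A[p:l] <= pivot and A[l+1:q+1] > pivot.
    pivot, i, j = A[p], p + 1, q
    while i <= j:
        while A[j] > pivot:              # stops at worst on A[p] == pivot
            j -= 1
        while i <= j and A[i] <= pivot:  # bound check replaces the sentinel
            i += 1
        if i < j:
            A[i], A[j] = A[j], A[i]
            i, j = i + 1, j - 1
    A[p], A[j] = A[j], A[p]
    return j

def quicksort_in_place(A, p=0, q=None):
    if q is None:
        q = len(A) - 1
    if p < q:
        k = random.randint(p, q)          # step (1): random pivot choice
        A[p], A[k] = A[k], A[p]           # step (2)
        l = partition(A, p, q)            # step (3)
        quicksort_in_place(A, p, l - 1)   # step (4)
        quicksort_in_place(A, l + 1, q)   # step (5)
    return A
```

The partial results are kept entirely inside A, as required: only the recursion stack uses extra space.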
More precisely, we can force the procedure to always execute first the call relative to the shorter subvector. The new algorithm is obtained simply by replacing commands (4) and (5) of the procedure Quicksort(p, q) with the following instructions:

    if \ell - p <= q - \ell then
      begin
        if p < \ell - 1 then Quicksort(p, \ell - 1)
        if \ell + 1 < q then Quicksort(\ell + 1, q)
      end
    else
      begin
        if \ell + 1 < q then Quicksort(\ell + 1, q)
        if p < \ell - 1 then Quicksort(p, \ell - 1)
      end

We now describe an iterative version of the new algorithm. First observe that, in our case, the stack management policy can be simplified by exploiting the fact that the two recursive calls are the last instructions of the procedure. That is, we can define an iterative version in which the stack is used to keep the list of the calls that still have to be executed and have not even been started. In other words, during the execution of the procedure, the first recursive call is activated after setting aside on top of the stack the parameters needed to execute the second one. The latter will be activated once the former has been completed, when its parameters are again on top of the stack. In particular, we do not need to keep the activation record of the calling procedure on the stack.

The algorithm thus obtained is described by the following procedure. As in the previous section, A denotes the input vector of length n > 1, assumed to be already in memory. The current vector is represented by the indices p and q, 1 <= p, q <= n, which initially coincide with 1 and n respectively. If p <= q, the current vector is (A[p], A[p+1], ..., A[q]) and consists of q - p + 1 elements; if instead q < p, the current vector is empty. Using the function Partition, the procedure splits the current vector into two parts, represented by the index pairs p, \ell - 1 and \ell + 1, q, where \ell is the position of the pivot returned by Partition(p, q).
The larger of these two parts is stored on the stack, while the smaller becomes the new current vector. In the program we use the variable S for the stack and the value \Lambda for the empty stack. The elements of S are pairs of indices (i, j) representing the subvectors set aside on the stack. We denote by Pop, Push and Top the traditional stack operations.

    Procedure Quicksort Iterativo
    Input: a vector A = (A[1], A[2], ..., A[n]) such that n > 1 and A[i] \in U for every i \in {1, 2, ..., n};
    begin
      p := 1
      q := n
      S := \Lambda
      stop := 0
      repeat
        while q - p >= 1 do
          begin
            choose at random an integer k in {p, p+1, ..., q}
            Scambia(A[p], A[k])
            \ell := Partition(p, q)
            i := \ell + 1
            j := q
            if \ell - p < q - \ell
              then q := \ell - 1
              else begin
                i := p
                j := \ell - 1
                p := \ell + 1
              end
            S := Push(S, (i, j))
          end
        if S != \Lambda
          then begin (p, q) := Top(S); S := Pop(S) end
          else stop := 1
      until stop = 1
      return A
    end

One can prove that the procedure is correct. Indeed, at the end of each execution of the repeat-until loop, the elements of the input vector that are not yet sorted are contained either in the stack S or in the current vector. Verifying this property is easy. Consequently, when the loop is left, we have S = \Lambda and q - p < 1, which guarantees that the input vector is sorted.

We now evaluate the maximum height reached by the stack during the execution of the algorithm. First observe that the current vector on which the procedure is working is never larger than the vector at the top of the stack S. Moreover, at every increment of S the size of the current vector is at least halved. When a pair (p, q) is removed from the stack, it becomes the new current vector, and its size is not larger than that of the current vector just before the insertion of (p, q) into S. Hence, during the computation, the stack can contain at most \lfloor \log_2 n \rfloor elements, where n is the size of the input.
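The iterative procedure, together with the stack-height bound just proved, can be sketched as follows. This is a 0-based transcription with names of my own; it also returns the maximum height reached by the stack, which is useful for the first exercise below.

```python
import random

def partition(A, p, q):
    # pivot = A[p]; returns its final index l, with
    # A[p:l] <= pivot and A[l+1:q+1] > pivot.
    pivot, i, j = A[p], p + 1, q
    while i <= j:
        while A[j] > pivot:
            j -= 1
        while i <= j and A[i] <= pivot:
            i += 1
        if i < j:
            A[i], A[j] = A[j], A[i]
            i, j = i + 1, j - 1
    A[p], A[j] = A[j], A[p]
    return j

def quicksort_iterative(A):
    # The larger of the two parts produced by Partition is pushed on the
    # stack S; the smaller becomes the current vector (p, q).  We record
    # the maximum height reached by S (at most floor(log2 n)).
    p, q, S, max_height = 0, len(A) - 1, [], 0
    while True:
        while q - p >= 1:
            k = random.randint(p, q)
            A[p], A[k] = A[k], A[p]
            l = partition(A, p, q)
            if l - p < q - l:
                i, j, q = l + 1, q, l - 1   # push right part, keep left
            else:
                i, j, p = p, l - 1, l + 1   # push left part, keep right
            S.append((i, j))
            max_height = max(max_height, len(S))
        if S:
            p, q = S.pop()
        else:
            return A, max_height
```

Note that a pushed pair may represent an empty subvector (i > j); when popped, it simply fails the inner while test, exactly as an empty current vector does in the pseudocode.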
Exercises
1) Implement the iterative algorithm just described on a vector of distinct elements and determine the maximum height reached by the stack. Repeat the execution of the program several times and report the average height reached.
2) Assuming the logarithmic cost criterion, determine the order of magnitude of the memory space needed to maintain the stack.

8.6 Order statistics

In this section we describe an algorithm for a problem naturally related to sorting a vector: determining the k-th smallest element of a given sequence of objects. Having fixed a totally ordered set U, the problem is formally defined as follows:

Instance: a vector A = (A[1], A[2], ..., A[n]) and an integer k such that 1 <= k <= n and A[i] \in U for every i \in {1, 2, ..., n}.
Question: compute the k-th smallest element of A.

A particularly interesting special case of this problem is the computation of the median, corresponding to the value k = \lceil n/2 \rceil.

Now consider any algorithm that correctly solves the problem. Clearly, the element given as output must be compared with all the remaining ones; otherwise, by modifying a component that was never compared, we would obtain an incorrect answer. We can thus state the following result.

Theorem 8.4 Every algorithm for computing the k-th smallest element of a vector requires at least n - 1 comparisons on an input of length n.

One solution to the problem is to sort the vector and then pick the component of position k in the sorted sequence. This method requires \Theta(n log n) comparisons and might therefore be improvable. We shall in fact exhibit an algorithm that solves the problem in time O(n), which is thus optimal up to a constant factor.

Having fixed an odd integer 2t+1, the algorithm we want to describe splits the input vector A = (A[1], A[2], ..., A[n]) into \lceil n/(2t+1) \rceil subvectors of 2t+1 elements each (except possibly the last one). For each of these the median is computed, and the procedure is then called recursively to determine the median M of the medians obtained. The vectors A_1, A_2 and A_3 are then computed, formed respectively by the elements of A smaller than, equal to, and greater than M. The following cases are distinguished:

- if k <= |A_1|, the problem is solved recursively on the instance (A_1, k);
- if |A_1| < k <= |A_1| + |A_2|, the answer is M;
- if |A_1| + |A_2| < k, the problem is solved recursively on the instance (A_3, k - |A_1| - |A_2|).

Let us briefly analyze the running time of the algorithm. First observe that there are at least \lfloor n/(2(2t+1)) \rfloor medians less than or equal to M; each of these has, within its own group, another t elements less than or equal to it. Hence the set A_1 \cup A_2 contains at least (t+1) \lfloor n/(2(2t+1)) \rfloor elements, and consequently A_3 has at most n - (t+1) \lfloor n/(2(2t+1)) \rfloor of them. The same argument applies to A_1. It is not hard to prove that, for n greater than or equal to a suitable value H,

    n - (t+1) \lfloor n/(2(2t+1)) \rfloor <= ((6t+3)/(4(2t+1))) n.

We then modify the algorithm by requiring that, on an input of length less than H, the k-th element of the sequence be computed directly, while on an input of length n >= H the procedure described above is executed. Denoting by T(n) the worst-case running time on an input of length n, we obtain

    T(n) = c                                                if n < H,
    T(n) = T(n/(2t+1)) + T(((6t+3)/(4(2t+1))) n) + O(n)     if n >= H,

where c is a suitable constant. Indeed, T(n/(2t+1)) is the time needed to determine the median of the medians; T(((6t+3)/(4(2t+1))) n) is the time needed for the possible recursive call; finally, O(n) steps suffice to compute the n/(2t+1) medians and to determine the sets A_1, A_2 and A_3. Now observe that 1/(2t+1) + (6t+3)/(4(2t+1)) < 1 if and only if t > 3/2.
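The selection scheme just analyzed, with t = 2 (groups of 5) and the threshold H = 50, as in the procedure Seleziona given next, can be sketched as follows. This is my own transcription, not the book's code; for the last (possibly shorter) group, and for the median of the medians, any middle position works for the analysis.

```python
def select(A, k):
    # k-th smallest element (1-indexed) of the list A, in O(n) worst case.
    n = len(A)
    if n < 50:
        return sorted(A)[k - 1]          # direct computation below threshold
    # Split into ceil(n/5) groups of 5 and take each group's median.
    medians = []
    for i in range(0, n, 5):
        g = sorted(A[i:i + 5])
        medians.append(g[(len(g) - 1) // 2])
    # Median of the medians, computed recursively.
    M = select(medians, (len(medians) + 1) // 2)
    A1 = [a for a in A if a < M]
    A2 = [a for a in A if a == M]
    A3 = [a for a in A if a > M]
    if k <= len(A1):
        return select(A1, k)
    if k <= len(A1) + len(A2):
        return M
    return select(A3, k - len(A1) - len(A2))
```

Since A_2 always contains M, both recursive calls are on strictly shorter vectors, so the recursion terminates even with repeated values.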
Recalling the solution of recurrence equations of the form

    T(n) = c                                              if n < b,
    T(n) = T(\lfloor \alpha n \rfloor) + T(\lceil \beta n \rceil) + n    if n >= b,

with \alpha + \beta < 1, presented in Corollary 6.2, we can conclude that for every t > 3/2 the algorithm runs in worst-case time T(n) = O(n). Choosing t = 2 we can fix H = 50 and obtain the following procedure:

    Procedure Seleziona(A = (A[1], A[2], ..., A[n]), k)
    if n < 50 then
      begin
        sort (A[1], A[2], ..., A[n])
        return A[k]
      end
    else
      begin
        split A into \lceil n/5 \rceil sequences S_1, S_2, ..., S_{\lceil n/5 \rceil}
        for k \in {1, 2, ..., \lceil n/5 \rceil} do compute the median M_k of S_k
        M := Seleziona((M_1, M_2, ..., M_{\lceil n/5 \rceil}), \lceil n/10 \rceil)
        compute the vector A_1 of the elements of A smaller than M
        compute the vector A_2 of the elements of A equal to M
        compute the vector A_3 of the elements of A greater than M
        if k <= |A_1| then return Seleziona(A_1, k)
        if |A_1| < k <= |A_1| + |A_2| then return M
        if |A_1| + |A_2| < k then return Seleziona(A_3, k - |A_1| - |A_2|)
      end

8.7 Bucketsort

The algorithms presented in the previous sections are all comparison-based and do not exploit any specific property of the elements to be sorted. In this section we instead present an algorithm that depends on the properties of the set from which the objects of the input sequence are drawn. As we shall see, the procedure we illustrate is stable, that is, it outputs a vector in which elements with the same value preserve the relative order they had in the input sequence. This property is often required of sorting procedures, since the input values may be fields of records containing other information, originally ordered according to a different criterion. In many cases, when two elements have the same value, one wants to preserve the previous order.

The idea of the Bucketsort algorithm is usually presented through a simple example. Suppose we want to sort n integers x_1, x_2, ..., x_n ranging over an interval [0, m-1], where m \in N. If m is small enough, it may be convenient to use the following procedure:

    begin
      for i \in {0, 1, ..., m-1} do create an initially empty list L(i)
      for j \in {1, 2, ..., n} do append x_j to the list L(x_j)
      concatenate the lists L(0), L(1), ..., L(m-1) in their order
    end

The list obtained by concatenating L(0), L(1), ..., L(m-1) obviously yields the sorted sequence. This method clearly requires O(m + n) steps and is therefore linear when m has the same order of magnitude as n, thus improving, in this case, on the performance of the algorithms presented in the previous sections.

With a few variations, the same idea can be used to sort a sequence of words over a finite alphabet. Let \Sigma be a finite alphabet of m distinct symbols and let \Sigma^+ denote the set of words over \Sigma (excluding the empty word). We now define the lexicographic order on \Sigma^+, which, as is well known, corresponds to the usual ordering of the words in a dictionary. Let \preceq be a total order relation on \Sigma and let x = x_1 x_2 ... x_k and y = y_1 y_2 ... y_h be two words in \Sigma^+, where x_i, y_j \in \Sigma for every i and j. We say that x lexicographically precedes y (x <=_\ell y) if:

- x is a prefix of y, that is, k <= h and x_i = y_i for every i <= k,
- or there exists j, 1 <= j <= k, such that x_i = y_i for every i < j, x_j != y_j and x_j \preceq y_j.

It is easy to verify that <=_\ell is a total order relation on \Sigma^+. We now describe a procedure for sorting, with respect to the relation <=_\ell, an n-tuple of words over \Sigma, all of length k. Applying the method illustrated above, the algorithm builds the list of the input words sorted with respect to their last letter. Next, the resulting list is sorted with respect to the second-to-last letter, and one proceeds in the same way for all k positions in decreasing order. As we shall prove later, the resulting list turns out to be sorted with respect to the relation <=_\ell.
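The k passes just described can be sketched compactly; this is my own transcription of the pass structure of Bucketsort for words of equal length, with one bucket list per letter and stable appends:

```python
def bucketsort_words(words, alphabet):
    # LSD passes of Bucketsort: one bucket (list) per letter of the
    # alphabet, scanning positions j = k, k-1, ..., 1 (0-based: k-1 .. 0).
    k = len(words[0])                     # all words have length k
    Q = list(words)
    for j in range(k - 1, -1, -1):
        L = {a: [] for a in alphabet}     # L(a_1), ..., L(a_m), all empty
        for X in Q:
            L[X[j]].append(X)             # stable append to L(a_t)
        # Concatenate the buckets in alphabet order to form the new Q.
        Q = [X for a in alphabet for X in L[a]]
    return Q
```

Each pass costs O(n + m), so the whole procedure costs O(k(n + m)), matching the theorem proved below.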
Example 8.1 Consider the alphabet {a, b, c}, with a \preceq b \preceq c, and apply the method just illustrated to the sequence of words bacc, abac, baca, abbc, abab. The algorithm performs 4 passes, one for each position of the input words. Let L_a, L_b, L_c denote the lists of the words having, respectively, letter a, b, c in the current position. The lists at the end of each pass are the following:

    1) L_a = (baca)              L_b = (abab)              L_c = (bacc, abac, abbc)
    2) L_a = (abab, abac)        L_b = (abbc)              L_c = (baca, bacc)
    3) L_a = (baca, bacc)        L_b = (abab, abac, abbc)  L_c = ()
    4) L_a = (abab, abac, abbc)  L_b = (baca, bacc)        L_c = ()

The sorted sequence is given by the concatenation of the lists L_a, L_b and L_c obtained at the end of the last pass.

We now describe the procedure in detail. Let a_1, a_2, ..., a_m be the elements of \Sigma, taken in the fixed order. The input is a sequence of words X_1, X_2, ..., X_n such that, for every i \in {1, 2, ..., n}, X_i = b_{i1} b_{i2} ... b_{ik} with b_{ij} \in \Sigma for every j. For each letter a_i, L(a_i) denotes the list associated with a_i; at the beginning of each pass each of these lists is empty (\Lambda). We further denote by Q the list containing the concatenation of the L(a_i) at the end of each iteration. Clearly, in an actual implementation, the lists L(a_i) and Q will not contain the input words themselves but, more simply, a sequence of pointers to those strings.

    Procedure Bucketsort
    Input: a sequence of words X_1, X_2, ..., X_n such that, for every i \in {1, 2, ..., n}, X_i = b_{i1} b_{i2} ... b_{ik} with b_{ij} \in \Sigma for every j = 1, 2, ..., k.
    begin
      for i = 1, 2, ..., n do append X_i at the end of Q
      for j = k, k-1, ..., 1 do
        begin
          for \ell = 1, 2, ..., m do L(a_\ell) := \Lambda
          while Q != \Lambda do
            begin
              X := Front(Q)
              Q := Dequeue(Q)
              let a_t be the letter of X in position j
              Inserisci_in_coda(L(a_t), X)
            end
          for \ell = 1, ..., m do concatenate L(a_\ell) at the end of Q
        end
    end

Theorem 8.5 The Bucketsort algorithm lexicographically sorts a sequence of n words of length k, over an alphabet of m letters, in time O(k(n+m)).

Proof. The correctness of the algorithm is proved by induction on the number of iterations of the outermost loop: at the end of the i-th iteration, the list Q contains the words sorted lexicographically with respect to their last i letters. Observe indeed that if, during the (i+1)-th iteration, two words are placed in the same list L(a_t), they keep the relative order they had at the end of the i-th pass. Finally, note that each pass requires O(n+m) steps to examine the elements of Q one after the other and to concatenate the lists L(a_t). Hence O(k(n+m)) steps suffice for the whole computation.

We conclude by recalling that the algorithm presented can be improved and extended to the case in which the input words have different lengths. If L is the sum of the lengths of the input words, they can be sorted in time O(m + L).

Exercises
1) Which of the sorting algorithms presented in the previous sections are stable?
2) Recalling the algorithm for computing the median of a sequence, describe a new version of Quicksort that requires time O(n log n) in the worst case.

Chapter 9

Data structures and search algorithms

When analyzing a problem, one often realizes that it can be conveniently solved by sequences of predetermined operations on suitable sets. In other words, the design of an algorithm solving the problem is made easier by referring to a natural data structure, where by data structure we agree to mean such a family of sets and operations. Identifying a data structure allows the problem to be split into (at least) two phases:

1. design of an abstract solution algorithm, expressed in terms of the operations of the data structure;
2. design of efficient algorithms for representing the data and implementing the operations on the available model of computation.

This approach combines simplicity, clarity and ease of analysis. Indeed, the correctness of the executable algorithm can be obtained by considering separately the correctness of the abstract algorithm and that of the implementation of the operations. Finally, observe that many problems admit the same data structure as natural: in that case, particular attention must be devoted to the efficient implementation of the operations (basic algorithms). This approach was already used in Chapter 4 to introduce the basic data structures such as vectors, lists, stacks, etc. We now want to study the subject more systematically, presenting a general notion of data structure.

9.1 Heterogeneous algebras

We first introduce a minimum of terminology. The central notion

    DATA STRUCTURE = SETS + OPERATIONS

is captured by the concept of heterogeneous algebra. Given a sequence of sets [A_1, A_2, ..., A_n], we say that k is the type of the set A_k. A partial function f : A_{k(1)} x ... x A_{k(s)} -> A_l is called an operation on [A_1, A_2, ..., A_n] of arity (k(1)k(2)...k(s), l); the arity of an operation is thus a pair (w, y), where w is a word over {1, 2, ..., n} and y is an element of {1, 2, ..., n}. Denoting the empty word by \epsilon, we agree that an operation of arity (\epsilon, k) is to be understood as an element of A_k; such operations are also called constants.

Definition 9.1 A heterogeneous algebra A is a pair A = <[A_1, ..., A_n], [f_1, ..., f_k]>, where A_1, A_2, ..., A_n are sets and f_1, f_2, ..., f_k are operations on [A_1, A_2, ..., A_n] of given arity.

Let us look at some examples of heterogeneous algebras.

Example 9.1 [SUBSETS OF A]

    SUBSETS OF A = <[A, SUBSET(A), BOOL], [MEMBER, INSERT, DELETE, \emptyset, a_1, a_2, ..., a_m]>

where: A = {a_1, a_2, ..., a_m} is a fixed but arbitrary set, SUBSET(A) is the family of subsets of A with empty set \emptyset, BOOL = {true, false}; MEMBER : A x SUBSET(A) -> BOOL and INSERT, DELETE : A x SUBSET(A) -> SUBSET(A) are the following operations:

    MEMBER(x, Y) = true if x \in Y, false otherwise;
    INSERT(x, Y) = Y \cup {x};
    DELETE(x, Y) = Y - {x}.

The arities of the operations MEMBER, INSERT, DELETE, \emptyset, a_k are respectively (12, 3), (12, 2), (12, 2), (\epsilon, 2), (\epsilon, 1).

Example 9.2 [ORDERED SUBSETS OF A (PAO)]

    PAO = <[A, SUBSET(A), BOOL], [MEMBER, INSERT, DELETE, MIN, \emptyset, a_1, ..., a_m]>

where: <A = {a_1, a_2, ..., a_m}, <=> is a totally ordered set, SUBSET(A) is the family of subsets of A with empty set \emptyset, BOOL = {true, false}; MEMBER, INSERT, DELETE are defined as in the previous example, and MIN : SUBSET(A) -> A is the operation of arity (2, 1) with MIN(Y) = min{x | x \in Y}.

Example 9.3 [PARTITIONS OF A]

    PARTITIONS OF A = <[A, PART(A)], [UNION, FIND, ID, a_1, ..., a_m]>

where: A = {a_1, a_2, ..., a_m} is a set, PART(A) is the family of partitions of A (or, equivalently, of the equivalence relations on A), ID is the identity partition ID = {{a_1}, {a_2}, ..., {a_m}}, and UNION : A x A x PART(A) -> PART(A), FIND : A x PART(A) -> A are the operations:

    UNION(x, y, P) = the partition obtained from P by merging the equivalence classes containing x and y;
    FIND(x, P) = the representative element of the equivalence class of P containing x.

Observe that if x and y belong to the same equivalence class of P, then FIND(x, P) = FIND(y, P). The arities of the operations UNION and FIND are respectively (112, 2) and (12, 1).

Example 9.4 [STACKS OF A]

    STACKS OF A = <[A, STACK(A), BOOL], [ISEMPTY, PUSH, POP, TOP, \Lambda, a_1, ..., a_m]>

where: A = {a_1, ..., a_m} is a set, STACK(A) is the set of finite sequences [a_{k(1)}, ..., a_{k(n)}] of elements of A, including the empty sequence \Lambda = [ ]. ISEMPTY : STACK(A) -> BOOL, PUSH : STACK(A) x A -> STACK(A), POP : STACK(A) -> STACK(A), TOP : STACK(A) -> A are the operations:

    ISEMPTY(S) = true if S = \Lambda, false otherwise;
    PUSH([a_{k(1)}, ..., a_{k(n)}], a) = [a_{k(1)}, ..., a_{k(n)}, a];
    POP([a_{k(1)}, ..., a_{k(n)}]) = [a_{k(1)}, ..., a_{k(n-1)}];
    TOP([a_{k(1)}, ..., a_{k(n)}]) = a_{k(n)}.

The arities of ISEMPTY, PUSH, POP, TOP, \Lambda, a_k are respectively (2, 3), (21, 2), (2, 2), (2, 1), (\epsilon, 2), (\epsilon, 1). The operations POP and TOP are not defined on \Lambda.

Now let two algebras A = <[A_1, A_2, ..., A_n], [f_1, ..., f_s]> and B = <[B_1, B_2, ..., B_n], [g_1, ..., g_s]> be given, where f_i and g_i have the same arity for every i. A homomorphism \phi : A -> B is a family of functions \phi_j : A_j -> B_j (1 <= j <= n) such that, for every index i, if the operation f_i has arity (k(1)k(2)...k(s), l), then

    \phi_l(f_i(x_{k(1)}, ..., x_{k(s)})) = g_i(\phi_{k(1)}(x_{k(1)}), ..., \phi_{k(s)}(x_{k(s)})).

If, moreover, the functions \phi_i are bijections, we say that \phi is an isomorphism and that A is isomorphic to B, writing A \cong B.

Example 9.5 Consider the algebras Z = <[{..., -1, 0, 1, ...}], [+, \cdot]> and Z_p = <[{0, 1, ..., p-1}], [+, \cdot]>, where + and \cdot (ambiguously) denote the usual sum and product in the first algebra, and sum and product modulo p in the second. The transformation <.>_p : Z -> Z_p associating with every integer z the remainder of its division by p is a homomorphism of algebras.

Example 9.6 Consider the algebra PARTITIONS OF A defined above and the algebra FORESTS ON A:

    FORESTS ON A = <[A, FOR(A)], [UNION, FIND, ID, a_1, ..., a_m]>

where: A = {a_1, ..., a_m} is a set; FOR(A) is the family of forests with vertices in A, in which every tree has a root; ID is the forest in which every vertex is a tree.
UNION : A x A x FOR(A) -> FOR(A) and FIND : A x FOR(A) -> A are the operations:

    UNION(x, y, F) = the forest obtained from F by linking the root of the tree containing the node x to the root of the tree containing the node y, the latter becoming the root of the new tree so obtained;
    FIND(x, F) = the root of the tree of F containing the vertex x.

It is easy to verify that the identity function I : A -> A, together with the function \phi : FOR(A) -> PART(A) associating with every forest F the partition P in which every class consists of the vertices of one tree of F, realize a homomorphism between FORESTS ON A and PARTITIONS OF A. Observe moreover that this homomorphism is a bijection when restricted to the constants, which are [ID, a_1, ..., a_m] in the case of FORESTS ON A and [ID, a_1, ..., a_m] in the case of PARTITIONS OF A.

9.2 Abstract programs and their implementations

Many problems encountered in practice can be reduced to subproblems, each of which can be abstractly formulated as a sequence of operations on a data structure. This leads naturally to the following:

Definition 9.2 Given a data structure A = <[A_1, A_2, ..., A_n], [f_1, ..., f_s]>, an abstract program S on A is a sequence S_1; S_2; ...; S_h in which each S_i is an instruction of the form

    X_i := f_k(Z_1, ..., Z_s)

where Z_1, ..., Z_s are constants or variables in {X_1, ..., X_{i-1}}.

Naturally, every variable has a given type: if (k(1)k(2)...k(s), l) is the arity of f_k, then X_i is of type l, Z_1 of type k(1), ..., Z_s of type k(s). Given the program S defined above, we can assign to every variable X_i an element res_A(X_i) of the algebra A, defined inductively as follows:

    res_A(a) = a                                       for every constant a,
    res_A(X_i) = f_k(res_A(Z_1), ..., res_A(Z_s))      if the i-th instruction of S is of the form X_i := f_k(Z_1, ..., Z_s).
Osserva che la definizione e ben posta poiche in S ogni variabile Xi dipende solo dalle precedenti X1 , X2 , . . . , Xi1 . Diremo inoltre che il risultato del programma S = (S1 ; S2 ; . . . ; Sh ) e lelemento resA (Xh ). Un concetto importante e quello di implementazione concreta di una struttura dati astratta; in questo caso la nozione algebrica cruciale e quella di omomorfismo biunivoco sulle costanti. Diciamo che un omomorfismo : A B tra due algebre A e B e biunivoco sulle costanti se la restrizione di sullinsieme delle costanti e una corrispondenza biunivoca. Vale il seguente: Teorema 9.1 Dato un programma S, due algebre A e B e un omomorfismo : A B biunivoco sulle costanti, se e il risultato di S su A e se e il risultato di S su B allora vale che = (). Dimostrazione. Poiche per ipotesi le costanti sono in corrispondenza biunivoca, basta dimostrare per induzione che (resA (Xi )) = resB (Xi ) per ogni variabile Xi . Infatti, se listruzione i-esima di S e della forma Xi := fk (Z1 , . . . , Zs ) abbiamo: (resA (Xi )) = (fk (resA (Z1 ), . . . , resA (Zs ))) = fk ((resA (Z1 )), . . . , (resA (Zs ))) = fk (resB (Z1 ), . . . , resB (Zs ))) = resB (Xi ) per definizione di omomorfismo per ipotesi di induzione Osserviamo come conseguenza che se due algebre A e B sono isomorfe, allora i risultati dellesecuzione di un programma S sulle due algebre sono corrispondenti nellisomorfismo; da questo punto di vista algebre isomorfe risultano implementazioni equivalenti della struttura di dati (si parla di dati astratti, cioe definiti a meno di isomorfismo). Se vogliamo implementare un programma (astratto) S su una macchina RAM , e necessario che: CAPITOLO 9. STRUTTURE DATI E ALGORITMI DI RICERCA 120 1. Ad ogni operazione dellalgebra A sia associata una procedura che opera su strutture di dati efficientemente implementabili su RAM (vettori, liste, alberi); si ottiene in tal modo una nuova algebra B; 2. 
The implementation must be correct; by what we have seen, this holds if we can exhibit a homomorphism μ : B → A bijective on the constants;

3. The implementation must be efficient. It may happen that implementations that are natural for one operation heavily penalize the others; in general it will be necessary to choose the algebra B with just the right redundancy to keep the implementation cost balanced over all operations.

In view of the above, the cost of executing an (abstract) program S over an algebra A on a RAM depends on the particular implementation. In our context we restrict ourselves to identifying the cost with the on-line complexity, which refers exclusively to the on-line execution of S, i.e. where the i-th instruction must be executed without knowing the following ones.

We conclude by observing that some implementations are not feasible when the number of constants of the algebra is large. Let us clarify this with a specific example: the implementation of SUBSETS OF A. Let A = {e1, ..., em}; a simple representation of a subset X of A is given by its characteristic vector V_X, a vector of m bits such that

V_X[k] = 1 if ek ∈ X, 0 otherwise.

The implementation of the operations MEMBER, INSERT, DELETE is immediate, and an abstract program of length n can be executed in optimal time O(n); however, we must define and keep in RAM a vector of size m, which is impractical in many applications.

Example 9.7 A compiler must keep track, in a symbol table, of all the identifiers occurring in the program it must translate. The compiler performs two kinds of operations on the table:

1. Insertion of each new identifier encountered, together with possible additional information (for example: its type);
2. Request of information about an identifier (for example: its type).
An efficient implementation of the operations INSERT and MEMBER on SUBSETS OF Identifiers is therefore required; representing a set of identifiers by its characteristic vector is impractical because the number of possible identifiers in a programming language is generally enormous (in C, on the order of 27^30 identifiers are possible).

9.3 Implementing dictionaries via hashing

Many problems can be naturally modeled in terms of the algebra SUBSETS OF A; this structure is called a dictionary. The goal of this section is to present the basic ideas for implementing a dictionary with hashing techniques.

A hash function for A is a function h : A → {0, 1, 2, ..., m-1}; here we assume that h can be computed in constant time. Each element implementing SUBSETS OF A is described by a vector (V[0], ..., V[m-1]) where, for each k, V[k] is a pointer to a list L_k = ⟨e_{k1}, ..., e_{ks_k}⟩ of elements of A, with the additional requirement that h(e_{kj}) = k (1 ≤ j ≤ s_k); in this way the subset X ⊆ A formed by all the elements occurring in the lists L_0, L_1, ..., L_{m-1} is uniquely determined.

The procedure implementing INSERT(a, X) (resp. DELETE(a, X), resp. MEMBER(a, X)) can be described as follows:

1. Compute h(a).
2. Append a to the list L_{h(a)} (resp. delete a from the list L_{h(a)}, resp. check whether a is in the list L_{h(a)}).

The worst-case analysis is not very encouraging. Suppose indeed that we have a program S consisting of n INSERT instructions; it may happen that h assigns the same value (say 0) to all n inserted elements, in which case the number of elementary operations performed on the list pointed to by V[0] is Σ_{k=1}^{n} k = Θ(n²).
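The chaining scheme just described, together with the table-doubling strategy analyzed in this section, can be sketched in Python as follows. This is a minimal illustration, not the text's implementation; the class and method names are ours, and Python's built-in hash plays the role of h.

```python
# Hypothetical sketch of a chained-hash dictionary: V[k] is the list L_k,
# and the table is rebuilt with twice as many buckets when the number of
# stored elements exceeds the table size (the on-line doubling strategy).

class HashDict:
    def __init__(self, m=1):
        self.m = m
        self.table = [[] for _ in range(m)]   # V[0..m-1], each a list L_k
        self.n = 0                            # number of stored elements

    def _h(self, a):                          # hash function h : A -> {0..m-1}
        return hash(a) % self.m

    def member(self, a):
        return a in self.table[self._h(a)]

    def insert(self, a):
        if self.member(a):
            return
        self.table[self._h(a)].append(a)
        self.n += 1
        if self.n > self.m:                   # doubling: rebuild with 2m buckets
            old = [x for bucket in self.table for x in bucket]
            self.m *= 2
            self.table = [[] for _ in range(self.m)]
            for x in old:
                self.table[self._h(x)].append(x)

    def delete(self, a):
        bucket = self.table[self._h(a)]
        if a in bucket:
            bucket.remove(a)
            self.n -= 1

d = HashDict()
for w in ["la", "pecora", "un", "animale"]:
    d.insert(w)
assert d.member("pecora") and not d.member("feroce")
d.delete("pecora")
assert not d.member("pecora")
```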
The above implementation is instead interesting in the average case, under the hypothesis that h(x) takes any value in {0, 1, ..., m-1} with equal probability for every element to be inserted. Then for every program S with n instructions the average length of each list is at most n/m, and the average time is therefore O((n/m) · n). If S is known before execution (the off-line case), choosing m = |S| (i.e. m = n) yields an optimal average time O(n).

In the case of on-line algorithms, the length of the program S cannot be assumed known in advance. One can then proceed as follows:

1. Fix an integer m ≥ 1 and build a hash table T0 of m entries.
2. When the number of inserted elements exceeds m, build a new table T1 of size 2m and redistribute the elements according to T1; when the number of elements exceeds 2m, build a new table T2 of size 2²·m, redistributing the elements according to T2, and so on, building tables Tk of size 2^k·m until all instructions are exhausted.

Fix m = 1 for simplicity of analysis. Since the average time needed to redistribute the elements from table T_{k-1} into table Tk is O(2^k), the total time needed for the various redistributions is O(1 + 2 + ... + 2^M), where 2^M ≤ n = |S|. Since 1 + 2 + ... + 2^M = 2·2^M - 1 = O(2^M), this time is O(n). Moreover, since each list has expected length at most 1, the execution of each instruction of S costs O(1) on average. In conclusion:

Proposition 9.2 The hashing implementation of a program S over a dictionary has average computation time O(|S|).

9.4 Binary search trees

We now describe a structure useful for implementing, with good average-case efficiency, abstract programs over the data structure SUBSETS OF ORDERED A. Subsets of A are represented here by binary search trees. The operations we must implement are MIN, MEMBER, INSERT and DELETE.
Recall first that A is a totally ordered set, and denote by ≤ its order relation. Given a subset S ⊆ A, a binary search tree for S is a binary tree T whose vertices are labeled with elements of S and which enjoys the following properties. Denoting by E(v) the label of each vertex v, we have:

1. for every element s ∈ S there is one and only one vertex v in T such that E(v) = s;
2. for every vertex v, if u is a vertex of the left subtree of v then E(u) < E(v);
3. for every vertex v, if u is a vertex of the right subtree of v then E(u) > E(v).

Given S = {s1, s2, ..., sn}, sorting its elements in increasing order we obtain a vector S = (S[1], S[2], ..., S[n]); it is easy to prove by induction that a binary tree T with n nodes becomes a binary search tree for S if, for each k = 1, 2, ..., n, the k-th vertex visited during the in-order (symmetric) traversal of T is labeled with S[k].

For example, let A be the set of words of the Italian language in lexicographic order and let S be the set { la, pecora, e, un, animale, feroce }; this set is represented by the vector ( animale, e, feroce, la, pecora, un ). Numbering the vertices of a binary tree with 6 nodes according to the symmetric order and labeling the k-th vertex with the k-th component of the vector, we obtain a binary search tree for S: the root (vertex 4) is labeled "la", its left subtree contains "animale", "e" and "feroce", and its right subtree contains "pecora" and "un".

We can represent a binary search tree by means of the tables sin (left child), des (right child), E (label) and padre (parent). In our case we obtain the following representation:

vertex   1        2   3       4   5       6
sin      0        0   2       1   0       0
des      3        0   0       5   6       0
E        animale  e   feroce  la  pecora  un
padre    4        3   1       0   4       5

It is clear that, for every node v of T, the nodes with label smaller than E(v) can only occur in the left subtree of v.
Hence we can find the vertex with minimum label simply by following the left branch of the tree. The method is described by the following procedure, in which r denotes the root of the tree.

Procedure MIN(r)
begin
  v := r
  while sin(v) ≠ 0 do v := sin(v)
  return v
end

Analogously, the operation MEMBER(x, S) can be performed by the following recursive procedure CERCA(x, v), which checks whether the subtree rooted at v contains a node with label x.

Procedure CERCA(x, v)
begin
  if x = E(v) then return true
  if x < E(v) then
    if sin(v) ≠ 0 then return CERCA(x, sin(v)) else return false
  if x > E(v) then
    if des(v) ≠ 0 then return CERCA(x, des(v)) else return false
end

Membership of an element in the set S represented by the binary search tree T can then be checked simply by the call CERCA(x, r).

As for the operation INSERT(x, S), observe that if x belongs to S the set is not modified. If instead S is represented by the binary search tree T and x ∉ S, the algorithm must insert into T a new node labeled x, preserving the binary-search-tree structure; in this way the algorithm returns a binary search tree for S ∪ {x}. This is accomplished by the following recursive procedure, which inserts (if necessary) a node labeled x into the subtree rooted at v.

Procedure INSERT(x, v)
begin
  if x < E(v) then
    if sin(v) ≠ 0 then INSERT(x, sin(v)) else CREA_NODO_SIN(v, x)
  if x > E(v) then
    if des(v) ≠ 0 then INSERT(x, des(v)) else CREA_NODO_DES(v, x)
end

Here the procedure CREA_NODO_SIN(v, x) introduces a new node v', left child of v, labeled x, updating the tables as follows:

sin(v) := v'; padre(v') := v; E(v') := x; sin(v') := 0; des(v') := 0

The procedure CREA_NODO_DES(v, x) is defined analogously.
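The procedures MIN, CERCA and INSERT above can be rendered in Python while keeping the text's tables sin, des, E and padre as dictionaries; this is a minimal sketch under that encoding (0 plays the role of the null pointer, as in the text, and the helper crea_nodo is ours).

```python
# Tables of the binary search tree, as dictionaries indexed by vertex.
sin, des, E, padre = {}, {}, {}, {}

def crea_nodo(x):
    """Create a fresh node labeled x and return its index."""
    v = len(E) + 1
    E[v], sin[v], des[v], padre[v] = x, 0, 0, 0
    return v

def insert(x, v):
    """Insert x into the subtree rooted at v (no effect if already present)."""
    if x < E[v]:
        if sin[v]: insert(x, sin[v])
        else: sin[v] = crea_nodo(x); padre[sin[v]] = v
    elif x > E[v]:
        if des[v]: insert(x, des[v])
        else: des[v] = crea_nodo(x); padre[des[v]] = v

def cerca(x, v):
    """Return True iff the subtree rooted at v contains a node labeled x."""
    if x == E[v]: return True
    if x < E[v]: return cerca(x, sin[v]) if sin[v] else False
    return cerca(x, des[v]) if des[v] else False

def tree_min(v):
    """Vertex with minimum label: follow left children from v."""
    while sin[v]: v = sin[v]
    return v

# Build the example tree for { la, pecora, e, un, animale, feroce }.
root = crea_nodo("la")
for w in ["pecora", "e", "un", "animale", "feroce"]:
    insert(w, root)
assert cerca("feroce", root) and not cerca("molto", root)
assert E[tree_min(root)] == "animale"
```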
Note that the implementations of the operations MIN, MEMBER and INSERT do not require the table padre; that table is instead essential for an efficient implementation of the operation DELETE. A possible implementation of DELETE(x, S), where S is represented by a binary search tree T, performs the following steps:

1. Find the node v (if any) such that x = E(v).
2. If v has at most one child, then:
   a) if v is a leaf it suffices to remove v from the tree, i.e. to delete the corresponding row of the table, updating appropriately the row of its parent;
   b) if v is the root and has a single child v', it again suffices to remove v from the tree and set padre(v') := 0, thus making v' the root of the tree;
   c) if v is not the root and has a single child v', it suffices to remove v from the tree, linking v' directly to padre(v). Naturally, if v was a left child we set sin(padre(v)) := v', while if v was a right child we set des(padre(v)) := v'.
   In the following we denote by TOGLI(v) the procedure that performs the computations of step 2. It is clear that, if v has at most one child, executing TOGLI(v) yields a binary search tree for S - {x}.
3. If v has two children, find the vertex vM with maximum label in the left subtree of v. Since vM has no right child, assigning to v the label of vM and applying the procedure TOGLI(vM) yields a binary search tree for S - {x}.

Step 3 requires computing the vertex with maximum label in the subtree rooted at an arbitrary node u. This vertex can be found by following the right branch of the tree, as described in the following procedure.
Procedure MAX(u)
begin
  v := u
  while des(v) ≠ 0 do v := des(v)
  return v
end

The main procedure implementing the operation DELETE is then the following:

Procedure ELIMINA(x, v)
begin
  if x < E(v) ∧ sin(v) ≠ 0 then ELIMINA(x, sin(v))
  if x > E(v) ∧ des(v) ≠ 0 then ELIMINA(x, des(v))
  if x = E(v) then
    if v has at most one child then TOGLI(v)
    else begin
      vM := MAX(sin(v))
      E(v) := E(vM)
      TOGLI(vM)
    end
end

The operation DELETE(x, S), where S is a set represented by a binary search tree T with root r, can therefore be implemented by the simple call ELIMINA(x, r).

The procedures MEMBER, INSERT, MIN and DELETE outlined here require, when applied to a tree of height h, computation time O(h); recall that a binary tree with n nodes has height h with log n ≤ h ≤ n - 1. Since applying to the empty set an abstract program of n operations MEMBER, INSERT, MIN and DELETE produces trees with at most n elements, it follows that executing such a program takes time O(n²). Unfortunately this upper bound is tight: if, into an initially empty binary search tree, we insert consecutively n elements a1, a2, ..., an such that a1 < a2 < ... < an, we perform Σ_{k=0}^{n-1} k = n(n-1)/2 comparison operations.

Fortunately, the average-case performance is better. Consider n elements a1, a2, ..., an such that a1 < a2 < ... < an, and suppose we want to insert them in an arbitrary order, assuming all possible orderings equally likely. More precisely, we choose a random permutation (a_{p(1)}, a_{p(2)}, ..., a_{p(n)}) of the n elements, assuming that all n! permutations have the same probability of being chosen. We then consider the program

INSERT(a_{p(1)}, S), INSERT(a_{p(2)}, S), ..., INSERT(a_{p(n)}, S)

and denote by Tn the average number of comparisons needed to execute it.
Observe first that, for each k ∈ {1, 2, ..., n}, a_{p(1)} = a_k with probability 1/n. In this case the search tree obtained at the end of the execution of the algorithm has its root labeled a_k, while the left subtree of the root contains k - 1 elements and the right one n - k. This means that, during the construction of the tree, the value a_{p(1)} is compared with all the other elements of the sequence, for a total of n - 1 comparisons; moreover, on average we perform T_{k-1} comparisons to build the left subtree and T_{n-k} to build the right one. Consequently, when a_{p(1)} = a_k, we perform n - 1 + T_{k-1} + T_{n-k} comparisons. Since this event occurs with probability 1/n for each k, we obtain the following equation, valid for every n > 1:

Tn = Σ_{k=1}^{n} (1/n)(n - 1 + T_{k-1} + T_{n-k}).

Simple manipulations reduce this equation to the recurrence

Tn = 0                                               if n = 1
Tn = Σ_{k=1}^{n} (1/n)(n - 1 + T_{k-1} + T_{n-k})    otherwise

which coincides with the one obtained in the analysis of Quicksort, studied in Sections 6.6.1 and 7.4.2. In this way one obtains the value Tn = 2n log n + O(n). We can then conclude that, in the average case, n MEMBER, INSERT and MIN operations executed on an initially empty binary search tree require O(n log n) steps.

Exercises

1) The procedures CERCA, INSERT and ELIMINA described above are recursive. Which of them provide an example of tail recursion?
2) Describe an iterative version of the procedures CERCA, INSERT and ELIMINA.
3) Describe an algorithm to sort a set of elements S, taking as input a binary search tree for S. Show that, if S has n elements, the algorithm runs in time Θ(n) (assume the uniform cost criterion).
4) Using binary search trees (and in particular the insertion procedure), describe a general sorting algorithm.
Analyze the algorithm in the worst and in the average case (under the usual equidistribution hypotheses) and compare the results with those obtained for Quicksort.

9.5 2-3 trees

We observed earlier that the implementation of SUBSETS OF ORDERED A by binary search trees allows abstract programs of n instructions to be executed in average time O(n log n); in the worst case, however, the computation time is Θ(n²). In this section we present a structure that performs the same computation in time O(n log n) in the worst case as well.

Observe first that, when tree structures are used to implement SUBSETS OF ORDERED A, the execution time of a MEMBER operation depends essentially on the depth of the node it is applied to. Recall that every tree with n nodes, in which every vertex has at most k children, has height greater than or equal to log_k n. Hence, in order to always execute the MEMBER operation efficiently, the tree must have height close to log_k n. For this reason we will informally call balanced the rooted trees of height O(log n), where n is the number of nodes. It is then clear that every efficient tree-based implementation of SUBSETS OF ORDERED A will require the use of balanced trees. To this end the critical procedures are INSERT and DELETE: they must preserve the balance property of the trees they transform.

The family of balanced trees we consider in this section is that of 2-3 trees. A 2-3 tree is an ordered tree in which every non-leaf node has 2 or 3 children and all leaves have the same depth. In the following, for every internal node v of a 2-3 tree, we denote by f1(v), f2(v), f3(v) respectively the first, the second and the possible third child of v.
In a 2-3 tree with n nodes and height h we clearly have

Σ_{k=0}^{h} 2^k ≤ n ≤ Σ_{k=0}^{h} 3^k

and therefore 2^{h+1} - 1 ≤ n ≤ (3^{h+1} - 1)/2. This implies

⌊log₃ n⌋ - 1 ≤ h ≤ ⌈log₂ n⌉

and consequently 2-3 trees are balanced in our sense.

Consider now a set {a1, a2, ..., an} ⊆ A such that a1 < a2 < ... < an. We represent this set by a 2-3 tree whose leaves, from left to right, are identified with the values a1, a2, ..., an respectively. The internal nodes of the tree contain the information needed to search for the elements ai: for each internal node v we denote by L(v) and M(v), respectively, the largest leaf of the subtree rooted at f1(v) and the largest leaf of the subtree rooted at f2(v).

For example, consider the set of words { la, pecora, e, un, animale, molto, feroce } with the lexicographic order. This set can be implemented by a 2-3 tree whose root (vertex 1) has the internal vertices 2, 3 and 4 as children; vertex 2 has the leaves animale and e as children, vertex 3 the leaves feroce and la, and vertex 4 the leaves molto, pecora and un. To this tree one must add the information given by the values L and M. The tree is represented by the corresponding table, in which we report only the rows of the internal nodes, together with the additional information L and M:

vertex  f1       f2      f3  L        M
1       2        3       4   e        la
2       animale  e       0   animale  e
3       feroce   la      0   feroce   la
4       molto    pecora  un  molto    pecora

Note that every set with at least two elements can be represented by 2-3 trees, since the equation 3x + 2y = n admits nonnegative integer solutions for every integer n ≥ 2.
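The node-count bounds above can be checked numerically; the following small sketch (ours, not from the text) verifies both the closed forms of the two sums and the resulting logarithmic bounds on the height.

```python
# Check: a 2-3 tree of height h has between 2^(h+1) - 1 nodes (all internal
# nodes binary) and (3^(h+1) - 1)/2 nodes (all internal nodes ternary),
# so the height is logarithmic in the number of nodes n.
import math

def node_bounds(h):
    lo = sum(2 ** k for k in range(h + 1))   # minimum number of nodes
    hi = sum(3 ** k for k in range(h + 1))   # maximum number of nodes
    return lo, hi

for h in range(1, 10):
    lo, hi = node_bounds(h)
    assert lo == 2 ** (h + 1) - 1            # closed form of the lower sum
    assert hi == (3 ** (h + 1) - 1) // 2     # closed form of the upper sum
    # for the extreme node counts, h lies within the stated log bounds
    for n in (lo, hi):
        assert math.floor(math.log(n, 3)) - 1 <= h <= math.ceil(math.log2(n))
```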
The procedure computing the minimum element of a set represented by a 2-3 tree with root r is immediate:

Procedure MIN(r)
begin
  v := r
  while f1(v) ≠ 0 do v := f1(v)
  return v
end

Moreover, a fundamental procedure used to implement the other operations is entirely similar to the binary search algorithm presented in Section 5.2.1:

Procedure CERCA(a, v)
begin
  if f1(v) is a leaf then return v
  else if a ≤ L(v) then return CERCA(a, f1(v))
  else if f3(v) = 0 ∨ a ≤ M(v) then return CERCA(a, f2(v))
  else return CERCA(a, f3(v))
end

Let {a1, a2, ..., an} be the set represented by the tree, with a1 < a2 < ... < an. For every value a ∈ A and every internal node v of the tree, the procedure CERCA(a, v) returns an internal node p of the subtree rooted at v whose children are leaves and which satisfies the following property: if ai is the element of {a1, a2, ..., an} (if any) that precedes the smallest child of p, and aj the element of {a1, a2, ..., an} (if any) that follows the largest child of p, then ai < a < aj.

This means that if a belongs to the set, i.e. a is a leaf of the tree with root r, then a is a child of p = CERCA(a, r). Consequently the following procedure implements the operation MEMBER:

Procedure MEMBER(a, r)
begin
  p := CERCA(a, r)
  if a is a child of p then return true else return false
end

A natural implementation of INSERT(a, r) (or DELETE(a, r)) first computes p = CERCA(a, r), then adds the node a to the children of p (or deletes it from them). The problem is that if p already has 3 children, inserting a creates a fourth one, so the resulting tree is no longer a 2-3 tree. Analogously, if p has two children one of which is a, deleting a produces an internal node with a single child, again violating the 2-3 tree property.
To solve these problems we focus on the INSERT operation (DELETE can be treated analogously). We first define an auxiliary procedure, called RIDUCI, which transforms a tree having one vertex with 4 children, and all other internal nodes with 2 or 3 children, into a 2-3 tree with the same leaves. The procedure is described by the following scheme, which makes explicit use of the table padre:

Procedure RIDUCI(v)
if v has 4 children then
begin
  create a node v'
  assign to v' the first two children v1 and v2 of v
    (updating appropriately the values f1, f2, f3 of v and v' and the values padre of v1 and v2)
  if v is the root then
    begin
      create a new root r
      f1(r) := v'
      f2(r) := v
      update the values padre of v and v'
    end
  else
    begin
      u := padre(v)
      make v' a child of u immediately to the left of v
      RIDUCI(u)
    end
end

At this point we can exhibit the procedure INSERT(a, r), which correctly implements the insertion operation when applied to 2-3 trees with at least two leaves:

Procedure INSERT(a, r)
begin
  p := CERCA(a, r)
  if a is not a child of p then
    begin
      add the node a, in order, to the children of p
      RIDUCI(p)
    end
end

Note that the procedures just described must be suitably modified so as to update the values of the tables L and M for the nodes lying along the path from p to the root and for the new vertices created by the reduction process. It is worth noting that the updating of the tables L and M may reach the root even if the process of creating new nodes stops earlier.

We conclude by observing that, in the procedures described above, the number of computation steps is at most proportional to the height of the tree considered. If the tree has n leaves, the computation time of each procedure is therefore O(log n).

Example The 2-3 tree described in the figure below represents the set of letters {a, c, e, f, g, h, i} arranged in alphabetical order.
The internal nodes carry the corresponding values of the tables L and M: the root, with values (c, g), has three internal children with values (a, c), (e, f) and (h, i); the first has the leaves a and c, the second the leaves e, f and g, the third the leaves h and i.

Inserting the letter d we obtain a tree whose root has values (e, i) and two internal children with values (c, e) and (g, i); the former has children with values (a, c) and (d, e), the latter children with values (f, g) and (h, i). The leaves, from left to right, are now a, c, d, e, f, g, h, i.

Deleting the node c we then obtain a tree whose root has values (e, g) and three internal children with values (a, d), (f, g) and (h, i); the first has the leaves a, d and e, and the leaves, from left to right, are a, d, e, f, g, h, i.

9.6 B-trees

B-trees can be considered a generalization of the 2-3 trees described in the previous section. They too are balanced trees that allow the representation of totally ordered sets and permit the execution of each of the operations MEMBER, INSERT, DELETE and MIN in time O(log n) on any set of n elements.

Fix an integer m ≥ 2, which may be called the branching index of the tree because, as we shall see, it governs the number of children each internal node may have. A B-tree of order m over a set S of n elements (which in the following we call keys) is an ordered tree with the following properties:

1. every internal node has at most 2m children;
2. every internal node has at least m children, except the root, which has at least 2;
3. the keys are assigned to the various nodes of the tree; each key a is assigned to exactly one node v (we also say that v contains a);
4. all leaves have the same depth and contain no keys;
5. every internal node v with k + 1 children v0, v1, ..., vk contains k ordered keys a1 < a2 < ... < ak, and moreover:
   a) every key a contained in the subtree rooted at v0 satisfies a < a1;
   b) every key a contained in the subtree rooted at vi (1 ≤ i ≤ k - 1) satisfies ai < a < a_{i+1};
   c) every key a contained in the subtree rooted at vk satisfies ak < a.
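The search rule implicit in property 5 (locating, within a node, the child subtree that may contain a given key) can be sketched as follows. This is a hypothetical illustration of ours, not the text's procedure, which is given next; bisect_left performs the within-node search.

```python
# Within a B-tree node with ordered keys a1 < ... < ak and children
# v0, v1, ..., vk, find where a given key a belongs (property 5).
from bisect import bisect_left

def locate(keys, a):
    """Return ('found', i) if a equals keys[i]; otherwise ('child', j),
    meaning the search must continue in the subtree rooted at child v_j."""
    i = bisect_left(keys, a)
    if i < len(keys) and keys[i] == a:
        return ("found", i)
    return ("child", i)     # here a_i < a < a_{i+1} (with the conventions a_0 = -inf, a_{k+1} = +inf)

keys = [10, 20, 30]                        # a node with 4 children v0..v3
assert locate(keys, 20) == ("found", 1)
assert locate(keys, 5) == ("child", 0)     # a < a1  -> subtree v0
assert locate(keys, 25) == ("child", 2)    # a2 < a < a3 -> subtree v2
assert locate(keys, 99) == ("child", 3)    # a > a3  -> subtree v3
```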
Note that every internal node other than the root contains at least m - 1 keys, and every internal node contains at most 2m - 1. Moreover, since the degree of the nodes can vary considerably, it is convenient to represent the B-tree by associating with every internal node v with children v0, v1, ..., vk the k + 1 pointers p0, p1, ..., pk, where each pi points to the node vi.

It is then clear how to use the keys contained in the various nodes to direct the search for an element of the set S. The method is entirely analogous to the one described for 2-3 trees and is based on the following procedure:

Procedure MEMBER(a, v)
if v is a leaf then return false
else begin
  let a1, a2, ..., ak be the ordered keys contained in v
  let v0, v1, ..., vk be the corresponding children of v
  i := 1
  a_{k+1} := +∞   (conventional value greater than every possible key)
  while a > ai do i := i + 1
  if i ≤ k then
    if a = ai then return true else return MEMBER(a, v_{i-1})
  else return MEMBER(a, vk)
end

Clearly MEMBER(a, v) returns the value true if the key a is contained in the subtree rooted at v, and false otherwise. Hence, if r is the root of the tree, the simple call MEMBER(a, r) checks whether a belongs to the set S under consideration.

For completeness, we briefly sketch the implementation of the operation INSERT. To insert a new key x into a B-tree T the following steps are executed:

1. search for x in T until an internal node v whose children are leaves is reached;
2. place x among the keys of v in order, creating a new child of v (a leaf) to be placed immediately to the right or to the left of x;
3. if v now contains 2m ordered keys (and hence 2m + 1 children), call the procedure RIDUCI(v) defined below.

Procedure RIDUCI(v)
let a1, a2, ..., a_{2m} be the keys of v and f0, f1, ...
..., f_{2m} its children
if v has an adjacent sibling u with fewer than 2m - 1 keys then
begin
  p := padre(v)
  y := the key separating u and v in p
  if u is a greater sibling of v then
    begin
      delete a_{2m} from v and insert it in p in place of y
      insert y into u as its minimum key
      make f_{2m} the first child of u, removing it from v
    end
  else
    begin
      delete a1 from v and insert it in p in place of y
      insert y into u as its maximum key
      make f0 the last child of u, removing it from v
    end
end
else
begin
  create a new node u'
  assign to u' the keys a_{m+1}, a_{m+2}, ..., a_{2m} and the children f_m, f_{m+1}, ..., f_{2m}, removing them from v
  if v is the root then
    begin
      create a new root r
      assign a_m to r, removing it from v
      make v and u' children of r, with a_m as separating element
    end
  else
    begin
      p := padre(v)
      assign a_m to p, removing it from v
      make u' a child of p as the smallest sibling greater than v, with a_m as separating element
      if p now contains 2m keys then RIDUCI(p)
    end
end

The operation DELETE can be implemented in a similar way. In this case one must check whether the node from which the key was removed still contains at least m - 1 elements. If this is not the case, a key must be borrowed from the neighboring nodes, and possibly from the parent, then repeating the deletion operation on the latter. The procedure that deletes a key x from a B-tree T executes the following steps:

1. search for x in T until the internal node v containing x is found;
2. execute the following loop, which replaces v by one of its children and x by the maximum key of that child, until the children of v are leaves:

while the children of v are not leaves do
begin
  let f be the child of v immediately to the left of x
  let z be the last key in f
  replace x by z in v
  x := z
  v := f
end

3. delete x from v and remove one of its adjacent children (a leaf);
4. if v now has m - 2 keys (and hence m - 1 children) and v is not the root, call the procedure ESPANDI(v) defined below.
Procedure ESPANDI(v)
if v is the root then
begin
  if v contains no keys then
    begin
      delete v
      make the child of v the new root
    end
end
else begin
  if v has an adjacent sibling u with at least m keys then
    begin
      p := padre(v)
      y := the key separating u and v in p
      if u is a greater sibling of v then
        begin
          remove from u its minimum key and insert it in p in place of y
          insert y into v as its maximum key
          make the first child of u the last child of v
        end
      else
        begin
          remove from u its maximum key and insert it in p in place of y
          insert y into v as its minimum key
          make the last child of u the first child of v
        end
    end
  else
    begin
      p := padre(v)
      let u be a sibling of v adjacent to it
      let y be the key in p separating v and u
      insert into v the key y, maintaining the order
      insert into v the keys of u, maintaining the order
      assign to v the children of u, maintaining the order
      delete from p the key y and the child u
      delete the node u
      if p now has fewer than m - 1 keys (and hence fewer than m children) then ESPANDI(p)
    end
end

The efficiency of these procedures depends on the height of the tree. To estimate this quantity, consider a B-tree of height h containing n keys. Observe that there are: one node at depth 0, at least 2 nodes at depth 1, at least 2m nodes at depth 2, at least 2m² at depth 3, and so on. The root contains at least one key, while the other nodes contain at least m - 1 keys each (excluding the leaves, which contain no keys). Consequently the number n of keys satisfies

n ≥ 1 + (Σ_{i=0}^{h-2} 2m^i)(m - 1)

and, carrying out simple calculations, we obtain

h ≤ 1 + log_m((n + 1)/2).

Note that, by choosing m appropriately, the height of the tree can be reduced considerably. However, choosing too large a value of m is also disadvantageous, since it makes the search for an element within each node expensive.
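The height bound just derived can be checked numerically: summing the geometric series gives a minimum key count of 2·m^(h-1) - 1 for a B-tree of order m and height h, from which the bound follows with equality at the minimum. A small sketch of ours:

```python
# Check of the derivation above: the minimum number of keys in a B-tree of
# order m and height h is n_min = 1 + (m-1) * sum_{i=0}^{h-2} 2*m^i,
# which equals 2*m^(h-1) - 1, so h <= 1 + log_m((n+1)/2).
import math

def min_keys(m, h):
    return 1 + (m - 1) * sum(2 * m ** i for i in range(h - 1))

for m in (2, 5, 50):
    for h in (1, 2, 3, 4):
        n = min_keys(m, h)
        assert n == 2 * m ** (h - 1) - 1                  # closed form
        assert h <= 1 + math.log((n + 1) / 2, m) + 1e-9   # the height bound
```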
We conclude this section by recalling that many algorithms need to implement the operations MIN, INSERT and DELETE on a given ordered set without needing to search for an arbitrary element (MEMBER). This occurs when the operations INSERT and DELETE can be executed without actually searching for the element to be inserted or deleted; for INSERT this means that the element to be introduced is not already present in the structure, while for DELETE it means that the position in the structure of the element to be removed is known (for instance, it is the minimum). Data structures that support the three operations MIN, INSERT and DELETE under these hypotheses are called priority queues and can be seen as a simplification of the structure SUBSETS OF ORDERED A. Binary search trees, 2-3 trees and B-trees can all be used as priority queues, since they support precisely the operations considered above. Another structure providing an efficient implementation of a priority queue is the reversed heap, in which the value associated with every internal node is smaller than those associated with its children. Note that in this case the root contains the minimum value of the set, so the operation MIN can be executed in constant time. The other two operations instead require time O(log n), where n is the number of elements in the heap.

9.7 UNION and FIND operations

A partition of a set A is a family {A1, A2, ..., Am} of pairwise disjoint subsets of A (i.e. Ai ∩ Ak = ∅ for every i ≠ k) that cover A (i.e. such that A = A1 ∪ A2 ∪ ... ∪ Am). The notion of partition is closely connected with that of equivalence relation. Recall that an equivalence relation on A is a relation R ⊆ A × A satisfying the reflexive, symmetric and transitive properties; in other words, for all x, y, z ∈ A we have xRx, xRy ⟹ yRx, and xRy ∧ yRz ⟹ xRz (here xRy means (x, y) ∈ R).
Come è noto, per ogni a ∈ A, la classe di equivalenza di a modulo R è l'insieme [a]_R = {x ∈ A | xRa}. È facile verificare che, per ogni relazione di equivalenza R su un insieme A, l'insieme delle classi di equivalenza modulo R forma una partizione di A. Viceversa, ogni partizione {A₁, A₂, ..., A_m} di A definisce automaticamente una relazione di equivalenza R su A: basta infatti definire xRy per tutte le coppie di elementi x, y che appartengono a uno stesso sottoinsieme A_i della partizione. Di conseguenza la partizione può essere agevolmente rappresentata da una m-pla di elementi a₁, a₂, ..., a_m ∈ A, dove ogni a_i appartiene ad A_i; in questo modo [a_i] = A_i per ogni i = 1, 2, ..., m e a_i è chiamato elemento rappresentativo della sua classe di equivalenza.

Nell'esempio 9.3 abbiamo definito una struttura dati basata sulla nozione di partizione. Le operazioni fondamentali definite sulle partizioni sono le operazioni UNION e FIND. Data una partizione P di un insieme A e una coppia di elementi x, y ∈ A, abbiamo:

UNION(x, y, P) = partizione ottenuta da P facendo l'unione delle classi di equivalenza contenenti x e y;
FIND(x, P) = elemento rappresentativo della classe di equivalenza in P contenente x.

Esempio 9.8 Consideriamo la famiglia delle partizioni dell'insieme A = {1, 2, 3, 4, 5, 6, 7, 8, 9}, convenendo di sottolineare gli elementi rappresentativi (nella partizione P seguente i rappresentanti sono 7, 2 e 8). Allora, se P è la partizione {{1, 3, 7}, {2}, {4, 5, 6, 8, 9}}, abbiamo

    FIND(4, P) = 8
    UNION(3, 2, P) = {{1, 2, 3, 7}, {4, 5, 6, 8, 9}}

Descriviamo ora alcune possibili implementazioni di PARTIZIONI DI A mediante FORESTE SU A. La partizione {A₁, A₂, ..., A_m} sarà rappresentata da una foresta composta da m alberi con radice T₁, T₂, ..., T_m tali che ogni A_i è l'insieme dei nodi di T_i e la radice di T_i è l'elemento rappresentativo di A_i (1 ≤ i ≤ m). Una foresta può essere facilmente rappresentata mediante una tabella padre.
Esempio 9.9 La partizione {{1, 3, 7}, {2}, {4, 5, 6, 8, 9}} può essere rappresentata dalla foresta formata dai tre alberi di radice 7, 2 e 8 (il nodo 7 ha figli 1 e 3; il nodo 8 ha figli 4, 6 e 9; il nodo 9 ha figlio 5), a sua volta descritta dalla seguente tabella:

    vertici | 1 2 3 4 5 6 7 8 9
    padre   | 7 0 7 8 9 8 0 0 8

Una semplice implementazione dell'operazione FIND consiste nel risalire dal nodo considerato fino alla radice:

Procedure FIND(v)
begin
    x := v
    while padre(x) ≠ 0 do x := padre(x)
    return x
end

Osserva che, anche in questo caso, il costo della procedura (in termini di tempo) è proporzionale alla profondità del nodo considerato e quindi, nel caso peggiore, all'altezza dell'albero di appartenenza. Analogamente, l'operazione UNION può essere realizzata determinando le radici dei due elementi e rendendo quindi la seconda figlia della prima.

Procedure UNION(u, v)
begin
    x := FIND(u)
    y := FIND(v)
    if x ≠ y then padre(y) := x
end

Anche in questo caso il tempo di calcolo dipende essenzialmente dalla profondità dei due nodi. Notiamo tuttavia che, se i nodi di ingresso u e v sono radici, il tempo è costante. Come si vedrà nel seguito, in molte applicazioni le operazioni UNION vengono applicate solo alle radici. L'implementazione appena descritta non è tuttavia particolarmente efficiente. Infatti, fissato un insieme A di n elementi, è facile definire una sequenza di n operazioni UNION e FIND che richiede Θ(n²) passi, se applicata alla partizione identità ID (nella quale ogni elemento di A forma un insieme). A tale scopo è sufficiente definire una sequenza di operazioni UNION che porta alla costruzione di alberi del tutto sbilanciati (per esempio formati da semplici liste di elementi) e quindi applicare le operazioni FIND corrispondenti.

9.7.1 Foreste con bilanciamento

Per rimediare alla situazione appena descritta, si può definire una diversa implementazione delle partizioni, sempre utilizzando foreste, la quale mantiene l'informazione relativa alla cardinalità degli insiemi coinvolti.
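Le procedure FIND e UNION su tabella padre possono essere abbozzate in Python nel modo seguente (schizzo puramente illustrativo: i nomi delle funzioni e la convenzione padre[r] = 0 per le radici riprendono il testo, con i vertici numerati da 1 a 9 come nell'Esempio 9.9):

```python
# Schizzo illustrativo: partizione come foresta rappresentata dalla tabella "padre".
# Convenzione del testo: padre[r] = 0 se r è una radice; i vertici sono 1..n.

def find(padre, v):
    """Risale da v fino alla radice dell'albero di appartenenza."""
    x = v
    while padre[x] != 0:
        x = padre[x]
    return x

def union(padre, u, v):
    """Rende la radice di v figlia della radice di u (se distinte)."""
    x = find(padre, u)
    y = find(padre, v)
    if x != y:
        padre[y] = x

# Foresta dell'Esempio 9.9: vertici 1..9 (indice 0 inutilizzato).
padre = [0, 7, 0, 7, 8, 9, 8, 0, 0, 8]
```

Ad esempio find(padre, 4) risale 4 → 8 e restituisce la radice 8, come nell'Esempio 9.8.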
In questo modo l'operazione UNION può essere eseguita semplicemente rendendo la radice dell'albero più piccolo figlia della radice dell'albero di cardinalità maggiore. Se applicato a una foresta di alberi intuitivamente bilanciati, questo accorgimento consente di mantenere la proprietà di bilanciamento e quindi di ridurre il tempo richiesto dall'esecuzione delle procedure FIND. Una foresta dotata di questa informazione può essere rappresentata aggiungendo alla tabella padre una tabella num: per ogni radice r della foresta considerata, num(r) indica il numero di nodi dell'albero che ha per radice r; conveniamo di porre num(v) = 0 per ogni nodo v non radice.

Esempio 9.10 La foresta descritta nell'esempio precedente può essere allora rappresentata dalla seguente tabella:

    vertici | 1 2 3 4 5 6 7 8 9
    padre   | 7 0 7 8 9 8 0 0 8
    num     | 0 1 0 0 0 0 3 5 0

L'operazione UNION può allora essere eseguita nel modo seguente:

Procedure UNION(u, v)
begin
    x := FIND(u)
    y := FIND(v)
    if x ≠ y then
        if num(x) < num(y)
            then begin
                padre(x) := y
                num(y) := num(x) + num(y)
                num(x) := 0
            end
            else begin
                padre(y) := x
                num(x) := num(x) + num(y)
                num(y) := 0
            end
end

Nel seguito chiameremo foresta con bilanciamento l'implementazione appena descritta. Possiamo ora provare la seguente proprietà:

Proposizione 9.3 Utilizziamo una foresta con bilanciamento per implementare le partizioni di un insieme di n elementi. Allora, durante l'esecuzione di n − 1 operazioni UNION a partire dalla partizione identità ID, l'altezza di ogni albero con k nodi non è mai superiore a ⌊log₂ k⌋.

Dimostrazione. Ragioniamo per induzione su k. Se k = 1 la proprietà è banalmente verificata. Sia k > 1 e supponiamo la proprietà vera per ogni intero positivo i < k. Se T è un albero con k nodi, T è stato costruito unendo due alberi T₁ e T₂, dove il numero di nodi di T₁ è minore o uguale a quello di T₂.
Quindi T₁ possiede al più ⌊k/2⌋ nodi e, per ipotesi di induzione, la sua altezza h(T₁) deve essere minore o uguale a ⌊log₂⌊k/2⌋⌋ ≤ ⌊log₂ k⌋ − 1. Analogamente, T₂ possiede al più k − 1 nodi e di conseguenza h(T₂) ≤ ⌊log₂(k − 1)⌋ ≤ ⌊log₂ k⌋. Osservando ora che h(T) = max{h(T₂), h(T₁) + 1} otteniamo h(T) ≤ ⌊log₂ k⌋.

L'immediata conseguenza della proposizione precedente è che l'esecuzione di O(n) operazioni UNION e FIND su un insieme di n elementi, a partire dalla partizione identità, può essere eseguita in tempo O(n log n).

9.7.2 Compressione di cammino

La complessità asintotica O(n log n) appena ottenuta può essere migliorata eseguendo le operazioni FIND mediante compressione di cammino. Più precisamente si tratta di eseguire FIND(u) memorizzando in una lista tutti i nodi che si trovano sul cammino da u alla radice r, rendendo poi ciascuno di questi vertici figlio di r. L'effetto dell'esecuzione di una FIND è quindi quello di comprimere l'albero nel quale si trova l'argomento della procedura; di conseguenza l'esecuzione di ulteriori operazioni FIND sullo stesso albero potrebbe richiedere una minore quantità di tempo. (La figura originale illustra la trasformazione: eseguendo FIND(d) sul cammino a–b–c–d, con i rispettivi sottoalberi Ta, Tb, Tc, Td, i nodi b, c e d diventano tutti figli diretti della radice a.) Rappresentando la foresta mediante la tabella padre, il procedimento descritto può essere implementato mediante la seguente procedura.

Procedure FIND(u)
begin
    v := u
    L := (lista vuota)
    while padre(v) ≠ 0 do
        begin
            L := INSERISCI IN TESTA(L, v)
            v := padre(v)
        end
    for x ∈ L do padre(x) := v
    return v
end

Usando la compressione di cammino, la complessità in tempo dell'esecuzione delle operazioni UNION e FIND viene a dipendere dalla funzione G : ℕ → ℕ definita dall'uguaglianza

    G(n) = min{k ∈ ℕ | n ≤ F(k)},

nella quale F è la funzione F : ℕ → ℕ tale che

    F(k) = 1           se k = 0
    F(k) = 2^{F(k−1)}  se k ≥ 1.
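Combinando bilanciamento e compressione di cammino si ottiene, in forma di schizzo Python, la struttura seguente (ipotesi illustrative, non presenti nel testo: elementi numerati da 0 a n−1 e valore −1, anziché 0, per marcare le radici):

```python
class UnionFind:
    """Schizzo: partizione con foreste, bilanciamento per cardinalità (num)
    e compressione di cammino nella FIND."""

    def __init__(self, n):
        # Partizione identità ID: ogni elemento 0..n-1 è radice di un albero singolo.
        self.padre = [-1] * n          # -1 marca le radici (convenzione illustrativa)
        self.num = [1] * n             # cardinalità dell'albero radicato in ogni nodo

    def find(self, u):
        # Risale alla radice memorizzando il cammino, poi lo comprime.
        cammino = []
        v = u
        while self.padre[v] != -1:
            cammino.append(v)
            v = self.padre[v]
        for x in cammino:              # compressione di cammino
            self.padre[x] = v
        return v

    def union(self, u, v):
        x, y = self.find(u), self.find(v)
        if x == y:
            return
        if self.num[x] < self.num[y]:  # la radice più piccola diventa figlia
            x, y = y, x
        self.padre[y] = x
        self.num[x] += self.num[y]
        self.num[y] = 0
```

Dopo una sequenza di union a partire dalla partizione identità, ogni find lascia i nodi visitati come figli diretti della radice.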
Osserviamo che la funzione F cresce molto velocemente, come mostra la seguente tabella:

    k    | 0 1 2 3  4     5
    F(k) | 1 2 4 16 65536 2^65536

Viceversa la funzione G, essendo di fatto l'inversa di F, ha una crescita estremamente lenta. In realtà si verifica G(n) ≤ 5 per ogni n ≤ 2^65536 (ovvero per ogni n utilizzabile in pratica). La seguente proposizione, di cui omettiamo la complicata dimostrazione, fornisce una valutazione asintotica del tempo di calcolo necessario per eseguire le operazioni UNION e FIND combinando le procedure descritte in questa sezione e nella precedente.

Proposizione 9.4 Dato un insieme A, supponiamo di utilizzare una foresta per rappresentare una partizione di A e implementiamo le operazioni UNION e FIND mediante bilanciamento e compressione di cammino. Allora una sequenza di O(n) operazioni UNION e FIND, eseguita a partire dalla partizione identità, può essere eseguita in tempo O(n · G(n)).

Capitolo 10

Il metodo Divide et Impera

Un metodo spesso usato per risolvere un problema consiste nel partizionare i dati di ingresso in istanze di dimensioni minori, risolvere il problema su tali istanze e combinare opportunamente i risultati parziali fino ad ottenere la soluzione cercata. Questa strategia è generalmente chiamata divide et impera e consente in molti casi di progettare algoritmi asintoticamente efficienti. Nella prossima sezione descriviamo il metodo in generale mentre in quelle successive presentiamo alcuni esempi significativi di algoritmi basati su questa tecnica.

10.1 Schema generale

Consideriamo un problema Π, descritto da una funzione Sol : I → R, dove I è l'insieme delle istanze di Π (che chiameremo anche dati) e R quello delle soluzioni (risultati). Come al solito, per ogni x ∈ I denotiamo con |x| la sua dimensione. Intuitivamente, per risolvere il problema su una istanza x, un algoritmo di tipo divide et impera procede nel modo seguente.
1. Se |x| è minore o uguale a un valore C fissato, si determina direttamente la soluzione, consultando una tabella in cui sono elencate tutte le soluzioni per ogni istanza y ∈ I di dimensione minore o uguale a C oppure applicando un algoritmo opportuno.

2. Altrimenti, si eseguono i seguenti passi:

(a) partiziona x in b dati ridotti rid₁(x), rid₂(x), ..., rid_b(x) ∈ I tali che, per ogni j = 1, 2, ..., b, |rid_j(x)| = ⌈|x|/a⌉ oppure |rid_j(x)| = ⌊|x|/a⌋ per un opportuno a > 1 (e quindi |rid_j(x)| < |x|);
(b) risolvi ricorsivamente il problema sulle istanze rid₁(x), rid₂(x), ..., rid_b(x);
(c) usa le risposte sui dati ridotti per ottenere la soluzione su x.

Supponiamo ora di avere una procedura Opevarie per l'esecuzione del passo (2c); in altre parole Opevarie restituisce il valore Sol(x) ricevendo in ingresso le soluzioni relative ai dati rid₁(x), rid₂(x), ..., rid_b(x). Allora, la procedura generale che si ottiene è la seguente:

Procedura F(x)
    if |x| ≤ C then return Sol(x)
    else begin
        for j = 1, 2, ..., b do calcola x_j = rid_j(x)
        for j = 1, 2, ..., b do z_j := F(x_j)
        w := Opevarie(z₁, z₂, ..., z_b)
        return w
    end

Osserviamo che le soluzioni parziali Sol(x₁), Sol(x₂), ..., Sol(x_b) vengono calcolate in maniera indipendente le une dalle altre. Come vedremo meglio in seguito, questo porta a eseguire più volte eventuali computazioni comuni alle chiamate F(x₁), F(x₂), ..., F(x_b). D'altra parte questo procedimento permette una facile parallelizzazione dell'algoritmo; infatti, disponendo di un numero opportuno di processori, le b chiamate ricorsive previste al passo (2b) possono essere eseguite contemporaneamente e in maniera indipendente, mentre solo la fase di ricomposizione dei risultati prevede la sincronizzazione dei vari processi. Passiamo ora all'analisi della procedura.
Indichiamo con n la dimensione di una istanza del problema; vogliamo stimare il tempo T(n) richiesto dall'algoritmo per risolvere il problema su dati di dimensione n nel caso peggiore. Chiaramente, T(n) dipenderà a sua volta dal tempo necessario per eseguire la procedura Opevarie sulle risposte a dati di dimensione n/a. Per calcolare tale quantità è necessario entrare nei dettagli dei singoli casi; a questo livello di generalità possiamo rappresentarla semplicemente con Top(n), che supponiamo nota. Allora, supponendo che n sia divisibile per a, otteniamo la seguente equazione:

    T(n) = b · T(n/a) + Top(n)   (se n > C).

Nel caso n ≤ C possiamo assumere che T(n) sia minore o uguale a una costante fissata. L'analisi di algoritmi ottenuti con la tecnica divide et impera è quindi ricondotta allo studio di semplici equazioni di ricorrenza del tipo

    T(n) = b · T(n/a) + g(n)

trattate in dettaglio nella sezione 6.4. Illustriamo ora questa tecnica con quattro esempi che riguardano classici problemi.

10.2 Calcolo del massimo e del minimo di una sequenza

Come primo esempio presentiamo un algoritmo per determinare i valori massimo e minimo di una sequenza di interi. Formalmente il problema è definito nel modo seguente.

Istanza: un vettore S = (S[1], S[2], ..., S[n]) di n interi;
Soluzione: gli interi a e b che rappresentano rispettivamente il valore minimo e quello massimo tra gli elementi di S.

L'algoritmo che presentiamo in questa sede è basato sul confronto tra gli elementi di S. Di conseguenza la stessa procedura può essere utilizzata per risolvere l'analogo problema definito su una sequenza di elementi qualsiasi per i quali sia fissata una relazione d'ordine totale. Il nostro obiettivo è quello di determinare il numero di confronti eseguiti dalla procedura su un input di dimensione n: tale quantità fornisce anche l'ordine di grandezza del tempo di calcolo richiesto dalla procedura su una macchina RAM a costo uniforme.
Lo stesso ragionamento può essere applicato per stimare altri parametri, quali il numero di assegnamenti o di operazioni aritmetiche eseguiti. Il procedimento più semplice per risolvere il problema scorre la sequenza di input mantenendo il valore corrente del massimo e del minimo trovati e confrontando questi ultimi con ciascuna componente di S.

begin
    a := S[1]
    b := S[1]
    for i = 2, ..., n do
        begin
            if S[i] < a then a := S[i]
            if S[i] > b then b := S[i]
        end
end

Tale procedura esegue n − 1 confronti per determinare il minimo e altrettanti per determinare il massimo: in totale quindi vengono eseguiti 2n − 2 confronti. Descriviamo ora un altro algoritmo che permette di risolvere il medesimo problema con un numero di confronti minore del precedente. L'idea è quella di suddividere la sequenza di ingresso in due parti uguali, richiamare ricorsivamente la procedura su queste due e quindi confrontare i risultati ottenuti. Formalmente l'algoritmo è definito dalla semplice chiamata della procedura ricorsiva Maxmin(1, n), che restituisce i due valori cercati manipolando il vettore S di ingresso come una variabile globale.

begin
    read S[1], S[2], ..., S[n]
    (a, b) := Maxmin(1, n)
    return (a, b)
end

Per ogni coppia di interi i, j tali che 1 ≤ i ≤ j ≤ n, la chiamata Maxmin(i, j) restituisce una coppia di elementi (p, q) che rappresentano rispettivamente i valori minimo e massimo del sottovettore (S[i], S[i+1], ..., S[j]). Tale procedura è ricorsiva e utilizza due operatori, max e min, che restituiscono rispettivamente il massimo e il minimo tra due interi.

Procedure Maxmin(i, j)
begin
    if i = j then return (S[i], S[i])
    else if i + 1 = j then return (min(S[i], S[j]), max(S[i], S[j]))
    else begin
        k := ⌊(i + j)/2⌋
        (a₁, b₁) := Maxmin(i, k)
        (a₂, b₂) := Maxmin(k + 1, j)
        return (min(a₁, a₂), max(b₁, b₂))
    end
end

Denotiamo con C(n) il numero di confronti eseguiti dall'algoritmo su un input di dimensione n.
È facile verificare che C(n) soddisfa la seguente equazione di ricorrenza:

    C(n) = 0                            se n = 1
    C(n) = 1                            se n = 2
    C(n) = C(⌊n/2⌋) + C(⌈n/2⌉) + 2      se n > 2

Supponendo n = 2^k per k ∈ ℕ e sviluppando l'equazione precedente, otteniamo

    C(n) = 2 + 2C(n/2) = 2 + 4 + 4C(n/4) = ... = 2 + 4 + ... + 2^{k−1} + 2^{k−1}·C(2)
         = Σ_{j=1}^{k−1} 2^j + 2^{k−1} = 2^k + 2^{k−1} − 2.

Quindi, ricordando che k = log₂ n, si ottiene C(n) = (3/2)n − 2. Rispetto all'algoritmo precedente, dunque, la nuova procedura esegue un numero di confronti ridotto di circa un quarto su ogni input che ha per dimensione una potenza di 2.

Esercizi
1) Assumendo il criterio di costo uniforme, determinare l'ordine di grandezza dello spazio di memoria richiesto dall'algoritmo sopra descritto.
2) Supponendo che le n componenti del vettore S siano interi compresi tra 1 e n, svolgere l'analisi del tempo di calcolo richiesto dall'algoritmo precedente assumendo il criterio di costo logaritmico.

10.3 Mergesort

In questa sezione presentiamo un altro classico esempio di algoritmo divide et impera: l'algoritmo Mergesort per risolvere il problema dell'ordinamento. Tale procedura ha notevoli applicazioni pratiche poiché su di essa è basata gran parte delle routine di ordinamento esterno, quelle cioè che devono ordinare dati distribuiti su memoria di massa. L'istanza del problema è data da un vettore A = (A[1], A[2], ..., A[n]) le cui componenti sono estratte da un insieme U totalmente ordinato rispetto a una relazione d'ordine ≤ fissata. L'insieme U potrebbe essere per esempio l'insieme ℤ dei numeri interi e ≤ l'usuale relazione d'ordine tra interi. Oppure, U potrebbe rappresentare l'insieme delle parole su un dato alfabeto finito e ≤ potrebbe denotare la relazione di ordinamento lessicografico. L'algoritmo restituisce il vettore A ordinato in modo non decrescente.
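Le procedure Mergesort e Merge, descritte in questa sezione, possono essere abbozzate in Python come segue (schizzo illustrativo: la fase finale di Merge è qui semplificata copiando entrambe le code residue nella lista ausiliaria, invece dello spostamento in coda descritto nel testo):

```python
# Schizzo di Mergesort su indici 0-based; B è la lista ausiliaria di Merge.

def merge(A, i, k, j):
    """Fonde i sottovettori ordinati A[i..k] e A[k+1..j]."""
    B = []
    p, q = i, k + 1
    while p <= k and q <= j:
        if A[p] <= A[q]:
            B.append(A[p]); p += 1
        else:
            B.append(A[q]); q += 1
    B.extend(A[p:k + 1])      # elementi residui del primo sottovettore
    B.extend(A[q:j + 1])      # oppure del secondo
    A[i:j + 1] = B            # ricopiatura nelle componenti A[i..j]

def mergesort(A, i, j):
    if i < j:
        k = (i + j) // 2
        mergesort(A, i, k)
        mergesort(A, k + 1, j)
        merge(A, i, k, j)
```

La chiamata mergesort(A, 0, n − 1) ordina l'intero vettore A in modo non decrescente.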
Il procedimento è semplice: si suddivide il vettore A in due sottovettori di dimensione quasi uguale, si richiama ricorsivamente la procedura per ordinare ciascuno di questi, quindi si immergono le due sequenze ottenute in un'unica n-pla ordinata. L'algoritmo è descritto formalmente dalla procedura Mergesort(i, j), 1 ≤ i ≤ j ≤ n, che ordina il sottovettore (A[i], A[i+1], ..., A[j]). La semplice chiamata di Mergesort(1, n) restituisce l'intero vettore A ordinato. Durante l'esecuzione del calcolo il vettore A è considerato come una variabile globale.

Procedura Mergesort(i, j)
begin
    if i < j then
        begin
            k := ⌊(i + j)/2⌋
            Mergesort(i, k)
            Mergesort(k + 1, j)
            Merge(i, k, j)
        end
end

Il cuore del procedimento è costituito dalla procedura Merge(i, k, j), 1 ≤ i ≤ k < j ≤ n, che riceve i vettori ordinati (A[i], A[i+1], ..., A[k]) e (A[k+1], ..., A[j]) e restituisce il vettore (A[i], A[i+1], ..., A[j]) ordinato definitivamente. La procedura Merge(i, k, j) scorre gli elementi delle due sequenze (A[i], A[i+1], ..., A[k]) e (A[k+1], ..., A[j]) mediante due puntatori, uno per ciascun vettore, inizialmente posizionati sulla prima componente. Ad ogni passo gli elementi scanditi dai puntatori vengono confrontati, il minore viene aggiunto a una lista prefissata (inizialmente vuota) e il puntatore corrispondente viene spostato sull'elemento successivo. Quando uno dei due vettori è stato scandito interamente si aggiungono alla lista gli elementi rimanenti dell'altro nel loro ordine. Al termine la lista contiene i valori dei due vettori ordinati e viene quindi ricopiata nelle componenti A[i], A[i+1], ..., A[j]. La procedura è descritta formalmente dal seguente programma, nel quale la lista ausiliaria è implementata dal vettore B.
Procedura Merge(i, k, j)
begin
    t := 0; p := i; q := k + 1
    while p ≤ k ∧ q ≤ j do
        begin
            t := t + 1
            if A[p] ≤ A[q]
                then begin B[t] := A[p]; p := p + 1 end
                else begin B[t] := A[q]; q := q + 1 end
        end
    if p ≤ k then
        begin
            u := k
            v := j
            while p ≤ u do
                begin
                    A[v] := A[u]
                    u := u − 1
                    v := v − 1
                end
        end
    for u = 1, 2, ..., t do A[i + u − 1] := B[u]
end

Vogliamo calcolare il massimo numero di confronti eseguiti dalla procedura Mergesort su un input di lunghezza n; denotiamo con M(n) tale quantità. È facile verificare che Merge(i, k, j) esegue, nel caso peggiore, j − i confronti (quanti nel caso migliore?). Ne segue che, per ogni n > 1, M(n) soddisfa la seguente equazione:

    M(n) = 0                              se n = 1
    M(n) = M(⌊n/2⌋) + M(⌈n/2⌉) + n − 1    se n > 1.

Applicando allora i risultati presentati nella sezione 6.4 si prova facilmente che M(n) = Θ(n log₂ n). Procedendo con maggior accuratezza si ottiene un risultato più preciso, e cioè che M(n) = n⌈log₂ n⌉ − 2^{⌈log₂ n⌉} + 1.

Esercizi
1) Dimostrare che M(n) = n log₂ n − n + 1 per ogni n potenza di 2.
2) Supponendo n una potenza di 2, determinare nel caso migliore il numero di confronti eseguiti da Mergesort su un input di dimensione n.
3) Assumendo il criterio di costo uniforme, determinare l'ordine di grandezza dello spazio di memoria richiesto da Mergesort per ordinare n elementi.

10.4 Prodotto di interi

Un altro esempio significativo di algoritmo divide et impera riguarda il problema del calcolo del prodotto di interi. Consideriamo due interi positivi x, y di n bit ciascuno, e siano x = x₁x₂⋯x_n e y = y₁y₂⋯y_n le rispettive rappresentazioni binarie dove, per ogni i ∈ {1, 2, ..., n}, x_i ∈ {0, 1} e y_i ∈ {0, 1}. Vogliamo calcolare la rappresentazione binaria z = z₁z₂⋯z₂ₙ del prodotto z = x · y. Il problema è quindi definito nel modo seguente:

Istanza: due stringhe binarie x, y ∈ {0, 1}* di lunghezza n che rappresentano rispettivamente gli interi positivi x e y;
Soluzione: la stringa binaria z ∈ {0, 1}* che rappresenta il prodotto z = x · y.
Assumiamo che la dimensione di una istanza x, y sia data dalla lunghezza n delle due stringhe. Vogliamo descrivere un algoritmo per la soluzione del problema e determinare il numero delle operazioni binarie richieste per una istanza di dimensione n. È facile verificare che il metodo tradizionale richiede O(n²) operazioni binarie; in questa sede presentiamo un algoritmo che riduce tale quantità a O(n^1.59). Per semplicità, supponiamo che n sia una potenza di 2 e spezziamo x e y in due vettori di n/2 bit ciascuno: x = ab, y = cd. Se denotiamo con a, b, c, d gli interi rappresentati rispettivamente da a, b, c e d, il prodotto xy può essere calcolato mediante la seguente espressione:

    xy = (a·2^{n/2} + b)(c·2^{n/2} + d) = ac·2^n + (ad + bc)·2^{n/2} + bd

Questa espressione permette di calcolare il prodotto xy mediante 4 moltiplicazioni di interi di n/2 bit più alcune addizioni e shift (prodotti per potenze di 2). Tuttavia possiamo ridurre da 4 a 3 il numero delle moltiplicazioni osservando che il valore ad + bc si può ottenere da ac e bd mediante un solo ulteriore prodotto: basta calcolare u = (a + b)(c + d) ed eseguire la sottrazione u − ac − bd = ad + bc. In questo modo il calcolo di z = xy richiede l'esecuzione delle seguenti operazioni:

1) le operazioni necessarie per calcolare l'intero u, cioè 2 somme di interi di n/2 bit e una moltiplicazione di due interi che hanno al più n/2 + 1 bit;
2) 2 moltiplicazioni su interi di n/2 bit per calcolare ac e bd;
3) 6 operazioni tra addizioni, sottrazioni e shift su interi di n bit.

È chiaro che ogni addizione, sottrazione e shift su interi di n bit richiede O(n) operazioni binarie. Inoltre anche il calcolo di u al punto 1) può essere eseguito mediante un prodotto tra interi di n/2 bit più alcune addizioni e shift. Per la verifica di questo fatto, sia a₁ il primo bit della somma (a + b) e sia b₁ l'intero rappresentato dai bit rimanenti. Osserva che a + b = a₁·2^{n/2} + b₁.
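Il nucleo del procedimento, ovvero le tre moltiplicazioni ricorsive su metà dei bit, può essere abbozzato in Python direttamente sugli interi (schizzo illustrativo: la suddivisione in metà usa bit_length e gli shift, e il caso base è il prodotto diretto; il nome "prodotto" è una nostra scelta):

```python
# Schizzo dell'algoritmo a tre moltiplicazioni: x = a*2^h + b, y = c*2^h + d,
# xy = ac*2^(2h) + (u - ac - bd)*2^h + bd con u = (a+b)(c+d).

def prodotto(x, y):
    """Calcola x * y con 3 moltiplicazioni ricorsive su metà dei bit."""
    if x < 2 or y < 2:
        return x * y                     # caso base: prodotto diretto
    n = max(x.bit_length(), y.bit_length())
    h = n // 2
    a, b = x >> h, x & ((1 << h) - 1)    # x = a*2^h + b
    c, d = y >> h, y & ((1 << h) - 1)    # y = c*2^h + d
    ac = prodotto(a, c)
    bd = prodotto(b, d)
    u = prodotto(a + b, c + d)           # u - ac - bd = ad + bc
    return (ac << (2 * h)) + ((u - ac - bd) << h) + bd
```

Le tre chiamate ricorsive su interi di circa n/2 bit riproducono la ricorrenza T(n) = 3T(n/2) + O(n) analizzata nel testo.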
Analogamente esprimiamo c + d nella forma c + d = c₁·2^{n/2} + d₁, dove c₁ è il primo bit di (c + d) e d₁ l'intero rappresentato dai bit rimanenti. Allora, è evidente che

    (a + b)(c + d) = a₁c₁·2^n + (a₁d₁ + b₁c₁)·2^{n/2} + b₁d₁.

Il calcolo di b₁d₁ richiede il prodotto di due interi di n/2 bit ciascuno, mentre gli altri prodotti (a₁c₁, a₁d₁, b₁c₁) possono essere calcolati direttamente in tempo binario O(n) perché si tratta sempre del prodotto di un intero per 1 o per 0. In totale quindi, per calcolare il prodotto di due interi di n bit, il procedimento illustrato esegue 3 moltiplicazioni su interi di (circa) n/2 bit più O(n) operazioni binarie. Di conseguenza, per ogni n potenza di 2, il numero totale delle operazioni binarie è minore o uguale alla soluzione T(n) della seguente equazione di ricorrenza:

    T(n) = α              se n = 1
    T(n) = 3T(n/2) + βn   se n > 1,

dove α e β sono costanti opportune. Applicando ora il teorema 6.3 otteniamo T(n) = Θ(n^{log₂ 3}) = O(n^1.59).

10.5 L'algoritmo di Strassen

Il problema considerato in questa sezione è definito nel modo seguente:

Istanza: due matrici di numeri razionali A = [a_ik], B = [b_ik], ciascuna di dimensione n × n;
Soluzione: la matrice prodotto A · B = [c_ik], con c_ik = Σ_{j=1}^{n} a_ij · b_jk.

L'algoritmo tradizionale che calcola direttamente il prodotto di matrici richiede Θ(n³) somme e prodotti di numeri reali. È possibile scendere sotto questo limite? Una prima risposta affermativa è stata data con l'algoritmo di Strassen, di cui delineiamo i fondamenti. Supponiamo per semplicità che n sia una potenza di 2. Allora si può porre:

    A = (A₁₁ A₁₂ ; A₂₁ A₂₂)    B = (B₁₁ B₁₂ ; B₂₁ B₂₂)    A·B = (C₁₁ C₁₂ ; C₂₁ C₂₂)

dove le matrici A_ik, B_ik, C_ik sono di ordine n/2 × n/2. Si calcolino ora le matrici P_i (1 ≤ i ≤ 7):

    P₁ = (A₁₁ + A₂₂) · (B₁₁ + B₂₂)
    P₂ = (A₂₁ + A₂₂) · B₁₁
    P₃ = A₁₁ · (B₁₂ − B₂₂)
    P₄ = A₂₂ · (B₂₁ − B₁₁)
    P₅ = (A₁₁ + A₁₂) · B₂₂
    P₆ = (A₂₁ − A₁₁) · (B₁₁ + B₁₂)
    P₇ = (A₁₂ − A₂₂) · (B₂₁ + B₂₂)

Con una laboriosa ma concettualmente semplice verifica si ha:

    C₁₁ = P₁ + P₄ − P₅ + P₇
    C₁₂ = P₃ + P₅
    C₂₁ = P₂ + P₄
    C₂₂ = P₁ + P₃ − P₂ + P₆

Le P_i sono calcolabili con 7 moltiplicazioni di matrici e 10 fra addizioni e sottrazioni. Analogamente, le C_ij si possono calcolare con 8 addizioni e sottrazioni a partire dalle P_i. Il prodotto di due matrici n × n può essere allora ricondotto ricorsivamente al prodotto di 7 matrici n/2 × n/2, mediante l'esecuzione di 18 addizioni e sottrazioni di matrici n/2 × n/2. Poiché due matrici n × n si sommano con n² operazioni, il tempo di calcolo dell'algoritmo precedentemente delineato è dato dalla seguente equazione:

    T(n) = 7T(n/2) + O(n²)

La soluzione di questa equazione dà T(n) = Θ(n^{log₂ 7}), con log₂ 7 ≈ 2.81.

10.6 La trasformata discreta di Fourier

Il calcolo della convoluzione di due sequenze numeriche è un problema classico che sorge in vari settori scientifici e ha numerose applicazioni soprattutto nell'ambito della progettazione di algoritmi per operazioni su interi e polinomi. La trasformata discreta di Fourier fornisce uno strumento per eseguire in maniera efficiente tale operazione. Si tratta di una trasformazione lineare F tra sequenze finite di elementi, definiti in generale su un anello opportuno, che mappa la convoluzione di due n-ple a, b nel prodotto, termine a termine, delle due immagini F(a) e F(b). Nota che il calcolo del prodotto termine a termine di due sequenze è certamente più semplice del calcolo della loro convoluzione. Quindi, se disponiamo di algoritmi efficienti per determinare la trasformata discreta di Fourier e la sua inversa, disponiamo di fatto di una procedura per calcolare la convoluzione. Tra le applicazioni di questo metodo ve ne sono alcune di particolare interesse.
Tra queste ricordiamo il noto algoritmo di Schönhage e Strassen per la moltiplicazione di due interi, che fornisce tuttora il metodo asintoticamente migliore per risolvere il problema: esso consente di calcolare il prodotto di due interi di n bit eseguendo O(n log n log log n) operazioni binarie.

10.6.1 La trasformata discreta e la sua inversa

Dato un anello commutativo A = ⟨A, +, ·, 0, 1⟩, l'elemento n = Σ_{k=1}^{n} 1 ammette inverso moltiplicativo se esiste n⁻¹ tale che n⁻¹ · n = 1. Un elemento ω di A tale che ω ≠ 1, ω^n = 1 e Σ_{j=0}^{n−1} ω^{jk} = 0 per ogni k = 1, 2, ..., n − 1, è detto n-esima radice principale dell'unità. Chiameremo invece n-esime radici dell'unità gli elementi ω⁰ = 1, ω, ω², ..., ω^{n−1}. Analizziamo ora due anelli particolarmente interessanti.

Esempio 10.1 Sia C = ⟨ℂ, +, ·, 0, 1⟩ il campo dei numeri complessi. Allora si verifica facilmente che ogni n ∈ ℕ ammette inverso moltiplicativo e che e^{i·2π/n} è una n-esima radice principale dell'unità.

Esempio 10.2 Dato un intero n potenza di 2 (cioè n = 2^k) e posto m = 2^n + 1, sia Z_m l'anello dei resti modulo m, cioè Z_m = {0, 1, ..., m − 1}, x + y = ⟨x + y⟩_m, x · y = ⟨x · y⟩_m. Poiché n = 2^k è potenza di 2 mentre m = 2^n + 1 è dispari, n e m sono primi fra loro e quindi n ammette inverso moltiplicativo n⁻¹ in Z_m; inoltre 4 è una n-esima radice principale dell'unità. Mostriamo qui che ω = 4 è una n-esima radice principale dell'unità in Z_m. Intanto 4 ≠ 1 e inoltre ⟨4^n⟩_{2^n+1} = (⟨2^n⟩_{2^n+1})² = (−1)² = 1. Mostriamo ora che Σ_{i=0}^{n−1} 4^{iλ} = 0 per λ = 1, 2, ..., n − 1. A tale scopo consideriamo, per ogni a ∈ Z_m, la seguente identità

    Π_{i=0}^{k−1} (1 + a^{2^i}) = Σ_{α=0}^{2^k − 1} a^α

che si ottiene da

    Π_{i=0}^{k−1} (a^{0·2^i} + a^{1·2^i}) = Σ_{C₀,...,C_{k−1} ∈ {0,1}} a^{C₀ + 2C₁ + ... + 2^{k−1}C_{k−1}}

osservando che per ogni α (0 ≤ α < 2^k) sono biunivocamente individuabili C₀, ..., C_{k−1} ∈ {0, 1} tali che α = C₀ + 2C₁ + ... + 2^{k−1}C_{k−1}. Per l'identità precedente (applicata ad a = 4^λ) basta allora dimostrare che per ogni λ (1 ≤ λ < n) esiste i (0 ≤ i < k) tale che

    1 + 4^{λ·2^i} ≡ 0 mod (2^n + 1).
A tale riguardo, sia 2λ = 2^l · d con d dispari e si ponga i = k − l; vale:

    4^{λ·2^i} + 1 = 2^{2^k · d} + 1 = (−1)^d + 1 = 0 mod (2^n + 1)

Sia da ora in poi A un anello commutativo in cui n ammette inverso moltiplicativo e in cui ω è una n-esima radice principale dell'unità. Consideriamo ora l'algebra A^n formata dai vettori a n componenti in A, cioè A^n = {a | a = (a[0], ..., a[n−1]), a[i] ∈ A}, dotata delle seguenti operazioni:

1) Somma: (a + b)[k] = a[k] + b[k]
2) Prodotto: (a · b)[k] = a[k] · b[k]
3) Prodotto di convoluzione ciclica: (a ⊛ b)[k] = Σ_{⟨j+s⟩_n = k} a[j] · b[s]

Si definisce trasformata discreta di Fourier F la trasformazione F : A^n → A^n realizzata dalla matrice [ω^{ij}], con i, j ∈ {0, 1, ..., n − 1}, cioè:

    (F(a))[k] = Σ_{s=0}^{n−1} ω^{ks} a[s]

Valgono le seguenti proprietà:

Teorema 10.1 F è una trasformazione invertibile e la sua inversa F⁻¹ è realizzata dalla matrice n⁻¹·[ω^{−ij}].

Dimostrazione. Detto [C_ij] il prodotto delle matrici n⁻¹·[ω^{−ij}] e [ω^{ij}], vale

    C_is = (1/n) · Σ_{k=0}^{n−1} ω^{−ik} ω^{ks} = (1/n) · Σ_{k=0}^{n−1} ω^{(s−i)k}

pertanto

    C_is = 1 se i = s, C_is = 0 altrimenti

e quindi [C_ij] coincide con la matrice identità.

Teorema 10.2 Valgono le uguaglianze:

    F(a + b) = F(a) + F(b);        F(a ⊛ b) = F(a) · F(b);
    F⁻¹(a + b) = F⁻¹(a) + F⁻¹(b);  F⁻¹(a · b) = F⁻¹(a) ⊛ F⁻¹(b).

Dimostrazione. Dimostriamo qui che F(a ⊛ b) = F(a) · F(b). Posto c = F(a ⊛ b), vale:

    c[i] = Σ_{k=0}^{n−1} ω^{ik} (a ⊛ b)[k] = Σ_{k,j} ω^{ik} a[j] b[⟨k−j⟩_n]
         = Σ_{k,j} ω^{i(k−j)} b[⟨k−j⟩_n] · ω^{ij} a[j]
         = (Σ_j ω^{ij} a[j]) · (Σ_s ω^{is} b[s]) = [F(a) · F(b)][i]
+ 2 2 a[n 2]) + k (a[1] + 2k a[3] + . . . + 2 2 a[n 1]). 2. 2 e una n2 -esima radice principale dellunita in A. Questo prova la correttezza della seguente procedura ricorsiva per il calcolo della trasformata di Fourier, che chiameremo FFT (acronimo di Fast Fourier Transform). Procedura FFT ( ; (a[0], a[1], . . . , a[n 1]) ) if n = 1 then return(a[0]) else begin  h i b[0], . . . , b n2 := F F T ( 2 ; (a[0], a[2], . . . , a[n 2])) 2  h i c[0], . . . , c n2 := F F T ( 2 ; (a[1], a[3], . . . , a[n 1])) 2 for k = 0,h to ni 1 do h i d[k] := b hki n2 + k c hki n2 return (d[0], . . . , d[n 1]) end Osserviamo che le operazioni di base su cui tale algoritmo e costruito sono la somma di elementi dellanello A e la moltiplicazione per una potenza di . Detto T (n) il numero di tali operazioni, vale evidentemente: ( T (1) = 0 T (n) = 2T n 2  + 2n 149 CAPITOLO 10. IL METODO DIVIDE ET IMPERA Ne segue che T (n) = 2n lg n. Discorso perfettamente analogo puo essere fatto per linversa F 1 . Una immediata applicazione e un algoritmo veloce per il calcolo della convoluzione circolare. Dal Teorema 2 segue infatti che : a b = F 1 (F(a) F(b)) Poiche il calcolo della trasformata e della trasformata inversa richiede O(n lg n) operazioni di somma e di moltiplicazione, e il prodotto richiede n moltiplicazioni, si conclude che: Fatto 10.1 La convoluzione circolare a b di due vettori n-dimensionali richiede O(n lg n) operazioni di somma e di prodotto per una potenza di , e n operazioni di prodotto. 10.6.3 Prodotto di polinomi Consideriamo qui il problema di moltiplicare due polinomi di grado n a coefficienti reali con un algoritmo che utilizzi un basso numero di somme e prodotti. 
Dati due polinomi di grado n, $p(z) = \sum_{k=0}^{n} p_k z^k$ e $q(z) = \sum_{k=0}^{n} q_k z^k$, il loro prodotto $p(z) \cdot q(z)$ è il polinomio, di grado 2n, $s(z) = \sum_{k=0}^{2n} s_k z^k$ dove:

$s_k = \begin{cases} \sum_{j=0}^{k} p_j q_{k-j} & \text{se } 0 \le k \le n \\ \sum_{j=k-n}^{n} p_j q_{k-j} & \text{se } n < k \le 2n \end{cases}$

Il calcolo diretto di $s_k$ richiede k+1 prodotti e k somme (se $k \le n$) o 2n−k+1 prodotti e 2n−k somme (se k > n). Il numero totale di prodotti e di somme è allora dell'ordine di $2n^2$. Tale numero può essere ridotto a $O(n \lg n)$ osservando che la convoluzione circolare dei vettori a 2n+1 componenti $(p_0, p_1, \ldots, p_n, 0, 0, \ldots, 0)$ e $(q_0, q_1, \ldots, q_n, 0, 0, \ldots, 0)$ è esattamente il vettore $(s_0, s_1, \ldots, s_{2n})$. Ciò, unitamente alle proprietà della trasformata discreta di Fourier, prova la correttezza del seguente algoritmo:

ALGORITMO: Moltiplicazione Veloce
Ingresso: due polinomi rappresentati dai vettori dei coefficienti $(p_0, p_1, \ldots, p_n)$, $(q_0, q_1, \ldots, q_n)$
1. Calcola l'intero $N = 2^k$ tale che $2n + 1 \le N < 2(2n + 1)$.
2. $a$ := vettore a N componenti $(p_0, \ldots, p_n, 0, \ldots, 0)$.
3. $b$ := vettore a N componenti $(q_0, \ldots, q_n, 0, \ldots, 0)$.
4. $c := F^{-1}(F(a) \cdot F(b))$.
Uscita: il polinomio di grado al più 2n i cui coefficienti sono $c_0, \ldots, c_{2n}$.

Per quanto riguarda l'analisi di complessità, dal Fatto 10.1 segue immediatamente che l'algoritmo precedente richiede $O(n \lg n)$ somme e prodotti per potenze di $\omega = e^{\frac{2\pi i}{N}}$ e solo O(n) operazioni di prodotto.

Per quanto riguarda l'implementazione su RAM, l'analisi precedente non è realistica, basandosi sulla facoltà di rappresentare arbitrari numeri complessi ed eseguire somme e prodotti in tempo costante. In realtà, se rappresentiamo i numeri con errore di arrotondamento, sia l'errore sul risultato sia il tempo di calcolo viene a dipendere dall'errore di arrotondamento fissato. Vogliamo qui studiare algoritmi efficienti per il calcolo esatto del prodotto di due polinomi a coefficienti interi, rispetto ad un modello di calcolo in cui:

1.
Gli interi sono rappresentati in notazione binaria.
2. La somma di due interi di n bit richiede n operazioni elementari.
3. Il prodotto di due interi di n bit richiede M(n) operazioni elementari.

Ad esempio, utilizzando l'algoritmo di moltiplicazione che si impara alle elementari, abbiamo $M(n) = n^2$ mentre, utilizzando l'algoritmo di Schönhage-Strassen, otteniamo $M(n) = O(n \lg n \lg\lg n)$. Il problema può essere così formulato:

Problema: Prodotto Esatto.
Istanza: due polinomi rappresentati da due vettori $(p_0, \ldots, p_n)$ e $(q_0, \ldots, q_n)$ di interi di al più n bit l'uno.
Richiesta: calcolare $s_k = \sum_{j=0}^{k} p_j q_{k-j}$ con $0 \le k \le 2n$, assumendo $p_j = q_j = 0$ per ogni j tale che j < 0 oppure j > n.

Osserviamo innanzitutto che se $a < m$, allora $\langle a \rangle_m = a$. Poiché i numeri $p_i$, $q_i$ sono al più di n bit, ne segue che $p_i, q_i < 2^n$ e quindi per ogni k ($0 \le k \le 2n$) vale $s_k = \sum_{j=0}^{k} p_j q_{k-j} < n\,2^{2n} < 2^{2.5n} + 1$ (a meno che n < 4). Posto allora $m \ge 2^{2.5n} + 1$, ne segue: $\langle s_k \rangle_m = s_k$ ($0 \le k \le 2n$).

Per ottenere il corretto prodotto, basta allora considerare i coefficienti $p_k$, $q_k$ come elementi dell'anello $Z_m$ con le operazioni di somma e prodotto modulo m. Detta $F_m$ la trasformata di Fourier su $Z_m$ con radice 4 (vedi Esempio 10.2) si ha il seguente:

ALGORITMO: Prodotto Esatto Veloce.
Ingresso: due polinomi rappresentati da due vettori $(p_0, \ldots, p_n)$ e $(q_0, \ldots, q_n)$ di interi di al più n bit l'uno.
  $m := 2^N + 1$ dove $N = 2^k$ con $2.5n \le N \le 5n$
  $a := (p_0, \ldots, p_n, 0, \ldots, 0)$ vettore a N componenti in $Z_m$.
  $b := (q_0, \ldots, q_n, 0, \ldots, 0)$ vettore a N componenti in $Z_m$.
  $c := F_m^{-1}(F_m(a) \cdot F_m(b))$.
Uscita: $s_k = c_k$ ($0 \le k \le 2n$)

Per quanto riguarda la complessità dell'algoritmo precedente, osserviamo che esso richiede $O(N \lg N)$ operazioni di somma e moltiplicazioni per potenze di 4 nonché N moltiplicazioni di interi di al più N bit.
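L'idea del Prodotto Esatto si può abbozzare in Python così. Per semplicità, al posto dell'anello $Z_{2^N+1}$ con radice 4 usiamo qui $Z_p$ con $p = 998244353$ primo e radice ricavata dalla radice primitiva 3 (una nostra ipotesi semplificativa, non la scelta del testo); la struttura ricorsiva della trasformata è la stessa della procedura FFT:

```python
def ntt(a, omega, p):
    # stessa struttura ricorsiva della procedura FFT, ma nell'anello Z_p
    n = len(a)
    if n == 1:
        return list(a)
    b = ntt(a[0::2], omega * omega % p, p)
    c = ntt(a[1::2], omega * omega % p, p)
    return [(b[k % (n // 2)] + pow(omega, k, p) * c[k % (n // 2)]) % p
            for k in range(n)]

def prodotto_esatto(ps, qs, p=998244353):
    # convoluzione esatta in Z_p: corretta finche' ogni s_k e' minore di p
    N = 1
    while N < len(ps) + len(qs) - 1:
        N *= 2
    omega = pow(3, (p - 1) // N, p)        # radice N-esima principale in Z_p
    a = list(ps) + [0] * (N - len(ps))
    b = list(qs) + [0] * (N - len(qs))
    fc = [x * y % p for x, y in zip(ntt(a, omega, p), ntt(b, omega, p))]
    inv_N = pow(N, p - 2, p)               # inverso di N modulo p (Fermat)
    c = [x * inv_N % p for x in ntt(fc, pow(omega, p - 2, p), p)]
    return c[:len(ps) + len(qs) - 1]

s = prodotto_esatto([1, 2, 3], [4, 5])     # coefficienti di (1+2z+3z^2)(4+5z)
```

Il modulo $p$ è scelto in modo che $p - 1$ sia divisibile per un'alta potenza di 2, così che esista una radice N-esima principale per ogni N potenza di 2 in gioco.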
Ricordando che ogni somma costa al più N operazioni elementari, ogni prodotto per potenze di 4 è uno shift e quindi costa al più N operazioni elementari, e ogni prodotto costa al più M(N) operazioni elementari, concludiamo che il tempo T(n) complessivo è:

$T(n) = O(n^2 \lg n + n\,M(n))$

dove abbiamo tenuto conto che N < 5n. Implementando il prodotto col metodo di Schönhage-Strassen, si può concludere:

$T(n) = O(n^2 \lg n \lg\lg n)$

10.6.4 Prodotto di interi

Obiettivo di questo paragrafo è il progetto di un algoritmo asintoticamente veloce per il prodotto di interi di n bit. L'algoritmo di moltiplicazione imparato alle scuole elementari richiede tempo $O(n^2)$; abbiamo visto che una semplice applicazione della tecnica Divide et impera permette di realizzare un algoritmo più veloce (Tempo = $O(n^{\log_2 3})$). Presentiamo qui un'applicazione delle tecniche FFT, disegnando un algoritmo quasi lineare (Tempo = $O(n \lg^5 n)$). Tale algoritmo è una semplificazione didattica di quello proposto da Schönhage-Strassen, che è tuttora l'algoritmo di moltiplicazione asintoticamente più efficiente noto (Tempo = $O(n \lg n \lg\lg n)$).

Parametri della procedura sono gli interi $a = x_{n-1} \ldots x_0$ e $b = y_{n-1} \ldots y_0$ rappresentati in notazione binaria. Essi vengono decomposti in $M = \sqrt{n}$ blocchi $a_{M-1}, \ldots, a_0$ e $b_{M-1}, \ldots, b_0$, ognuno composto da $\sqrt{n}$ bit: in tal modo a e b individuano rispettivamente i polinomi $A(z) = \sum_{k=0}^{M-1} a_k z^k$ e $B(z) = \sum_{k=0}^{M-1} b_k z^k$. Si noti che $a = \sum_{k=0}^{M-1} a_k 2^{\sqrt{n}\,k} = A(2^{\sqrt{n}})$ e $b = \sum_{k=0}^{M-1} b_k 2^{\sqrt{n}\,k} = B(2^{\sqrt{n}})$.

Tali polinomi vengono moltiplicati con la tecnica veloce presentata precedentemente, in cui i prodotti vengono eseguiti richiamando ricorsivamente la procedura. Ottenuto il polinomio $C(z) = A(z)B(z) = \sum_k c_k z^k$, si calcola $C(2^{\sqrt{n}}) = \sum_k c_k 2^{\sqrt{n}\,k}$. Risulta infatti che $C(2^{\sqrt{n}}) = A(2^{\sqrt{n}}) \cdot B(2^{\sqrt{n}}) = a \cdot b$.

Esiste una costante C > 0 per cui la seguente procedura calcola il prodotto di 2 interi:

Procedura PROD_VEL($a = x_{n-1} \ldots x_0$; $b = y_{n-1} \ldots y_0$)
  if n < C then calcola $a \cdot b$ col metodo elementare; return ($a \cdot b$)
  else begin
    $M := \sqrt{n}$
    decomponi a in M blocchi $a_{M-1}, \ldots, a_0$ di lunghezza $\sqrt{n}$
    decomponi b in M blocchi $b_{M-1}, \ldots, b_0$ di lunghezza $\sqrt{n}$
    Calcola l'intero $N = 2^k$ tale che $2.5\sqrt{n} \le N < 5\sqrt{n}$
    siano $\bar a$, $\bar b$ i vettori a N componenti $\bar a = (a_0, \ldots, a_{M-1}, 0, \ldots, 0)$, $\bar b = (b_0, \ldots, b_{M-1}, 0, \ldots, 0)$
    $(c_0, \ldots, c_{N-1}) := F_{2^N+1}(\bar a)$
    $(d_0, \ldots, d_{N-1}) := F_{2^N+1}(\bar b)$
    for k = 0 to N − 1 do $\pi_k$ := PROD_VEL($c_k$; $d_k$)
    $(z_0, \ldots, z_{N-1}) := F_{2^N+1}^{-1}(\pi_0, \ldots, \pi_{N-1})$
    return $\sum_{k=0}^{N-1} z_k 2^{\sqrt{n}\,k}$
  end

Una semplice analisi permette di verificare che, detto T(n) il tempo di calcolo della procedura di cui sopra:

$T(n) \le N \cdot T(N) + O(n \lg^2 n)$.

Ricordando che $N \le 5\sqrt{n}$, vale:

$T(n) \le 5\sqrt{n}\; T(5\sqrt{n}) + O(n \lg^2 n)$.

Posto $g(n) = n \lg^5 n$, si verifica che per n sufficientemente grande:

$5\sqrt{n}\; g(5\sqrt{n}) + O(n \lg^2 n) = 25n \left( \lg 5 + \frac{\lg n}{2} \right)^{\!5} + O(n \lg^2 n) \le g(n)$.

Applicando ora il corollario 6.2 otteniamo: $T(n) = O(n \lg^5 n)$.

Esercizi

1) Consideriamo il problema di calcolare il prodotto di n interi $a_1, a_2, \ldots, a_n$ tali che $a_i \in \{1, 2, \ldots, n\}$ per ogni $i = 1, 2, \ldots, n$. Mantenendo gli n valori di input in un vettore $A = (A[1], A[2], \ldots, A[n])$ di variabili globali, possiamo risolvere il problema chiamando la procedura Prodotto(1, n), definita nel modo seguente per ogni coppia di interi i, j, $1 \le i \le j \le n$:

Procedure Prodotto(i, j)
  if i = j then return A[i]
  else begin
    $k := \lfloor \frac{i+j}{2} \rfloor$
    a := Prodotto(i, k)
    b := Prodotto(k + 1, j)
    return c = a · b
  end

a) Assumendo il criterio di costo uniforme, determinare l'ordine di grandezza del tempo di calcolo richiesto dalla procedura su un input di dimensione n nel caso peggiore.
b) Svolgere l'esercizio richiesto al punto a) assumendo il criterio di costo logaritmico.

2) Consideriamo il problema di calcolare l'espressione $b = a_1 + 2a_2 + 2^2 a_3 + \cdots + 2^{n-1} a_n$ su input $a_1, a_2, \ldots, a_n$ tali che $a_i \in \{1, 2, \ldots, n\}$ per ogni $i = 1, 2, \ldots, n$.
Tale calcolo può essere eseguito dalla seguente procedura:

begin
  b := $a_1$
  for j = 2, 3, \ldots, n do b := b + $2^{j-1} a_j$
end

a) Assumendo il criterio di costo logaritmico, determinare in funzione di n l'ordine di grandezza del tempo di calcolo e dello spazio di memoria richiesti dalla procedura nel caso peggiore.
b) Descrivere un algoritmo del tipo divide et impera che risolva il problema in tempo O(n) assumendo il criterio di costo uniforme.
c) Assumendo il criterio di costo logaritmico, determinare in funzione di n l'ordine di grandezza del tempo di calcolo e dello spazio di memoria richiesti nel caso peggiore dall'algoritmo descritto al punto b).

3) Considera la seguente procedura ricorsiva che su input $n \in \mathbb{N}$, n > 0, restituisce il valore F(n):

Function F(n)
begin
  if n = 1 then return 1
  else begin
    $i := \lfloor \frac{n}{2} \rfloor$
    $j := \lceil \frac{n}{2} \rceil$
    a := F(i) + i · F(j)
    return a
  end
end

a) Assumendo il criterio di costo uniforme, determinare al crescere di n l'ordine di grandezza del tempo di calcolo e dello spazio di memoria richiesti dalla procedura su input n.
b) Assumendo il criterio di costo logaritmico, determinare al crescere di n l'ordine di grandezza del tempo di calcolo e dello spazio di memoria richiesti dalla procedura su input n.

Capitolo 11

Programmazione dinamica

Come abbiamo visto nel capitolo precedente, gli algoritmi basati sul metodo divide et impera suddividono l'istanza di un problema in sottoistanze di ugual dimensione e quindi risolvono queste ultime in maniera indipendente, solitamente mediante chiamate ricorsive. Vi sono però problemi che possono essere decomposti in sottoproblemi definiti su istanze di dimensione diversa e in questo caso il metodo divide et impera non può essere applicato (per lo meno secondo la formulazione che abbiamo descritto nel capitolo precedente).
Inoltre, in molti casi interessanti, tali sottoproblemi presentano forti dipendenze tra loro: così un'eventuale procedura ricorsiva che richiama semplicemente se stessa su tutte le sottoistanze per le quali è richiesta la soluzione porta ad eseguire più volte gli stessi calcoli. In alcuni casi il tempo dedicato alla ripetizione di computazioni già eseguite è così elevato da rendere l'algoritmo assolutamente inefficiente. Un metodo solitamente applicato per risolvere problemi di questo tipo è quello della programmazione dinamica. Intuitivamente esso consiste nel determinare, per una data istanza i di un problema, l'insieme S(i) di tutte le sottoistanze da cui dipende il calcolo della soluzione per i. Con lo stesso criterio si stabilisce poi una relazione di dipendenza tra i vari elementi di S(i). Quindi, rispettando tale dipendenza, si ricavano le soluzioni delle sottoistanze di S(i) a partire da quelle di dimensione minore; i risultati parziali man mano ottenuti vengono conservati in opportune aree di memoria e utilizzati per determinare le soluzioni relative a istanze di dimensione via via crescente. Così ogni sottoistanza del problema viene risolta una volta sola e il risultato è utilizzato tutte le volte che occorre, senza dover ripetere il calcolo. In questo modo, ad un modesto aumento dello spazio richiesto per l'esecuzione dell'algoritmo corrisponde spesso una drastica riduzione dei tempi di calcolo.

11.1 Un esempio semplice

Un esempio di algoritmo che calcola ripetutamente le soluzioni parziali del problema dato è fornito dalla procedura per determinare i numeri di Fibonacci descritta nella sezione 5.1. Su input $n \in \mathbb{N}$ tale procedura calcola l'n-esimo numero di Fibonacci $f_n$ mediante il seguente programma ricorsivo:

Procedura FIB(n)
  if $n \le 1$ then return n
  else begin
    a := n − 1
    x := FIB(a)
    b := n − 2
    y := FIB(b)
    return (x + y)
  end

È chiaro che in questa procedura i vari termini della sequenza $f_0, f_1, \ldots$
$, f_{n-2}$ sono calcolati varie volte. Per esempio $f_{n-2}$ viene calcolato sia per determinare $f_n = f_{n-1} + f_{n-2}$, sia per determinare $f_{n-1} = f_{n-2} + f_{n-3}$. Il fenomeno poi cresce man mano che decrescono gli indici della sequenza, fino al punto che i primi termini vengono calcolati un numero esponenziale di volte, rendendo così l'algoritmo inutilizzabile anche per piccole dimensioni dell'ingresso. Abbiamo infatti già osservato (sezione 7.2) che questa procedura richiede un tempo di calcolo $\Theta\!\left(\left(\frac{\sqrt{5}+1}{2}\right)^{\!n}\right)$.

Il modo più semplice per risolvere il problema è proprio basato sulla programmazione dinamica. I numeri di Fibonacci vengono calcolati a partire da quelli di indice minore e quindi memorizzati in un vettore apposito $V = (V[1], V[2], \ldots, V[n])$; in questo modo essi sono calcolati una volta sola e quindi riutilizzati quando occorre:

Procedura DFIB(n)
begin
  V[0] := 0
  V[1] := 1
  a := 0
  b := 1
  k := 2
  while $k \le n$ do begin
    x := V[a]
    y := V[b]
    V[k] := x + y
    a := b
    b := k
    k := k + 1
  end
  return V[n]
end

Come si osserva immediatamente, il tempo di calcolo della procedura DFIB è $\Theta(n)$. DFIB calcola la stessa funzione di FIB in modo straordinariamente più efficiente. Con poca fatica, inoltre, possiamo ottimizzare l'algoritmo osservando che non è necessario mantenere in memoria un vettore di n elementi; infatti, per calcolare ogni $f_i$ è sufficiente ricordare i due coefficienti precedenti, cioè $f_{i-1}$ e $f_{i-2}$. Si ottiene così la seguente procedura:

Procedura OttFIB(n)
  if $n \le 1$ then return n
  else begin
    x := 0
    y := 1
    for k = 2, \ldots, n do begin
      t := y
      y := x + y
      x := t
    end
    return (y)
  end

Anche in questo caso abbiamo un tempo di calcolo $\Theta(n)$, mentre lo spazio di memoria richiesto si riduce a O(1).

11.2 Il metodo generale

Vogliamo ora descrivere in generale come opera la tecnica di programmazione dinamica. Consideriamo un algoritmo ricorsivo descritto dall'insieme di procedure $\{P_1, P_2, \ldots$
$, P_M\}$, in cui $P_1$ sia la procedura principale, e supponiamo per semplicità che ciascuna procedura abbia un solo parametro formale. Consideriamo ora la famiglia delle coppie $[P_k, x]$ dove k è un intero tale che $1 \le k \le M$ e x è un possibile valore del parametro formale di $P_k$. Diciamo che $[P_k, x]$ dipende da $[P_s, y]$ se l'esecuzione della procedura $P_k$ con valore x assegnato al parametro formale richiede almeno una volta la chiamata di $P_s$ con valore y assegnato al corrispondente parametro formale. Conveniamo che $[P_k, x]$ dipenda sempre da se stesso. Data ora la coppia $[P_1, z]$, consideriamo un ordine lineare $\langle L[P_1, z], < \rangle$ tale che:

1. $L[P_1, z] = \{[P_s, y] \mid [P_1, z] \text{ dipende da } [P_s, y]\}$, cioè $L[P_1, z]$ è l'insieme delle coppie da cui $[P_1, z]$ dipende.
2. Se $[P_k, x], [P_s, y] \in L[P_1, z]$ e $[P_k, x]$ dipende da $[P_s, y]$, allora $[P_s, y] < [P_k, x]$.

Si osservi che in generale si possono introdurre in $L[P_1, z]$ vari ordini lineari <, compatibili con la richiesta (2); supponiamo qui di averne fissato uno. Dato $\langle L[P_1, z], < \rangle$ chiameremo PRIMO l'elemento in $L[P_1, z]$ più piccolo rispetto all'ordinamento, mentre evidentemente l'elemento massimo è $[P_1, z]$. Dato $I \in L[P_1, z]$, porremo Succ(I) l'elemento successivo ad I nell'ordine totale: poiché Succ($[P_1, z]$) non risulta definito, essendo $[P_1, z]$ il massimo dell'ordine lineare, considereremo per completezza un nuovo elemento ND (non definito), ponendo Succ($[P_1, z]$) = ND.

L'algoritmo che implementa la procedura $P_1$ usando una tecnica di programmazione dinamica costruisce nella sua esecuzione un vettore V indiciato in $L[P_1, z]$, ritornando alla fine $V[P_1, z]$.
Procedura DP1()
(1) Definisce l'ordine lineare $\langle L[P_1, z], < \rangle$
(2) I := PRIMO
(3) while I ≠ ND do begin
(4)   Se $I = [P_j, x]$, esegui la procedura $P_j$ assegnando x al parametro formale e interpretando:
      (a) i comandi iterativi con l'usuale semantica
      (b) le eventuali chiamate del tipo $b := P_s(l)$ come $b := V[P_s, l]$
      (c) le istruzioni del tipo return E come $V[I] := E$
(5)   U := I; I := Succ(I)
    end
(6) return V[U]

Adottando questo punto di vista, la programmazione dinamica non è altro che una diversa semantica operazionale della programmazione ricorsiva, anche se c'è qualche grado di libertà nella definizione dell'ordinamento lineare su $L[P_1, z]$. In conclusione, per risolvere un problema usando il metodo illustrato dobbiamo anzitutto considerare un algoritmo definito da una o più procedure ricorsive e fissare un naturale ordine lineare sulle chiamate di tali procedure. Quindi possiamo applicare il metodo descritto nello schema precedente, introducendo eventualmente qualche ulteriore miglioramento che tenga conto delle simmetrie e delle caratteristiche del problema.

Abbiamo già mostrato come il metodo appena illustrato può essere utilizzato per calcolare i numeri di Fibonacci. Nelle sezioni successive invece presentiamo alcuni classici algoritmi basati sulla programmazione dinamica che risolvono problemi nei quali le relazioni tra le soluzioni alle varie istanze sono più complicate e richiedono il mantenimento in memoria di tutte le soluzioni parziali fino al termine della computazione.

11.3 Moltiplicazione di n matrici

Come è noto, la moltiplicazione di due matrici $A = [A_{ik}]$ e $B = [B_{ik}]$, di dimensione $m \times q$ e $q \times p$ rispettivamente, fornisce la matrice $C = [C_{ik}]$ di dimensione $m \times p$ tale che, per ogni i, k,

$C_{ik} = \sum_{j=1}^{q} A_{ij} B_{jk}$.

Per valutare la complessità di calcolo di questa operazione assumiamo che le due matrici siano a componenti intere e teniamo conto, per semplicità, solo del numero di prodotti eseguiti.
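Per fissare le idee, uno schizzo in Python (non presente nel testo) del prodotto di due matrici $m \times q$ e $q \times p$ con il conteggio esplicito dei prodotti eseguiti, che risultano esattamente $m \cdot q \cdot p$:

```python
def prodotto_matrici(A, B):
    # C[i][k] = somma su j di A[i][j] * B[j][k]; conta i prodotti eseguiti
    m, q, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(m)]
    conta = 0
    for i in range(m):
        for k in range(p):
            for j in range(q):
                C[i][k] += A[i][j] * B[j][k]
                conta += 1
    return C, conta

# matrice 2x2 per matrice 2x3: 2*2*3 = 12 prodotti
C, conta = prodotto_matrici([[1, 2], [3, 4]], [[5, 6, 7], [8, 9, 10]])
```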
Di conseguenza, possiamo supporre che la moltiplicazione delle due matrici sopra considerate richieda $m \cdot q \cdot p$ operazioni elementari. Il problema che si vuole affrontare è quello di determinare il minimo numero di operazioni necessario a calcolare il prodotto $A_1 \cdot A_2 \cdots A_n$ di n matrici $A_1, A_2, \ldots, A_n$, sapendo che $A_k$ è di dimensione $r_{k-1} \times r_k$ per ogni $k = 1, 2, \ldots, n$.

Siano ad esempio A, B, C matrici rispettivamente di dimensione $3 \times 5$, $5 \times 10$, $10 \times 2$. Il prodotto $A \cdot B \cdot C$ può essere eseguito nei due modi diversi $(A \cdot B) \cdot C$ o $A \cdot (B \cdot C)$, che forniscono lo stesso risultato con un diverso numero di operazioni: $(A \cdot B) \cdot C$ richiede $3 \cdot 5 \cdot 10 + 3 \cdot 10 \cdot 2 = 210$ operazioni, mentre $A \cdot (B \cdot C)$ ne richiede $3 \cdot 5 \cdot 2 + 5 \cdot 10 \cdot 2 = 130$. Risulta conveniente allora applicare il secondo procedimento.

Tornando al problema iniziale, indichiamo con M[k, s] il numero minimo di operazioni necessario a calcolare $A_k \cdots A_s$. Osservando che $A_k \cdots A_s = (A_k \cdots A_j) \cdot (A_{j+1} \cdots A_s)$ per $k \le j < s$, una semplice procedura ricorsiva per il calcolo di M[k, s] è suggerita dalla seguente regola:

$M[k, s] = \begin{cases} 0 & \text{se } k = s \\ \min_{k \le j < s} \{M[k, j] + M[j+1, s] + r_{k-1} r_j r_s\} & \text{altrimenti} \end{cases}$

La procedura è la seguente:

Procedura COSTO[k, s]
  if k = s then return 0
  else begin
    m := ∞
    for j = k, k+1, \ldots, s−1 do begin
      A := COSTO[k, j]
      B := COSTO[j+1, s]
      if $(A + B + r_{k-1} r_j r_s) < m$ then $m := A + B + r_{k-1} r_j r_s$
    end
    return m
  end

Il programma che risolve il problema è allora:

MOLT-MAT
begin
  for $0 \le k \le n$ do READ($r_k$)
  z := COSTO[1, n]
  write z
end

L'implementazione diretta del precedente algoritmo ricorsivo porta a tempi di calcolo esponenziali; applichiamo allora la tecnica di programmazione dinamica. Poiché la procedura COSTO richiama solo se stessa, potremo senza ambiguità scrivere L[k, s] anziché L(COSTO[k, s]). Poiché [k, s] dipende da [k′, s′] se $k \le k' \le s' \le s$, si ha $L[1, n] = \{[k, s] \mid 1 \le k \le s \le n\}$.
Un possibile ordine lineare compatibile con la precedente nozione di dipendenza è il seguente: $[k, s] \le [k', s']$ se e solo se $s - k < s' - k'$, oppure $s' - k' = s - k$ e $k \le k'$. Otteniamo in questo caso:

PRIMO = [1, 1]

$\mathrm{Succ}[k, s] = \begin{cases} [k+1, s+1] & \text{se } s < n \\ [1,\, n - k + 2] & \text{se } s = n \end{cases}$

ND = [1, n + 1]

La procedura di programmazione dinamica risulta allora:

Procedura DCOSTO[1, n]
begin
  [k, s] := [1, 1]
  while [k, s] ≠ [1, n+1] do begin
    if k = s then V[k, s] := 0
    else begin
      m := ∞
      for j = k, k+1, \ldots, s−1 do begin
        A := V[k, j]
        B := V[j+1, s]
        if $(A + B + r_{k-1} r_j r_s) < m$ then $m := A + B + r_{k-1} r_j r_s$
      end
      V[k, s] := m
    end
    u := [k, s]
    [k, s] := Succ[k, s]
  end
  return V[u]
end

Per quanto riguarda il tempo di calcolo (con criterio uniforme) si osserva che il ciclo while viene percorso tante volte quante sono le possibili coppie [k, s] con $1 \le k \le s \le n$. Fissata la coppia [k, s], l'istruzione più costosa nel ciclo while è il ciclo for, che esegue $(s - k) \cdot O(1)$ passi, mentre il tempo richiesto dalle altre istruzioni è al più costante. Questo significa che ogni esecuzione del ciclo while richiede un tempo $\Theta(s - k)$, dove [k, s] è l'elemento corrente. Osserva che vi sono n−1 elementi [k, s] tali che s−k = 1, ve ne sono n−2 tali che s−k = 2 e così via; infine vi è un solo elemento [k, s] tale che s−k = n−1. Quindi il tempo complessivo è dato dalla somma

$\sum_{[k,s]} \Theta(s-k) = \sum_{i=1}^{n-1} (n-i)\,\Theta(i) = \Theta(n^3)$.

Il tempo complessivo è allora $\Theta(n^3)$, mentre lo spazio è essenzialmente quello richiesto per memorizzare il vettore V[k, s], quindi $\Theta(n^2)$.

11.4 Chiusura transitiva

Il problema che consideriamo in questa sezione riguarda il calcolo della chiusura transitiva di un grafo. Dato un grafo orientato $G = \langle V, E \rangle$, si vuole determinare il grafo (orientato) $G^* = \langle V, E^* \rangle$ tale che, per ogni coppia di nodi $u, v \in V$, esiste il lato (u, v) in $G^*$ se e solo se esiste in G un cammino da u a v.
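Prima di proseguire, torniamo un momento alla procedura DCOSTO della sezione 11.3: eccone uno schizzo in Python (non presente nel testo), in cui il vettore V è reso con un dizionario indicizzato dalle coppie (k, s) e l'ordine lineare è percorso per diagonali s−k crescenti:

```python
def dcosto(r):
    # r = (r_0, r_1, ..., r_n): la matrice A_k ha dimensione r_{k-1} x r_k
    n = len(r) - 1
    V = {}
    for d in range(n):                  # diagonali: d = s - k crescente
        for k in range(1, n - d + 1):
            s = k + d
            if k == s:
                V[(k, s)] = 0
            else:
                V[(k, s)] = min(V[(k, j)] + V[(j + 1, s)] + r[k - 1] * r[j] * r[s]
                                for j in range(k, s))
    return V[(1, n)]

costo = dcosto([3, 5, 10, 2])   # esempio del testo: A 3x5, B 5x10, C 10x2
```

Sull'esempio del testo la procedura restituisce 130, il costo della parentesizzazione $A \cdot (B \cdot C)$.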
Un algoritmo classico per risolvere il problema è basato sul calcolo di una famiglia di coefficienti booleani che assumono valore 1 o 0 a seconda che esista o meno un cammino tra due nodi passante per un certo insieme di vertici. Siano $v_1, v_2, \ldots, v_n$ i nodi del grafo G. L'idea è quella di verificare, per ogni coppia di nodi $v_i, v_j$, se esiste un lato da $v_i$ a $v_j$ oppure un cammino da $v_i$ a $v_j$ passante per il nodo $v_1$; poi, se esiste un cammino da $v_i$ a $v_j$ passante al più per nodi di indice 1 o 2; quindi se esiste un cammino da $v_i$ a $v_j$ passante per nodi di indice minore o uguale a 3, e così via. Si tratta quindi di eseguire n cicli: al k-esimo ciclo si verifica, per ogni coppia di nodi $v_i, v_j$, se esiste un cammino da $v_i$ a $v_j$ passante per nodi di indice minore o uguale a k (escludendo i due valori i e j). Tale verifica può essere eseguita tenendo conto dei risultati forniti dal ciclo precedente. Infatti, esiste un cammino da $v_i$ a $v_j$ passante per nodi di indice minore o uguale a k se e solo se si verifica uno dei fatti seguenti:

1. esiste un cammino da $v_i$ a $v_j$ passante per nodi di indice minore o uguale a k−1, oppure
2. esiste un cammino da $v_i$ a $v_k$ e uno da $v_k$ a $v_j$, entrambi passanti per vertici di indice minore o uguale a k−1.

Per ogni coppia di indici $i, j \in \{1, 2, \ldots, n\}$ definiamo la famiglia di coefficienti $C^k_{ij}$, dove $k \in \{0, 1, 2, \ldots, n\}$, nel modo seguente:

$C^0_{ij} = \begin{cases} 1 & \text{se } (v_i, v_j) \in E, \\ 0 & \text{altrimenti,} \end{cases}$

mentre, per ogni $k \in \{1, 2, \ldots, n\}$,

$C^k_{ij} = \begin{cases} 1 & \text{se } (v_i, v_j) \in E \text{ oppure esiste in } G \text{ un cammino da } v_i \text{ a } v_j \text{ che passa per nodi di indice } t \text{ tale che } t \le k, \\ 0 & \text{altrimenti.} \end{cases}$

Nota che $[C^0_{ij}]$ coincide di fatto con la matrice di adiacenza del grafo di ingresso. Per il ragionamento precedente i coefficienti $C^k_{ij}$, al variare di i e j in $\{1, 2, \ldots, n\}$, sono legati ai valori $C^{k-1}_{ij}$ dalla seguente equazione:

$C^k_{ij} = C^{k-1}_{ij} \lor (C^{k-1}_{ik} \land C^{k-1}_{kj})$   (11.1)

Chiaramente $C^n_{ij} = 1$ se e solo se in G esiste un cammino da $v_i$ a $v_j$. Possiamo allora facilmente
Possiamo allora facilmente Chiaramente Cij i j 0 e quindi, usando lequazione (11.1), descrivere un algorimo che calcola inizialmente i coefficienti Cij k , con i, j = 1, 2, . . . , n, possono essere tutti i successivi: per ogni k = 1, 2, . . . , n tutti i coefficienti Cij k1 ottenuti dai Cij . Questo calcolo puo essere ovviamente eseguito mantenendo due matrici di dimensione k1 k . Tuttavia, osserviamo che per n n, una per conservare i valori Cij , laltra per i corrispondenti Cij ogni i, j, k valgono le seguenti identita : k1 k1 k k Cik = Cik , Ckj = Ckj . CAPITOLO 11. PROGRAMMAZIONE DINAMICA 161 Questo significa che possiamo usare una sola matrice per mantenere entrambe le famiglie di coefficienti. Lalgoritmo complessivo e quindi descritto dalla seguente procedura che calcola la matrice di coefficienti booleani [Cij ]i,j=1,2,...,n . Tale matrice coincide inizialmente con la matrice di adiacenza del grafo G mentre, al termine della computazione, rappresenta la matrice di adiacenza della sua chiusura transitiva G . begin for i = 1, 2, . . . , n do for j = 1, 2, . . . , n do if (vi , vj ) E then Cij := 1 else Cij := 0 for k = 1, 2, . . . , n do for i = 1, 2, . . . , n do for j = 1, 2, . . . , n do if Cij = 0 then Cij := Cik Ckj end E facile verificare che, assumendo il criterio di costo uniforme, lalgoritmo richiede un tempo di calcolo (n3 ) e uno spazio di memoria (n2 ) su ogni grafo di ingresso di n nodi. 11.5 Cammini minimi Un altro problema classico che si puo risolvere mediante programmazione dinamica riguarda il calcolo dei cammini di peso minimo che connettono i nodi in un grafo pesato. Consideriamo un grafo diretto G = hV, Ei e una funzione costo w : E Q tale che w(e) 0 per ogni e E. Per ogni cammino ` in G, ` = (v1 , v2 , . . . 
$, v_m)$, chiamiamo peso (o costo) di $\ell$ la somma dei costi dei suoi lati:

$c(\ell) = \sum_{i=1}^{m-1} w(v_i, v_{i+1})$

Per ogni coppia di nodi $u, v \in V$ si vuole determinare un cammino $\ell$ di peso minimo tra tutti i cammini da u a v, insieme al suo costo $c(\ell)$. Siano $v_1, v_2, \ldots, v_n$ i nodi di G. Il problema può essere risolto calcolando le matrici $D = [d_{ij}]_{i,j=1,2,\ldots,n}$ e $P = [p_{ij}]_{i,j=1,2,\ldots,n}$ le cui componenti sono definite nel modo seguente: per ogni coppia di indici distinti i, j, se esiste un cammino da $v_i$ a $v_j$ in G, $d_{ij}$ rappresenta il costo del cammino di peso minimo che congiunge $v_i$ e $v_j$ e $p_{ij}$ è il predecessore di $v_j$ in tale cammino; se invece non esiste un cammino da $v_i$ a $v_j$, allora $d_{ij}$ assume un valore convenzionale $\infty$, superiore al peso di ogni cammino in G, e $p_{ij}$ assume il valore indefinito $\bot$.

Conoscendo la matrice P è possibile determinare un cammino di peso minimo da $v_i$ a $v_j$ semplicemente scorrendo in modo opportuno la i-esima riga della matrice: se $k_1 = p_{ij}$ allora $v_{k_1}$ è il penultimo nodo del cammino, se $k_2 = p_{ik_1}$ allora $v_{k_2}$ è il terzultimo, e così via fino a determinare il primo nodo, ovvero $v_i$.

Abbiamo così definito il problema per grafi con pesi non negativi. L'algoritmo che presentiamo tuttavia risolve il problema nel caso più generale di grafi con pesi di segno qualsiasi, purché questi non formino cicli di costo negativo. Osserviamo che se esiste un ciclo di peso negativo lo stesso problema non è ben definito.

Il metodo utilizzato per determinare la soluzione è simile a quello descritto nella sezione precedente per calcolare la chiusura transitiva di un grafo. Per ogni tripla di indici $i, j, k \in \{1, 2, \ldots, n\}$ definiamo il coefficiente $c^k_{ij}$ come il costo del cammino di peso minimo da $v_i$ a $v_j$ passante per nodi di indice al più uguale a k (esclusi i e j). Chiaramente $d_{ij} = c^n_{ij}$.
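Le due procedure di questo capitolo sui grafi (la chiusura transitiva della sezione 11.4 e il calcolo delle matrici D e P appena definite, il cui sviluppo sui coefficienti $c^k_{ij}$ prosegue nel testo) si possono abbozzare in Python così; schizzo illustrativo, non presente nel testo:

```python
INF = float('inf')

def chiusura_transitiva(n, archi):
    # sezione 11.4: C parte dalla matrice di adiacenza (indici 1..n)
    C = [[0] * (n + 1) for _ in range(n + 1)]
    for (i, j) in archi:
        C[i][j] = 1
    for k in range(1, n + 1):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                if C[i][j] == 0:
                    C[i][j] = C[i][k] and C[k][j]
    return C

def cammini_minimi(n, archi):
    # archi: dizionario {(i, j): w(v_i, v_j)}; restituisce costi d e predecessori p
    d = [[INF] * (n + 1) for _ in range(n + 1)]
    p = [[None] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][i] = 0
        p[i][i] = i
    for (i, j), w in archi.items():
        d[i][j] = w
        p[i][j] = i
    for k in range(1, n + 1):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
                    p[i][j] = p[k][j]
    return d, p

C = chiusura_transitiva(4, [(1, 2), (2, 3), (3, 1)])
d, p = cammini_minimi(4, {(1, 2): 1, (2, 3): 2, (1, 3): 5, (3, 4): 1})
```

Nel secondo esempio il cammino minimo da $v_1$ a $v_3$ passa per $v_2$ (costo 1+2 = 3, inferiore al lato diretto di costo 5).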
Inoltre definiamo i coefficienti $c^0_{ij}$ nel modo seguente:

$c^0_{ij} = \begin{cases} w(v_i, v_j) & \text{se } i \ne j \text{ e } (v_i, v_j) \in E \\ 0 & \text{se } i = j \\ \infty & \text{altrimenti} \end{cases}$

Anche per questa famiglia di coefficienti possiamo definire un'equazione, analoga alla (11.1), che permette di calcolare i valori $c^k_{ij}$, per $i, j = 1, 2, \ldots, n$, conoscendo i $c^{k-1}_{ij}$:

$c^k_{ij} = \min\{c^{k-1}_{ij},\ c^{k-1}_{ik} + c^{k-1}_{kj}\}$.   (11.2)

Basta osservare infatti che se $\ell$ è un cammino di peso minimo da $v_i$ a $v_j$ passante per nodi di indice minore o uguale a k, allora si verifica uno dei due fatti seguenti:

1. $\ell$ passa solo per nodi di indice minore o uguale a k−1, oppure
2. $\ell$ è composto da due cammini che vanno rispettivamente da $v_i$ a $v_k$ e da $v_k$ a $v_j$, entrambi di peso minimo tra tutti i cammini (congiungenti i rispettivi estremi) passanti per nodi di indice minore o uguale a k−1.

Nel primo caso il costo di $\ell$ è $c^{k-1}_{ij}$, mentre nel secondo è $c^{k-1}_{ik} + c^{k-1}_{kj}$; inoltre in quest'ultimo il predecessore di $v_j$ in $\ell$ equivale al predecessore di $v_j$ nel cammino di costo minimo da $v_k$ a $v_j$ passante per nodi di indice minore o uguale a k−1.

Applicando l'equazione (11.2) possiamo allora descrivere un algoritmo del tutto simile a quello presentato nella sezione precedente. Anche in questo caso possiamo limitarci a mantenere una sola matrice di valori $c^k_{ij}$, poiché valgono le identità $c^{k-1}_{ik} = c^k_{ik}$ e $c^{k-1}_{kj} = c^k_{kj}$ per ogni tripla di indici i, j, k. Così l'algoritmo calcola la matrice $C = [c_{ij}]$ dei costi e quella $B = [b_{ij}]$ dei predecessori dei cammini minimi che, al termine della computazione, coincideranno con le matrici D e P rispettivamente.

begin
  for i = 1, 2, \ldots, n do
    for j = 1, 2, \ldots, n do
      if i = j then begin $c_{ii} := 0$; $b_{ii} := i$ end
      else if $(v_i, v_j) \in E$ then begin $c_{ij} := w(v_i, v_j)$; $b_{ij} := i$ end
      else begin $c_{ij} := \infty$; $b_{ij} := \bot$ end
  for k = 1, 2, \ldots, n do
    for i = 1, 2, \ldots, n do
      for j = 1, 2, \ldots, n do
        if $c_{ik} + c_{kj} < c_{ij}$ then begin $c_{ij} := c_{ik} + c_{kj}$; $b_{ij} := b_{kj}$ end
end

Concludiamo osservando che, assumendo il criterio di costo uniforme, il tempo di calcolo richiesto dall'algoritmo su ogni grafo di input di n nodi è $\Theta(n^3)$, mentre lo spazio di memoria è $\Theta(n^2)$.

Capitolo 12

Algoritmi greedy

Una delle tecniche più semplici per la progettazione di algoritmi di ottimizzazione è chiamata tecnica greedy. Letteralmente questo termine significa "ingordo", ma nel seguito preferiremo tradurlo "miope". Intuitivamente, questo metodo costruisce la soluzione di un problema di ottimizzazione mediante una successione di passi, durante ciascuno dei quali viene scelto un elemento localmente migliore; in altre parole a ciascun passo la scelta migliore viene compiuta in un ambito limitato, senza controllare che il procedimento complessivo porti effettivamente al calcolo di una soluzione ottima per il problema. Questa strategia, se da un lato permette solitamente di ottenere algoritmi semplici e facilmente implementabili, dall'altro può portare alla definizione di procedure che non forniscono sempre la soluzione ottima. In questo capitolo vogliamo studiare le proprietà degli algoritmi di questo tipo e verificare in quali casi è possibile garantire che la soluzione costruita sia effettivamente la migliore.

12.1 Problemi di ottimizzazione

Per esprimere questi concetti in maniera precisa, introduciamo la nozione di sistema di indipendenza. Un sistema di indipendenza è una coppia $\langle E, F \rangle$ nella quale E è un insieme finito e F è una famiglia di sottoinsiemi di E chiusa rispetto all'inclusione; in altre parole, $F \subseteq 2^E$ (qui e nel seguito denoteremo con $2^E$ la famiglia di tutti i sottoinsiemi di E) e inoltre

$A \in F \ \land\ B \subseteq A \implies B \in F$

È evidente che, per ogni insieme finito E, la coppia $\langle E, 2^E \rangle$ forma un sistema di indipendenza.

Esempio 12.1 Sia G = (V, E) un grafo non orientato; diciamo che un insieme $A \subseteq E$ forma una foresta se il grafo (V, A) è privo di cicli.
Denotiamo quindi con $F_G$ l'insieme delle foreste di G, ovvero

$F_G = \{A \subseteq E \mid A \text{ forma una foresta}\}$

È facile verificare che la coppia $\langle E, F_G \rangle$ è un sistema di indipendenza. Viceversa, per ogni $A \subseteq E$, sia $V_A$ l'insieme dei vertici $v \in V$ che sono estremi di un lato in A e diciamo che A forma un albero se il grafo $(V_A, A)$ è connesso e privo di cicli: denotando con $T_G$ la famiglia degli alberi di G, ovvero $T_G = \{A \subseteq E \mid A \text{ forma un albero}\}$, si verifica facilmente che $\langle E, T_G \rangle$ non è un sistema di indipendenza.

Esempio 12.2 Sia sempre G = (V, E) un grafo non orientato. Diciamo che un insieme $A \subseteq E$ forma un matching se, per ogni coppia di lati distinti $\alpha, \beta \in A$, $\alpha$ e $\beta$ non hanno nodi in comune. Denotiamo inoltre con $M_G$ la famiglia dei sottoinsiemi di E che formano un matching. È facile verificare che $\langle E, M_G \rangle$ è un sistema di indipendenza.
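Per l'Esempio 12.1, l'appartenenza di un insieme di lati A alla famiglia $F_G$ (cioè il fatto che (V, A) sia privo di cicli) si può verificare con una semplice struttura union-find; schizzo illustrativo, non presente nel testo:

```python
def forma_una_foresta(archi):
    # A forma una foresta sse, aggiungendo i lati uno a uno,
    # non si chiude mai un ciclo (union-find elementare)
    padre = {}

    def trova(x):
        padre.setdefault(x, x)
        while padre[x] != x:
            padre[x] = padre[padre[x]]   # compressione di cammino
            x = padre[x]
        return x

    for (u, v) in archi:
        ru, rv = trova(u), trova(v)
        if ru == rv:          # u e v gia' connessi: il lato chiude un ciclo
            return False
        padre[ru] = rv
    return True
```

La proprietà di chiusura rispetto all'inclusione è evidente da questo test: togliendo lati a una foresta non si possono creare cicli.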
In this case, given an independence system ⟨E, F⟩, we say that a set A ∈ F is maximal if no B ∈ F different from A includes A; that is, for every B ∈ F, A ⊆ B implies A = B.

Instance: an independence system ⟨E, F⟩ and a weight function w : E → R⁺.
Solution: a maximal set A ∈ F of minimum weight (that is, such that w(A) ≤ w(B) for every maximal B ∈ F).

We now define the greedy algorithm for the maximization problem; it is given by the following procedure.

Procedure MIOPE(E, F, w)
begin
  S := ∅
  Q := E
  while Q ≠ ∅ do
  begin
    determine the element m of maximum weight in Q
    Q := Q − {m}
    if S ∪ {m} ∈ F then S := S ∪ {m}
  end
  return S
end

Given an instance of the optimization problem, that is, a triple E, F, w defined as above, this procedure outputs a set S that certainly belongs to F (hence a feasible solution), but which is not necessarily optimal: it may fail to be a set of maximum weight in F. At this level of generality two questions arise:

1. What is the running time of the greedy algorithm, i.e. how many computation steps are performed on an input set E of n elements? Note that the input of the algorithm consists only of the set E and the weight function w: the family F is assumed to be defined implicitly by some suitable rule, together with an available routine for testing whether a set A ⊆ E belongs to F. Indeed, F may contain a number of elements exponential in n, so listing it explicitly would be infeasible.

2. In which cases does the greedy algorithm actually return an optimal solution?

When the algorithm does not return an optimal solution, a third question arises, namely how to assess the quality of the solution it produces.
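Procedure MIOPE can be sketched in a few lines of Python. The independence oracle and the example family (subsets of cardinality at most 2, a uniform matroid) are illustrative assumptions, not taken from the text.

```python
# A direct transcription of procedure MIOPE (sketch): F is given implicitly
# by an oracle `independent`, as the text assumes.
def miope(E, independent, w):
    S = set()
    # Scanning elements by non-increasing weight realizes the repeated
    # "determine the element m of maximum weight in Q" step.
    for m in sorted(E, key=w, reverse=True):
        if independent(S | {m}):
            S.add(m)
    return S

# Example: E = {a, b, c}, F = all subsets with at most 2 elements
# (a uniform matroid, so by Rado's theorem the greedy result is optimal).
E = {"a", "b", "c"}
w = {"a": 5, "b": 3, "c": 2}.get
sol = miope(E, lambda A: len(A) <= 2, w)
print(sorted(sol))  # ['a', 'b'] -- the two heaviest elements
```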
This leads to the study of a class of algorithms, called approximation algorithms, which in general do not return the best solution of a given problem but produce one that approximates it. In many cases computing the optimal solution is too expensive in terms of time, and one settles for an approximate solution, provided of course that it can be computed in acceptable time.

We conclude by noting that an analogous algorithm can be given for the minimization problem. It is defined by the following procedure:

Procedure MIOPE-MIN(E, F, w)
begin
  S := ∅
  Q := E
  while Q ≠ ∅ do
  begin
    determine the element m of minimum weight in Q
    Q := Q − {m}
    if S ∪ {m} ∈ F then S := S ∪ {m}
  end
  return S
end

The remarks made about procedure MIOPE apply to this algorithm as well.

12.2 Analysis of the greedy procedures

We now analyze the running time of the algorithm MIOPE described in the previous section. Clearly, given the generality of the problem, the running time obtained will depend on the input independence system ⟨E, F⟩ and not merely on the size of E. A first solution represents the set E = {l₁, l₂, …, l_n} by an array Q = (Q[1], Q[2], …, Q[n]) where, initially, Q[i] = l_i for every i. Let SORT(Q) be a function that returns the array Q ordered so that the weights form a non-increasing sequence, that is, w(Q[1]) ≥ w(Q[2]) ≥ … ≥ w(Q[n]). The procedure can then be rewritten as follows:

Procedure MIOPE
begin
  S := ∅
  Q := (l₁, l₂, …, l_n)
  Q := SORT(Q)
  for i = 1, 2, …, n do
    if S ∪ {Q[i]} ∈ F then S := S ∪ {Q[i]}
end

As one sees, the procedure consists of two main steps: sorting an array of n elements, and n tests to check whether a set X ⊆ E belongs to F. The first step can clearly be carried out in time O(n log n).
The second step, however, depends on the particular independence system given as input. If we assume that membership X ∈ F can be tested in time C(n), the overall cost of these tests is at most O(n·C(n)). We can therefore conclude that procedure MIOPE requires time at most O(n log n + n·C(n)).

12.3 Matroids and Rado's theorem

In this section we give a partial answer to the second question raised above: in which cases does the greedy algorithm return an optimal solution? Our goal is to characterize the class of independence systems for which the greedy algorithm returns an optimal solution whatever the weight function. We will show (Rado's theorem) that an independence system has this property if and only if it is a matroid.

An independence system ⟨E, F⟩ is called a matroid if, for all A, B ∈ F with |B| = |A| + 1 (here |X| denotes the cardinality of a set X), there exists b ∈ B − A such that A ∪ {b} ∈ F. For example, it is easy to check that, for every finite set E, the pair ⟨E, 2^E⟩ is a matroid. The notion of matroid was introduced in 1935 by Birkhoff and others to generalize the concept of linear dependence. It has found fruitful applications in several areas, from graph theory to algorithms, and can be regarded as a bridge between linear algebra and combinatorics.

Example 12.3 Let E be a finite set of vectors of a vector space V, and let F be the family of subsets of E consisting of linearly independent vectors. One easily checks that ⟨E, F⟩ is a matroid, called a vector matroid.

The most important example of a matroid, however, is the independence system ⟨E, F_G⟩ defined in Example 12.1, as the following proposition shows.

Proposition 12.1 For every undirected graph G = (V, E), the pair ⟨E, F_G⟩ is a matroid.

Proof.
From Example 12.1 we know that ⟨E, F_G⟩ is an independence system, so we only have to prove the additional condition defined above. To this end, let A and B be two edge sets in F_G with |B| = |A| + 1, and let U = B − A. Clearly U ≠ ∅, say U = {b₁, b₂, …, b_m}. Suppose by contradiction that every b_i forms a cycle with A, that is, A ∪ {b_i} ∉ F_G for every i, and let C_i be such a cycle. Then C_i must contain an edge a_i ∈ A − B, for otherwise we would have a cycle entirely contained in B, contradicting B ∈ F_G. Moreover, we may choose a₂ ≠ a₁, for otherwise the set C₁ ∪ C₂ − {a₁} would again form a cycle in B. For the same reason we can choose a₃ different from a₂ and a₁, and so on: all the a_i, for i = 1, 2, …, m, can be chosen distinct from the previous ones. Consequently |A| ≥ m + |A ∩ B| = |U| + |A ∩ B| = |B|, which is absurd since we assumed |B| = |A| + 1.

The pair ⟨E, F_G⟩ is also called the graphic matroid. An interesting result, giving an algorithmic interpretation of matroids, is the following:

Theorem 12.2 (Rado) Given an independence system ⟨E, F⟩, the following statements are equivalent:
a) for every weight function w : E → R⁺, algorithm MIOPE returns an optimal solution of the maximization problem on input E, F, w;
b) ⟨E, F⟩ is a matroid.

Proof. We first show that if ⟨E, F⟩ is not a matroid then there is a weight function w : E → R⁺ for which the greedy algorithm does not return an optimal solution. Indeed, since ⟨E, F⟩ is not a matroid, there exist two sets A, B ∈ F such that, for some k ∈ N, |A| = k, |B| = k + 1, and moreover

for every b ∈ B − A, A ∪ {b} ∉ F.

We now define a weight function w as follows.
Choose α > 1 and set, for every x ∈ E:

w(x) = α  if x ∈ A,
w(x) = 1  if x ∈ B − A,
w(x) = 0  if x ∈ (A ∪ B)^c.

With this weight function the greedy algorithm returns a solution S consisting of all the elements of A (which, having the largest weight, are selected first) plus, possibly, a set of elements C ⊆ (A ∪ B)^c. Note that S can contain no element of B − A since, for every b ∈ B − A, we have A ∪ {b} ∉ F. Setting t = |A ∩ B|, we obtain

w(S) = w(A ∪ C) = w(A) + w(C) = |A|·α = kα,
w(B) = w(B − A) + w(A ∩ B) = (k + 1 − t) + tα.

It follows that

w(S) < w(B) ⟺ kα < k + 1 − t + tα ⟺ α < 1 + 1/(k − t).

Hence, if we choose α such that 1 < α < 1 + 1/(k − t), the solution S built by the greedy algorithm is not optimal.

Conversely, we now show that if ⟨E, F⟩ is a matroid then, for any weight function w : E → R⁺, the greedy algorithm returns an optimal solution. Let S = {b₁, b₂, …, b_n} be the solution returned by the algorithm, with w(b₁) ≥ w(b₂) ≥ … ≥ w(b_n), and let A = {a₁, a₂, …, a_m} be any element of F, with w(a₁) ≥ w(a₂) ≥ … ≥ w(a_m). First we check that m ≤ n. Indeed, if n < m then, since ⟨E, F⟩ is a matroid, there would exist a_j ∈ A − S with S ∪ {a_j} ∈ F. Moreover, every subset of S ∪ {a_j} would belong to F, and consequently the algorithm would not have discarded a_j but would have inserted it into the solution, returning the set S ∪ {a_j} instead of S. Thus m ≤ n, and we now show that w(a_i) ≤ w(b_i) for every i = 1, 2, …, m. Suppose not, and let k be the first index such that w(a_k) > w(b_k). Note that the set D = {b₁, b₂, …, b_{k−1}} belongs to F and |D| + 1 = |{a₁, a₂, …, a_k}|. Consequently, since ⟨E, F⟩ is a matroid, there exists an index j, 1 ≤ j ≤ k, such that a_j ∉ D and D ∪ {a_j} ∈ F. Since at its k-th step the greedy algorithm chooses the element of maximum weight among those available, we have w(b_k) ≥ w(a_j); on the other hand, since j ≤ k, we have w(a_j) ≥ w(a_k), hence w(b_k) ≥ w(a_j) ≥ w(a_k), contradicting the assumption w(a_k) > w(b_k).
This proves that w(A) ≤ w(S), hence the solution returned by the algorithm is optimal.

For the minimization problem, too, the greedy algorithm can be shown to return an optimal solution on matroids.

Corollary 12.3 If an independence system ⟨E, F⟩ is a matroid then, for every weight function w : E → R⁺, algorithm MIOPE-MIN returns an optimal solution of the minimization problem (on input E, F, w).

Proof. First, it is easy to check that in every matroid ⟨E, F⟩ the maximal sets A ∈ F all have the same cardinality; let m be this cardinality. Given a weight function w : E → R⁺, set p = max{w(x) | x ∈ E} and define w′ : E → R⁺ by w′(x) = p − w(x) for every x ∈ E. Then, for every maximal A ∈ F, we have w′(A) = mp − w(A), so w′(A) is maximum if and only if w(A) is minimum. By the previous theorem, procedure MIOPE on input E, F, w′ finds the set S ∈ F of maximum weight with respect to w′. But it is easy to check that S is also the output of procedure MIOPE-MIN on input E, F, w. By the above observation, S is therefore a maximal set in F of minimum weight with respect to w.

Exercises

1) Given an undirected graph G = ⟨V, E⟩, where V is the set of nodes and E the set of edges, define the following family of subsets of E:

F = {A ⊆ E | ∃ v ∈ V such that every edge ℓ ∈ A is incident to v}

By convention, assume ∅ ∈ F.
a) Is the pair ⟨E, F⟩ an independence system?
b) Is the pair ⟨E, F⟩ a matroid?
c) Consider the problem of finding the element of maximum weight in F, where an instance is a graph G with positive weights on the edges. Describe a greedy algorithm for this problem.
d) Does the algorithm of point c) always find an optimal solution?

2) Recall that in an undirected graph G = ⟨V, E⟩ (where V is the set of nodes and E the set of edges) a clique is a subset C ⊆ V such that, for every u, v ∈ C, if u ≠ v then {u, v} ∈ E.
Let F_G be the family of all cliques of G, that is,

F_G = {A ⊆ V | for all u, v ∈ A, u ≠ v implies {u, v} ∈ E}

a) Is the pair ⟨V, F_G⟩ an independence system?
b) Is the pair ⟨V, F_G⟩ a matroid?
c) Given an undirected graph G = ⟨V, E⟩ and a weight function w : V → R⁺, every set A ⊆ V has a weight w(A) defined by w(A) = Σ_{x∈A} w(x). Describe a greedy procedure that tries to find a set C ∈ F_G of maximum weight in F_G. Is the solution it produces always optimal?

3) Carry out the analysis of the greedy algorithm for the minimization problem introduced in Section 12.1.

12.4 Kruskal's algorithm

A classical graph optimization problem asks for a minimum-weight spanning tree of a given graph. In this section we present one of the main algorithms for this problem, known as Kruskal's algorithm. This procedure is both an important example of a greedy algorithm and an application of the UNION and FIND operations studied in Section 9.7. Recall first that a spanning tree of a connected undirected graph G = ⟨V, E⟩ is a tree T, subgraph of G, that connects all nodes of the graph; formally, it is a tree T = ⟨V′, E′⟩ with V′ = V and E′ ⊆ E. If G is equipped with a weight function w : E → Q, the cost of a spanning tree T = ⟨V, E′⟩ of G is simply the sum of the costs of its edges:

w(T) = Σ_{ℓ∈E′} w(ℓ)

The problem we consider is that of finding a minimum-weight spanning tree of G (note that in general the solution is not unique). Formally:

Instance: a connected undirected graph G = ⟨V, E⟩ and a weight function w : E → Q.
Solution: a set of edges S ⊆ E such that ⟨V, S⟩ is a minimum-weight spanning tree of G.
The idea of the algorithm is to build the solution S by examining the edges of G one after another in non-decreasing order of weight: initially S = ∅, and the current edge l is added only if the new set S ∪ {l} creates no cycle; otherwise l is discarded and the next edge is examined. Thus, during the execution, S is a cycle-free subset of the edges of G, that is, a forest F, which automatically defines a partition P of the node set V in which two vertices lie in the same block if and only if they belong to the same tree of F. Now, to check whether S ∪ {l} creates a cycle it suffices to take the two endpoints u and v of l and test whether the blocks of P containing u and v coincide. If they do, adding l to the partial solution S would close a cycle, so l must be discarded; otherwise l must be added to the solution, since it joins two disjoint trees of the forest (that is, it merges two distinct blocks of the partition).

The procedure just outlined manipulates three main structures:

1. a set S that will contain the edges of the solution;
2. a subset Q of the edges of the graph, holding the edges not yet examined, from which we repeatedly extract the edge of minimum weight;
3. a partition P of the vertex set V, recording the groups of nodes connected to one another by edges of S. Initially P is the identity partition ID and, whenever an edge (u, v) is added to the solution S, the two blocks of P containing the vertices u and v are merged.

Writing V = {v₁, v₂, …, v_n} for the node set of the graph, the algorithm is described formally by the following procedure:

Procedure Kruskal(V, E, w)
begin
  S := ∅
  Q := E
  P := {{v₁}, {v₂}, …, {v_n}}
  while P contains more than one block do
  begin
    determine the edge (u, v) of minimum weight in Q
    delete (u, v) from Q
    a := FIND(u)
    b := FIND(v)
    if a ≠ b then
    begin
      UNION(a, b)
      add (u, v) to S
    end
  end
end

To illustrate how the algorithm works, consider a weighted undirected graph on the nodes a, b, …, i.

[Figure: a connected undirected graph on the nodes a–i; each edge is labeled with its weight.]

The following table shows, for each iteration of the while loop, the minimum-weight edge (u, v) extracted from Q, the partial solution S after its possible insertion, and the corresponding partition P; the first row shows the initial values.

(u, v)   S                                                        P
—        ∅                                                        {a},{b},{c},{d},{e},{f},{g},{h},{i}
(a,d)    (a,d)                                                    {a,d},{b},{c},{e},{f},{g},{h},{i}
(c,f)    (a,d),(c,f)                                              {a,d},{b},{c,f},{e},{g},{h},{i}
(d,e)    (a,d),(c,f),(d,e)                                        {a,d,e},{b},{c,f},{g},{h},{i}
(a,b)    (a,d),(c,f),(d,e),(a,b)                                  {a,b,d,e},{c,f},{g},{h},{i}
(g,h)    (a,d),(c,f),(d,e),(a,b),(g,h)                            {a,b,d,e},{c,f},{g,h},{i}
(h,f)    (a,d),(c,f),(d,e),(a,b),(g,h),(h,f)                      {a,b,d,e},{c,f,g,h},{i}
(b,e)    unchanged                                                {a,b,d,e},{c,f,g,h},{i}
(f,i)    (a,d),(c,f),(d,e),(a,b),(g,h),(h,f),(f,i)                {a,b,d,e},{c,f,g,h,i}
(h,i)    unchanged                                                {a,b,d,e},{c,f,g,h,i}
(a,e)    unchanged                                                {a,b,d,e},{c,f,g,h,i}
(d,g)    (a,d),(c,f),(d,e),(a,b),(g,h),(h,f),(f,i),(d,g)          {a,b,c,d,e,f,g,h,i}

Note that the edges (b,e), (a,e) and (h,i) are discarded because they would close a cycle if added to the solution. The correctness of the algorithm is a consequence of Rado's theorem.
Indeed, it is easy to check that the algorithm essentially coincides with procedure MIOPE-MIN applied to the independence system ⟨E, F_G⟩ defined in Example 12.1. By Proposition 12.1 we know that ⟨E, F_G⟩ is a matroid, and hence by Corollary 12.3 procedure MIOPE-MIN finds an optimal solution.

We now evaluate the running time and choose the most suitable data structures for implementing the algorithm. First, the set S can be represented trivially by a list, since the only operation performed on it is the insertion of new edges. On the partition P we must execute a sequence of UNION and FIND operations whose length is at most proportional to the number of edges of the input graph. Hence, if the graph G has m edges, using a balanced forest the UNION and FIND operations can be executed in O(m log m) steps in the worst case. Finally, on the set Q we must execute a sequence of MIN and DELETE operations. We may simply sort the elements of Q into a list or an array and scan them in non-decreasing order of weight; this requires Θ(m log m) steps. The same order of magnitude is obtained with an inverted heap, i.e. one in which the value of each internal node is less than or equal to that of its children. In this case the elements of Q need not be sorted initially: it suffices to build a heap, which takes time O(m). Here the root corresponds to the element of minimum weight, so the MIN operation takes constant time, while each DELETE operation takes time O(log m), since rebuilding the heap requires relocating a leaf to its correct position, starting from the root and traversing a path of length at most the height of the tree.
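Putting these pieces together, a compact sketch of the procedure might look as follows (assumptions: Python; the edges are sorted rather than kept in a heap; FIND uses path compression and UNION uses ranks, a variant of the balanced forest mentioned above; the tiny graph and its weights are illustrative, not those of the figure).

```python
def kruskal(vertices, edges, w):
    """edges: iterable of pairs (u, v); returns the MST edge list S."""
    parent = {v: v for v in vertices}
    rank = {v: 0 for v in vertices}

    def find(x):                      # FIND with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):                  # UNION by rank (balanced forest)
        if rank[a] < rank[b]:
            a, b = b, a
        parent[b] = a
        if rank[a] == rank[b]:
            rank[a] += 1

    S = []
    for u, v in sorted(edges, key=w):  # non-decreasing weight
        a, b = find(u), find(v)
        if a != b:                     # the edge joins two distinct trees
            union(a, b)
            S.append((u, v))
    return S

# Small sanity check on a triangle:
W = {("a", "b"): 1, ("b", "c"): 2, ("a", "c"): 3}
mst = kruskal({"a", "b", "c"}, W, W.get)
print(mst)                         # [('a', 'b'), ('b', 'c')]
print(sum(W[e] for e in mst))      # 3
```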
Since the complexity of the algorithm reduces to the cost of the operations performed on the data structures used, we can conclude that the procedure described above runs in O(m log m) steps in the worst case.

12.5 Prim's algorithm

Not every algorithm that follows a greedy strategy fits the general theory presented in the previous sections. One such algorithm is Prim's algorithm for computing a minimum spanning tree of a graph. It differs from Kruskal's procedure of the previous section in that the spanning tree is grown from a source node, at each step adding the minimum-weight edge that extends the set of nodes reached by the solution. Thus the partial solution maintained by the algorithm is not a forest, as in Kruskal's algorithm, but a single tree rooted at the source, which initially consists of the source alone. New nodes are added to this tree one at a time, choosing each among those at minimum distance from one of its vertices; the process continues until all nodes have been connected to the solution.

An instance of the problem is therefore given by a connected undirected graph G = ⟨V, E⟩ equipped with a weight function w : E → Q, together with a source node s ∈ V. As usual, we represent the graph by associating with each vertex v ∈ V the list L(v) of its adjacent nodes. The solution is produced as the set T of edges of the spanning tree obtained. Essentially, the algorithm maintains a partition of the nodes of the graph into two sets, which we call S and R: S consists of the vertices already reached by the solution, R of all the others.
For each node v in R, the procedure records the weight of the cheapest edge joining v to a node in S; among all these edges it picks the one of minimum weight, adds it to the solution, and so extends the set of reached vertices. To describe this process, for every node v ∈ R define the distance of v from S by

dist(v) = min{w({v, z}) | z ∈ S}  if {z ∈ S | {v, z} ∈ E} ≠ ∅,
dist(v) = ∞                       otherwise.

Moreover, denote by vicino(v) (the "neighbour" of v) the node z ∈ S such that w({v, z}) = dist(v), with the convention that vicino(v) is undefined (⊥) when dist(v) = ∞. The values dist(v) and vicino(v) can be represented efficiently by two arrays indexed by the elements of V. Finally, let D be the set of nodes in R at finite distance from S:

D = {v ∈ R | dist(v) ≠ ∞}

This set is kept up to date by the algorithm, and from it (together with the values dist and vicino) the procedure repeatedly extracts the node at minimum distance from the solution. The algorithm can now be described by the following procedure, which actually uses only the set D, the values dist and vicino, and the set T holding the solution being built.

Procedure Prim(V, E, w, s)
begin
  T := ∅
  for v ∈ V do
  begin
    dist(v) := ∞
    vicino(v) := ⊥
  end
  D := {s}
  dist(s) := 0
  while D ≠ ∅ do
  begin
    (1) determine the element v ∈ D with minimum dist(v)
    (2) delete v from D
    add the edge {v, vicino(v)} to T
    for u ∈ L(v) do
      if dist(u) = ∞ then
      begin
        add u to D
        dist(u) := w({v, u})
        (3) vicino(u) := v
      end
      else if u ∈ D and w({v, u}) < dist(u) then
      begin
        dist(u) := w({v, u})
        (4) vicino(u) := v
      end
  end
  output T − {{s, ⊥}}
end

To illustrate how the algorithm works, consider a weighted undirected graph on the nodes a, …, f.

[Figure: a connected undirected graph on the nodes a–f; each edge is labeled with its weight.]

Each edge carries its weight, and the source node is assumed to be d.
The following table traces the execution of the algorithm on this instance, showing after each iteration of the while loop the node v extracted from D, the sets R and D, the vector dist (over a, b, c, d, e, f; a dash denotes ∞ or a value no longer relevant), and the solution T.

v   R               dist (a,b,c,d,e,f)     D         T
d   {a,b,c,e,f}     (5, −, −, −, 2, −)     {a,e}     {d,e}? — no edge yet
e   {a,b,c,f}       (1, 3, −, −, −, 4)     {a,b,f}   {e,d}
a   {b,c,f}         (−, 3, −, −, −, 4)     {b,f}     {e,d},{e,a}
b   {c,f}           (−, −, 5, −, −, 2)     {c,f}     {e,d},{e,a},{e,b}
f   {c}             (−, −, 5, −, −, −)     {c}       {e,d},{e,a},{e,b},{b,f}
c   ∅               (−, −, −, −, −, −)     ∅         {e,d},{e,a},{e,b},{b,f},{b,c}

(In the first row no edge has been added yet, since the fictitious edge {s, ⊥} is discarded.) We now choose the most suitable data structures for implementing the algorithm. First, the values dist and vicino can be stored in arrays indexed by the nodes v ∈ V. The solution T can simply be represented by a list, since the only operation performed on it is the insertion of a new element (which thus takes constant time). On the set D we must perform four operations: finding the node v at minimum distance (line (1) of the procedure), deleting it (line (2)), inserting a new element (line (3)), and updating the distance of a node already in D (line (4)).

We describe two different structures for implementing D. In the first, D is treated as a priority queue implemented by an inverted heap in which the value of each node v is its distance dist(v). The node at minimum distance is then at the root of the heap, and both deletion of the minimum and insertion of a new element can be executed in logarithmic time. Note that the distance update of node u performed at line (4) must be accompanied by a relocation of u inside the heap; since its distance decreases, this amounts to moving u towards the root.
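The heap-based implementation just described can be sketched in Python. One assumption worth flagging: instead of relocating u inside the heap on a distance update (a decrease-key), this lazy-deletion variant pushes a fresh entry and skips stale ones when popped, which achieves the same O(m log n) bound. The edge weights below are those of the worked example as reconstructed above, and are otherwise illustrative.

```python
import heapq

def prim(adj, w, s):
    """adj: node -> list of adjacent nodes; w: frozenset edge -> weight.
    Returns the list T of tree edges (vicino(v), v) in insertion order."""
    dist = {v: float("inf") for v in adj}
    vicino = {v: None for v in adj}
    dist[s] = 0
    heap = [(0, s)]
    reached, T = set(), []
    while heap:
        _, v = heapq.heappop(heap)
        if v in reached:              # stale heap entry: skip it
            continue
        reached.add(v)
        if vicino[v] is not None:     # drop the fictitious edge {s, ⊥}
            T.append((vicino[v], v))
        for u in adj[v]:
            c = w[frozenset((v, u))]
            if u not in reached and c < dist[u]:
                dist[u] = c           # lines (3)/(4) of the procedure
                vicino[u] = v
                heapq.heappush(heap, (c, u))
    return T

# Edge weights of the worked example (source d), as reconstructed above:
w = {frozenset(e): c for e, c in [(("a", "d"), 5), (("d", "e"), 2),
     (("e", "a"), 1), (("e", "b"), 3), (("e", "f"), 4),
     (("b", "f"), 2), (("b", "c"), 5), (("c", "f"), 8)]}
adj = {x: [] for x in "abcdef"}
for e in w:
    u, v = tuple(e)
    adj[u].append(v)
    adj[v].append(u)
T = prim(adj, w, "d")
print(T)   # [('d', 'e'), ('e', 'a'), ('e', 'b'), ('b', 'f'), ('b', 'c')]
```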
Clearly, this relocation also takes time logarithmic in the number of elements in the heap. With these structures and the corresponding procedures, Prim's algorithm runs on a graph with n nodes and m edges in time O(m log n) (under the uniform-cost criterion).

Alternatively, we can implement the set D by an array D of n elements indexed by the nodes of the input graph G = (V, E), setting for every v ∈ V:

D[v] = ∞        if v ∈ R − D,
D[v] = dist(v)  if v ∈ D,
D[v] = 0        if v ∈ S.

With this representation, finding the node at minimum distance in D (line (1)) takes time Θ(n), since the whole array must be scanned; the other operations, however, take constant time. It is easy to check that with this structure the algorithm runs in time Θ(n²). Consequently, if the number of edges m is of the order of n² (that is, the input graph has many edges), this implementation is preferable to the previous one; if instead m ≪ n² (the input graph has few edges), using a heap for D is asymptotically more efficient.

We conclude by establishing the correctness of the algorithm just described. It rests on the following property.

Proposition 12.4 Let G = ⟨V, E⟩ be a connected undirected graph with a weight function w : E → Q. Let T be a minimum spanning tree of G, let U be a subtree of T, and let S be the set of nodes of U. Consider a minimum-cost edge {a, b} leaving S, i.e. with a ∈ S and b ∉ S. Then there exists a minimum spanning tree of G that contains U as a subtree and also contains the edge {a, b}.

Proof. If the edge {a, b} belongs to T, then T is the desired spanning tree. Otherwise T must contain a simple path joining a and b; this path initially runs among nodes of the set S and then leaves S to reach b.
Then this path must contain an edge {a′, b′} with a′ ∈ S and b′ ∉ S (possibly a = a′ or b = b′, but not both). Replacing the edge {a′, b′} of T with the edge {a, b} creates no cycle and yields a new spanning tree T′. Clearly T′ contains U as a subtree, and the weight of its edges is

w(T′) = w(T) − w({a′, b′}) + w({a, b}).

Since {a, b} has minimum weight among all edges leaving S, we have w({a, b}) ≤ w({a′, b′}), and the equation above gives w(T′) ≤ w(T). Since T is a minimum spanning tree, T′ is therefore a minimum spanning tree as well.

Note that the proposition also holds when the set S is reduced to a single node, the source. Applying the property just proved, one can then show by induction that the solution produced by the algorithm is a minimum spanning tree of the input graph.

Exercise

1) Using the observation above, prove the correctness of Prim's algorithm.

12.6 Dijkstra's algorithm

A computation quite similar to that of the previous section can be used to solve a shortest-path problem on graphs. We already met a related problem in Section 11.5, where an algorithm was presented for finding the minimum-weight paths between all pairs of nodes of a weighted graph. Here we tackle a simpler problem: finding, in a weighted graph, the minimum-cost paths joining a fixed source node to all the other vertices. Note that this problem differs substantially from that of finding a minimum-cost spanning tree, which was treated in the previous sections.
For example, consider the graph with nodes a, b, c, d and edges {a,b} of weight 4, {a,c} of weight 3, {b,c} of weight 1 and {c,d} of weight 2. Its minimum-weight spanning tree consists of the edges {a,c}, {b,c} and {c,d}, whereas the tree of minimum-weight paths connecting the source node a to the other nodes consists of the edges {a,b}, {a,c} and {c,d}.

The algorithm we present is known as Dijkstra's algorithm and is likewise based on a greedy technique. In this section we describe the procedure formally but omit the complexity analysis and the correctness proof, which can easily be developed along the lines sketched in the previous section. An instance of the problem consists of a directed graph G = ⟨V, E⟩, a source node s ∈ V, and a weight function w : E → Q with non-negative values (that is, w(ℓ) ≥ 0 for every ℓ ∈ E). Again we denote by L(v) the list of nodes z such that (v, z) ∈ E. For every v ∈ V we want to find the minimum-weight path in G from s to v, together with its weight. The computation can be carried out by determining the values pesocammino(v) (path weight) and predecessore(v) (predecessor), defined as follows:

pesocammino(v) = c  if a path from s to v exists in G and c is the weight of the minimum-cost path from s to v; ∞ if no path from s to v exists in G.

predecessore(v) = u  if a path from s to v exists in G and u is the node preceding v on the minimum-cost path from s to v; ⊥ if no path from s to v exists in G, or v = s.

The algorithm maintains a partition of the nodes of the graph into three subsets: the set S of vertices for which the solution has already been computed; the set D of nodes v not in S for which there is an arc from a vertex in S to v; and, finally, the set of all remaining nodes.
Moreover, for every v ∈ D the procedure keeps two values C(v) and P(v) up to date: the first is the weight of the minimum-cost path from s to v passing only through nodes of S; the second is the last node before v on that path. These values can be kept in suitable arrays. Initially S is empty and D contains only the source s; for every vertex v, P(v) = ⊥ and C(v) = ∞, except that C(s) = 0. At the end of the computation C(v) and P(v) coincide with the values pesocammino(v) and predecessore(v) to be computed, and these values turn out to be defined exactly for the vertices reachable from s. At each step the algorithm chooses the node v ∈ D with minimum C(v), removes it from D, and updates the values C(u) and P(u) for every u ∈ L(v) (possibly inserting u into D). The algorithm does not actually need to maintain the set S: keeping D suffices. It terminates when D = ∅; at that point, if C(v) = ∞ for some node v, then v is not reachable from s by any path of the graph (and P(v) = ⊥ as well). The algorithm is described by the following procedure:

Procedure Dijkstra(V, E, w, s)
begin
  D := {s}
  for v ∈ V do
  begin
    C(v) := ∞
    P(v) := ⊥
  end
  C(s) := 0
  while D ≠ ∅ do
  begin
    determine the node v in D with minimum C(v)
    delete v from D
    for u ∈ L(v) do
      if C(v) + w(v, u) < C(u) then
      begin
        if C(u) = ∞ then add u to D
        C(u) := C(v) + w(v, u)
        P(u) := v
      end
  end
  return C, P
end

Observe that the values C(v) and P(v) play the same role in this procedure as dist(v) and vicino(v) in Prim's algorithm. The proof of correctness of the algorithm is omitted here for brevity. It relies essentially on the hypothesis that the edge weights are non-negative, and on the fact that, during the execution, as the nodes chosen in D are reached by the solution, the corresponding values C(v) never decrease.
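The procedure can be sketched as follows (assumptions: Python, an inverted heap via heapq with lazy deletion of stale entries rather than an in-place distance update; the 4-node instance is the example opening this section, with each undirected edge taken as two arcs, weights as reconstructed there).

```python
import heapq

def dijkstra(adj, w, s):
    """adj: node -> list of successors; w: (u, v) -> non-negative weight.
    Returns the arrays C (path weights) and P (predecessors)."""
    C = {v: float("inf") for v in adj}
    P = {v: None for v in adj}
    C[s] = 0
    heap = [(0, s)]
    done = set()
    while heap:
        _, v = heapq.heappop(heap)
        if v in done:                    # stale entry: skip it
            continue
        done.add(v)
        for u in adj[v]:
            if C[v] + w[(v, u)] < C[u]:  # relaxation step of the procedure
                C[u] = C[v] + w[(v, u)]
                P[u] = v
                heapq.heappush(heap, (C[u], u))
    return C, P

# The 4-node example of this section, each edge taken in both directions:
w = {}
for (u, v), c in [(("a", "b"), 4), (("a", "c"), 3),
                  (("b", "c"), 1), (("c", "d"), 2)]:
    w[(u, v)] = w[(v, u)] = c
adj = {x: [] for x in "abcd"}
for (u, v) in w:
    adj[u].append(v)
C, P = dijkstra(adj, w, "a")
print(C)   # {'a': 0, 'b': 4, 'c': 3, 'd': 5}
```

Note that d is reached through c (weight 3 + 2 = 5), matching the shortest-path tree of the example, while the spanning-tree edge {b, c} is never used.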
Moreover, D can be implemented as in Prim's algorithm, using a min-heap or a simple array. As in the previous section, one can show that the running time of Dijkstra's algorithm, on an input of n nodes and m edges, is O(m log n) if a min-heap is used to implement D, while it becomes O(n²) if an array is used. Here too, then, it is asymptotically more advantageous to use a min-heap for graphs with few edges, while an array is preferable for graphs with many edges.

Exercises
1) Show by means of an example that Dijkstra's algorithm does not provide the exact solution to the problem when edge weights may be negative.
2) Prove the correctness of Dijkstra's algorithm when every edge weight is greater than or equal to zero.

12.7 Huffman codes

A completely natural problem in computing is that of encoding a file of characters, drawn from a given alphabet, by a binary string. The task is to define a binary code, that is, a function that associates with each character a sequence of bits in such a way that every binary string corresponds to at most one word over the given alphabet. The encoding of the original file is then obtained simply by concatenating the binary strings associated with the characters composing the file.

The first goal is to define a code that allows the encoding of each character to be computed quickly and, conversely, allows a binary string to be decoded efficiently, recovering the corresponding sequence of symbols. In particular, we want to carry out these operations in linear time, possibly by a single scan of the input sequence. Furthermore, for obvious space reasons, we want to define an optimal code, that is, one that represents the file by the shortest possible binary string.
Indeed, it is clear that if the frequencies of the various characters within the original file differ widely, it will be convenient to represent the most frequent symbols by relatively short binary strings and to use the longer ones for the rarest symbols. In this way a considerable amount of space can be saved (in some cases even more than 50%).

Example 12.4
Suppose, for instance, that we want to encode in binary a sequence of 100 characters over the alphabet {a, b, c, d, e, f, g}, in which the symbols occur with the following frequencies: 35 occurrences of a, 20 occurrences of b, 5 occurrences of c, 30 occurrences of d, 5 occurrences of e, 2 occurrences of f, 3 occurrences of g.
Now define the code φ as follows:
    φ(a) = 000, φ(b) = 001, φ(c) = 010, φ(d) = 011, φ(e) = 100, φ(f) = 101, φ(g) = 11.
Using this code, the length of the binary string representing the whole sequence is 97·3 + 3·2 = 297. Observe that in the encoding just defined almost all characters are represented by strings of length 3, even though their frequencies within the original sequence differ considerably. If instead we define an encoding that takes the frequencies into account, we can improve the length of the resulting string substantially. Define, for example, the code ψ with
    ψ(a) = 00, ψ(b) = 010, ψ(c) = 011, ψ(d) = 10, ψ(e) = 110, ψ(f) = 1110, ψ(g) = 1111.
It is immediate to check that in this case the length of the resulting binary string is 65·2 + 30·3 + 5·4 = 240, a saving of nearly 20% of the bits compared with the previous encoding.

The methods presented in this section allow one to determine an optimal binary code for a given string, knowing the frequency of each character within the sequence. These are the well-known Huffman codes, which are of considerable importance in this setting and are widely used in applications.
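The two totals of Example 12.4 can be checked directly from the frequencies; a small sketch (the dictionary encoding of the codes φ and ψ is our own):

```python
# Frequencies from Example 12.4 and the two codes phi and psi.
freq = {'a': 35, 'b': 20, 'c': 5, 'd': 30, 'e': 5, 'f': 2, 'g': 3}
phi = {'a': '000', 'b': '001', 'c': '010', 'd': '011',
       'e': '100', 'f': '101', 'g': '11'}
psi = {'a': '00', 'b': '010', 'c': '011', 'd': '10',
       'e': '110', 'f': '1110', 'g': '1111'}

def cost(code, freq):
    # C(code) = sum over letters a of |x|_a * |code(a)|
    return sum(freq[a] * len(code[a]) for a in freq)

print(cost(phi, freq))  # 297
print(cost(psi, freq))  # 240
```

This is exactly the cost C(φ) defined formally in the next subsection.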
The algorithm that builds the Huffman code of a given sequence is a typical example of a greedy algorithm.

12.7.1 Binary codes

To formalize the problem, recall that an alphabet is a finite set of symbols (which we also call letters) and that a word over an alphabet A is a concatenation of symbols drawn from A (1). We denote by A+ the set of the words over A. In particular, {0, 1}+ denotes the set of binary strings. We write |x| for the length of a word x.

(1) We do not consider here the empty word, the one in which no symbol occurs.

Given an alphabet A, a function φ : A → {0, 1}+ is a binary code if for every y ∈ {0, 1}+ there is at most one sequence of letters x1, x2, ..., xn ∈ A such that y = φ(x1)φ(x2)···φ(xn). In other words, φ is a binary code if every string in {0, 1}+ can be decomposed in at most one way as a concatenation of words belonging to φ(A). In what follows, by "code" we always mean a binary code. Clearly, every function φ : A → {0, 1}+ can be extended to the set A+ by defining, for every x ∈ A+, the image φ(x) = φ(x1)φ(x2)···φ(xn), where x = x1x2···xn and xi ∈ A for every i; we can therefore say that φ is a code if its extension to A+ is an injective function.

Example 12.5
Let A = {a, b, c} and let φ : A → {0, 1}+ be the function defined by φ(a) = 0, φ(b) = 01, φ(c) = 10. It is easy to check that φ is not a code because, for instance, the string 010 coincides with φ(ba) = φ(ac). Conversely, it is easy to check that the function ψ : A → {0, 1}+ defined by ψ(a) = 00, ψ(b) = 01, ψ(c) = 10 is a binary code.

Among the properties usually required of a code is that of being able to decode a binary string on line, that is, determining the decoding of each prefix without needing to examine the whole sequence (2).
This property can be formalized by the following definition: a code φ : A → {0, 1}+ is called a prefix code if there is no pair of distinct letters a, b ∈ A for which φ(a) is a prefix of φ(b). For example, the function ψ defined in the previous example is a prefix code. On the other hand, the code φ : {a, b} → {0, 1}+ defined by φ(a) = 0, φ(b) = 01 is not a prefix code, because φ(a) is a prefix of φ(b). Furthermore, a code φ : A → {0, 1}+ is said to be of fixed length if there exists k ∈ IN such that |φ(a)| = k for every a ∈ A. It is easy to check that every fixed-length code is also a prefix code.

Finally, given a word x ∈ A+, we call the cost of a code φ over A the length of φ(x), and we denote it by C(φ). If |x|_a denotes the number of occurrences of the letter a in x, then

    C(φ) = Σ_{a ∈ A} |x|_a · |φ(a)|.

A code φ is called optimal with respect to a word x if C(φ) is minimum among the costs of all the prefix codes defined on A.

Prefix codes admit an effective representation by a binary tree, which in many cases makes it easy to define algorithms for constructing and manipulating these functions. The binary tree representing a prefix code φ : A → {0, 1}+ is defined as follows:
1. the leaves of the tree coincide with the letters of the alphabet;
2. every edge joining an internal node to its left child is labelled by the bit 0;
3. every edge joining an internal node to its right child is labelled by the bit 1;
4. for every a ∈ A, the string φ(a) coincides with the label of the path joining the root of the tree to the leaf a.

Example 12.6
The function φ on the alphabet {a, b, c, d, e, f, g}, with values in {0, 1}+, defined in Example 12.4, is a prefix code represented by the tree below.

(2) Recall that a prefix of a word x is a word y such that x = yz for some (possibly empty) string z.
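The prefix property of the definition above can be tested by a direct pairwise comparison of codewords; a minimal sketch (the dictionary representation of a code is our own convention):

```python
def is_prefix_code(code):
    """code: dict letter -> binary string. True iff no letter's codeword
    is a prefix of another letter's codeword."""
    words = list(code.values())
    for i, w1 in enumerate(words):
        for j, w2 in enumerate(words):
            # w1 a proper or equal prefix of a different codeword?
            if i != j and w2.startswith(w1):
                return False
    return True
```

On the codes of Example 12.5, ψ passes the test while φ(a) = 0, φ(b) = 01 fails it.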
[Figure: the binary tree representing the code φ of Example 12.4, with leaves a, b, c, d, e, f, g.]

Analogously, the function ψ, defined in the same Example 12.4, is a prefix code represented by the following binary tree:

[Figure: the binary tree representing the code ψ of Example 12.4.]

12.7.2 Description of the algorithm

The definition of the Huffman code of a given string is best understood by describing the algorithm that builds the corresponding binary tree. This procedure is based on a greedy strategy, and its input consists of an alphabet A and a function fr (which we call the frequency) associating with each letter a ∈ A the number of occurrences of a in the string to be encoded.

The tree is built starting from the leaves, which are of course represented by the letters of the alphabet A. One chooses the two leaves with the smallest frequencies and creates an internal node, making it the parent of the two. The two extracted leaves are replaced by their parent, to which a frequency equal to the sum of the frequencies of its two children is assigned; the procedure is then repeated, again choosing the two vertices of smallest frequency, making both of them children of a new internal node, and replacing them by this parent in the set of candidates for the subsequent choices. This process is repeated until the set of candidates reduces to a single node, which is necessarily the root of the resulting tree.

To define the algorithm we need two data structures. The first is a binary tree T, which will automatically define the code produced by the algorithm. The second is a priority queue Q, which maintains the set of nodes among which the pair of elements of smallest frequency must be chosen, and on which the appropriate replacements are performed.
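The construction just described can be sketched in Python with the standard heapq module playing the role of Q (this is our own sketch; the text gives the procedure formally below):

```python
import heapq
from itertools import count

def huffman(fr):
    """fr: dict letter -> frequency. Returns dict letter -> codeword,
    built by repeatedly merging the two minimum-weight nodes."""
    tick = count()                       # tie-breaker so heap tuples compare
    Q = [(f, next(tick), a) for a, f in fr.items()]
    heapq.heapify(Q)
    while len(Q) > 1:
        fu, _, u = heapq.heappop(Q)      # the two elements of minimum weight
        fv, _, v = heapq.heappop(Q)
        # internal node z with children u (left, bit 0) and v (right, bit 1)
        heapq.heappush(Q, (fu + fv, next(tick), (u, v)))
    code = {}
    def walk(node, path):
        # each leaf receives the label of the root-to-leaf path
        if isinstance(node, tuple):
            walk(node[0], path + '0')
            walk(node[1], path + '1')
        else:
            code[node] = path or '0'
    walk(Q[0][2], '')
    return code
```

On the frequencies of Example 12.4 any run of this construction yields a prefix code of total cost 230 bits, slightly better than the 240 bits of the code ψ given there (tie-breaking may change the codewords, but not the optimal cost).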
On Q we must therefore perform the operations Min, Insert and Delete; initially, this structure contains the set of leaves, each with its frequency. We can formally define the algorithm by the following procedure:

Procedure Huffman(A, fr)
begin
    n := #A
    create a binary tree T, with set of leaves A, providing for n − 1 internal nodes
    create an initially empty priority queue Q
    for a ∈ A do insert a into Q with weight fr(a)
    for i = 1, 2, ..., n − 1 do
        begin
            determine the two elements u, v of minimum weight in Q
            delete u and v from Q
            create an internal node z of the tree T
            make u the left child of z
            make v the right child of z
            insert z into Q with weight fr(z) = fr(u) + fr(v)
        end
    return T
end

To implement the algorithm one can use a pair of arrays sin and des to represent the tree T, and a 2-3 tree, or any balanced structure, for the set Q. Note in particular that a very convenient data structure for Q is a min-heap, in which the operation Min is immediate, the minimum being located at the root of the heap. With these structures the operations Min, Delete and Insert each take time O(log n), where n is the cardinality of the alphabet, and hence the overall time required by the algorithm is O(n log n).

12.7.3 Correctness

We now want to prove that the Huffman code of any string is optimal among all prefix codes.

Proposition 12.5
Given an alphabet A and a frequency fr defined on A, the Huffman code obtained by the procedure given above is an optimal prefix code.

Proof. We argue by induction on the cardinality n of A. If n = 2 the property is obvious. Suppose n > 2 and assume the property holds for every alphabet of size k ≤ n − 1. Consider two letters u, v of minimum frequency in A and define the alphabet B obtained from A by replacing u and v with a new letter x of frequency fr(x) = fr(u) + fr(v). Let ψ be the Huffman code of B.
By the induction hypothesis, ψ is an optimal prefix code for B. We can then define the code φ for the alphabet A as follows:

    φ(a) = ψ(a)    if a ≠ u and a ≠ v,
    φ(v) = ψ(x)0,
    φ(u) = ψ(x)1.

Clearly φ is a prefix code for A, and it coincides with the Huffman code because it can in fact be obtained by applying the same procedure. We now want to prove that φ is optimal. First observe that

    C(φ) = C(ψ) − |ψ(x)|·fr(x) + |φ(u)|·fr(u) + |φ(v)|·fr(v) = C(ψ) + fr(u) + fr(v).    (12.1)

Now consider an arbitrary prefix code γ on the alphabet A. We want to prove that C(φ) ≤ C(γ). Let s and t be the two letters of A with the longest codewords (that is, such that |γ(s)| and |γ(t)| are maximal in {|γ(a)| : a ∈ A}). Without loss of generality we may assume that s and t are siblings in the tree representing γ; otherwise it would be possible to build a new code, in which s and t are siblings, whose cost is less than or equal to that of γ, and carry on the proof for this one. Define a new code γ′ obtained from γ by swapping the binary strings associated with s and u and those associated with t and v. One easily checks that the cost of the new code is not greater than that of the old one:

    C(γ′) = C(γ) + (|γ(u)| − |γ(s)|)(fr(s) − fr(u)) + (|γ(v)| − |γ(t)|)(fr(t) − fr(v)).

By the assumptions made on the letters u, v, s and t, the two products in the second member of the preceding equation are less than or equal to zero; this proves that C(γ′) ≤ C(γ). Now, since u and v are siblings in the tree representing γ′, we can denote by ℓ the longest common prefix of γ′(u) and γ′(v), thereby defining a new code δ for the alphabet B:

    δ(b) = γ′(b)    if b ≠ x,
    δ(x) = ℓ.

Observe that C(γ′) = C(δ) + fr(u) + fr(v). Moreover, since ψ is an optimal code on B, we have C(ψ) ≤ C(δ).
Hence, applying inequality (12.1), we obtain

    C(φ) = C(ψ) + fr(u) + fr(v) ≤ C(δ) + fr(u) + fr(v) = C(γ′) ≤ C(γ).

Exercises
1) Build the Huffman code of each of the following phrases, interpreting the blank between one word and the next as a symbol of the alphabet:
    il mago merlino e la spada nella roccia
    re artu e i cavalieri della tavola rotonda
2) Consider the 0-1 Knapsack problem, defined as follows:
    Instance: a positive integer H, n values v1, v2, ..., vn ∈ IN and n sizes d1, d2, ..., dn ∈ IN.
    Solution: a set S ⊆ {1, 2, ..., n} of total size at most H and of maximum value, that is, such that
        Σ_{i∈S} d_i ≤ H,    Σ_{i∈S} v_i = max{ Σ_{i∈A} v_i : A ⊆ {1, 2, ..., n}, Σ_{i∈A} d_i ≤ H }.
a) Define a greedy algorithm for the given problem (for instance, based on the ratio between the value and the size of each element).
b) Show by means of an example that the algorithm does not always yield the optimal solution.
3) Consider the following variant of the previous problem (fractional Knapsack):
    Instance: a positive integer H, n values v1, v2, ..., vn ∈ IN and n sizes d1, d2, ..., dn ∈ IN.
    Solution: n rational numbers f1, f2, ..., fn such that 0 ≤ f_i ≤ 1 for every i and, moreover,
        Σ_{i=1}^{n} f_i d_i ≤ H,    Σ_{i=1}^{n} f_i v_i = max{ Σ_{i=1}^{n} g_i v_i : g1, g2, ..., gn ∈ Q, 0 ≤ g_i ≤ 1, Σ_{i=1}^{n} g_i d_i ≤ H }.
a) Define a greedy algorithm for the assigned problem.
b) Prove that this algorithm always determines the optimal solution.

Chapter 13: NP-complete problems

The main topics treated in the preceding chapters concern the design and analysis of algorithms. The aim was to illustrate the principal techniques for designing an algorithm and the methods for evaluating its performance, with particular reference to the computational resources used (time and space).
We now want to shift our attention to problems themselves, studying the possibility of classifying them according to the amount of resources needed to obtain a solution. It has indeed been observed that, for certain groups of problems, the difficulties encountered in finding an efficient algorithm are substantially the same. The current state of knowledge allows us, roughly, to group problems into three categories:
1. problems that admit efficient solution algorithms;
2. problems that by their nature cannot be solved by efficient algorithms, and which are therefore intractable;
3. problems for which no efficient algorithm has been found, but for which nobody has so far proved that such algorithms do not exist.

Many problems of considerable interest belong to the third group, and they exhibit such similar characteristics from the algorithmic point of view that it is natural to introduce methods and techniques for studying their properties collectively. This has led to the definition and study of the so-called complexity classes, that is, classes of problems solvable using a given amount of resources (for example, time or space). The aim is also to compare the intrinsic difficulty of the various problems, checking for instance whether a given problem is easier or harder than another, or whether an algorithm for the first can be transformed into one for the second requiring roughly the same amount of resources. Among the complexity classes defined in terms of computation time, of great interest are the classes P and NP, which contain a large number of problems of practical relevance. The study of the relations between these classes is undoubtedly one of the most interesting sets of questions (some still open) in theoretical computer science.

13.1 Intractable problems

We now describe more precisely the three classes of problems mentioned above (1).
There is general agreement in regarding as efficient an algorithm whose running time, in the worst case, is bounded by a polynomial in the size of the input. This leads to the class P of the problems solvable on a RAM in polynomial time under the logarithmic cost criterion (see Chapter 3). It is therefore natural to identify P as the class of the practically solvable problems. There are results showing that this class is substantially robust, that is, it does not depend on the particular model of computation considered.

The natural counterpart of the class just described is that of the problems that cannot be solved in polynomial time. These problems are such that every solution algorithm requires, in the worst case, a running time that is exponential or in any case asymptotically larger than every polynomial; they are called intractable because, although solvable by automatic means, they require a very large running time, enough to make every algorithm in practice unusable even for small input sizes. For example, in the table presented in Section 1.3 we saw that with running times of the order of 2^n, even executing 10^6 operations per second, several years are needed to solve instances of size n = 50.

The two preceding classes are well defined and, being each the complement of the other, they should contain all solvable problems. In reality, for many problems of considerable interest no polynomial-time algorithm has been found, but neither has it been proved that such algorithms do not exist. Among these are decision problems arising from combinatorial optimization or from logic, some of which we have already met in the preceding chapters.
These are classical problems, widely studied in the literature, that find applications in various fields and for which exponential- or subexponential-time algorithms, or polynomial-time approximation algorithms, are known. The most representative class of this large group is that of the NP-complete problems. This class was defined at the beginning of the 1970s and gathers a great variety of problems that arise naturally in several areas of computer science, operations research and discrete mathematics. A considerable literature has been developed about them (2), and their number now runs to several thousands, grouped and studied by specific fields. These problems are computationally equivalent to one another, in the sense that a polynomial-time algorithm for just one of them would imply the existence of an analogous algorithm for all the others. Precisely the absence of such a procedure, despite the efforts made, has led to the conjecture that the NP-complete problems are not solvable in polynomial time and hence are not contained in the class P (although so far nobody has proved such a result).

(1) It is worth recalling that the analysis we carry out refers in any case to problems that admit a solution algorithm. It is indeed well known that not all problems can be solved by an algorithm. Problems that admit no solution algorithm are called unsolvable or undecidable, and they have been studied extensively since the 1930s. The best known among them is the halting problem; in our setting, it can be formulated as follows. Instance: a RAM program P and an input I for P. Question: does the program P halt on input I? This is only the most famous example of a wide range of undecidable problems concerning the behaviour of programs or the properties of the functions they compute.
(2) See in this regard M. R. Garey, D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, <NAME>, 1979.

13.2 The class P

In this chapter we mainly study the computational complexity of decision problems. Recall that a problem is called a decision problem if it admits only two possible answers (intuitively, yes or no). It can be represented by a pair (I, q), where I is the set of instances and q is a predicate on I, that is, a function q : I → {0, 1}. In what follows we often present the predicate q as a suitable question about an arbitrary instance x ∈ I.

We then define P as the class of the decision problems solvable by a RAM in polynomial time under the logarithmic cost criterion. Hence, using the notation introduced in Chapter 3, we can say that a decision problem Π belongs to P if and only if there exist a RAM that solves Π and a polynomial p(n) such that, for every input x, T_l(x) ≤ p(|x|), where |x| is the size of the instance x, that is, the number of bits needed to represent it.

It should be noted that in the preceding definition the logarithmic criterion cannot be replaced by the uniform one. Indeed, the two criteria can yield very different estimates, even though for many RAM programs the corresponding running times are polynomially related. In particular, it is fairly easy to describe RAM programs that, on an input x, require time linear in |x| under the uniform criterion and exponential time under the logarithmic one. We have already met a program of this kind in Example 3.3.

13.3 The class NP

Recall that an optimization problem can be defined as follows: given a set of elements E, a family F ⊆ 2^E of feasible solutions and a weight function w : E → IN, determine a feasible solution of maximum weight.
The answer in this case is not binary (yes or no), so the problem in this form is not a decision problem. It is however easy to obtain a decision version by introducing an integer k into the instance: the question becomes that of deciding whether there exists a feasible solution of weight at least k.

Example 13.1
Consider the optimization problem MAX CLIQUE, defined as follows.
MAX CLIQUE
Instance: an undirected graph G = (V, E).
Feasible solutions: the cliques of G, that is, the subsets C ⊆ V such that {v, w} ∈ E for every pair of distinct nodes v, w ∈ C.
Weight of a solution: the number of its elements.
Question: determine a clique of maximum weight in G.

The decision version of this problem is instead the following:
CLIQUE
Instance: an undirected graph G = (V, E) and an integer k, 0 ≤ k ≤ #V.
Question: does G contain a clique of weight at least k?

The decision versions of optimization problems have the following peculiar characteristics:
Esso contiene molti problemi che sono la versione di decisione di problemi di ottimizzazione (sia di massimo che di minimo) insieme ad altri che richiedono piu semplicemente di verificare lesistenza di una soluzione con certe caratteristiche. Si tratta di problemi per i quali non sono noti algoritmi che funzionano in tempo polinomiale, ma per i quali si puo facilmente verificare se una certa soluzione candidata e effettivamente soluzione di una data istanza. Prima di fornire la definizione formale, ricordiamo che, dati due insiemi di istanze I e J, una relazione R I J si dice riconoscibile da una RAM M se, su input (i, j) I J, M verifica se (i, j) appartiene a R. Si assume che la dimensione dellinput (i, j) sia la somma delle dimensioni di i e j; nel caso del criterio di costo logaritmico essa equivale al numero di bit necessari per rappresentare i due elementi. Definizione 13.1 Un problema di decisione = hI, qi appartiene alla classe NP se esistono un insieme di elementi (potenziali dimostrazioni), una relazione R I riconoscibile da una RAM in tempo polinomiale (secondo il criterio logaritmico) e un polinomio p tali che per ogni istanza x I: 1. se q(x) = 1 allora esiste D tale che |D| p(|x|) e inoltre (x, D) R; 2. se q(x) = 0 allora per ogni D si verifica (x, D) 6 R. Osserva che se (x, D) R, D puo essere considerato come testimone o dimostrazione del fatto che x rende vera q. Vediamo alcuni esempi di problemi in NP. Il primo e il problema CLIQUE che abbiamo gia incontrato in un esempio precedente: CLIQUE Istanza: un grafo non orientato G = hV, Ei e un intero k, 0 k ]V . Domanda: esiste una clique di peso almeno k in G? In questo caso la dimostrazione che un grafo G ha una clique di dimensione k e semplicemente un sottoinsieme D di vertici di G contenente k elementi: si puo facilmente verificare in tempo polinomiale se D e proprio una clique di G. 
Another combinatorial problem belonging to the class NP is the following:
HAMILTONIAN CIRCUIT
Instance: an undirected graph G = (V, E).
Question: does G contain a Hamiltonian circuit, that is, a permutation (v1, v2, ..., vn) of the nodes of the graph such that {v_i, v_{i+1}} ∈ E for every i = 1, ..., n−1, and moreover {v_n, v_1} ∈ E?
Here the proof that G possesses a Hamiltonian circuit is a permutation D of the nodes of the graph; in this case too one can check in polynomial time whether D is indeed a Hamiltonian circuit of G.

Another important problem in NP is that of the satisfiability of boolean formulas in conjunctive normal form. To define this problem, recall first of all that a boolean formula is a literal, that is, a variable x or a negated variable x̄, or else one of the following expressions: (i) (Φ), (ii) Φ1 · Φ2 · ··· · Φk, (iii) (Φ1 + Φ2 + ··· + Φk), where k ≥ 2 and Φ, Φ1, Φ2, ..., Φk are boolean formulas. Here, as usual, the traditional boolean operations ∨ and ∧ are denoted by the symbols of sum and product. Every boolean formula in which k distinct variables occur defines in the obvious way a function f : {0, 1}^k → {0, 1}.

We say that a boolean formula Φ is in conjunctive normal form (CNF for short) if it is a product of clauses of literals, that is,

    Φ = E1 · E2 · ... · Ek    (13.1)

where E_i = (ℓ_{i1} + ℓ_{i2} + ··· + ℓ_{it_i}) for every i = 1, 2, ..., k, and each ℓ_{ij} is a literal. An example of a boolean formula in CNF is given by the expression

    U(y1, y2, ..., yk) = (Σ_{i=1}^{k} y_i) · Π_{i≠j} (ȳ_i + ȳ_j).    (13.2)

It is clear that U(y1, y2, ..., yk) has value 1 if and only if exactly one of its variables y_i takes value 1. The satisfiability problem for such formulas can then be defined as follows:
SODD
Instance: a formula Φ in conjunctive normal form over a set of variables x1, x2, ..., xm.
Question: is there an assignment of values to the variables, that is, a function A : {x1, x2, ..., xm} → {0, 1}, that makes Φ true?
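A CNF formula and an assignment can be represented concretely, making the polynomial-time verification explicit; in this sketch (our own encoding, not from the text) a literal is a pair (variable, negated):

```python
def satisfies(cnf, A):
    """cnf: list of clauses; each clause is a list of literals (var, neg),
    where neg=True means the variable occurs negated.
    A: dict var -> 0/1. True iff A makes every clause true."""
    return all(any(A[v] != neg for (v, neg) in clause) for clause in cnf)

def exactly_one(k):
    """The formula U(y_1,...,y_k) of (13.2) over variables 0..k-1:
    (y_1 + ... + y_k) times the product of (~y_i + ~y_j) for i != j."""
    cnf = [[(i, False) for i in range(k)]]          # at least one true
    cnf += [[(i, True), (j, True)]                  # no two both true
            for i in range(k) for j in range(k) if i != j]
    return cnf
```

Evaluating satisfies(cnf, A) takes one pass over the literal occurrences of Φ, so the proof A can be checked in polynomial time, as claimed below.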
In this case too the proof is precisely an assignment A of values to the variables of Φ. Given A, one can check in polynomial time whether A makes Φ true.

A question that now arises naturally is that of comparing the classes P and NP just defined. The following is easily verified.

Proposition 13.1
The class P is contained in NP.

Proof. Indeed, if a decision problem Π = (I, q) belongs to P, it suffices to define the set Δ = {1} and put R = {(x, 1) : x ∈ I, q(x) = 1}. Then the very algorithm that solves Π in polynomial time in fact recognizes the relation R within a similar time bound, while properties 1) and 2) of Definition 13.1 are trivially satisfied.

One can also verify that every problem in NP can be solved in exponential time.

Proposition 13.2
Every problem in NP can be solved by a RAM in time O(2^{f(n)}) under the logarithmic criterion, for some polynomial f(n).

Proof. Let Π = (I, q) be a problem in NP; then there exist a set Δ of proofs, a relation R and a polynomial p satisfying the conditions of Definition 13.1. Let M be the RAM recognizing R in polynomial time r(n). We can then define a new RAM M′ which, on input x ∈ I, generates all the proofs D ∈ Δ of size at most p(|x|) and, for each of them, checks whether (x, D) belongs to R by simulating the machine M. Then M′ returns the value 1 if (x, D) belongs to R for some D among those considered, and returns the value 0 otherwise. Clearly M′ solves the problem Π. Moreover, if n is the size of x, there are at most 2^{p(n)} proofs D of size at most p(n). For each of them, checking whether (x, D) belongs to R takes time r(n). Consequently the overall time is O(2^{f(n)}), where f(n) is a polynomial such that f(n) = O(p(n) + r(n)).
Exercise
Prove that the following problem belongs to NP:
TRAVELLING SALESMAN
Instance: an integer k > 0, a directed graph G = (V, E) and a weight function d : E → IN.
Question: is there a circuit through all the nodes of the graph of weight less than or equal to k, that is, a permutation (v1, v2, ..., vn) of V such that Σ_{i=1}^{n−1} d(v_i, v_{i+1}) + d(v_n, v_1) ≤ k?

13.4 Polynomial reducibility and NP-complete problems

The goal of this section is to classify the problems in NP from the point of view of computational complexity, singling out in particular the computationally hard ones. The central notion here is the relation of polynomial reducibility.

Definition 13.2
Consider two decision problems Π1 = (I1, q1) and Π2 = (I2, q2), where I1 and I2 are the sets of instances, while q1 and q2 are the respective predicates. We say that Π1 is polynomially reducible to Π2, and write Π1 ≤_p Π2, if there exists a function f : I1 → I2 such that:
1. f is computable in polynomial time by a RAM under the logarithmic criterion;
2. for every x ∈ I1, q1(x) = 1 if and only if q2(f(x)) = 1.
We say that f defines a polynomial reduction from Π1 to Π2, or also that Π1 is reducible to Π2 via the function f.

It is easy to check that polynomial reducibility is transitive. Indeed, given three decision problems Π1 = (I1, q1), Π2 = (I2, q2) and Π3 = (I3, q3), if Π1 ≤_p Π2 via a function f and Π2 ≤_p Π3 via a function g, then the composite function h(x) = g(f(x)) defines a polynomial reduction from Π1 to Π3: for every x ∈ I1, q1(x) = 1 if and only if q3(h(x)) = 1; moreover, the function h is computable by a RAM that, on input x ∈ I1, first simulates the machine computing f, obtaining f(x), and then the machine computing g, obtaining g(f(x)). The time required by this computation is in any case polynomial, because it is bounded by the composition of two polynomials.
The notion of polynomial reducibility lets us impose a sort of ordering by difficulty on the problems in NP. The next result says that if Π_1 ≤_p Π_2 and Π_2 is computationally easy, i.e. belongs to P, then Π_1 is computationally easy as well.

Proposition 13.3 Given two decision problems Π_1 = ⟨I_1, q_1⟩ and Π_2 = ⟨I_2, q_2⟩, if Π_1 ≤_p Π_2 and Π_2 belongs to P, then Π_1 belongs to P as well.

Proof. Let f be the function defining a polynomial reduction from Π_1 to Π_2 and let M be a RAM computing f in time p(n), for a suitable polynomial p. If Π_2 ∈ P, there exists a RAM M′ solving Π_2 in time q(n), where q is another polynomial. We can then define a RAM M″ which, on input x ∈ I_1, computes the string y = f(x) ∈ I_2 by simulating the machine M; then M″ simulates the machine M′ on input f(x) and keeps its answer. Since the length of f(x) is at most p(|x|), the time complexity of M″ is bounded by a polynomial:

T_{M″}(|x|) ≤ p(|x|) + q(p(|x|))

The preceding result allows us to read the relation ≤_p as a relation of greater computational difficulty between problems. In this setting it becomes interesting to identify the computationally hardest problems, which we will call NP-complete and which are the maxima of the relation ≤_p.

Definition 13.3 A decision problem Π is NP-complete if
1. Π belongs to NP,
2. for every Π′ ∈ NP we have Π′ ≤_p Π.

Intuitively, the NP-complete problems are thus the hardest problems in the class NP. The existence of a polynomial-time algorithm for an NP-complete problem would imply the equality of the classes P and NP. However, the hypothesis P = NP is considered very unlikely, given the large number of problems in NP for which no polynomial-time algorithm has been found, even though so far nobody has formally proved that the two classes are different³.
Hence proving that a problem is NP-complete essentially amounts to showing that the problem is intractable, i.e. that, almost certainly, it admits no solution algorithm working in polynomial time.

³ The problem of proving that P differs from NP is considered today one of the most important open problems in the theory of algorithms.

13.4.1 A polynomial reduction from SODD to CLIQUE

We now present an example of a polynomial reduction between decision problems.

Proposition 13.4 SODD is polynomially reducible to CLIQUE.

Proof. We describe a function f between the instances of the two problems that defines the reduction. Let φ be a CNF formula given by

φ = E_1 · E_2 · ... · E_k

where E_i = (ℓ_{i1} + ℓ_{i2} + ... + ℓ_{it_i}) for every i = 1, 2, ..., k, and each ℓ_{ij} is a literal. Then f(φ) is the instance of CLIQUE given by the integer k, equal to the number of clauses of φ, and by the graph G = ⟨V, E⟩ defined as follows:

- V = {[i, j] | i ∈ {1, 2, ..., k}, j ∈ {1, 2, ..., t_i}};
- E = {{[i, j], [u, v]} | i ≠ u, ℓ_{ij} ≠ ¬ℓ_{uv}}.

So the nodes of G are exactly the occurrences of literals in φ, while its edges are the pairs of literals that occur in distinct clauses and are not one the negation of the other. It is easy to verify that the function f is computable in polynomial time. We now prove that φ admits an assignment making it true if and only if G has a clique of size k.

Suppose such an assignment A exists. Clearly A assigns value 1 to at least one literal ℓ_{i,j_i} in each clause E_i of φ. Let {j_1, j_2, ..., j_k} be the set of indices of these literals. Then the set C = {[i, j_i] | i = 1, 2, ..., k} is a clique of the graph G because, if for some i, u ∈ {1, 2, ..., k} we had ℓ_{i,j_i} = ¬ℓ_{u,j_u}, the assignment A could not make both literals true. Conversely, let C ⊆ V be a clique of G of size k. Then, for every pair [i, j], [u, v] in C, we have i ≠ u and ℓ_{ij} ≠ ¬ℓ_{uv}.
Now let S_1 be the set of variables x such that x = ℓ_{ij} for some [i, j] ∈ C. Similarly, denote by S_0 the set of variables y such that ¬y = ℓ_{ij} for some [i, j] ∈ C. Define the assignment A that gives value 1 to the variables in S_1 and value 0 to the variables in S_0. A is well defined because, C being a clique, the intersection S_1 ∩ S_0 is empty. Moreover A makes the formula φ true, because in every clause E_i there is a literal ℓ_{i,j_i} taking value 1.

13.4.2 A polynomial reduction from SODD to 3-SODD

The problem SODD can also be reduced polynomially to a variant of itself, obtained by restricting the instances to Boolean CNF formulas whose clauses contain at most 3 literals.

3-SODD
Instance: a Boolean formula φ = F_1 · F_2 · ... · F_k, where F_i = (ℓ_{i1} + ℓ_{i2} + ℓ_{i3}) for every i = 1, 2, ..., k and each ℓ_{ij} is a variable or a negated variable.
Question: is there an assignment of values 0 and 1 to the variables that makes φ true?

Proposition 13.5 SODD is polynomially reducible to 3-SODD.

Proof. Let φ be a Boolean formula in CNF and let E = (ℓ_1 + ℓ_2 + ... + ℓ_t) be a clause of φ with t ≥ 4 literals. We replace E in φ by a product of clauses f(E) defined by

f(E) = (ℓ_1 + ℓ_2 + y_1) · (ℓ_3 + ¬y_1 + y_2) · (ℓ_4 + ¬y_2 + y_3) · ... · (ℓ_{t−2} + ¬y_{t−4} + y_{t−3}) · (ℓ_{t−1} + ℓ_t + ¬y_{t−3})

where y_1, y_2, ..., y_{t−3} are new variables. We now prove that there is an assignment making the clause E true if and only if there is one making f(E) true. Indeed, if ℓ_i = 1 for some i, it suffices to set y_j = 1 for every j < i − 1 and y_j = 0 for every j ≥ i − 1; in this way we obtain an assignment that makes f(E) true. Conversely, if there is an assignment making f(E) true, then some literal ℓ_i must take value 1: otherwise it is easy to check that the value of f(E) would be 0. It follows that the same assignment makes E true.
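In code, the substitution f(E) for a single clause can be sketched as follows, encoding a literal as a signed integer (v for a variable, −v for its negation — an encoding chosen only for this sketch):

```python
def split_clause(lits, next_var):
    """Replace a clause (l1 + ... + lt) with t >= 4 literals by the
    equisatisfiable chain of 3-literal clauses f(E) of Proposition 13.5,
    introducing fresh variables y1, ..., y_{t-3} starting at next_var."""
    t = len(lits)
    if t <= 3:
        return [list(lits)], next_var
    ys = list(range(next_var, next_var + t - 3))
    out = [[lits[0], lits[1], ys[0]]]               # (l1 + l2 + y1)
    for m in range(2, t - 2):                       # (l_{m+1} + ~y_{m-1} + y_m)
        out.append([lits[m], -ys[m - 2], ys[m - 1]])
    out.append([lits[-2], lits[-1], -ys[-1]])       # (l_{t-1} + lt + ~y_{t-3})
    return out, next_var + t - 3

print(split_clause([1, 2, 3, 4, 5], 6)[0])   # [[1, 2, 6], [3, -6, 7], [4, 5, -7]]
```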
Performing this substitution for all clauses of φ with more than 3 literals, we obtain a 3-CNF formula satisfying the required conditions. It is moreover evident that the time needed to carry out the substitution is polynomial.

13.5 Cook's theorem

The question naturally arises whether the relation ≤_p admits a maximum, i.e. whether at least one NP-complete problem exists. This question was answered affirmatively, and independently, by Levin and Cook.

Theorem 13.6 (Levin–Cook) The problem SODD is NP-complete.

This result, proved by Cook in 1971, is important because it made it possible to establish the NP-completeness of many other problems of applied interest for which no polynomial-time algorithms were known. On the one hand this explained the inherent difficulty of these problems; on the other it started the study of their common properties.

Observe first that, since polynomial reducibility is transitive, if SODD is polynomially reducible to a problem Π ∈ NP, then this problem is NP-complete as well. Applying the results presented in the preceding section, we can therefore state the following proposition.

Corollary 13.7 The problems CLIQUE and 3-SODD are NP-complete.

This corollary shows how, using polynomial reductions, the NP-completeness of a problem can be proved. Almost all NP-complete problems known in the literature have been obtained by a polynomial reduction from a problem of the same kind.

The traditional proof of Cook's theorem requires the introduction of an elementary model of computation, simpler than RAM machines: the well-known Turing machine, introduced as early as the 1930s to study the properties of undecidable problems.
This computational model is equivalent to RAMs (assuming logarithmic cost), but it is too elementary to describe effectively the behaviour of algorithms and procedures as they are commonly understood today. Its simplicity, however, makes it possible to prove rather directly some general properties of computation, among them the existence of the polynomial reductions mentioned in Cook's theorem.

13.5.1 Turing machines

Intuitively, a Turing machine (TM from now on) is a model of computation consisting of a set of states, a tape divided into cells and a read head positioned on one of them. The set of states contains a distinguished state, called the initial state, and a subset of states said to be final. Each cell contains a single symbol drawn from a fixed alphabet. The behaviour of the machine is governed by a central control that executes a sequence of moves starting from an initial configuration. In this configuration the current state is the initial one, an input string occupies the first cells of the tape, and the head reads the first symbol; every other cell contains a special symbol that we call blank. Each move is then uniquely determined by the state the machine is in and by the symbol read by the head. In executing a move the machine may enter a new state, print a new symbol in the cell the head is positioned on and, finally, move the head one position to the right or to the left. If the sequence of moves executed is finite, we say that the machine halts on the input considered, and that this input is accepted if the state reached in the last configuration is final.
In this way the decision problem solved by a TM can be defined precisely: we say that the machine M solves a decision problem Π = ⟨I, q⟩ if I is the set of possible input strings of the machine, M halts on every input x ∈ I, and M accepts x if and only if q(x) = 1.

[Figure: schematic of a Turing machine, with tape, read head and central control.]

Formally, we can thus define a (deterministic) Turing machine as a tuple

M = ⟨Q, Γ, Σ, q_0, B, δ, F⟩

where Q is a finite set of states, Γ a work alphabet, Σ ⊆ Γ an input alphabet, B ∈ Γ \ Σ a special symbol denoting the blank, q_0 ∈ Q the initial state, F ⊆ Q the set of final states, and δ a transition function, i.e. a partial function

δ : Q × Γ → Q × Γ × {−1, +1}

For every q ∈ Q and every a ∈ Γ, the value δ(q, a) defines the move of M when the machine is in state q and reads symbol a: if δ(q, a) = (p, b, ℓ) then p is the new state, b the symbol written in the scanned cell, and ℓ the displacement of the head. The head moves one cell to the left or to the right according to whether ℓ = −1 or ℓ = +1.

A configuration of M is the overall picture of the machine before or after the execution of a move. It is determined by the current state, by the contents of the tape and by the position of the read head. We therefore define a configuration of M as a string⁴

α q β

where q ∈ Q, α ∈ Γ*, β ∈ Γ⁺. Here α is the string to the left of the read head, while β, followed by infinitely many blanks, is the sequence of symbols to its right. In particular, the read head is positioned on the first symbol of β. We further assume that B is not a suffix of β unless β = B. In this way αβ identifies the significant part of the tape: in what follows we say that it represents the non-blank portion of the tape. We denote by C_M the set of configurations of M.

The initial configuration of M on input w is q_0 B, if w is the empty word, and the string q_0 w, if w ∈ Σ⁺.
In what follows we denote this configuration by C_0(w). We also say that a configuration α q β is accepting if q ∈ F.

⁴ Recall that a string (or word) over a given alphabet A is a finite concatenation of symbols drawn from A. The set of all strings over A, including the empty word ε, is denoted by A*, while A⁺ denotes the set of words over A different from ε. The length of a word x is the number of symbols occurring in x and is denoted by |x|. Clearly |ε| = 0.

We can now define the one-step transition relation. This is a binary relation ⊢_M on the set C_M of the configurations of M. Intuitively, for C, C′ ∈ C_M, we have C ⊢_M C′ if the machine M, in configuration C, reaches configuration C′ by one move. More precisely, given a configuration α q β ∈ C_M, suppose β = b β′ with b ∈ Γ, β′ ∈ Γ*, and δ(q, b) = (p, c, ℓ). We distinguish the following cases:

1) if ℓ = +1 then
   α q b β′ ⊢_M α c p β′   if β′ ≠ ε
   α q b β′ ⊢_M α c p B    if β′ = ε

2) if ℓ = −1 and |α| ≥ 1 then, writing α = α′ a with α′ ∈ Γ* and a ∈ Γ, we have
   α′ a q b β′ ⊢_M α′ p a        if c = B and β′ = ε
   α′ a q b β′ ⊢_M α′ p a c β′   otherwise

In what follows we denote by ⊢*_M the reflexive and transitive closure of ⊢_M. Observe that if δ(q, b) is not defined, or ℓ = −1 and α = ε, then there is no configuration C′ ∈ C_M such that α q β ⊢_M C′. In this case we say that α q β is a halting configuration for M. Without loss of generality we may assume that every accepting configuration is a halting configuration.

A finite sequence {C_i}_{0≤i≤m} of configurations of M is a computation of M on input w if C_0 = C_0(w), C_{i−1} ⊢_M C_i for every i = 1, 2, ..., m, and C_m is a halting configuration for M. If moreover C_m is accepting, we say that M accepts the input w; otherwise we say that it rejects it. If instead the machine M does not halt on input w, its computation is given by an infinite sequence of configurations {C_i}_{0≤i<+∞} such that C_0 = C_0(w) and C_{i−1} ⊢_M C_i for every i ≥ 1.
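The one-step relation just defined can be simulated directly. A minimal sketch, in which the tape is a dict defaulting to blank and, as stipulated above, an undefined move or a leftward move at the left end yields a halting configuration:

```python
def run_tm(delta, q0, finals, w, blank="B", max_steps=10_000):
    """Simulate a deterministic TM: delta maps (state, symbol) to
    (new state, written symbol, move) with move in {-1, +1}.  Returns
    True iff the machine halts in a final state on input w."""
    tape = dict(enumerate(w if w else blank))
    q, pos = q0, 0
    for _ in range(max_steps):
        a = tape.get(pos, blank)
        if (q, a) not in delta:        # delta undefined: halting configuration
            return q in finals
        p, b, move = delta[(q, a)]
        if move == -1 and pos == 0:    # head at the left end: halting configuration
            return q in finals
        tape[pos] = b
        q, pos = p, pos + move
    raise RuntimeError("step bound exceeded (machine may not halt)")

# Two-state machine accepting the strings of a's of even length.
delta = {("q0", "a"): ("q1", "a", +1), ("q1", "a"): ("q0", "a", +1)}
print(run_tm(delta, "q0", {"q0"}, "aaaa"))   # True
```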
Suppose now that M halts on every input x ∈ Σ*. Then we say that M solves the decision problem ⟨Σ*, q⟩ where, for every x ∈ Σ*, q(x) = 1 if M accepts x, and q(x) = 0 otherwise.

Example 13.2 Consider the language L = {x ∈ {a, b}* | x = x^R}, where x^R is the reversal of x. One can easily describe a TM that checks whether a string belongs to L. On an input y ∈ {a, b}* of length n, this machine compares the symbols in positions i and n − i + 1 and accepts if they are equal for every i = 1, 2, ..., n. The computation proceeds by scanning the input string n times, alternating the direction of movement of the head and suitably marking the symbols already read.

We now define the notion of computation time of a TM on a given input. Given a TM M = ⟨Q, Γ, Σ, q_0, B, δ, F⟩, for every w ∈ Σ* we denote by T_M(w) the number of moves executed by M on input w. If M does not halt we set T_M(w) = +∞. Moreover, for every n ∈ ℕ, we denote by T_M(n) the maximum number of moves executed by M on an input of length n:

T_M(n) = max{T_M(w) | w ∈ Σ*, |w| = n}

In this way T_M is a function T_M : ℕ → ℕ ∪ {+∞} which we call the time complexity of M. Given a function f : ℕ → ℝ⁺, we say that M works in time f(n) if T_M(n) ≤ f(n) for every n ∈ ℕ. If moreover f(n) is a polynomial in n, we say that M works in polynomial time.

Example 13.3 It is easy to verify that the TM described in Example 13.2 works in time O(n²).

It is evident that for every TM we can define a RAM machine performing the same computation (provided the symbols of the work alphabet are suitably encoded). A converse property also holds, allowing one to determine, for every RAM machine, an equivalent TM. Moreover, the computation times of the two machines are polynomially related.
Proposition 13.8 If a decision problem Π is solvable in time T(n) by a RAM M under the logarithmic cost criterion, then there exists a TM M′ that solves Π in time p(T(n)) for a suitable polynomial p.

This means that the class P coincides with the class of decision problems solvable by a TM in polynomial time. To characterize the class NP in an analogous way we must introduce the notion of nondeterministic Turing machine. Intuitively, this is a TM that at each configuration may choose the move to execute from a finite set of possible transitions. Thus, on a given input, the machine may perform different computations depending on the choices made at each step. As we shall see, despite this nondeterministic behaviour, it is always possible to associate a decision problem with the machine and to regard it as the problem solved by the machine itself. It will likewise be possible to define the computation time required by the machine on a given input. In this way NP will turn out to coincide with the class of problems solvable by nondeterministic TMs in polynomial time.

Formally, a nondeterministic TM is again a tuple M = ⟨Q, Γ, Σ, q_0, B, δ, F⟩ where Q, Γ, Σ, q_0, B and F are defined as before, while δ is now a function

δ : Q × Γ → 2^{Q × Γ × {−1, +1}}

The machine M, being in state q ∈ Q and reading symbol a ∈ Γ, may choose the move to execute from the set δ(q, a). As in the deterministic case, we can define the configurations of M, the transition relations ⊢_M and ⊢*_M, and the computations of M. Clearly, now for each configuration C ∈ C_M there may be several configurations reachable from C in one step. For this reason, on a given input the machine M may perform different computations, one for each possible sequence of choices made by M at each step.
The computations of M on a given input can be represented by a rooted tree, possibly with infinitely many nodes, with the following properties:
(i) every node is labelled by a configuration of M;
(ii) the root is labelled by the initial configuration on the given input;
(iii) if a node v is labelled by a configuration C and C_1, C_2, ..., C_k are the configurations reachable from C in one step, then v has exactly k children labelled by C_1, C_2, ..., C_k respectively;
(iv) a node v is a leaf if and only if v is labelled by a halting configuration.

From the preceding definition it is evident that a computation of M on a given input corresponds to a branch of the associated computation tree, i.e. to a path from the root to a leaf. We say that an input w is accepted by M if there exists a computation of M on input w leading the machine to an accepting configuration, i.e. if there exists C ∈ C_M such that C = α q β, q ∈ F and C_0(w) ⊢*_M C.

Conversely, if all computations of M on input x end in a non-accepting configuration, we say that M rejects x. If all computations of M terminate on every possible input, we say that M solves the decision problem Π = ⟨Σ*, q⟩ where, for every x ∈ Σ*, q(x) = 1 if M accepts x, and q(x) = 0 otherwise.

Given a nondeterministic TM M = ⟨Q, Γ, Σ, q_0, B, δ, F⟩ and a string w ∈ Σ*, we denote by T_M(w) the maximum number of moves the machine can execute in a computation on input w. In other words, T_M(w) is the height of the computation tree of M on input w. Clearly T_M(w) = +∞ if and only if some computation of M on this input does not halt.
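The computation tree just described can be explored breadth first. The sketch below bounds the depth to keep the search finite, treats a configuration with no applicable move as a halting leaf, and (an assumption of this sketch) skips a leftward move at the left end:

```python
from collections import deque

def ndtm_accepts(delta, q0, finals, w, blank="B", max_depth=50):
    """Breadth-first search of the computation tree of a nondeterministic
    TM: delta maps (state, symbol) to a *set* of (state, symbol, move)
    triples.  Accept iff some branch reaches an accepting halting
    configuration within max_depth steps."""
    start = (q0, 0, tuple(w) if w else (blank,))
    frontier = deque([(start, 0)])
    while frontier:
        (q, pos, tape), depth = frontier.popleft()
        moves = delta.get((q, tape[pos]), set())
        if not moves:                   # leaf: halting configuration
            if q in finals:
                return True
            continue
        if depth == max_depth:
            continue                    # search bound reached on this branch
        for p, b, move in moves:
            if move == -1 and pos == 0:
                continue                # move not applicable at the left end
            new = list(tape)
            new[pos] = b
            if pos + move == len(new):
                new.append(blank)       # extend the non-blank portion
            frontier.append(((p, pos + move, tuple(new)), depth + 1))
    return False

# Guess-and-check: on reading an 'a' the machine may jump to the final
# state q1, so it accepts exactly the strings containing at least one 'a'.
delta = {("q0", "a"): {("q0", "a", +1), ("q1", "a", +1)},
         ("q0", "b"): {("q0", "b", +1)}}
print(ndtm_accepts(delta, "q0", {"q1"}, "ba"))   # True
```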
Moreover, for every n ∈ ℕ, we denote by T_M(n) the maximum value of T_M(w) over the words w of length n:

T_M(n) = max{T_M(w) | w ∈ Σ*, |w| = n}

Again, given a function f : ℕ → ℝ⁺, we say that a nondeterministic TM M works in time f(n) if T_M(n) ≤ f(n) for every n ∈ ℕ.

Proposition 13.9 A decision problem Π belongs to NP if and only if there exist a polynomial p and a nondeterministic TM M that solves Π in time p(n).

Proof. Let Π = ⟨I, q⟩ be a problem in NP; then there exist a set of proofs Δ, a relation R and a polynomial p satisfying the conditions of Definition 13.1. We can define a TM that, on input x ∈ I, nondeterministically chooses a proof D of length at most p(|x|) and then checks, deterministically, whether (x, D) belongs to R. Clearly this nondeterministic TM solves the problem considered and works in polynomial time.

Conversely, if Π is solved by a nondeterministic TM M in polynomial time, we can define the set of proofs of a given instance x as the family of accepting computations of M on input x. One easily verifies that the family of proofs thus obtained satisfies the conditions of Definition 13.1.

13.5.2 Proof of the theorem

In this section we present the proof of Theorem 13.6. In Section 13.3 we have already shown that SODD belongs to NP. We must therefore prove that every problem in NP is polynomially reducible to SODD. In other words, we want to show that for every nondeterministic TM M working in polynomial time there exists a function f, computable in polynomial time, that maps every input string w of M to a Boolean formula φ in conjunctive normal form such that w is accepted by the machine if and only if there exists an assignment of values to the variables making φ true.
Without loss of generality we may assume that, for a suitable polynomial p and for every input w of M, all computations of the machine on w have the same length p(|w|). Indeed, if M does not satisfy this condition, we can always construct a new machine that, on input w, first computes p(|w|), then simulates M on w keeping a counter of the number of moves executed and extending every computation to exactly p(|w|) steps. Suppose moreover that the machine M is given by M = ⟨Q, Γ, Σ, q_1, B, δ, F⟩, where Q = {q_1, q_2, ..., q_s} and Γ = {a_1, a_2, ..., a_r}.

Given now a string w of length n, we know that every computation of M on w is a sequence of p(n) + 1 configurations, each of which can be represented by a string α q β with |αβ| ≤ p(n) + 1. The corresponding formula φ = f(w) will be defined over three kinds of Boolean variables: S(u, t), C(i, j, t) and L(i, t), where the indices i, j, t, u range suitably. Their intuitive meaning is the following:

- the variable S(u, t) will take value 1 if at time t the machine is in state q_u;
- the variable C(i, j, t) will take value 1 if at time t the i-th cell contains the symbol a_j;
- the variable L(i, t) will take value 1 if at time t the read head is positioned on the i-th cell.

Clearly u ∈ {1, 2, ..., s}, t ∈ {0, 1, ..., p(n)}, i ∈ {1, 2, ..., p(n) + 1} and j ∈ {1, 2, ..., r}.

An assignment of values 0 and 1 to these variables represents an accepting computation of M on input w if the following conditions are satisfied:

1. for every t there is exactly one u such that S(u, t) = 1, i.e. at each instant the machine is in exactly one state;
2. for every t there is exactly one i such that L(i, t) = 1, i.e. at each instant the machine reads exactly one cell;
3.
for every t and every i there is exactly one j such that C(i, j, t) = 1, i.e. at each instant every cell contains exactly one symbol;
4. the values of the variables with index t = 0 represent the initial configuration on input w;
5. some variable S(u, p(n)) with q_u ∈ F takes value 1, i.e. M reaches a final state at the end of the computation;
6. for every t and every i, if L(i, t) = 0 then the variables C(i, j, t) and C(i, j, t + 1) take the same value; in other words, the contents of the cells not read by the machine at a given instant stay unchanged at the next instant;
7. if instead L(i, t) = 1, then the values of the variables C(i, j, t + 1), S(u, t + 1) and L(i, t + 1) respect the moves of the machine at instant t.

We now associate with each condition a Boolean formula over the given variables, in such a way that an assignment satisfies the condition if and only if it makes the associated formula true. To this end we use the expression U(y_1, y_2, ..., y_k) defined in (13.2), which takes value 1 if and only if exactly one of its variables has value 1. Its length is bounded by a polynomial in the number of variables (in particular |U(y_1, y_2, ..., y_k)| = O(k²)).

1. The first condition requires that for every t there be exactly one u such that S(u, t) = 1. We can then write the formula

A = ∏_{t=0}^{p(n)} U(S(1, t), S(2, t), ..., S(s, t))

whose length is clearly O(n p(n)), because s is a constant depending only on M and not on the size of the input. Clearly an assignment makes the formula A true if and only if it satisfies condition 1. The other formulas are obtained similarly and all have length polynomial in n.

2. B = ∏_{t=0}^{p(n)} U(L(1, t), L(2, t), ..., L(p(n) + 1, t))

3. C = ∏_{t,i} U(C(i, 1, t), C(i, 2, t), ..., C(i, r, t))

4.
Assuming w = x_1 x_2 ⋯ x_n and, with a slight abuse of notation, letting each x_i also stand for the index of the corresponding symbol,

D = S(1, 0) · L(1, 0) · ∏_{i=1}^{n} C(i, x_i, 0) · ∏_{i=n+1}^{p(n)+1} C(i, B, 0)

5. E = Σ_{q_u ∈ F} S(u, p(n))

6. To represent the sixth condition, denote by x ≡ y the expression (x + ¬y) · (¬x + y), which takes value 1 if and only if the variables x and y take the same value. For every i and every t (t ≠ p(n)) put

F_{it} = L(i, t) + ∏_{j=1}^{r} (C(i, j, t) ≡ C(i, j, t + 1)).

Then define

F = ∏_{t=0}^{p(n)−1} ∏_{i=1}^{p(n)+1} F_{it}

7. For every u, t, i, j define

G_{utij} = ¬S(u, t) + ¬L(i, t) + ¬C(i, j, t) + Σ_{(q_{u′}, a_{j′}, v) ∈ δ(q_u, a_j)} (S(u′, t + 1) · C(i, j′, t + 1) · L(i + v, t + 1)).

Observe that this formula is not in conjunctive normal form. However, by what was proved in Section 13.3, we know that there is an equivalent formula in CNF, which we denote by Ĝ_{utij}. One can check that the length of Ĝ_{utij} is polynomial. It follows that the formula associated with the seventh condition is

G = ∏_{u,t,i,j} Ĝ_{utij}

and its length, too, is polynomial in n.

Since the expression U is in conjunctive normal form, all the preceding formulas are in CNF. Consequently the formula φ, obtained as the Boolean product of the preceding formulas, is in CNF as well:

φ = A · B · C · D · E · F · G.

This formula selects exactly the assignments representing accepting computations of M on input w. We can therefore conclude that there is an assignment making φ true if and only if the machine M accepts the input w. It is moreover easy to verify that, for a fixed M, φ can be constructed in polynomial time from the input w.
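The expression U(y_1, ..., y_k) used throughout the proof is defined in (13.2), which is not reproduced in this excerpt; the standard pairwise "exactly one" CNF encoding below has the same O(k²) size and can stand in for it (literals are signed integers, an encoding chosen for this sketch):

```python
from itertools import combinations

def exactly_one(vs):
    """CNF clauses forcing exactly one of the variables vs to be true:
    one k-literal clause says 'at least one', and a 2-literal clause per
    pair says 'at most one' -- O(k^2) literals in total."""
    return [list(vs)] + [[-a, -b] for a, b in combinations(vs, 2)]

print(exactly_one([1, 2, 3]))   # [[1, 2, 3], [-1, -2], [-1, -3], [-2, -3]]
```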
Struct fork_of_zcash_bn::AffineG1
===

```
#[repr(C)]
pub struct AffineG1(_);
```

Implementations
---

### impl AffineG1

#### pub fn new(x: Fq, y: Fq) -> Result<Self, GroupError>
#### pub fn x(&self) -> Fq
#### pub fn set_x(&mut self, x: Fq)
#### pub fn y(&self) -> Fq
#### pub fn set_y(&mut self, y: Fq)
#### pub fn from_jacobian(g1: G1) -> Option<Self>

Trait Implementations
---

### impl Clone for AffineG1

#### fn clone(&self) -> AffineG1
Returns a copy of the value.
#### const fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.

### impl Debug for AffineG1

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

#### fn from(affine: AffineG1) -> Self
Converts to this type from the input type.

### impl PartialEq<AffineG1> for AffineG1

#### fn eq(&self, other: &AffineG1) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### const fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.

### impl Eq for AffineG1
### impl StructuralEq for AffineG1
### impl StructuralPartialEq for AffineG1

Auto Trait Implementations
---

### impl RefUnwindSafe for AffineG1
### impl Send for AffineG1
### impl Sync for AffineG1
### impl Unpin for AffineG1
### impl UnwindSafe for AffineG1

Blanket Implementations
---

### impl<T> Any for T where T: 'static + ?Sized

#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for T where T: ?Sized

#### fn borrow(&self) -> &T
Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for T where T: ?Sized

#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.

### impl<T> From<T> for T

#### fn from(t: T) -> T
Returns the argument unchanged.

### impl<T, U> Into<U> for T where U: From<T>

#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> ToOwned for T where T: Clone

#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.

### impl<T, U> TryFrom<U> for T where U: Into<T>

#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>

#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.

Struct fork_of_zcash_bn::AffineG2
===

```
#[repr(C)]
pub struct AffineG2(_);
```

Implementations
---

### impl AffineG2

#### pub fn new(x: Fq2, y: Fq2) -> Result<Self, GroupError>
#### pub fn x(&self) -> Fq2
#### pub fn set_x(&mut self, x: Fq2)
#### pub fn y(&self) -> Fq2
#### pub fn set_y(&mut self, y: Fq2)
#### pub fn from_jacobian(g2: G2) -> Option<Self>

Trait Implementations
---

### impl Clone for AffineG2

#### fn clone(&self) -> AffineG2
Returns a copy of the value.
#### const fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.

#### fn from(affine: AffineG2) -> Self
Converts to this type from the input type.

### impl PartialEq<AffineG2> for AffineG2

#### fn eq(&self, other: &AffineG2) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### const fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Eq for AffineG2
### impl StructuralEq for AffineG2
### impl StructuralPartialEq for AffineG2

Auto Trait Implementations
---

### impl RefUnwindSafe for AffineG2
### impl Send for AffineG2
### impl Sync for AffineG2
### impl Unpin for AffineG2
### impl UnwindSafe for AffineG2

Blanket Implementations
---

### impl<T> Any for T where T: 'static + ?Sized

#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for T where T: ?Sized

#### fn borrow(&self) -> &T
Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for T where T: ?Sized

#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.

### impl<T> From<T> for T

#### fn from(t: T) -> T
Returns the argument unchanged.

### impl<T, U> Into<U> for T where U: From<T>

#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> ToOwned for T where T: Clone

#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.

### impl<T, U> TryFrom<U> for T where U: Into<T>

#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>

#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct fork_of_zcash_bn::Fq === ``` #[repr(C)]pub struct Fq(_); ``` Implementations --- ### impl Fq #### pub fn zero() -> Self #### pub fn one() -> Self #### pub fn random<R: Rng>(rng: &mutR) -> Self #### pub fn pow(&self, exp: Fq) -> Self #### pub fn from_str(s: &str) -> Option<Self#### pub fn inverse(&self) -> Option<Self#### pub fn is_zero(&self) -> bool #### pub fn interpret(buf: &[u8; 64]) -> Fq #### pub fn from_slice(slice: &[u8]) -> Result<Self, FieldError#### pub fn to_big_endian(&self, slice: &mut [u8]) -> Result<(), FieldError#### pub fn from_u256(u256: U256) -> Result<Self, FieldError#### pub fn into_u256(self) -> U256 #### pub fn modulus() -> U256 #### pub fn sqrt(&self) -> Option<SelfTrait Implementations --- ### impl Add<Fq> for Fq #### type Output = Fq The resulting type after applying the `+` operator.#### fn add(self, other: Fq) -> Fq Performs the `+` operation. #### fn clone(&self) -> Fq Returns a copy of the value. Read more1.0.0 · source#### const fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### type Output = Fq The resulting type after applying the `*` operator.#### fn mul(self, other: Fq) -> Fq Performs the `*` operation. #### type Output = Fq The resulting type after applying the `-` operator.#### fn neg(self) -> Fq Performs the unary `-` operation. #### fn eq(&self, other: &Fq) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more1.0.0 · source#### const fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason. #### type Output = Fq The resulting type after applying the `-` operator.#### fn sub(self, other: Fq) -> Fq Performs the `-` operation. 
### impl Eq for Fq ### impl StructuralEq for Fq ### impl StructuralPartialEq for Fq Auto Trait Implementations --- ### impl RefUnwindSafe for Fq ### impl Send for Fq ### impl Sync for Fq ### impl Unpin for Fq ### impl UnwindSafe for Fq Blanket Implementations --- ### impl<T> Any for Twhere    T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for Twhere    T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. #### type Error = Infallible The type returned in the event of a conversion error.const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. 
Struct fork_of_zcash_bn::Fq2 === ``` #[repr(C)]pub struct Fq2(_); ``` Implementations --- ### impl Fq2 #### pub fn one() -> Fq2 #### pub fn i() -> Fq2 #### pub fn zero() -> Fq2 #### pub fn new(a: Fq, b: Fq) -> Fq2 Initializes a new F_q2 element a + bi (a is the real coefficient, b is the imaginary coefficient) #### pub fn is_zero(&self) -> bool #### pub fn pow(&self, exp: U256) -> Self #### pub fn real(&self) -> Fq #### pub fn imaginary(&self) -> Fq #### pub fn sqrt(&self) -> Option<Self> #### pub fn from_slice(bytes: &[u8]) -> Result<Self, FieldError> Trait Implementations --- ### impl Add<Fq2> for Fq2 #### type Output = Fq2 The resulting type after applying the `+` operator. #### fn add(self, other: Self) -> Self Performs the `+` operation. #### fn clone(&self) -> Fq2 Returns a copy of the value. #### const fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### type Output = Fq2 The resulting type after applying the `*` operator. #### fn mul(self, other: Self) -> Self Performs the `*` operation. #### type Output = Fq2 The resulting type after applying the `-` operator. #### fn neg(self) -> Self Performs the unary `-` operation. #### fn eq(&self, other: &Fq2) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. #### const fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason. #### type Output = Fq2 The resulting type after applying the `-` operator. #### fn sub(self, other: Self) -> Self Performs the `-` operation. 
### impl Eq for Fq2 ### impl StructuralEq for Fq2 ### impl StructuralPartialEq for Fq2 Auto Trait Implementations --- ### impl RefUnwindSafe for Fq2 ### impl Send for Fq2 ### impl Sync for Fq2 ### impl Unpin for Fq2 ### impl UnwindSafe for Fq2 Blanket Implementations --- ### impl<T> Any for Twhere    T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for Twhere    T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. #### type Error = Infallible The type returned in the event of a conversion error.const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. 
Struct fork_of_zcash_bn::Fr === ``` #[repr(C)]pub struct Fr(_); ``` Implementations --- ### impl Fr #### pub fn zero() -> Self #### pub fn one() -> Self #### pub fn random<R: Rng>(rng: &mutR) -> Self #### pub fn pow(&self, exp: Fr) -> Self #### pub fn from_str(s: &str) -> Option<Self#### pub fn inverse(&self) -> Option<Self#### pub fn is_zero(&self) -> bool #### pub fn interpret(buf: &[u8; 64]) -> Fr #### pub fn from_slice(slice: &[u8]) -> Result<Self, FieldError#### pub fn to_big_endian(&self, slice: &mut [u8]) -> Result<(), FieldError#### pub fn new(val: U256) -> Option<Self#### pub fn new_mul_factor(val: U256) -> Self #### pub fn into_u256(self) -> U256 #### pub fn set_bit(&mut self, bit: usize, to: bool) Trait Implementations --- ### impl Add<Fr> for Fr #### type Output = Fr The resulting type after applying the `+` operator.#### fn add(self, other: Fr) -> Fr Performs the `+` operation. #### fn clone(&self) -> Fr Returns a copy of the value. Read more1.0.0 · source#### const fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### type Output = Fr The resulting type after applying the `*` operator.#### fn mul(self, other: Fr) -> Fr Performs the `*` operation. #### type Output = G1 The resulting type after applying the `*` operator.#### fn mul(self, other: Fr) -> G1 Performs the `*` operation. #### type Output = G2 The resulting type after applying the `*` operator.#### fn mul(self, other: Fr) -> G2 Performs the `*` operation. #### type Output = Fr The resulting type after applying the `-` operator.#### fn neg(self) -> Fr Performs the unary `-` operation. #### fn eq(&self, other: &Fr) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more1.0.0 · source#### const fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. 
The default implementation is almost always sufficient, and should not be overridden without very good reason. #### type Output = Fr The resulting type after applying the `-` operator.#### fn sub(self, other: Fr) -> Fr Performs the `-` operation. ### impl Eq for Fr ### impl StructuralEq for Fr ### impl StructuralPartialEq for Fr Auto Trait Implementations --- ### impl RefUnwindSafe for Fr ### impl Send for Fr ### impl Sync for Fr ### impl Unpin for Fr ### impl UnwindSafe for Fr Blanket Implementations --- ### impl<T> Any for Twhere    T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for Twhere    T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. #### type Error = Infallible The type returned in the event of a conversion error.const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. 
Struct fork_of_zcash_bn::G1 === ``` #[repr(C)]pub struct G1(_); ``` Implementations --- ### impl G1 #### pub fn new(x: Fq, y: Fq, z: Fq) -> Self #### pub fn x(&self) -> Fq #### pub fn set_x(&mut self, x: Fq) #### pub fn y(&self) -> Fq #### pub fn set_y(&mut self, y: Fq) #### pub fn z(&self) -> Fq #### pub fn set_z(&mut self, z: Fq) #### pub fn b() -> Fq #### pub fn from_compressed(bytes: &[u8]) -> Result<Self, CurveErrorTrait Implementations --- ### impl Add<G1> for G1 #### type Output = G1 The resulting type after applying the `+` operator.#### fn add(self, other: G1) -> G1 Performs the `+` operation. #### fn clone(&self) -> G1 Returns a copy of the value. Read more1.0.0 · source#### const fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn from(affine: AffineG1) -> Self Converts to this type from the input type.### impl Group for G1 #### fn zero() -> Self #### fn one() -> Self #### fn random<R: Rng>(rng: &mutR) -> Self #### fn is_zero(&self) -> bool #### fn normalize(&mut self) ### impl Mul<Fr> for G1 #### type Output = G1 The resulting type after applying the `*` operator.#### fn mul(self, other: Fr) -> G1 Performs the `*` operation. #### type Output = G1 The resulting type after applying the `-` operator.#### fn neg(self) -> G1 Performs the unary `-` operation. #### fn eq(&self, other: &G1) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more1.0.0 · source#### const fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason. #### type Output = G1 The resulting type after applying the `-` operator.#### fn sub(self, other: G1) -> G1 Performs the `-` operation. 
### impl Eq for G1 ### impl StructuralEq for G1 ### impl StructuralPartialEq for G1 Auto Trait Implementations --- ### impl RefUnwindSafe for G1 ### impl Send for G1 ### impl Sync for G1 ### impl Unpin for G1 ### impl UnwindSafe for G1 Blanket Implementations --- ### impl<T> Any for Twhere    T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for Twhere    T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. #### type Error = Infallible The type returned in the event of a conversion error.const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. 
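Two more of the blanket implementations repeated on every page above are `impl<T> From<T> for T` ("returns the argument unchanged") and `impl<T, U> Into<U> for T where U: From<T>` ("calls `U::from(self)`"). A stdlib-only sketch with stand-in types (`Meters`/`Millimeters` are illustrative, not crate types):

```rust
// `impl<T> From<T> for T` is the identity conversion; `into()` is provided
// for free by the blanket impl and simply delegates to `From`.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Meters(f64);

#[derive(Debug, Clone, Copy, PartialEq)]
struct Millimeters(f64);

impl From<Meters> for Millimeters {
    fn from(m: Meters) -> Self {
        Millimeters(m.0 * 1000.0)
    }
}

fn main() {
    // Identity conversion via `impl<T> From<T> for T`.
    let same: Meters = Meters::from(Meters(2.0));
    assert_eq!(same, Meters(2.0));

    // `into()` calls `Millimeters::from(self)` under the hood.
    let mm: Millimeters = Meters(2.0).into();
    assert_eq!(mm, Millimeters(2000.0));
    println!("ok");
}
```

This is also why `G1: From<AffineG1>` automatically gives `AffineG1: Into<G1>` without the crate writing an `Into` impl.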
Struct fork_of_zcash_bn::G2 === ``` #[repr(C)]pub struct G2(_); ``` Implementations --- ### impl G2 #### pub fn new(x: Fq2, y: Fq2, z: Fq2) -> Self #### pub fn x(&self) -> Fq2 #### pub fn set_x(&mut self, x: Fq2) #### pub fn y(&self) -> Fq2 #### pub fn set_y(&mut self, y: Fq2) #### pub fn z(&self) -> Fq2 #### pub fn set_z(&mut self, z: Fq2) #### pub fn b() -> Fq2 #### pub fn from_compressed(bytes: &[u8]) -> Result<Self, CurveErrorTrait Implementations --- ### impl Add<G2> for G2 #### type Output = G2 The resulting type after applying the `+` operator.#### fn add(self, other: G2) -> G2 Performs the `+` operation. #### fn clone(&self) -> G2 Returns a copy of the value. Read more1.0.0 · source#### const fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn from(affine: AffineG2) -> Self Converts to this type from the input type.### impl Group for G2 #### fn zero() -> Self #### fn one() -> Self #### fn random<R: Rng>(rng: &mutR) -> Self #### fn is_zero(&self) -> bool #### fn normalize(&mut self) ### impl Mul<Fr> for G2 #### type Output = G2 The resulting type after applying the `*` operator.#### fn mul(self, other: Fr) -> G2 Performs the `*` operation. #### type Output = G2 The resulting type after applying the `-` operator.#### fn neg(self) -> G2 Performs the unary `-` operation. #### fn eq(&self, other: &G2) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more1.0.0 · source#### const fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason. #### type Output = G2 The resulting type after applying the `-` operator.#### fn sub(self, other: G2) -> G2 Performs the `-` operation. 
### impl Eq for G2 ### impl StructuralEq for G2 ### impl StructuralPartialEq for G2 Auto Trait Implementations --- ### impl RefUnwindSafe for G2 ### impl Send for G2 ### impl Sync for G2 ### impl Unpin for G2 ### impl UnwindSafe for G2 Blanket Implementations --- ### impl<T> Any for Twhere    T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for Twhere    T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. #### type Error = Infallible The type returned in the event of a conversion error.const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Struct fork_of_zcash_bn::Gt === ``` #[repr(C)]pub struct Gt(_); ``` Implementations --- ### impl Gt #### pub fn one() -> Self #### pub fn pow(&self, exp: Fr) -> Self #### pub fn inverse(&self) -> Option<Self#### pub fn final_exponentiation(&self) -> Option<SelfTrait Implementations --- ### impl Clone for Gt #### fn clone(&self) -> Gt Returns a copy of the value. 
Read more1.0.0 · source#### const fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### type Output = Gt The resulting type after applying the `*` operator.#### fn mul(self, other: Gt) -> Gt Performs the `*` operation. #### fn eq(&self, other: &Gt) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more1.0.0 · source#### const fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason. ### impl Eq for Gt ### impl StructuralEq for Gt ### impl StructuralPartialEq for Gt Auto Trait Implementations --- ### impl RefUnwindSafe for Gt ### impl Send for Gt ### impl Sync for Gt ### impl Unpin for Gt ### impl UnwindSafe for Gt Blanket Implementations --- ### impl<T> Any for Twhere    T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for Twhere    T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. 
#### type Error = Infallible The type returned in the event of a conversion error.const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Enum fork_of_zcash_bn::CurveError === ``` pub enum CurveError { InvalidEncoding, NotMember, Field(FieldError), ToAffineConversion, } ``` Variants --- ### `InvalidEncoding` ### `NotMember` ### `Field(FieldError)` ### `ToAffineConversion` Trait Implementations --- ### impl Debug for CurveError #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn from(fe: FieldError) -> Self Converts to this type from the input type.Auto Trait Implementations --- ### impl RefUnwindSafe for CurveError ### impl Send for CurveError ### impl Sync for CurveError ### impl Unpin for CurveError ### impl UnwindSafe for CurveError Blanket Implementations --- ### impl<T> Any for Twhere    T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. 
### impl<T, U> TryFrom<U> for Twhere    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Enum fork_of_zcash_bn::FieldError === ``` pub enum FieldError { InvalidSliceLength, InvalidU512Encoding, NotMember, } ``` Variants --- ### `InvalidSliceLength` ### `InvalidU512Encoding` ### `NotMember` Trait Implementations --- ### impl Debug for FieldError #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn from(fe: FieldError) -> Self Converts to this type from the input type.Auto Trait Implementations --- ### impl RefUnwindSafe for FieldError ### impl Send for FieldError ### impl Sync for FieldError ### impl Unpin for FieldError ### impl UnwindSafe for FieldError Blanket Implementations --- ### impl<T> Any for Twhere    T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. 
### impl<T, U> TryFrom<U> for Twhere    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Enum fork_of_zcash_bn::GroupError === ``` pub enum GroupError { NotOnCurve, NotInSubgroup, } ``` Variants --- ### `NotOnCurve` ### `NotInSubgroup` Trait Implementations --- ### impl Debug for Error #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read moreAuto Trait Implementations --- ### impl RefUnwindSafe for Error ### impl Send for Error ### impl Sync for Error ### impl Unpin for Error ### impl UnwindSafe for Error Blanket Implementations --- ### impl<T> Any for Twhere    T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. 
### impl<T, U> TryFrom<U> for Twhere    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
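The `ToOwned` blanket impl that appears on each page above (`impl<T> ToOwned for T where T: Clone` with `type Owned = T`) can likewise be demonstrated with the standard library alone; `Vec<u32>` here is just a convenient `Clone` type standing in for the crate's structs:

```rust
// For any `T: Clone`, `to_owned` creates owned data from borrowed data by
// cloning, and `clone_into` uses borrowed data to replace owned data —
// exactly the two behaviors the blanket-impl docs describe.
fn main() {
    let owned: Vec<u32> = vec![1, 2, 3];
    let borrowed: &Vec<u32> = &owned;

    // `to_owned` on `T: Clone` is equivalent to `clone`.
    let copy: Vec<u32> = borrowed.to_owned();
    assert_eq!(copy, vec![1, 2, 3]);

    // `clone_into` replaces an existing owned value in place.
    let mut target: Vec<u32> = Vec::new();
    borrowed.clone_into(&mut target);
    assert_eq!(target, vec![1, 2, 3]);
    println!("ok");
}
```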
Package ‘Certara.NLME8’ June 15, 2023
Version 1.2.4
Title Utilities for Certara's Nonlinear Mixed-Effects Modeling Engine
Description Perform Nonlinear Mixed-Effects (NLME) Modeling using Certara's NLME-Engine. Access the same Maximum Likelihood engines used in the Phoenix platform, including algorithms for parametric methods, individual, and pooled data analysis <https://www.certara.com/app/uploads/2020/06/BR_PhoenixNLME-v4.pdf>. The Quasi-Random Parametric Expectation-Maximization Method (QRPEM) is also supported <https://www.page-meeting.org/default.asp?abstract=2338>. Execution is supported both locally and on remote machines. Remote execution includes support for Linux Sun Grid Engine (SGE), Terascale Open-source Resource and Queue Manager (TORQUE) grids, Linux and Windows multicore, and individual runs.
Depends R (>= 4.0.0)
License LGPL-3
RoxygenNote 7.2.3
Suggests testthat
Imports xml2, batchtools (>= 0.9.9), reshape, utils, data.table
Encoding UTF-8
NeedsCompilation no
Author <NAME> [aut], <NAME> [aut], <NAME> [aut, cre], <NAME> [ctb], Certara USA, Inc. [cph, fnd]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-06-15 20:00:12 UTC
R topics documented: checkGCC, checkInstallDir, checkLicenseFile, checkMPISettings, checkRootDir, getTableNames, performBootstrap, performEstimationOnSortColumns, performParallelNLMERun, performProfileEstimation, performShotgunCovarSearch, performStepwiseCovarSearch, reconnectToBootstrapNLMERun, UpdateMDLfrom_dmptxt
checkGCC Checks the local host for GCC version in the path
Description Performs an operating-system-dependent check for availability of GCC.
Usage checkGCC(OS.type = .Platform$OS.type)
Arguments OS.type Character specifying operating system type. Defaults to .Platform$OS.type.
Value TRUE if the GCC check is successful, otherwise FALSE. 
Examples checkGCC()
checkInstallDir Checks the given directory for the files required for an NLME run
Description Checks the given directory for the files required for an NLME run
Usage checkInstallDir(installDir)
Arguments installDir Location of NLME executables as set in the INSTALLDIR environment variable.
Value TRUE if all checks are successful, otherwise FALSE.
Examples ## Not run: checkInstallDir(Sys.getenv("INSTALLDIR")) ## End(Not run)
checkLicenseFile Checks if NLME run is licensed
Description Checks if a valid license file is available for an NLME run.
Usage checkLicenseFile(installDir, licenseFile = "", verbose = FALSE)
Arguments installDir Directory with NLME executables as specified in the INSTALLDIR environment variable. licenseFile Path to the license file. If not given, and the Gemalto License server is not active, NLME will try to look for it in installationDirectory, and in the Phoenix installation directory. verbose Flag to output warnings if issues are found.
Value TRUE if all checks are successful, otherwise FALSE.
Examples ## Not run: checkLicenseFile(Sys.getenv("INSTALLDIR"), FALSE) ## End(Not run)
checkMPISettings Check MPI settings for the given local host
Description Checks if MPI settings are provided and feasible. The check is done for the hosts where the MPI parallel method is used.
Usage checkMPISettings(obj)
Arguments obj NLME Parallel Host to be checked
Value TRUE if MPI executables are ready for running, otherwise FALSE. If the host does not have MPI in its parallel method, it also returns TRUE.
Examples ## Not run: checkMPISettings(host) ## End(Not run)
checkRootDir Check NLME ROOT DIRECTORY for the given local host
Description Checks if NLME ROOT DIRECTORY is provided and ready for writing. That directory is used for writing temporary folders.
Usage checkRootDir(obj)
Arguments obj NLME Parallel Host to be checked
Value TRUE if NLME ROOT DIRECTORY exists and is accessible for writing, otherwise FALSE. 
Examples ## Not run: checkRootDir(host) ## End(Not run)
getTableNames Table names from the column definition file
Description Extracts table names from the column definition file
Usage getTableNames(columnDefinitionFilename, simtbl = FALSE)
Arguments columnDefinitionFilename Path to the NLME column definition file. simtbl Logical. TRUE extracts simulation tables, FALSE extracts simple tables.
Value Vector of names of the tables in the column definition file if any, empty string otherwise
Examples ## Not run: getTableNames("cols1.txt", simtbl = TRUE) ## End(Not run)
performBootstrap NLME Bootstrap Function
Description Runs an NLME bootstrap job in parallel and produces summaries
Usage performBootstrap(args, allowIntermediateResults = TRUE, reportProgress = FALSE)
Arguments args Arguments for bootstrap execution. allowIntermediateResults Set to TRUE to return intermediate results. reportProgress Set to TRUE to report progress.
Value Directory path where the NLME job was executed
performEstimationOnSortColumns Sort specification for multiple estimations
Description Runs multiple estimations, sorting the input dataset by the requested columns and creating multiple data sets
Usage performEstimationOnSortColumns(args, reportProgress = FALSE)
Arguments args A vector of arguments provided as the following: c(method, install_directory, shared_directory, localWorkingDir, nlmeArgsFile, numColumns, ColumnNames, NumProc, workflowName). reportProgress Whether it is required to report the progress (usually for local jobs).
Value Directory path where the NLME job was executed
performParallelNLMERun Runs a set of NLME jobs in parallel
Description Runs a set of NLME jobs in parallel
Usage performParallelNLMERun( args, partialJob = FALSE, allowIntermediateResults = TRUE, progressStage = "", func = "", func_arg = NULL, reportProgress = FALSE )
Arguments args A vector of arguments provided as the following: c(jobType, parallelMethod, install_dir, shared_directory, localWorkingDir, controlFile, NumProc, 
workflow_name, fixefUnits). partialJob Is TRUE if it is not required to stop the job, as for the covariate stepwise search. allowIntermediateResults Is TRUE if intermediate results are possible, as for sorting. progressStage Stage of analysis to be reported. func Function to be executed after the NLME job. func_arg Arguments to be provided to the function named above. reportProgress Whether it is required to report the progress (usually for local jobs).
Value Directory path where the NLME job was executed
performProfileEstimation NLME profile estimation run on a list of fixed effects
Description This function runs multiple estimations, sorting the input dataset by the requested columns and creating multiple data sets. Runs are also generated for all profiling variables.
Usage performProfileEstimation(args, reportProgress = FALSE)
Arguments args Arguments for profile estimation. reportProgress Set to TRUE to report progress.
Value Directory path where the NLME job was executed
performShotgunCovarSearch Shotgun covariate search
Description Runs a set of possible covariate sets in parallel
Usage performShotgunCovarSearch(args, reportProgress = FALSE)
Arguments args A vector of arguments provided as the following: c(jobType, parallelMethod, install_dir, shared_directory, localWorkingDir, controlFile, NumProc, workflow_name, fixefUnits). reportProgress Whether it is required to report the progress (usually for local jobs).
Value Directory path where the NLME job was executed
performStepwiseCovarSearch NLME stepwise covariate search
Description This function runs a stepwise covariate NLME job in parallel. It is designed to be called from the command line (Rscript).
Usage performStepwiseCovarSearch(args, reportProgress = FALSE)
Arguments args A vector of arguments provided as the following: c(method, install_directory, shared_directory, localWorkingDir, modelFile, nlmeArgsFile, listOfFilesToCopy, numCovariates, CovariateNames, NCriteria, addPValue, removePValue, NumProc, workflowName). 
reportProgress   whether it is required to report the progress (for local jobs usually)

Value
Directory path where the NLME job was executed.

reconnectToBootstrapNLMERun          Use to reconnect to a grid job

Description
Use to reconnect to a grid job.

Usage
reconnectToBootstrapNLMERun(args)

Arguments
args   Arguments for reconnecting to a bootstrap grid run

Value
Directory path where the NLME job was executed.

UpdateMDLfrom_dmptxt          Update Model text file from NLME output File

Description
This function updates a model file with the parameter estimates obtained from a dmp file (a text file in the R structure format of the output generated by NLME). The updated model file includes the estimated fixed effects, error terms and random effects values.

Usage
UpdateMDLfrom_dmptxt(
  dmpfile = "dmp.txt",
  SharedWorkingDir,
  model_file,
  compile = TRUE
)

Arguments
dmpfile   The path to the DMP text file.
SharedWorkingDir   The working directory. Used if dmpfile is given without a full path.
model_file   The name of the model file to be updated (with optional full path).
compile   A logical value indicating whether to compile the updated model file into an NLME executable. Default is TRUE.

Details
The TDL5 executable from the NLME Engine is used. The NLME engine location is identified by the INSTALLDIR environment variable. The current function will give an error if TDL5 cannot be executed.

Value
The path to the updated model file.
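A minimal usage sketch for UpdateMDLfrom_dmptxt follows, using only the documented arguments. The directory and file names are hypothetical, and the call requires the NLME engine to be installed with the INSTALLDIR environment variable set, so it is wrapped in ## Not run:.

```r
## Not run:
# Hypothetical paths; the dmp file must come from a finished NLME run.
Sys.setenv(INSTALLDIR = "C:/Program Files/Certara/NLME_Engine")

updated <- UpdateMDLfrom_dmptxt(
  dmpfile          = "dmp.txt",        # NLME output in R structure format
  SharedWorkingDir = "C:/work/run01",  # used because dmpfile has no full path
  model_file       = "model.mdl",      # model file to receive the estimates
  compile          = FALSE             # skip the TDL5 compilation step
)
updated  # path to the updated model file
## End(Not run)
```

With compile = TRUE, the function would additionally invoke TDL5 to rebuild the NLME executable, which is why INSTALLDIR must point at a working engine installation.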
Package ‘rlfsm’          October 14, 2022

Type Package
Title Simulations and Statistical Inference for Linear Fractional Stable Motions
Version 1.1.2
Maintainer <NAME> <<EMAIL>>
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-4700-7221>),
  <NAME> [aut] (<https://orcid.org/0000-0002-1395-9427>),
  <NAME> [ctb]
Description Contains functions for simulating the linear fractional stable motion according to the algorithm developed by Mazur and Otryakhin <doi:10.32614/RJ-2020-008> based on the method from Stoev and Taqqu (2004) <doi:10.1142/S0218348X04002379>, as well as functions for estimation of parameters of these processes introduced by Mazur, Otryakhin and Podolskij (2018) <arXiv:1802.06373>, and also different related quantities.
License GPL-3
URL https://gitlab.com/Dmitry_Otryakhin/Tools-for-parameter-estimation-of-the-linear-fractional-stable-motion
Encoding UTF-8
RoxygenNote 7.2.1
Depends methods, foreach, doParallel
Imports ggplot2, stabledist, reshape2, plyr, Rdpack, Rcpp
Suggests elliptic, testthat, stringi
RdMacros Rdpack
LinkingTo Rcpp
NeedsCompilation yes
Repository CRAN
Date/Publication 2022-08-27 13:20:05 UTC

R topics documented:
alpha_hat, a_p, a_tilda, ContinEstim, GenHighEstim, GenLowEstim, H_hat, h_kr, increment, MCestimLFSM, m_pk, Norm_alpha, path, paths, Path_array, phi, phi_of_alpha, Plot_dens, Plot_list_paths, Plot_vb, Retrieve_stats, R_hl, sf, sigma_hat, theta, U_g, U_gh, U_ghuv
alpha_hat          Statistical estimator for alpha

Description
Defined for the two frequencies as

$$\widehat\alpha_{high} := \frac{\log|\log \varphi_{high}(t_2; \widehat H_{high}(p,k)_n, k)_n| - \log|\log \varphi_{high}(t_1; \widehat H_{high}(p,k)_n, k)_n|}{\log t_2 - \log t_1},$$

$$\widehat\alpha_{low} := \frac{\log|\log \varphi_{low}(t_2; k)_n| - \log|\log \varphi_{low}(t_1; k)_n|}{\log t_2 - \log t_1}.$$

Usage
alpha_hat(t1, t2, k, path, H, freq)

Arguments
t1, t2   real numbers such that t2 > t1 > 0
k   increment order
path   sample path of lfsm on which the inference is to be performed
H   Hurst parameter
freq   Frequency of the motion. It can take two values: "H" for the high frequency and "L" for the low frequency setting.

Details
The function triggers function phi, thus the Hurst parameter is required only in the high frequency case. In the low frequency case, there is no need to assign H a value because it will not be evaluated.

References
<NAME>, <NAME>, <NAME> (2020). “Estimation of the linear fractional stable motion.” Bernoulli, 26(1), 226–252. https://doi.org/10.3150/19-BEJ1124.

Examples
m<-45; M<-60; N<-2^14-M
alpha<-1.8; H<-0.8; sigma<-0.3
freq='H'
r=1; k=2; p=0.4; t1=1; t2=2

# Estimating alpha in the high frequency case
# using preliminary estimation of H
lfsm<-path(N=N,m=m,M=M,alpha=alpha,H=H,
           sigma=sigma,freq='L',disable_X=FALSE,seed=3)$lfsm
H_est<-H_hat(p=p,k=k,path=lfsm)
H_est
alpha_est<-alpha_hat(t1=t1,t2=t2,k=k,path=lfsm,H=H_est,freq=freq)
alpha_est

a_p          Function a_p.

Description
Computes the corresponding function value from Mazur et al. 2018.

Usage
a_p(p)

Arguments
p   power, a real number from (-1,1)

References
<NAME>, <NAME>, <NAME> (2020). “Estimation of the linear fractional stable motion.” Bernoulli, 26(1), 226–252. https://doi.org/10.3150/19-BEJ1124.

a_tilda          Creates the corresponding value from the paper by Stoev and Taqqu (2004).

Description
a_tilda triggers a_tilda_cpp, which is written in C++ and essentially performs the computation of the value.

Usage
a_tilda(N, m, M, alpha, H)

Arguments
N   a number of points of the lfsm.
m   discretization.
A number of points between two nearby motion points.
M   truncation parameter. A number of points at which the integral representing the definition of lfsm is calculated. So, after M points back we consider the rest of the integral to be 0.
alpha   self-similarity parameter of the alpha-stable random motion.
H   Hurst parameter

References
<NAME>, Taqqu MS (2004). “Simulation methods for linear fractional stable motion and FARIMA using the Fast Fourier Transform.” Fractals, 95(1), 95-121. https://doi.org/10.1142/S0218348X04002379.

ContinEstim          Parameter estimation procedure in the continuous case.

Description
Parameter freq is preserved to allow for investigation of the inference procedure in the high frequency case.

Usage
ContinEstim(t1, t2, p, k, path, freq)

Arguments
t1, t2   real numbers such that t2 > t1 > 0
p   power
k   increment order
path   sample path of lfsm on which the inference is to be performed
freq   Frequency of the motion. It can take two values: "H" for the high frequency and "L" for the low frequency setting.

References
<NAME>, <NAME>, <NAME> (2020). “Estimation of the linear fractional stable motion.” Bernoulli, 26(1), 226–252. https://doi.org/10.3150/19-BEJ1124.

Examples
m<-45; M<-60; N<-2^10-M
alpha<-0.8; H<-0.8; sigma<-0.3
p<-0.3; k=3; t1=1; t2=2
lfsm<-path(N=N,m=m,M=M,alpha=alpha,H=H,
           sigma=sigma,freq='L',disable_X=FALSE,seed=3)$lfsm
ContinEstim(t1,t2,p,k,path=lfsm,freq='L')

GenHighEstim          High frequency estimation procedure for lfsm.

Description
General estimation procedure for the high frequency case when 1/alpha is not a natural number. The "unnecessary" parameter freq is preserved to allow for investigation of the inference procedure in the low frequency case.

Usage
GenHighEstim(p, p_prime, path, freq, low_bound = 0.01, up_bound = 4)

Arguments
p   power
p_prime   power
path   sample path of lfsm on which the inference is to be performed
freq   Frequency of the motion. It can take two values: "H" for the high frequency and "L" for the low frequency setting.
low_bound   positive real number
up_bound   positive real number

Details
In this algorithm the preliminary estimate of alpha is found using the uniroot function, which is given the lower and upper bounds for alpha via the low_bound and up_bound parameters. It is not possible to pass 0 as the lower bound because there are numerical limitations on the alpha estimate, caused by the length of the sample path and by numerical errors. p and p_prime must belong to the interval (0,1/2) (in the notation kept in the rlfsm package). The two powers cannot be equal.

References
<NAME>, <NAME>, <NAME> (2020). “Estimation of the linear fractional stable motion.” Bernoulli, 26(1), 226–252. https://doi.org/10.3150/19-BEJ1124.

Examples
m<-45; M<-60; N<-2^10-M
sigma<-0.3
p<-0.2; p_prime<-0.4

#### Continuous case
lfsm<-path(N=N,m=m,M=M,alpha=1.8,H=0.8,
           sigma=sigma,freq='L',disable_X=FALSE,seed=3)$lfsm
GenHighEstim(p=p,p_prime=p_prime,path=lfsm,freq="H")

#### H-1/alpha<0 case
lfsm<-path(N=N,m=m,M=M,alpha=0.8,H=0.8,
           sigma=sigma,freq='H',disable_X=FALSE,seed=3)$lfsm
GenHighEstim(p=p,p_prime=p_prime,path=lfsm,freq="H")

GenLowEstim          Low frequency estimation procedure for lfsm.

Description
General estimation procedure for the low frequency case when 1/alpha is not a natural number.

Usage
GenLowEstim(t1, t2, p, path, freq = "L")

Arguments
t1, t2   real numbers such that t2 > t1 > 0
p   power
path   sample path of lfsm on which the inference is to be performed
freq   Frequency of the motion. It can take two values: "H" for the high frequency and "L" for the low frequency setting.

References
<NAME>, <NAME>, <NAME> (2020). “Estimation of the linear fractional stable motion.” Bernoulli, 26(1), 226–252. https://doi.org/10.3150/19-BEJ1124.
Examples
m<-45; M<-60; N<-2^10-M
sigma<-0.3
p<-0.3; k=3; t1=1; t2=2

#### Continuous case
lfsm<-path(N=N,m=m,M=M,alpha=1.8,H=0.8,
           sigma=sigma,freq='L',disable_X=FALSE,seed=3)$lfsm
GenLowEstim(t1=t1,t2=t2,p=p,path=lfsm,freq="L")

#### H-1/alpha<0 case
lfsm<-path(N=N,m=m,M=M,alpha=0.8,H=0.8,
           sigma=sigma,freq='L',disable_X=FALSE,seed=3)$lfsm
GenLowEstim(t1=t1,t2=t2,p=p,path=lfsm,freq="L")

#### The procedure works also for the high frequency case
lfsm<-path(N=N,m=m,M=M,alpha=1.8,H=0.8,
           sigma=sigma,freq='H',disable_X=FALSE,seed=3)$lfsm
GenLowEstim(t1=t1,t2=t2,p=p,path=lfsm,freq="H")

H_hat          Statistical estimator of H in the high/low frequency setting

Description
The statistic is defined as

$$\widehat H_{high}(p,k)_n := \frac{1}{p}\,\log_2 R_{high}(p,k)_n, \qquad \widehat H_{low}(p,k)_n := \frac{1}{p}\,\log_2 R_{low}(p,k)_n.$$

Usage
H_hat(p, k, path)

Arguments
p   power
k   increment order
path   sample path of lfsm on which the inference is to be performed

References
<NAME>, <NAME>, <NAME> (2020). “Estimation of the linear fractional stable motion.” Bernoulli, 26(1), 226–252. https://doi.org/10.3150/19-BEJ1124.

h_kr          Function h_kr

Description
Function $h_{k,r}: \mathbb R \to \mathbb R$ is given by

$$h_{k,r}(x) = \sum_{j=0}^{k} (-1)^j \binom{k}{j} (x - rj)_+^{H - 1/\alpha}, \qquad x \in \mathbb R.$$

Usage
h_kr(k, r, x, H, alpha, l = 0)

Arguments
k   order of the increment, a natural number
r   difference step, a natural number
x   real number
H   Hurst parameter
alpha   self-similarity parameter of the alpha-stable random motion.
l   a value by which we shift x. It is used for computing the function f_{.+l} and is passed to the integrate function.

References
<NAME>, <NAME>, <NAME> (2020). “Estimation of the linear fractional stable motion.” Bernoulli, 26(1), 226–252. https://doi.org/10.3150/19-BEJ1124.

Examples
#### Plot h_kr ####
s<-seq(0,10, by=0.01)
h_val<-sapply(s,h_kr, k=5, r=1, H=0.3, alpha=1)
plot(s,h_val)

increment          Higher order increments

Description
Differences of the kth order, defined as

$$\Delta^{n,r}_{i,k} X := \sum_{j=0}^{k} (-1)^j \binom{k}{j} X_{(i-rj)/n}, \qquad i \ge rk.$$

Index i here is a coordinate in terms of point_num.
Although R uses vector indexes that start from 1, increment has i varying from 0 to N, so that a vector has length N+1. This is done in order to comply with the notation of the paper. This function doesn't allow for choosing the frequency n. The frequency is determined by the path supplied: n equals either the length of the path in the high frequency setting, or 1 in the low frequency setting. increment() gives increments at certain points passed as i, which is a vector here. increments() computes high order increments for the whole sample path. The first function evaluates the formula above, while the second one uses the structure diff(diff(...)), because direct evaluation of the formula is slower at higher k.

Usage
increment(r, i, k, path)
increments(k, r, path)

Arguments
r   difference step, a natural number
i   index of the point at which the increment is to be computed, a natural number.
k   order of the increment, a natural number
path   sample path for which a kth order increment is computed

References
<NAME>, <NAME>, <NAME> (2020). “Estimation of the linear fractional stable motion.” Bernoulli, 26(1), 226–252. https://doi.org/10.3150/19-BEJ1124.

Examples
m<-45; M<-60; N<-2^10-M
alpha<-0.8; H<-0.8; sigma<-0.3
lfsm<-path(N=N,m=m,M=M,alpha=alpha,H=H,
           sigma=sigma,freq='L',disable_X=FALSE,seed=3)$lfsm
tryCatch(
  increment(r=1,i=length(lfsm),k=length(lfsm)+100,path=lfsm),
  error=function(c) 'An error occurs when k is larger than the length of the sample path')
increment(r=3,i=50,k=3,path=lfsm)

path=c(1,4,3,6,8,5,3,5,8,5,1,8,6)
r=2; k=3
n <- length(path) - 1
DeltaX = increment(seq(r*k, n), path = path, k = k, r = r)
DeltaX == increments(k=k,r=r,path)

MCestimLFSM          Numerical properties of statistical estimators operating on the linear fractional stable motion.

Description
The function is useful, for instance, when one needs to compute the standard deviation of the $\widehat\alpha_{high}$ estimator given a fixed set of parameters.

Usage
MCestimLFSM(Nmc, s, m, M, alpha, H, sigma, fr, Inference, ...)
Arguments
Nmc   Number of Monte Carlo repetitions
s   sequence of path lengths
m   discretization. A number of points between two nearby motion points
M   truncation parameter. A number of points at which the integral representing the definition of lfsm is calculated. So, after M points back we consider the rest of the integral to be 0.
alpha   self-similarity parameter of the alpha-stable random motion.
H   Hurst parameter
sigma   Scale parameter of lfsm
fr   frequency. Either "H" or "L"
Inference   statistical function to apply to sample paths
...   parameters to pass to Inference

Details
MCestimLFSM performs Monte-Carlo experiments to compute parameters according to procedure Inference. More specifically, for each element of s it generates Nmc lfsm sample paths with length equal to s[i], performs the statistical inference on each, obtaining the estimates, and then returns their different statistics. It is vital that the estimator returns a list of named parameters (one or several of 'sigma', 'alpha' and 'H'). MCestimLFSM uses the names to look up the true parameter value and compute its bias. For sample path generation MCestimLFSM uses a light-weight version of path, path_fast. In order to be applied, function Inference must accept the argument 'path' as a sample path.
Value
It returns a list containing the following components:
data   a data frame, values of the estimates depending on path length s
data_nor   a data frame, normalized values of the estimates depending on path length s
means, biases, sds   data frames: means, biases and standard deviations of the estimators depending on s
Inference   a function used to obtain estimates
alpha, H, sigma   the parameters for which MCestimLFSM performs path generation
freq   frequency, either 'L' for low- or 'H' for high frequency

Examples
#### Set of global parameters ####
m<-25; M<-60
p<-.4; p_prime<-.2; k<-2
t1<-1; t2<-2
NmonteC<-5e1
S<-c(1e2,3e2)
alpha<-1.8; H<-0.8; sigma<-0.3

# How to plot empirical density
theor_3_1_H_clt<-MCestimLFSM(s=S,fr='H',Nmc=NmonteC,
                             m=m,M=M,alpha=alpha,H=H,
                             sigma=sigma,ContinEstim,
                             t1=t1,t2=t2,p=p,k=k)
l_plot<-Plot_dens(par_vec=c('sigma','alpha','H'),
                  MC_data=theor_3_1_H_clt, Nnorm=1e7)

# For MCestimLFSM() it is vital that the estimator returns a list of named parameters
H_hat_f <- function(p,k,path) {hh<-H_hat(p,k,path); list(H=hh)}
theor_3_1_H_clt<-MCestimLFSM(s=S,fr='H',Nmc=NmonteC,
                             m=m,M=M,alpha=alpha,H=H,
                             sigma=sigma,H_hat_f, p=p,k=k)

# The estimator can return one, two or three of the parameters.
est_1 <- function(path) list(H=1)
theor_3_1_H_clt<-MCestimLFSM(s=S,fr='H',Nmc=NmonteC,
                             m=m,M=M,alpha=alpha,H=H, sigma=sigma,est_1)
est_2 <- function(path) list(H=0.8, alpha=1.5)
theor_3_1_H_clt<-MCestimLFSM(s=S,fr='H',Nmc=NmonteC,
                             m=m,M=M,alpha=alpha,H=H, sigma=sigma,est_2)
est_3 <- function(path) list(sigma=5, H=0.8, alpha=1.5)
theor_3_1_H_clt<-MCestimLFSM(s=S,fr='H',Nmc=NmonteC,
                             m=m,M=M,alpha=alpha,H=H, sigma=sigma,est_3)

m_pk          m(-p,k)

Description
Defined as $m_{p,k} := \mathbb E[\,|\Delta_{k,k} X|^p\,]$ for positive powers. When p is negative (-p is positive) the equality does not hold.

Usage
m_pk(k, p, alpha, H, sigma)

Arguments
k   increment order
p   a positive number
alpha   self-similarity parameter of the alpha-stable random motion.
H   Hurst parameter
sigma   Scale parameter of lfsm

Details
The following identity is used for computations:

$$m_{-p,k} = \frac{(\sigma \|h_k\|_\alpha)^{-p}}{a_{-p}} \int_{\mathbb R} \exp(-|y|^\alpha)\,|y|^{-1+p}\,dy = \frac{2(\sigma \|h_k\|_\alpha)^{-p}}{\alpha\, a_{-p}}\,\Gamma(p/\alpha)$$

References
<NAME>, <NAME>, <NAME> (2020). “Estimation of the linear fractional stable motion.” Bernoulli, 26(1), 226–252. https://doi.org/10.3150/19-BEJ1124.

Norm_alpha          Alpha-norm of an arbitrary function

Description
Alpha-norm of an arbitrary function.

Usage
Norm_alpha(fun, alpha, ...)

Arguments
fun   a function for which to compute the norm
alpha   self-similarity parameter of the alpha-stable random motion.
...   a set of parameters to pass to integrate

Details
fun must accept a vector of values for evaluation. See ?integrate for further details. Most problems with this function appear because of rather high precision requirements. Try to tune the rel.tol parameter first.

References
<NAME>, <NAME>, <NAME> (2020). “Estimation of the linear fractional stable motion.” Bernoulli, 26(1), 226–252. https://doi.org/10.3150/19-BEJ1124.

Examples
Norm_alpha(h_kr,alpha=1.8,k=2,r=1,H=0.8,l=4)

path          Generator of linear fractional stable motion

Description
The function creates a 1-dimensional LFSM sample path using the numerical algorithm from the paper by Otryakhin and Mazur. The theoretical foundation of the method comes from the article by Stoev and Taqqu. Linear fractional stable motion is defined as

$$X_t = \int_{\mathbb R} \left\{ (t-s)_+^{H-1/\alpha} - (-s)_+^{H-1/\alpha} \right\} dL_s$$

Usage
path(
  N = NULL,
  m,
  M,
  alpha,
  H,
  sigma,
  freq,
  disable_X = FALSE,
  levy_increments = NULL,
  seed = NULL
)

Arguments
N   a number of points of the lfsm.
m   discretization. A number of points between two nearby motion points
M   truncation parameter. A number of points at which the integral representing the definition of lfsm is calculated. So, after M points back we consider the rest of the integral to be 0.
alpha   self-similarity parameter of the alpha-stable random motion.
H   Hurst parameter
sigma   Scale parameter of lfsm
freq   Frequency of the motion.
It can take two values: "H" for the high frequency and "L" for the low frequency setting.
disable_X   is needed to disable computation of X. The default value is FALSE. When it is TRUE, only a Levy motion is returned, which in turn reduces the computation time. The feature is particularly useful for reproducibility when combined with seeding.
levy_increments   increments of the Levy motion underlying the lfsm.
seed   this parameter performs seeding of the path generator

Value
It returns a list containing the motion, the underlying Levy motion, the point numbers of the motions from 0 to N and the corresponding coordinates (which depend on the frequency), the parameters that were used to generate the lfsm, and the predefined frequency.

References
<NAME>, <NAME> (2020). “Linear Fractional Stable Motion with the rlfsm R Package.” The R Journal, 12(1), 386–405. doi:10.32614/RJ-2020-008.
<NAME>, T<NAME> (2004). “Simulation methods for linear fractional stable motion and FARIMA using the Fast Fourier Transform.” Fractals, 95(1), 95-121. https://doi.org/10.1142/S0218348X04002379.

See Also
paths simulates a number of lfsm sample paths.
Examples
# Path generation
m<-256; M<-600; N<-2^10-M
alpha<-1.8; H<-0.8; sigma<-0.3
seed=2
List<-path(N=N,m=m,M=M,alpha=alpha,H=H,
           sigma=sigma,freq='L',disable_X=FALSE,seed=3)

# Normalized paths
Norm_lfsm<-List[['lfsm']]/max(abs(List[['lfsm']]))
Norm_oLm<-List[['levy_motion']]/max(abs(List[['levy_motion']]))

# Visualization of the paths
plot(Norm_lfsm, col=2, type="l", ylab="coordinate")
lines(Norm_oLm, col=3)
leg.txt <- c("lfsm", "oLm")
legend("topright",legend = leg.txt, col =c(2,3), pch=1)

# Creating Levy motion
levyIncrems<-path(N=N, m=m, M=M, alpha, H, sigma, freq='L',
                  disable_X=TRUE, levy_increments=NULL, seed=seed)

# Creating lfsm based on the Levy motion
lfsm_full<-path(m=m, M=M, alpha=alpha, H=H,
                sigma=sigma, freq='L', disable_X=FALSE,
                levy_increments=levyIncrems$levy_increments, seed=seed)
sum(levyIncrems$levy_increments==
    lfsm_full$levy_increments)==length(lfsm_full$levy_increments)

paths          Generator of a set of lfsm paths.

Description
It is essentially a wrapper for the path generator, which exploits the latter to create a matrix with paths in its columns.

Usage
paths(N_var, parallel, seed_list = rep(x = NULL, times = N_var), ...)

Arguments
N_var   number of lfsm paths to generate
parallel   a TRUE/FALSE flag which determines if the paths will be created in parallel or sequentially
seed_list   a numerical vector of seeds to pass to path
...   arguments to pass to path

See Also
path

Examples
m<-45; M<-60; N<-2^10-M
alpha<-1.8; H<-0.8; sigma<-0.3
freq='L'
r=1; k=2; p=0.4
Y<-paths(N_var=10,parallel=TRUE,N=N,m=m,M=M,
         alpha=alpha,H=H,sigma=sigma,freq='L',
         disable_X=FALSE,levy_increments=NULL)
Hs<-apply(Y,MARGIN=2,H_hat,p=p,k=k)
hist(Hs)

Path_array          Path array generator

Description
The function takes a list of parameters (alpha, H) and uses expand.grid to obtain all possible combinations of them. Based on each combination, the function simulates an lfsm sample path. It is meant to be used in conjunction with function Plot_list_paths.
Usage
Path_array(N, m, M, l, sigma)

Arguments
N   a number of points of the lfsm.
m   discretization. A number of points between two nearby motion points
M   truncation parameter. A number of points at which the integral representing the definition of lfsm is calculated. So, after M points back we consider the rest of the integral to be 0.
l   a list of parameters to expand
sigma   Scale parameter of lfsm

Value
The returned value is a data frame containing paths and the corresponding values of alpha, H and frequency.

Examples
l=list(H=c(0.2,0.8),alpha=c(1,1.8), freq="H")
arr<-Path_array(N=300,m=30,M=100,l=l,sigma=0.3)
str(arr)
head(arr)

phi          Phi

Description
Defined as

$$\varphi_{high}(t; H, k)_n := V_{high}(\psi_t; k)_n \quad \text{and} \quad \varphi_{low}(t; k)_n := V_{low}(\psi_t; k)_n,$$

where $\psi_t(x) := \cos(tx)$.

Usage
phi(t, k, path, H, freq)

Arguments
t   positive real number
k   increment order
path   sample path of lfsm on which the inference is to be performed
H   Hurst parameter
freq   Frequency of the motion. It can take two values: "H" for the high frequency and "L" for the low frequency setting.

Details
The Hurst parameter is required only in the high frequency case. In the low frequency case, there is no need to assign H a value because it will not be evaluated.

References
<NAME>, <NAME>, <NAME> (2020). “Estimation of the linear fractional stable motion.” Bernoulli, 26(1), 226–252. https://doi.org/10.3150/19-BEJ1124.

phi_of_alpha          Inverse alpha estimator

Description
A function from the general estimation procedure, defined as $m^{p}_{-p',k} / m^{p'}_{-p,k}$, originally proposed in [13].

Usage
phi_of_alpha(p, p_prime, alpha)

Arguments
p   power
p_prime   power
alpha   self-similarity parameter of the alpha-stable random motion.

References
<NAME>, <NAME>, <NAME> (2020). “Estimation of the linear fractional stable motion.” Bernoulli, 26(1), 226–252. https://doi.org/10.3150/19-BEJ1124.

Plot_dens          (alpha,H,sigma)-density plot

Description
Plots the densities of the parameters (alpha,H,sigma) estimated in a Monte-Carlo experiment.
Works in conjunction with the MCestimLFSM function.

Usage
Plot_dens(par_vec = c("alpha", "H", "sigma"), MC_data, Nnorm = 1e+07)

Arguments
par_vec   vector of parameters which are to be plotted
MC_data   a list created by MCestimLFSM
Nnorm   number of points sampled from the standard normal distribution

See Also
Plot_vb to plot variance- and bias dependencies on n.

Examples
m<-45; M<-60
p<-.4; p_prime<-.2
t1<-1; t2<-2; k<-2
NmonteC<-5e2
S<-c(1e3,1e4)
alpha<-.8; H<-0.8; sigma<-0.3
theor_4_1_clt_new<-MCestimLFSM(s=S,fr='L',Nmc=NmonteC, m=m,M=M,
                               alpha=alpha,H=H,sigma=sigma,
                               GenLowEstim,t1=t1,t2=t2,p=p)
l_plot<-Plot_dens(par_vec=c('sigma','alpha','H'),
                  MC_data=theor_4_1_clt_new, Nnorm=1e7)
l_plot

Plot_list_paths          Rendering of path lattice

Description
Rendering of path lattice.

Usage
Plot_list_paths(arr)

Arguments
arr   a data frame produced by Path_array.

Examples
l=list(H=c(0.2,0.8),alpha=c(1,1.8), freq="H")
arr<-Path_array(N=300,m=30,M=100,l=l,sigma=0.3)
p<-Plot_list_paths(arr)
p

Plot_vb          A function to plot variance- and bias dependencies of estimators on the lengths of sample paths. Works in conjunction with the MCestimLFSM function.

Description
A function to plot variance- and bias dependencies of estimators on the lengths of sample paths. Works in conjunction with the MCestimLFSM function.

Usage
Plot_vb(data)

Arguments
data   a list created by MCestimLFSM

Value
The function returns a ggplot2 graph.
See Also
Plot_dens

Examples
# Light-weight computations
m<-25; M<-50
alpha<-1.8; H<-0.8; sigma<-0.3
S<-c(1:3)*1e2
p<-.4; p_prime<-.2; t1<-1; t2<-2
k<-2; NmonteC<-50

# Here is the continuous H-1/alpha inference procedure
theor_3_1_H_clt<-MCestimLFSM(s=S,fr='H',Nmc=NmonteC,
                             m=m,M=M,alpha=alpha,H=H,
                             sigma=sigma,ContinEstim,
                             t1=t1,t2=t2,p=p,k=k)
Plot_vb(theor_3_1_H_clt)

# More demanding example (it is better to use a multicore setup)
# General low frequency inference
m<-45; M<-60
alpha<-0.8; H<-0.8; sigma<-0.3
S<-c(1:15)*1e2
p<-.4; t1<-1; t2<-2
NmonteC<-50

# Here is the general low frequency inference procedure
theor_4_1_H_clt<-MCestimLFSM(s=S,fr='H',Nmc=NmonteC,
                             m=m,M=M,alpha=alpha,H=H,
                             sigma=sigma,GenLowEstim,
                             t1=t1,t2=t2,p=p)
Plot_vb(theor_4_1_H_clt)

Retrieve_stats          Retrieve statistics (bias, variance) of estimators based on a set of paths

Description
Retrieve statistics (bias, variance) of estimators based on a set of paths.

Usage
Retrieve_stats(paths, true_val, Est, ...)

Arguments
paths   real-valued matrix representing sample paths of the stochastic process being studied
true_val   true value of the estimated parameter
Est   estimator (i.e. H_hat)
...   parameters to pass to Est

Examples
m<-45; M<-60; N<-2^10-M
alpha<-1.8; H<-0.8; sigma<-0.3
freq='L'; t1=1; t2=2
r=1; k=2; p=0.4
Y<-paths(N_var=10,parallel=TRUE,N=N,m=m,M=M,
         alpha=alpha,H=H,sigma=sigma,freq='L',
         disable_X=FALSE,levy_increments=NULL)
Retrieve_stats(paths=Y,true_val=sigma,Est=sigma_hat,t1=t1,k=2,alpha=alpha,H=H,freq="L")

R_hl          R high/low

Description
Defined as

$$R_{high}(p,k)_n := \frac{\sum_{i=2k}^{n} |\Delta^{n,2}_{i,k} X|^p}{\sum_{i=2k}^{n} |\Delta^{n,1}_{i,k} X|^p}, \qquad R_{low}(p,k)_n := \frac{\sum_{i=2k}^{n} |\Delta^{2}_{i,k} X|^p}{\sum_{i=2k}^{n} |\Delta^{1}_{i,k} X|^p}$$

Usage
R_hl(p, k, path)

Arguments
p   power
k   increment order
path   sample path of lfsm on which the inference is to be performed

Details
The computation procedure for the high- and low frequency cases is the same, since there is no way to control the frequency given a sample path.

References
<NAME>, <NAME>, <NAME> (2020). “Estimation of the linear fractional stable motion.” Bernoulli, 26(1), 226–252.
https://doi.org/10.3150/19-BEJ1124.

Examples
m<-45; M<-60; N<-2^10-M
alpha<-0.8; H<-0.8; sigma<-0.3
p<-0.3; k=3
lfsm<-path(N=N,m=m,M=M,alpha=alpha,H=H,
           sigma=sigma,freq='L',disable_X=FALSE,seed=3)$lfsm
R_hl(p=p,k=k,path=lfsm)

sf          Statistic V

Description
Statistic of the form

$$V_{high}(f; k, r)_n := \frac{1}{n}\sum_{i=rk}^{n} f\left(n^{H}\,\Delta^{n,r}_{i,k} X\right), \qquad V_{low}(f; k, r)_n := \frac{1}{n}\sum_{i=rk}^{n} f\left(\Delta^{r}_{i,k} X\right)$$

Usage
sf(path, f, k, r, H, freq, ...)

Arguments
path   sample path for which the statistic is to be calculated.
f   function applied to high order increments.
k   order of the increments.
r   step of high order increments.
H   Hurst parameter.
freq   frequency.
...   parameters to pass to function f

Details
The Hurst parameter is required only in the high frequency case. In the low frequency case, there is no need to assign H a value because it will not be evaluated.

References
<NAME>, <NAME>, <NAME> (2020). “Estimation of the linear fractional stable motion.” Bernoulli, 26(1), 226–252. https://doi.org/10.3150/19-BEJ1124.

See Also
phi computes the V statistic with f(.)=cos(t.)

Examples
m<-45; M<-60; N<-2^10-M
alpha<-1.8; H<-0.8; sigma<-0.3
freq='L'
r=1; k=2; p=0.4
S<-(1:20)*100
path_lfsm<-function(...){
  List<-path(...)
  List$lfsm
}
Pths<-lapply(X=S,FUN=path_lfsm,
             m=m, M=M, alpha=alpha, sigma=sigma, H=H,
             freq=freq, disable_X = FALSE,
             levy_increments = NULL, seed = NULL)
f_phi<-function(t,x) cos(t*x)
f_pow<-function(x,p) (abs(x))^p
V_cos<-sapply(Pths,FUN=sf,f=f_phi,k=k,r=r,H=H,freq=freq,t=1)
ex<-exp(-(abs(sigma*Norm_alpha(h_kr,alpha=alpha,k=k,r=r,H=H,l=0)$result)^alpha))

# Illustration of the law of large numbers for phi:
plot(y=V_cos, x=S, ylim = c(0,max(V_cos)+0.1))
abline(h=ex, col='brown')

# Illustration of the law of large numbers for power functions:
Mpk<-m_pk(k=k, p=p, alpha=alpha, H=H, sigma=sigma)
sf_mod<-function(Xpath,...) {
  Path<-unlist(Xpath)
  sf(path=Path,...)
}
V_pow<-sapply(Pths,FUN=sf_mod,f=f_pow,k=k,r=r,H=H,freq=freq,p=p)
plot(y=V_pow, x=S, ylim = c(0,max(V_pow)+0.1))
abline(h=Mpk, col='brown')

sigma_hat          Statistical estimator for sigma

Description
Statistical estimator for sigma.

Usage
sigma_hat(t1, k, path, alpha, H, freq)

Arguments
t1   real number such that t1 > 0
k   increment order
path   sample path of lfsm on which the inference is to be performed
alpha   self-similarity parameter of the alpha-stable random motion.
H   Hurst parameter
freq   Frequency of the motion. It can take two values: "H" for the high frequency and "L" for the low frequency setting.

Examples
m<-45; M<-60; N<-2^14-M
alpha<-1.8; H<-0.8; sigma<-0.3
freq='H'
r=1; k=2; p=0.4; t1=1; t2=2

# Reproducing the work of ContinEstim
# in the high frequency case
lfsm<-path(N=N,m=m,M=M,alpha=alpha,H=H,
           sigma=sigma,freq='L',disable_X=FALSE,seed=1)$lfsm
H_est<-H_hat(p=p,k=k,path=lfsm)
H_est
alpha_est<-alpha_hat(t1=t1,t2=t2,k=k,path=lfsm,H=H_est,freq=freq)
alpha_est
sigma_est<-tryCatch(
  sigma_hat(t1=t1,k=k,path=lfsm,
            alpha=alpha_est,H=H_est,freq=freq),
  error=function(c) 'Impossible to compute sigma_est')
sigma_est

theta          Function theta

Description
Function of the form

$$\theta(g,h)_p = a_p^{-2} \int_{\mathbb R^2} |xy|^{-1-p}\, U_{g,h}(x,y)\,dx\,dy$$

Usage
theta(p, alpha, sigma, g, h)

Arguments
p   power, a real number from (-1,1)
alpha   self-similarity parameter of the alpha-stable random motion.
sigma   Scale parameter of lfsm
g, h   functions $g,h: \mathbb R \to \mathbb R$ with finite alpha-norm (see Norm_alpha).

References
<NAME>, <NAME>, <NAME> (2020). “Estimation of the linear fractional stable motion.” Bernoulli, 26(1), 226–252. https://doi.org/10.3150/19-BEJ1124.

U_g          alpha-norm of u*g

Description
alpha-norm of u*g.

Usage
U_g(g, u, ...)

Arguments
g   function $g: \mathbb R \to \mathbb R$ with finite alpha-norm (see Norm_alpha).
u   real number
...   additional parameters to pass to Norm_alpha

Examples
g<-function(x) exp(-x^2)
g<-function(x) exp(-abs(x))
U_g(g=g,u=4,alpha=1.7)

U_gh          alpha-norm of u*g + v*h.

Description
alpha-norm of u*g + v*h.

Usage
U_gh(g, h, u, v, ...)
Arguments

g, h    functions g, h: R -> R with finite alpha-norm (see Norm_alpha).
v, u    real numbers.
...     additional parameters to pass to Norm_alpha.

Examples

g<-function(x) exp(-x^2)
h<-function(x) exp(-abs(x))
U_gh(g=g, h=h, u=4, v=3, alpha=1.7)

U_ghuv                      A dependence structure of 2 random variables

Description

It is used when random variables do not have finite second moments, and
thus the covariance matrix is not defined. For X = ∫ g_s dL_s and
Y = ∫ h_s dL_s with ‖g‖_α, ‖h‖_α < ∞, the measure of dependence is given
by U_{g,h}: R^2 -> R via

    U_{g,h}(u, v) = exp(−σ^α ‖ug + vh‖_α^α) − exp(−σ^α (‖ug‖_α^α + ‖vh‖_α^α))

Usage

U_ghuv(alpha, sigma, g, h, u, v, ...)

Arguments

alpha   self-similarity parameter of alpha stable random motion.
sigma   scale parameter of lfsm.
g, h    functions g, h: R -> R with finite alpha-norm (see Norm_alpha).
v, u    real numbers.
...     additional parameters to pass to U_gh and U_g.

Examples

g<-function(x) exp(-x^2)
h<-function(x) exp(-abs(x))
U_ghuv(alpha=1.5, sigma=1, g=g, h=h, u=10, v=15,
       rel.tol=.Machine$double.eps^0.25, abs.tol=1e-11)
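The statistic V_low described above is just an average of a function of higher-order increments of a sample path. A minimal pure-Python sketch of the idea (an illustration only, not the package's implementation; a Gaussian random walk stands in for an lfsm sample path, and f(x) = |x|^p is the power function used in the package examples):

```python
import math
import random

def increments(path, k, r):
    """k-th order increments with step r: apply the step-r difference k times."""
    x = list(path)
    for _ in range(k):
        x = [x[i] - x[i - r] for i in range(r, len(x))]
    return x

def v_low(path, f, k, r):
    """V_low(f; k, r)_n: average f over the k-th order increments of the path."""
    n = len(path)
    return sum(f(d) for d in increments(path, k, r)) / n

random.seed(0)
# toy path: random walk with Gaussian steps (stand-in for an lfsm path)
path = [0.0]
for _ in range(1000):
    path.append(path[-1] + random.gauss(0.0, 1.0))

p = 0.4
stat = v_low(path, lambda x: abs(x) ** p, k=2, r=1)
print(stat)
```

As the path length grows, this average stabilizes, which is the law-of-large-numbers behavior that the package's `sf` examples illustrate with `m_pk`.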
tagcloud
cran
R
Package ‘tagcloud’                                    October 14, 2022

Type Package
Title Tag Clouds
Version 0.6
Date 2015-07-02
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Generating Tag and Word Clouds.
License GPL (>= 2)
LazyLoad yes
Depends Rcpp (>= 0.9.4)
Imports RColorBrewer
Suggests extrafont, knitr
VignetteBuilder knitr
LinkingTo Rcpp
URL http://logfc.wordpress.com
NeedsCompilation yes
Repository CRAN
Date/Publication 2015-07-03 11:17:02

R topics documented:

    editor.tagcloud
    gambia
    plot.tagcloud
    smoothPalette
    strmultline

editor.tagcloud             Simple interactive editing of tag clouds

Description

A minimalistic editor for objects of the tagcloud class.

Usage

editor.tagcloud(boxes)

Arguments

boxes   An object of the tagcloud class, returned by the tagcloud function.

Details

tagcloud provides a minimalistic editor for tag clouds produced by the
tagcloud function. After editor.tagcloud is called, the tag cloud is
plotted. The first click selects the tag to be moved; the second click
moves the tag such that its lower left corner is at the position indicated
by the mouse. Right-clicking terminates the program.

Value

An object of the tagcloud class with the modified positions of the tags.

Author(s)

<NAME> <<EMAIL>>

See Also

tagcloud

Examples

## Not run:
data( gambia )
terms <- gambia$Term
boxes <- tagcloud( terms )
boxes <- editor.tagcloud( boxes )
## End(Not run)

gambia                      Results of GO enrichment analysis in TB

Description

A data.frame object containing the results of a GO enrichment analysis
from the GOstats package.

Format

A data frame with 318 observations on the following 9 variables.
GOBPID     Gene Ontology (GO) biological process (BP) identifier
Pvalue     P value from enrichment test
OddsRatio  Measure of enrichment
ExpCount   expected number of genes in the enriched partition which map to
           this GO term
Count      number of genes in the enriched partition which map to this GO
           term
Size       number of genes within this GO Term
Term       Gene Ontology term description

Details

The data results from a microarray analysis of the whole blood
transcriptome of tuberculosis (TB) patients compared to healthy
individuals. Genes were sorted by their p-value and analysed using the
GOstats package. Significantly enriched GO terms are included in this data
frame.

Source

<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., et al. (2011) Functional
Correlations of Pathogenesis-Driven Gene Expression Signatures in
Tuberculosis. PLoS ONE 6(10): e26938. doi:10.1371/journal.pone.0026938

Examples

data(gambia)
tagcloud( gambia$Term, -log( gambia$Pvalue ) )

plot.tagcloud               Tag and Word Clouds

Description

Functions to create and display plots called tag clouds, word clouds or
weighted lists, in which a usually large number of words is displayed in
size correlated with a numerical value (for example, frequency in a text
or enrichment of a GO term). This makes it easier to visualize the
prominence of certain tags or words. Also, it looks nice.

Usage

## S3 method for class 'tagcloud'
plot(x, family = NULL, add = FALSE, with.box = FALSE, col = NULL,
     sel = NULL, ...)

## S3 method for class 'tagcloud'
summary(object, ...)

tagcloud(tags, weights = 1, algorithm = "oval", scale = "auto",
         scale.multiplier = 1, order = "size", sel = NULL, wmin = NULL,
         wmax = NULL, floor = 1, ceiling = 3, family = NULL, col = NULL,
         fvert = 0, plot = TRUE, add = FALSE)

Arguments

x,object    An object of the type produced by tagcloud.
family      Font family to use, a vector containing font families to use
            for each tag.
            For the tagcloud function, the special keyword "random" can be
            used to assign random families (requires the extrafont
            package).
add         If TRUE, the tags will be added to the current plot instead of
            creating a new plot.
with.box    If TRUE, a rectangle will be plotted around each tag.
col         Color or a vector containing colors to use for drawing the
            tags.
sel         An integer or boolean vector indicating which terms from the
            provided list will be plotted. The vectors col and weights
            will be filtered accordingly.
tags        A character vector containing words or tags to be shown on the
            plot.
weights     A numeric vector giving the relative proportions of text size
            corresponding to the given tag.
algorithm   Name of the algorithm to use. Can be "oval", "fill", "random",
            "snake", "list" or "clist". See Details.
scale       If "auto", text expansion will be calculated automatically to
            fit the available space. Otherwise, a numeric value used to
            modify the calculated text sizes; tune scale to achieve a
            better fit.
scale.multiplier
            Multiplier for the final calculated text expansion parameter.
            Increase if there is too much empty space around the tag
            cloud; decrease if the tags go over the plot boundaries.
order       Determines in which order the tags will be drawn. Can be
            "size", "keep", "random", "height" or "width". See Details.
wmin        All items in the weights vector smaller than wmin will be
            changed to wmin.
wmax        All items in the weights vector larger than wmax will be
            changed to wmax.
floor       Minimal text size. See Details.
ceiling     Maximal text size. See Details.
fvert       Fraction of tags which will be rotated by 90 degrees
            counterclockwise.
plot        If FALSE, no plot will be produced.
...         Further arguments to be passed to downstream methods.

Details

The package tagcloud creates and plots tag clouds (word clouds). The
algorithms in the package have been designed specifically with long tags
(such as GO Term descriptions) in mind.
Term ordering: The available arguments are as follows:

• size – tags are ordered by size, that is, their effective width
  multiplied by their effective height. Default.
• keep – keep the order from the list of words provided.
• random – randomize the tag list.
• width – order by effective screen width.
• height – order by effective screen height.

By default, prior to plotting terms are ordered by size.

Algorithms: There are six algorithms for placing tags on the plot
implemented in tagcloud.

• oval – creates an oval cloud.
• fill – an attempt will be made to fill the available space.
• random – randomly distribute tags over the available space. This
  algorithm is slow and not very effective.
• snake – tags are placed clockwise around the first tag to plot.
• list – create a list, one tag directly beneath another, justified left.
• clist – create a list, one tag directly beneath another, centered.

Algorithms oval, fill and random attempt to fill the available space by
adjusting the scaling factor for the font sizes.

Calculation of tag sizes: Placing tags such that the empty space between
the tags is minimized poses a non-trivial problem, because the effective
bounding box of a displayed text is not linearly dependent on the cex
parameter. In tagcloud, first a cex parameter is calculated for each tag
separately, based on the parameters floor, ceiling and the vector of
weights. Note that all weights smaller than wmin are replaced by wmin and
all weights larger than wmax are replaced by wmax. Then, effective heights
and widths of the tags to be displayed are calculated using the strwidth
and strheight functions. Unless the argument scale is different from
"auto", a scaling parameter for cex is automatically calculated based on
the current area of the tags and the available plotting area. This usually
results in a reasonable plot, but guarantees neither that all of the
available space will be occupied, nor that no tag will cross the viewport.
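The first step of the size calculation (clip weights to [wmin, wmax], then map them onto the [floor, ceiling] cex range) can be sketched outside R. The following Python snippet is an illustration of that mapping under the assumption of a linear scale, not the package's code:

```python
def tag_cex(weights, floor=1.0, ceiling=3.0, wmin=None, wmax=None):
    """Map tag weights to text-expansion factors: clip each weight to
    [wmin, wmax], then scale the clipped range linearly onto [floor, ceiling]."""
    if wmin is None:
        wmin = min(weights)
    if wmax is None:
        wmax = max(weights)
    clipped = [min(max(w, wmin), wmax) for w in weights]
    span = wmax - wmin
    if span == 0:  # all weights equal: every tag gets the smallest size
        return [floor for _ in clipped]
    return [floor + (w - wmin) / span * (ceiling - floor) for w in clipped]

print(tag_cex([1, 2, 3]))  # [1.0, 2.0, 3.0]
```

The second step in the package, measuring the resulting bounding boxes with strwidth/strheight and rescaling to the plot area, depends on the graphics device and is not reproduced here.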
Value

tagcloud returns an object of the tagcloud class, which really is a data
frame with the following columns:

• tags – the tags, words or phrases shown on the plot
• weights – a numeric vector that is used to calculate the size of the
  plotted tags
• family – name of the font family to be used in plotting
• vertical – whether the tag should be rotated by 90 degrees
  counter-clockwise
• x,y – coordinates of the lower left corner of the tag's bounding box
• w,h – width and height of the bounding box
• cex – text expansion factor, see par
• s – surface of the tag (width x height)

The object of the tagcloud class can be manipulated using editor.tagcloud
and displayed using the plot, print and summary functions.

Note

Care should be taken when using extra fonts loaded by the extrafont
package; not all fonts can be easily copied to a PDF file.

Some ideas in this package are based on the ‘wordcloud’ package by Ian
Fellows.

Author(s)

<NAME> <<EMAIL>>

See Also

editor.tagcloud – interactive editing of tagcloud objects.
strmultline – splitting multi-word sentences into lines for a better
cloud display.
smoothPalette – mapping values onto a color gradient.
Examples

# a rather boring tag cloud
data( gambia )
terms <- gambia$Term
tagcloud( terms )

# tag cloud with weights relative to P value
# colors relative to odds ratio, from light
# grey to black
weights <- -log( gambia$Pvalue )
colors <- smoothPalette( gambia$OddsRatio, max=4 )
tagcloud( terms, weights, col= colors, algorithm= "oval" )

# tag cloud filling the whole plot
tagcloud( terms, weights, col= colors, algorithm= "fill" )

# just a list of only the first ten terms
tagcloud( terms, weights, sel= 1:10,
          col= colors, algorithm= "list", order= "width" )

# oval, with line breaks in terms
terms <- strmultline( gambia$Term )
tagcloud( terms, weights, col= colors, algorithm= "oval" )

## Not run:
# shows available font families, scaled according to
# the total disk space occupied by the fonts
require( extrafont )
ft <- fonttable()
fi <- file.info( fonttable()$fontfile )
families <- unique( ft$FamilyName )
sizes <- sapply( families, function( x ) sum( fi[ ft$FamilyName == x, "size" ] ) )
tagcloud( families, sizes, family= families )
## End(Not run)

smoothPalette               Replace a vector of numbers by a gradient of colors

Description

Replace a vector of numbers by a vector of colors from a palette, such
that values correspond to the colors on a smooth gradient.

Usage

smoothPalette(x, pal = NULL, max = NULL, min = NULL, n = 9,
              palfunc = NULL, na.color = "white")

Arguments

x       A numeric vector.
pal     Character vector containing the color gradient onto which the
        numeric vector x will be mapped. By default, a gradient from white
        to black is generated. If it is a single character value, it will
        be treated as the name of an RColorBrewer palette (see brewer.pal).
max       Values of x larger than max will be replaced by max.
min       Values of x smaller than min will be replaced by min.
n         Number of steps.
palfunc   Palette function returned by colorRampPalette.
na.color  NA values will be replaced by that color.

Details

This function is used to map a continuous numerical vector onto an
ordinal character vector, especially a vector of colors. The color palette
can be specified using an RColorBrewer palette name.

Value

A character vector of the same length as the numeric vector x, containing
the matching colors.

Author(s)

<NAME> <<EMAIL>>

See Also

tagcloud

Examples

smoothPalette( 1:3 )
# will print:
# "#CCCCCC" "#666666" "#000000"
smoothPalette( 1:3, pal= "Blues" )
# will produce:
# "#F7FBFF" "#6BAED6" "#08306B"
x <- runif( 100 )
plot( 1:100, x, col= smoothPalette( x, pal= "BrBG" ), pch= 19 )

strmultline                 Replace some spaces in multi-word sentences by newlines

Description

Replace a space character by a newline in a multi-word sentence to get a
better height / width ratio.

Usage

strmultline(strings, ratio = 0.2)

Arguments

strings  a character vector containing the multi-word sentences to be
         split.
ratio    the desired height / width ratio.

Details

Very long tags, for example GO Term descriptions, make a bad tag cloud.
strmultline tries to chop up such a long sentence into multiple (currently
two) lines, to get a better height / width ratio.

Value

A character vector containing the modified sentences.

Author(s)

<NAME> <<EMAIL>>

See Also

tagcloud
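The value-to-color mapping that smoothPalette performs can be mimicked with a small stand-alone sketch. This is an illustration of the idea only (clip values to [min, max], then index into a discrete gradient), not the package's implementation:

```python
def smooth_palette(values, pal, vmin=None, vmax=None):
    """Map each numeric value onto a discrete color gradient: values are
    clipped to [vmin, vmax] and matched to the nearest gradient step."""
    if vmin is None:
        vmin = min(values)
    if vmax is None:
        vmax = max(values)
    out = []
    for v in values:
        v = min(max(v, vmin), vmax)
        if vmax == vmin:
            idx = 0
        else:
            # scale into [0, len(pal) - 1] and round to the nearest step
            idx = round((v - vmin) / (vmax - vmin) * (len(pal) - 1))
        out.append(pal[idx])
    return out

gradient = ["#CCCCCC", "#666666", "#000000"]  # light grey to black
print(smooth_palette([1, 2, 3], gradient))
# ['#CCCCCC', '#666666', '#000000']  -- matches the smoothPalette(1:3) example
```

The real function additionally builds the gradient itself (via colorRampPalette or an RColorBrewer palette name) and substitutes na.color for missing values.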
twiggy
readthedoc
Python
Date: 2010-03-28

This part describes how user code can log messages with twiggy. To get started quickly, use `quick_setup()`:

```
>>> import twiggy
>>> twiggy.quick_setup()
```

See also Full details on Configuring Output.

## The Magic log¶

The main interface is the magic `log`.

```
>>> from twiggy import log
>>> log
<twiggy.logger.Logger object at 0x...>
```

It works out of the box, using typical `levels`. Arbitrary levels are not supported. Note that when logging, you never need to refer to any level object; just use the methods on the log.

```
>>> log.debug('You may not care')
DEBUG|You may not care
>>> log.error('OMFG! Pants on fire!')
ERROR|OMFG! Pants on fire!
```

The log can handle messages in several styles of format strings, defaulting to new-style.

```
>>> log.info('I wear {0} on my {where}', 'pants', where='legs')
INFO|I wear pants on my legs
```

You can name your loggers.

```
>>> mylog = log.name('alfredo')
>>> mylog.debug('hello')
DEBUG:alfredo|hello
```

## Better output¶

Twiggy's default output strives to be user-friendly and to avoid pet peeves. Newlines are suppressed by default; that can be turned off per-message.

```
>>> log.info('user\ninput\nannoys\nus')
INFO|user\ninput\nannoys\nus
>>> log.options(suppress_newlines=False).info('we\ndeal')
INFO|we
deal
```

Exceptions are prefixed by `TRACE`. By default, `tracing` will use the current exception, but you can also pass an exc_info tuple.

```
>>> try:
...     1/0
... except:
...     log.trace('error').warning('oh noes')
WARNING|oh noes
TRACE Traceback (most recent call last):
TRACE   File "<doctest better-output[...]>", line 2, in <module>
TRACE ZeroDivisionError: integer division or modulo by zero
```

## Structured Logging¶

I like this method chaining style a lot.

```
>>> log.name('benito').info('hi there')
INFO:benito|hi there
```

It makes structured logging easy. In the past, fielded data was stuffed in the text of your message:

```
>>> log.info('Going for a walk.
path: {0} roads: {1}', "less traveled", 42)
INFO|Going for a walk. path: less traveled roads: 42
```

Instead, you can use `fields()` to add arbitrary key-value pairs. Output is easily parseable.

```
>>> log.fields(path="less traveled", roads=42).info('Going for a walk')
INFO:path=less traveled:roads=42|Going for a walk
```

The `struct()` method is a shortcut for only logging fields. This is great for runtime statistics gathering.

```
>>> log.struct(paths=42, dolphins='thankful')
INFO:dolphins=thankful:paths=42|
```

## Partial Binding¶

Each call to `fields()` or `options()` creates a new, independent log instance that inherits all of the data of the parent. This incremental binding can be useful for webapps.

```
>>> ## an application-level log
... webapp_log = log.name("myblog")
>>> ## a log for the individual request
... current_request_log = webapp_log.fields(request_id='12345')
>>> current_request_log.fields(rows=100, user='frank').info('frobnicating database')
INFO:myblog:request_id=12345:rows=100:user=frank|frobnicating database
>>> current_request_log.fields(bytes=5678).info('sending page over tubes')
INFO:myblog:bytes=5678:request_id=12345|sending page over tubes
>>> ## a log for a different request
... another_log = webapp_log.fields(request_id='67890')
>>> another_log.debug('Client connected')
DEBUG:myblog:request_id=67890|Client connected
```

Chained style is awesome. It allows you to create complex yet parsable log messages in a concise way.

```
>>> log.name('donjuan').fields(pants='sexy').info("hello, {who} want to {what}?", who='ladies', what='dance')
INFO:donjuan:pants=sexy|hello, ladies want to dance?
```

## Sample Output¶

Routed to a `file`, the above produces the following:

```
2010-03-28T14:23:34Z:DEBUG:You may not care
2010-03-28T14:23:34Z:ERROR:OMFG! Pants on fire!
2010-03-28T14:23:34Z:INFO:I like bikes
2010-03-28T14:23:34Z:INFO:I wear pants on my legs
2010-03-28T14:23:34Z:DEBUG:alfredo:hello
2010-03-28T14:23:34Z:INFO:user\ninput\nannoys\nus
2010-03-28T14:23:34Z:INFO:we
deal
2010-03-28T14:23:34Z:WARNING:oh noes
TRACE Traceback (most recent call last):
TRACE   File "<doctest better-output[...]>", line 35, in <module>
TRACE ZeroDivisionError: integer division or modulo by zero
2010-03-28T14:23:34Z:INFO:benito:hi there
2010-03-28T14:23:34Z:INFO:Going for a walk. path: less traveled roads: 42
2010-03-28T14:23:34Z:INFO:path=less traveled:roads=42:Going for a walk
2010-03-28T14:23:34Z:INFO:dolphins=thankful:paths=42:
2010-03-28T14:23:34Z:INFO:myblog:request_id=12345:rows=100:user=frank:frobnicating database
2010-03-28T14:23:34Z:INFO:myblog:bytes=5678:request_id=12345:sending page over tubes
2010-03-28T14:23:34Z:DEBUG:myblog:request_id=67890:Client connected
2010-03-28T14:23:34Z:INFO:donjuan:pants=sexy:hello, ladies want to dance?
```

This part discusses how to configure twiggy's output of messages. You should do this once, near the start of your application's `__main__`.

## Quick Setup¶

To quickly configure output, use the `quick_setup` function. Quick setup is limited to sending all messages to a file or `sys.stderr`. A timestamp will be prefixed when logging to a file.

* `twiggy.` `quick_setup` (min_level=<LogLevel DEBUG>, file=None, msg_buffer=0) *

## twiggy_setup.py¶

Twiggy's output side features a modern, loosely coupled design. By convention, your configuration lives in a file in your application called `twiggy_setup.py`, in a function called `twiggy_setup()`.
You can of course put your configuration elsewhere, but using a separate module makes integration with configuration management systems easy. You should import and run `twiggy_setup` near the top of your application. It's particularly important to set up Twiggy before spawning new processes.

A `twiggy_setup` function should create outputs and use the `add_emitters()` convenience function to link those outputs to the `log`.

```
from twiggy import add_emitters, outputs, levels, filters, formats, emitters # import * is also ok

def twiggy_setup():
    alice_output = outputs.FileOutput("alice.log", format=formats.line_format)
    bob_output = outputs.FileOutput("bob.log", format=formats.line_format)

    add_emitters(
        # (name, min_level, filter, output),
        ("alice", levels.DEBUG, None, alice_output),
        ("betty", levels.INFO, filters.names("betty"), bob_output),
        ("brian.*", levels.DEBUG, filters.glob_names("brian.*"), bob_output),
    )

# near the top of your __main__
twiggy_setup()
```

`add_emitters()` populates the `emitters` dictionary:

```
>>> sorted(emitters.keys())
['alice', 'betty', 'brian.*']
```

In this example, we create two log destinations: `alice.log` and `bob.log`. alice will receive all messages, and bob will receive two sets of messages:

* messages with the name field equal to `betty` and level >= `INFO`
* messages with the name field glob-matching `brian.*`

`Emitters` can be removed by deleting them from this dict. `filter` and `min_level` may be modified during the running of the application, but outputs cannot be changed. Instead, remove the emitter and re-add it.

```
>>> # bump level
... emitters['alice'].min_level = levels.WARNING
>>> # change filter
... emitters['alice'].filter = filters.names('alice', 'andy')
>>> # remove entirely
... del emitters['alice']
```

We'll examine the various parts in more detail. Outputs are the destinations to which log messages are written (files, databases, etc.). Several `implementations` are provided.
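The routing idea behind emitters (each one pairs a minimum level and a name filter with a single output) can be sketched in plain Python without twiggy installed. This is an illustration of the mechanism only, not twiggy's actual classes; plain lists stand in for outputs:

```python
DEBUG, INFO, WARNING = 1, 2, 3  # stand-in level values

class Emitter:
    """Route a message to an output if it passes a minimum level and a name filter."""
    def __init__(self, min_level, name_filter, output):
        self.min_level = min_level
        self.filter = name_filter  # None means "accept every name"
        self.output = output       # here: a plain list collecting lines

    def emit(self, name, level, text):
        if level < self.min_level:
            return
        if self.filter is not None and not self.filter(name):
            return
        self.output.append(f"{name}|{text}")

alice_log, bob_log = [], []
emitters = {
    "alice": Emitter(DEBUG, None, alice_log),
    "betty": Emitter(INFO, lambda n: n == "betty", bob_log),
}

# every emitter sees every message; each decides independently whether to write
for em in emitters.values():
    em.emit("betty", INFO, "hello")
    em.emit("carol", DEBUG, "ignored by bob")

print(alice_log)  # alice receives everything at or above DEBUG
print(bob_log)    # bob only sees the 'betty' name at INFO or above
```

As in twiggy, mutating `min_level` or `filter` on an emitter at runtime changes what its output receives, while the output object itself stays fixed.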
Once created, outputs cannot be modified. Each output has an associated `format`.

### Asynchronous Logging¶

Many outputs can be configured to use a separate, dedicated process to log messages. This is known as asynchronous logging and is enabled with the `msg_buffer` argument. Asynchronous mode dramatically reduces the cost of logging, as expensive formatting and writing operations are moved out of the main thread of control.

`Formats` transform a log message into a form that can be written by an output. The result of formatting is output dependent; for example, an output that posts to an HTTP server may take a format that provides JSON, whereas an output that writes to a file may produce text.

### Line-oriented formatting¶

`LineFormat` formats messages for text-oriented outputs such as a file or standard error. It uses a `ConversionTable` to stringify the arbitrary fields in a message. To customize, copy the default `line_format` and modify:

```
# in your twiggy_setup
import copy
my_format = copy.copy(formats.line_format)

my_format.conversion.add(key = 'address', # name of the field
                         convert_value = hex, # gets original value
                         convert_item = "{0}={1}".format, # gets called with: key, converted_value
                         required = True)

# output messages with name 'memory' to stderr
add_emitters(('memory', levels.DEBUG, filters.names('memory'),
              outputs.StreamOutput(format = my_format)))
```

## Filtering Output¶

The messages output by an emitter are determined by its `min_level` and filter (a `function` which takes a `Message` and returns a bool). These attributes may be changed while the application is running. The `filter` attribute of emitters is `intelligent`; you may assign strings, bools or functions and it will magically do the right thing. Assigning a list indicates that all of the filters must pass for the message to be output.
```
e = emitters['memory']
e.min_level = levels.WARNING

# True allows all messages through (None works as well)
e.filter = True

# False blocks all messages
e.filter = False

# Strings are interpreted as regexes (regex objects ok too)
e.filter = "^mem.*y$"

# functions are passed the message; return True to emit
e.filter = lambda msg: msg.fields['address'] > 0xDECAF

# lists are all()'d
e.filter = ["^mem.y$", lambda msg: msg.fields['address'] > 0xDECAF]
```

See also Available `filters`

## Dynamic Logging¶

Any functions in message args/fields are called and the value substituted.

```
>>> import os
>>> from twiggy.lib import thread_name
>>> thread_name()
'MainThread'
>>> log.fields(pid=os.getpid).info("I'm in thread {0}", thread_name)
INFO:pid=...:I'm in thread MainThread
```

This can be useful with partially-bound loggers, which lets us do some cool stuff. Here's a proxy class that logs which thread accesses attributes.

```
class ThreadTracker(object):
    """a proxy that logs attribute access"""
    def __init__(self, obj):
        self.__obj = obj
        # a partially bound logger
        self.__log = log.name("tracker").fields(obj_id=id(obj), thread=thread_name)
        self.__log.debug("started tracking")
    def __getattr__(self, attr):
        self.__log.debug("accessed {0}", attr)
        return getattr(self.__obj, attr)

class Bunch(object):
    pass
```

Let's see it in action.

```
>>> foo = Bunch()
>>> foo.bar = 42
>>> tracked = ThreadTracker(foo)
DEBUG:tracker:obj_id=...:thread=MainThread|started tracking
>>> tracked.bar
DEBUG:tracker:obj_id=...:thread=MainThread|accessed bar
42
>>> import threading
>>> t=threading.Thread(target = lambda: tracked.bar * 2, name = "TheDoubler")
>>> t.start(); t.join()
DEBUG:tracker:obj_id=...:thread=TheDoubler|accessed bar
```

If you really want to log a callable, `repr()` it or wrap it in lambda.

See also `procinfo` feature

## Features!¶

`Features` are optional additions of logging functionality to the `log`. They encapsulate common logging patterns.
Code can be written using a feature, enhancing what information is logged. The feature can be disabled at runtime if desired.

Warning: Features are currently deprecated, pending a reimplementation in version 0.5

```
>>> from twiggy.features import socket as socket_feature
>>> log.addFeature(socket_feature.socket)
>>> import socket
>>> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>>> s.connect(('www.python.org', 80))
>>> log.socket(s).debug("connected")
DEBUG:host=dinsdale.python.org:ip_addr=82.94.164.162:port=80:service=www|connected
>>> # turn off the feature - the name is still available
... log.disableFeature('socket')
>>> log.socket(s).debug("connected")
DEBUG|connected
>>> # use a different implementation
... log.addFeature(socket_feature.socket_minimal, 'socket')
>>> log.socket(s).debug("connected")
DEBUG:ip_addr=82.94.164.162:port=80|connected
```

## Stays Out of Your Way¶

Twiggy tries to stay out of your way. Specifically, an error in logging should never propagate outside the logging subsystem and cause your main application to crash. Instead, errors are trapped and reported by the `internal_log`.

Instances of `InternalLogger` only have a single `Output` - they do not use emitters. By default, these messages are sent to standard error. You may assign an alternate output (such as a file) to `twiggy.internal_log.output` if desired, with the following conditions:

* the output should be failsafe - any errors that occur during internal logging will be dumped to standard error, and suppressed, causing the original message to be discarded.
* accordingly, networked or asynchronous outputs are not recommended.
* make sure someone is reading these log messages!

## Concurrency¶

Locking in twiggy is as fine-grained as possible. Each individual output has its own lock (if necessary), and only holds that lock when writing. Using redundant outputs (ie, pointing to the same file) is not supported and will cause logfile corruption.
Asynchronous loggers never lock.

## Use by Libraries¶

Libraries require special care to be polite and usable by application code. The library should have a single bound log in its top-level package that's used by modules. Library logging should generally be silent by default.

```
# in mylib/__init__.py
log = twiggy.log.name('mylib')
log.min_level = twiggy.levels.DISABLED

# in mylib/some_module.py
from . import log
log.debug("hi there")
```

This allows application code to enable/disable all of the library's logging as needed.

```
# in twiggy_setup
import mylib
mylib.log.min_level = twiggy.levels.INFO
```

In addition to `min_level`, loggers also have a `filter`. This filter operates only on the format string, and is intended to allow users to selectively disable individual messages in a poorly-written library.

```
# in mylib:
for i in xrange(1000000):
    log.warning("blah blah {0}", 42)

# in twiggy_setup: turn off stupidness
mylib.log.filter = lambda format_spec: format_spec != "blah blah {0}"
```

Note that using a filter this way is an optimization - in general, application code should use `emitters` instead.

## Tips And Tricks¶

### Alternate Styles¶

In addition to the default new-style (braces) format specs, twiggy also supports old-style (percent, aka printf) and templates (dollar). The aliases {}, % and $ are also supported.

```
>>> log.options(style='percent').info('I like %s', "bikes")
INFO|I like bikes
>>> log.options(style='dollar').info('$what kill', what='Cars')
INFO|Cars kill
```

## Technical Details¶

### Independence of logger instances¶

Log instances created by partial binding are independent of each other. In particular, a logger's `name()` has no relation to the object; it's just for human use.

```
>>> log.name('bob') is log.name('bob')
False
```

### Optimizations¶

Twiggy has been written to be fast, minimizing the performance impact on the main execution path. In particular, messages that will cause no output are handled as quickly as possible.
Users are therefore encouraged to add lots of logging for development/debugging purposes and then turn them off in production. The emit methods can be hidden behind an appropriate `assert`. Python will eliminate the statement entirely when run with bytecode optimization ( `python -O` ).

```
assert log.debug("This goes away with python -O") is None
assert not log.debug("So does this")
```

Note: The author doesn't particularly care for code written like this, but likes making his users happy more.

## Extending Twiggy¶

When developing extensions to twiggy, use the `devel_log`. An `InternalLogger`, the devel_log is completely separate from the main `log`. By default, messages logged to the devel_log are discarded; assign an appropriate `Output` to its `output` attribute before using.

### Writing Features¶

Warning: Features are currently deprecated, pending a reimplementation in version 0.5

Features are used to encapsulate common logging patterns. They are implemented as methods added to the `Logger` class. They receive an instance as the first argument (ie, `self`). `Enable the feature` before using.

Features come in two flavors: those that add information to a message's fields or set options, and those that cause output. Features which only add fields/set options should simply call the appropriate method on `self` and return the resultant object:

```
def dimensions(self, shape):
    return self.fields(height=shape.height, width=shape.width)
```

Features can also emit messages as usual. Do not return from these methods:

```
def sayhi(self, lang):
    if lang == 'en':
        self.info("Hello world")
    elif lang == 'fr':
        self.info("Bonjour tout le monde")
```

If the feature should add fields and emit in the same step (like `struct()` ), use the `emit()` decorators.
Here's a prototype feature that dumps information about a WSGI environ:

```
from twiggy.logger import emit

@emit.info
def dump_wsgi(self, wsgi_environ):
    keys = ['SERVER_PROTOCOL', 'SERVER_PORT', 'SERVER_NAME',
            'CONTENT_LENGTH', 'CONTENT_TYPE', 'QUERY_STRING',
            'PATH_INFO', 'SCRIPT_NAME', 'REQUEST_METHOD']
    d = {}
    for k in keys:
        d[k] = wsgi_environ.get(k, '')
    for k, v in wsgi_environ.iteritems():
        if k.startswith('HTTP_'):
            k = k[5:].title().replace('_', '-')
            d[k] = v
    # if called on an unnamed logger, add a name
    if 'name' not in self._fields:
        self = self.name('dumpwsgi')
    return self.fields_dict(d)
```

### Writing Outputs and Formats¶

Outputs do the work of writing a message to an external resource (file, socket, etc.). User-defined outputs should inherit from `Output` or `AsyncOutput` if they wish to support asynchronous logging (preferred).

An Output subclass's `__init__` should take a format and any parameters needed to acquire resources (filename, hostname, etc.), but not the resources themselves. These are created in `_open()`. Implementations supporting asynchronous logging should also take a `msg_buffer` argument.

Outputs should define the following:

* `Output.` `_open` () * Acquire any resources needed for writing
* `Output.` `_close` () * Release any resources acquired in `_open`
* `Output.` `_write` (x) * Do the work of writing. Parameters: x – an implementation-dependent object to be written.

If the output requires locking to be thread-safe, set the class attribute `use_locks` to True (the default). Turning off may give slightly higher throughput.

The `format` callable is Output-specific; it should take a `Message` and return an appropriate object (string, database row, etc.) to be written. Do not modify the received message - it is shared by all outputs.

`ConversionTables` are particularly useful for formatting fields. They are commonly used with `LineFormat` to format messages for text-oriented output.
```
from twiggy.lib.converter import ConversionTable
conversion = ConversionTable()

fields = {'shape': 'square',
          'height': 10,
          'width': 5,
          'color': 'blue'}

# hide shape field name
# uppercase value
# make mandatory
conversion.add(key = 'shape',
               convert_value = str.upper,
               convert_item = '{1}'.format, # stringify 2nd item (value)
               required = True)

# format height value with two decimal places
# show as "<key> is <value>"
conversion.add('height', '{0:.2f}'.format, "{0} is {1}".format)

# separate fields in final output by colons
conversion.aggregate = ':'.join

# unknown items are sorted by key
# unknown values are stringified
conversion.generic_value = str

# show unknown items as "<key>=<value>"
conversion.generic_item = "{0}={1}".format

# convert!
print conversion.convert(fields)
```

```
SQUARE:height is 10.00:color=blue:width=5
```

## Global Objects¶

* `twiggy.` `log` ¶ * the magic log object
* `twiggy.` `internal_log` ¶ * `InternalLogger` for reporting errors within Twiggy itself
* `twiggy.` `devel_log` ¶ * `InternalLogger` for use by developers writing extensions to Twiggy
* `twiggy.` `add_emitters` (*tuples)¶ * Add multiple emitters. `tuples` should be `(name_of_emitter, min_level, filter, output)`. The last three are passed to `Emitter`.
* `twiggy.` `quick_setup` (min_level=<LogLevel DEBUG>, file=None, msg_buffer=0)¶ *

## Features¶

Optional additions of logging functionality

### procinfo¶

Logging feature to add information about process, etc.

`twiggy.features.procinfo.` `procinfo` (self)¶ *

Adds the following fields:

hostname: current hostname
pid: current process id
thread: current thread name

### socket¶

Logging feature to add information about a socket

```
twiggy.features.socket.
``` `socket` (self, s)¶ * Adds the following fields: b'ip_addr:' b'numeric IP address' b'port:' b'port number' b'host:' b'peer hostname, as returned by' `getnameinfo()` b'service:' b'the human readable name of the service on' `port` b'Parameters:' b's (socket) &#8211; the socket to extract information from' ## Filters¶ * `twiggy.filters.` `filter` (msg : Message) → bool¶ * A filter is any function that takes a `Message` and returns True if it should be `emitted` . * `twiggy.filters.` `msg_filter` (x) → filter¶ * create a `filter` intelligently You may pass: b'None, True:' b'the filter will always return True' b'False:' b'the filter will always return False' b'string:' b'compiled into a regex' b'regex:' `match()` against the message text b'callable:' b'returned as is' b'list:' b'apply' `msg_filter` to each element, and `all()` the results b'Return type:' `filter` function create a `filter` , which gives True if the messsage’s name equals any of those provided `names` will be stored as an attribute on the filter. b'Parameters:' b'names (strings) &#8211; names to match' b'Return type:' `filter` function create a `filter` , which gives True if the messsage’s name globs those provided. `names` will be stored as an attribute on the filter. This is probably quite a bit slower than `names()` . b'Parameters:' b'names (strings) &#8211; glob patterns.' b'Return type:' `filter` function Formats are single-argument callables that take a `Message` and return an object appropriate for the `Output` they are assigned to. * class `twiggy.formats.` `LineFormat` (separator=':', traceback_prefix='\nTRACE', conversion=line_conversion)¶ * * `separator` ¶ * string to separate line parts. Defaults to `:` . * `traceback_prefix` ¶ * string to prepend to traceback lines. Defaults to `\nTRACE` . Set to `'\\n'` (double backslash n) to roll up tracebacks to a single line. * `conversion` ¶ * `ConversionTable` used to format `fields` . 
Defaults to `line_conversion`.
  * `format_text(msg)`¶ - format the text part of a message
  * `format_fields(msg)`¶ - format the fields of a message
  * `format_traceback(msg)`¶ - format the traceback part of a message

* `twiggy.formats.line_conversion`¶ - a default line-oriented `ConversionTable`. Produces a nice-looking string from `fields`. Fields are separated by a colon (`:`). The resultant string includes:
  * time: in iso8601 format (required)
  * level: message level (required)
  * name: logger name

  Remaining fields are sorted alphabetically and formatted as `key=value`.

* `twiggy.formats.line_format`¶ - a default `LineFormat` for output to a file. Fields are formatted using `line_conversion` and separated from the message `text` by a colon (`:`). Traceback lines are prefixed by `TRACE`.

* `twiggy.formats.shell_conversion`¶ - a default line-oriented `ConversionTable` for use in the shell. Returns the same string as `line_conversion` but drops the `time` field.

* `twiggy.formats.shell_format`¶ - a default `LineFormat` for use in the shell. Same as `line_format` but uses `shell_conversion` for `fields`.

## Levels¶

Levels include (increasing severity): `DEBUG`, `INFO`, `NOTICE`, `WARNING`, `ERROR`, `CRITICAL`, `DISABLED`

* class `twiggy.levels.LogLevel(name, value)`¶ - A log level. Users should not create new instances. Levels are opaque; they may be compared to each other, but nothing else.

## Library¶

* `twiggy.lib.iso8601time(gmtime=None)`¶ - convert time to ISO 8601 format - it sucks less!
  * Parameters: gmtime (time.struct_time) – time tuple. If None, use `time.gmtime()` (UTC)
  * XXX timezone is not supported

* `twiggy.lib.thread_name()`¶ - return the name of the current thread

### Converter¶

`Converter(key, convert_value, convert_item, required=False)`¶ - Holder for `ConversionTable` items

* Variables:
  * key – the key to apply the conversion to
  * convert_value (function) – one-argument function to convert the value
  * convert_item (function) – two-argument function converting the key & converted value
  * required (bool) – is the item required to be present. Items are optional by default.

`same_value(v)`¶ - return the value unchanged

`same_item(k, v)`¶ - return the item unchanged

`drop(k, v)`¶ - return None, indicating the item should be dropped

New in version 0.5.0: Add `same_value`, `same_item`, `drop`.

`ConversionTable(seq)`¶ - Convert data dictionaries using `Converters`

For each item in the dictionary to be converted:

* Find one or more corresponding converters `c` by matching key.
* Build a list of converted items by calling `c.convert_item(item_key, c.convert_value(item_value))`. The list will have items in the same order as converters were supplied.
* Dict items for which no converter was found are sorted by key and passed to `generic_value` / `generic_item`. These items are appended to the list from step 2.
* If any required items are missing, `ValueError` is raised.
* The resulting list of converted items is passed to `aggregate`. The value it returns is the result of the conversion.

Users may override `generic_value` / `generic_item` / `aggregate` by subclassing or assigning a new function on a ConversionTable instance. Really, it’s pretty intuitive.

* `__init__(seq=None)`¶
  * Parameters: seq – a sequence of Converters. You may also pass 3-or-4 item arg tuples or kwarg dicts (which will be used to create `Converters`).

* `convert(d)`¶ - do the conversion
  * Parameters: d (dict) – the data to convert. Keys should be strings.
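The conversion steps described above can be sketched in plain Python. This is a simplified illustrative model of the documented algorithm, not Twiggy’s actual implementation; the `(key, convert_value, convert_item, required)` tuples and the colon `aggregate` are assumptions chosen to mirror the earlier `ConversionTable` example.

```python
def convert(converters, data):
    """Simplified model: converters is a list of
    (key, convert_value, convert_item, required) tuples."""
    out = []
    seen = set()
    for key, convert_value, convert_item, required in converters:
        if key in data:
            # convert the value, then the (key, converted value) item
            out.append(convert_item(key, convert_value(data[key])))
            seen.add(key)
        elif required:
            raise ValueError("missing required key: {0}".format(key))
    # unknown items are sorted by key and formatted generically
    for key in sorted(k for k in data if k not in seen):
        out.append("{0}={1}".format(key, str(data[key])))
    return ":".join(out)  # the "aggregate" step

converters = [
    ("shape", str.upper, lambda k, v: v, True),          # hide key, uppercase value
    ("height", "{0:.2f}".format, "{0} is {1}".format, False),
]

print(convert(converters, {"shape": "square", "height": 10, "width": 5}))
# SQUARE:height is 10.00:width=5
```

Note how the converted items come out in converter order, with the generically-formatted unknowns appended at the end, matching the numbered steps above.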
* `copy()`¶ - make an independent copy of this ConversionTable
* `get(key)`¶ - return the first converter for key
* `get_all(key)`¶ - return a list of all converters for key
* `delete(key)`¶ - delete all of the converters for key

## Logger¶

Loggers should not be created directly by users; use the global `log` instead.

* class `twiggy.logger.BaseLogger(fields=None, options=None, min_level=None)`¶ - Base class for loggers
  * `_fields`¶ - dictionary of bound fields for structured logging. By default, contains a single field `time` with value `time.gmtime()`. This function will be called for each message emitted, populating the field with the current `time.struct_time`.
  * `fields(**kwargs) → bound Logger`¶ - bind fields for structured logging. `kwargs` are interpreted as names/values of fields.
  * `fields_dict(d) → bound Logger`¶ - bind fields for structured logging, taking a dict of field names/values.
  * `trace(trace='error') → bound Logger`¶ - convenience method to enable traceback logging
  * `name(name) → bound Logger`¶ - convenience method to bind `name` field
  * `struct(**kwargs) → bound Logger`¶ - convenience method for structured logging. Calls `fields()` and emits at `info`
  * `struct_dict(d) → bound Logger`¶ - convenience method for structured logging, taking a dict of field names/values.

The following methods cause messages to be emitted. `format_spec` is a template string into which `args` and `kwargs` will be substituted.

  * `debug(format_spec='', *args, **kwargs)`¶ - Emit at `DEBUG` level
  * `info(format_spec='', *args, **kwargs)`¶ - Emit at `INFO` level
  * `notice(format_spec='', *args, **kwargs)`¶ - Emit at `NOTICE` level
  * `warning(format_spec='', *args, **kwargs)`¶ - Emit at `WARNING` level
  * `error(format_spec='', *args, **kwargs)`¶ - Emit at `ERROR` level
  * `critical(format_spec='', *args, **kwargs)`¶ - Emit at `CRITICAL` level

* class `twiggy.logger.Logger(fields=None, options=None, min_level=None)`¶ - Logger for end-users. The type of the magic `log`
  * `filter`¶ - Filter on `format_spec`. For optimization purposes only.
Should have the following signature:
  * `func(format_spec : string) → bool` - Should the message be emitted.

  * classmethod `addFeature(func, name=None)`¶ - add a feature to the class
    * Parameters:
      * func – the function to add
      * name (string) – the name to add it under. If None, use the function’s name.
  * classmethod `disableFeature(name)`¶ - disable a feature. A method will still exist by this name, but it won’t do anything.
    * Parameters: name (string) – the name of the feature to disable.
  * classmethod `delFeature(name)`¶ - delete a feature entirely
    * Parameters: name (string) – the name of the feature to remove

* class `twiggy.logger.InternalLogger(output, fields=None, options=None, min_level=None)`¶ - Special-purpose logger for internal uses. Sends messages directly to output, bypassing `emitters`.
  * Variables: output (Output) – an output to write to

## Message¶

* class `twiggy.message.Message(level, format_spec, fields, options, args, kwargs)`¶ - A logging message. Users never create these directly.

Changed in version 0.4.1: Pass args/kwargs as list/dict instead of via `*` / `**` expansion.

The constructor takes a dict of `options` to control message creation. In addition to `suppress_newlines`, the following options are recognized:

* trace: control traceback inclusion. Either a traceback tuple, or one of the strings `always`, `error`, in which case a traceback will be extracted from the current stack frame.
* style: the style of template used for `format_spec`. One of `braces`, `percent`, `dollar`. The aliases `{}`, `%` and `$` are also supported.

Any callables passed in `fields`, `args` or `kwargs` will be called and the returned value used instead. See dynamic messages.

All attributes are read-only.

* `fields`¶ - dictionary of structured logging fields. Keys are string, values are arbitrary. A `level` item is required.
* `suppress_newlines`¶ - should newlines be escaped in output. Boolean.
* `traceback`¶ - a stringified traceback, or None.
* `text`¶ - the human-readable message. Constructed by substituting `args` / `kwargs` into `format_spec`. String.
* `__init__(level, format_spec, fields, options, args, kwargs)`¶
  * Parameters:
    * level (LogLevel) – the level of the message
    * format_spec (string) – the human-readable message template. Should match the `style` in options.
    * fields (dict) – dictionary of fields for structured logging
    * args (tuple) – substitution arguments for `format_spec`.
    * kwargs (dict) – substitution keyword arguments for `format_spec`.
    * options (dict) – a dictionary of options to control message creation.

## Outputs¶

* class `twiggy.outputs.Output(format=None, close_atexit=True)`¶
  * `_format`¶ - a callable taking a `Message` and formatting it for output. None means return the message unchanged.
  * `use_locks`¶ - Class variable, indicating that locks should be used when running in a synchronous, multithreaded environment. Threadsafe subclasses may disable locking for higher throughput. Defaults to True.
  * `__init__(format=None, close_atexit=True)`¶
    * Parameters:
      * format (format) – the format to use. If None, return the message unchanged.
      * close_atexit (bool) – should `close()` be registered with `atexit`. If False, the user is responsible for closing the output.
    * New in version 0.4.1: Add the `close_atexit` parameter.
  * `close()`¶ - Finalize the output.

The following methods should be implemented by subclasses.

  * `_open()`¶ - acquire the resources needed for writing
  * `_write(x)`¶ - do the work of writing
    * Parameters: x – an implementation-dependent object to be written.

* class `twiggy.outputs.AsyncOutput(msg_buffer=0)`¶ - An `Output` with support for asynchronous logging. Inheriting from this class transparently adds support for asynchronous logging using the multiprocessing module. This is off by default, as it can cause log messages to be dropped.
  * Parameters: msg_buffer (int) – number of messages to buffer in memory when using asynchronous logging. `0` turns asynchronous output off, a negative integer means an unlimited buffer, a positive integer is the size of the buffer.

* class `twiggy.outputs.FileOutput(name, format, mode='a', buffering=1, msg_buffer=0, close_atexit=True)`¶ - Output messages to a file. `name`, `mode`, `buffering` are passed to `open()`

* class `twiggy.outputs.StreamOutput(format, stream=sys.stderr)`¶ - Output to an externally-managed stream. The stream will be written to, but otherwise left alone (i.e., it will not be closed).

* class `twiggy.outputs.NullOutput(format=None, close_atexit=True)`¶ - An output that just discards its messages

* class `twiggy.outputs.ListOutput(format=None, close_atexit=True)`¶ - an output that stuffs messages in a list. Useful for unittesting.
  * Variables: messages (list) – messages that have been emitted
  * Changed in version 0.4.1: Replace `DequeOutput` with more useful `ListOutput`.

This part discusses how to test Twiggy to ensure that Twiggy is built and installed correctly.

## Requirements¶

The following need to be installed prior to testing:

* Python 2.7.1 or greater.
* The coverage module.
* sphinx 1.0.8 or greater. You’ll need to get and build the sphinx source.
* Twiggy source.

## Running Tests¶

Note: Tests must be run from the Twiggy root directory to work.

To run all tests (unittest and Sphinx doctests):

```
./scripts/run-twiggy-tests.sh
```

To run coverage tests, run:

```
./scripts/cover-twiggy-tests.sh discover -b
```

To run coverage tests on a specific module, run:

```
./scripts/cover-twiggy-tests.sh tests.test_levels
```

* asynchronous logging - performance enhancement that moves formatting and writing messages to a separate process. See Asynchronous Logging.
* structured logging - logging information in easy-to-parse key-value pairs, instead of embedded in a human-readable message.
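The distinction drawn by the glossary entry above can be illustrated with a small sketch. This is plain Python, not Twiggy’s API, and the field names are made up for illustration; with Twiggy itself, structured logging looks like `log.fields(user='bob', attempts=3).info('login failed')`.

```python
# Contrast a traditional embedded message with structured key-value logging.
fields = {"user": "bob", "attempts": 3}

# traditional: data baked into a human-readable string; hard to parse back out
traditional = "login failed for {user} after {attempts} attempts".format(**fields)

# structured: data kept as key=value pairs (sorted for stable output),
# trivially parseable by downstream log processors
structured = "login failed: " + ":".join(
    "{0}={1}".format(k, v) for k, v in sorted(fields.items())
)

print(traditional)  # login failed for bob after 3 attempts
print(structured)   # login failed: attempts=3:user=bob
```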
See an example.

## 0.5.0¶

XXX Unreleased

* add a NOTICE level between INFO and WARNING
* add same_value, same_item, drop helper functions to lib.converter
* support {}, %, $ as style aliases
* PEP8 name compliance
* add logging_compat module for compatibility with stdlib’s logging

## 0.4.7¶

03/09/2015 - add missing classifiers to setup.py

## 0.4.6¶

03/09/2015 - also suppress newlines in fields output - Python 3 support

## 0.4.5¶

03/18/2013 - documentation update, move to Github

## 0.4.4¶

07/12/2011 - support Python 2.6

## 0.4.3¶

12/20/2010 - add check for Python >= 2.7 to setup.py, to reduce invalid bug reports.

## 0.4.2¶

11/11/2010 - fix broken installer

## 0.4.1¶

11/8/2010

* full test coverage; numerous bug fixes
* add close_atexit parameter to Outputs
* replace DequeOutput with ListOutput
* deprecate features, pending a rewrite in 0.5
* minor internal API changes

## 0.4.0¶

10/18/2010

First serious public release

Twiggy would not be possible without the support of the following people. You have our thanks.

* <NAME> `<EMAIL>`
* <NAME> `<EMAIL>`
* <NAME> `<EMAIL>`
* <NAME> `<EMAIL>`
gopkg.in/oramatistis/iris.v12
README [¶](#section-readme) --- ### Iris Web Framework Documentation [¶](#section-documentation) --- ### Overview [¶](#pkg-overview) * [Current Version](#hdr-Current_Version) * [Installation](#hdr-Installation) Package iris implements the highest realistic performance, easy to learn Go web framework. Iris provides a beautifully expressive and easy to use foundation for your next website, API, or distributed app. Low-level handlers compatible with `net/http` and high-level fastest MVC implementation and handlers dependency injection. Easy to learn for new gophers and advanced features for experienced, it goes as far as you dive into it! Source code and other details for the project are available at GitHub: ``` https://github.com/kataras/iris ``` #### Current Version [¶](#hdr-Current_Version) 12.2.0 #### Installation [¶](#hdr-Installation) The only requirement is the Go Programming Language, at least version 1.20. ``` $ go get github.com/kataras/iris/v12@latest ``` Wiki: ``` https://www.iris-go.com/#ebookDonateForm ``` Examples: ``` https://github.com/kataras/iris/tree/main/_examples ``` Middleware: ``` https://github.com/kataras/iris/tree/main/middleware https://github.com/iris-contrib/middleware ``` Home Page: ``` https://iris-go.com ``` ### Index [¶](#pkg-index) * [Constants](#pkg-constants) * [Variables](#pkg-variables) * [func Compression(ctx Context)](#Compression) * [func ConfigureMiddleware(handlers ...Handler) router.PartyConfigurator](#ConfigureMiddleware) * [func Minify(ctx Context)](#Minify) * [func PrefixDir(prefix string, fs http.FileSystem) http.FileSystem](#PrefixDir) * [func PrefixFS(fileSystem fs.FS, dir string) (fs.FS, error)](#PrefixFS) * [func WithSocketSharding(app *Application)](#WithSocketSharding) * [type APIContainer](#APIContainer) * [type Application](#Application) * + [func Default() *Application](#Default) + [func New() *Application](#New) * + [func (app *Application) Build() error](#Application.Build) + [func (app *Application) 
ConfigurationReadOnly() context.ConfigurationReadOnly](#Application.ConfigurationReadOnly) + [func (app *Application) Configure(configurators ...Configurator) *Application](#Application.Configure) + [func (app *Application) ConfigureHost(configurators ...host.Configurator) *Application](#Application.ConfigureHost) + [func (app *Application) GetContextErrorHandler() context.ErrorHandler](#Application.GetContextErrorHandler) + [func (app *Application) GetContextPool() *context.Pool](#Application.GetContextPool) + [func (app *Application) I18nReadOnly() context.I18nReadOnly](#Application.I18nReadOnly) + [func (app *Application) IsDebug() bool](#Application.IsDebug) + [func (app *Application) Listen(hostPort string, withOrWithout ...Configurator) error](#Application.Listen) + [func (app *Application) Logger() *golog.Logger](#Application.Logger) + [func (app *Application) Minifier() *minify.M](#Application.Minifier) + [func (app *Application) NewHost(srv *http.Server) *host.Supervisor](#Application.NewHost) + [func (app *Application) RegisterView(viewEngine view.Engine)](#Application.RegisterView) + [func (app *Application) Run(serve Runner, withOrWithout ...Configurator) error](#Application.Run) + [func (app *Application) SetContextErrorHandler(errHandler context.ErrorHandler) *Application](#Application.SetContextErrorHandler) + [func (app *Application) SetName(appName string) *Application](#Application.SetName) + [func (app *Application) Shutdown(ctx stdContext.Context) error](#Application.Shutdown) + [func (app *Application) String() string](#Application.String) + [func (app *Application) SubdomainRedirect(from, to router.Party) router.Party](#Application.SubdomainRedirect) + [func (app *Application) Validate(v interface{}) error](#Application.Validate) + [func (app *Application) View(writer io.Writer, filename string, layout string, bindingData interface{}) error](#Application.View) + [func (app *Application) WWW() router.Party](#Application.WWW) * [type 
ApplicationBuilder](#ApplicationBuilder) * [type Attachments](#Attachments) * [type CompressionGuide](#CompressionGuide) * [type Configuration](#Configuration) * + [func DefaultConfiguration() Configuration](#DefaultConfiguration) + [func TOML(filename string) Configuration](#TOML) + [func YAML(filename string) Configuration](#YAML) * + [func (c *Configuration) GetCharset() string](#Configuration.GetCharset) + [func (c *Configuration) GetDisableAutoFireStatusCode() bool](#Configuration.GetDisableAutoFireStatusCode) + [func (c *Configuration) GetDisableBodyConsumptionOnUnmarshal() bool](#Configuration.GetDisableBodyConsumptionOnUnmarshal) + [func (c *Configuration) GetDisablePathCorrection() bool](#Configuration.GetDisablePathCorrection) + [func (c *Configuration) GetDisablePathCorrectionRedirection() bool](#Configuration.GetDisablePathCorrectionRedirection) + [func (c *Configuration) GetEnableDynamicHandler() bool](#Configuration.GetEnableDynamicHandler) + [func (c *Configuration) GetEnableEasyJSON() bool](#Configuration.GetEnableEasyJSON) + [func (c *Configuration) GetEnableOptimizations() bool](#Configuration.GetEnableOptimizations) + [func (c *Configuration) GetEnablePathEscape() bool](#Configuration.GetEnablePathEscape) + [func (c *Configuration) GetEnablePathIntelligence() bool](#Configuration.GetEnablePathIntelligence) + [func (c *Configuration) GetEnableProtoJSON() bool](#Configuration.GetEnableProtoJSON) + [func (c *Configuration) GetFallbackViewContextKey() string](#Configuration.GetFallbackViewContextKey) + [func (c *Configuration) GetFireEmptyFormError() bool](#Configuration.GetFireEmptyFormError) + [func (c *Configuration) GetFireMethodNotAllowed() bool](#Configuration.GetFireMethodNotAllowed) + [func (c *Configuration) GetForceLowercaseRouting() bool](#Configuration.GetForceLowercaseRouting) + [func (c *Configuration) GetHostProxyHeaders() map[string]bool](#Configuration.GetHostProxyHeaders) + [func (c *Configuration) GetKeepAlive() 
time.Duration](#Configuration.GetKeepAlive) + [func (c *Configuration) GetLanguageContextKey() string](#Configuration.GetLanguageContextKey) + [func (c *Configuration) GetLanguageInputContextKey() string](#Configuration.GetLanguageInputContextKey) + [func (c *Configuration) GetLocaleContextKey() string](#Configuration.GetLocaleContextKey) + [func (c *Configuration) GetLogLevel() string](#Configuration.GetLogLevel) + [func (c *Configuration) GetOther() map[string]interface{}](#Configuration.GetOther) + [func (c *Configuration) GetPostMaxMemory() int64](#Configuration.GetPostMaxMemory) + [func (c *Configuration) GetRemoteAddrHeaders() []string](#Configuration.GetRemoteAddrHeaders) + [func (c *Configuration) GetRemoteAddrHeadersForce() bool](#Configuration.GetRemoteAddrHeadersForce) + [func (c *Configuration) GetRemoteAddrPrivateSubnets() []netutil.IPRange](#Configuration.GetRemoteAddrPrivateSubnets) + [func (c *Configuration) GetResetOnFireErrorCode() bool](#Configuration.GetResetOnFireErrorCode) + [func (c *Configuration) GetSSLProxyHeaders() map[string]string](#Configuration.GetSSLProxyHeaders) + [func (c *Configuration) GetSocketSharding() bool](#Configuration.GetSocketSharding) + [func (c *Configuration) GetTimeFormat() string](#Configuration.GetTimeFormat) + [func (c *Configuration) GetTimeout() time.Duration](#Configuration.GetTimeout) + [func (c *Configuration) GetTimeoutMessage() string](#Configuration.GetTimeoutMessage) + [func (c *Configuration) GetURLParamSeparator() *string](#Configuration.GetURLParamSeparator) + [func (c *Configuration) GetVHost() string](#Configuration.GetVHost) + [func (c *Configuration) GetVersionAliasesContextKey() string](#Configuration.GetVersionAliasesContextKey) + [func (c *Configuration) GetVersionContextKey() string](#Configuration.GetVersionContextKey) + [func (c *Configuration) GetViewDataContextKey() string](#Configuration.GetViewDataContextKey) + [func (c *Configuration) GetViewEngineContextKey() 
string](#Configuration.GetViewEngineContextKey) + [func (c *Configuration) GetViewLayoutContextKey() string](#Configuration.GetViewLayoutContextKey) * [type Configurator](#Configurator) * + [func WithCharset(charset string) Configurator](#WithCharset) + [func WithConfiguration(c Configuration) Configurator](#WithConfiguration) + [func WithHostProxyHeader(headers ...string) Configurator](#WithHostProxyHeader) + [func WithKeepAlive(keepAliveDur time.Duration) Configurator](#WithKeepAlive) + [func WithLogLevel(level string) Configurator](#WithLogLevel) + [func WithOtherValue(key string, val interface{}) Configurator](#WithOtherValue) + [func WithPostMaxMemory(limit int64) Configurator](#WithPostMaxMemory) + [func WithRemoteAddrHeader(header ...string) Configurator](#WithRemoteAddrHeader) + [func WithRemoteAddrPrivateSubnet(startIP, endIP string) Configurator](#WithRemoteAddrPrivateSubnet) + [func WithSSLProxyHeader(headerKey, headerValue string) Configurator](#WithSSLProxyHeader) + [func WithSitemap(startURL string) Configurator](#WithSitemap) + [func WithTimeFormat(timeformat string) Configurator](#WithTimeFormat) + [func WithTimeout(timeoutDur time.Duration, htmlBody ...string) Configurator](#WithTimeout) + [func WithoutRemoteAddrHeader(headerName string) Configurator](#WithoutRemoteAddrHeader) + [func WithoutServerError(errors ...error) Configurator](#WithoutServerError) * [type Context](#Context) * [type ContextPatches](#ContextPatches) * + [func (cp *ContextPatches) GetDomain(patchFunc func(hostport string) string)](#ContextPatches.GetDomain) + [func (cp *ContextPatches) ResolveFS(patchFunc func(fsOrDir interface{}) fs.FS)](#ContextPatches.ResolveFS) + [func (cp *ContextPatches) ResolveHTTPFS(patchFunc func(fsOrDir interface{}) http.FileSystem)](#ContextPatches.ResolveHTTPFS) + [func (cp *ContextPatches) SetCookieKVExpiration(patch time.Duration)](#ContextPatches.SetCookieKVExpiration) + [func (cp *ContextPatches) Writers() 
*ContextWriterPatches](#ContextPatches.Writers) * [type ContextWriterPatches](#ContextWriterPatches) * + [func (cwp *ContextWriterPatches) JSON(patchFunc func(ctx Context, v interface{}, options *JSON) error)](#ContextWriterPatches.JSON) + [func (cwp *ContextWriterPatches) JSONP(patchFunc func(ctx Context, v interface{}, options *JSONP) error)](#ContextWriterPatches.JSONP) + [func (cwp *ContextWriterPatches) Markdown(patchFunc func(ctx Context, v []byte, options *Markdown) error)](#ContextWriterPatches.Markdown) + [func (cwp *ContextWriterPatches) XML(patchFunc func(ctx Context, v interface{}, options *XML) error)](#ContextWriterPatches.XML) + [func (cwp *ContextWriterPatches) YAML(patchFunc func(ctx Context, v interface{}, indentSpace int) error)](#ContextWriterPatches.YAML) * [type Cookie](#Cookie) * [type CookieOption](#CookieOption) * [type DecodeFunc](#DecodeFunc) * [type Dir](#Dir) * [type DirCacheOptions](#DirCacheOptions) * [type DirListRichOptions](#DirListRichOptions) * [type DirOptions](#DirOptions) * [type ErrPrivate](#ErrPrivate) * [type ErrViewNotExist](#ErrViewNotExist) * [type ExecutionOptions](#ExecutionOptions) * [type ExecutionRules](#ExecutionRules) * [type FallbackView](#FallbackView) * [type FallbackViewFunc](#FallbackViewFunc) * [type FallbackViewLayout](#FallbackViewLayout) * [type Filter](#Filter) * [type GlobalPatches](#GlobalPatches) * + [func Patches() *GlobalPatches](#Patches) * + [func (p *GlobalPatches) Context() *ContextPatches](#GlobalPatches.Context) * [type Guide](#Guide) * + [func NewGuide() Guide](#NewGuide) * [type Handler](#Handler) * [type HealthGuide](#HealthGuide) * [type JSON](#JSON) * [type JSONP](#JSONP) * [type JSONReader](#JSONReader) * [type Locale](#Locale) * [type Map](#Map) * [type Markdown](#Markdown) * [type MiddlewareGuide](#MiddlewareGuide) * [type N](#N) * [type Party](#Party) * [type Problem](#Problem) * [type ProblemOptions](#ProblemOptions) * [type ProtoMarshalOptions](#ProtoMarshalOptions) * [type 
ProtoUnmarshalOptions](#ProtoUnmarshalOptions) * [type ResultHandler](#ResultHandler) * [type Runner](#Runner) * + [func Addr(addr string, hostConfigs ...host.Configurator) Runner](#Addr) + [func AutoTLS(addr string, domain string, email string, hostConfigs ...host.Configurator) Runner](#AutoTLS) + [func Listener(l net.Listener, hostConfigs ...host.Configurator) Runner](#Listener) + [func Raw(f func() error) Runner](#Raw) + [func Server(srv *http.Server, hostConfigs ...host.Configurator) Runner](#Server) + [func TLS(addr string, certFileOrContents, keyFileOrContents string, ...) Runner](#TLS) * [type ServiceGuide](#ServiceGuide) * [type SimpleUser](#SimpleUser) * [type Singleton](#Singleton) * + [func (c Singleton) Singleton() bool](#Singleton.Singleton) * [type Supervisor](#Supervisor) * [type TimeoutGuide](#TimeoutGuide) * [type Tunnel](#Tunnel) * [type TunnelingConfiguration](#TunnelingConfiguration) * [type UnmarshalerFunc](#UnmarshalerFunc) * [type User](#User) * [type ViewEngine](#ViewEngine) * [type XML](#XML)

### Constants [¶](#pkg-constants)

```
const (
	SameSiteDefaultMode = http.SameSiteDefaultMode
	SameSiteLaxMode     = http.SameSiteLaxMode
	SameSiteStrictMode  = http.SameSiteStrictMode
	SameSiteNoneMode    = http.SameSiteNoneMode
)
```

SameSite attributes.

```
const (
	// RouteOverride replaces an existing route with the new one, the default rule.
	RouteOverride = router.RouteOverride
	// RouteSkip keeps the original route and skips the new one.
	RouteSkip = router.RouteSkip
	// RouteError logs when a route already exists; shown after the `Build` state,
	// the server never starts.
	RouteError = router.RouteError
	// RouteOverlap will overlap the new route to the previous one.
	// If the route stopped and its response can be reset then the new route will be executed.
	RouteOverlap = router.RouteOverlap
)
```

Constants for the input argument at `router.RouteRegisterRule`. See `Party#SetRegisterRule`.

```
const (
	ReferrerInvalid             = context.ReferrerInvalid
	ReferrerIndirect            = context.ReferrerIndirect
	ReferrerDirect              = context.ReferrerDirect
	ReferrerEmail               = context.ReferrerEmail
	ReferrerSearch              = context.ReferrerSearch
	ReferrerSocial              = context.ReferrerSocial
	ReferrerNotGoogleSearch     = context.ReferrerNotGoogleSearch
	ReferrerGoogleOrganicSearch = context.ReferrerGoogleOrganicSearch
	ReferrerGoogleAdwords       = context.ReferrerGoogleAdwords
)
```

Contains the enum values of the `Context.GetReferrer()` method, shortcuts of the context subpackage.

```
const (
	MethodGet     = http.MethodGet
	MethodPost    = http.MethodPost
	MethodPut     = http.MethodPut
	MethodDelete  = http.MethodDelete
	MethodConnect = http.MethodConnect
	MethodHead    = http.MethodHead
	MethodPatch   = http.MethodPatch
	MethodOptions = http.MethodOptions
	MethodTrace   = http.MethodTrace
	// MethodNone is an iris-specific "virtual" method
	// to store the "offline" routes.
	MethodNone = router.MethodNone
)
```

HTTP Methods copied from `net/http`.
``` const ( StatusContinue = [http](/net/http).[StatusContinue](/net/http#StatusContinue) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.2.1 StatusSwitchingProtocols = [http](/net/http).[StatusSwitchingProtocols](/net/http#StatusSwitchingProtocols) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.2.2 StatusProcessing = [http](/net/http).[StatusProcessing](/net/http#StatusProcessing) // [RFC 2518](https://rfc-editor.org/rfc/rfc2518.html), 10.1 StatusEarlyHints = [http](/net/http).[StatusEarlyHints](/net/http#StatusEarlyHints) // [RFC 8297](https://rfc-editor.org/rfc/rfc8297.html) StatusOK = [http](/net/http).[StatusOK](/net/http#StatusOK) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.3.1 StatusCreated = [http](/net/http).[StatusCreated](/net/http#StatusCreated) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.3.2 StatusAccepted = [http](/net/http).[StatusAccepted](/net/http#StatusAccepted) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.3.3 StatusNonAuthoritativeInfo = [http](/net/http).[StatusNonAuthoritativeInfo](/net/http#StatusNonAuthoritativeInfo) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.3.4 StatusNoContent = [http](/net/http).[StatusNoContent](/net/http#StatusNoContent) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.3.5 StatusResetContent = [http](/net/http).[StatusResetContent](/net/http#StatusResetContent) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.3.6 StatusPartialContent = [http](/net/http).[StatusPartialContent](/net/http#StatusPartialContent) // [RFC 7233](https://rfc-editor.org/rfc/rfc7233.html), 4.1 StatusMultiStatus = [http](/net/http).[StatusMultiStatus](/net/http#StatusMultiStatus) // [RFC 4918](https://rfc-editor.org/rfc/rfc4918.html), 11.1 StatusAlreadyReported = [http](/net/http).[StatusAlreadyReported](/net/http#StatusAlreadyReported) // [RFC 5842](https://rfc-editor.org/rfc/rfc5842.html), 7.1 StatusIMUsed = 
[http](/net/http).[StatusIMUsed](/net/http#StatusIMUsed) // [RFC 3229](https://rfc-editor.org/rfc/rfc3229.html), 10.4.1 StatusMultipleChoices = [http](/net/http).[StatusMultipleChoices](/net/http#StatusMultipleChoices) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.4.1 StatusMovedPermanently = [http](/net/http).[StatusMovedPermanently](/net/http#StatusMovedPermanently) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.4.2 StatusFound = [http](/net/http).[StatusFound](/net/http#StatusFound) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.4.3 StatusSeeOther = [http](/net/http).[StatusSeeOther](/net/http#StatusSeeOther) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.4.4 StatusNotModified = [http](/net/http).[StatusNotModified](/net/http#StatusNotModified) // [RFC 7232](https://rfc-editor.org/rfc/rfc7232.html), 4.1 StatusUseProxy = [http](/net/http).[StatusUseProxy](/net/http#StatusUseProxy) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.4.5 StatusTemporaryRedirect = [http](/net/http).[StatusTemporaryRedirect](/net/http#StatusTemporaryRedirect) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.4.7 StatusPermanentRedirect = [http](/net/http).[StatusPermanentRedirect](/net/http#StatusPermanentRedirect) // [RFC 7538](https://rfc-editor.org/rfc/rfc7538.html), 3 StatusBadRequest = [http](/net/http).[StatusBadRequest](/net/http#StatusBadRequest) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.5.1 StatusUnauthorized = [http](/net/http).[StatusUnauthorized](/net/http#StatusUnauthorized) // [RFC 7235](https://rfc-editor.org/rfc/rfc7235.html), 3.1 StatusPaymentRequired = [http](/net/http).[StatusPaymentRequired](/net/http#StatusPaymentRequired) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.5.2 StatusForbidden = [http](/net/http).[StatusForbidden](/net/http#StatusForbidden) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.5.3 StatusNotFound = 
[http](/net/http).[StatusNotFound](/net/http#StatusNotFound) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.5.4 StatusMethodNotAllowed = [http](/net/http).[StatusMethodNotAllowed](/net/http#StatusMethodNotAllowed) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.5.5 StatusNotAcceptable = [http](/net/http).[StatusNotAcceptable](/net/http#StatusNotAcceptable) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.5.6 StatusProxyAuthRequired = [http](/net/http).[StatusProxyAuthRequired](/net/http#StatusProxyAuthRequired) // [RFC 7235](https://rfc-editor.org/rfc/rfc7235.html), 3.2 StatusRequestTimeout = [http](/net/http).[StatusRequestTimeout](/net/http#StatusRequestTimeout) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.5.7 StatusConflict = [http](/net/http).[StatusConflict](/net/http#StatusConflict) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.5.8 StatusGone = [http](/net/http).[StatusGone](/net/http#StatusGone) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.5.9 StatusLengthRequired = [http](/net/http).[StatusLengthRequired](/net/http#StatusLengthRequired) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.5.10 StatusPreconditionFailed = [http](/net/http).[StatusPreconditionFailed](/net/http#StatusPreconditionFailed) // [RFC 7232](https://rfc-editor.org/rfc/rfc7232.html), 4.2 StatusRequestEntityTooLarge = [http](/net/http).[StatusRequestEntityTooLarge](/net/http#StatusRequestEntityTooLarge) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.5.11 StatusRequestURITooLong = [http](/net/http).[StatusRequestURITooLong](/net/http#StatusRequestURITooLong) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.5.12 StatusUnsupportedMediaType = [http](/net/http).[StatusUnsupportedMediaType](/net/http#StatusUnsupportedMediaType) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.5.13 StatusRequestedRangeNotSatisfiable = 
[http](/net/http).[StatusRequestedRangeNotSatisfiable](/net/http#StatusRequestedRangeNotSatisfiable) // [RFC 7233](https://rfc-editor.org/rfc/rfc7233.html), 4.4 StatusExpectationFailed = [http](/net/http).[StatusExpectationFailed](/net/http#StatusExpectationFailed) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.5.14 StatusTeapot = [http](/net/http).[StatusTeapot](/net/http#StatusTeapot) // [RFC 7168](https://rfc-editor.org/rfc/rfc7168.html), 2.3.3 StatusMisdirectedRequest = [http](/net/http).[StatusMisdirectedRequest](/net/http#StatusMisdirectedRequest) // [RFC 7540](https://rfc-editor.org/rfc/rfc7540.html), 9.1.2 StatusUnprocessableEntity = [http](/net/http).[StatusUnprocessableEntity](/net/http#StatusUnprocessableEntity) // [RFC 4918](https://rfc-editor.org/rfc/rfc4918.html), 11.2 StatusLocked = [http](/net/http).[StatusLocked](/net/http#StatusLocked) // [RFC 4918](https://rfc-editor.org/rfc/rfc4918.html), 11.3 StatusFailedDependency = [http](/net/http).[StatusFailedDependency](/net/http#StatusFailedDependency) // [RFC 4918](https://rfc-editor.org/rfc/rfc4918.html), 11.4 StatusTooEarly = [http](/net/http).[StatusTooEarly](/net/http#StatusTooEarly) // [RFC 8470](https://rfc-editor.org/rfc/rfc8470.html), 5.2. 
StatusUpgradeRequired = [http](/net/http).[StatusUpgradeRequired](/net/http#StatusUpgradeRequired) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.5.15 StatusPreconditionRequired = [http](/net/http).[StatusPreconditionRequired](/net/http#StatusPreconditionRequired) // [RFC 6585](https://rfc-editor.org/rfc/rfc6585.html), 3 StatusTooManyRequests = [http](/net/http).[StatusTooManyRequests](/net/http#StatusTooManyRequests) // [RFC 6585](https://rfc-editor.org/rfc/rfc6585.html), 4 StatusRequestHeaderFieldsTooLarge = [http](/net/http).[StatusRequestHeaderFieldsTooLarge](/net/http#StatusRequestHeaderFieldsTooLarge) // [RFC 6585](https://rfc-editor.org/rfc/rfc6585.html), 5 StatusUnavailableForLegalReasons = [http](/net/http).[StatusUnavailableForLegalReasons](/net/http#StatusUnavailableForLegalReasons) // [RFC 7725](https://rfc-editor.org/rfc/rfc7725.html), 3 // Unofficial Client Errors. StatusPageExpired = [context](/github.com/kataras/iris/[email protected]/context).[StatusPageExpired](/github.com/kataras/iris/[email protected]/context#StatusPageExpired) StatusBlockedByWindowsParentalControls = [context](/github.com/kataras/iris/[email protected]/context).[StatusBlockedByWindowsParentalControls](/github.com/kataras/iris/[email protected]/context#StatusBlockedByWindowsParentalControls) StatusInvalidToken = [context](/github.com/kataras/iris/[email protected]/context).[StatusInvalidToken](/github.com/kataras/iris/[email protected]/context#StatusInvalidToken) StatusTokenRequired = [context](/github.com/kataras/iris/[email protected]/context).[StatusTokenRequired](/github.com/kataras/iris/[email protected]/context#StatusTokenRequired) // StatusInternalServerError = [http](/net/http).[StatusInternalServerError](/net/http#StatusInternalServerError) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.6.1 StatusNotImplemented = [http](/net/http).[StatusNotImplemented](/net/http#StatusNotImplemented) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.6.2 
StatusBadGateway = [http](/net/http).[StatusBadGateway](/net/http#StatusBadGateway) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.6.3 StatusServiceUnavailable = [http](/net/http).[StatusServiceUnavailable](/net/http#StatusServiceUnavailable) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.6.4 StatusGatewayTimeout = [http](/net/http).[StatusGatewayTimeout](/net/http#StatusGatewayTimeout) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.6.5 StatusHTTPVersionNotSupported = [http](/net/http).[StatusHTTPVersionNotSupported](/net/http#StatusHTTPVersionNotSupported) // [RFC 7231](https://rfc-editor.org/rfc/rfc7231.html), 6.6.6 StatusVariantAlsoNegotiates = [http](/net/http).[StatusVariantAlsoNegotiates](/net/http#StatusVariantAlsoNegotiates) // [RFC 2295](https://rfc-editor.org/rfc/rfc2295.html), 8.1 StatusInsufficientStorage = [http](/net/http).[StatusInsufficientStorage](/net/http#StatusInsufficientStorage) // [RFC 4918](https://rfc-editor.org/rfc/rfc4918.html), 11.5 StatusLoopDetected = [http](/net/http).[StatusLoopDetected](/net/http#StatusLoopDetected) // [RFC 5842](https://rfc-editor.org/rfc/rfc5842.html), 7.2 StatusNotExtended = [http](/net/http).[StatusNotExtended](/net/http#StatusNotExtended) // [RFC 2774](https://rfc-editor.org/rfc/rfc2774.html), 7 StatusNetworkAuthenticationRequired = [http](/net/http).[StatusNetworkAuthenticationRequired](/net/http#StatusNetworkAuthenticationRequired) // [RFC 6585](https://rfc-editor.org/rfc/rfc6585.html), 6 // Unofficial Server Errors. 
StatusBandwidthLimitExceeded = [context](/github.com/kataras/iris/[email protected]/context).[StatusBandwidthLimitExceeded](/github.com/kataras/iris/[email protected]/context#StatusBandwidthLimitExceeded) StatusInvalidSSLCertificate = [context](/github.com/kataras/iris/[email protected]/context).[StatusInvalidSSLCertificate](/github.com/kataras/iris/[email protected]/context#StatusInvalidSSLCertificate) StatusSiteOverloaded = [context](/github.com/kataras/iris/[email protected]/context).[StatusSiteOverloaded](/github.com/kataras/iris/[email protected]/context#StatusSiteOverloaded) StatusSiteFrozen = [context](/github.com/kataras/iris/[email protected]/context).[StatusSiteFrozen](/github.com/kataras/iris/[email protected]/context#StatusSiteFrozen) StatusNetworkReadTimeout = [context](/github.com/kataras/iris/[email protected]/context).[StatusNetworkReadTimeout](/github.com/kataras/iris/[email protected]/context#StatusNetworkReadTimeout) ) ``` HTTP status codes as registered with IANA. See: <http://www.iana.org/assignments/http-status-codes/http-status-codes.xhtml>. A raw copy from the future (tip) net/http std package, in order to reduce the "net/http" import path for users. ``` const ( B = 1 << (10 * [iota](/builtin#iota)) KB MB GB TB PB EB ) ``` Byte unit helpers. ``` const NoLayout = [view](/github.com/kataras/iris/[email protected]/view).[NoLayout](/github.com/kataras/iris/[email protected]/view#NoLayout) ``` NoLayout disables the layout for a particular template file. A shortcut for the `view#NoLayout`. ``` const Version = "12.2.7" ``` Version is the current version of the Iris Web Framework. ### Variables [¶](#pkg-variables) ``` var ( // BuildRevision holds the vcs commit id information of the program's build. // To display the Iris' version please use the iris.Version constant instead.
// Available at go version 1.18+ BuildRevision = [context](/github.com/kataras/iris/[email protected]/context).[BuildRevision](/github.com/kataras/iris/[email protected]/context#BuildRevision) // BuildTime holds the vcs commit time information of the program's build. // Available at go version 1.18+ BuildTime = [context](/github.com/kataras/iris/[email protected]/context).[BuildTime](/github.com/kataras/iris/[email protected]/context#BuildTime) ) ``` ``` var ( // HTML view engine. // Shortcut of the view.HTML. HTML = [view](/github.com/kataras/iris/[email protected]/view).[HTML](/github.com/kataras/iris/[email protected]/view#HTML) // Blocks view engine. // Can be used as a faster alternative of the HTML engine. // Shortcut of the view.Blocks. Blocks = [view](/github.com/kataras/iris/[email protected]/view).[Blocks](/github.com/kataras/iris/[email protected]/view#Blocks) // Django view engine. // Shortcut of the view.Django. Django = [view](/github.com/kataras/iris/[email protected]/view).[Django](/github.com/kataras/iris/[email protected]/view#Django) // Handlebars view engine. // Shortcut of the view.Handlebars. Handlebars = [view](/github.com/kataras/iris/[email protected]/view).[Handlebars](/github.com/kataras/iris/[email protected]/view#Handlebars) // Pug view engine. // Shortcut of the view.Pug. Pug = [view](/github.com/kataras/iris/[email protected]/view).[Pug](/github.com/kataras/iris/[email protected]/view#Pug) // Jet view engine. // Shortcut of the view.Jet. Jet = [view](/github.com/kataras/iris/[email protected]/view).[Jet](/github.com/kataras/iris/[email protected]/view#Jet) // Ace view engine. // Shortcut of the view.Ace. Ace = [view](/github.com/kataras/iris/[email protected]/view).[Ace](/github.com/kataras/iris/[email protected]/view#Ace) ) ``` ``` var ( // AllowQuerySemicolons returns a middleware that serves requests by converting any // unescaped semicolons(;) in the URL query to ampersands(&). 
// // This restores the pre-Go 1.17 behavior of splitting query parameters on both // semicolons and ampersands. // (See golang.org/issue/25192 and <https://github.com/kataras/iris/issues/1875>). // Note that this behavior doesn't match that of many proxies, // and the mismatch can lead to security issues. // // AllowQuerySemicolons should be invoked before any Context read query or // form methods are called. // // To skip HTTP Server logging for this type of warning: // app.Listen/Run(..., iris.WithoutServerError(iris.ErrURLQuerySemicolon)). AllowQuerySemicolons = func(ctx [Context](#Context)) { r := ctx.Request() if s := r.URL.RawQuery; [strings](/strings).[Contains](/strings#Contains)(s, ";") { r2 := [new](/builtin#new)([http](/net/http).[Request](/net/http#Request)) *r2 = *r r2.URL = [new](/builtin#new)([url](/net/url).[URL](/net/url#URL)) *r2.URL = *r.URL r2.URL.RawQuery = [strings](/strings).[ReplaceAll](/strings#ReplaceAll)(s, ";", "&") ctx.ResetRequest(r2) } ctx.Next() } // MatchImagesAssets is a simple regex expression // that can be passed to the DirOptions.Cache.CompressIgnore field // in order to skip compression on already-compressed file types // such as images and pdf. MatchImagesAssets = [regexp](/regexp).[MustCompile](/regexp#MustCompile)("((.*).pdf|(.*).jpg|(.*).jpeg|(.*).gif|(.*).tif|(.*).tiff)$") // MatchCommonAssets is a simple regex expression which // can be used on `DirOptions.PushTargetsRegexp`. // It will match and Push // all available js, css, font and media files. // Ideal for Single Page Applications. MatchCommonAssets = [regexp](/regexp).[MustCompile](/regexp#MustCompile)("((.*).js|(.*).css|(.*).ico|(.*).png|(.*).ttf|(.*).svg|(.*).webp|(.*).gif)$") ) ``` ``` var ( // RegisterOnInterrupt registers a global function to call when CTRL+C/CMD+C pressed or a unix kill command received. // // A shortcut for the `host#RegisterOnInterrupt`. 
RegisterOnInterrupt = [host](/github.com/kataras/iris/[email protected]/core/host).[RegisterOnInterrupt](/github.com/kataras/iris/[email protected]/core/host#RegisterOnInterrupt) // LimitRequestBodySize is a middleware which sets a request body size limit // for all next handlers in the chain. // // A shortcut for the `context#LimitRequestBodySize`. LimitRequestBodySize = [context](/github.com/kataras/iris/[email protected]/context).[LimitRequestBodySize](/github.com/kataras/iris/[email protected]/context#LimitRequestBodySize) // NewConditionalHandler returns a single Handler which can be registered // as a middleware. // Filter is just a type of Handler which returns a boolean. // Handlers here should act like middleware; they should contain `ctx.Next` to proceed // to the next handler of the chain. Those "handlers" are registered to the per-request context. // // // It checks the "filter" and if passed then // it, correctly, executes the "handlers". // // If passed, this function makes sure that the Context's information // about its per-request handler chain based on the new "handlers" is always updated. // // If not passed, then simply the Next handler(if any) is executed and "handlers" are ignored. // Example can be found at: _examples/routing/conditional-chain. // // A shortcut for the `context#NewConditionalHandler`. NewConditionalHandler = [context](/github.com/kataras/iris/[email protected]/context).[NewConditionalHandler](/github.com/kataras/iris/[email protected]/context#NewConditionalHandler) // FileServer returns a Handler which serves files from a specific system (physical) directory // or an embedded one. // The first parameter is the directory, relative to the executable program. // The second optional parameter is any optional settings that the caller can use. // // See `Party#HandleDir` too. // Examples can be found at: <https://github.com/kataras/iris/tree/main/_examples/file-server> // A shortcut for the `router.FileServer`.
FileServer = [router](/github.com/kataras/iris/[email protected]/core/router).[FileServer](/github.com/kataras/iris/[email protected]/core/router#FileServer) // DirList is the default `DirOptions.DirList` field. // Read more at: `core/router.DirList`. DirList = [router](/github.com/kataras/iris/[email protected]/core/router).[DirList](/github.com/kataras/iris/[email protected]/core/router#DirList) // DirListRich can be passed to `DirOptions.DirList` field // to override the default file listing appearance. // Read more at: `core/router.DirListRich`. DirListRich = [router](/github.com/kataras/iris/[email protected]/core/router).[DirListRich](/github.com/kataras/iris/[email protected]/core/router#DirListRich) // StripPrefix returns a handler that serves HTTP requests // by removing the given prefix from the request URL's Path // and invoking the handler h. StripPrefix handles a // request for a path that doesn't begin with prefix by // replying with an HTTP 404 not found error. // // Usage: // fileserver := iris.FileServer("./static_files", DirOptions {...}) // h := iris.StripPrefix("/static", fileserver) // app.Get("/static/{file:path}", h) // app.Head("/static/{file:path}", h) StripPrefix = [router](/github.com/kataras/iris/[email protected]/core/router).[StripPrefix](/github.com/kataras/iris/[email protected]/core/router#StripPrefix) // FromStd converts native http.Handler, http.HandlerFunc & func(w, r, next) to context.Handler. // // Supported form types: // .FromStd(h http.Handler) // .FromStd(func(w http.ResponseWriter, r *http.Request)) // .FromStd(func(w http.ResponseWriter, r *http.Request, next http.HandlerFunc)) // // A shortcut for the `handlerconv#FromStd`. 
FromStd = [handlerconv](/github.com/kataras/iris/[email protected]/core/handlerconv).[FromStd](/github.com/kataras/iris/[email protected]/core/handlerconv#FromStd) // Cache is a middleware providing server-side cache functionalities // to the next handlers; it can be used as: `app.Get("/", iris.Cache, aboutHandler)`. // It should be used after Static methods. // See `iris#Cache304` for an alternative, faster way. // // Examples can be found at: <https://github.com/kataras/iris/tree/main/_examples/#caching> Cache = [cache](/github.com/kataras/iris/[email protected]/cache).[Handler](/github.com/kataras/iris/[email protected]/cache#Handler) // NoCache is a middleware which overrides the Cache-Control, Pragma and Expires headers // in order to disable the cache during the browser's back and forward feature. // // A good use of this middleware is on HTML routes; to refresh the page even on "back" and "forward" browser's arrow buttons. // // See `iris#StaticCache` for the opposite behavior. // // A shortcut of the `cache#NoCache` NoCache = [cache](/github.com/kataras/iris/[email protected]/cache).[NoCache](/github.com/kataras/iris/[email protected]/cache#NoCache) // StaticCache middleware for caching static files by sending the "Cache-Control" and "Expires" headers to the client. // It accepts a single input parameter, the "cacheDur", a time.Duration that is used to calculate the expiration. // // If "cacheDur" <=0 then it returns the `NoCache` middleware instead to disable the caching between browser's "back" and "forward" actions. // // Usage: `app.Use(iris.StaticCache(24 * time.Hour))` or `app.Use(iris.StaticCache(-1))`. // A middleware, which is a simple Handler, can be called inside another handler as well, example: // cacheMiddleware := iris.StaticCache(...) // func(ctx iris.Context){ // cacheMiddleware(ctx) // [...]
// } // // A shortcut of the `cache#StaticCache` StaticCache = [cache](/github.com/kataras/iris/[email protected]/cache).[StaticCache](/github.com/kataras/iris/[email protected]/cache#StaticCache) // Cache304 sends a `StatusNotModified` (304) whenever // the "If-Modified-Since" request header (time) is before the // time.Now() + expiresEvery (always compared to their UTC values). // Use this, which is a shortcut of `cache#Cache304`, instead of the "github.com/kataras/iris/v12/cache" or iris.Cache // for better performance. // Clients that are compatible with the HTTP RFC (all browsers are, and tools like postman) // will handle the caching. // The only disadvantage of using this instead of server-side caching // is that this method will send a 304 status code instead of 200. // So, if you use it side by side with other microservices // you have to check for that status code as well for a valid response. // // Developers are free to extend this method's behavior // by watching system directory changes manually and using the `ctx.WriteWithExpiration` // with a "modtime" based on the file modified date, // similar to the `HandleDir` (which sends status OK(200) and browser disk caching instead of 304). // // A shortcut of the `cache#Cache304`. Cache304 = [cache](/github.com/kataras/iris/[email protected]/cache).[Cache304](/github.com/kataras/iris/[email protected]/cache#Cache304) // CookieAllowReclaim accepts the Context itself. // If set it will add the cookie to (on `CookieSet`, `CookieSetKV`, `CookieUpsert`) // or remove the cookie from (on `CookieRemove`) the Request object too. // // A shortcut for the `context#CookieAllowReclaim`. CookieAllowReclaim = [context](/github.com/kataras/iris/[email protected]/context).[CookieAllowReclaim](/github.com/kataras/iris/[email protected]/context#CookieAllowReclaim) // CookieAllowSubdomains can be set to the Cookie Options // in order to allow subdomains to have access to the cookies.
// It sets the cookie's Domain field (if was empty) and // it also sets the cookie's SameSite to lax mode too. // // A shortcut for the `context#CookieAllowSubdomains`. CookieAllowSubdomains = [context](/github.com/kataras/iris/[email protected]/context).[CookieAllowSubdomains](/github.com/kataras/iris/[email protected]/context#CookieAllowSubdomains) // CookieSameSite sets a same-site rule for cookies to set. // SameSite allows a server to define a cookie attribute making it impossible for // the browser to send this cookie along with cross-site requests. The main // goal is to mitigate the risk of cross-origin information leakage, and provide // some protection against cross-site request forgery attacks. // // See <https://tools.ietf.org/html/draft-ietf-httpbis-cookie-same-site-00> for details. // // A shortcut for the `context#CookieSameSite`. CookieSameSite = [context](/github.com/kataras/iris/[email protected]/context).[CookieSameSite](/github.com/kataras/iris/[email protected]/context#CookieSameSite) // CookieSecure sets the cookie's Secure option if the current request's // connection is using TLS. See `CookieHTTPOnly` too. // // A shortcut for the `context#CookieSecure`. CookieSecure = [context](/github.com/kataras/iris/[email protected]/context).[CookieSecure](/github.com/kataras/iris/[email protected]/context#CookieSecure) // CookieHTTPOnly is a `CookieOption`. // Use it to set the cookie's HttpOnly field to false or true. // HttpOnly field defaults to true for `RemoveCookie` and `SetCookieKV`. // // A shortcut for the `context#CookieHTTPOnly`. CookieHTTPOnly = [context](/github.com/kataras/iris/[email protected]/context).[CookieHTTPOnly](/github.com/kataras/iris/[email protected]/context#CookieHTTPOnly) // CookiePath is a `CookieOption`. // Use it to change the cookie's Path field. // // A shortcut for the `context#CookiePath`. 
CookiePath = [context](/github.com/kataras/iris/[email protected]/context).[CookiePath](/github.com/kataras/iris/[email protected]/context#CookiePath) // CookieCleanPath is a `CookieOption`. // Use it to clear the cookie's Path field, exactly the same as `CookiePath("")`. // // A shortcut for the `context#CookieCleanPath`. CookieCleanPath = [context](/github.com/kataras/iris/[email protected]/context).[CookieCleanPath](/github.com/kataras/iris/[email protected]/context#CookieCleanPath) // CookieExpires is a `CookieOption`. // Use it to change the cookie's Expires and MaxAge fields by passing the lifetime of the cookie. // // A shortcut for the `context#CookieExpires`. CookieExpires = [context](/github.com/kataras/iris/[email protected]/context).[CookieExpires](/github.com/kataras/iris/[email protected]/context#CookieExpires) // CookieEncoding accepts a value which implements `Encode` and `Decode` methods. // It calls its `Encode` on `Context.SetCookie, UpsertCookie, and SetCookieKV` methods. // And on `Context.GetCookie` method it calls its `Decode`. // // A shortcut for the `context#CookieEncoding`. CookieEncoding = [context](/github.com/kataras/iris/[email protected]/context).[CookieEncoding](/github.com/kataras/iris/[email protected]/context#CookieEncoding) // IsErrEmptyJSON reports whether the given "err" is caused by a // Context.ReadJSON call when the request body // didn't start with { or it was totally empty. IsErrEmptyJSON = [context](/github.com/kataras/iris/[email protected]/context).[IsErrEmptyJSON](/github.com/kataras/iris/[email protected]/context#IsErrEmptyJSON) // IsErrPath can be used at `context#ReadForm` and `context#ReadQuery`. // It reports whether the incoming error is type of `schema.ErrPath`, // which can be ignored when server allows unknown post values to be sent by the client. // // A shortcut for the `context#IsErrPath`. 
IsErrPath = [context](/github.com/kataras/iris/[email protected]/context).[IsErrPath](/github.com/kataras/iris/[email protected]/context#IsErrPath) // IsErrCanceled reports whether the "err" is caused by a cancellation or timeout. // // A shortcut for the `context#IsErrCanceled`. IsErrCanceled = [context](/github.com/kataras/iris/[email protected]/context).[IsErrCanceled](/github.com/kataras/iris/[email protected]/context#IsErrCanceled) // ErrEmptyForm is the type error which API users can make use of // to check if a form was empty on `Context.ReadForm`. // // A shortcut for the `context#ErrEmptyForm`. ErrEmptyForm = [context](/github.com/kataras/iris/[email protected]/context).[ErrEmptyForm](/github.com/kataras/iris/[email protected]/context#ErrEmptyForm) // ErrEmptyFormField reports whether a form value is empty. // An alias of `context.ErrEmptyFormField`. ErrEmptyFormField = [context](/github.com/kataras/iris/[email protected]/context).[ErrEmptyFormField](/github.com/kataras/iris/[email protected]/context#ErrEmptyFormField) // ErrNotFound reports whether a key was not found, useful // on post data, the versioning feature and others. // An alias of `context.ErrNotFound`. ErrNotFound = [context](/github.com/kataras/iris/[email protected]/context).[ErrNotFound](/github.com/kataras/iris/[email protected]/context#ErrNotFound) // NewProblem returns a new Problem. // Head over to the `Problem` type godoc for more. // // A shortcut for the `context#NewProblem`. NewProblem = [context](/github.com/kataras/iris/[email protected]/context).[NewProblem](/github.com/kataras/iris/[email protected]/context#NewProblem) // XMLMap wraps a map[string]interface{} to a compatible xml marshaler, // in order to be able to render maps as XML on the `Context.XML` method. // // Example: `Context.XML(XMLMap("Root", map[string]interface{}{...}))`. // // A shortcut for the `context#XMLMap`.
XMLMap = [context](/github.com/kataras/iris/[email protected]/context).[XMLMap](/github.com/kataras/iris/[email protected]/context#XMLMap) // ErrStopExecution if returned from a hero middleware or a request-scope dependency // stops the handler's execution, see _examples/dependency-injection/basic/middleware. ErrStopExecution = [hero](/github.com/kataras/iris/[email protected]/hero).[ErrStopExecution](/github.com/kataras/iris/[email protected]/hero#ErrStopExecution) // ErrHijackNotSupported is returned by the Hijack method to // indicate that Hijack feature is not available. // // A shortcut for the `context#ErrHijackNotSupported`. ErrHijackNotSupported = [context](/github.com/kataras/iris/[email protected]/context).[ErrHijackNotSupported](/github.com/kataras/iris/[email protected]/context#ErrHijackNotSupported) // ErrPushNotSupported is returned by the Push method to // indicate that HTTP/2 Push support is not available. // // A shortcut for the `context#ErrPushNotSupported`. ErrPushNotSupported = [context](/github.com/kataras/iris/[email protected]/context).[ErrPushNotSupported](/github.com/kataras/iris/[email protected]/context#ErrPushNotSupported) // PrivateError accepts an error and returns a wrapped private one. // A shortcut for the `context#PrivateError` function. PrivateError = [context](/github.com/kataras/iris/[email protected]/context).[PrivateError](/github.com/kataras/iris/[email protected]/context#PrivateError) // TrimParamFilePart is a middleware which trims any last part after a dot (.) character // of the current route's dynamic path parameters. // A shortcut for the `context#TrimParamFilePart` function. TrimParamFilePart [Handler](#Handler) = [context](/github.com/kataras/iris/[email protected]/context).[TrimParamFilePart](/github.com/kataras/iris/[email protected]/context#TrimParamFilePart) ) ``` ``` var ( // StatusText returns a text for the HTTP status code. It returns the empty // string if the code is unknown. 
// // Shortcut for core/router#StatusText. StatusText = [context](/github.com/kataras/iris/[email protected]/context).[StatusText](/github.com/kataras/iris/[email protected]/context#StatusText) // RegisterMethods adds custom http methods to the "AllMethods" list. // Use it on initialization of your program. // // Shortcut for core/router#RegisterMethods. RegisterMethods = [router](/github.com/kataras/iris/[email protected]/core/router).[RegisterMethods](/github.com/kataras/iris/[email protected]/core/router#RegisterMethods) // WebDAVMethods contains a list of WebDAV HTTP Verbs. // Register them using the RegisterMethods package-level function or // through the HandleMany party-level method. WebDAVMethods = [][string](/builtin#string){ [MethodGet](#MethodGet), [MethodHead](#MethodHead), [MethodPatch](#MethodPatch), [MethodPut](#MethodPut), [MethodPost](#MethodPost), [MethodDelete](#MethodDelete), [MethodOptions](#MethodOptions), [MethodConnect](#MethodConnect), [MethodTrace](#MethodTrace), "MKCOL", "COPY", "MOVE", "LOCK", "UNLOCK", "PROPFIND", "PROPPATCH", "LINK", "UNLINK", "PURGE", "VIEW", } ) ``` ``` var ( // TLSNoRedirect is a `host.Configurator` which can be passed as the last argument // to the `TLS` runner function. It disables the automatic // registration of redirection from "http://" to "https://" requests. // Applies only to the `TLS` runner. // See `AutoTLSNoRedirect` to register a custom fallback server for the `AutoTLS` runner. TLSNoRedirect = func(su *[host](/github.com/kataras/iris/[email protected]/core/host).[Supervisor](/github.com/kataras/iris/[email protected]/core/host#Supervisor)) { su.NoRedirect() } // AutoTLSNoRedirect is a `host.Configurator`. // It registers a fallback HTTP/1.1 server for the `AutoTLS` one. // The function accepts the letsencrypt wrapper and it // should return a valid instance of http.Server whose handler should be the result // of the "acmeHandler" wrapper.
// Usage: // getServer := func(acme func(http.Handler) http.Handler) *http.Server { // srv := &http.Server{Handler: acme(yourCustomHandler), ...otherOptions} // go srv.ListenAndServe() // return srv // } // app.Run(iris.AutoTLS(":443", "example.com example2.com", "<EMAIL>", getServer)) // // Note that if Server.Handler is nil then the server is automatically run // by the framework with its handler set to automatic redirection; it's still // a valid option when the caller wants just to customize the server's fields (except Addr). // With this host configurator the caller can customize the server // that letsencrypt relies on to perform the challenge. // LetsEncrypt Certification Manager relies on <http://example.com/.well-known/acme-challenge/><TOKEN>. AutoTLSNoRedirect = func(getFallbackServer func(acmeHandler func(fallback [http](/net/http).[Handler](/net/http#Handler)) [http](/net/http).[Handler](/net/http#Handler)) *[http](/net/http).[Server](/net/http#Server)) [host](/github.com/kataras/iris/[email protected]/core/host).[Configurator](/github.com/kataras/iris/[email protected]/core/host#Configurator) { return func(su *[host](/github.com/kataras/iris/[email protected]/core/host).[Supervisor](/github.com/kataras/iris/[email protected]/core/host#Supervisor)) { su.NoRedirect() su.Fallback = getFallbackServer } } ) ``` ``` var ( // ErrServerClosed is logged by the standard net/http server when the server is terminated. // Ignore it by passing this error to the `iris.WithoutServerError` configurator // on the `Application.Run/Listen` method. // // An alias of the `http#ErrServerClosed`. ErrServerClosed = [http](/net/http).[ErrServerClosed](/net/http#ErrServerClosed) // ErrURLQuerySemicolon is logged by the standard net/http server when // the request contains a semicolon (;) which, after Go 1.17, is no longer used as a key-value separator character. // // Ignore it by passing this error to the `iris.WithoutServerError` configurator // on the `Application.Run/Listen` method.
ErrURLQuerySemicolon = [errors](/errors).[New](/errors#New)("http: URL query contains semicolon, which is no longer a supported separator; parts of the query may be stripped when parsed; see golang.org/issue/25192")
)
```
```
var DefaultTimeoutMessage = `` /* 235-byte string literal not displayed */
```

DefaultTimeoutMessage is the default timeout message which is rendered on expired handlers when a timeout handler is registered (see the Timeout configuration field).

```
var WithDynamicHandler = func(app *[Application](#Application)) {
    app.config.EnableDynamicHandler = [true](/builtin#true)
}
```

WithDynamicHandler enables dynamic routing by setting `EnableDynamicHandler` to true. See `Configuration`.

```
var WithEasyJSON = func(app *[Application](#Application)) {
    app.config.EnableEasyJSON = [true](/builtin#true)
}
```

WithEasyJSON enables the fast easyjson marshaler on the Context.JSON method. See `Configuration` for more.

```
var WithEmptyFormError = func(app *[Application](#Application)) {
    app.config.FireEmptyFormError = [true](/builtin#true)
}
```

WithEmptyFormError enables the `FireEmptyFormError` setting. See `Configuration`.

```
var WithFireMethodNotAllowed = func(app *[Application](#Application)) {
    app.config.FireMethodNotAllowed = [true](/builtin#true)
}
```

WithFireMethodNotAllowed enables the FireMethodNotAllowed setting. See `Configuration`.

```
var WithGlobalConfiguration = func(app *[Application](#Application)) {
    app.Configure([WithConfiguration](#WithConfiguration)([YAML](#YAML)(globalConfigurationKeyword)))
}
```

WithGlobalConfiguration will load the global yaml configuration file from the home directory and it will set/override the whole app's configuration to that file's contents. The global configuration file can be modified by the user and be used by multiple iris instances. This is useful when we run multiple iris servers that share the same configuration, even with custom values at its "Other" field.
Usage: `app.Configure(iris.WithGlobalConfiguration)` or `app.Run([iris.Runner](#Runner), iris.WithGlobalConfiguration)`.

```
var WithLowercaseRouting = func(app *[Application](#Application)) {
    app.config.ForceLowercaseRouting = [true](/builtin#true)
}
```

WithLowercaseRouting enables lowercase routing by setting `ForceLowercaseRouting` to true. See `Configuration`.

```
var WithOptimizations = func(app *[Application](#Application)) {
    app.config.EnableOptimizations = [true](/builtin#true)
}
```

WithOptimizations can force the application to optimize for the best performance where possible. See `Configuration`.

```
var WithPathEscape = func(app *[Application](#Application)) {
    app.config.EnablePathEscape = [true](/builtin#true)
}
```

WithPathEscape sets the EnablePathEscape setting to true. See `Configuration`.

```
var WithPathIntelligence = func(app *[Application](#Application)) {
    app.config.EnablePathIntelligence = [true](/builtin#true)
}
```

WithPathIntelligence enables the EnablePathIntelligence setting. See `Configuration`.

```
var WithProtoJSON = func(app *[Application](#Application)) {
    app.config.EnableProtoJSON = [true](/builtin#true)
}
```

WithProtoJSON enables the proto marshaler on the Context.JSON method. See `Configuration` for more.

```
var WithResetOnFireErrorCode = func(app *[Application](#Application)) {
    app.config.ResetOnFireErrorCode = [true](/builtin#true)
}
```

WithResetOnFireErrorCode sets the ResetOnFireErrorCode setting to true. See `Configuration`.

```
var WithTunneling = func(app *[Application](#Application)) {
    conf := [TunnelingConfiguration](#TunnelingConfiguration){
        Tunnels: [][Tunnel](#Tunnel){{}},
    }

    app.config.Tunneling = conf
}
```

WithTunneling is the `iris.Configurator` for the `iris.Configuration.Tunneling` field. It's used to enable http tunneling for an Iris Application, per registered host. Alternatively, use the `iris.WithConfiguration(iris.Configuration{Tunneling: iris.TunnelingConfiguration{...}})` configurator.
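All of the `With*`/`Without*` variables above follow the same functional-options pattern: each is (or returns) a function that mutates the application's configuration. The sketch below reproduces that shape with stdlib Go only; the `App`, `Config` and `Configurator` names here are illustrative stand-ins, not iris's actual types.

```go
package main

import "fmt"

// Config holds a couple of illustrative settings.
type Config struct {
	LogLevel            string
	EnableOptimizations bool
}

// App is a stand-in for the framework application.
type App struct{ Config Config }

// Configurator mirrors the iris.Configurator shape: a function that mutates the app.
type Configurator func(*App)

// WithLogLevel is a parameterized option, like iris.WithLogLevel.
func WithLogLevel(level string) Configurator {
	return func(a *App) { a.Config.LogLevel = level }
}

// WithOptimizations is a plain option value, like iris.WithOptimizations.
var WithOptimizations Configurator = func(a *App) { a.Config.EnableOptimizations = true }

// Configure applies each option in order and returns the app for chaining.
func (a *App) Configure(cs ...Configurator) *App {
	for _, c := range cs {
		c(a)
	}
	return a
}

func main() {
	a := (&App{Config: Config{LogLevel: "info"}}).
		Configure(WithLogLevel("debug"), WithOptimizations)
	fmt.Println(a.Config.LogLevel, a.Config.EnableOptimizations) // debug true
}
```

Because options are plain values, they compose freely: callers can pass any mix of them to `Configure` (or, in iris, as trailing arguments to `Run`/`Listen`) and they apply in order.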
```
var WithURLParamSeparator = func(sep [string](/builtin#string)) [Configurator](#Configurator) {
    return func(app *[Application](#Application)) {
        app.config.URLParamSeparator = &sep
    }
}
```

WithURLParamSeparator sets the URLParamSeparator setting to "sep". See `Configuration`.

```
var WithoutAutoFireStatusCode = func(app *[Application](#Application)) {
    app.config.DisableAutoFireStatusCode = [true](/builtin#true)
}
```

WithoutAutoFireStatusCode sets the DisableAutoFireStatusCode setting to true. See `Configuration`.

```
var WithoutBanner = [WithoutStartupLog](#WithoutStartupLog)
```

WithoutBanner is an alias of the `WithoutStartupLog` option. Turns off the information sent, once, to the terminal when the main server is open.

```
var WithoutBodyConsumptionOnUnmarshal = func(app *[Application](#Application)) {
    app.config.DisableBodyConsumptionOnUnmarshal = [true](/builtin#true)
}
```

WithoutBodyConsumptionOnUnmarshal sets the DisableBodyConsumptionOnUnmarshal setting to true. See `Configuration`.

```
var WithoutInterruptHandler = func(app *[Application](#Application)) {
    app.config.DisableInterruptHandler = [true](/builtin#true)
}
```

WithoutInterruptHandler disables the automatic graceful server shutdown when control/cmd+C is pressed.

```
var WithoutPathCorrection = func(app *[Application](#Application)) {
    app.config.DisablePathCorrection = [true](/builtin#true)
}
```

WithoutPathCorrection disables the PathCorrection setting. See `Configuration`.

```
var WithoutPathCorrectionRedirection = func(app *[Application](#Application)) {
    app.config.DisablePathCorrection = [false](/builtin#false)
    app.config.DisablePathCorrectionRedirection = [true](/builtin#true)
}
```

WithoutPathCorrectionRedirection disables the PathCorrectionRedirection setting. See `Configuration`.
```
var WithoutStartupLog = func(app *[Application](#Application)) {
    app.config.DisableStartupLog = [true](/builtin#true)
}
```

WithoutStartupLog turns off the information sent, once, to the terminal when the main server is open.

### Functions [¶](#pkg-functions)

#### func [Compression](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L344) [¶](#Compression) added in v12.2.0

```
func Compression(ctx [Context](#Context))
```

Compression is a middleware which enables writing and reading using the best offered compression.

Usage:

app.Use (for matched routes)
app.UseRouter (for both matched and 404s or other HTTP errors)

#### func [ConfigureMiddleware](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L335) [¶](#ConfigureMiddleware) added in v12.2.0

```
func ConfigureMiddleware(handlers ...[Handler](#Handler)) [router](/github.com/kataras/iris/[email protected]/core/router).[PartyConfigurator](/github.com/kataras/iris/[email protected]/core/router#PartyConfigurator)
```

ConfigureMiddleware is a PartyConfigurator which can be used as a shortcut to add middlewares on Party.PartyConfigure("/path", WithMiddleware(handler), new(example.API)).

#### func [Minify](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L381) [¶](#Minify) added in v12.2.0

```
func Minify(ctx [Context](#Context))
```

Minify is a middleware which minifies the responses based on the response content type. Note that minification might be slower; caching is advised. Customize the minifier through `Application.Minifier()`.

Usage: app.Use(iris.Minify)

#### func [PrefixDir](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L306) [¶](#PrefixDir) added in v12.2.0

```
func PrefixDir(prefix [string](/builtin#string), fs [http](/net/http).[FileSystem](/net/http#FileSystem)) [http](/net/http).[FileSystem](/net/http#FileSystem)
```

PrefixDir returns a new FileSystem that opens files by adding the given "prefix" to the directory tree of "fs".
Useful when templates and static files are embedded via the same bindata AssetFile method; this way you can choose which files are served as static files and which are used as templates. All view engines have a `RootDir` method for that reason too, but alternatively you can wrap the given file system with this `PrefixDir`.

Example: <https://github.com/kataras/iris/blob/main/_examples/file-server/single-page-application/embedded-single-page-application/main.go>

#### func [PrefixFS](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L311) [¶](#PrefixFS) added in v12.2.0

```
func PrefixFS(fileSystem [fs](/io/fs).[FS](/io/fs#FS), dir [string](/builtin#string)) ([fs](/io/fs).[FS](/io/fs#FS), [error](/builtin#error))
```

PrefixFS is the same as "PrefixDir" but for the `fs.FS` type.

#### func [WithSocketSharding](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L196) [¶](#WithSocketSharding) added in v12.2.0

```
func WithSocketSharding(app *[Application](#Application))
```

WithSocketSharding sets the `Configuration.SocketSharding` field to true.

### Types [¶](#pkg-types)

#### type [APIContainer](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L152) [¶](#APIContainer) added in v12.2.0

```
type APIContainer = [router](/github.com/kataras/iris/[email protected]/core/router).[APIContainer](/github.com/kataras/iris/[email protected]/core/router#APIContainer)
```

APIContainer is a wrapper of a common `Party` featured by Dependency Injection. See `Party.ConfigureContainer` for more.

A shortcut for the `core/router#APIContainer`.

#### type [Application](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L56) [¶](#Application)

```
type Application struct {

// routing embedded | exposing APIBuilder's and Router's public API.
*[router](/github.com/kataras/iris/[email protected]/core/router).[APIBuilder](/github.com/kataras/iris/[email protected]/core/router#APIBuilder)
*[router](/github.com/kataras/iris/[email protected]/core/router).[Router](/github.com/kataras/iris/[email protected]/core/router#Router)
[router](/github.com/kataras/iris/[email protected]/core/router).[HTTPErrorHandler](/github.com/kataras/iris/[email protected]/core/router#HTTPErrorHandler)

// if Router is Downgraded this is nil.
ContextPool *[context](/github.com/kataras/iris/[email protected]/context).[Pool](/github.com/kataras/iris/[email protected]/context#Pool)

// I18n contains localization and internationalization support.
// Use the `Load` or `LoadAssets` to load language files.
//
// See the `Context#Tr` method for request-based translations.
I18n *[i18n](/github.com/kataras/iris/[email protected]/i18n).[I18n](/github.com/kataras/iris/[email protected]/i18n#I18n)

// Validator is the request body validator, defaults to nil.
Validator [context](/github.com/kataras/iris/[email protected]/context).[Validator](/github.com/kataras/iris/[email protected]/context#Validator)

// OnBuild is a single function which
// is fired on the first `Build` method call.
// If it reports an error then the execution
// is stopped and the error is logged.
// It's nil by default except when `Switch` instead of `New` or `Default`
// is used to initialize the Application.
// Users can wrap it to accept more events.
OnBuild func() [error](/builtin#error)

// Hosts contains a list of all servers (Host Supervisors) that this app is running on.
//
// Hosts may be empty only if the application ran (`app.Run`) with the `iris.Raw` option runner,
// otherwise it contains a single host (`app.Hosts[0]`).
//
// Additional Host Supervisors can be added to that list by calling `app.NewHost` manually.
//
// Hosts field is available after `Run` or `NewHost`.
Hosts []*[host](/github.com/kataras/iris/[email protected]/core/host).[Supervisor](/github.com/kataras/iris/[email protected]/core/host#Supervisor)
// contains filtered or unexported fields
}
```

Application is responsible for managing the state of the application. It contains and handles all the necessary parts to create a fast web server.

#### func [Default](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L142) [¶](#Default)

```
func Default() *[Application](#Application)
```

Default returns a new Application with the "debug" Logger Level, localization enabled on the "./locales" directory, HTML templates on the "./views" or "./templates" directory, and the CORS (allow all), Recovery and Request ID middlewares already registered.

#### func [New](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L115) [¶](#New)

```
func New() *[Application](#Application)
```

New creates and returns a fresh empty iris *Application instance.

#### func (*Application) [Build](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L663) [¶](#Application.Build)

```
func (app *[Application](#Application)) Build() [error](/builtin#error)
```

Build sets up, once, the framework. It builds the default router with its default macros and the template functions that are very close to iris.
If an error occurred while building the Application, the return type of error will be an *errgroup.Group which lets the callers inspect the errors and their cause. Usage:

import "github.com/kataras/iris/v12/core/errgroup"

```
errgroup.Walk(app.Build(), func(typ interface{}, err error) {
	app.Logger().Errorf("%s: %s", typ, err)
})
```

#### func (*Application) [ConfigurationReadOnly](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L274) [¶](#Application.ConfigurationReadOnly)

```
func (app *[Application](#Application)) ConfigurationReadOnly() [context](/github.com/kataras/iris/[email protected]/context).[ConfigurationReadOnly](/github.com/kataras/iris/[email protected]/context#ConfigurationReadOnly)
```

ConfigurationReadOnly returns an object which doesn't allow field writing.

#### func (*Application) [Configure](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L263) [¶](#Application.Configure)

```
func (app *[Application](#Application)) Configure(configurators ...[Configurator](#Configurator)) *[Application](#Application)
```

Configure can be called when modifications to the framework instance are needed. It accepts the framework instance and returns an error which, if not nil, is printed to the logger. See configuration.go for more.

Returns itself in order to be used like `app := New().Configure(...)`

#### func (*Application) [ConfigureHost](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L476) [¶](#Application.ConfigureHost)

```
func (app *[Application](#Application)) ConfigureHost(configurators ...[host](/github.com/kataras/iris/[email protected]/core/host).[Configurator](/github.com/kataras/iris/[email protected]/core/host#Configurator)) *[Application](#Application)
```

ConfigureHost accepts one or more `host#Configurator`; these configurator functions can access the host created by `app.Run` or `app.Listen`, and they're executed when the application is ready to be served to the public.
It's an alternative way to interact with a host that is automatically created by `app.Run`.

These "configurators" can work side-by-side with the `iris#Addr, iris#Server, iris#TLS, iris#AutoTLS, iris#Listener` final arguments ("hostConfigs") too.

Note that these application host "configurators" will be shared with the rest of the hosts that this app may create (using `app.NewHost`), meaning that `app.NewHost` will execute these "configurators" every time it is called as well.

These "configurators" should be registered before the `app.Run` or `host.Serve/Listen` functions.

#### func (*Application) [GetContextErrorHandler](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L457) [¶](#Application.GetContextErrorHandler) added in v12.2.0

```
func (app *[Application](#Application)) GetContextErrorHandler() [context](/github.com/kataras/iris/[email protected]/context).[ErrorHandler](/github.com/kataras/iris/[email protected]/context#ErrorHandler)
```

GetContextErrorHandler returns the handler which handles errors on JSON write failures.

#### func (*Application) [GetContextPool](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L435) [¶](#Application.GetContextPool) added in v12.2.0

```
func (app *[Application](#Application)) GetContextPool() *[context](/github.com/kataras/iris/[email protected]/context).[Pool](/github.com/kataras/iris/[email protected]/context#Pool)
```

GetContextPool returns the Iris sync.Pool which holds the context values. Iris automatically releases the request context, so you don't have to use it. It's only useful to manually release the context in cases where the connection is hijacked by a third-party middleware and the http handler returns too fast.
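The context pool is a standard `sync.Pool` under the hood: contexts are reset and recycled between requests instead of being reallocated. A stdlib sketch of that acquire/release cycle (the `ctx` type here is illustrative, not iris's Context):

```go
package main

import (
	"fmt"
	"sync"
)

// ctx is an illustrative request context holding per-request values.
type ctx struct{ values map[string]interface{} }

var pool = sync.Pool{
	New: func() interface{} { return &ctx{values: make(map[string]interface{})} },
}

// Acquire takes a context from the pool, allocating one if the pool is empty.
func Acquire() *ctx { return pool.Get().(*ctx) }

// Release resets the context and puts it back for reuse. Iris does this
// automatically; calling it manually is only needed when a third-party
// middleware hijacks the connection and the handler returns early.
func Release(c *ctx) {
	for k := range c.values { // clear per-request state before recycling
		delete(c.values, k)
	}
	pool.Put(c)
}

func main() {
	c := Acquire()
	c.values["user"] = "alice"
	Release(c)

	c2 := Acquire() // may or may not be the same object, but always starts clean
	fmt.Println(len(c2.values)) // 0
}
```

Note that `sync.Pool` gives no guarantee an object is reused, which is why the reset happens on release rather than relying on the pool.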
#### func (*Application) [I18nReadOnly](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L333) [¶](#Application.I18nReadOnly) added in v12.1.0

```
func (app *[Application](#Application)) I18nReadOnly() [context](/github.com/kataras/iris/[email protected]/context).[I18nReadOnly](/github.com/kataras/iris/[email protected]/context#I18nReadOnly)
```

I18nReadOnly returns the i18n's read-only features. See the `I18n` method for more.

#### func (*Application) [IsDebug](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L327) [¶](#Application.IsDebug) added in v12.2.0

```
func (app *[Application](#Application)) IsDebug() [bool](/builtin#bool)
```

IsDebug reports whether the application is running under debug/development mode. It's just a shortcut of Logger().Level >= golog.DebugLevel. The same method exists as Context.IsDebug() too.

#### func (*Application) [Listen](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L1015) [¶](#Application.Listen) added in v12.1.7

```
func (app *[Application](#Application)) Listen(hostPort [string](/builtin#string), withOrWithout ...[Configurator](#Configurator)) [error](/builtin#error)
```

Listen builds the application and starts the server on the TCP network address "host:port" which handles requests on incoming connections.

Listen always returns a non-nil error. Ignore specific errors by using an `iris.WithoutServerError(iris.ErrServerClosed)` as a second input argument.

Listen is a shortcut of `app.Run(iris.Addr(hostPort, withOrWithout...))`. See `Run` for details.

#### func (*Application) [Logger](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L319) [¶](#Application.Logger)

```
func (app *[Application](#Application)) Logger() *[golog](/github.com/kataras/golog).[Logger](/github.com/kataras/golog#Logger)
```

Logger returns the golog logger instance (pointer) that is being used inside the "app".
Available levels:

- "disable"
- "fatal"
- "error"
- "warn"
- "info"
- "debug"

Usage: app.Logger().SetLevel("error")

Or set the level through the Configuration's LogLevel field or the WithLogLevel functional option. Defaults to the "info" level.

Callers can use the application's logger, which is the same `golog.Default.LastChild()` logger, to print custom logs too. Usage:

app.Logger().Error/Errorf("...")
app.Logger().Warn/Warnf("...")
app.Logger().Info/Infof("...")
app.Logger().Debug/Debugf("...")

Setting one or more outputs: app.Logger().SetOutput(io.Writer...)

Adding one or more outputs: app.Logger().AddOutput(io.Writer...)

Adding custom levels requires import of the `github.com/kataras/golog` package:

```
// First we create our level as a golog.Level
// in order to be used in the Log functions.
var SuccessLevel golog.Level = 6

// Register our level, just three fields.
golog.Levels[SuccessLevel] = &golog.LevelMetadata{
    Name:    "success",
    RawText: "[SUCC]",
    // ColorfulText (Green Color[SUCC])
    ColorfulText: "\x1b[32m[SUCC]\x1b[0m",
}
```

Usage:

app.Logger().SetLevel("success")
app.Logger().Logf(SuccessLevel, "a custom leveled log message")

#### func (*Application) [Minifier](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L401) [¶](#Application.Minifier) added in v12.2.0

```
func (app *[Application](#Application)) Minifier() *minify.M
```

Minifier returns the minifier instance.

By default it can minify:

- text/html
- text/css
- image/svg+xml
- application/text (javascript, ecmascript, json, xml)

Use that instance to add custom minifiers before the server is started.
#### func (*Application) [NewHost](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L520) [¶](#Application.NewHost)

```
func (app *[Application](#Application)) NewHost(srv *[http](/net/http).[Server](/net/http#Server)) *[host](/github.com/kataras/iris/[email protected]/core/host).[Supervisor](/github.com/kataras/iris/[email protected]/core/host#Supervisor)
```

NewHost accepts a standard *http.Server object, completes the necessary missing parts of that "srv" and returns a new, ready-to-use, host (supervisor).

#### func (*Application) [RegisterView](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L408) [¶](#Application.RegisterView)

```
func (app *[Application](#Application)) RegisterView(viewEngine [view](/github.com/kataras/iris/[email protected]/view).[Engine](/github.com/kataras/iris/[email protected]/view#Engine))
```

RegisterView registers a view engine for the application. Children can register their own too. If no Party view Engine is registered then this one will be used to render the templates instead.

#### func (*Application) [Run](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L1032) [¶](#Application.Run)

```
func (app *[Application](#Application)) Run(serve [Runner](#Runner), withOrWithout ...[Configurator](#Configurator)) [error](/builtin#error)
```

Run builds the framework and starts the desired `Runner` with or without configuration edits.

Run should be called only once per Application instance; it blocks like http.Server.

If more than one server is needed to run on the same iris instance then create a new host and run it manually by `go NewHost(*http.Server).Serve/ListenAndServe` etc... or use an already created host:

h := NewHost(*http.Server)
Run(Raw(h.ListenAndServe), WithCharset("utf-8"), WithRemoteAddrHeader("CF-Connecting-IP"))

The Application can go online with any type of server or iris's host with the help of the following runners: `Listener`, `Server`, `Addr`, `TLS`, `AutoTLS` and `Raw`.
#### func (*Application) [SetContextErrorHandler](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L450) [¶](#Application.SetContextErrorHandler) added in v12.2.0

```
func (app *[Application](#Application)) SetContextErrorHandler(errHandler [context](/github.com/kataras/iris/[email protected]/context).[ErrorHandler](/github.com/kataras/iris/[email protected]/context#ErrorHandler)) *[Application](#Application)
```

SetContextErrorHandler can optionally register a handler to handle and fire a customized error body to the client on JSON write failures.

Example code:

```
type contextErrorHandler struct{}

func (e *contextErrorHandler) HandleContextError(ctx iris.Context, err error) {
	errors.InvalidArgument.Err(ctx, err)
}

...

app.SetContextErrorHandler(new(contextErrorHandler))
```

#### func (*Application) [SetName](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L202) [¶](#Application.SetName) added in v12.2.0

```
func (app *[Application](#Application)) SetName(appName [string](/builtin#string)) *[Application](#Application)
```

SetName sets a unique name to this Iris Application. It sets a child prefix for the current Application's Logger. See the `String` method too.

It returns this Application.

#### func (*Application) [Shutdown](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L626) [¶](#Application.Shutdown)

```
func (app *[Application](#Application)) Shutdown(ctx [stdContext](/context).[Context](/context#Context)) [error](/builtin#error)
```

Shutdown gracefully terminates all the application's server hosts and any tunnels. Returns an error on the first failure, otherwise nil.

#### func (*Application) [String](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L218) [¶](#Application.String) added in v12.2.0

```
func (app *[Application](#Application)) String() [string](/builtin#string)
```

String completes the fmt.Stringer interface and it returns the application's name.
If the name was not set by `SetName` or the `IRIS_APP_NAME` environment variable then this returns an empty string.

#### func (*Application) [SubdomainRedirect](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L251) [¶](#Application.SubdomainRedirect)

```
func (app *[Application](#Application)) SubdomainRedirect(from, to [router](/github.com/kataras/iris/[email protected]/core/router).[Party](/github.com/kataras/iris/[email protected]/core/router#Party)) [router](/github.com/kataras/iris/[email protected]/core/router).[Party](/github.com/kataras/iris/[email protected]/core/router#Party)
```

SubdomainRedirect registers a router wrapper which redirects (StatusMovedPermanently) a (sub)domain to another subdomain or to the root domain as fast as possible, before the router tries to execute the route's handler(s).

It receives two arguments, the from and to/target locations: 'from' can be a wildcard subdomain as well (app.WildcardSubdomain()); 'to' is not allowed to be a wildcard for obvious reasons; 'from' can be the root domain (app) when 'to' is not the root domain, and vice versa.

Usage:

www := app.Subdomain("www") <- same as app.Party("www.")
app.SubdomainRedirect(app, www)

This will redirect all http(s)://mydomain.com/%anypath% to http(s)://www.mydomain.com/%anypath%.

One or more subdomain redirects can be registered on the same app instance.

For more information about this implementation, navigate through the `core/router#NewSubdomainRedirectWrapper` function.

Example: <https://github.com/kataras/iris/tree/main/_examples/routing/subdomains/redirect>

#### func (*Application) [Validate](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L339) [¶](#Application.Validate) added in v12.2.0

```
func (app *[Application](#Application)) Validate(v interface{}) [error](/builtin#error)
```

Validate validates a value and returns nil if it passed or the failure reason if it did not.
#### func (*Application) [View](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L421) [¶](#Application.View)

```
func (app *[Application](#Application)) View(writer [io](/io).[Writer](/io#Writer), filename [string](/builtin#string), layout [string](/builtin#string), bindingData interface{}) [error](/builtin#error)
```

View executes and writes the result of a template file to the writer.

The first parameter is the writer to write the parsed template. The second parameter is the template filename, relative to the templates directory, including the extension. The third parameter is the layout; it can be an empty string. The fourth parameter is the bindable data for the template; it can be nil.

Use context.View to render templates to the client instead. Returns an error on failure, otherwise nil.

#### func (*Application) [WWW](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L227) [¶](#Application.WWW)

```
func (app *[Application](#Application)) WWW() [router](/github.com/kataras/iris/[email protected]/core/router).[Party](/github.com/kataras/iris/[email protected]/core/router#Party)
```

WWW creates and returns a "www." subdomain. The difference from `app.Subdomain("www")` or `app.Party("www.")` is that the `app.WWW()` method wraps the router so all http(s)://mydomain.com will be redirected to http(s)://www.mydomain.com. Other subdomains can be registered using the app: `sub := app.Subdomain("mysubdomain")`; child subdomains can be registered using `www := app.WWW(); www.Subdomain("wwwchildSubdomain")`.

#### type [ApplicationBuilder](https://github.com/kataras/iris/blob/v12.2.7/iris_guide.go#L281) [¶](#ApplicationBuilder) added in v12.2.5

```
type ApplicationBuilder interface {
// Handle registers a simple route on a specific method and (dynamic) path.
// It simply calls the Iris Application's Handle method.
// Use the "API" method instead to keep the app organized.
Handle(method, path [string](/builtin#string), handlers ...[Handler](#Handler)) [ApplicationBuilder](#ApplicationBuilder)

// API registers a router which is responsible for serving the /api group.
API(pathPrefix [string](/builtin#string), c ...[router](/github.com/kataras/iris/[email protected]/core/router).[PartyConfigurator](/github.com/kataras/iris/[email protected]/core/router#PartyConfigurator)) [ApplicationBuilder](#ApplicationBuilder)

// Build builds the application with the prior configuration and returns the
// Iris Application instance for further customizations.
//
// Use "Build" before "Listen" or "Run" to apply further modifications
// to the framework before starting the server. Calling "Build" is optional.
Build() *[Application](#Application) // optional call.

// Listen calls the Application's Listen method which is a shortcut of Run(iris.Addr("hostPort")).
// Use "Run" instead if you need to customize the HTTP/2 server itself.
Listen(hostPort [string](/builtin#string), configurators ...[Configurator](#Configurator)) [error](/builtin#error) // Listen OR Run.

// Run calls the Application's Run method.
// The 1st argument is a Runner (iris.Listener, iris.Server, iris.Addr, iris.TLS, iris.AutoTLS and iris.Raw).
// The 2nd argument can be used to add custom configuration right before the server is up and running.
Run(runner [Runner](#Runner), configurators ...[Configurator](#Configurator)) [error](/builtin#error)
}
```

ApplicationBuilder is the final step of the Guide. It is used to register API controllers (PartyConfigurators) and its Build, Listen and Run methods configure and build the actual Iris application based on the previous steps.
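Note the shape of this interface: registration methods return the builder so calls chain, and Build finalizes the result. A stdlib sketch of that chained-builder pattern (the `App` and `Builder` types here are illustrative, not iris's):

```go
package main

import "fmt"

// App is the finished product the builder produces.
type App struct{ Routes []string }

// Builder accumulates configuration until Build is called.
type Builder struct{ app App }

// Handle records a route and returns the builder for chaining,
// mirroring ApplicationBuilder.Handle.
func (b *Builder) Handle(method, path string) *Builder {
	b.app.Routes = append(b.app.Routes, method+" "+path)
	return b
}

// Build finalizes and returns the application, mirroring
// ApplicationBuilder.Build (calling it before Listen/Run is optional).
func (b *Builder) Build() *App { return &b.app }

func main() {
	app := new(Builder).
		Handle("GET", "/").
		Handle("POST", "/api/users").
		Build()
	fmt.Println(len(app.Routes), app.Routes[0]) // 2 GET /
}
```

Returning the receiver from each step is what makes the `Handle(...).API(...).Listen(...)` style possible without intermediate variables.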
#### type [Attachments](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L170) [¶](#Attachments) added in v12.2.0 ``` type Attachments = [router](/github.com/kataras/iris/[email protected]/core/router).[Attachments](/github.com/kataras/iris/[email protected]/core/router#Attachments) ``` Attachments options for files to be downloaded and saved locally by the client. See `DirOptions`. #### type [CompressionGuide](https://github.com/kataras/iris/blob/v12.2.7/iris_guide.go#L236) [¶](#CompressionGuide) added in v12.2.5 ``` type CompressionGuide interface { // Compression enables or disables the gzip (or any other client-preferred) compression algorithm // for response writes. Compression(b [bool](/builtin#bool)) [HealthGuide](#HealthGuide) } ``` CompressionGuide is the 2nd step of the Guide. Compression (gzip or any other client requested) can be enabled or disabled. #### type [Configuration](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L630) [¶](#Configuration) ``` type Configuration struct { // VHost lets you customize the trusted domain this server should run on. // Its value will be used as the return value of Context.Domain() too. // It can be retrieved by the context if needed (i.e router for subdomains) VHost [string](/builtin#string) `ini:"v_host" json:"vHost" yaml:"VHost" toml:"VHost" env:"V_HOST"` // LogLevel is the log level the application should use to output messages. // Logger, by default, is mostly used on Build state but it is also possible // that debug error messages could be thrown when the app is running, e.g. // when malformed data structures try to be sent on Client (i.e Context.JSON/JSONP/XML...). // // Defaults to "info". 
// Possible values are:
// * "disable"
// * "fatal"
// * "error"
// * "warn"
// * "info"
// * "debug"
LogLevel [string](/builtin#string) `ini:"log_level" json:"logLevel" yaml:"LogLevel" toml:"LogLevel" env:"LOG_LEVEL"`

// SocketSharding enables SO_REUSEPORT (or SO_REUSEADDR for windows)
// on all registered Hosts.
// This option allows linear scaling server performance on multi-CPU servers.
//
// Please read the following:
// 1. <https://stackoverflow.com/a/14388707>
// 2. <https://stackoverflow.com/a/59692868>
// 3. <https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/>
// 4. (BOOK) Learning HTTP/2: A Practical Guide for Beginners:
//    Page 37, To Shard or Not to Shard?
//
// Defaults to false.
SocketSharding [bool](/builtin#bool) `ini:"socket_sharding" json:"socketSharding" yaml:"SocketSharding" toml:"SocketSharding" env:"SOCKET_SHARDING"`

// KeepAlive sets the TCP connection's keep-alive duration.
// If set to a value greater than zero then a keep-alive-featured tcp listener
// will be used instead of the simple tcp one.
//
// Defaults to 0.
KeepAlive [time](/time).[Duration](/time#Duration) `ini:"keepalive" json:"keepAlive" yaml:"KeepAlive" toml:"KeepAlive" env:"KEEP_ALIVE"`

// Timeout wraps the application's router with an http timeout handler
// if the value is greater than zero.
//
// The underlying response writer supports the Pusher interface but does not support
// the Hijacker or Flusher interfaces when the Timeout handler is registered.
//
// Read more at: <https://pkg.go.dev/net/http#TimeoutHandler>.
Timeout [time](/time).[Duration](/time#Duration) `ini:"timeout" json:"timeout" yaml:"Timeout" toml:"Timeout"`

// TimeoutMessage specifies the HTML body when a handler hits its lifetime based
// on the Timeout configuration field.
TimeoutMessage [string](/builtin#string) `ini:"timeout_message" json:"timeoutMessage" yaml:"TimeoutMessage" toml:"TimeoutMessage"`

// Tunneling can be optionally set to enable ngrok http(s) tunneling for this Iris app instance.
// See the `WithTunneling` Configurator too. Tunneling [TunnelingConfiguration](#TunnelingConfiguration) `ini:"tunneling" json:"tunneling,omitempty" yaml:"Tunneling" toml:"Tunneling"` // IgnoreServerErrors will cause to ignore the matched "errors" // from the main application's `Run` function. // This is a slice of string, not a slice of error // users can register these errors using yaml or toml configuration file // like the rest of the configuration fields. // // See `WithoutServerError(...)` function too. // // Example: <https://github.com/kataras/iris/tree/main/_examples/http-server/listen-addr/omit-server-errors> // // Defaults to an empty slice. IgnoreServerErrors [][string](/builtin#string) `ini:"ignore_server_errors" json:"ignoreServerErrors,omitempty" yaml:"IgnoreServerErrors" toml:"IgnoreServerErrors"` // DisableStartupLog if set to true then it turns off the write banner on server startup. // // Defaults to false. DisableStartupLog [bool](/builtin#bool) `ini:"disable_startup_log" json:"disableStartupLog,omitempty" yaml:"DisableStartupLog" toml:"DisableStartupLog"` // DisableInterruptHandler if set to true then it disables the automatic graceful server shutdown // when control/cmd+C pressed. // Turn this to true if you're planning to handle this by your own via a custom host.Task. // // Defaults to false. DisableInterruptHandler [bool](/builtin#bool) `` /* 134-byte string literal not displayed */ // DisablePathCorrection disables the correcting // and redirecting or executing directly the handler of // the requested path to the registered path // for example, if /home/ path is requested but no handler for this Route found, // then the Router checks if /home handler exists, if yes, // (permanent)redirects the client to the correct path /home. // // See `DisablePathCorrectionRedirection` to enable direct handler execution instead of redirection. // // Defaults to false. 
DisablePathCorrection [bool](/builtin#bool) `` /* 126-byte string literal not displayed */ // DisablePathCorrectionRedirection works whenever configuration.DisablePathCorrection is set to false // and, if DisablePathCorrectionRedirection is set to true, it will fire the handler of the matching route without // the trailing slash ("/") instead of sending a redirection status. // // Defaults to false. DisablePathCorrectionRedirection [bool](/builtin#bool) `` /* 171-byte string literal not displayed */ // EnablePathIntelligence if set to true, // the router will redirect HTTP "GET" not-found pages to the closest matching path (if any). For example, // if you register a route at the "/contact" path and // a client tries to reach it via "/cont", the path will be automatically fixed // and the client will be redirected to the "/contact" path // instead of getting a 404 not found response back. // // Defaults to false. EnablePathIntelligence [bool](/builtin#bool) `` /* 130-byte string literal not displayed */ // EnablePathEscape when it is true escapes the path and the named parameters (if any). // When do you need to disable (false) it: // to accept parameters with slash '/' // Request: <http://localhost:8080/details/Project%2FDelta> // ctx.Param("project") returns the raw named parameter: Project%2FDelta // which you can unescape manually with net/url: // projectName, _ := url.QueryUnescape(ctx.Param("project")). // // Defaults to false. EnablePathEscape [bool](/builtin#bool) `ini:"enable_path_escape" json:"enablePathEscape,omitempty" yaml:"EnablePathEscape" toml:"EnablePathEscape"` // ForceLowercaseRouting if enabled, converts all registered route paths to lowercase // and also lowercases the request path for matching. // // Defaults to false. ForceLowercaseRouting [bool](/builtin#bool) `` /* 126-byte string literal not displayed */ // EnableDynamicHandler enables the dynamic request handler.
// It gives the router the feature to add routes while in serve-time, // when `RefreshRouter` is called. // If this setting is set to true, the request handler will use a mutex for data(trie routing) protection, // hence the performance cost. // // Defaults to false. EnableDynamicHandler [bool](/builtin#bool) `ini:"enable_dynamic_handler" json:"enableDynamicHandler,omitempty" yaml:"EnableDynamicHandler" toml:"EnableDynamicHandler"` // FireMethodNotAllowed if it's true router checks for StatusMethodNotAllowed(405) and // fires the 405 error instead of 404 // Defaults to false. FireMethodNotAllowed [bool](/builtin#bool) `ini:"fire_method_not_allowed" json:"fireMethodNotAllowed,omitempty" yaml:"FireMethodNotAllowed" toml:"FireMethodNotAllowed"` // DisableAutoFireStatusCode if true then it turns off the http error status code // handler automatic execution on error code from a `Context.StatusCode` call. // By-default a custom http error handler will be fired when "Context.StatusCode(errorCode)" called. // // Defaults to false. DisableAutoFireStatusCode [bool](/builtin#bool) `` /* 144-byte string literal not displayed */ // ResetOnFireErrorCode if true then any previously response body or headers through // response recorder will be ignored and the router // will fire the registered (or default) HTTP error handler instead. // See `core/router/handler#FireErrorCode` and `Context.EndRequest` for more details. // // Read more at: <https://github.com/kataras/iris/issues/1531> // // Defaults to false. ResetOnFireErrorCode [bool](/builtin#bool) `ini:"reset_on_fire_error_code" json:"resetOnFireErrorCode,omitempty" yaml:"ResetOnFireErrorCode" toml:"ResetOnFireErrorCode"` // URLParamSeparator defines the character(s) separator for Context.URLParamSlice. // If empty or null then request url parameters with comma separated values will be retrieved as one. // // Defaults to comma ",". 
URLParamSeparator *[string](/builtin#string) `ini:"url_param_separator" json:"urlParamSeparator,omitempty" yaml:"URLParamSeparator" toml:"URLParamSeparator"` // EnableOptimizations when this field is true // the application tries to optimize for the best performance where possible. // // Defaults to false. // Deprecated. As of version 12.2.x this field does nothing. EnableOptimizations [bool](/builtin#bool) `ini:"enable_optimizations" json:"enableOptimizations,omitempty" yaml:"EnableOptimizations" toml:"EnableOptimizations"` // EnableProtoJSON when this field is true // enables the proto marshaler on given proto messages when calling the Context.JSON method. // // Defaults to false. EnableProtoJSON [bool](/builtin#bool) `ini:"enable_proto_json" json:"enableProtoJSON,omitempty" yaml:"EnableProtoJSON" toml:"EnableProtoJSON"` // EnableEasyJSON when this field is true // enables the fast easy json marshaler on compatible struct values when calling the Context.JSON method. // // Defaults to false. EnableEasyJSON [bool](/builtin#bool) `ini:"enable_easy_json" json:"enableEasyJSON,omitempty" yaml:"EnableEasyJSON" toml:"EnableEasyJSON"` // DisableBodyConsumptionOnUnmarshal manages the reading behavior of the context's body readers/binders. // If set to true then it // disables the body consumption by the `context.UnmarshalBody/ReadJSON/ReadXML`. // // By default `io.ReadAll` is used to read the body from the `context.Request.Body` which is an `io.ReadCloser`; // if this field is set to true then a new buffer will be created to read the request body from. // The body will not be changed and data existing before the // `context.UnmarshalBody/ReadJSON/ReadXML` call will not be consumed. // // See `Context.RecordRequestBody` method for the same feature, per-request.
DisableBodyConsumptionOnUnmarshal [bool](/builtin#bool) `` /* 163-byte string literal not displayed */ // FireEmptyFormError, if set to true, makes the `context.ReadForm/ReadQuery/ReadBody` // return an `iris.ErrEmptyForm` on empty request form data. FireEmptyFormError [bool](/builtin#bool) `ini:"fire_empty_form_error" json:"fireEmptyFormError,omitempty" yaml:"FireEmptyFormError" toml:"FireEmptyFormError"` // TimeFormat is the time format for any kind of datetime parsing. // Defaults to "Mon, 02 Jan 2006 15:04:05 GMT". TimeFormat [string](/builtin#string) `ini:"time_format" json:"timeFormat,omitempty" yaml:"TimeFormat" toml:"TimeFormat"` // Charset is the character encoding for various rendering, // used for templates and the rest of the responses. // Defaults to "utf-8". Charset [string](/builtin#string) `ini:"charset" json:"charset,omitempty" yaml:"Charset" toml:"Charset"` // PostMaxMemory sets the maximum post data size // that a client can send to the server; this differs // from the overall request body size which can be modified // by the `context#SetMaxRequestBodySize` or `iris#LimitRequestBodySize`. // // Defaults to 32MB or 32 << 20 if you prefer. PostMaxMemory [int64](/builtin#int64) `ini:"post_max_memory" json:"postMaxMemory" yaml:"PostMaxMemory" toml:"PostMaxMemory"` // Context values' keys for various features. // // LocaleContextKey is used by i18n to get the current request's locale, which contains a translate function too. // // Defaults to "iris.locale". LocaleContextKey [string](/builtin#string) `ini:"locale_context_key" json:"localeContextKey,omitempty" yaml:"LocaleContextKey" toml:"LocaleContextKey"` // LanguageContextKey is the context key through which a language can be modified by a middleware.
// It has the highest priority over the rest and if it is empty then it is ignored, // if it set to a static string of "default" or to the default language's code // then the rest of the language extractors will not be called at all and // the default language will be set instead. // // Use with `Context.SetLanguage("el-GR")`. // // See `i18n.ExtractFunc` for a more organised way of the same feature. // Defaults to "iris.locale.language". LanguageContextKey [string](/builtin#string) `ini:"language_context_key" json:"languageContextKey,omitempty" yaml:"LanguageContextKey" toml:"LanguageContextKey"` // LanguageInputContextKey is the context key of a language that is given by the end-user. // It's the real user input of the language string, matched or not. // // Defaults to "iris.locale.language.input". LanguageInputContextKey [string](/builtin#string) `` /* 135-byte string literal not displayed */ // VersionContextKey is the context key which an API Version can be modified // via a middleware through `SetVersion` method, e.g. `versioning.SetVersion(ctx, ">=1.0.0 <2.0.0")`. // Defaults to "iris.api.version". VersionContextKey [string](/builtin#string) `ini:"version_context_key" json:"versionContextKey" yaml:"VersionContextKey" toml:"VersionContextKey"` // VersionAliasesContextKey is the context key which the versioning feature // can look up for alternative values of a version and fallback to that. // Head over to the versioning package for more. // Defaults to "iris.api.version.aliases" VersionAliasesContextKey [string](/builtin#string) `` /* 129-byte string literal not displayed */ // ViewEngineContextKey is the context's values key // responsible to store and retrieve(view.Engine) the current view engine. // A middleware or a Party can modify its associated value to change // a view engine that `ctx.View` will render through. // If not an engine is registered by the end-developer // then its associated value is always nil, // meaning that the default value is nil. 
// See `Party.RegisterView` and `Context.ViewEngine` methods as well. // // Defaults to "iris.view.engine". ViewEngineContextKey [string](/builtin#string) `ini:"view_engine_context_key" json:"viewEngineContextKey,omitempty" yaml:"ViewEngineContextKey" toml:"ViewEngineContextKey"` // ViewLayoutContextKey is the context's values key // responsible to store and retrieve(string) the current view layout. // A middleware can modify its associated value to change // the layout that `ctx.View` will use to render a template. // // Defaults to "iris.view.layout". ViewLayoutContextKey [string](/builtin#string) `ini:"view_layout_context_key" json:"viewLayoutContextKey,omitempty" yaml:"ViewLayoutContextKey" toml:"ViewLayoutContextKey"` // ViewDataContextKey is the context's values key // responsible to store and retrieve(interface{}) the current view binding data. // A middleware can modify its associated value to change // the template's data on the fly. // // Defaults to "iris.view.data". ViewDataContextKey [string](/builtin#string) `ini:"view_data_context_key" json:"viewDataContextKey,omitempty" yaml:"ViewDataContextKey" toml:"ViewDataContextKey"` // FallbackViewContextKey is the context's values key // responsible to store the view fallback information. // // Defaults to "iris.view.fallback". FallbackViewContextKey [string](/builtin#string) `` /* 131-byte string literal not displayed */ // RemoteAddrHeaders are the allowed request header names // that the client's IP address can be parsed from. // By default no "X-" header is considered safe to be used for retrieving the // client's IP address, because those headers can be manually changed by // the client. But sometimes they are useful, e.g. when behind a proxy // you want to enable the "X-Forwarded-For" or when behind Cloudflare // you want to enable the "CF-Connecting-IP"; indeed you // can allow the `ctx.RemoteAddr()` to use any header // that the client may send.
// // Defaults to an empty slice but an example usage is: // RemoteAddrHeaders { // "X-Real-Ip", // "X-Forwarded-For", // "CF-Connecting-IP", // "True-Client-Ip", // "X-Appengine-Remote-Addr", // } // // Look `context.RemoteAddr()` for more. RemoteAddrHeaders [][string](/builtin#string) `ini:"remote_addr_headers" json:"remoteAddrHeaders,omitempty" yaml:"RemoteAddrHeaders" toml:"RemoteAddrHeaders"` // RemoteAddrHeadersForce forces the `Context.RemoteAddr()` method // to return the first entry of a request header as a fallback, // even if that IP is a part of the `RemoteAddrPrivateSubnets` list. // The default behavior, if a remote address is part of the `RemoteAddrPrivateSubnets`, // is to retrieve the IP from the `Request.RemoteAddr` field instead. RemoteAddrHeadersForce [bool](/builtin#bool) `` /* 131-byte string literal not displayed */ // RemoteAddrPrivateSubnets defines the private sub-networks. // They are used to be compared against // IP Addresses fetched through `RemoteAddrHeaders` or `Context.Request.RemoteAddr`. // For details please navigate through: <https://github.com/kataras/iris/issues/1453> // Defaults to: // { // Start: "10.0.0.0", // End: "10.255.255.255", // }, // { // Start: "100.64.0.0", // End: "100.127.255.255", // }, // { // Start: "172.16.0.0", // End: "172.31.255.255", // }, // { // Start: "192.0.0.0", // End: "192.0.0.255", // }, // { // Start: "192.168.0.0", // End: "192.168.255.255", // }, // { // Start: "198.18.0.0", // End: "198.19.255.255", // } // // Look `Context.RemoteAddr()` for more. RemoteAddrPrivateSubnets [][netutil](/github.com/kataras/iris/[email protected]/core/netutil).[IPRange](/github.com/kataras/iris/[email protected]/core/netutil#IPRange) `` /* 129-byte string literal not displayed */ // SSLProxyHeaders defines the set of header key values // that would indicate a valid https Request (look `Context.IsSSL()`). // Example: `map[string]string{"X-Forwarded-Proto": "https"}`. // // Defaults to empty map. 
SSLProxyHeaders map[[string](/builtin#string)][string](/builtin#string) `ini:"ssl_proxy_headers" json:"sslProxyHeaders" yaml:"SSLProxyHeaders" toml:"SSLProxyHeaders"` // HostProxyHeaders defines the set of headers that may hold a proxied hostname value for the clients. // Look `Context.Host()` for more. // Defaults to empty map. HostProxyHeaders map[[string](/builtin#string)][bool](/builtin#bool) `ini:"host_proxy_headers" json:"hostProxyHeaders" yaml:"HostProxyHeaders" toml:"HostProxyHeaders"` // Other are the custom, dynamic options, can be empty. // This field is used only by you to set any app options you want. // // Defaults to empty map. Other map[[string](/builtin#string)]interface{} `ini:"other" json:"other,omitempty" yaml:"Other" toml:"Other"` } ``` Configuration holds the necessary settings for an Iris Application instance. All fields are optional; the default values will work for a common web application. A Configuration value can be passed through the `WithConfiguration` Configurator. Usage: conf := iris.Configuration{ ... } app := iris.New() app.Configure(iris.WithConfiguration(conf)) OR app.Run/Listen(..., iris.WithConfiguration(conf)). #### func [DefaultConfiguration](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1371) [¶](#DefaultConfiguration) ``` func DefaultConfiguration() [Configuration](#Configuration) ``` DefaultConfiguration returns the default configuration for an Iris application, filling the main Configuration fields. #### func [TOML](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L125) [¶](#TOML) ``` func TOML(filename [string](/builtin#string)) [Configuration](#Configuration) ``` TOML reads Configuration from a toml-compatible document file. Read more about toml's implementation at: <https://github.com/toml-lang/toml>. Accepts the absolute path of the configuration file. An error will be shown to the user via panic with the error message. An error may occur when the file does not exist or is not formatted correctly.
Note: if the character '~' is passed as "filename" then it tries to load and return the configuration from the $home_directory + iris.tml, see `WithGlobalConfiguration` for more information. Usage: app.Configure(iris.WithConfiguration(iris.TOML("myconfig.tml"))) or app.Run([iris.Runner](#Runner), iris.WithConfiguration(iris.TOML("myconfig.tml"))). #### func [YAML](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L92) [¶](#YAML) ``` func YAML(filename [string](/builtin#string)) [Configuration](#Configuration) ``` YAML reads Configuration from a configuration.yml file. Accepts the absolute path of the cfg.yml. An error will be shown to the user via panic with the error message. An error may occur when the cfg.yml does not exist or is not formatted correctly. Note: if the character '~' is passed as "filename" then it tries to load and return the configuration from the $home_directory + iris.yml, see `WithGlobalConfiguration` for more information. Usage: app.Configure(iris.WithConfiguration(iris.YAML("myconfig.yml"))) or app.Run([iris.Runner](#Runner), iris.WithConfiguration(iris.YAML("myconfig.yml"))). #### func (*Configuration) [GetCharset](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1083) [¶](#Configuration.GetCharset) ``` func (c *[Configuration](#Configuration)) GetCharset() [string](/builtin#string) ``` GetCharset returns the Charset field. #### func (*Configuration) [GetDisableAutoFireStatusCode](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1063) [¶](#Configuration.GetDisableAutoFireStatusCode) ``` func (c *[Configuration](#Configuration)) GetDisableAutoFireStatusCode() [bool](/builtin#bool) ``` GetDisableAutoFireStatusCode returns the DisableAutoFireStatusCode field.
#### func (*Configuration) [GetDisableBodyConsumptionOnUnmarshal](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1053) [¶](#Configuration.GetDisableBodyConsumptionOnUnmarshal) ``` func (c *[Configuration](#Configuration)) GetDisableBodyConsumptionOnUnmarshal() [bool](/builtin#bool) ``` GetDisableBodyConsumptionOnUnmarshal returns the DisableBodyConsumptionOnUnmarshal field. #### func (*Configuration) [GetDisablePathCorrection](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1003) [¶](#Configuration.GetDisablePathCorrection) ``` func (c *[Configuration](#Configuration)) GetDisablePathCorrection() [bool](/builtin#bool) ``` GetDisablePathCorrection returns the DisablePathCorrection field. #### func (*Configuration) [GetDisablePathCorrectionRedirection](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1008) [¶](#Configuration.GetDisablePathCorrectionRedirection) ``` func (c *[Configuration](#Configuration)) GetDisablePathCorrectionRedirection() [bool](/builtin#bool) ``` GetDisablePathCorrectionRedirection returns the DisablePathCorrectionRedirection field. #### func (*Configuration) [GetEnableDynamicHandler](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1028) [¶](#Configuration.GetEnableDynamicHandler) added in v12.2.4 ``` func (c *[Configuration](#Configuration)) GetEnableDynamicHandler() [bool](/builtin#bool) ``` GetEnableDynamicHandler returns the EnableDynamicHandler field. #### func (*Configuration) [GetEnableEasyJSON](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1048) [¶](#Configuration.GetEnableEasyJSON) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetEnableEasyJSON() [bool](/builtin#bool) ``` GetEnableEasyJSON returns the EnableEasyJSON field. 
#### func (*Configuration) [GetEnableOptimizations](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1038) [¶](#Configuration.GetEnableOptimizations) ``` func (c *[Configuration](#Configuration)) GetEnableOptimizations() [bool](/builtin#bool) ``` GetEnableOptimizations returns the EnableOptimizations. #### func (*Configuration) [GetEnablePathEscape](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1018) [¶](#Configuration.GetEnablePathEscape) ``` func (c *[Configuration](#Configuration)) GetEnablePathEscape() [bool](/builtin#bool) ``` GetEnablePathEscape returns the EnablePathEscape field. #### func (*Configuration) [GetEnablePathIntelligence](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1013) [¶](#Configuration.GetEnablePathIntelligence) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetEnablePathIntelligence() [bool](/builtin#bool) ``` GetEnablePathIntelligence returns the EnablePathIntelligence field. #### func (*Configuration) [GetEnableProtoJSON](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1043) [¶](#Configuration.GetEnableProtoJSON) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetEnableProtoJSON() [bool](/builtin#bool) ``` GetEnableProtoJSON returns the EnableProtoJSON field. #### func (*Configuration) [GetFallbackViewContextKey](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1133) [¶](#Configuration.GetFallbackViewContextKey) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetFallbackViewContextKey() [string](/builtin#string) ``` GetFallbackViewContextKey returns the FallbackViewContextKey field. 
#### func (*Configuration) [GetFireEmptyFormError](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1058) [¶](#Configuration.GetFireEmptyFormError) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetFireEmptyFormError() [bool](/builtin#bool) ``` GetFireEmptyFormError returns the FireEmptyFormError field. #### func (*Configuration) [GetFireMethodNotAllowed](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1033) [¶](#Configuration.GetFireMethodNotAllowed) ``` func (c *[Configuration](#Configuration)) GetFireMethodNotAllowed() [bool](/builtin#bool) ``` GetFireMethodNotAllowed returns the FireMethodNotAllowed field. #### func (*Configuration) [GetForceLowercaseRouting](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1023) [¶](#Configuration.GetForceLowercaseRouting) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetForceLowercaseRouting() [bool](/builtin#bool) ``` GetForceLowercaseRouting returns the ForceLowercaseRouting field. #### func (*Configuration) [GetHostProxyHeaders](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1158) [¶](#Configuration.GetHostProxyHeaders) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetHostProxyHeaders() map[[string](/builtin#string)][bool](/builtin#bool) ``` GetHostProxyHeaders returns the HostProxyHeaders field. #### func (*Configuration) [GetKeepAlive](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L988) [¶](#Configuration.GetKeepAlive) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetKeepAlive() [time](/time).[Duration](/time#Duration) ``` GetKeepAlive returns the KeepAlive field.
#### func (*Configuration) [GetLanguageContextKey](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1098) [¶](#Configuration.GetLanguageContextKey) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetLanguageContextKey() [string](/builtin#string) ``` GetLanguageContextKey returns the LanguageContextKey field. #### func (*Configuration) [GetLanguageInputContextKey](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1103) [¶](#Configuration.GetLanguageInputContextKey) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetLanguageInputContextKey() [string](/builtin#string) ``` GetLanguageInputContextKey returns the LanguageInputContextKey field. #### func (*Configuration) [GetLocaleContextKey](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1093) [¶](#Configuration.GetLocaleContextKey) added in v12.1.0 ``` func (c *[Configuration](#Configuration)) GetLocaleContextKey() [string](/builtin#string) ``` GetLocaleContextKey returns the LocaleContextKey field. #### func (*Configuration) [GetLogLevel](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L978) [¶](#Configuration.GetLogLevel) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetLogLevel() [string](/builtin#string) ``` GetLogLevel returns the LogLevel field. #### func (*Configuration) [GetOther](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1163) [¶](#Configuration.GetOther) ``` func (c *[Configuration](#Configuration)) GetOther() map[[string](/builtin#string)]interface{} ``` GetOther returns the Other field. #### func (*Configuration) [GetPostMaxMemory](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1088) [¶](#Configuration.GetPostMaxMemory) ``` func (c *[Configuration](#Configuration)) GetPostMaxMemory() [int64](/builtin#int64) ``` GetPostMaxMemory returns the PostMaxMemory field. 
#### func (*Configuration) [GetRemoteAddrHeaders](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1138) [¶](#Configuration.GetRemoteAddrHeaders) ``` func (c *[Configuration](#Configuration)) GetRemoteAddrHeaders() [][string](/builtin#string) ``` GetRemoteAddrHeaders returns the RemoteAddrHeaders field. #### func (*Configuration) [GetRemoteAddrHeadersForce](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1143) [¶](#Configuration.GetRemoteAddrHeadersForce) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetRemoteAddrHeadersForce() [bool](/builtin#bool) ``` GetRemoteAddrHeadersForce returns RemoteAddrHeadersForce field. #### func (*Configuration) [GetRemoteAddrPrivateSubnets](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1153) [¶](#Configuration.GetRemoteAddrPrivateSubnets) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetRemoteAddrPrivateSubnets() [][netutil](/github.com/kataras/iris/[email protected]/core/netutil).[IPRange](/github.com/kataras/iris/[email protected]/core/netutil#IPRange) ``` GetRemoteAddrPrivateSubnets returns the RemoteAddrPrivateSubnets field. #### func (*Configuration) [GetResetOnFireErrorCode](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1068) [¶](#Configuration.GetResetOnFireErrorCode) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetResetOnFireErrorCode() [bool](/builtin#bool) ``` GetResetOnFireErrorCode returns ResetOnFireErrorCode field. #### func (*Configuration) [GetSSLProxyHeaders](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1148) [¶](#Configuration.GetSSLProxyHeaders) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetSSLProxyHeaders() map[[string](/builtin#string)][string](/builtin#string) ``` GetSSLProxyHeaders returns the SSLProxyHeaders field. 
#### func (*Configuration) [GetSocketSharding](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L983) [¶](#Configuration.GetSocketSharding) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetSocketSharding() [bool](/builtin#bool) ``` GetSocketSharding returns the SocketSharding field. #### func (*Configuration) [GetTimeFormat](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1078) [¶](#Configuration.GetTimeFormat) ``` func (c *[Configuration](#Configuration)) GetTimeFormat() [string](/builtin#string) ``` GetTimeFormat returns the TimeFormat field. #### func (*Configuration) [GetTimeout](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L993) [¶](#Configuration.GetTimeout) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetTimeout() [time](/time).[Duration](/time#Duration) ``` GetTimeout returns the Timeout field. #### func (*Configuration) [GetTimeoutMessage](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L998) [¶](#Configuration.GetTimeoutMessage) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetTimeoutMessage() [string](/builtin#string) ``` GetTimeoutMessage returns the TimeoutMessage field. #### func (*Configuration) [GetURLParamSeparator](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1073) [¶](#Configuration.GetURLParamSeparator) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetURLParamSeparator() *[string](/builtin#string) ``` GetURLParamSeparator returns URLParamSeparator field. #### func (*Configuration) [GetVHost](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L973) [¶](#Configuration.GetVHost) ``` func (c *[Configuration](#Configuration)) GetVHost() [string](/builtin#string) ``` GetVHost returns the non-exported vhost config field. 
#### func (*Configuration) [GetVersionAliasesContextKey](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1113) [¶](#Configuration.GetVersionAliasesContextKey) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetVersionAliasesContextKey() [string](/builtin#string) ``` GetVersionAliasesContextKey returns the VersionAliasesContextKey field. #### func (*Configuration) [GetVersionContextKey](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1108) [¶](#Configuration.GetVersionContextKey) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetVersionContextKey() [string](/builtin#string) ``` GetVersionContextKey returns the VersionContextKey field. #### func (*Configuration) [GetViewDataContextKey](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1128) [¶](#Configuration.GetViewDataContextKey) ``` func (c *[Configuration](#Configuration)) GetViewDataContextKey() [string](/builtin#string) ``` GetViewDataContextKey returns the ViewDataContextKey field. #### func (*Configuration) [GetViewEngineContextKey](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1118) [¶](#Configuration.GetViewEngineContextKey) added in v12.2.0 ``` func (c *[Configuration](#Configuration)) GetViewEngineContextKey() [string](/builtin#string) ``` GetViewEngineContextKey returns the ViewEngineContextKey field. #### func (*Configuration) [GetViewLayoutContextKey](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1123) [¶](#Configuration.GetViewLayoutContextKey) ``` func (c *[Configuration](#Configuration)) GetViewLayoutContextKey() [string](/builtin#string) ``` GetViewLayoutContextKey returns the ViewLayoutContextKey field. #### type [Configurator](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L168) [¶](#Configurator) ``` type Configurator func(*[Application](#Application)) ``` Configurator is just an interface which accepts the framework instance. 
It can be used to register a custom configuration with `Configure` in order to modify the framework instance. Currently Configurator is being used to describe the configuration's fields values. #### func [WithCharset](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L382) [¶](#WithCharset) ``` func WithCharset(charset [string](/builtin#string)) [Configurator](#Configurator) ``` WithCharset sets the Charset setting. See `Configuration`. #### func [WithConfiguration](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L1175) [¶](#WithConfiguration) ``` func WithConfiguration(c [Configuration](#Configuration)) [Configurator](#Configurator) ``` WithConfiguration sets the "c" values to the framework's configurations. Usage: app.Listen(":8080", iris.WithConfiguration(iris.Configuration{/* fields here */ })) or iris.WithConfiguration(iris.YAML("./cfg/iris.yml")) or iris.WithConfiguration(iris.TOML("./cfg/iris.tml")) #### func [WithHostProxyHeader](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L462) [¶](#WithHostProxyHeader) added in v12.2.0 ``` func WithHostProxyHeader(headers ...[string](/builtin#string)) [Configurator](#Configurator) ``` WithHostProxyHeader sets a HostProxyHeaders key value pair. Example: WithHostProxyHeader("X-Host"). See `Context.Host` for more. #### func [WithKeepAlive](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L203) [¶](#WithKeepAlive) added in v12.2.0 ``` func WithKeepAlive(keepAliveDur [time](/time).[Duration](/time#Duration)) [Configurator](#Configurator) ``` WithKeepAlive sets the `Configuration.KeepAlive` field to the given duration. #### func [WithLogLevel](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L184) [¶](#WithLogLevel) added in v12.2.0 ``` func WithLogLevel(level [string](/builtin#string)) [Configurator](#Configurator) ``` WithLogLevel sets the `Configuration.LogLevel` field. 
#### func [WithOtherValue](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L476) [¶](#WithOtherValue) ``` func WithOtherValue(key [string](/builtin#string), val interface{}) [Configurator](#Configurator) ``` WithOtherValue adds a value based on a key to the Other setting. See `Configuration.Other`. #### func [WithPostMaxMemory](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L394) [¶](#WithPostMaxMemory) ``` func WithPostMaxMemory(limit [int64](/builtin#int64)) [Configurator](#Configurator) ``` WithPostMaxMemory sets the maximum post data size that a client can send to the server; this differs from the overall request body size, which can be modified by `context#SetMaxRequestBodySize` or `iris#LimitRequestBodySize`. Defaults to 32MB (32 << 20, or 32*iris.MB if you prefer). #### func [WithRemoteAddrHeader](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L402) [¶](#WithRemoteAddrHeader) ``` func WithRemoteAddrHeader(header ...[string](/builtin#string)) [Configurator](#Configurator) ``` WithRemoteAddrHeader adds a new request header name that can be used to validate the client's real IP. #### func [WithRemoteAddrPrivateSubnet](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L437) [¶](#WithRemoteAddrPrivateSubnet) added in v12.2.0 ``` func WithRemoteAddrPrivateSubnet(startIP, endIP [string](/builtin#string)) [Configurator](#Configurator) ``` WithRemoteAddrPrivateSubnet adds a new private sub-net to be excluded from `context.RemoteAddr`. See `WithRemoteAddrHeader` too. #### func [WithSSLProxyHeader](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L449) [¶](#WithSSLProxyHeader) added in v12.2.0 ``` func WithSSLProxyHeader(headerKey, headerValue [string](/builtin#string)) [Configurator](#Configurator) ``` WithSSLProxyHeader sets an SSLProxyHeaders key-value pair. Example: WithSSLProxyHeader("X-Forwarded-Proto", "https"). See `Context.IsSSL` for more.
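Behind a reverse proxy, the remote-address and SSL proxy options above are typically registered together. A minimal sketch, assuming the proxy sets the common `X-Forwarded-For` and `X-Forwarded-Proto` headers:

```go
package main

import "github.com/kataras/iris/v12"

func main() {
	app := iris.New()

	app.Get("/", func(ctx iris.Context) {
		// RemoteAddr and IsSSL honor the headers registered below.
		ctx.Writef("ip=%s ssl=%v", ctx.RemoteAddr(), ctx.IsSSL())
	})

	app.Listen(":8080",
		// Trust the proxy's forwarded-IP header when resolving the client IP.
		iris.WithRemoteAddrHeader("X-Forwarded-For"),
		// Treat requests as HTTPS when the proxy sets this header pair.
		iris.WithSSLProxyHeader("X-Forwarded-Proto", "https"),
	)
}
```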
#### func [WithSitemap](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L506) [¶](#WithSitemap) added in v12.1.0 ``` func WithSitemap(startURL [string](/builtin#string)) [Configurator](#Configurator) ``` WithSitemap enables the sitemap generator. Use the Route's `SetLastMod`, `SetChangeFreq` and `SetPriority` to modify the sitemap's URL child element properties. Excluded routes: - dynamic - subdomain - offline - ExcludeSitemap method called It accepts a "startURL" input argument which is the prefix for the registered routes that will be included in the sitemap. If more than 50,000 static routes are registered then the sitemaps will be split and a sitemap index will be served in /sitemap.xml. If `Application.I18n.Load/LoadAssets` is called then the sitemap will contain translated links for each static route. If the result does not cover your needs you can take control and use the github.com/kataras/sitemap package to generate a customized one instead. Example: <https://github.com/kataras/iris/tree/main/_examples/sitemap>. #### func [WithTimeFormat](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L373) [¶](#WithTimeFormat) ``` func WithTimeFormat(timeformat [string](/builtin#string)) [Configurator](#Configurator) ``` WithTimeFormat sets the TimeFormat setting. See `Configuration`. #### func [WithTimeout](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L210) [¶](#WithTimeout) added in v12.2.0 ``` func WithTimeout(timeoutDur [time](/time).[Duration](/time#Duration), htmlBody ...[string](/builtin#string)) [Configurator](#Configurator) ``` WithTimeout sets the `Configuration.Timeout` field to the given duration.
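A minimal sketch of the `WithSitemap` option described above; the start URL and routes are placeholders:

```go
package main

import "github.com/kataras/iris/v12"

func main() {
	app := iris.New()
	app.Get("/", home)
	app.Get("/about", about)

	// The static GET routes registered above are listed under /sitemap.xml,
	// each URL prefixed with the given start URL.
	app.Listen(":8080", iris.WithSitemap("https://example.com"))
}

func home(ctx iris.Context)  { ctx.WriteString("home") }
func about(ctx iris.Context) { ctx.WriteString("about") }
```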
#### func [WithoutRemoteAddrHeader](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L423) [¶](#WithoutRemoteAddrHeader) ``` func WithoutRemoteAddrHeader(headerName [string](/builtin#string)) [Configurator](#Configurator) ``` WithoutRemoteAddrHeader removes an existing request header name that can be used to validate and parse the client's real IP. See `context.RemoteAddr()` for more. #### func [WithoutServerError](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L229) [¶](#WithoutServerError) ``` func WithoutServerError(errors ...[error](/builtin#error)) [Configurator](#Configurator) ``` WithoutServerError causes the matched "errors" to be ignored by the main application's `Run/Listen` function. Usage: err := app.Listen(":8080", iris.WithoutServerError(iris.ErrServerClosed)) will return `nil` if the server's error was `http/iris#ErrServerClosed`. See `Configuration#IgnoreServerErrors []string` too. Example: <https://github.com/kataras/iris/tree/main/_examples/http-server/listen-addr/omit-server-errors>. #### type [Context](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L47) [¶](#Context) ``` type Context = *[context](/github.com/kataras/iris/[email protected]/context).[Context](/github.com/kataras/iris/[email protected]/context#Context) ``` Context is the middle-man server's "object" for the clients. A new Context is acquired from a sync.Pool on each connection. The Context is the most important thing on the iris's http flow. Developers send responses to the client's request through a Context. Developers get request information from the client's request by a Context. #### type [ContextPatches](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L795) [¶](#ContextPatches) added in v12.2.0 ``` type ContextPatches struct { // contains filtered or unexported fields } ``` ContextPatches contains the available global Iris context modifications.
#### func (*ContextPatches) [GetDomain](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L807) [¶](#ContextPatches.GetDomain) added in v12.2.0 ``` func (cp *[ContextPatches](#ContextPatches)) GetDomain(patchFunc func(hostport [string](/builtin#string)) [string](/builtin#string)) ``` GetDomain modifies the way a domain is fetched from the `Context#Domain` method, which is used on the subdomain redirect feature, i18n's language cookie for subdomain sharing and the rewrite middleware. #### func (*ContextPatches) [ResolveFS](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L824) [¶](#ContextPatches.ResolveFS) added in v12.2.0 ``` func (cp *[ContextPatches](#ContextPatches)) ResolveFS(patchFunc func(fsOrDir interface{}) [fs](/io/fs).[FS](/io/fs#FS)) ``` ResolveFS modifies the default way to resolve a filesystem by any type of value. It affects the view engine's filesystem resolver. #### func (*ContextPatches) [ResolveHTTPFS](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L818) [¶](#ContextPatches.ResolveHTTPFS) added in v12.2.0 ``` func (cp *[ContextPatches](#ContextPatches)) ResolveHTTPFS(patchFunc func(fsOrDir interface{}) [http](/net/http).[FileSystem](/net/http#FileSystem)) ``` ResolveHTTPFS modifies the default way to resolve a filesystem by any type of value. It affects the Application's API Builder's `HandleDir` method. #### func (*ContextPatches) [SetCookieKVExpiration](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L812) [¶](#ContextPatches.SetCookieKVExpiration) added in v12.2.0 ``` func (cp *[ContextPatches](#ContextPatches)) SetCookieKVExpiration(patch [time](/time).[Duration](/time#Duration)) ``` SetCookieKVExpiration modifies the default cookie expiration time on the `Context#SetCookieKV` method.
#### func (*ContextPatches) [Writers](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L800) [¶](#ContextPatches.Writers) added in v12.2.0 ``` func (cp *[ContextPatches](#ContextPatches)) Writers() *[ContextWriterPatches](#ContextWriterPatches) ``` Writers returns the available global Iris context modifications for REST writers. #### type [ContextWriterPatches](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L829) [¶](#ContextWriterPatches) added in v12.2.0 ``` type ContextWriterPatches struct{} ``` ContextWriterPatches features the context's writers patches. #### func (*ContextWriterPatches) [JSON](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L832) [¶](#ContextWriterPatches.JSON) added in v12.2.0 ``` func (cwp *[ContextWriterPatches](#ContextWriterPatches)) JSON(patchFunc func(ctx [Context](#Context), v interface{}, options *[JSON](#JSON)) [error](/builtin#error)) ``` JSON sets a custom function which runs and overrides the default behavior of the `Context#JSON` method. #### func (*ContextWriterPatches) [JSONP](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L837) [¶](#ContextWriterPatches.JSONP) added in v12.2.0 ``` func (cwp *[ContextWriterPatches](#ContextWriterPatches)) JSONP(patchFunc func(ctx [Context](#Context), v interface{}, options *[JSONP](#JSONP)) [error](/builtin#error)) ``` JSONP sets a custom function which runs and overrides the default behavior of the `Context#JSONP` method. #### func (*ContextWriterPatches) [Markdown](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L847) [¶](#ContextWriterPatches.Markdown) added in v12.2.0 ``` func (cwp *[ContextWriterPatches](#ContextWriterPatches)) Markdown(patchFunc func(ctx [Context](#Context), v [][byte](/builtin#byte), options *[Markdown](#Markdown)) [error](/builtin#error)) ``` Markdown sets a custom function which runs and overrides the default behavior of the `Context#Markdown` method. 
#### func (*ContextWriterPatches) [XML](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L842) [¶](#ContextWriterPatches.XML) added in v12.2.0 ``` func (cwp *[ContextWriterPatches](#ContextWriterPatches)) XML(patchFunc func(ctx [Context](#Context), v interface{}, options *[XML](#XML)) [error](/builtin#error)) ``` XML sets a custom function which runs and overrides the default behavior of the `Context#XML` method. #### func (*ContextWriterPatches) [YAML](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L852) [¶](#ContextWriterPatches.YAML) added in v12.2.0 ``` func (cwp *[ContextWriterPatches](#ContextWriterPatches)) YAML(patchFunc func(ctx [Context](#Context), v interface{}, indentSpace [int](/builtin#int)) [error](/builtin#error)) ``` YAML sets a custom function which runs and overrides the default behavior of the `Context#YAML` method. #### type [Cookie](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L201) [¶](#Cookie) added in v12.2.0 ``` type Cookie = [http](/net/http).[Cookie](/net/http#Cookie) ``` Cookie is a type alias for the standard net/http Cookie struct type. See `Context.SetCookie`. #### type [CookieOption](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L198) [¶](#CookieOption) ``` type CookieOption = [context](/github.com/kataras/iris/[email protected]/context).[CookieOption](/github.com/kataras/iris/[email protected]/context#CookieOption) ``` CookieOption is the type of function that is accepted on context's methods like `SetCookieKV`, `RemoveCookie` and `SetCookie` as their (last) variadic input argument to amend the end cookie's form. Any custom or builtin `CookieOption` is valid, see `CookiePath`, `CookieCleanPath`, `CookieExpires` and `CookieHTTPOnly` for more. An alias for the `context.CookieOption`. 
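A sketch tying the patch types above together: one context-level patch and one writer patch, applied once at startup. The standard library encoder stands in for any third-party JSON package here; the options handling is intentionally minimal:

```go
package main

import (
	"encoding/json"
	"time"

	"github.com/kataras/iris/v12"
)

func main() {
	// Package-level patches affect every Iris application in the process,
	// so apply them once, before apps start serving.
	patches := iris.Patches().Context()

	// Cookies set through Context.SetCookieKV now default to one day.
	patches.SetCookieKVExpiration(24 * time.Hour)

	// Override Context.JSON with the standard library encoder; a
	// third-party encoder would slot in the same way.
	patches.Writers().JSON(func(ctx iris.Context, v interface{}, options *iris.JSON) error {
		enc := json.NewEncoder(ctx.ResponseWriter())
		enc.SetEscapeHTML(!options.UnescapeHTML)
		return enc.Encode(v)
	})

	app := iris.New()
	app.Get("/", func(ctx iris.Context) {
		ctx.JSON(iris.Map{"hello": "world"})
	})
	app.Listen(":8080")
}
```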
#### type [DecodeFunc](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L68) [¶](#DecodeFunc) added in v12.2.0 ``` type DecodeFunc = [context](/github.com/kataras/iris/[email protected]/context).[DecodeFunc](/github.com/kataras/iris/[email protected]/context#DecodeFunc) ``` DecodeFunc is a generic type of decoder function. When the returned error is not nil the decode operation is terminated and the error is received by the ReadJSONStream method; otherwise it continues to read the next available object. See the `Context.ReadJSONStream` method. Example: <https://github.com/kataras/iris/blob/main/_examples/request-body/read-json-stream>. #### type [Dir](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L174) [¶](#Dir) added in v12.2.0 ``` type Dir = [http](/net/http).[Dir](/net/http#Dir) ``` Dir implements FileSystem using the native file system restricted to a specific directory tree; it can be passed to the `FileServer` function and the `HandleDir` method. It's an alias of `http.Dir`. #### type [DirCacheOptions](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L163) [¶](#DirCacheOptions) added in v12.2.0 ``` type DirCacheOptions = [router](/github.com/kataras/iris/[email protected]/core/router).[DirCacheOptions](/github.com/kataras/iris/[email protected]/core/router#DirCacheOptions) ``` DirCacheOptions holds the options for the cached file system. See `DirOptions`. #### type [DirListRichOptions](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L167) [¶](#DirListRichOptions) added in v12.2.0 ``` type DirListRichOptions = [router](/github.com/kataras/iris/[email protected]/core/router).[DirListRichOptions](/github.com/kataras/iris/[email protected]/core/router#DirListRichOptions) ``` DirListRichOptions the options for the `DirListRich` helper function. A shortcut for the `router.DirListRichOptions`. Useful when `DirListRich` function is passed to `DirOptions.DirList` field.
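The `DecodeFunc`/`ReadJSONStream` flow described above can be sketched as follows; the `item` payload type and the route path are hypothetical:

```go
package main

import "github.com/kataras/iris/v12"

// item is a hypothetical element of the streamed JSON payload.
type item struct {
	Name string `json:"name"`
}

func main() {
	app := iris.New()

	app.Post("/stream", func(ctx iris.Context) {
		var names []string

		// The callback runs once per JSON object in the request body;
		// returning a non-nil error stops the stream.
		err := ctx.ReadJSONStream(func(decode iris.DecodeFunc) error {
			var it item
			if err := decode(&it); err != nil {
				return err
			}
			names = append(names, it.Name)
			return nil
		})
		if err != nil {
			ctx.StopWithError(iris.StatusBadRequest, err)
			return
		}

		ctx.JSON(names)
	})

	app.Listen(":8080")
}
```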
#### type [DirOptions](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L160) [¶](#DirOptions) ``` type DirOptions = [router](/github.com/kataras/iris/[email protected]/core/router).[DirOptions](/github.com/kataras/iris/[email protected]/core/router#DirOptions) ``` DirOptions contains the optional settings that `FileServer` and `Party#HandleDir` can use to serve files and assets. A shortcut for the `router.DirOptions`, useful when `FileServer` or `HandleDir` is being used. #### type [ErrPrivate](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L216) [¶](#ErrPrivate) added in v12.2.0 ``` type ErrPrivate = [context](/github.com/kataras/iris/[email protected]/context).[ErrPrivate](/github.com/kataras/iris/[email protected]/context#ErrPrivate) ``` ErrPrivate, if provided, marks the error saved in the context as NOT visible to the client, no matter what. An alias for the `context.ErrPrivate`. #### type [ErrViewNotExist](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L280) [¶](#ErrViewNotExist) added in v12.2.0 ``` type ErrViewNotExist = [context](/github.com/kataras/iris/[email protected]/context).[ErrViewNotExist](/github.com/kataras/iris/[email protected]/context#ErrViewNotExist) ``` ErrViewNotExist reports whether a template was not found in the parsed templates tree. #### type [ExecutionOptions](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L188) [¶](#ExecutionOptions) ``` type ExecutionOptions = [router](/github.com/kataras/iris/[email protected]/core/router).[ExecutionOptions](/github.com/kataras/iris/[email protected]/core/router#ExecutionOptions) ``` ExecutionOptions is a set of default behaviors that can be changed in order to customize the execution flow of the routes' handlers with ease. See `ExecutionRules` and `core/router/Party#SetExecutionRules` for more.
#### type [ExecutionRules](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L184) [¶](#ExecutionRules) ``` type ExecutionRules = [router](/github.com/kataras/iris/[email protected]/core/router).[ExecutionRules](/github.com/kataras/iris/[email protected]/core/router#ExecutionRules) ``` ExecutionRules gives control to the execution of the route handlers outside of the handlers themselves. Usage: ``` Party#SetExecutionRules(ExecutionRules { Done: ExecutionOptions{Force: true}, }) ``` See `core/router/Party#SetExecutionRules` for more. Example: <https://github.com/kataras/iris/tree/main/_examples/mvc/middleware/without-ctx-next>. #### type [FallbackView](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L290) [¶](#FallbackView) added in v12.2.0 ``` type FallbackView = [context](/github.com/kataras/iris/[email protected]/context).[FallbackView](/github.com/kataras/iris/[email protected]/context#FallbackView) ``` FallbackView is a helper to register a single template filename as a fallback when the provided template filename was not found. #### type [FallbackViewFunc](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L287) [¶](#FallbackViewFunc) added in v12.2.0 ``` type FallbackViewFunc = [context](/github.com/kataras/iris/[email protected]/context).[FallbackViewFunc](/github.com/kataras/iris/[email protected]/context#FallbackViewFunc) ``` FallbackViewFunc is a function that can be registered to handle view fallbacks. It accepts the Context and a special error which contains information about the previous template error. It implements the FallbackViewProvider interface. See `Context.View` method.
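A short sketch of the fallback-view helper described above; the view directory and template names are placeholders:

```go
package main

import "github.com/kataras/iris/v12"

func main() {
	app := iris.New()
	// Hypothetical views directory; any registered view engine works the same way.
	app.RegisterView(iris.HTML("./views", ".html"))

	// When a requested template is missing, render fallback.html instead.
	app.FallbackView(iris.FallbackView("fallback.html"))

	app.Get("/", func(ctx iris.Context) {
		// If neither template exists, View returns the error to handle here.
		if err := ctx.View("maybe-missing.html"); err != nil {
			ctx.HTML("<h3>%s</h3>", err.Error())
		}
	})

	app.Listen(":8080")
}
```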
#### type [FallbackViewLayout](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L293) [¶](#FallbackViewLayout) added in v12.2.0 ``` type FallbackViewLayout = [context](/github.com/kataras/iris/[email protected]/context).[FallbackViewLayout](/github.com/kataras/iris/[email protected]/context#FallbackViewLayout) ``` FallbackViewLayout is a helper to register a single template filename as a fallback layout when the provided layout filename was not found. #### type [Filter](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L89) [¶](#Filter) ``` type Filter = [context](/github.com/kataras/iris/[email protected]/context).[Filter](/github.com/kataras/iris/[email protected]/context#Filter) ``` Filter is just a type of func(Context) bool which reports whether an action must be performed based on the incoming request. See `NewConditionalHandler` for more. An alias for the `context/Filter`. #### type [GlobalPatches](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L775) [¶](#GlobalPatches) added in v12.2.0 ``` type GlobalPatches struct { // contains filtered or unexported fields } ``` GlobalPatches is a singleton that features a uniform way to apply global/package-level modifications. See the `Patches` package-level function. #### func [Patches](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L785) [¶](#Patches) added in v12.2.0 ``` func Patches() *[GlobalPatches](#GlobalPatches) ``` Patches returns the singleton of GlobalPatches, an easy way to modify global (package-level) configuration for Iris applications. See its `Context` method. Example: <https://github.com/kataras/iris/blob/main/_examples/response-writer/json-third-party/main.go>. #### func (*GlobalPatches) [Context](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L790) [¶](#GlobalPatches.Context) added in v12.2.0 ``` func (p *[GlobalPatches](#GlobalPatches)) Context() *[ContextPatches](#ContextPatches) ``` Context returns the available context patches.
#### type [Guide](https://github.com/kataras/iris/blob/v12.2.7/iris_guide.go#L227) [¶](#Guide) added in v12.2.5 ``` type Guide interface { // AllowOrigin defines the CORS allowed domains. // Multiple origins can be separated by comma. // If "*" is provided then all origins are accepted (use it for public APIs). AllowOrigin(originLine [string](/builtin#string)) [CompressionGuide](#CompressionGuide) } ``` Guide is the simplified API builder. It's a step-by-step builder which can be used to build an Iris Application with the most common features. #### func [NewGuide](https://github.com/kataras/iris/blob/v12.2.7/iris_guide.go#L219) [¶](#NewGuide) added in v12.2.0 ``` func NewGuide() [Guide](#Guide) ``` NewGuide returns a simple Iris API builder. Example Code: ``` package main import ( "context" "database/sql" "time" "github.com/kataras/iris/v12" "github.com/kataras/iris/v12/x/errors" ) func main() { iris.NewGuide(). AllowOrigin("*"). Compression(true). Health(true, "development", "kataras"). Timeout(0, 20*time.Second, 20*time.Second). Middlewares(). Services( // openDatabase(), // NewSQLRepoRegistry, NewMemRepoRegistry, NewTestService, ). API("/tests", new(TestAPI)). Listen(":80") } // Recommendation: move it to /api/tests/api.go file. type TestAPI struct { TestService *TestService } func (api *TestAPI) Configure(r iris.Party) { r.Get("/", api.listTests) } func (api *TestAPI) listTests(ctx iris.Context) { tests, err := api.TestService.ListTests(ctx) if err != nil { errors.Internal.LogErr(ctx, err) return } ctx.JSON(tests) } // Recommendation: move it to /pkg/storage/sql/db.go file. type DB struct { *sql.DB } func openDatabase( your database configuration... ) *DB { conn, err := sql.Open(...) // handle error. return &DB{DB: conn} } func (db *DB) Close() error { return nil } // Recommendation: move it to /pkg/repository/registry.go file.
type RepoRegistry interface { Tests() TestRepository InTransaction(ctx context.Context, fn func(RepoRegistry) error) error } // Recommendation: move it to /pkg/repository/registry/memory.go file. type repoRegistryMem struct { tests TestRepository } func NewMemRepoRegistry() RepoRegistry { return &repoRegistryMem{ tests: NewMemTestRepository(), } } func (r *repoRegistryMem) Tests() TestRepository { return r.tests } func (r *repoRegistryMem) InTransaction(ctx context.Context, fn func(RepoRegistry) error) error { return nil } // Recommendation: move it to /pkg/repository/registry/sql.go file. type repoRegistrySQL struct { db *DB tests TestRepository } func NewSQLRepoRegistry(db *DB) RepoRegistry { return &repoRegistrySQL{ db: db, tests: NewSQLTestRepository(db), } } func (r *repoRegistrySQL) Tests() TestRepository { return r.tests } func (r *repoRegistrySQL) InTransaction(ctx context.Context, fn func(RepoRegistry) error) error { return nil // your own database transaction code, may look something like that: // tx, err := r.db.BeginTx(ctx, nil) // if err != nil { // return err // } // defer tx.Rollback() // newRegistry := NewSQLRepoRegistry(tx) // if err := fn(newRegistry);err!=nil{ // return err // } // return tx.Commit() } // Recommendation: move it to /pkg/test/test.go type Test struct { Name string `db:"name"` } // Recommendation: move it to /pkg/test/repository.go type TestRepository interface { ListTests(ctx context.Context) ([]Test, error) } type testRepositoryMem struct { tests []Test } func NewMemTestRepository() TestRepository { list := []Test{ {Name: "test1"}, {Name: "test2"}, {Name: "test3"}, } return &testRepositoryMem{ tests: list, } } func (r *testRepositoryMem) ListTests(ctx context.Context) ([]Test, error) { return r.tests, nil } type testRepositorySQL struct { db *DB } func NewSQLTestRepository(db *DB) TestRepository { return &testRepositorySQL{db: db} } func (r *testRepositorySQL) ListTests(ctx context.Context) ([]Test, error) { query := `SELECT * 
FROM tests ORDER BY created_at;` rows, err := r.db.QueryContext(ctx, query) if err != nil { return nil, err } defer rows.Close() tests := make([]Test, 0) for rows.Next() { var t Test if err := rows.Scan(&t.Name); err != nil { return nil, err } tests = append(tests, t) } if err := rows.Err(); err != nil { return nil, err } return tests, nil } // Recommendation: move it to /pkg/service/test_service.go file. type TestService struct { repos RepoRegistry } func NewTestService(registry RepoRegistry) *TestService { return &TestService{ repos: registry, } } func (s *TestService) ListTests(ctx context.Context) ([]Test, error) { return s.repos.Tests().ListTests(ctx) } ``` #### type [Handler](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L83) [¶](#Handler) ``` type Handler = [context](/github.com/kataras/iris/[email protected]/context).[Handler](/github.com/kataras/iris/[email protected]/context#Handler) ``` A Handler responds to an HTTP request. It writes reply headers and data to the Context.ResponseWriter() and then return. Returning signals that the request is finished; it is not valid to use the Context after or concurrently with the completion of the Handler call. Depending on the HTTP client software, HTTP protocol version, and any intermediaries between the client and the iris server, it may not be possible to read from the Context.Request().Body after writing to the context.ResponseWriter(). Cautious handlers should read the Context.Request().Body first, and then reply. Except for reading the body, handlers should not modify the provided Context. If Handler panics, the server (the caller of Handler) assumes that the effect of the panic was isolated to the active request. It recovers the panic, logs a stack trace to the server error log, and hangs up the connection. #### type [HealthGuide](https://github.com/kataras/iris/blob/v12.2.7/iris_guide.go#L244) [¶](#HealthGuide) added in v12.2.5 ``` type HealthGuide interface { // Health enables the /health route. 
// If "env" and "developer" are given, these fields will be populated to the client // through headers and environment on health route. Health(b [bool](/builtin#bool), env, developer [string](/builtin#string)) [TimeoutGuide](#TimeoutGuide) } ``` HealthGuide is the 3rd step of the Guide. Health enables the /health route. #### type [JSON](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L114) [¶](#JSON) ``` type JSON = [context](/github.com/kataras/iris/[email protected]/context).[JSON](/github.com/kataras/iris/[email protected]/context#JSON) ``` JSON the optional settings for JSON renderer. It is an alias of the `context#JSON` type. #### type [JSONP](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L122) [¶](#JSONP) added in v12.2.0 ``` type JSONP = [context](/github.com/kataras/iris/[email protected]/context).[JSONP](/github.com/kataras/iris/[email protected]/context#JSONP) ``` JSONP the optional settings for JSONP renderer. It is an alias of the `context#JSONP` type. #### type [JSONReader](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L118) [¶](#JSONReader) added in v12.2.0 ``` type JSONReader = [context](/github.com/kataras/iris/[email protected]/context).[JSONReader](/github.com/kataras/iris/[email protected]/context#JSONReader) ``` JSONReader holds the JSON decode options of the `Context.ReadJSON, ReadBody` methods. It is an alias of the `context#JSONReader` type. #### type [Locale](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L212) [¶](#Locale) added in v12.2.0 ``` type Locale = [context](/github.com/kataras/iris/[email protected]/context).[Locale](/github.com/kataras/iris/[email protected]/context#Locale) ``` Locale describes the i18n locale. An alias for the `context.Locale`. 
#### type [Map](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L91) [¶](#Map) ``` type Map = [context](/github.com/kataras/iris/[email protected]/context).[Map](/github.com/kataras/iris/[email protected]/context#Map) ``` A Map is an alias of map[string]interface{}. #### type [Markdown](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L135) [¶](#Markdown) added in v12.2.0 ``` type Markdown = [context](/github.com/kataras/iris/[email protected]/context).[Markdown](/github.com/kataras/iris/[email protected]/context#Markdown) ``` Markdown the optional settings for Markdown renderer. See `Context.Markdown` for more. It is an alias of the `context#Markdown` type. #### type [MiddlewareGuide](https://github.com/kataras/iris/blob/v12.2.7/iris_guide.go#L261) [¶](#MiddlewareGuide) added in v12.2.5 ``` type MiddlewareGuide interface { // RouterMiddlewares registers one or more handlers to run before everything else. RouterMiddlewares(handlers ...[Handler](#Handler)) [MiddlewareGuide](#MiddlewareGuide) // Middlewares registers one or more handlers to run before the requested route's handler. Middlewares(handlers ...[Handler](#Handler)) [ServiceGuide](#ServiceGuide) } ``` MiddlewareGuide is the 5th step of the Guide. It registers one or more handlers to run before everything else (RouterMiddlewares) or before registered routes (Middlewares). #### type [N](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L209) [¶](#N) ``` type N = [context](/github.com/kataras/iris/[email protected]/context).[N](/github.com/kataras/iris/[email protected]/context#N) ``` N is a struct which can be passed to the `Context.Negotiate` method. It contains fields which should be filled based on the `Context.Negotiation()` server side values. If no mime is matched then its "Other" field will be sent, which should be a string or []byte. It completes the `context/context.ContentSelector` interface. An alias for the `context.N`.
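A sketch of `N` with `Context.Negotiate`, following the field description above; the offered representations are placeholders:

```go
package main

import "github.com/kataras/iris/v12"

func main() {
	app := iris.New()

	app.Get("/", func(ctx iris.Context) {
		// Declare what the server can offer for this request.
		ctx.Negotiation().JSON().XML().EncodingGzip()

		// N carries one representation per supported mime type;
		// Negotiate picks the one matching the client's Accept header,
		// falling back to Other when nothing matches.
		_, err := ctx.Negotiate(iris.N{
			JSON:  iris.Map{"hello": "world"},
			XML:   struct{ Hello string }{"world"},
			Other: []byte("hello world"),
		})
		if err != nil {
			ctx.StopWithStatus(iris.StatusNotAcceptable)
		}
	})

	app.Listen(":8080")
}
```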
#### type [Party](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L147) [¶](#Party) ``` type Party = [router](/github.com/kataras/iris/[email protected]/core/router).[Party](/github.com/kataras/iris/[email protected]/core/router#Party) ``` Party is just a group joiner of routes which share the same prefix and the same middleware(s). Party could also have been named 'Join', 'Node' or 'Group'; 'Party' was chosen because it is fun. See the `core/router#APIBuilder` for its implementation. A shortcut for the `core/router#Party`, useful when `PartyFunc` is being used. #### type [Problem](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L105) [¶](#Problem) ``` type Problem = [context](/github.com/kataras/iris/[email protected]/context).[Problem](/github.com/kataras/iris/[email protected]/context#Problem) ``` Problem Details for HTTP APIs. Pass a Problem value to `context.Problem` to write an "application/problem+json" response. Read more at: <https://github.com/kataras/iris/blob/main/_examples/routing/http-errors>. It is an alias of the `context#Problem` type. #### type [ProblemOptions](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L110) [¶](#ProblemOptions) ``` type ProblemOptions = [context](/github.com/kataras/iris/[email protected]/context).[ProblemOptions](/github.com/kataras/iris/[email protected]/context#ProblemOptions) ``` ProblemOptions the optional settings when server replies with a Problem. See `Context.Problem` method and `Problem` type for more details. It is an alias of the `context#ProblemOptions` type. #### type [ProtoMarshalOptions](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L124) [¶](#ProtoMarshalOptions) added in v12.2.0 ``` type ProtoMarshalOptions = [context](/github.com/kataras/iris/[email protected]/context).[ProtoMarshalOptions](/github.com/kataras/iris/[email protected]/context#ProtoMarshalOptions) ``` ProtoMarshalOptions is a type alias for protojson.MarshalOptions.
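Returning to the `Party` alias documented above, a minimal grouping sketch; the paths and the shared header are hypothetical:

```go
package main

import "github.com/kataras/iris/v12"

func main() {
	app := iris.New()

	// Routes registered on a Party share its prefix and middleware.
	users := app.Party("/users")
	users.Use(func(ctx iris.Context) {
		ctx.Header("X-Api-Group", "users") // hypothetical shared header
		ctx.Next()
	})
	users.Get("/", func(ctx iris.Context) {
		ctx.JSON(iris.Map{"list": []string{}})
	})
	users.Get("/{id:uint64}", func(ctx iris.Context) {
		id, _ := ctx.Params().GetUint64("id")
		ctx.JSON(iris.Map{"id": id})
	})

	app.Listen(":8080")
}
```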
#### type [ProtoUnmarshalOptions](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L126) [¶](#ProtoUnmarshalOptions) added in v12.2.0 ``` type ProtoUnmarshalOptions = [context](/github.com/kataras/iris/[email protected]/context).[ProtoUnmarshalOptions](/github.com/kataras/iris/[email protected]/context#ProtoUnmarshalOptions) ``` ProtoUnmarshalOptions is a type alias for protojson.UnmarshalOptions. #### type [ResultHandler](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L155) [¶](#ResultHandler) added in v12.2.0 ``` type ResultHandler = [hero](/github.com/kataras/iris/[email protected]/hero).[ResultHandler](/github.com/kataras/iris/[email protected]/hero#ResultHandler) ``` ResultHandler describes the function type which should serve the "v" struct value. See `APIContainer.UseResultHandler`. #### type [Runner](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L795) [¶](#Runner) ``` type Runner func(*[Application](#Application)) [error](/builtin#error) ``` Runner is just an interface which accepts the framework instance and returns an error. It can be used to register a custom runner with `Run` in order to set the framework's server listen action. Currently `Runner` is being used to declare the builtin server listeners. See `Run` for more. #### func [Addr](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L856) [¶](#Addr) ``` func Addr(addr [string](/builtin#string), hostConfigs ...[host](/github.com/kataras/iris/[email protected]/core/host).[Configurator](/github.com/kataras/iris/[email protected]/core/host#Configurator)) [Runner](#Runner) ``` Addr can be used as an argument for the `Run` method. It accepts a host address which is used to build a server and a listener which listens on that host and port. Addr should have the form of [host](/github.com/kataras/iris/[email protected]/core/host):port, i.e localhost:8080 or :8080. 
The second argument is optional; it accepts one or more `func(*host.Configurator)` that are executed on the specific host that this function will create to start the server. Via host configurators you can configure the back-end host supervisor, e.g. to add events for shutdown, serve or error. An example of this use case can be found at: <https://github.com/kataras/iris/blob/main/_examples/http-server/notify-on-shutdown/main.go> See `ConfigureHost` too. See `Run` for more. #### func [AutoTLS](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L959) [¶](#AutoTLS) ``` func AutoTLS( addr [string](/builtin#string), domain [string](/builtin#string), email [string](/builtin#string), hostConfigs ...[host](/github.com/kataras/iris/[email protected]/core/host).[Configurator](/github.com/kataras/iris/[email protected]/core/host#Configurator), ) [Runner](#Runner) ``` AutoTLS can be used as an argument for the `Run` method. It will start the Application's secure server using certificates created on the fly by the "autocert" golang/x package, so localhost may not work; use it on a "production" machine. Addr should have the form of [host](/github.com/kataras/iris/[email protected]/core/host):port, e.g. mydomain.com:443. The whitelisted domains are separated by whitespace in the "domain" argument, e.g. "iris-go.com"; they can be different from "addr". If empty, all hosts are currently allowed. This is not recommended, as it opens a potential attack where clients connect to a server by IP address and pretend to be asking for an incorrect host name. Manager will attempt to obtain a certificate for that host, incorrectly, eventually reaching the CA's rate limit for certificate requests and making it impossible to obtain actual certificates. For the "email" argument use a non-public one; letsencrypt needs it for your own security. Note: `AutoTLS` will start a new server for you which will redirect all HTTP requests to their HTTPS counterpart, including subdomains as well.
Last argument is optional; it accepts one or more `func(*host.Configurator)` that are executed on the specific host that this function will create to start the server. Via host configurators you can configure the back-end host supervisor, e.g. to add events for shutdown, serve or error. An example of this use case can be found at: <https://github.com/kataras/iris/blob/main/_examples/http-server/notify-on-shutdown/main.go> Look at the `ConfigureHost` too. Usage: app.Run(iris.AutoTLS("iris-go.com:443", "iris-go.com www.iris-go.com", "<EMAIL>")) See `Run` and `core/host/Supervisor#ListenAndServeAutoTLS` for more. #### func [Listener](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L810) [¶](#Listener) ``` func Listener(l [net](/net).[Listener](/net#Listener), hostConfigs ...[host](/github.com/kataras/iris/[email protected]/core/host).[Configurator](/github.com/kataras/iris/[email protected]/core/host#Configurator)) [Runner](#Runner) ``` Listener can be used as an argument for the `Run` method. It can start a server with a custom net.Listener via the server's `Serve`. Second argument is optional; it accepts one or more `func(*host.Configurator)` that are executed on the specific host that this function will create to start the server. Via host configurators you can configure the back-end host supervisor, e.g. to add events for shutdown, serve or error. An example of this use case can be found at: <https://github.com/kataras/iris/blob/main/_examples/http-server/notify-on-shutdown/main.go> Look at the `ConfigureHost` too. See `Run` for more. #### func [Raw](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L980) [¶](#Raw) ``` func Raw(f func() [error](/builtin#error)) [Runner](#Runner) ``` Raw can be used as an argument for the `Run` method. It accepts any (listen) function that returns an error; this function should block and return an error only when the server exits or a fatal error occurs.
With this option you're not limited to the servers that iris can run by default. See `Run` for more. #### func [Server](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L832) [¶](#Server) ``` func Server(srv *[http](/net/http).[Server](/net/http#Server), hostConfigs ...[host](/github.com/kataras/iris/[email protected]/core/host).[Configurator](/github.com/kataras/iris/[email protected]/core/host#Configurator)) [Runner](#Runner) ``` Server can be used as an argument for the `Run` method. It can start a server with a *http.Server. Second argument is optional; it accepts one or more `func(*host.Configurator)` that are executed on the specific host that this function will create to start the server. Via host configurators you can configure the back-end host supervisor, e.g. to add events for shutdown, serve or error. An example of this use case can be found at: <https://github.com/kataras/iris/blob/main/_examples/http-server/notify-on-shutdown/main.go> Look at the `ConfigureHost` too. See `Run` for more. #### func [TLS](https://github.com/kataras/iris/blob/v12.2.7/iris.go#L917) [¶](#TLS) ``` func TLS(addr [string](/builtin#string), certFileOrContents, keyFileOrContents [string](/builtin#string), hostConfigs ...[host](/github.com/kataras/iris/[email protected]/core/host).[Configurator](/github.com/kataras/iris/[email protected]/core/host#Configurator)) [Runner](#Runner) ``` TLS can be used as an argument for the `Run` method. It will start the Application's secure server. Use it as you would use the http.ListenAndServeTLS function. Addr should have the form of [host](/github.com/kataras/iris/[email protected]/core/host):port, e.g. localhost:443 or :443. "certFileOrContents" & "keyFileOrContents" should be filenames with their extensions, or the raw contents of the certificate and the private key.
Last argument is optional; it accepts one or more `func(*host.Configurator)` that are executed on the specific host that this function will create to start the server. Via host configurators you can configure the back-end host supervisor, e.g. to add events for shutdown, serve or error. An example of this use case can be found at: <https://github.com/kataras/iris/blob/main/_examples/http-server/notify-on-shutdown/main.go> Look at the `ConfigureHost` too. See `Run` for more. #### type [ServiceGuide](https://github.com/kataras/iris/blob/v12.2.7/iris_guide.go#L270) [¶](#ServiceGuide) added in v12.2.5 ``` type ServiceGuide interface { // Deferrables registers one or more functions to be run when the server is terminated. Deferrables(closers ...func()) [ServiceGuide](#ServiceGuide) // Services registers one or more dependencies that APIs can use. Services(deps ...interface{}) [ApplicationBuilder](#ApplicationBuilder) } ``` ServiceGuide is the 6th step of the Guide. It is used to register deferrable functions and, most importantly, dependencies that APIs can use. #### type [SimpleUser](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L97) [¶](#SimpleUser) added in v12.2.0 ``` type SimpleUser = [context](/github.com/kataras/iris/[email protected]/context).[SimpleUser](/github.com/kataras/iris/[email protected]/context#SimpleUser) ``` SimpleUser is a simple implementation of the User interface. #### type [Singleton](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L858) [¶](#Singleton) added in v12.2.7 ``` type Singleton struct{} ``` Singleton is a structure which can be used as an embedded field on structs/controllers that should be marked as singletons on `PartyConfigure` or `MVC` Applications.
#### func (Singleton) [Singleton](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L861) [¶](#Singleton.Singleton) added in v12.2.7 ``` func (c [Singleton](#Singleton)) Singleton() [bool](/builtin#bool) ``` Singleton returns true as this controller is a singleton. #### type [Supervisor](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L139) [¶](#Supervisor) ``` type Supervisor = [host](/github.com/kataras/iris/[email protected]/core/host).[Supervisor](/github.com/kataras/iris/[email protected]/core/host#Supervisor) ``` Supervisor is a shortcut of the `host#Supervisor`. Used to add supervisor configurators on common Runners without needing to import the `core/host` package. #### type [TimeoutGuide](https://github.com/kataras/iris/blob/v12.2.7/iris_guide.go#L253) [¶](#TimeoutGuide) added in v12.2.5 ``` type TimeoutGuide interface { // Timeout defines the http timeout, server read & write timeouts. Timeout(requestResponseLife, read [time](/time).[Duration](/time#Duration), write [time](/time).[Duration](/time#Duration)) [MiddlewareGuide](#MiddlewareGuide) } ``` TimeoutGuide is the 4th step of the Guide. Timeout defines the http timeout and the server read & write timeouts. #### type [Tunnel](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L618) [¶](#Tunnel) ``` type Tunnel = [tunnel](/github.com/kataras/tunnel).[Tunnel](/github.com/kataras/tunnel#Tunnel) ``` Tunnel is the Tunnels field of the TunnelingConfiguration structure. #### type [TunnelingConfiguration](https://github.com/kataras/iris/blob/v12.2.7/configuration.go#L616) [¶](#TunnelingConfiguration) ``` type TunnelingConfiguration = [tunnel](/github.com/kataras/tunnel).[Configuration](/github.com/kataras/tunnel#Configuration) ``` TunnelingConfiguration contains configuration for the optional tunneling-through-ngrok feature. Note that ngrok should already be installed on the host machine.
#### type [UnmarshalerFunc](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L60) [¶](#UnmarshalerFunc) ``` type UnmarshalerFunc = [context](/github.com/kataras/iris/[email protected]/context).[UnmarshalerFunc](/github.com/kataras/iris/[email protected]/context#UnmarshalerFunc) ``` UnmarshalerFunc is a shortcut, an alias for the `context#UnmarshalerFunc` type which implements the `context#Unmarshaler` interface for reading a request's body via custom decoders; most of them already implement `context#UnmarshalerFunc`, like json.Unmarshal, xml.Unmarshal, yaml.Unmarshal and every library which follows best practices and is aligned with the Go standards. See `context#UnmarshalBody` for more. Example: <https://github.com/kataras/iris/blob/main/_examples/request-body/read-custom-via-unmarshaler/main.go> #### type [User](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L95) [¶](#User) added in v12.2.0 ``` type User = [context](/github.com/kataras/iris/[email protected]/context).[User](/github.com/kataras/iris/[email protected]/context#User) ``` User is a generic view of an authorized client. See `Context.User` and `SetUser` methods for more. An alias for the `context/User` type. #### type [ViewEngine](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L50) [¶](#ViewEngine) added in v12.2.0 ``` type ViewEngine = [context](/github.com/kataras/iris/[email protected]/context).[ViewEngine](/github.com/kataras/iris/[email protected]/context#ViewEngine) ``` ViewEngine is an alias of `context.ViewEngine`. See HTML, Blocks, Django, Jet, Pug, Ace, Handlebars, etc. #### type [XML](https://github.com/kataras/iris/blob/v12.2.7/aliases.go#L130) [¶](#XML) ``` type XML = [context](/github.com/kataras/iris/[email protected]/context).[XML](/github.com/kataras/iris/[email protected]/context#XML) ``` XML holds the optional settings for the XML renderer. It is an alias of the `context#XML` type.
js-beautify
npm
JavaScript
This little beautifier will reformat and re-indent bookmarklets, ugly JavaScript, unpack scripts packed by <NAME>'s popular packer, as well as partly deobfuscate scripts processed by the npm package [javascript-obfuscator](https://github.com/javascript-obfuscator/javascript-obfuscator). Open [beautifier.io](https://beautifier.io/) to try it out. Options are available via the UI. Contributors Needed === I'm putting this front and center above because existing owners have very limited time to work on this project currently. This is a popular and widely used project, but it desperately needs contributors who have time to commit to fixing both customer-facing bugs and underlying problems with the internal design and implementation. If you are interested, please take a look at the [CONTRIBUTING.md](https://github.com/beautify-web/js-beautify/blob/main/CONTRIBUTING.md), then fix an issue marked with the ["Good first issue"](https://github.com/beautify-web/js-beautify/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) label and submit a PR. Repeat as often as possible. Thanks! Installation === You can install the beautifier for Node.js or Python. Node.js JavaScript --- You may install the NPM package `js-beautify`. When installed globally, it provides an executable `js-beautify` script. As with the Python script, the beautified result is sent to `stdout` unless otherwise configured.
```
$ npm -g install js-beautify
$ js-beautify foo.js
```
You can also use `js-beautify` as a `node` library (install locally, the `npm` default):
```
$ npm install js-beautify
```
Node.js JavaScript (vNext) --- The above installs the latest stable release. To install beta or RC versions:
```
$ npm install js-beautify@next
```
Web Library --- The beautifier can be added to your page as a web library. JS Beautifier is hosted on two CDN services: [cdnjs](https://cdnjs.com/libraries/js-beautify) and rawgit.
To pull the latest version from one of these services, include one set of the script tags below in your document:
```
<script src="https://cdnjs.cloudflare.com/ajax/libs/js-beautify/1.14.9/beautify.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/js-beautify/1.14.9/beautify-css.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/js-beautify/1.14.9/beautify-html.js"></script>

<script src="https://cdnjs.cloudflare.com/ajax/libs/js-beautify/1.14.9/beautify.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/js-beautify/1.14.9/beautify-css.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/js-beautify/1.14.9/beautify-html.min.js"></script>
```
Example usage of a JS tag in html:
```
<!DOCTYPE html>
<html lang="en">
<body>
    . . .
    <script src="https://cdnjs.cloudflare.com/ajax/libs/js-beautify/1.14.9/beautify.min.js"></script>
    <script src="script.js"></script>
</body>
</html>
```
Older versions are available by changing the version number. Disclaimer: These are free services, so there are [no uptime or support guarantees](https://github.com/rgrove/rawgit/wiki/Frequently-Asked-Questions#i-need-guaranteed-100-uptime-should-i-use-cdnrawgitcom). Python --- To install the Python version of the beautifier:
```
$ pip install jsbeautifier
```
Unlike the JavaScript version, the Python version can only reformat JavaScript. It does not work against HTML or CSS files, but you can install *cssbeautifier* for CSS:
```
$ pip install cssbeautifier
```
Usage === You can beautify JavaScript using JS Beautifier in your web browser, or on the command-line using Node.js or Python. Web Browser --- Open [beautifier.io](https://beautifier.io/). Options are available via the UI.
Web Library --- After you embed the `<script>` tags in your `html` file, they expose three functions: `js_beautify`, `css_beautify`, and `html_beautify`. Example usage of beautifying a JSON string:
```
const options = { indent_size: 2, space_in_empty_paren: true }
const dataObj = { completed: false, id: 1, title: "delectus aut autem", userId: 1 }
const dataJson = JSON.stringify(dataObj)
js_beautify(dataJson, options)
/* OUTPUT
{
  "completed": false,
  "id": 1,
  "title": "delectus aut autem",
  "userId": 1
}
*/
```
Node.js JavaScript --- When installed globally, the beautifier provides an executable `js-beautify` script. The beautified result is sent to `stdout` unless otherwise configured.
```
$ js-beautify foo.js
```
To use `js-beautify` as a `node` library (after installing it locally), import and call the appropriate beautifier method for JavaScript (JS), CSS, or HTML. All three method signatures are `beautify(code, options)`. `code` is the string of code to be beautified. `options` is an object with the settings you would like used to beautify the code. The configuration option names are the same as the CLI names but with underscores instead of dashes. For example, `--indent-size 2 --space-in-empty-paren` would be `{ indent_size: 2, space_in_empty_paren: true }`.
```
var beautify = require('js-beautify/js').js,
    fs = require('fs');

fs.readFile('foo.js', 'utf8', function (err, data) {
    if (err) {
        throw err;
    }
    console.log(beautify(data, { indent_size: 2, space_in_empty_paren: true }));
});
```
Python --- After installing, to beautify using Python:
```
$ js-beautify file.js
```
Beautified output goes to `stdout` by default.
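The dashes-to-underscores naming convention described above (CLI flag names map to library option keys in both the Node.js and Python interfaces) can be sketched as a tiny helper. `cliFlagToOptionKey` is a hypothetical name for illustration only, not part of js-beautify:

```javascript
// Hypothetical helper (not part of js-beautify): converts a CLI flag
// name to the equivalent library option key, per the mapping above.
function cliFlagToOptionKey(flag) {
  // Strip the leading "--" (or "-"), then turn dashes into underscores.
  return flag.replace(/^--?/, '').replace(/-/g, '_');
}

console.log(cliFlagToOptionKey('--indent-size'));          // "indent_size"
console.log(cliFlagToOptionKey('--space-in-empty-paren')); // "space_in_empty_paren"
```

The same mapping applies to the Python library, where the keys become attributes on the options object (e.g. `opts.indent_size`).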
To use `jsbeautifier` as a library is simple:
```
import jsbeautifier
res = jsbeautifier.beautify('your JavaScript string')
res = jsbeautifier.beautify_file('some_file.js')
```
...or, to specify some options:
```
opts = jsbeautifier.default_options()
opts.indent_size = 2
opts.space_in_empty_paren = True
res = jsbeautifier.beautify('some JavaScript', opts)
```
The configuration option names are the same as the CLI names but with underscores instead of dashes. The example above would be set on the command-line as `--indent-size 2 --space-in-empty-paren`. Options === These are the command-line flags for both Python and JS scripts:
```
CLI Options:
-f, --file Input file(s) (Pass '-' for stdin)
-r, --replace Write output in-place, replacing input
-o, --outfile Write output to file (default stdout)
--config Path to config file
--type [js|css|html] ["js"] Select beautifier type (NOTE: Does *not* filter files, only defines which beautifier type to run)
-q, --quiet Suppress logging to stdout
-h, --help Show this help
-v, --version Show the version

Beautifier Options:
-s, --indent-size Indentation size [4]
-c, --indent-char Indentation character [" "]
-t, --indent-with-tabs Indent with tabs, overrides -s and -c
-e, --eol Character(s) to use as line terminators. [first newline in file, otherwise "\n"]
-n, --end-with-newline End output with newline
--editorconfig Use EditorConfig to set up the options
-l, --indent-level Initial indentation level [0]
-p, --preserve-newlines Preserve line-breaks (--no-preserve-newlines disables)
-m, --max-preserve-newlines Number of line-breaks to be preserved in one chunk [10]
-P, --space-in-paren Add padding spaces within paren, i.e. f( a, b )
-E, --space-in-empty-paren Add a single space inside empty paren, i.e. f( )
-j, --jslint-happy Enable jslint-stricter mode
-a, --space-after-anon-function Add a space before an anonymous function's parens, i.e. function ()
--space-after-named-function Add a space before a named function's parens, i.e.
function example ()
-b, --brace-style [collapse|expand|end-expand|none][,preserve-inline] [collapse,preserve-inline]
-u, --unindent-chained-methods Don't indent chained method calls
-B, --break-chained-methods Break chained method calls across subsequent lines
-k, --keep-array-indentation Preserve array indentation
-x, --unescape-strings Decode printable characters encoded in xNN notation
-w, --wrap-line-length Wrap lines that exceed N characters [0]
-X, --e4x Pass E4X xml literals through untouched
--good-stuff Warm the cockles of Crockford's heart
-C, --comma-first Put commas at the beginning of new line instead of end
-O, --operator-position Set operator position (before-newline|after-newline|preserve-newline) [before-newline]
--indent-empty-lines Keep indentation on empty lines
--templating List of templating languages (auto,django,erb,handlebars,php,smarty) ["auto"] auto = none in JavaScript, all in HTML
```
These correspond to the underscored option keys for both library interfaces. **defaults per CLI options**
```
{
  "indent_size": 4,
  "indent_char": " ",
  "indent_with_tabs": false,
  "editorconfig": false,
  "eol": "\n",
  "end_with_newline": false,
  "indent_level": 0,
  "preserve_newlines": true,
  "max_preserve_newlines": 10,
  "space_in_paren": false,
  "space_in_empty_paren": false,
  "jslint_happy": false,
  "space_after_anon_function": false,
  "space_after_named_function": false,
  "brace_style": "collapse",
  "unindent_chained_methods": false,
  "break_chained_methods": false,
  "keep_array_indentation": false,
  "unescape_strings": false,
  "wrap_line_length": 0,
  "e4x": false,
  "comma_first": false,
  "operator_position": "before-newline",
  "indent_empty_lines": false,
  "templating": ["auto"]
}
```
**defaults not exposed in the cli**
```
{
  "eval_code": false,
  "space_before_conditional": true
}
```
Notice not all defaults are exposed via the CLI. Historically, the Python and JS APIs have not been 100% identical.
There are still a few other additional cases keeping us from 100% API-compatibility. Loading settings from environment or .jsbeautifyrc (JavaScript-Only) --- In addition to CLI arguments, you may pass config to the JS executable via: * any `jsbeautify_`-prefixed environment variables * a `JSON`-formatted file indicated by the `--config` parameter * a `.jsbeautifyrc` file containing `JSON` data at any level of the filesystem above `$PWD` Configuration sources provided earlier in this stack will override later ones. Setting inheritance and Language-specific overrides --- The settings are a shallow tree whose values are inherited for all languages, but can be overridden. This works for settings passed directly to the API in either implementation. In the JavaScript implementation, settings loaded from a config file, such as .jsbeautifyrc, can also use inheritance/overriding. Below is an example configuration tree showing all the supported locations for language override nodes. We'll use `indent_size` to discuss how this configuration would behave, but any number of settings can be inherited or overridden:
```
{
  "indent_size": 4,
  "html": {
    "end_with_newline": true,
    "js": {
      "indent_size": 2
    },
    "css": {
      "indent_size": 2
    }
  },
  "css": {
    "indent_size": 1
  },
  "js": {
    "preserve_newlines": true
  }
}
```
Using the above example would have the following result: * HTML files + Inherit `indent_size` of 4 spaces from the top-level setting. + The files would also end with a newline. + JavaScript and CSS inside HTML - Inherit the HTML `end_with_newline` setting. - Override their indentation to 2 spaces. * CSS files + Override the top-level setting to an `indent_size` of 1 space. * JavaScript files + Inherit `indent_size` of 4 spaces from the top-level setting. + Set `preserve_newlines` to `true`. CSS & HTML --- In addition to the `js-beautify` executable, `css-beautify` and `html-beautify` are also provided as an easy interface into those scripts.
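Returning briefly to the setting-inheritance rules above: a minimal sketch of how a language-override node shadows a top-level setting, assuming a flat (non-nested) tree. This is illustrative only; `resolveSetting` is a hypothetical name, and js-beautify's actual resolver also handles the nested html > js/css case:

```javascript
// Illustrative sketch of shallow setting inheritance (not js-beautify internals).
// A language block overrides a top-level key; anything it omits is inherited.
function resolveSetting(config, language, key) {
  const overrides = config[language] || {};
  return key in overrides ? overrides[key] : config[key];
}

const config = {
  indent_size: 4,
  css: { indent_size: 1 },
  js: { preserve_newlines: true }
};

console.log(resolveSetting(config, 'css', 'indent_size'));      // 1 (overridden)
console.log(resolveSetting(config, 'js', 'indent_size'));       // 4 (inherited)
console.log(resolveSetting(config, 'js', 'preserve_newlines')); // true
```

The `key in overrides` check (rather than a truthiness test) matters so that explicit `false`/`0` overrides are still honored.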
Alternatively, `js-beautify --css` or `js-beautify --html` will accomplish the same thing, respectively.
```
// Programmatic access
var beautify_js = require('js-beautify'); // also available under "js" export
var beautify_css = require('js-beautify').css;
var beautify_html = require('js-beautify').html;

// All methods accept two arguments, the string to be beautified, and an options object.
```
The CSS & HTML beautifiers are much simpler in scope, and possess far fewer options.
```
CSS Beautifier Options:
-s, --indent-size Indentation size [4]
-c, --indent-char Indentation character [" "]
-t, --indent-with-tabs Indent with tabs, overrides -s and -c
-e, --eol Character(s) to use as line terminators. (default newline - "\\n")
-n, --end-with-newline End output with newline
-b, --brace-style [collapse|expand] ["collapse"]
-L, --selector-separator-newline Add a newline between multiple selectors
-N, --newline-between-rules Add a newline between CSS rules
--indent-empty-lines Keep indentation on empty lines

HTML Beautifier Options:
-s, --indent-size Indentation size [4]
-c, --indent-char Indentation character [" "]
-t, --indent-with-tabs Indent with tabs, overrides -s and -c
-e, --eol Character(s) to use as line terminators. (default newline - "\\n")
-n, --end-with-newline End output with newline
-p, --preserve-newlines Preserve existing line-breaks (--no-preserve-newlines disables)
-m, --max-preserve-newlines Maximum number of line-breaks to be preserved in one chunk [10]
-I, --indent-inner-html Indent <head> and <body> sections. Default is false.
-b, --brace-style [collapse-preserve-inline|collapse|expand|end-expand|none] ["collapse"]
-S, --indent-scripts [keep|separate|normal] ["normal"]
-w, --wrap-line-length Maximum characters per line (0 disables) [250]
-A, --wrap-attributes Wrap attributes to new lines [auto|force|force-aligned|force-expand-multiline|aligned-multiple|preserve|preserve-aligned] ["auto"]
-M, --wrap-attributes-min-attrs Minimum number of html tag attributes for force wrap attribute options [2]
-i, --wrap-attributes-indent-size Indent wrapped attributes to after N characters [indent-size] (ignored if wrap-attributes is "aligned")
-d, --inline List of tags to be considered inline tags
--inline_custom_elements Inline custom elements [true]
-U, --unformatted List of tags (defaults to inline) that should not be reformatted
-T, --content_unformatted List of tags (defaults to pre) whose content should not be reformatted
-E, --extra_liners List of tags (defaults to [head,body,/html]) that should have an extra newline before them.
--editorconfig Use EditorConfig to set up the options
--indent_scripts Sets indent level inside script tags ("normal", "keep", "separate")
--unformatted_content_delimiter Keep text content together between this string [""]
--indent-empty-lines Keep indentation on empty lines
--templating List of templating languages (auto,none,django,erb,handlebars,php,smarty) ["auto"] auto = none in JavaScript, all in html
```
Directives --- Directives let you control the behavior of the Beautifier from within your source files. Directives are placed in comments inside the file. Directives are in the format `/* beautify {name}:{value} */` in CSS and JavaScript. In HTML they are formatted as `<!-- beautify {name}:{value} -->`. ### Ignore directive The `ignore` directive makes the beautifier completely ignore part of a file, treating it as literal text that is not parsed.
The input below will remain unchanged after beautification:
```
// Use ignore when the content is not parsable in the current language, JavaScript in this case.
var a = 1;
/* beautify ignore:start */
{This is some strange{template language{using open-braces?
/* beautify ignore:end */
```
### Preserve directive NOTE: this directive only works in HTML and JavaScript, not CSS. The `preserve` directive makes the Beautifier parse and then keep the existing formatting of a section of code. The input below will remain unchanged after beautification:
```
// Use preserve when the content is valid syntax in the current language, JavaScript in this case.
// This will parse the code and preserve the existing formatting.
/* beautify preserve:start */
{
    browserName: 'internet explorer',
    platform: 'Windows 7',
    version: '8'
}
/* beautify preserve:end */
```
License === You are free to use this in any way you want, in case you find it useful or working for you, but you must keep the copyright notice and license. (MIT) Credits === * Created by <NAME>, [<EMAIL>](mailto:<EMAIL>) * Python version flourished by <NAME> [<EMAIL>](mailto:<EMAIL>) * Command-line for node.js by <NAME> [<EMAIL>](mailto:<EMAIL>) * Maintained and expanded by <NAME> [<EMAIL>](mailto:<EMAIL>) Thanks also to <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME> and others.
manymome
cran
R
Package ‘manymome’ October 7, 2023
Title Mediation, Moderation and Moderated-Mediation After Model Fitting
Version 0.1.13
Description Computes indirect effects, conditional effects, and conditional indirect effects in a structural equation model or path model after model fitting, with no need to define any user parameters or label any paths in the model syntax, using the approach presented in Cheung and Cheung (2023) <doi:10.3758/s13428-023-02224-z>. Can also form bootstrap confidence intervals by doing bootstrapping only once and reusing the bootstrap estimates in all subsequent computations. Supports bootstrap confidence intervals for standardized (partially or completely) indirect effects, conditional effects, and conditional indirect effects as described in Cheung (2009) <doi:10.3758/BRM.41.2.425> and Cheung, <NAME>, and Vong (2022) <doi:10.1037/hea0001188>. Model fitting can be done by structural equation modeling using lavaan() or regression using lm().
URL https://sfcheung.github.io/manymome/
BugReports https://github.com/sfcheung/manymome/issues
License GPL (>= 3)
Encoding UTF-8
RoxygenNote 7.2.3
Suggests knitr, rmarkdown, semPlot, semptools, semTools, Amelia, mice, testthat (>= 3.0.0)
Config/testthat/edition 3
Config/testthat/parallel true
Config/testthat/start-first cond_indirect_*
Imports lavaan, boot, parallel, pbapply, stats, ggplot2, igraph, MASS, methods
Depends R (>= 3.5.0)
LazyData true
VignetteBuilder knitr
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-9871-9448>), <NAME> [aut] (<https://orcid.org/0000-0001-5182-0752>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-10-06 22:40:02 UTC
R topics documented: all_indirect_path... 4 check_pat... 6 coef.cond_indirect_dif... 7 coef.cond_indirect_effect... 8 coef.delta_me... 9 coef.indirec... 10 coef.indirect_lis... 12 coef.indirect_proportio... 13 coef.lm_from_lavaa... 14 cond_indirec... 15 cond_indirect_dif... 22 confint.cond_indirect_dif...
24 confint.cond_indirect_effect... 25 confint.delta_me... 26 confint.indirec... 28 confint.indirect_lis... 29 data_me... 31 data_med_complicate... 31 data_med_mod_... 32 data_med_mod_a... 33 data_med_mod_ab... 34 data_med_mod_... 35 data_med_mod_b_mo... 36 data_med_mod_paralle... 37 data_med_mod_parallel_ca... 38 data_med_mod_seria... 39 data_med_mod_serial_ca... 40 data_med_mod_serial_paralle... 41 data_med_mod_serial_parallel_ca... 42 data_mo... 43 data_mod... 43 data_mod_ca... 44 data_mome_dem... 45 data_mome_demo_missin... 46 data_paralle... 47 data_se... 48 data_seria... 49 data_serial_paralle... 50 data_serial_parallel_laten... 51 delta_me... 52 do_boo... 55 do_m... 57 factor2va... 59 fit2boot_ou... 60 fit2mc_ou... 62 get_one_cond_indirect_effec... 64 get_pro... 65 index_of_mom... 67 indirect_effects_from_lis... 71 indirect_... 73 indirect_proportio... 75 lm2boot_ou... 77 lm2lis... 79 lm_from_lavaan_lis... 80 math_indirec... 81 merge_mod_level... 83 modmed_x1m3w4y... 84 mod_level... 85 plot.cond_indirect_effect... 88 predict.lm_from_lavaa... 91 predict.lm_from_lavaan_lis... 92 predict.lm_lis... 94 print.all_path... 95 print.boot_ou... 96 print.cond_indirect_dif... 97 print.cond_indirect_effect... 98 print.delta_me... 100 print.indirec... 102 print.indirect_lis... 104 print.indirect_proportio... 106 print.lm_lis... 107 print.mc_ou... 108 simple_mediation_laten... 109 subsetting_cond_indirect_effect... 110 subsetting_wlevel... 111 summary.lm_lis... 112 terms.lm_from_lavaa... 113 total_indirect_effec... 114 all_indirect_paths Enumerate All Indirect Effects in a Model Description Check all indirect paths in a model and return them as a list of arguments of x, y, and m, to be used by indirect_effect(). Usage all_indirect_paths(fit = NULL, exclude = NULL, x = NULL, y = NULL) all_paths_to_df(all_paths) Arguments fit A fit object. Either the output of lavaan::lavaan() or its wrapper such as lavaan::sem(), or a list of the output of lm() or the output of lm2list(). 
exclude A character vector of variables to be excluded from the search, such as control variables.
x A character vector of variables that will be included as the x variables. If supplied, only paths that start from these variables will be included in the search. If NULL, the default, then all variables that are one of the predictors in at least one regression equation will be included in the search.
y A character vector of variables that will be included as the y variables. If supplied, only paths that end at these variables will be included in the search. If NULL, the default, then all variables that are the outcome variables in at least one regression equation will be included in the search.
all_paths An all_paths-class object. For example, the output of all_indirect_paths().
Details It makes use of igraph::all_simple_paths() to identify paths in a model.
Value all_indirect_paths() returns a list of the class all_paths. Each element is a list of three character vectors: x, the name of the predictor that starts a path, y, the name of the outcome that ends a path, and m, a character vector of one or more names of the mediators, from x to y. This class has a print method. all_paths_to_df() returns a data frame with three columns, x, y, and m, which can be used by functions such as indirect_effect().
Functions
• all_indirect_paths(): Enumerate all indirect paths.
• all_paths_to_df(): Convert the output of all_indirect_paths() to a data frame with three columns: x, y, and m.
Author(s) <NAME> https://orcid.org/0000-0002-9871-9448
See Also indirect_effect(), lm2list().
many_indirect_effects()
Examples
library(lavaan)
data(data_serial_parallel)
mod <-
"
m11 ~ x + c1 + c2
m12 ~ m11 + x + c1 + c2
m2 ~ x + c1 + c2
y ~ m12 + m2 + m11 + x + c1 + c2
"
fit <- sem(mod, data_serial_parallel, fixed.x = FALSE)
# All indirect paths
out1 <- all_indirect_paths(fit)
out1
names(out1)
# Exclude c1 and c2 in the search
out2 <- all_indirect_paths(fit, exclude = c("c1", "c2"))
out2
names(out2)
# Exclude c1 and c2, and only consider paths that start
# from x and end at y
out3 <- all_indirect_paths(fit, exclude = c("c1", "c2"), x = "x", y = "y")
out3
names(out3)
check_path Check a Path Exists in a Model
Description It checks whether a path, usually an indirect path, exists in a model.
Usage check_path(x, y, m = NULL, fit = NULL, est = NULL)
Arguments
x Character. The name of the predictor at the start of the path.
y Character. The name of the outcome variable at the end of the path.
m A vector of the variable names of the mediators. The path goes from the first mediator successively to the last mediator. If NULL, the default, the path goes from x to y.
fit The fit object. Currently only supports a lavaan::lavaan object or a list of outputs of lm(). It can also be a lavaan.mi object returned by semTools::runMI() or its wrapper, such as semTools::sem.mi().
est The output of lavaan::parameterEstimates(). If NULL, the default, it will be generated from fit. If supplied, fit will be ignored.
Details It checks whether the path defined by a predictor (x), an outcome (y), and optionally a sequence of mediators (m), exists in a model. It can check models in a lavaan::lavaan object or a list of outputs of lm(). It also supports lavaan.mi objects returned by semTools::runMI() or its wrapper, such as semTools::sem.mi().
For example, in the model below, in lavaan syntax m1 ~ x m2 ~ m1 m3 ~ x y ~ m2 + m3 This path is valid: x = "x", y = "y", m = c("m1", "m2") This path is invalid: x = "x", y = "y", m = c("m2"), because there is no direct path from x to m2. This path is also invalid: x = "x", y = "y", m = c("m2", "m1"), because the path from m2 to m1 does not exist. Value A logical vector of length one. TRUE if the path is valid, FALSE if the path is invalid. Examples library(lavaan) data(data_serial_parallel) dat <- data_serial_parallel mod <- " m11 ~ x + c1 + c2 m12 ~ m11 + x + c1 + c2 m2 ~ x + c1 + c2 y ~ m12 + m2 + m11 + x + c1 + c2 " fit <- sem(mod, dat, meanstructure = TRUE, fixed.x = FALSE) # The following paths are valid check_path(x = "x", y = "y", m = c("m11", "m12"), fit = fit) check_path(x = "x", y = "y", m = "m2", fit = fit) # The following paths are invalid check_path(x = "x", y = "y", m = c("m11", "m2"), fit = fit) check_path(x = "x", y = "y", m = c("m12", "m11"), fit = fit) coef.cond_indirect_diff Print the Output of ’cond_indirect_diff()’ Description Extract the change in conditional indirect effect. Usage ## S3 method for class 'cond_indirect_diff' coef(object, ...) Arguments object The output of cond_indirect_diff(). ... Optional arguments. Ignored. Details The coef method of the cond_indirect_diff-class object. Value Scalar: The change in conditional indirect effect in object. See Also cond_indirect_diff() coef.cond_indirect_effects Estimates of Conditional Indirect Effects or Conditional Effects Description Return the estimates of the conditional indirect effects or conditional effects for all levels in the output of cond_indirect_effects(). Usage ## S3 method for class 'cond_indirect_effects' coef(object, ...) Arguments object The output of cond_indirect_effects(). ... Optional arguments. Ignored by the function. Details It extracts and returns the column ind or std in the output of cond_indirect_effects(). Value A numeric vector: The estimates of the conditional effects or conditional indirect effects.
See Also cond_indirect_effects() Examples library(lavaan) dat <- modmed_x1m3w4y1 mod <- " m1 ~ x + w1 + x:w1 m2 ~ m1 y ~ m2 + x + w4 + m2:w4 " fit <- sem(mod, dat, meanstructure = TRUE, fixed.x = FALSE, se = "none", baseline = FALSE) est <- parameterEstimates(fit) # Conditional effects from x to m1 when w1 is equal to each of the levels out1 <- cond_indirect_effects(x = "x", y = "m1", wlevels = c("w1"), fit = fit) out1 coef(out1) # Conditional indirect effects from x1 through m1 and m2 to y, out2 <- cond_indirect_effects(x = "x", y = "y", m = c("m1", "m2"), wlevels = c("w1", "w4"), fit = fit) out2 coef(out2) # Standardized conditional indirect effects from x1 through m1 and m2 to y, out2std <- cond_indirect_effects(x = "x", y = "y", m = c("m1", "m2"), wlevels = c("w1", "w4"), fit = fit, standardized_x = TRUE, standardized_y = TRUE) out2std coef(out2std) coef.delta_med Delta_Med in a ’delta_med’-Class Object Description Return the estimate of Delta_Med in a ’delta_med’-class object. Usage ## S3 method for class 'delta_med' coef(object, ...) Arguments object The output of delta_med(). ... Optional arguments. Ignored. Details It just extracts and returns the element delta_med in the output of delta_med(), the estimate of the Delta_Med proposed by Liu, Yuan, and Li (2023), an R2 -like measure of indirect effect. Value A scalar: The estimate of Delta_Med. Author(s) <NAME> https://orcid.org/0000-0002-9871-9448 References <NAME>., <NAME>., & <NAME>. (2023). A systematic framework for defining R-squared measures in mediation analysis. Psychological Methods. Advance online publication. 
https://doi.org/10.1037/met0000571 See Also delta_med() Examples library(lavaan) dat <- data_med mod <- " m ~ x y ~ m + x " fit <- sem(mod, dat) dm <- delta_med(x = "x", y = "y", m = "m", fit = fit) dm print(dm, full = TRUE) coef(dm) coef.indirect Extract the Indirect Effect or Conditional Indirect Effect Description Return the estimate of the indirect effect in the output of indirect_effect() or the conditional indirect effect in the output of cond_indirect(). Usage ## S3 method for class 'indirect' coef(object, ...) Arguments object The output of indirect_effect() or cond_indirect(). ... Optional arguments. Ignored by the function. Details It extracts and returns the element indirect in the object. If standardized effect is requested when calling indirect_effect() or cond_indirect(), the effect returned is also standardized. Value A scalar: The estimate of the indirect effect or conditional indirect effect. See Also indirect_effect() and cond_indirect(). Examples library(lavaan) dat <- modmed_x1m3w4y1 mod <- " m1 ~ x + w1 + x:w1 m2 ~ x y ~ m1 + m2 + x " fit <- sem(mod, dat, meanstructure = TRUE, fixed.x = FALSE, se = "none", baseline = FALSE) est <- parameterEstimates(fit) # Examples for indirect_effect(): # Indirect effect from x through m2 to y out1 <- indirect_effect(x = "x", y = "y", m = "m2", fit = fit) out1 coef(out1) # Conditional Indirect effect from x1 through m1 to y, # when w1 is 1 SD above mean hi_w1 <- mean(dat$w1) + sd(dat$w1) out2 <- cond_indirect(x = "x", y = "y", m = "m1", wvalues = c(w1 = hi_w1), fit = fit) out2 coef(out2) coef.indirect_list Extract the Indirect Effects from an ’indirect_list’ Object Description Return the estimates of the indirect effects in the output of many_indirect_effects(). Usage ## S3 method for class 'indirect_list' coef(object, ...) Arguments object The output of many_indirect_effects(). ... Optional arguments. Ignored by the function. Details It extracts the estimates in each ’indirect’-class object in the list.
If standardized effect is requested when calling many_indirect_effects(), the effects returned are also standardized. Value A numeric vector of the indirect effects. See Also many_indirect_effects() Examples library(lavaan) data(data_serial_parallel) mod <- " m11 ~ x + c1 + c2 m12 ~ m11 + x + c1 + c2 m2 ~ x + c1 + c2 y ~ m12 + m2 + m11 + x + c1 + c2 " fit <- sem(mod, data_serial_parallel, fixed.x = FALSE) # All indirect paths from x to y paths <- all_indirect_paths(fit, x = "x", y = "y") paths # Indirect effect estimates out <- many_indirect_effects(paths, fit = fit) out coef(out) coef.indirect_proportion Extract the Proportion of Effect Mediated Description Return the proportion of effect mediated in the output of indirect_proportion(). Usage ## S3 method for class 'indirect_proportion' coef(object, ...) Arguments object The output of indirect_proportion(). ... Not used. Details It extracts and returns the element proportion in the input object. Value A scalar: The proportion of effect mediated. See Also indirect_proportion() Examples library(lavaan) dat <- data_med head(dat) mod <- " m ~ x + c1 + c2 y ~ m + x + c1 + c2 " fit <- sem(mod, dat, fixed.x = FALSE) out <- indirect_proportion(x = "x", y = "y", m = "m", fit = fit) out coef(out) coef.lm_from_lavaan Coefficients of an ’lm_from_lavaan’-Class Object Description Returns the path coefficients of the terms in an lm_from_lavaan-class object. Usage ## S3 method for class 'lm_from_lavaan' coef(object, ...) Arguments object A ’lm_from_lavaan’-class object. ... Additional arguments. Ignored. Details An lm_from_lavaan-class object converts a regression model for a variable in a lavaan-class object to a formula-class object. This function simply extracts the path coefficient estimates. Intercept is always included, and set to zero if mean structure is not in the source lavaan-class object. This is an advanced helper used by plot.cond_indirect_effects(). Exported for advanced users and developers.
Value A numeric vector of the path coefficients. See Also lm_from_lavaan_list() Examples library(lavaan) data(data_med) mod <- " m ~ a * x + c1 + c2 y ~ b * m + x + c1 + c2 " fit <- sem(mod, data_med, fixed.x = FALSE) fit_list <- lm_from_lavaan_list(fit) coef(fit_list$m) coef(fit_list$y) cond_indirect Conditional, Indirect, and Conditional Indirect Effects Description Compute the conditional effects, indirect effects, or conditional indirect effects in a structural model fitted by lm(), lavaan::sem(), or semTools::sem.mi(). Usage cond_indirect( x, y, m = NULL, fit = NULL, est = NULL, implied_stats = NULL, wvalues = NULL, standardized_x = FALSE, standardized_y = FALSE, boot_ci = FALSE, level = 0.95, boot_out = NULL, R = 100, seed = NULL, parallel = TRUE, ncores = max(parallel::detectCores(logical = FALSE) - 1, 1), make_cluster_args = list(), progress = TRUE, save_boot_full = FALSE, prods = NULL, get_prods_only = FALSE, save_boot_out = TRUE, mc_ci = FALSE, mc_out = NULL, save_mc_full = FALSE, save_mc_out = TRUE, ci_out = NULL, save_ci_full = FALSE, save_ci_out = TRUE, ci_type = NULL ) cond_indirect_effects( wlevels, x, y, m = NULL, fit = NULL, w_type = "auto", w_method = "sd", sd_from_mean = NULL, percentiles = NULL, est = NULL, implied_stats = NULL, boot_ci = FALSE, R = 100, seed = NULL, parallel = TRUE, ncores = max(parallel::detectCores(logical = FALSE) - 1, 1), make_cluster_args = list(), progress = TRUE, boot_out = NULL, output_type = "data.frame", mod_levels_list_args = list(), mc_ci = FALSE, mc_out = NULL, ci_out = NULL, ci_type = NULL, ... 
) indirect_effect( x, y, m = NULL, fit = NULL, est = NULL, implied_stats = NULL, standardized_x = FALSE, standardized_y = FALSE, boot_ci = FALSE, level = 0.95, boot_out = NULL, R = 100, seed = NULL, parallel = TRUE, ncores = max(parallel::detectCores(logical = FALSE) - 1, 1), make_cluster_args = list(), progress = TRUE, save_boot_full = FALSE, mc_ci = FALSE, mc_out = NULL, save_mc_full = FALSE, save_mc_out = TRUE, ci_out = NULL, save_ci_full = FALSE, save_ci_out = TRUE, ci_type = NULL ) many_indirect_effects(paths, ...) Arguments x Character. The name of the predictor at the start of the path. y Character. The name of the outcome variable at the end of the path. m A vector of the variable names of the mediator(s). The path goes from the first mediator successively to the last mediator. If NULL, the default, the path goes from x to y. fit The fit object. Can be a lavaan::lavaan object or a list of lm() outputs. It can also be a lavaan.mi object returned by semTools::runMI() or its wrapper, such as semTools::sem.mi(). est The output of lavaan::parameterEstimates(). If NULL, the default, it will be generated from fit. If supplied, fit will be ignored. implied_stats Implied means, variances, and covariances of observed variables, of the form of the output of lavaan::lavInspect() with what set to "implied". The stan- dard deviations are extracted from this object for standardization. Default is NULL, and implied statistics will be computed from fit if required. wvalues A numeric vector of named elements. The names are the variable names of the moderators, and the values are the values to which the moderators will be set to. Default is NULL. standardized_x Logical. Whether x will be standardized. Default is FALSE. standardized_y Logical. Whether y will be standardized. Default is FALSE. boot_ci Logical. Whether bootstrap confidence interval will be formed. Default is FALSE. level The level of confidence for the bootstrap confidence interval. Default is .95. 
boot_out If boot_ci is TRUE, users can supply pregenerated bootstrap estimates. This can be the output of do_boot(). For indirect_effect() and cond_indirect_effects(), this can be the output of a previous call to cond_indirect_effects(), indirect_effect(), or cond_indirect() with bootstrap confidence intervals requested. These stored estimates will be reused such that there is no need to do bootstrapping again. If not supplied, the function will try to generate them from fit. R Integer. If boot_ci is TRUE, boot_out is NULL, and bootstrap standard errors were not requested if fit is a lavaan object, this function will do bootstrapping on fit. R is the number of bootstrap samples. Default is 100. For Monte Carlo simulation, this is the number of replications. seed If bootstrapping or Monte Carlo simulation is conducted, this is the seed for the bootstrapping or simulation. Default is NULL and seed is not set. parallel Logical. If bootstrapping is conducted, whether parallel processing will be used. Default is TRUE. If fit is a list of lm() outputs, parallel processing will not be used. ncores Integer. The number of CPU cores to use when parallel is TRUE. Default is the number of non-logical cores minus one (one minimum). Will raise an error if greater than the number of cores detected by parallel::detectCores(). If ncores is set, it will override make_cluster_args in do_boot(). make_cluster_args A named list of additional arguments to be passed to parallel::makeCluster(). For advanced users. See parallel::makeCluster() for details. Default is list(). progress Logical. Display bootstrapping progress or not. Default is TRUE. save_boot_full If TRUE, full bootstrapping results will be stored. Default is FALSE. prods The product terms found. For internal use. get_prods_only If TRUE, will quit early and return the product terms found. The results can be passed to the prods argument when calling this function. Default is FALSE. This function is for internal use.
save_boot_out If boot_out is supplied, whether it will be saved in the output. Default is TRUE. mc_ci Logical. Whether Monte Carlo confidence interval will be formed. Default is FALSE. mc_out If mc_ci is TRUE, users can supply pregenerated Monte Carlo estimates. This can be the output of do_mc(). For indirect_effect() and cond_indirect_effects(), this can be the output of a previous call to cond_indirect_effects(), indirect_effect(), or cond_indirect() with Monte Carlo confidence intervals requested. These stored estimates will be reused such that there is no need to do Monte Carlo simulation again. If not supplied, the function will try to generate them from fit. save_mc_full If TRUE, full Monte Carlo results will be stored. Default is FALSE. save_mc_out If mc_out is supplied, whether it will be saved in the output. Default is TRUE. ci_out If ci_type is supplied, this is the corresponding argument. If ci_type is "boot", this argument will be used as boot_out. If ci_type is "mc", this argument will be used as mc_out. save_ci_full If TRUE, full bootstrapping or Monte Carlo results will be stored. Default is FALSE. save_ci_out If either mc_out or boot_out is supplied, whether it will be saved in the output. Default is TRUE. ci_type The type of confidence intervals to be formed. Can be either "boot" (bootstrapping) or "mc" (Monte Carlo). If not supplied or is NULL, will check other arguments (e.g., boot_ci and mc_ci). If supplied, will override boot_ci and mc_ci. wlevels The output of merge_mod_levels(), or the moderator(s) to be passed to mod_levels_list(). If all the moderators can be represented by one variable, that is, each moderator is (a) a numeric variable, (b) a dichotomous categorical variable, or (c) a factor or string variable used in lm() in fit, then it is a vector of the names of the moderators as they appear in the data frame.
If at least one of the moderators is a categorical variable represented by more than one variable, such as user-created dummy variables used in lavaan::sem(), then it must be a list of the names of the moderators, with such moderators represented by a vector of names. For example: list("w1", c("gpgp2", "gpgp3")), with the first moderator w1 and the second moderator a three-category categorical variable represented by gpgp2 and gpgp3. w_type Character. Whether the moderator is a "numeric" variable or a "categorical" variable. If "auto", the function will try to determine the type automatically. See mod_levels_list() for further information. w_method Character, either "sd" or "percentile". If "sd", the levels are defined by the distance from the mean in terms of standard deviation. If "percentile", the levels are defined in percentiles. See mod_levels_list() for further information. sd_from_mean A numeric vector. Specify the distance in standard deviation from the mean for each level. Default is c(-1, 0, 1) when there is only one moderator, and c(-1, 1) when there is more than one moderator. Ignored if w_method is not equal to "sd". See mod_levels_list() for further information. percentiles A numeric vector. Specify the percentile (in proportion) for each level. Default is c(.16, .50, .84) if there is one moderator, and c(.16, .84) when there is more than one moderator. Ignored if w_method is not equal to "percentile". See mod_levels_list() for further information. output_type The type of output of cond_indirect_effects(). If "data.frame", the default, the output will be converted to a data frame. If any other values, the output is a list of the outputs from cond_indirect(). mod_levels_list_args Additional arguments to be passed to mod_levels_list() if it is called for creating the levels of moderators. Default is list(). ... For many_indirect_effects(), these are arguments to be passed to indirect_effect().
paths The output of all_indirect_paths(). Details For a model with a mediation path moderated by one or more moderators, cond_indirect_effects() can be used to compute the conditional indirect effect from one variable to another variable, at one or more sets of selected value(s) of the moderator(s). If only the effect for one set of value(s) of the moderator(s) is needed, cond_indirect() can be used. If only the mediator(s) is/are specified (m) and no values of moderator(s) are specified, then the indirect effect from one variable (x) to another variable (y) is computed. A convenient wrapper indirect_effect() can be used to compute the indirect effect. If only the value(s) of moderator(s) is/are specified (wvalues or wlevels) and no mediators (m) are specified when calling cond_indirect_effects() or cond_indirect(), then the conditional direct effects from one variable to another are computed. All three functions support using nonparametric bootstrapping (for lavaan or lm outputs) or Monte Carlo simulation (for lavaan outputs only) to form confidence intervals. Bootstrapping or Monte Carlo simulation only needs to be done once. These are the possible ways to generate the bootstrap or Monte Carlo estimates: 1. Do bootstrapping or Monte Carlo simulation in the first call to one of these functions, by setting boot_ci or mc_ci to TRUE and R to the number of bootstrap samples or replications, level to the level of confidence (default .95 or 95%), and seed to reproduce the results (parallel and ncores are optional for bootstrapping). This will take some time to run for bootstrapping. The output will have all bootstrap or Monte Carlo estimates stored. This output, whether it is from indirect_effect(), cond_indirect_effects(), or cond_indirect(), can be reused by any of these three functions by setting boot_out (for bootstrapping) or mc_out (for Monte Carlo simulation) to this output. They will form the confidence intervals using the stored bootstrap or Monte Carlo estimates. 2.
Do bootstrapping using do_boot() or Monte Carlo simulation using do_mc(). The output can be used in the boot_out (for bootstrapping) or mc_out (for Monte Carlo simulation) argument of indirect_effect(), cond_indirect_effects(), and cond_indirect(). 3. For bootstrapping, if lavaan::sem() is used to fit a model and se = "boot" is used, do_boot() can extract the stored bootstrap estimates to generate a boot_out-class object that again can be used in the boot_out argument. If boot_out or mc_out is set, arguments such as R, seed, and parallel will be ignored. Value indirect_effect() and cond_indirect() return an indirect-class object. cond_indirect_effects() returns a cond_indirect_effects-class object. These two classes of objects have their own print methods for printing the results (see print.indirect() and print.cond_indirect_effects()). They also have a coef method for extracting the estimates (coef.indirect() and coef.cond_indirect_effects()) and a confint method for extracting the confidence intervals (confint.indirect() and confint.cond_indirect_effects()). Addition and subtraction can also be conducted on indirect-class objects to estimate and test a function of effects (see math_indirect). Functions • cond_indirect(): Compute conditional, indirect, or conditional indirect effects for one set of levels. • cond_indirect_effects(): Compute the conditional effects or conditional indirect effects for several sets of levels of the moderator(s). • indirect_effect(): Compute the indirect effect. A wrapper of cond_indirect(). Can be used when there is no moderator. • many_indirect_effects(): Compute the indirect effects along more than one path. It calls indirect_effect() once for each path. See Also mod_levels() and merge_mod_levels() for generating levels of moderators. do_boot() for doing bootstrapping before calling these functions.
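The reuse of stored estimates described in point 2 above can be sketched as follows: generate the bootstrap estimates once with do_boot(), then supply them through boot_out so that later calls do not bootstrap again. This is an illustrative sketch only, assuming the manymome package is attached; R should be 2000 or more in real research.

```r
# Hedged sketch: do bootstrapping once, reuse it in several calls
library(lavaan)
library(manymome)
dat <- modmed_x1m3w4y1
mod <- "
m1 ~ x + w1 + x:w1
m2 ~ m1
y ~ m2 + x
"
fit <- sem(mod, dat, meanstructure = TRUE, fixed.x = FALSE,
           se = "none", baseline = FALSE)
# Generate bootstrap estimates once (use a much larger R in practice)
boot_out <- do_boot(fit, R = 50, seed = 1234,
                    parallel = FALSE, progress = FALSE)
# Both calls below reuse the same stored estimates; no re-bootstrapping
out1 <- indirect_effect(x = "x", y = "y", m = c("m1", "m2"),
                        fit = fit, boot_ci = TRUE, boot_out = boot_out)
hi_w1 <- mean(dat$w1) + sd(dat$w1)
out2 <- cond_indirect(x = "x", y = "y", m = c("m1", "m2"),
                      wvalues = c(w1 = hi_w1),
                      fit = fit, boot_ci = TRUE, boot_out = boot_out)
```

Because both confidence intervals are formed from the same bootstrap samples, the results are also internally consistent across the two calls.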
Examples library(lavaan) dat <- modmed_x1m3w4y1 mod <- " m1 ~ a1 * x + d1 * w1 + e1 * x:w1 m2 ~ a2 * x y ~ b1 * m1 + b2 * m2 + cp * x " fit <- sem(mod, dat, meanstructure = TRUE, fixed.x = FALSE, se = "none", baseline = FALSE) est <- parameterEstimates(fit) hi_w1 <- mean(dat$w1) + sd(dat$w1) # Examples for cond_indirect(): # Conditional effect from x to m1 when w1 is 1 SD above mean cond_indirect(x = "x", y = "m1", wvalues = c(w1 = hi_w1), fit = fit) # Indirect effect from x1 through m2 to y indirect_effect(x = "x", y = "y", m = "m2", fit = fit) # Conditional Indirect effect from x1 through m1 to y, when w1 is 1 SD above mean cond_indirect(x = "x", y = "y", m = "m1", wvalues = c(w1 = hi_w1), fit = fit) # Examples for cond_indirect_effects(): # Create levels of w1, the moderators w1levels <- mod_levels("w1", fit = fit) w1levels # Conditional effects from x to m1 when w1 is equal to each of the levels cond_indirect_effects(x = "x", y = "m1", wlevels = w1levels, fit = fit) # Conditional Indirect effect from x1 through m1 to y, # when w1 is equal to each of the levels cond_indirect_effects(x = "x", y = "y", m = "m1", wlevels = w1levels, fit = fit) # Examples for many_indirect_effects(): library(lavaan) data(data_serial_parallel) mod <- " m11 ~ x + c1 + c2 m12 ~ m11 + x + c1 + c2 m2 ~ x + c1 + c2 y ~ m12 + m2 + m11 + x + c1 + c2 " fit <- sem(mod, data_serial_parallel, fixed.x = FALSE) # All indirect paths from x to y paths <- all_indirect_paths(fit, x = "x", y = "y") paths # Indirect effect estimates out <- many_indirect_effects(paths, fit = fit) out cond_indirect_diff Differences In Conditional Indirect Effects Description Compute the difference in conditional indirect effects between two sets of levels of the moderators. Usage cond_indirect_diff(output, from = NULL, to = NULL, level = 0.95) Arguments output A cond_indirect_effects-class object: The output of cond_indirect_effects(). from A row number of output. to A row number of output.
The change in indirect effects is computed by the change in the level(s) of the moderator(s) from Row from to Row to. level The level of confidence for the confidence interval. Default is .95. Details This function takes the output of cond_indirect_effects() and computes the difference in conditional indirect effects between any two rows, that is, between levels of the moderator, or two sets of levels of the moderators when the path has more than one moderator. The difference is meaningful when the difference between the two levels or sets of levels is meaningful. For example, if the two levels are the mean of the moderator and one standard deviation above the mean of the moderator, then this difference is the change in indirect effect when the moderator increases by one standard deviation. If the two levels are 0 and 1, then this difference is the index of moderated mediation as proposed by Hayes (2015). (This index can also be computed directly by index_of_mome(), designed specifically for this purpose.) The function can also compute the change in the standardized indirect effect between two levels of a moderator or two sets of levels of the moderators. This function is intended to be a general purpose function that allows users to compute the difference between any two levels or sets of levels that are meaningful in a context. This function itself does not set the levels of comparison. The levels to be compared need to be set when calling cond_indirect_effects(). This function extracts required information from the output of cond_indirect_effects(). If bootstrap or Monte Carlo estimates are available in the input or bootstrap or Monte Carlo confidence intervals are requested in calling cond_indirect_effects(), cond_indirect_diff() will also form the percentile confidence interval for the difference in conditional indirect effects using the stored estimates. Value A cond_indirect_diff-class object.
This class has a print method (print.cond_indirect_diff()), a coef method (coef.cond_indirect_diff()), and a confint method (confint.cond_indirect_diff()). Functions • cond_indirect_diff(): Compute the difference in conditional indirect effect between two rows in the output of cond_indirect_effects(). References <NAME>. (2015). An index and test of linear moderated mediation. Multivariate Behavioral Research, 50(1), 1-22. doi:10.1080/00273171.2014.962683 See Also index_of_mome() for computing the index of moderated mediation, index_of_momome() for computing the index of moderated moderated mediation, cond_indirect_effects(), mod_levels(), and merge_mod_levels() for preparing the levels to be compared. Examples library(lavaan) dat <- modmed_x1m3w4y1 dat$xw1 <- dat$x * dat$w1 mod <- " m1 ~ a * x + f * w1 + d * xw1 y ~ b * m1 + cp * x " fit <- sem(mod, dat, meanstructure = TRUE, fixed.x = FALSE, se = "none", baseline = FALSE) est <- parameterEstimates(fit) # Create levels of w1, the moderators w1levels <- mod_levels("w1", fit = fit) w1levels # Conditional effects from x to y when w1 is equal to each of the levels boot_out <- fit2boot_out_do_boot(fit, R = 40, seed = 4314, progress = FALSE) out <- cond_indirect_effects(x = "x", y = "y", m = "m1", wlevels = w1levels, fit = fit, boot_ci = TRUE, boot_out = boot_out) out out_ind <- cond_indirect_diff(out, from = 2, to = 1) out_ind coef(out_ind) confint(out_ind) confint.cond_indirect_diff Confidence Interval of the Output of ’cond_indirect_diff()’ Description Extract the confidence interval from the output of cond_indirect_diff(). Usage ## S3 method for class 'cond_indirect_diff' confint(object, parm, level = 0.95, ...) Arguments object The output of cond_indirect_diff(). parm Ignored. level The level of confidence for the confidence interval. Default is .95. Must match the level of the stored confidence interval. ... Optional arguments. Ignored. Details The confint method of the cond_indirect_diff-class object.
The type of confidence intervals depends on the call used to create the object. This function merely extracts the stored confidence intervals. Value A one-row-two-column data frame of the confidence limits. If confidence interval is not available, the limits are NAs. confint.cond_indirect_effects Confidence Intervals of Indirect Effects or Conditional Indirect Effects Description Return the confidence intervals of the conditional indirect effects or conditional effects in the output of cond_indirect_effects(). Usage ## S3 method for class 'cond_indirect_effects' confint(object, parm, level = 0.95, ...) Arguments object The output of cond_indirect_effects(). parm Ignored. Always returns the confidence intervals of the effects for all levels stored. level The level of confidence, default is .95, returning the 95% confidence interval. Ignored for now and will use the level of the stored intervals. ... Additional arguments. Ignored by the function. Details It extracts and returns the columns for confidence intervals, if available. The type of confidence intervals depends on the call used to compute the effects. This function merely retrieves the confidence intervals stored, if any, which could be formed by nonparametric bootstrapping, Monte Carlo simulation, or other methods to be supported in the future. Value A data frame with two columns, one for each confidence limit of the confidence intervals. The number of rows is equal to the number of rows of object. 
See Also cond_indirect_effects() Examples library(lavaan) dat <- modmed_x1m3w4y1 mod <- " m1 ~ x + w1 + x:w1 m2 ~ m1 y ~ m2 + x + w4 + m2:w4 " fit <- sem(mod, dat, meanstructure = TRUE, fixed.x = FALSE, se = "none", baseline = FALSE) est <- parameterEstimates(fit) # Examples for cond_indirect(): # Create levels of w1 and w4 w1levels <- mod_levels("w1", fit = fit) w1levels w4levels <- mod_levels("w4", fit = fit) w4levels w1w4levels <- merge_mod_levels(w1levels, w4levels) # Conditional effects from x to m1 when w1 is equal to each of the levels # R should be at least 2000 or 5000 in real research. out1 <- suppressWarnings(cond_indirect_effects(x = "x", y = "m1", wlevels = w1levels, fit = fit, boot_ci = TRUE, R = 20, seed = 54151, parallel = FALSE, progress = FALSE)) confint(out1) confint.delta_med Confidence Interval for Delta_Med in a ’delta_med’-Class Object Description Return the confidence interval of the Delta_Med in the output of delta_med(). Usage ## S3 method for class 'delta_med' confint(object, parm, level = NULL, ...) Arguments object The output of delta_med(). parm Not used because only one parameter, the Delta_Med, is allowed. level The level of confidence, default is NULL and the level used when the object was created will be used. ... Optional arguments. Ignored. Details It returns the nonparametric bootstrap percentile confidence interval of Delta_Med, proposed by Liu, Yuan, and Li (2023). The object must be the output of delta_med(), with bootstrap confidence interval requested when calling delta_med(). However, the level of confidence can be different from that used when calling delta_med(). Value A one-row matrix of the confidence interval. All values are NA if bootstrap confidence interval was not requested when calling delta_med().
Author(s) <NAME> https://orcid.org/0000-0002-9871-9448 See Also delta_med() Examples library(lavaan) dat <- data_med mod <- " m ~ x y ~ m + x " fit <- sem(mod, dat) # Call do_boot() to generate # bootstrap estimates # Use 2000 or even 5000 for R in real studies # Set parallel to TRUE in real studies for faster bootstrapping boot_out <- do_boot(fit, R = 45, seed = 879, parallel = FALSE, progress = FALSE) # Remove 'progress = FALSE' in practice dm_boot <- delta_med(x = "x", y = "y", m = "m", fit = fit, boot_out = boot_out, progress = FALSE) dm_boot confint(dm_boot) confint.indirect Confidence Interval of Indirect Effect or Conditional Indirect Effect Description Return the confidence interval of the indirect effect or conditional indirect effect stored in the output of indirect_effect() or cond_indirect(). Usage ## S3 method for class 'indirect' confint(object, parm, level = 0.95, ...) Arguments object The output of indirect_effect() or cond_indirect(). parm Ignored because the stored object always has only one parameter. level The level of confidence, default is .95, returning the 95% confidence interval. ... Additional arguments. Ignored by the function. Details It extracts and returns the stored confidence interval if available. The type of confidence interval depends on the call used to compute the effect. This function merely retrieves the stored estimates, which could be generated by nonparametric bootstrapping, Monte Carlo simulation, or other methods to be supported in the future, and uses them to form the percentile confidence interval. Value A numeric vector of two elements, the limits of the confidence interval. See Also indirect_effect() and cond_indirect() Examples dat <- modmed_x1m3w4y1 # Indirect Effect library(lavaan) mod1 <- " m1 ~ x m2 ~ m1 y ~ m2 + x " fit <- sem(mod1, dat, meanstructure = TRUE, fixed.x = FALSE, se = "none", baseline = FALSE) # R should be at least 2000 or 5000 in real research. 
out1 <- indirect_effect(x = "x", y = "y", m = c("m1", "m2"), fit = fit, boot_ci = TRUE, R = 45, seed = 54151, parallel = FALSE, progress = FALSE) out1 confint(out1) confint.indirect_list Confidence Intervals of Indirect Effects in an ’indirect_list’ Object Description Return the confidence intervals of the indirect effects stored in the output of many_indirect_effects(). Usage ## S3 method for class 'indirect_list' confint(object, parm = NULL, level = 0.95, ...) Arguments object The output of many_indirect_effects(). parm Ignored for now. level The level of confidence, default is .95, returning the 95% confidence interval. ... Additional arguments. Ignored by the function. Details It extracts and returns the stored confidence interval if available. The type of confidence intervals depends on the call used to compute the effects. This function merely retrieves the stored estimates, which could be generated by nonparametric bootstrapping, Monte Carlo simulation, or other methods to be supported in the future, and uses them to form the percentile confidence interval. Value A two-column data frame. The columns are the limits of the confidence intervals. See Also many_indirect_effects() Examples library(lavaan) data(data_serial_parallel) mod <- " m11 ~ x + c1 + c2 m12 ~ m11 + x + c1 + c2 m2 ~ x + c1 + c2 y ~ m12 + m2 + m11 + x + c1 + c2 " fit <- sem(mod, data_serial_parallel, fixed.x = FALSE) # All indirect paths from x to y paths <- all_indirect_paths(fit, x = "x", y = "y") paths # Indirect effect estimates # R should be 2000 or even 5000 in real research # parallel should be used in real research. fit_boot <- do_boot(fit, R = 45, seed = 8974, parallel = FALSE, progress = FALSE) out <- many_indirect_effects(paths, fit = fit, boot_ci = TRUE, boot_out = fit_boot) out confint(out) data_med Sample Dataset: Simple Mediation Description A simple mediation model. Usage data_med Format A data frame with 100 rows and 5 variables: x Predictor. Numeric. m Mediator. Numeric. 
y Outcome variable. Numeric. c1 Control variable. Numeric. c2 Control variable. Numeric. Examples library(lavaan) data(data_med) mod <- " m ~ a * x + c1 + c2 y ~ b * m + x + c1 + c2 ab := a * b " fit <- sem(mod, data_med, fixed.x = FALSE) parameterEstimates(fit) data_med_complicated Sample Dataset: A Complicated Mediation Model Description A mediation model with two predictors, two pathways, and two outcome variables. Usage data_med_complicated Format A data frame with 300 rows and 9 variables: x1 Predictor 1. Numeric. x2 Predictor 2. Numeric. m11 Mediator 1 in Path 1. Numeric. m12 Mediator 2 in Path 1. Numeric. m2 Mediator in Path 2. Numeric. y1 Outcome variable 1. Numeric. y2 Outcome variable 2. Numeric. c1 Control variable. Numeric. c2 Control variable. Numeric. Examples data(data_med_complicated) dat <- data_med_complicated summary(lm_m11 <- lm(m11 ~ x1 + x2 + c1 + c2, dat)) summary(lm_m12 <- lm(m12 ~ m11 + x1 + x2 + c1 + c2, dat)) summary(lm_m2 <- lm(m2 ~ x1 + x2 + c1 + c2, dat)) summary(lm_y1 <- lm(y1 ~ m11 + m12 + m2 + x1 + x2 + c1 + c2, dat)) summary(lm_y2 <- lm(y2 ~ m11 + m12 + m2 + x1 + x2 + c1 + c2, dat)) data_med_mod_a Sample Dataset: Simple Mediation with a-Path Moderated Description A simple mediation model with a-path moderated. Usage data_med_mod_a Format A data frame with 100 rows and 6 variables: x Predictor. Numeric. w Moderator. Numeric. m Mediator. Numeric. y Outcome variable. Numeric. c1 Control variable. Numeric. c2 Control variable. Numeric.
Examples library(lavaan) data(data_med_mod_a) data_med_mod_a$xw <- data_med_mod_a$x * data_med_mod_a$w mod <- " m ~ a * x + w + d * xw + c1 + c2 y ~ b * m + x + w + c1 + c2 w ~~ v_w * w w ~ m_w * 1 ab := a * b ab_lo := (a + d * (m_w - sqrt(v_w))) * b ab_hi := (a + d * (m_w + sqrt(v_w))) * b " fit <- sem(mod, data_med_mod_a, meanstructure = TRUE, fixed.x = FALSE) parameterEstimates(fit)[c(1, 3, 6, 11, 12, 31:33), ] data_med_mod_ab Sample Dataset: Simple Mediation with Both Paths Moderated (Two Moderators) Description A simple mediation model with a-path and b-path each moderated by a moderator. Usage data_med_mod_ab Format A data frame with 100 rows and 7 variables: x Predictor. Numeric. w1 Moderator 1. Numeric. w2 Moderator 2. Numeric. m Mediator. Numeric. y Outcome variable. Numeric. c1 Control variable. Numeric. c2 Control variable. Numeric. Examples library(lavaan) data(data_med_mod_ab) data_med_mod_ab$xw1 <- data_med_mod_ab$x * data_med_mod_ab$w1 data_med_mod_ab$mw2 <- data_med_mod_ab$m * data_med_mod_ab$w2 mod <- " m ~ a * x + w1 + d1 * xw1 + c1 + c2 y ~ b * m + x + w1 + w2 + d2 * mw2 + c1 + c2 w1 ~~ v_w1 * w1 w1 ~ m_w1 * 1 w2 ~~ v_w2 * w2 w2 ~ m_w2 * 1 ab := a * b ab_lolo := (a + d1 * (m_w1 - sqrt(v_w1))) * (b + d2 * (m_w2 - sqrt(v_w2))) ab_lohi := (a + d1 * (m_w1 - sqrt(v_w1))) * (b + d2 * (m_w2 + sqrt(v_w2))) ab_hilo := (a + d1 * (m_w1 + sqrt(v_w1))) * (b + d2 * (m_w2 - sqrt(v_w2))) ab_hihi := (a + d1 * (m_w1 + sqrt(v_w1))) * (b + d2 * (m_w2 + sqrt(v_w2))) " fit <- sem(mod, data_med_mod_ab, meanstructure = TRUE, fixed.x = FALSE) parameterEstimates(fit)[c(1, 3, 6, 10, 41:45), ] data_med_mod_ab1 Sample Dataset: Simple Mediation with Both Paths Moderated By a Moderator Description A simple mediation model with a-path and b-path moderated by one moderator. Usage data_med_mod_ab1 Format A data frame with 100 rows and 6 variables: x Predictor. Numeric. w Moderator. Numeric. m Mediator. Numeric. y Outcome variable. Numeric. c1 Control variable. Numeric. 
c2 Control variable. Numeric. Examples library(lavaan) data(data_med_mod_ab1) data_med_mod_ab1$xw <- data_med_mod_ab1$x * data_med_mod_ab1$w data_med_mod_ab1$mw <- data_med_mod_ab1$m * data_med_mod_ab1$w mod <- " m ~ a * x + w + da * xw + c1 + c2 y ~ b * m + x + w + db * mw + c1 + c2 w ~~ v_w * w w ~ m_w * 1 ab := a * b ab_lo := (a + da * (m_w - sqrt(v_w))) * (b + db * (m_w - sqrt(v_w))) ab_hi := (a + da * (m_w + sqrt(v_w))) * (b + db * (m_w + sqrt(v_w))) " fit <- sem(mod, data_med_mod_ab1, meanstructure = TRUE, fixed.x = FALSE) parameterEstimates(fit)[c(1, 3, 6, 9, 38:40), ] data_med_mod_b Sample Dataset: Simple Mediation with b-Path Moderated Description A simple mediation model with b-path moderated. Usage data_med_mod_b Format A data frame with 100 rows and 6 variables: x Predictor. Numeric. w Moderator. Numeric. m Mediator. Numeric. y Outcome variable. Numeric. c1 Control variable. Numeric. c2 Control variable. Numeric. Examples library(lavaan) data(data_med_mod_b) data_med_mod_b$mw <- data_med_mod_b$m * data_med_mod_b$w mod <- " m ~ a * x + w + c1 + c2 y ~ b * m + x + d * mw + c1 + c2 w ~~ v_w * w w ~ m_w * 1 ab := a * b ab_lo := a * (b + d * (m_w - sqrt(v_w))) ab_hi := a * (b + d * (m_w + sqrt(v_w))) " fit <- sem(mod, data_med_mod_b, meanstructure = TRUE, fixed.x = FALSE) parameterEstimates(fit)[c(1, 5, 7, 10, 11, 30:32), ] data_med_mod_b_mod Sample Dataset: A Simple Mediation Model with b-Path Moderated-Moderation Description A simple mediation model with moderated-mediation on the b-path. Usage data_med_mod_b_mod Format A data frame with 100 rows and 7 variables: x Predictor. Numeric. w1 Moderator on b-path. Numeric. w2 Moderator on the moderating effect of w1. Numeric. m Mediator. Numeric. y Outcome variable. Numeric. c1 Control variable. Numeric. c2 Control variable. Numeric.
Examples data(data_med_mod_b_mod) dat <- data_med_mod_b_mod summary(lm_m <- lm(m ~ x + c1 + c2, dat)) summary(lm_y <- lm(y ~ m*w1*w2 + x + c1 + c2, dat)) data_med_mod_parallel Sample Dataset: Parallel Mediation with Two Moderators Description A parallel mediation model with a1-path and b2-path moderated. Usage data_med_mod_parallel Format A data frame with 100 rows and 8 variables: x Predictor. Numeric. w1 Moderator 1. Numeric. w2 Moderator 2. Numeric. m1 Mediator 1. Numeric. m2 Mediator 2. Numeric. y Outcome variable. Numeric. c1 Control variable. Numeric. c2 Control variable. Numeric. Examples library(lavaan) data(data_med_mod_parallel) data_med_mod_parallel$xw1 <- data_med_mod_parallel$x * data_med_mod_parallel$w1 data_med_mod_parallel$m2w2 <- data_med_mod_parallel$m2 * data_med_mod_parallel$w2 mod <- " m1 ~ a1 * x + w1 + da1 * xw1 + c1 + c2 m2 ~ a2 * x + w1 + c1 + c2 y ~ b1 * m1 + b2 * m2 + x + w1 + w2 + db2 * m2w2 + c1 + c2 w1 ~~ v_w1 * w1 w1 ~ m_w1 * 1 w2 ~~ v_w2 * w2 w2 ~ m_w2 * 1 a1b1 := a1 * b1 a2b2 := a2 * b2 a1b1_w1lo := (a1 + da1 * (m_w1 - sqrt(v_w1))) * b1 a1b1_w1hi := (a1 + da1 * (m_w1 + sqrt(v_w1))) * b1 a2b2_w2lo := a2 * (b2 + db2 * (m_w2 - sqrt(v_w2))) a2b2_w2hi := a2 * (b2 + db2 * (m_w2 + sqrt(v_w2))) " fit <- sem(mod, data_med_mod_parallel, meanstructure = TRUE, fixed.x = FALSE) parameterEstimates(fit)[c(1, 3, 6, 10, 11, 15, 48:53), ] data_med_mod_parallel_cat Sample Dataset: Parallel Moderated Mediation with Two Categorical Moderators Description A parallel mediation model with two categorical moderators. Usage data_med_mod_parallel_cat Format A data frame with 300 rows and 8 variables: x Predictor. Numeric. w1 Moderator. String. Values: "group1", "group2", "group3" w2 Moderator. String. Values: "team1", "team2" m1 Mediator 1. Numeric. m2 Mediator 2. Numeric. y Outcome variable. Numeric. c1 Control variable. Numeric. c2 Control variable. Numeric.
Examples data(data_med_mod_parallel_cat) dat <- data_med_mod_parallel_cat summary(lm_m1 <- lm(m1 ~ x*w1 + c1 + c2, dat)) summary(lm_m2 <- lm(m2 ~ x*w1 + c1 + c2, dat)) summary(lm_y <- lm(y ~ m1*w2 + m2*w2 + x + w1 + c1 + c2, dat)) data_med_mod_serial Sample Dataset: Serial Mediation with Two Moderators Description A serial mediation model with a-path and b2-path moderated. Usage data_med_mod_serial Format A data frame with 100 rows and 8 variables: x Predictor. Numeric. w1 Moderator 1. Numeric. w2 Moderator 2. Numeric. m1 Mediator 1. Numeric. m2 Mediator 2. Numeric. y Outcome variable. Numeric. c1 Control variable. Numeric. c2 Control variable. Numeric. Examples library(lavaan) data(data_med_mod_serial) data_med_mod_serial$xw1 <- data_med_mod_serial$x * data_med_mod_serial$w1 data_med_mod_serial$m2w2 <- data_med_mod_serial$m2 * data_med_mod_serial$w2 mod <- " m1 ~ a * x + w1 + da1 * xw1 + c1 + c2 m2 ~ b1 * m1 + x + w1 + c1 + c2 y ~ b2 * m2 + m1 + x + w1 + w2 + db2 * m2w2 + c1 + c2 w1 ~~ v_w1 * w1 w1 ~ m_w1 * 1 w2 ~~ v_w2 * w2 w2 ~ m_w2 * 1 ab1b2 := a * b1 * b2 ab1b2_lolo := (a + da1 * (m_w1 - sqrt(v_w1))) * b1 * (b2 + db2 * (m_w2 - sqrt(v_w2))) ab1b2_lohi := (a + da1 * (m_w1 - sqrt(v_w1))) * b1 * (b2 + db2 * (m_w2 + sqrt(v_w2))) ab1b2_hilo := (a + da1 * (m_w1 + sqrt(v_w1))) * b1 * (b2 + db2 * (m_w2 - sqrt(v_w2))) ab1b2_hihi := (a + da1 * (m_w1 + sqrt(v_w1))) * b1 * (b2 + db2 * (m_w2 + sqrt(v_w2))) " fit <- sem(mod, data_med_mod_serial, meanstructure = TRUE, fixed.x = FALSE) parameterEstimates(fit)[c(1, 3, 6, 11, 16, 49:53), ] data_med_mod_serial_cat Sample Dataset: Serial Moderated Mediation with Two Categorical Moderators Description A serial mediation model with two categorical moderators. Usage data_med_mod_serial_cat Format A data frame with 300 rows and 8 variables: x Predictor. Numeric. w1 Moderator. String. Values: "group1", "group2", "group3" w2 Moderator. String. Values: "team1", "team2" m1 Mediator 1. Numeric. m2 Mediator 2. Numeric.
y Outcome variable. Numeric. c1 Control variable. Numeric. c2 Control variable. Numeric. Examples data(data_med_mod_serial_cat) dat <- data_med_mod_serial_cat summary(lm_m1 <- lm(m1 ~ x*w1 + c1 + c2, dat)) summary(lm_m2 <- lm(m2 ~ m1 + x + w1 + c1 + c2, dat)) summary(lm_y <- lm(y ~ m2*w2 + m1 + x + w1 + c1 + c2, dat)) data_med_mod_serial_parallel Sample Dataset: Serial-Parallel Mediation with Two Moderators Description A serial-parallel mediation model with some paths moderated. Usage data_med_mod_serial_parallel Format A data frame with 100 rows and 9 variables: x Predictor. Numeric. w1 Moderator 1. Numeric. w2 Moderator 2. Numeric. m11 Mediator 1 in Path 1. Numeric. m12 Mediator 2 in Path 1. Numeric. m2 Mediator in Path 2. Numeric. y Outcome variable. Numeric. c1 Control variable. Numeric. c2 Control variable. Numeric. Examples library(lavaan) data(data_med_mod_serial_parallel) data_med_mod_serial_parallel$xw1 <- data_med_mod_serial_parallel$x * data_med_mod_serial_parallel$w1 data_med_mod_serial_parallel$m2w2 <- data_med_mod_serial_parallel$m2 * data_med_mod_serial_parallel$w2 mod <- " m11 ~ a1 * x + w1 + da11 * xw1 + c1 + c2 m12 ~ b11 * m11 + x + w1 + c1 + c2 m2 ~ a2 * x + c1 + c2 y ~ b12 * m12 + b2 * m2 + m11 + x + w1 + w2 + db2 * m2w2 + c1 + c2 w1 ~~ v_w1 * w1 w1 ~ m_w1 * 1 w2 ~~ v_w2 * w2 w2 ~ m_w2 * 1 a1b11b22 := a1 * b11 * b12 a2b2 := a2 * b2 ab := a1b11b22 + a2b2 a1b11b12_w1lo := (a1 + da11 * (m_w1 - sqrt(v_w1))) * b11 * b12 a1b11b12_w1hi := (a1 + da11 * (m_w1 + sqrt(v_w1))) * b11 * b12 a2b2_w2lo := a2 * (b2 + db2 * (m_w2 - sqrt(v_w2))) a2b2_w2hi := a2 * (b2 + db2 * (m_w2 + sqrt(v_w2))) " fit <- sem(mod, data_med_mod_serial_parallel, meanstructure = TRUE, fixed.x = FALSE) parameterEstimates(fit)[parameterEstimates(fit)$label != "", ] data_med_mod_serial_parallel_cat Sample Dataset: Serial-Parallel Moderated Mediation with Two Categorical Moderators Description A serial-parallel mediation model with two categorical moderators.
Usage data_med_mod_serial_parallel_cat Format A data frame with 300 rows and 9 variables: x Predictor. Numeric. w1 Moderator. String. Values: "group1", "group2", "group3" w2 Moderator. String. Values: "team1", "team2" m11 Mediator 1 in Path 1. Numeric. m12 Mediator 2 in Path 1. Numeric. m2 Mediator in Path 2. Numeric. y Outcome variable. Numeric. c1 Control variable. Numeric. c2 Control variable. Numeric. Examples data(data_med_mod_serial_parallel_cat) dat <- data_med_mod_serial_parallel_cat summary(lm_m11 <- lm(m11 ~ x*w1 + c1 + c2, dat)) summary(lm_m12 <- lm(m12 ~ m11 + x + w1 + c1 + c2, dat)) summary(lm_m2 <- lm(m2 ~ x + w1 + c1 + c2, dat)) summary(lm_y <- lm(y ~ m12 + m2*w2 + m11 + x + c1 + c2, dat)) data_mod Sample Dataset: One Moderator Description A one-moderator model. Usage data_mod Format A data frame with 100 rows and 5 variables: x Predictor. Numeric. w Moderator. Numeric. y Outcome variable. Numeric. c1 Control variable. Numeric. c2 Control variable. Numeric. Examples library(lavaan) data(data_mod) data_mod$xw <- data_mod$x * data_mod$w mod <- " y ~ a * x + w + d * xw + c1 + c2 w ~~ v_w * w w ~ m_w * 1 a_lo := a + d * (m_w - sqrt(v_w)) a_hi := a + d * (m_w + sqrt(v_w)) " fit <- sem(mod, data_mod, fixed.x = FALSE) parameterEstimates(fit)[c(1, 3, 6, 7, 24, 25), ] data_mod2 Sample Dataset: Two Moderators Description A two-moderator model. Usage data_mod2 Format A data frame with 100 rows and 6 variables: x Predictor. Numeric. w1 Moderator 1. Numeric. w2 Moderator 2. Numeric. y Outcome variable. Numeric. c1 Control variable. Numeric. c2 Control variable. Numeric.
Examples library(lavaan) data(data_mod2) data_mod2$xw1 <- data_mod2$x * data_mod2$w1 data_mod2$xw2 <- data_mod2$x * data_mod2$w2 mod <- " y ~ a * x + w1 + w2 + d1 * xw1 + d2 * xw2 + c1 + c2 w1 ~~ v_w1 * w1 w1 ~ m_w1 * 1 w2 ~~ v_w2 * w2 w2 ~ m_w2 * 1 a_lolo := a + d1 * (m_w1 - sqrt(v_w1)) + d2 * (m_w2 - sqrt(v_w2)) a_lohi := a + d1 * (m_w1 - sqrt(v_w1)) + d2 * (m_w2 + sqrt(v_w2)) a_hilo := a + d1 * (m_w1 + sqrt(v_w1)) + d2 * (m_w2 - sqrt(v_w2)) a_hihi := a + d1 * (m_w1 + sqrt(v_w1)) + d2 * (m_w2 + sqrt(v_w2)) " fit <- sem(mod, data_mod2, fixed.x = FALSE) parameterEstimates(fit)[c(1, 4, 5, 8:11, 34:37), ] data_mod_cat Sample Dataset: Moderation with One Categorical Moderator Description A moderation model with a categorical moderator. Usage data_mod_cat Format A data frame with 300 rows and 5 variables: x Predictor. Numeric. w Moderator. String. Values: "group1", "group2", "group3" y Outcome variable. Numeric. c1 Control variable. Numeric. c2 Control variable. Numeric. Examples data(data_mod_cat) dat <- data_mod_cat summary(lm_y <- lm(y ~ x*w + c1 + c2, dat)) data_mome_demo Sample Dataset: A Complicated Moderated-Mediation Model Description Generated from a complicated moderated-mediation model for demonstration. Usage data_mome_demo Format A data frame with 200 rows and 11 variables: x1 Predictor 1. Numeric. x2 Predictor 2. Numeric. m1 Mediator 1. Numeric. m2 Mediator 2. Numeric. m3 Mediator 3. Numeric. y1 Outcome Variable 1. Numeric. y2 Outcome Variable 2. Numeric. w1 Moderator 1. Numeric. w2 Moderator 2. Numeric. c1 Control Variable 1. Numeric. c2 Control Variable 2. Numeric.
Details The model: # w1x1 <- x1 * w1 # w2m2 <- w2 * m2 m1 ~ x1 + w1 + w1x1 + x2 + c1 + c2 m2 ~ m1 + c1 + c2 m3 ~ x2 + x1 + c1 + c2 y1 ~ m2 + w2 + w2m2 + x1 + x2 + m3 + c1 + c2 y2 ~ m3 + x2 + x1 + m2 + c1 + c2 # Covariances excluded for brevity data_mome_demo_missing Sample Dataset: A Complicated Moderated-Mediation Model With Missing Data Description Generated from a complicated moderated-mediation model for demonstration, with missing data. Usage data_mome_demo_missing Format A data frame with 200 rows and 11 variables: x1 Predictor 1. Numeric. x2 Predictor 2. Numeric. m1 Mediator 1. Numeric. m2 Mediator 2. Numeric. m3 Mediator 3. Numeric. y1 Outcome Variable 1. Numeric. y2 Outcome Variable 2. Numeric. w1 Moderator 1. Numeric. w2 Moderator 2. Numeric. c1 Control Variable 1. Numeric. c2 Control Variable 2. Numeric. Details A copy of data_mome_demo with some randomly selected cells changed to NA. The number of cases with no missing data is 169. The model: # w1x1 <- x1 * w1 # w2m2 <- w2 * m2 m1 ~ x1 + w1 + w1x1 + x2 + c1 + c2 m2 ~ m1 + c1 + c2 m3 ~ x2 + x1 + c1 + c2 y1 ~ m2 + w2 + w2m2 + x1 + x2 + m3 + c1 + c2 y2 ~ m3 + x2 + x1 + m2 + c1 + c2 # Covariances excluded for brevity data_parallel Sample Dataset: Parallel Mediation Description A parallel mediation model. Usage data_parallel Format A data frame with 100 rows and 6 variables: x Predictor. Numeric. m1 Mediator 1. Numeric. m2 Mediator 2. Numeric. y Outcome variable. Numeric. c1 Control variable. Numeric. c2 Control variable. Numeric. Examples library(lavaan) data(data_parallel) mod <- " m1 ~ a1 * x + c1 + c2 m2 ~ a2 * x + c1 + c2 y ~ b2 * m2 + b1 * m1 + x + c1 + c2 indirect1 := a1 * b1 indirect2 := a2 * b2 indirect := a1 * b1 + a2 * b2 " fit <- sem(mod, data_parallel, meanstructure = TRUE, fixed.x = FALSE) parameterEstimates(fit)[c(1, 4, 7, 8, 27:29), ] data_sem Sample Dataset: A Latent Variable Mediation Model With 4 Factors Description This data set is for testing functions in a four-factor structural model.
Usage data_sem Format A data frame with 200 rows and 14 variables: x01 Indicator. Numeric. x02 Indicator. Numeric. x03 Indicator. Numeric. x04 Indicator. Numeric. x05 Indicator. Numeric. x06 Indicator. Numeric. x07 Indicator. Numeric. x08 Indicator. Numeric. x09 Indicator. Numeric. x10 Indicator. Numeric. x11 Indicator. Numeric. x12 Indicator. Numeric. x13 Indicator. Numeric. x14 Indicator. Numeric. Examples data(data_sem) dat <- data_sem mod <- 'f1 =~ x01 + x02 + x03 f2 =~ x04 + x05 + x06 + x07 f3 =~ x08 + x09 + x10 f4 =~ x11 + x12 + x13 + x14 f3 ~ a1*f1 + a2*f2 f4 ~ b1*f1 + b3*f3 a1b3 := a1 * b3 a2b3 := a2 * b3 ' fit <- lavaan::sem(model = mod, data = data_sem) summary(fit) data_serial Sample Dataset: Serial Mediation Description A serial mediation model. Usage data_serial Format A data frame with 100 rows and 6 variables: x Predictor. Numeric. m1 Mediator 1. Numeric. m2 Mediator 2. Numeric. y Outcome variable. Numeric. c1 Control variable. Numeric. c2 Control variable. Numeric. Examples library(lavaan) data(data_serial) mod <- " m1 ~ a * x + c1 + c2 m2 ~ b1 * m1 + x + c1 + c2 y ~ b2 * m2 + m1 + x + c1 + c2 indirect := a * b1 * b2 " fit <- sem(mod, data_serial, meanstructure = TRUE, fixed.x = FALSE) parameterEstimates(fit)[c(1, 4, 8, 28), ] data_serial_parallel Sample Dataset: Serial-Parallel Mediation Description A mediation model with both serial and parallel components. Usage data_serial_parallel Format A data frame with 100 rows and 7 variables: x Predictor. Numeric. m11 Mediator 1 in Path 1. Numeric. m12 Mediator 2 in Path 1. Numeric. m2 Mediator in Path 2. Numeric. y Outcome variable. Numeric. c1 Control variable. Numeric. c2 Control variable. Numeric.
Examples library(lavaan) data(data_serial_parallel) mod <- " m11 ~ a11 * x + c1 + c2 m12 ~ b11 * m11 + x + c1 + c2 m2 ~ a2 * x + c1 + c2 y ~ b12 * m12 + b2 * m2 + m11 + x + c1 + c2 indirect1 := a11 * b11 * b12 indirect2 := a2 * b2 indirect := a11 * b11 * b12 + a2 * b2 " fit <- sem(mod, data_serial_parallel, meanstructure = TRUE, fixed.x = FALSE) parameterEstimates(fit)[c(1, 4, 8, 11, 12, 34:36), ] data_serial_parallel_latent Sample Dataset: A Latent Mediation Model With Three Mediators Description Generated from a 3-mediator mediation model among seven latent factors, fx1, fx2, fm11, fm12, fm2, fy1, and fy2, each with three indicators. Usage data_serial_parallel_latent Format A data frame with 500 rows and 21 variables: x1 Indicator of fx1. Numeric. x2 Indicator of fx1. Numeric. x3 Indicator of fx1. Numeric. x4 Indicator of fx2. Numeric. x5 Indicator of fx2. Numeric. x6 Indicator of fx2. Numeric. m11a Indicator of fm11. Numeric. m11b Indicator of fm11. Numeric. m11c Indicator of fm11. Numeric. m12a Indicator of fm12. Numeric. m12b Indicator of fm12. Numeric. m12c Indicator of fm12. Numeric. m2a Indicator of fm2. Numeric. m2b Indicator of fm2. Numeric. m2c Indicator of fm2. Numeric. y1 Indicator of fy1. Numeric. y2 Indicator of fy1. Numeric. y3 Indicator of fy1. Numeric. y4 Indicator of fy2. Numeric. y5 Indicator of fy2. Numeric. y6 Indicator of fy2. Numeric. Details The model: fx1 =~ x1 + x2 + x3 fx2 =~ x4 + x5 + x6 fm11 =~ m11a + m11b + m11c fm12 =~ m12a + m12b + m12c fm2 =~ m2a + m2b + m2c fy1 =~ y1 + y2 + y3 fy2 =~ y4 + y5 + y6 fm11 ~ a1 * fx1 fm12 ~ b11 * fm11 + a2m * fx2 fm2 ~ a2 * fx2 fy1 ~ b12 * fm12 + b11y1 * fm11 + cp1 * fx1 fy2 ~ b2 * fm2 + cp2 * fx2 a1b11b12 := a1 * b11 * b12 a1b11y1 := a1 * b11y1 a2b2 := a2 * b2 a2mb12 := a2m * b12 delta_med Delta_Med by Liu, Yuan, and Li (2023) Description It computes the Delta_Med proposed by Liu, Yuan, and Li (2023), an R²-like measure of indirect effect.
Usage delta_med( x, y, m, fit, paths_to_remove = NULL, boot_out = NULL, level = 0.95, progress = TRUE, skip_check_single_x = FALSE, skip_check_m_between_x_y = FALSE, skip_check_x_to_y = FALSE, skip_check_latent_variables = FALSE ) Arguments x The name of the x variable. Must be supplied as a quoted string. y The name of the y variable. Must be supplied as a quoted string. m A vector of the variable names of the mediator(s). If there is more than one mediator, they do not have to be on the same path from x to y. Cannot be NULL for this function. fit The fit object. Must be a lavaan::lavaan object. paths_to_remove A character vector of paths users want to manually remove, specified in lavaan model syntax. For example, c("m2~x", "m3~m2") removes the path from x to m2 and the path from m2 to m3. The default is NULL, and the paths to remove will be determined using the method by Liu et al. (2023). If supplied, then only the paths specified explicitly will be removed. boot_out The output of do_boot(). If supplied, the stored bootstrap estimates will be used to form the nonparametric percentile bootstrap confidence interval of Delta_Med. level The level of confidence of the bootstrap confidence interval. Default is .95. progress Logical. Display bootstrapping progress or not. Default is TRUE. skip_check_single_x Logical. If TRUE, skip the check of whether the model has one and only one x-variable. Default is FALSE (the check is performed). skip_check_m_between_x_y Logical. If TRUE, skip the check of whether all m variables are along a path from x to y. Default is FALSE (the check is performed). skip_check_x_to_y Logical. If TRUE, skip the check of whether there is a direct path from x to y. Default is FALSE (the check is performed). skip_check_latent_variables Logical. If TRUE, skip the check of whether the model has any latent variables. Default is FALSE (the check is performed). Details It computes Delta_Med, an R²-like effect size measure for the indirect effect from one variable (the x-variable) to another variable (the y-variable) through one or more mediators (m, or m1, m2, etc., when there is more than one mediator).
The Delta_Med of one or more mediators is computed as the difference between two R²s: • R₁², the R² when y is predicted by x and all mediators. • R₂², the R² when the mediator(s) of interest is/are removed from the model, while the error term(s) of the mediator(s) is/are kept. Delta_Med is given by R₁² − R₂². Please refer to Liu et al. (2023) for the technical details. The function can also form a nonparametric percentile bootstrap confidence interval of Delta_Med. Value A delta_med class object. It is a list-like object with these major elements: • delta_med: The Delta_Med. • x: The name of the x-variable. • y: The name of the y-variable. • m: A character vector of the mediator(s) along a path. The path runs from the first element to the last element. This class has a print method, a coef method, and a confint method. See print.delta_med(), coef.delta_med(), and confint.delta_med(). Implementation The function identifies all the path(s) pointing to the mediator(s) of concern and fixes the path(s) to zero, effectively removing the mediator(s). However, the model is not refitted, hence keeping the estimates of all other parameters unchanged. It then uses lavaan::lav_model_set_parameters() to update the parameters, lavaan::lav_model_implied() to update the implied statistics, and then calls lavaan::lavInspect() to retrieve the implied variance of the predicted values of y for computing the R₂². Subtracting this R₂² from the R₁² of y then yields Delta_Med. Model Requirements For now, by default, it only computes Delta_Med for the types of models discussed in Liu et al. (2023): • Having one predictor (the x-variable). • Having one or more mediators, the m-variables, which can mediate the effect of x on the outcome variable (the y-variable) in an arbitrary way. • Having one or more outcome variables. Although their models have only one outcome variable, the computation of the Delta_Med is not affected by the presence of other outcome variables. • Having no control variables.
• The mediator(s), m, and the y-variable are continuous. • x can be continuous or categorical. If categorical, it needs to be handled appropriately when fitting the model. • x has a direct path to y. • All the mediators listed in the argument m are present in at least one path from x to y. • None of the paths from x to y are moderated. It can be used for other kinds of models, but support for them is disabled by default. To use this function for cases not discussed in Liu et al. (2023), please disable the relevant requirements stated above using the relevant skip_check_* arguments. An error will be raised if the model fails any of the checks not skipped by users. References <NAME>., <NAME>., & <NAME>. (2023). A systematic framework for defining R-squared measures in mediation analysis. Psychological Methods. Advance online publication. https://doi.org/10.1037/met0000571 See Also print.delta_med(), coef.delta_med(), and confint.delta_med(). Examples library(lavaan) dat <- data_med mod <- " m ~ x y ~ m + x " fit <- sem(mod, dat) dm <- delta_med(x = "x", y = "y", m = "m", fit = fit) dm print(dm, full = TRUE) # Call do_boot() to generate # bootstrap estimates # Use 2000 or even 5000 for R in real studies # Set parallel to TRUE in real studies for faster bootstrapping boot_out <- do_boot(fit, R = 45, seed = 879, parallel = FALSE, progress = FALSE) # Remove 'progress = FALSE' in practice dm_boot <- delta_med(x = "x", y = "y", m = "m", fit = fit, boot_out = boot_out, progress = FALSE) dm_boot confint(dm_boot) do_boot Bootstrap Estimates for ’indirect_effects’ and ’cond_indirect_effects’ Description Generate bootstrap estimates to be used by cond_indirect_effects(), indirect_effect(), and cond_indirect(). Usage do_boot( fit, R = 100, seed = NULL, parallel = TRUE, ncores = max(parallel::detectCores(logical = FALSE) - 1, 1), make_cluster_args = list(), progress = TRUE ) Arguments fit Either (a) a list of lm class objects, or the output of lm2list() (i.e., an lm_list-class
object), or (b) the output of lavaan::sem(). R The number of bootstrap samples. Default is 100. seed The seed for the bootstrapping. Default is NULL and seed is not set. parallel Logical. Whether parallel processing will be used. Default is TRUE. ncores Integer. The number of CPU cores to use when parallel is TRUE. Default is the number of non-logical cores minus one (one minimum). Will raise an error if greater than the number of cores detected by parallel::detectCores(). If ncores is set, it will override make_cluster_args. make_cluster_args A named list of additional arguments to be passed to parallel::makeCluster(). For advanced users. See parallel::makeCluster() for details. Default is list(), no additional arguments. progress Logical. Display progress or not. Default is TRUE. Details It does nonparametric bootstrapping to generate bootstrap estimates of the parameter estimates in a model fitted either by lavaan::sem() or by a sequence of calls to lm(). The stored estimates can then be used by cond_indirect_effects(), indirect_effect(), and cond_indirect() to form bootstrapping confidence intervals. This approach removes the need to repeat bootstrapping in each call to cond_indirect_effects(), indirect_effect(), and cond_indirect(). It also ensures that the same set of bootstrap samples is used in all subsequent analyses. It determines the type of the fit object automatically and then calls lm2boot_out(), fit2boot_out(), or fit2boot_out_do_boot(). Value A boot_out-class object that can be used for the boot_out argument of cond_indirect_effects(), indirect_effect(), and cond_indirect() for forming bootstrap confidence intervals. The object is a list with the number of elements equal to the number of bootstrap samples. Each element is a list of the parameter estimates and sample variances and covariances of the variables in each bootstrap sample. See Also lm2boot_out(), fit2boot_out(), and fit2boot_out_do_boot(), which implement the bootstrapping.
Examples data(data_med_mod_ab1) dat <- data_med_mod_ab1 lm_m <- lm(m ~ x*w + c1 + c2, dat) lm_y <- lm(y ~ m*w + x + c1 + c2, dat) lm_out <- lm2list(lm_m, lm_y) # In real research, R should be 2000 or even 5000 # In real research, no need to set parallel and progress to FALSE # Parallel processing is enabled by default and # progress is displayed by default. lm_boot_out <- do_boot(lm_out, R = 50, seed = 1234, parallel = FALSE, progress = FALSE) wlevels <- mod_levels(w = "w", fit = lm_out) wlevels out <- cond_indirect_effects(wlevels = wlevels, x = "x", y = "y", m = "m", fit = lm_out, boot_ci = TRUE, boot_out = lm_boot_out) out do_mc Monte Carlo Estimates for ’indirect_effects’ and ’cond_indirect_effects’ Description Generate Monte Carlo estimates to be used by cond_indirect_effects(), indirect_effect(), and cond_indirect(). Usage do_mc( fit, R = 100, seed = NULL, parallel = TRUE, ncores = max(parallel::detectCores(logical = FALSE) - 1, 1), make_cluster_args = list(), progress = TRUE ) gen_mc_est(fit, R = 100, seed = NULL) Arguments fit The output of lavaan::sem(). It can also be a lavaan.mi object returned by semTools::runMI() or its wrapper, such as semTools::sem.mi(). The output of stats::lm() is not supported. R The number of replications. Default is 100. seed The seed for generating the Monte Carlo estimates. Default is NULL and seed is not set. parallel Not used. Kept for compatibility with do_boot(). ncores Not used. Kept for compatibility with do_boot(). make_cluster_args Not used. Kept for compatibility with do_boot(). progress Logical. Display progress or not. Default is TRUE. Details It uses the parameter estimates and their variance-covariance matrix to generate Monte Carlo estimates of the parameter estimates in a model fitted by lavaan::sem(). The stored estimates can then be used by cond_indirect_effects(), indirect_effect(), and cond_indirect() to form Monte Carlo confidence intervals.
It also supports a model estimated by multiple imputation using semTools::runMI() or its wrapper, such as semTools::sem.mi(). The pooled estimates and their variance-covariance matrix will be used to generate the Monte Carlo estimates. This approach removes the need to repeat Monte Carlo simulation in each call to cond_indirect_effects(), indirect_effect(), and cond_indirect(). It also ensures that the same set of Monte Carlo estimates is used in all subsequent analyses. Value A mc_out-class object that can be used for the mc_out argument of cond_indirect_effects(), indirect_effect(), and cond_indirect() for forming Monte Carlo confidence intervals. The object is a list with the number of elements equal to the number of Monte Carlo replications. Each element is a list of the parameter estimates and sample variances and covariances of the variables in each Monte Carlo replication. Functions • do_mc(): A general purpose function for creating Monte Carlo estimates to be reused by other functions. It returns a mc_out-class object. • gen_mc_est(): Generate Monte Carlo estimates and store them in the external slot: external$manymome$mc. For advanced users. See Also fit2mc_out(), which implements the Monte Carlo simulation. Examples library(lavaan) data(data_med_mod_ab1) dat <- data_med_mod_ab1 mod <- " m ~ x + w + x:w + c1 + c2 y ~ m + w + m:w + x + c1 + c2 " fit <- sem(mod, dat) # In real research, R should be 5000 or even 10000 mc_out <- do_mc(fit, R = 100, seed = 1234) wlevels <- mod_levels(w = "w", fit = fit) wlevels out <- cond_indirect_effects(wlevels = wlevels, x = "x", y = "y", m = "m", fit = fit, mc_ci = TRUE, mc_out = mc_out) out factor2var Create Dummy Variables Description Create dummy variables from a categorical variable. Usage factor2var( x_value, x_contrasts = "contr.treatment", prefix = "", add_rownames = TRUE ) Arguments x_value The vector of the categorical variable. x_contrasts The contrast to be used. Default is "contr.treatment".
prefix The prefix to be added to the variables to be created. Default is "". add_rownames Whether row names will be added to the output. Default is TRUE. Details Its main use is for creating dummy variables (indicator variables) from a categorical variable, to be used in lavaan::sem(). Optionally, other contrasts can be used through the argument x_contrasts. Value It always returns a matrix with the number of rows equal to the length of the vector (x_value). If the categorical variable has only two categories, so that only one dummy variable is needed, the output is still a one-column "matrix" in R. Examples dat <- data_mod_cat dat <- data.frame(dat, factor2var(dat$w, prefix = "gp", add_rownames = FALSE)) head(dat[, c("w", "gpgroup2", "gpgroup3")], 15) fit2boot_out Bootstrap Estimates for a lavaan Output Description Generate bootstrap estimates from the output of lavaan::sem(). Usage fit2boot_out(fit) fit2boot_out_do_boot( fit, R = 100, seed = NULL, parallel = FALSE, ncores = max(parallel::detectCores(logical = FALSE) - 1, 1), make_cluster_args = list(), progress = TRUE, internal = list() ) Arguments fit The fit object. This function only supports a lavaan::lavaan object. R The number of bootstrap samples. Default is 100. seed The seed for the random resampling. Default is NULL. parallel Logical. Whether parallel processing will be used. Default is FALSE. ncores Integer. The number of CPU cores to use when parallel is TRUE. Default is the number of non-logical cores minus one (one minimum). Will raise an error if greater than the number of cores detected by parallel::detectCores(). If ncores is set, it will override make_cluster_args. make_cluster_args A named list of additional arguments to be passed to parallel::makeCluster(). For advanced users. See parallel::makeCluster() for details. Default is list(). progress Logical. Display progress or not. Default is TRUE. internal A list of arguments to be used internally for debugging. Default is list().
Details

This function is for advanced users. do_boot() is a function users should try first because do_boot() has a general interface for input-specific functions like this one.

If bootstrapping confidence intervals were requested when calling lavaan::sem() by setting se = "boot", fit2boot_out() can be used to extract the stored bootstrap estimates so that they can be reused by indirect_effect(), cond_indirect_effects() and related functions to form bootstrapping confidence intervals for effects such as indirect effects and conditional indirect effects.

If bootstrapping confidence intervals were not requested when fitting the model by lavaan::sem(), fit2boot_out_do_boot() can be used to generate nonparametric bootstrap estimates from the output of lavaan::sem() and store them for use by indirect_effect(), cond_indirect_effects(), and related functions.

This approach removes the need to repeat bootstrapping in each call to indirect_effect(), cond_indirect_effects(), and related functions. It also ensures that the same set of bootstrap samples is used in all subsequent analyses.

Value

A boot_out-class object that can be used for the boot_out argument of indirect_effect(), cond_indirect_effects(), and related functions for forming bootstrapping confidence intervals. The object is a list with the number of elements equal to the number of bootstrap samples. Each element is a list of the parameter estimates and sample variances and covariances of the variables in each bootstrap sample.

Functions

• fit2boot_out(): Process stored bootstrap estimates for functions such as cond_indirect_effects().
• fit2boot_out_do_boot(): Do bootstrapping and store information to be used by cond_indirect_effects() and related functions. Supports parallel processing.

See Also

do_boot(), the general purpose function that users should try first before using this function.
Examples

library(lavaan)
data(data_med_mod_ab1)
dat <- data_med_mod_ab1
dat$"x:w" <- dat$x * dat$w
dat$"m:w" <- dat$m * dat$w
mod <-
"
m ~ x + w + x:w + c1 + c2
y ~ m + w + m:w + x + c1 + c2
"
# Bootstrapping not requested in calling lavaan::sem()
fit <- sem(model = mod, data = dat, fixed.x = FALSE,
           se = "none", baseline = FALSE)
fit_boot_out <- fit2boot_out_do_boot(fit = fit,
                                     R = 40,
                                     seed = 1234,
                                     progress = FALSE)
out <- cond_indirect_effects(wlevels = "w",
                             x = "x", y = "y", m = "m",
                             fit = fit,
                             boot_ci = TRUE,
                             boot_out = fit_boot_out)
out

fit2mc_out    Monte Carlo Estimates for a lavaan Output

Description

Generate Monte Carlo estimates from the output of lavaan::sem().

Usage

fit2mc_out(fit, progress = TRUE)

Arguments

fit  The fit object. This function only supports a lavaan::lavaan object. It can also be a lavaan.mi object returned by semTools::runMI() or its wrapper, such as semTools::sem.mi().
progress  Logical. Display progress or not. Default is TRUE.

Details

This function is for advanced users. do_mc() is a function users should try first because do_mc() has a general interface for input-specific functions like this one.

fit2mc_out() can be used to extract the stored Monte Carlo estimates so that they can be reused by indirect_effect(), cond_indirect_effects() and related functions to form Monte Carlo confidence intervals for effects such as indirect effects and conditional indirect effects.

This approach removes the need to repeat Monte Carlo simulation in each call to indirect_effect(), cond_indirect_effects(), and related functions. It also ensures that the same set of Monte Carlo estimates is used in all subsequent analyses.

Value

A mc_out-class object that can be used for the mc_out argument of indirect_effect(), cond_indirect_effects(), and related functions for forming Monte Carlo confidence intervals. The object is a list with the number of elements equal to the number of Monte Carlo replications.
Each element is a list of the parameter estimates and sample variances and covariances of the variables in each Monte Carlo replication.

See Also

do_mc(), the general purpose function that users should try first before using this function.

Examples

library(lavaan)
data(data_med_mod_ab1)
dat <- data_med_mod_ab1
dat$"x:w" <- dat$x * dat$w
dat$"m:w" <- dat$m * dat$w
mod <-
"
m ~ x + w + x:w + c1 + c2
y ~ m + w + m:w + x + c1 + c2
"
fit <- sem(model = mod, data = dat, fixed.x = FALSE,
           baseline = FALSE)
# In real research, R should be 5000 or even 10000.
fit <- gen_mc_est(fit, R = 100, seed = 453253)
fit_mc_out <- fit2mc_out(fit)
out <- cond_indirect_effects(wlevels = "w",
                             x = "x", y = "y", m = "m",
                             fit = fit,
                             mc_ci = TRUE,
                             mc_out = fit_mc_out)
out

get_one_cond_indirect_effect    Get The Conditional Indirect Effect for One Row of ’cond_indirect_effects’ Output

Description

Return the conditional indirect effect of one row of the output of cond_indirect_effects().

Usage

get_one_cond_indirect_effect(object, row)

get_one_cond_effect(object, row)

Arguments

object  The output of cond_indirect_effects().
row  The row number of the row to be retrieved.

Details

It just extracts the corresponding output of cond_indirect() from the requested row.

Value

An indirect-class object, similar to the output of indirect_effect() and cond_indirect(). See indirect_effect() and cond_indirect() for details on these classes.
Functions

• get_one_cond_effect(): An alias to get_one_cond_indirect_effect()

See Also

cond_indirect_effects

Examples

library(lavaan)
dat <- modmed_x1m3w4y1
mod <-
"
m1 ~ x + w1 + x:w1
m2 ~ m1
y ~ m2 + x + w4 + m2:w4
"
fit <- sem(mod, dat,
           meanstructure = TRUE, fixed.x = FALSE,
           se = "none", baseline = FALSE)
est <- parameterEstimates(fit)
# Examples for cond_indirect():
# Conditional effects from x to m1
# when w1 is equal to each of the default levels
out1 <- cond_indirect_effects(x = "x", y = "m1",
                              wlevels = c("w1", "w4"), fit = fit)
get_one_cond_indirect_effect(out1, 3)
# Conditional indirect effect from x through m1 to y,
# when w1 is equal to each of the levels
out2 <- cond_indirect_effects(x = "x", y = "y", m = c("m1", "m2"),
                              wlevels = c("w1", "w4"), fit = fit)
get_one_cond_indirect_effect(out2, 4)

get_prod    Product Terms (if Any) Along a Path

Description

Identify the product term(s), if any, along a path in a model and return the term(s), with the variables involved and the coefficient(s) of the term(s).

Usage

get_prod(
  x,
  y,
  operator = ":",
  fit = NULL,
  est = NULL,
  data = NULL,
  expand = FALSE
)

Arguments

x  Character. Variable name.
y  Character. Variable name.
operator  Character. The string used to indicate a product term. Default is ":", used in both lm() and lavaan::sem() for observed variables.
fit  The fit object. Currently only supports a lavaan::lavaan object. It can also be a lavaan.mi object returned by semTools::runMI() or its wrapper, such as semTools::sem.mi().
est  The output of lavaan::parameterEstimates(). If NULL, the default, it will be generated from fit. If supplied, fit will be ignored.
data  Data frame (optional). If supplied, it will be used to identify the product terms.
expand  Whether products of more than two terms will be searched. FALSE by default.

Details

This function is used by several functions in manymome to identify product terms along a path.
If possible, this is done by numerically checking whether a column in a dataset is the product of two other columns. If not possible (e.g., the "product term" is the "product" of two latent variables, formed by the products of indicators), then it requires the user to specify an operator.

The detailed workflow of this function can be found in the article https://sfcheung.github.io/manymome/articles/get_prod.html

This function is not intended to be used by users. It is exported such that advanced users or developers can use it.

Value

If at least one product term is found, it returns a list with these elements:
• prod: The names of the product terms found.
• b: The coefficients of these product terms.
• w: The variable, other than x, in each product term.
• x: The x-variable, that is, where the path starts.
• y: The y-variable, that is, where the path ends.
It returns NA if no product term is found along the path.

Examples

dat <- modmed_x1m3w4y1
library(lavaan)
mod <-
"
m1 ~ x + w1 + x:w1
m2 ~ m1 + w2 + m1:w2
m3 ~ m2
y ~ m3 + w4 + m3:w4 + x + w3 + x:w3 + x:w4
"
fit <- sem(model = mod, data = dat,
           meanstructure = TRUE, fixed.x = FALSE)
# One product term
get_prod(x = "x", y = "m1", fit = fit)
# Two product terms
get_prod(x = "x", y = "y", fit = fit)
# No product term
get_prod(x = "m2", y = "m3", fit = fit)

index_of_mome    Index of Moderated Mediation and Index of Moderated Moderated Mediation

Description

It computes the index of moderated mediation and the index of moderated moderated mediation proposed by Hayes (2015, 2018).

Usage

index_of_mome(
  x,
  y,
  m = NULL,
  w = NULL,
  fit = NULL,
  boot_ci = FALSE,
  level = 0.95,
  boot_out = NULL,
  R = 100,
  seed = NULL,
  progress = TRUE,
  mc_ci = FALSE,
  mc_out = NULL,
  ci_type = NULL,
  ci_out = NULL,
  ...
)

index_of_momome(
  x,
  y,
  m = NULL,
  w = NULL,
  z = NULL,
  fit = NULL,
  boot_ci = FALSE,
  level = 0.95,
  boot_out = NULL,
  R = 100,
  seed = NULL,
  progress = TRUE,
  mc_ci = FALSE,
  mc_out = NULL,
  ci_type = NULL,
  ci_out = NULL,
  ...
)

Arguments

x  Character. The name of the predictor at the start of the path.
y  Character. The name of the outcome variable at the end of the path.
m  A vector of the variable names of the mediator(s). The path goes from the first mediator successively to the last mediator. If NULL, the default, the path goes from x to y.
w  Character. The name of the moderator.
fit  The fit object. Can be a lavaan::lavaan object, a list of lm() outputs, or an object created by lm2list(). It can also be a lavaan.mi object returned by semTools::runMI() or its wrapper, such as semTools::sem.mi().
boot_ci  Logical. Whether a bootstrap confidence interval will be formed. Default is FALSE.
level  The level of confidence for the bootstrap confidence interval. Default is .95.
boot_out  If boot_ci is TRUE, users can supply pregenerated bootstrap estimates. This can be the output of do_boot(). For indirect_effect() and cond_indirect_effects(), this can be the output of a previous call to cond_indirect_effects(), indirect_effect(), or cond_indirect() with bootstrap confidence intervals requested. These stored estimates will be reused such that there is no need to do bootstrapping again. If not supplied, the function will try to generate them from fit.
R  Integer. If boot_ci is TRUE, boot_out is NULL, and bootstrap standard errors were not requested when fit is a lavaan object, this function will do bootstrapping on fit. R is the number of bootstrap samples. Default is 100. For Monte Carlo simulation, this is the number of replications.
seed  If bootstrapping or Monte Carlo simulation is conducted, this is the seed for the bootstrapping or simulation. Default is NULL and the seed is not set.
progress  Logical. Display bootstrapping progress or not. Default is TRUE.
mc_ci  Logical. Whether a Monte Carlo confidence interval will be formed. Default is FALSE.
mc_out  If mc_ci is TRUE, users can supply pregenerated Monte Carlo estimates. This can be the output of do_mc().
For indirect_effect() and cond_indirect_effects(), this can be the output of a previous call to cond_indirect_effects(), indirect_effect(), or cond_indirect() with Monte Carlo confidence intervals requested. These stored estimates will be reused such that there is no need to do Monte Carlo simulation again. If not supplied, the function will try to generate them from fit.
ci_type  The type of confidence intervals to be formed. Can be either "boot" (bootstrapping) or "mc" (Monte Carlo). If not supplied or is NULL, will check other arguments (e.g., boot_ci and mc_ci). If supplied, will override boot_ci and mc_ci.
ci_out  If ci_type is supplied, this is the corresponding argument. If ci_type is "boot", this argument will be used as boot_out. If ci_type is "mc", this argument will be used as mc_out.
...  Arguments to be passed to cond_indirect_effects()
z  Character. The name of the second moderator, for computing the index of moderated moderated mediation.

Details

The function index_of_mome() computes the index of moderated mediation proposed by Hayes (2015). It supports any path in a model with one (and only one) component path moderated. For example, x->m1->m2->y with x->m1 moderated by w. It measures the change in indirect effect when the moderator increases by one unit.

The function index_of_momome() computes the index of moderated moderated mediation proposed by Hayes (2018). It supports any path in a model, with two component paths moderated, each by one moderator. For example, x->m1->m2->y with x->m1 moderated by w and m2->y moderated by z. It measures the change in the index of moderated mediation of one moderator when the other moderator increases by one unit.

Value

It returns a cond_indirect_diff-class object. This class has a print method (print.cond_indirect_diff()), a coef method for extracting the index (coef.cond_indirect_diff()), and a confint method for extracting the confidence interval if available (confint.cond_indirect_diff()).
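The arithmetic behind the index can be illustrated with a base-R sketch (simulated data and hypothetical variable names; this is not manymome's implementation). For a model with m ~ a*x + f*w + d*x:w and y ~ b*m + cp*x, the conditional indirect effect is (a + d*w)*b, so the index of moderated mediation is d*b:

```r
# Minimal numeric sketch of the index of moderated mediation (Hayes, 2015).
# The data are simulated and the variable names are hypothetical.
set.seed(1234)
n <- 500
x <- rnorm(n)
w <- rnorm(n)
m <- 0.4 * x + 0.2 * w + 0.3 * x * w + rnorm(n)
y <- 0.5 * m + 0.2 * x + rnorm(n)
fit_m <- lm(m ~ x * w)   # coefficients: a ("x"), f ("w"), d ("x:w")
fit_y <- lm(y ~ m + x)   # coefficients: b ("m"), cp ("x")
a <- coef(fit_m)["x"]
d <- coef(fit_m)["x:w"]
b <- coef(fit_y)["m"]
# Conditional indirect effect at a given value of w:
ind_at <- function(wv) (a + d * wv) * b
# The index is d * b, the change in the indirect effect per unit of w:
index_mome <- unname(d * b)
all.equal(index_mome, unname(ind_at(1) - ind_at(0)))
```

In manymome itself this index would be obtained by index_of_mome(); the sketch only shows the quantity being estimated.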
Functions

• index_of_mome(): Compute the index of moderated mediation.
• index_of_momome(): Compute the index of moderated moderated mediation.

References

<NAME>. (2015). An index and test of linear moderated mediation. Multivariate Behavioral Research, 50(1), 1-22. doi:10.1080/00273171.2014.962683
<NAME>. (2018). Partial, conditional, and moderated moderated mediation: Quantification, inference, and interpretation. Communication Monographs, 85(1), 4-40. doi:10.1080/03637751.2017.1352100

See Also

cond_indirect_effects()

Examples

library(lavaan)
dat <- modmed_x1m3w4y1
dat$xw1 <- dat$x * dat$w1
mod <-
"
m1 ~ a * x + f * w1 + d * xw1
y ~ b * m1 + cp * x
ind_mome := d * b
"
fit <- sem(mod, dat,
           meanstructure = TRUE, fixed.x = FALSE,
           se = "none", baseline = FALSE)
est <- parameterEstimates(fit)
# R should be at least 2000 or even 5000 in real research.
# parallel is set to TRUE by default.
# Therefore, in research, the argument parallel can be omitted.
out_mome <- index_of_mome(x = "x", y = "y", m = "m1", w = "w1",
                          fit = fit,
                          boot_ci = TRUE,
                          R = 42,
                          seed = 4314,
                          parallel = FALSE,
                          progress = FALSE)
out_mome
coef(out_mome)
# From lavaan
print(est[19, ], nd = 8)
confint(out_mome)

library(lavaan)
dat <- modmed_x1m3w4y1
dat$xw1 <- dat$x * dat$w1
dat$m1w4 <- dat$m1 * dat$w4
mod <-
"
m1 ~ a * x + f1 * w1 + d1 * xw1
y ~ b * m1 + f4 * w4 + d4 * m1w4 + cp * x
ind_momome := d1 * d4
"
fit <- sem(mod, dat,
           meanstructure = TRUE, fixed.x = FALSE,
           se = "none", baseline = FALSE)
est <- parameterEstimates(fit)
# See the example of index_of_mome on how to request
# bootstrap confidence interval.
out_momome <- index_of_momome(x = "x", y = "y", m = "m1",
                              w = "w1", z = "w4", fit = fit)
out_momome
coef(out_momome)
print(est[32, ], nd = 8)

indirect_effects_from_list    Coefficient Table of an ’indirect_list’ Class Object

Description

Create a coefficient table for the point estimates and confidence intervals (if available) in the output of many_indirect_effects().
Usage

indirect_effects_from_list(object, add_sig = TRUE, pvalue = FALSE, se = FALSE)

Arguments

object  The output of indirect_effect() or cond_indirect().
add_sig  Whether a column of significance test results will be added. Default is TRUE.
pvalue  Logical. If TRUE, asymmetric p-values based on bootstrapping will be added if available. Default is FALSE.
se  Logical. If TRUE and confidence intervals are available, the standard errors of the estimates are also added. They are simply the standard deviations of the bootstrap estimates or Monte Carlo simulated values, depending on the method used to form the confidence intervals.

Details

If a bootstrapping confidence interval was requested, this method has the option to add p-values computed by the method presented in Asparouhov and Muthén (2021). Note that these p-values are asymmetric bootstrap p-values based on the distribution of the bootstrap estimates. They are not computed based on the distribution under the null hypothesis.

For a p-value of a, it means that a 100(1 - a)% bootstrapping confidence interval will have one of its limits equal to 0. A confidence interval with a higher confidence level will include zero, while a confidence interval with a lower confidence level will exclude zero.

Value

A data frame with the indirect effect estimates and confidence intervals (if available). It also has a string column, "Sig", for significance test results if add_sig is TRUE and confidence intervals are available.

References

<NAME>., & <NAME>. (2021). Bootstrap p-value computation.
Retrieved from https://www.statmodel.com/download/Bootstrap%20-%20Pvalue.pdf

See Also

many_indirect_effects()

Examples

library(lavaan)
data(data_serial_parallel)
mod <-
"
m11 ~ x + c1 + c2
m12 ~ m11 + x + c1 + c2
m2 ~ x + c1 + c2
y ~ m12 + m2 + m11 + x + c1 + c2
"
fit <- sem(mod, data_serial_parallel, fixed.x = FALSE)
# All indirect paths from x to y
paths <- all_indirect_paths(fit, x = "x", y = "y")
paths
# Indirect effect estimates
out <- many_indirect_effects(paths, fit = fit)
out
# Create a data frame of the indirect effect estimates
out_df <- indirect_effects_from_list(out)
out_df

indirect_i    Indirect Effect (No Bootstrapping)

Description

It computes an indirect effect, optionally conditional on the value(s) of moderator(s) if present.

Usage

indirect_i(
  x,
  y,
  m = NULL,
  fit = NULL,
  est = NULL,
  implied_stats = NULL,
  wvalues = NULL,
  standardized_x = FALSE,
  standardized_y = FALSE,
  computation_digits = 5,
  prods = NULL,
  get_prods_only = FALSE,
  data = NULL,
  expand = TRUE,
  warn = TRUE,
  allow_mixing_lav_and_obs = TRUE
)

Arguments

x  Character. The name of the predictor at the start of the path.
y  Character. The name of the outcome variable at the end of the path.
m  A vector of the variable names of the mediator(s). The path goes from the first mediator successively to the last mediator. If NULL, the default, the path goes from x to y.
fit  The fit object. Currently only supports lavaan::lavaan objects. Support for lists of lm() output is implemented by high level functions such as indirect_effect() and cond_indirect_effects(). It can also be a lavaan.mi object returned by semTools::runMI() or its wrapper, such as semTools::sem.mi().
est  The output of lavaan::parameterEstimates(). If NULL, the default, it will be generated from fit. If supplied, fit will be ignored.
implied_stats  Implied means, variances, and covariances of observed variables and latent variables (if any), of the form of the output of lavaan::lavInspect() with what set to "implied", but with means extracted with what set to "mean.ov" and "mean.lv". The standard deviations are extracted from this object for standardization. Default is NULL, and implied statistics will be computed from fit if required.
wvalues  A numeric vector of named elements. The names are the variable names of the moderators, and the values are the values to which the moderators will be set. Default is NULL.
standardized_x  Logical. Whether x will be standardized. Default is FALSE.
standardized_y  Logical. Whether y will be standardized. Default is FALSE.
computation_digits  The number of digits in storing the computation in text. Default is 5.
prods  The product terms found. For internal use.
get_prods_only  If TRUE, it will quit early and return the product terms found. The results can be passed to the prods argument when calling this function. Default is FALSE. For internal use.
data  Data frame (optional). If supplied, it will be used to identify the product terms. For internal use.
expand  Whether products of more than two terms will be searched. TRUE by default. For internal use.
warn  If TRUE, the default, the function will warn against possible misspecification, such as not setting the value of a moderator which moderates one of the component paths. Setting this to FALSE will suppress these warnings. Suppress them only when the moderators are omitted intentionally.
allow_mixing_lav_and_obs  If TRUE, it accepts a path with both latent variables and observed variables. Default is TRUE.

Details

This function is a low-level function called by indirect_effect(), cond_indirect_effects(), and cond_indirect(), which call this function multiple times if a bootstrap confidence interval is requested.

This function usually should not be used directly.
It is exported for advanced users and developers.

Value

It returns an indirect-class object. This class has the following methods: coef.indirect(), print.indirect(). The confint.indirect() method is used only when called by cond_indirect() or cond_indirect_effects().

See Also

indirect_effect(), cond_indirect_effects(), and cond_indirect(), the high level functions that should usually be used.

Examples

library(lavaan)
dat <- modmed_x1m3w4y1
mod <-
"
m1 ~ a1 * x + b1 * w1 + d1 * x:w1
m2 ~ a2 * m1 + b2 * w2 + d2 * m1:w2
m3 ~ a3 * m2 + b3 * w3 + d3 * m2:w3
y ~ a4 * m3 + b4 * w4 + d4 * m3:w4
"
fit <- sem(mod, dat,
           meanstructure = TRUE, fixed.x = FALSE,
           se = "none", baseline = FALSE)
est <- parameterEstimates(fit)
wvalues <- c(w1 = 5, w2 = 4, w3 = 2, w4 = 3)
# Compute the conditional indirect effect by indirect_i()
indirect_1 <- indirect_i(x = "x", y = "y",
                         m = c("m1", "m2", "m3"),
                         fit = fit, wvalues = wvalues)
# Manually compute the conditional indirect effect
indirect_2 <- (est[est$label == "a1", "est"] +
               wvalues["w1"] * est[est$label == "d1", "est"]) *
              (est[est$label == "a2", "est"] +
               wvalues["w2"] * est[est$label == "d2", "est"]) *
              (est[est$label == "a3", "est"] +
               wvalues["w3"] * est[est$label == "d3", "est"]) *
              (est[est$label == "a4", "est"] +
               wvalues["w4"] * est[est$label == "d4", "est"])
# They should be the same
coef(indirect_1)
indirect_2

indirect_proportion    Proportion of Effect Mediated

Description

It computes the proportion of effect mediated along a pathway.

Usage

indirect_proportion(x, y, m = NULL, fit = NULL)

Arguments

x  The name of the x variable. Must be supplied as a quoted string.
y  The name of the y variable. Must be supplied as a quoted string.
m  A vector of the variable names of the mediator(s). The path goes from the first mediator successively to the last mediator. Cannot be NULL for this function.
fit  The fit object. Can be a lavaan::lavaan object or a list of lm() outputs.
It can also be a lavaan.mi object returned by semTools::runMI() or its wrapper, such as semTools::sem.mi().

Details

The proportion of effect mediated along a path from x to y is the indirect effect along this path divided by the total effect from x to y (<NAME>, 1975). This total effect is equal to the sum of all indirect effects from x to y and the direct effect from x to y. To ensure that the proportion can indeed be interpreted as a proportion, this function computes the proportion only if the signs of all the indirect and direct effects from x to y are the same (i.e., all effects positive or all effects negative).

Value

An indirect_proportion class object. It is a list-like object with these major elements:
• proportion: The proportion of effect mediated.
• x: The name of the x-variable.
• y: The name of the y-variable.
• m: A character vector of the mediator(s) along a path. The path runs from the first element to the last element.
This class has a print method and a coef method.

References

<NAME>., & <NAME>. (1975). The decomposition of effects in path analysis. American Sociological Review, 40(1), 37. doi:10.2307/2094445

See Also

print.indirect_proportion() for the print method, and coef.indirect_proportion() for the coef method.

Examples

library(lavaan)
dat <- data_med
head(dat)
mod <-
"
m ~ x + c1 + c2
y ~ m + x + c1 + c2
"
fit <- sem(mod, dat, fixed.x = FALSE)
out <- indirect_proportion(x = "x", y = "y", m = "m", fit = fit)
out

lm2boot_out    Bootstrap Estimates for lm Outputs

Description

Generate bootstrap estimates for models in a list of ’lm’ outputs.

Usage

lm2boot_out(outputs, R = 100, seed = NULL, progress = TRUE)

lm2boot_out_parallel(
  outputs,
  R = 100,
  seed = NULL,
  parallel = FALSE,
  ncores = max(parallel::detectCores(logical = FALSE) - 1, 1),
  make_cluster_args = list(),
  progress = TRUE
)

Arguments

outputs  A list of lm class objects, or the output of lm2list() (i.e., an lm_list-class object).
R  The number of bootstrap samples. Default is 100.
seed  The seed for the random resampling. Default is NULL.
progress  Logical. Display progress or not. Default is TRUE.
parallel  Logical. Whether parallel processing will be used. Default is FALSE.
ncores  Integer. The number of CPU cores to use when parallel is TRUE. Default is the number of non-logical cores minus one (one minimum). Will raise an error if greater than the number of cores detected by parallel::detectCores(). If ncores is set, it will override make_cluster_args.
make_cluster_args  A named list of additional arguments to be passed to parallel::makeCluster(). For advanced users. See parallel::makeCluster() for details. Default is list().

Details

This function is for advanced users. do_boot() is a function users should try first because do_boot() has a general interface for input-specific functions like this one.

It does nonparametric bootstrapping to generate bootstrap estimates of the regression coefficients in the regression models of a list of lm() outputs, or an lm_list-class object created by lm2list(). The stored estimates can be used by indirect_effect(), cond_indirect_effects(), and related functions in forming bootstrapping confidence intervals for effects such as indirect effects and conditional indirect effects.

This approach removes the need to repeat bootstrapping in each call to indirect_effect(), cond_indirect_effects(), and related functions. It also ensures that the same set of bootstrap samples is used in all subsequent analyses.

Value

A boot_out-class object that can be used for the boot_out argument of indirect_effect(), cond_indirect_effects(), and related functions for forming bootstrapping confidence intervals. The object is a list with the number of elements equal to the number of bootstrap samples. Each element is a list of the parameter estimates and sample variances and covariances of the variables in each bootstrap sample.
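The kind of nonparametric bootstrapping described above can be sketched in base R: resample rows with replacement, refit each regression on the resampled data, and keep the quantities of interest. This is a simplified illustration with simulated data and hypothetical variable names; lm2boot_out() itself also stores the sample variances and covariances in each replication:

```r
# Simplified sketch of nonparametric bootstrapping for lm() models.
# Data are simulated; variable names are hypothetical.
set.seed(1234)
n <- 200
dat <- data.frame(x = rnorm(n), w = rnorm(n))
dat$m <- 0.4 * dat$x + 0.2 * dat$w + rnorm(n)
dat$y <- 0.5 * dat$m + 0.1 * dat$x + rnorm(n)
R <- 200  # use 2000 or even 5000 in real research
boot_ab <- replicate(R, {
  i <- sample.int(n, replace = TRUE)   # resample rows with replacement
  d_i <- dat[i, ]
  a <- coef(lm(m ~ x + w, data = d_i))["x"]
  b <- coef(lm(y ~ m + x, data = d_i))["m"]
  unname(a * b)  # indirect effect in this bootstrap sample
})
# Percentile bootstrap confidence interval for the indirect effect:
quantile(boot_ab, c(.025, .975))
```

Storing `boot_ab` once and reusing it mirrors the design rationale above: all later analyses see the same set of bootstrap samples.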
Functions

• lm2boot_out(): Generate bootstrap estimates using one process (serial, without parallelization).
• lm2boot_out_parallel(): Generate bootstrap estimates using parallel processing.

See Also

do_boot(), the general purpose function that users should try first before using this function.

Examples

data(data_med_mod_ab1)
dat <- data_med_mod_ab1
lm_m <- lm(m ~ x*w + c1 + c2, dat)
lm_y <- lm(y ~ m*w + x + c1 + c2, dat)
lm_out <- lm2list(lm_m, lm_y)
# In real research, R should be 2000 or even 5000
# In real research, no need to set progress to FALSE
# Progress is displayed by default.
lm_boot_out <- lm2boot_out(lm_out, R = 100, seed = 1234,
                           progress = FALSE)
out <- cond_indirect_effects(wlevels = "w",
                             x = "x", y = "y", m = "m",
                             fit = lm_out,
                             boot_ci = TRUE,
                             boot_out = lm_boot_out)
out

lm2list    Join ’lm()’ Output to Form an ’lm_list’-Class Object

Description

The resulting model can be used by indirect_effect(), cond_indirect_effects(), or cond_indirect() as a path model, as if fitted by lavaan::sem().

Usage

lm2list(...)

Arguments

...  The lm() outputs to be grouped in a list.

Details

If a path model with mediation and/or moderation is fitted by a set of regression models using lm(), this function can combine them to an object of the class lm_list that represents a path model, as one fitted by structural equation model functions such as lavaan::sem(). This class of object can be used by some functions, such as indirect_effect(), cond_indirect_effects(), and cond_indirect(), as if they were the output of fitting a path model, with the regression coefficients treated as path coefficients.

The regression outputs to be combined need to meet the following requirements:
• All models must be connected to at least one other model. That is, a regression model must either have (a) at least one predictor that is the outcome variable of another model, or (b) its outcome variable as the predictor of another model.
• All models must be fitted to the same sample.
This implies that the sample size must be the same in all analyses.

Value

It returns an lm_list-class object that forms a path model represented by a set of regression models. This class has a summary method that shows the summary of each regression model stored (see summary.lm_list()), and a print method that prints the models stored (see print.lm_list()).

See Also

summary.lm_list() and print.lm_list() for related methods, indirect_effect() and cond_indirect_effects() which accept lm_list-class objects as input.

Examples

data(data_serial_parallel)
lm_m11 <- lm(m11 ~ x + c1 + c2, data_serial_parallel)
lm_m12 <- lm(m12 ~ m11 + x + c1 + c2, data_serial_parallel)
lm_m2 <- lm(m2 ~ x + c1 + c2, data_serial_parallel)
lm_y <- lm(y ~ m11 + m12 + m2 + x + c1 + c2, data_serial_parallel)
# Join them to form a lm_list-class object
lm_serial_parallel <- lm2list(lm_m11, lm_m12, lm_m2, lm_y)
lm_serial_parallel
summary(lm_serial_parallel)
# Compute indirect effect from x to y through m11 and m12
outm11m12 <- cond_indirect(x = "x", y = "y",
                           m = c("m11", "m12"),
                           fit = lm_serial_parallel)
outm11m12
# Compute indirect effect from x to y
# through m11 and m12 with bootstrapping CI
# R should be at least 2000 or even 5000 in a real study.
# In real research, parallel and progress can be omitted.
# They are set to TRUE by default.
outm11m12 <- cond_indirect(x = "x", y = "y",
                           m = c("m11", "m12"),
                           fit = lm_serial_parallel,
                           boot_ci = TRUE,
                           R = 100,
                           seed = 1234,
                           parallel = FALSE,
                           progress = FALSE)
outm11m12

lm_from_lavaan_list    ’lavaan’-Class to ’lm_from_lavaan_list’-Class

Description

Converts the regression models in a lavaan-class model to an lm_from_lavaan_list-class object.

Usage

lm_from_lavaan_list(fit)

Arguments

fit  A lavaan-class object, usually the output of lavaan::lavaan() or its wrappers.

Details

It identifies all dependent variables in a lavaan model and creates an lm_from_lavaan-class object for each of them. This is an advanced helper used by plot.cond_indirect_effects().
Exported for advanced users and developers. Value An lm_from_lavaan_list-class object, which is a list of lm_from_lavaan objects. It has a predict-method (predict.lm_from_lavaan_list()) for computing the predicted values from one variable to another. See Also predict.lm_from_lavaan_list Examples library(lavaan) data(data_med) mod <- " m ~ a * x + c1 + c2 y ~ b * m + x + c1 + c2 " fit <- sem(mod, data_med, fixed.x = FALSE) fit_list <- lm_from_lavaan_list(fit) tmp <- data.frame(x = 1, c1 = 2, c2 = 3, m = 4) predict(fit_list, x = "x", y = "y", m = "m", newdata = tmp) math_indirect Math Operators for ’indirect’-Class Objects Description Mathematic operators for ’indirect’-class object, the output of indirect_effect() and cond_indirect(). Usage ## S3 method for class 'indirect' e1 + e2 ## S3 method for class 'indirect' e1 - e2 Arguments e1 An ’indirect’-class object. e2 An ’indirect’-class object. Details For now, only + operator and - operator are supported. These operators can be used to estimate and test a function of effects between the same pair of variables but along different paths. For example, they can be used to compute and test the total effects along different paths. They can also be used to compute and test the difference between the effects along two paths. The operators will check whether an operation is valid. An operation is not valid if 1. the two paths do not start from the same variable, 2. the two paths do not end at the same variable, (c) a path appears in both objects, 3. moderators are involved but they are not set to the same values in both objects, and 4. bootstrap estimates stored in boot_out, if any, are not identical. 5. Monte Carlo simulated estimates stored in mc_out, if any, are not identical. Value An ’indirect’-class object with a list of effects stored. See indirect_effect() on details for this class. 
See Also indirect_effect() and cond_indirect() Examples library(lavaan) dat <- modmed_x1m3w4y1 mod <- " m1 ~ a1 * x + d1 * w1 + e1 * x:w1 m2 ~ m1 + a2 * x y ~ b1 * m1 + b2 * m2 + cp * x " fit <- sem(mod, dat, meanstructure = TRUE, fixed.x = FALSE, se = "none", baseline = FALSE) est <- parameterEstimates(fit) hi_w1 <- mean(dat$w1) + sd(dat$w1) # Examples for cond_indirect(): # Conditional effect from x to m1 when w1 is 1 SD above mean out1 <- cond_indirect(x = "x", y = "y", m = c("m1", "m2"), wvalues = c(w1 = hi_w1), fit = fit) out2 <- cond_indirect(x = "x", y = "y", m = c("m2"), wvalues = c(w1 = hi_w1), fit = fit) out3 <- cond_indirect(x = "x", y = "y", wvalues = c(w1 = hi_w1), fit = fit) out12 <- out1 + out2 out12 out123 <- out1 + out2 + out3 out123 coef(out1) + coef(out2) + coef(out3) merge_mod_levels Merge the Generated Levels of Moderators Description Merge the levels of moderators generated by mod_levels() into a data frame. Usage merge_mod_levels(...) Arguments ... The output from mod_levels(), or a list of levels generated by mod_levels_list(). Details It merges the levels of moderators generated by mod_levels() into a data frame, with each row representing a combination of the levels. The output is to be used by cond_indirect_effects(). Users usually do not need to use this function because cond_indirect_effects() will merge the levels internally if necessary. This function is used when users need to customize the levels for each moderator and so cannot use mod_levels_list() or the default levels in cond_indirect_effects(). Value A wlevels-class object, which is a data frame of the combinations of levels, with additional attributes about the levels. See Also mod_levels() on generating the levels of a moderator.
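Conceptually, merging levels amounts to taking the Cartesian product of each moderator's levels, much like expand.grid() in base R. The sketch below only mimics the idea; the real merge_mod_levels() also carries extra attributes used by cond_indirect_effects(), and the level names and values here are made up.

```r
# Hypothetical levels for two numeric moderators (made-up values):
w1_levels <- c(`M+1.0SD` = 1.3, Mean = 0.1, `M-1.0SD` = -1.1)
w2_levels <- c(`M+1.0SD` = 2.2, `M-1.0SD` = -1.8)
# Every combination of the two moderators' levels, one row each:
combined <- expand.grid(w1 = w1_levels, w2 = w2_levels,
                        KEEP.OUT.ATTRS = FALSE)
rownames(combined) <- as.vector(outer(names(w1_levels),
                                      names(w2_levels),
                                      function(a, b) paste(a, b, sep = "; ")))
combined  # 6 rows: 3 levels of w1 crossed with 2 levels of w2
```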
Examples data(data_med_mod_ab) dat <- data_med_mod_ab # Form the levels from a list of lm() outputs lm_m <- lm(m ~ x*w1 + c1 + c2, dat) lm_y <- lm(y ~ m*w2 + x + w1 + c1 + c2, dat) lm_out <- lm2list(lm_m, lm_y) w1_levels <- mod_levels(lm_out, w = "w1") w1_levels w2_levels <- mod_levels(lm_out, w = "w2") w2_levels merge_mod_levels(w1_levels, w2_levels) modmed_x1m3w4y1 Sample Dataset: Moderated Serial Mediation Description Generated from a serial mediation model with one predictor, three mediators, and one outcome variable, with one moderator in each stage. Usage modmed_x1m3w4y1 Format A data frame with 200 rows and 11 variables: x Predictor. Numeric. w1 Moderator 1. Numeric. w2 Moderator 2. Numeric. w3 Moderator 3. Numeric. w4 Moderator 4. Numeric. m1 Mediator 1. Numeric. m2 Mediator 2. Numeric. m3 Mediator 3. Numeric. y Outcome variable. Numeric. gp Three values: "earth", "mars", "venus". String. city Four values: "alpha", "beta", "gamma", "sigma". String. mod_levels Create Levels of Moderators Description Create levels of moderators to be used by indirect_effect(), cond_indirect_effects(), and cond_indirect(). Usage mod_levels( w, fit, w_type = c("auto", "numeric", "categorical"), w_method = c("sd", "percentile"), sd_from_mean = c(-1, 0, 1), percentiles = c(0.16, 0.5, 0.84), extract_gp_names = TRUE, prefix = NULL, values = NULL, reference_group_label = NULL, descending = TRUE ) mod_levels_list( ..., fit, w_type = "auto", w_method = "sd", sd_from_mean = NULL, percentiles = NULL, extract_gp_names = TRUE, prefix = NULL, descending = TRUE, merge = FALSE ) Arguments w Character. The name of the moderator. If the moderator is categorical with 3 or more groups, this is the vector of the indicator variables. fit The fit object. Can be a lavaan::lavaan object or a list of lm() outputs. It can also be a lavaan.mi object returned by semTools::runMI() or its wrapper, such as semTools::sem.mi(). w_type Character.
Whether the moderator is a "numeric" variable or a "categorical" variable. If "auto", the function will try to determine the type automatically. w_method Character, either "sd" or "percentile". If "sd", the levels are defined by the distance from the mean in terms of standard deviation. If "percentile", the levels are defined in percentiles. sd_from_mean A numeric vector. Specify the distance in standard deviation from the mean for each level. Default is c(-1, 0, 1) for mod_levels(). For mod_levels_list(), the default is c(-1, 0, 1) when there is only one moderator, and c(-1, 1) when there is more than one moderator. Ignored if w_method is not equal to "sd". percentiles A numeric vector. Specify the percentile (in proportion) for each level. Default is c(.16, .50, .84) for mod_levels(), corresponding approximately to one standard deviation below mean, mean, and one standard deviation above mean in a normal distribution. For mod_levels_list(), default is c(.16, .50, .84) if there is one moderator, and c(.16, .84) when there is more than one moderator. Ignored if w_method is not equal to "percentile". extract_gp_names Logical. If TRUE, the default, the function will try to determine the name of each group from the variable names. prefix Character. If extract_gp_names is TRUE and prefix is supplied, it will be removed from the variable names to create the group names. Default is NULL, and the function will try to determine the prefix automatically. values For numeric moderators, a numeric vector. These are the values to be used and will override other options. For categorical moderators, a named list of numeric vectors, each with length equal to the number of indicator variables. If the vector is named, the names will be used to label the values. For example, if set to list(gp1 = c(0, 0), gp3 = c(0, 1)), two levels will be returned, one named gp1 with the indicator variables equal to 0 and 0, the other named gp3 with the indicator variables equal to 0 and 1.
Default is NULL. reference_group_label For a categorical moderator, if the label for the reference group (group with all indicators equal to zero) cannot be determined, the default label is "Reference". To change it, set reference_group_label to the desired label. Ignored if values is set. descending If TRUE (default), the rows are sorted in descending order for numerical moderators: The highest value on the first row and the lowest value on the last row. For user-supplied values, the first value is on the last row and the last value is on the first row. If FALSE, the rows are sorted in ascending order. ... The names of the moderator variables. For a categorical variable, it should be a vector of variable names. merge If TRUE, mod_levels_list() will call merge_mod_levels() and return the merged levels. Default is FALSE. Details It creates values of a moderator that can be used to compute conditional effects or conditional indirect effects. By default, for a numeric moderator, it uses one standard deviation below mean, mean, and one standard deviation above mean. The percentiles of these three levels in a normal distribution (16th, 50th, and 84th) can also be used. For a categorical variable, it will simply collect the unique categories in the data. The generated levels are then used by cond_indirect() and cond_indirect_effects(). If a model has more than one moderator, mod_levels_list() can be used to generate combinations of levels. The output can then be passed to cond_indirect_effects() to compute the conditional effects or conditional indirect effects for all the combinations. Value mod_levels() returns a wlevels-class object which is a data frame with additional attributes about the levels. mod_levels_list() returns a list of wlevels-class objects, or a wlevels-class object which is a data frame of the merged levels if merge = TRUE. Functions • mod_levels(): Generate levels for one moderator. • mod_levels_list(): Generate levels for several moderators.
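For a numeric moderator, the two default strategies can be sketched in a few lines of base R (an assumed reconstruction for illustration; mod_levels() itself also handles labels, categorical moderators, and extracting the data from the fit object):

```r
# Simulated moderator values:
set.seed(5678)
w <- rnorm(100, mean = 10, sd = 2)
# w_method = "sd": mean plus c(-1, 0, 1) standard deviations
sd_levels <- mean(w) + c(-1, 0, 1) * sd(w)
# w_method = "percentile": the 16th, 50th, and 84th percentiles
pct_levels <- quantile(w, probs = c(.16, .50, .84))
sd_levels
pct_levels
```

For a roughly normal moderator the two sets of levels are close; for a skewed moderator the percentile levels stay inside the observed range, which is why both options are offered.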
See Also cond_indirect_effects() for computing conditional indirect effects; merge_mod_levels() for merging levels of moderators. Examples library(lavaan) data(data_med_mod_ab) dat <- data_med_mod_ab # Form the levels from a list of lm() outputs lm_m <- lm(m ~ x*w1 + c1 + c2, dat) lm_y <- lm(y ~ m*w2 + x + w1 + c1 + c2, dat) lm_out <- lm2list(lm_m, lm_y) w1_levels <- mod_levels(lm_out, w = "w1") w1_levels w2_levels <- mod_levels(lm_out, w = "w2") w2_levels # Indirect effect from x to y through m, at the first levels of w1 and w2 cond_indirect(x = "x", y = "y", m = "m", fit = lm_out, wvalues = c(w1 = w1_levels$w1[1], w2 = w2_levels$w2[1])) # Can form the levels based on percentiles w1_levels2 <- mod_levels(lm_out, w = "w1", w_method = "percentile") w1_levels2 # Form the levels from a lavaan output # Compute the product terms before fitting the model dat$mw2 <- dat$m * dat$w2 mod <- " m ~ x + w1 + x:w1 + c1 + c2 y ~ m + x + w1 + w2 + mw2 + c1 + c2 " fit <- sem(mod, dat, fixed.x = FALSE) cond_indirect(x = "x", y = "y", m = "m", fit = fit, wvalues = c(w1 = w1_levels$w1[1], w2 = w2_levels$w2[1])) # Can pass all levels to cond_indirect_effects() # First merge the levels by merge_mod_levels() w1w2_levels <- merge_mod_levels(w1_levels, w2_levels) cond_indirect_effects(x = "x", y = "y", m = "m", fit = fit, wlevels = w1w2_levels) # mod_levels_list() forms combinations of levels in one call # It returns a list, by default. # Form the levels from a list of lm() outputs # "merge = TRUE" is optional. cond_indirect_effects will merge the levels # automatically.
w1w2_levels <- mod_levels_list("w1", "w2", fit = fit, merge = TRUE) w1w2_levels cond_indirect_effects(x = "x", y = "y", m = "m", fit = fit, wlevels = w1w2_levels) # Can work without merge = TRUE: w1w2_levels <- mod_levels_list("w1", "w2", fit = fit) w1w2_levels cond_indirect_effects(x = "x", y = "y", m = "m", fit = fit, wlevels = w1w2_levels) plot.cond_indirect_effects Plot Conditional Effects Description Plot the conditional effects for different levels of moderators. Usage ## S3 method for class 'cond_indirect_effects' plot( x, x_label, w_label = "Moderator(s)", y_label, title, x_from_mean_in_sd = 1, x_method = c("sd", "percentile"), x_percentiles = c(0.16, 0.84), x_sd_to_percentiles = NA, note_standardized = TRUE, no_title = FALSE, line_width = 1, point_size = 5, graph_type = c("default", "tumble"), ... ) Arguments x The output of cond_indirect_effects(). (Named x because it is required in the naming of arguments of the plot generic function.) x_label The label for the X-axis. Default is the value of the predictor in the output of cond_indirect_effects(). w_label The label for the legend for the lines. Default is "Moderator(s)". y_label The label for the Y-axis. Default is the name of the response variable in the model. title The title of the graph. If not supplied, it will be generated from the variable names or labels (in x_label, y_label, and w_label). If "", no title will be printed. This can be used when the plot is for manuscript submission and figures are required to have no titles. x_from_mean_in_sd How many SD from mean is used to define "low" and "high" for the focal variable. Default is 1. x_method How to define "high" and "low" for the focal variable levels. Default is in terms of the standard deviation of the focal variable, "sd". If equal to "percentile", then the percentiles of the focal variable in the dataset are used. x_percentiles If x_method is "percentile", then this argument specifies the two percentiles to be used, divided by 100.
It must be a vector of two numbers. The default is c(.16, .84), the 16th and 84th percentiles, which correspond approximately to one SD below and above the mean for a normal distribution, respectively. x_sd_to_percentiles If x_method is "percentile" and this argument is set to a number, this number will be used to determine the percentiles to be used. The lower percentile is the percentile in a normal distribution that is x_sd_to_percentiles SD below the mean. The upper percentile is the percentile in a normal distribution that is x_sd_to_percentiles SD above the mean. Therefore, if x_sd_to_percentiles is set to 1, then the lower and upper percentiles are 16th and 84th, respectively. Default is NA. note_standardized If TRUE, will check whether a variable has SD nearly equal to one. If yes, will report this in the plot. Default is TRUE. no_title If TRUE, title will be suppressed. Default is FALSE. line_width The width of the lines as used in ggplot2::geom_segment(). Default is 1. point_size The size of the points as used in ggplot2::geom_point(). Default is 5. graph_type If "default", the typical line-graph with equal end-points will be plotted. If "tumble", then the tumble graph proposed by Bodner (2016) will be plotted. Default is "default". ... Additional arguments. Ignored. Details This function is a plot method of the output of cond_indirect_effects(). It will use the levels of moderators in the output. It plots the conditional effect from x to y in a model for different levels of the moderators. It does not support conditional indirect effects. If there are one or more mediators in x, it will raise an error. Value A ggplot2 graph. Plotted if not assigned to a name. It can be further modified like a usual ggplot2 graph. References Bodner, T. E. (2016). Tumble graphs: Avoiding misleading end point extrapolation when graphing interactions from a moderated multiple regression analysis. Journal of Educational and Behavioral Statistics, 41(6), 593-604.
doi:10.3102/1076998616657080 See Also cond_indirect_effects() Examples library(lavaan) dat <- modmed_x1m3w4y1 n <- nrow(dat) set.seed(860314) dat$gp <- sample(c("gp1", "gp2", "gp3"), n, replace = TRUE) dat <- cbind(dat, factor2var(dat$gp, prefix = "gp", add_rownames = FALSE)) # Categorical moderator mod <- " m3 ~ m1 + x + gpgp2 + gpgp3 + x:gpgp2 + x:gpgp3 y ~ m2 + m3 + x " fit <- sem(mod, dat, meanstructure = TRUE, fixed.x = FALSE) out_mm_1 <- mod_levels(c("gpgp2", "gpgp3"), sd_from_mean = c(-1, 1), fit = fit) out_1 <- cond_indirect_effects(wlevels = out_mm_1, x = "x", y = "m3", fit = fit) plot(out_1) plot(out_1, graph_type = "tumble") # Numeric moderator dat <- modmed_x1m3w4y1 mod2 <- " m3 ~ m1 + x + w1 + x:w1 y ~ m3 + x " fit2 <- sem(mod2, dat, meanstructure = TRUE, fixed.x = FALSE) out_mm_2 <- mod_levels("w1", w_method = "percentile", percentiles = c(.16, .84), fit = fit2) out_mm_2 out_2 <- cond_indirect_effects(wlevels = out_mm_2, x = "x", y = "m3", fit = fit2) plot(out_2) plot(out_2, graph_type = "tumble") predict.lm_from_lavaan Predicted Values of an ’lm_from_lavaan’-Class Object Description Compute the predicted values based on the model stored in an ’lm_from_lavaan’-class object. Usage ## S3 method for class 'lm_from_lavaan' predict(object, newdata, ...) Arguments object A ’lm_from_lavaan’-class object. newdata Required. A data frame of the new data. It must be a data frame. ... Additional arguments. Ignored. Details An lm_from_lavaan-class method that converts a regression model for a variable in a lavaan model to a formula object. This function uses the stored model to compute predicted values using user-supplied data. This is an advanced helper used by plot.cond_indirect_effects(). Exported for advanced users and developers. Value A numeric vector of the predicted values, with length equal to the number of rows of user-supplied data.
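The chaining that prediction along a path requires can be mimicked with ordinary lm() fits: predict each mediator in turn and feed the result forward into the next model. A self-contained sketch with simulated data (variable names and coefficients are arbitrary):

```r
# Simulate a simple mediation model x -> m -> y with covariates.
set.seed(246)
n <- 200
x <- rnorm(n); c1 <- rnorm(n); c2 <- rnorm(n)
m <- 0.5 * x + 0.2 * c1 + rnorm(n)
y <- 0.4 * m + 0.1 * x + 0.2 * c2 + rnorm(n)
dat <- data.frame(x, c1, c2, m, y)
fit_m <- lm(m ~ x + c1 + c2, dat)
fit_y <- lm(y ~ m + x + c1 + c2, dat)
newdat <- data.frame(x = 1, c1 = 2, c2 = 3)
# Step 1: predict the mediator from the new data
newdat$m <- predict(fit_m, newdata = newdat)
# Step 2: feed the predicted mediator into the outcome model
y_hat <- predict(fit_y, newdata = newdat)
y_hat
```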
See Also lm_from_lavaan_list() Examples library(lavaan) data(data_med) mod <- " m ~ a * x + c1 + c2 y ~ b * m + x + c1 + c2 " fit <- sem(mod, data_med, fixed.x = FALSE) fit_list <- lm_from_lavaan_list(fit) tmp <- data.frame(x = 1, c1 = 2, c2 = 3, m = 4) predict(fit_list$m, newdata = tmp) predict(fit_list$y, newdata = tmp) predict.lm_from_lavaan_list Predicted Values of an ’lm_from_lavaan_list’-Class Object Description It computes the predicted values based on the models stored in an ’lm_from_lavaan_list’-class object. Usage ## S3 method for class 'lm_from_lavaan_list' predict(object, x = NULL, y = NULL, m = NULL, newdata, ...) Arguments object An ’lm_from_lavaan_list’-class object. x The variable name at the start of a path. y The variable name at the end of a path. m Optional. The mediator(s) from x to y. A character vector of the names of the mediators. The path goes from the first element to the last element. For example, if m = c("m1", "m2"), then the path is x -> m1 -> m2 -> y. newdata Required. A data frame of the new data. It must be a data frame. ... Additional arguments. Ignored. Details An lm_from_lavaan_list-class object is a list of lm_from_lavaan-class objects. This is an advanced helper used by plot.cond_indirect_effects(). Exported for advanced users and developers. Value A numeric vector of the predicted values, with length equal to the number of rows of user-supplied data. See Also lm_from_lavaan_list() Examples library(lavaan) data(data_med) mod <- " m ~ a * x + c1 + c2 y ~ b * m + x + c1 + c2 " fit <- sem(mod, data_med, fixed.x = FALSE) fit_list <- lm_from_lavaan_list(fit) tmp <- data.frame(x = 1, c1 = 2, c2 = 3, m = 4) predict(fit_list, x = "x", y = "y", m = "m", newdata = tmp) predict.lm_list Predicted Values of an ’lm_list’-Class Object Description Compute the predicted values based on the models stored in an ’lm_list’-class object. Usage ## S3 method for class 'lm_list' predict(object, x = NULL, y = NULL, m = NULL, newdata, ...)
Arguments object An ’lm_list’-class object. x The variable name at the start of a path. y The variable name at the end of a path. m Optional. The mediator(s) from x to y. A character vector of the names of the mediators. The path goes from the first element to the last element. For example, if m = c("m1", "m2"), then the path is x -> m1 -> m2 -> y. newdata Required. A data frame of the new data. It must be a data frame. ... Additional arguments. Ignored. Details An lm_list-class object is a list of lm-class objects. This function is similar to the stats::predict() method of lm() but it works on a system defined by a list of regression models. This is an advanced helper used by some functions in this package. Exported for advanced users. Value A numeric vector of the predicted values, with length equal to the number of rows of user-supplied data. See Also lm2list() Examples data(data_serial_parallel) lm_m11 <- lm(m11 ~ x + c1 + c2, data_serial_parallel) lm_m12 <- lm(m12 ~ m11 + x + c1 + c2, data_serial_parallel) lm_m2 <- lm(m2 ~ x + c1 + c2, data_serial_parallel) lm_y <- lm(y ~ m11 + m12 + m2 + x + c1 + c2, data_serial_parallel) # Join them to form a lm_list-class object lm_serial_parallel <- lm2list(lm_m11, lm_m12, lm_m2, lm_y) lm_serial_parallel summary(lm_serial_parallel) newdat <- data_serial_parallel[3:5, ] predict(lm_serial_parallel, x = "x", y = "y", m = "m2", newdata = newdat) print.all_paths Print an ’all_paths’ Class Object Description Print the content of an ’all_paths’-class object, which can be generated by all_indirect_paths(). Usage ## S3 method for class 'all_paths' print(x, ...) Arguments x An ’all_paths’-class object. ... Optional arguments. Details This function is used to print the paths identified in a readable format. Value x is returned invisibly. Called for its side effect.
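Finding every indirect path, as all_indirect_paths() does for a fitted model, is essentially a depth-first search over the model's directed edges. The helper below is a hypothetical illustration applied to the serial-parallel model used in the examples, not the package's implementation:

```r
# Hypothetical path enumerator: depth-first search over a directed
# acyclic graph given as an adjacency list.
find_paths <- function(edges, from, to, path = from) {
  out <- list()
  for (nxt in edges[[from]]) {
    if (nxt == to) {
      out <- c(out, list(c(path, to)))
    } else if (!(nxt %in% path) && nxt %in% names(edges)) {
      out <- c(out, find_paths(edges, nxt, to, c(path, nxt)))
    }
  }
  out
}
# Edges implied by the serial-parallel model in the examples:
edges <- list(x   = c("m11", "m12", "m2", "y"),
              m11 = c("m12", "y"),
              m12 = "y",
              m2  = "y")
paths <- find_paths(edges, "x", "y")
sapply(paths, paste, collapse = " -> ")
```

Five paths are found: the serial path through m11 and m12, the three shorter indirect paths, and the direct path.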
Author(s) <NAME> https://orcid.org/0000-0002-9871-9448 See Also all_indirect_paths() Examples library(lavaan) data(data_serial_parallel) mod <- " m11 ~ x + c1 + c2 m12 ~ m11 + x + c1 + c2 m2 ~ x + c1 + c2 y ~ m12 + m2 + m11 + x + c1 + c2 " fit <- sem(mod, data_serial_parallel, fixed.x = FALSE) # All indirect paths out1 <- all_indirect_paths(fit) out1 print.boot_out Print a boot_out-Class Object Description Print the content of the output of do_boot() or related functions. Usage ## S3 method for class 'boot_out' print(x, ...) Arguments x The output of do_boot(), or any boot_out-class object returned by similar functions. ... Other arguments. Not used. Value x is returned invisibly. Called for its side effect. Examples data(data_med_mod_ab1) dat <- data_med_mod_ab1 lm_m <- lm(m ~ x*w + c1 + c2, dat) lm_y <- lm(y ~ m*w + x + c1 + c2, dat) lm_out <- lm2list(lm_m, lm_y) # In real research, R should be 2000 or even 5000 # In real research, no need to set parallel to FALSE # In real research, no need to set progress to FALSE # Progress is displayed by default. lm_boot_out <- do_boot(lm_out, R = 100, seed = 1234, progress = FALSE, parallel = FALSE) # Print the output of do_boot() lm_boot_out library(lavaan) data(data_med_mod_ab1) dat <- data_med_mod_ab1 dat$"x:w" <- dat$x * dat$w dat$"m:w" <- dat$m * dat$w mod <- " m ~ x + w + x:w + c1 + c2 y ~ m + w + m:w + x + c1 + c2 " fit <- sem(model = mod, data = dat, fixed.x = FALSE, se = "none", baseline = FALSE) # In real research, R should be 2000 or even 5000 # In real research, no need to set progress to FALSE # In real research, no need to set parallel to FALSE # Progress is displayed by default. fit_boot_out <- do_boot(fit = fit, R = 40, seed = 1234, parallel = FALSE, progress = FALSE) # Print the output of do_boot() fit_boot_out print.cond_indirect_diff Print the Output of ’cond_indirect_diff’ Description Print the output of cond_indirect_diff(). 
Usage ## S3 method for class 'cond_indirect_diff' print(x, digits = 3, pvalue = FALSE, pvalue_digits = 3, se = FALSE, ...) Arguments x The output of cond_indirect_diff(). digits The number of decimal places in the printout. pvalue Logical. If TRUE, asymmetric p-value based on bootstrapping will be printed if available. Default is FALSE. pvalue_digits Number of decimal places to display for the p-value. Default is 3. se Logical. If TRUE and confidence intervals are available, the standard errors of the estimates are also printed. They are simply the standard deviations of the bootstrap estimates or Monte Carlo simulated values, depending on the method used to form the confidence intervals. ... Optional arguments. Ignored. Details The print method of the cond_indirect_diff-class object. If bootstrapping confidence interval was requested, this method has the option to print a p-value computed by the method presented in Asparouhov and Muthén (2021). Note that this p-value is an asymmetric bootstrap p-value based on the distribution of the bootstrap estimates. It is not computed based on the distribution under the null hypothesis. For a p-value of a, it means that a 100(1 - a)% bootstrapping confidence interval will have one of its limits equal to 0. A confidence interval with a higher confidence level will include zero, while a confidence interval with a lower confidence level will exclude zero. Value It returns x invisibly. Called for its side effect. References Asparouhov, T., & Muthén, B. (2021). Bootstrap p-value computation. Retrieved from https://www.statmodel.com/download/Bootstrap%20-%20Pvalue.pdf See Also cond_indirect_diff() print.cond_indirect_effects Print a ’cond_indirect_effects’ Class Object Description Print the content of the output of cond_indirect_effects(). Usage ## S3 method for class 'cond_indirect_effects' print( x, digits = 3, annotation = TRUE, pvalue = FALSE, pvalue_digits = 3, se = FALSE, ... ) Arguments x The output of cond_indirect_effects().
digits Number of digits to display. Default is 3. annotation Logical. Whether the annotation after the table of effects is to be printed. Default is TRUE. pvalue Logical. If TRUE, asymmetric p-values based on bootstrapping will be printed if available. Default is FALSE. pvalue_digits Number of decimal places to display for the p-values. Default is 3. se Logical. If TRUE and confidence intervals are available, the standard errors of the estimates are also printed. They are simply the standard deviations of the bootstrap estimates or Monte Carlo simulated values, depending on the method used to form the confidence intervals. ... Other arguments. Not used. Details The print method of the cond_indirect_effects-class object. If bootstrapping confidence intervals were requested, this method has the option to print p-values computed by the method presented in Asparouhov and Muthén (2021). Note that these p-values are asymmetric bootstrap p-values based on the distribution of the bootstrap estimates. They are not computed based on the distribution under the null hypothesis. For a p-value of a, it means that a 100(1 - a)% bootstrapping confidence interval will have one of its limits equal to 0. A confidence interval with a higher confidence level will include zero, while a confidence interval with a lower confidence level will exclude zero. Value x is returned invisibly. Called for its side effect. References Asparouhov, T., & Muthén, B. (2021). Bootstrap p-value computation.
Retrieved from https://www.statmodel.com/download/Bootstrap%20-%20Pvalue.pdf See Also cond_indirect_effects() Examples library(lavaan) dat <- modmed_x1m3w4y1 mod <- " m1 ~ a1 * x + d1 * w1 + e1 * x:w1 m2 ~ a2 * x y ~ b1 * m1 + b2 * m2 + cp * x " fit <- sem(mod, dat, meanstructure = TRUE, fixed.x = FALSE, se = "none", baseline = FALSE) # Conditional effects from x to m1 when w1 is equal to each of the default levels cond_indirect_effects(x = "x", y = "m1", wlevels = "w1", fit = fit) # Conditional Indirect effect from x1 through m1 to y, # when w1 is equal to each of the default levels out <- cond_indirect_effects(x = "x", y = "y", m = "m1", wlevels = "w1", fit = fit) out print(out, digits = 5) print(out, annotation = FALSE) print.delta_med Print a ’delta_med’ Class Object Description Print the content of a delta_med-class object. Usage ## S3 method for class 'delta_med' print(x, digits = 3, level = NULL, full = FALSE, ...) Arguments x A delta_med-class object. digits The number of digits after the decimal. Default is 3. level The level of confidence of bootstrap confidence interval, if requested when created. If NULL, the default, the level requested when calling delta_med() is used. If not NULL, then this level will be used. full Logical. Whether additional information will be printed. Default is FALSE. ... Optional arguments. Ignored. Details It prints the output of delta_med(), which is a delta_med-class object. Value x is returned invisibly. Called for its side effect.
Author(s) <NAME> https://orcid.org/0000-0002-9871-9448 See Also delta_med() Examples library(lavaan) dat <- data_med mod <- " m ~ x y ~ m + x " fit <- sem(mod, dat) dm <- delta_med(x = "x", y = "y", m = "m", fit = fit) dm print(dm, full = TRUE) # Call do_boot() to generate # bootstrap estimates # Use 2000 or even 5000 for R in real studies # Set parallel to TRUE in real studies for faster bootstrapping boot_out <- do_boot(fit, R = 45, seed = 879, parallel = FALSE, progress = FALSE) # Remove 'progress = FALSE' in practice dm_boot <- delta_med(x = "x", y = "y", m = "m", fit = fit, boot_out = boot_out, progress = FALSE) dm_boot confint(dm_boot) confint(dm_boot, level = .90) print.indirect Print an ’indirect’ Class Object Description Print the content of the output of indirect_effect() or cond_indirect(). Usage ## S3 method for class 'indirect' print(x, digits = 3, pvalue = FALSE, pvalue_digits = 3, se = FALSE, ...) Arguments x The output of indirect_effect() or cond_indirect(). digits Number of digits to display. Default is 3. pvalue Logical. If TRUE, asymmetric p-value based on bootstrapping will be printed if available. pvalue_digits Number of decimal places to display for the p-value. Default is 3. se Logical. If TRUE and confidence interval is available, the standard error of the estimate is also printed. This is simply the standard deviation of the bootstrap estimates or Monte Carlo simulated values, depending on the method used to form the confidence interval. ... Other arguments. Not used. Details The print method of the indirect-class object. If bootstrapping confidence interval was requested, this method has the option to print a p-value computed by the method presented in Asparouhov and Muthén (2021). Note that this p-value is an asymmetric bootstrap p-value based on the distribution of the bootstrap estimates. It is not computed based on the distribution under the null hypothesis.
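A simplified sketch of such an asymmetric bootstrap p-value is twice the smaller of the two tail proportions of the bootstrap estimates around zero (an assumed simplification of the Asparouhov and Muthén approach, for illustration only):

```r
# Hypothetical helper: asymmetric bootstrap p-value as twice the
# smaller tail proportion of the bootstrap estimates around zero.
boot_pvalue <- function(boot_est) {
  prop_below <- mean(boot_est < 0)
  2 * min(prop_below, 1 - prop_below)
}
set.seed(99)
boot_est <- rnorm(2000, mean = 0.3, sd = 0.2)
boot_pvalue(boot_est)        # small: nearly all estimates above zero
boot_pvalue(boot_est - 0.3)  # large: estimates centred on zero
```

Because it is driven entirely by where zero falls in the bootstrap distribution, this p-value is consistent with the percentile confidence interval rather than with a null-hypothesis sampling distribution.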
For a p-value of a, it means that a 100(1 - a)% bootstrapping confidence interval will have one of its limits equal to 0. A confidence interval with a higher confidence level will include zero, while a confidence interval with a lower confidence level will exclude zero. We recommend using the confidence interval directly. Therefore, the p-value is not printed by default. Nevertheless, users who need it can request it by setting pvalue to TRUE. Value x is returned invisibly. Called for its side effect. References Asparouhov, T., & Muthén, B. (2021). Bootstrap p-value computation. Retrieved from https://www.statmodel.com/download/Bootstrap%20-%20Pvalue.pdf See Also indirect_effect() and cond_indirect() Examples library(lavaan) dat <- modmed_x1m3w4y1 mod <- " m1 ~ a1 * x + b1 * w1 + d1 * x:w1 m2 ~ a2 * m1 + b2 * w2 + d2 * m1:w2 m3 ~ a3 * m2 + b3 * w3 + d3 * m2:w3 y ~ a4 * m3 + b4 * w4 + d4 * m3:w4 " fit <- sem(mod, dat, meanstructure = TRUE, fixed.x = FALSE, se = "none", baseline = FALSE) est <- parameterEstimates(fit) wvalues <- c(w1 = 5, w2 = 4, w3 = 2, w4 = 3) indirect_1 <- cond_indirect(x = "x", y = "y", m = c("m1", "m2", "m3"), fit = fit, wvalues = wvalues) indirect_1 dat <- modmed_x1m3w4y1 mod2 <- " m1 ~ a1 * x m2 ~ a2 * m1 m3 ~ a3 * m2 y ~ a4 * m3 + x " fit2 <- sem(mod2, dat, meanstructure = TRUE, fixed.x = FALSE, se = "none", baseline = FALSE) est <- parameterEstimates(fit) indirect_2 <- indirect_effect(x = "x", y = "y", m = c("m1", "m2", "m3"), fit = fit2) indirect_2 print(indirect_2, digits = 5) print.indirect_list Print an ’indirect_list’ Class Object Description Print the content of the output of many_indirect_effects(). Usage ## S3 method for class 'indirect_list' print( x, digits = 3, annotation = TRUE, pvalue = FALSE, pvalue_digits = 3, se = FALSE, ... ) Arguments x The output of many_indirect_effects(). digits Number of digits to display. Default is 3. annotation Logical. Whether the annotation after the table of effects is to be printed. Default is TRUE.
pvalue Logical. If TRUE, asymmetric p-values based on bootstrapping will be printed if available. pvalue_digits Number of decimal places to display for the p-values. Default is 3. se Logical. If TRUE and confidence intervals are available, the standard errors of the estimates are also printed. They are simply the standard deviations of the bootstrap estimates or Monte Carlo simulated values, depending on the method used to form the confidence intervals. ... Other arguments. Not used. Details The print method of the indirect_list-class object. If bootstrapping confidence interval was requested, this method has the option to print a p-value computed by the method presented in Asparouhov and Muthén (2021). Note that this p-value is an asymmetric bootstrap p-value based on the distribution of the bootstrap estimates. It is not computed based on the distribution under the null hypothesis. For a p-value of a, it means that a 100(1 - a)% bootstrapping confidence interval will have one of its limits equal to 0. A confidence interval with a higher confidence level will include zero, while a confidence interval with a lower confidence level will exclude zero. Value x is returned invisibly. Called for its side effect. References Asparouhov, T., & Muthén, B. (2021). Bootstrap p-value computation. Retrieved from https://www.statmodel.com/download/Bootstrap%20-%20Pvalue.pdf See Also many_indirect_effects() Examples library(lavaan) data(data_serial_parallel) mod <- " m11 ~ x + c1 + c2 m12 ~ m11 + x + c1 + c2 m2 ~ x + c1 + c2 y ~ m12 + m2 + m11 + x + c1 + c2 " fit <- sem(mod, data_serial_parallel, fixed.x = FALSE) # All indirect paths from x to y paths <- all_indirect_paths(fit, x = "x", y = "y") paths # Indirect effect estimates out <- many_indirect_effects(paths, fit = fit) out print.indirect_proportion Print an ’indirect_proportion’-Class Object Description Print the content of an ’indirect_proportion’-class object, the output of indirect_proportion().
Usage ## S3 method for class 'indirect_proportion' print(x, digits = 3, annotation = TRUE, ...) Arguments x An ’indirect_proportion’-class object. digits Number of digits to display. Default is 3. annotation Logical. Whether additional information should be printed. Default is TRUE. ... Optional arguments. Not used. Details The print method of the indirect_proportion-class object, which is produced by indirect_proportion(). In addition to the proportion of effect mediated, it also prints additional information such as the path for which the proportion is computed, and all indirect path(s) from the x-variable to the y-variable. To get the proportion as a scalar, use the coef method of indirect_proportion objects. Value x is returned invisibly. Called for its side effect. See Also indirect_proportion() Examples library(lavaan) dat <- data_med head(dat) mod <- " m ~ x + c1 + c2 y ~ m + x + c1 + c2 " fit <- sem(mod, dat, fixed.x = FALSE) out <- indirect_proportion(x = "x", y = "y", m = "m", fit = fit) out print(out, digits = 5) print.lm_list Print an lm_list-Class Object Description Print the content of the output of lm2list(). Usage ## S3 method for class 'lm_list' print(x, ...) Arguments x The output of lm2list(). ... Other arguments. Not used. Value x is returned invisibly. Called for its side effect. Examples data(data_serial_parallel) lm_m11 <- lm(m11 ~ x + c1 + c2, data_serial_parallel) lm_m12 <- lm(m12 ~ m11 + x + c1 + c2, data_serial_parallel) lm_m2 <- lm(m2 ~ x + c1 + c2, data_serial_parallel) lm_y <- lm(y ~ m11 + m12 + m2 + x + c1 + c2, data_serial_parallel) # Join them to form a lm_list-class object lm_serial_parallel <- lm2list(lm_m11, lm_m12, lm_m2, lm_y) lm_serial_parallel print.mc_out Print a mc_out-Class Object Description Print the content of the output of do_mc() or related functions. Usage ## S3 method for class 'mc_out' print(x, ...) Arguments x The output of do_mc(), or any mc_out-class object returned by similar func- tions. ... Other arguments. 
Not used. Value x is returned invisibly. Called for its side effect. Examples library(lavaan) data(data_med_mod_ab1) dat <- data_med_mod_ab1 mod <- " m ~ x + w + x:w + c1 + c2 y ~ m + w + m:w + x + c1 + c2 " fit <- sem(mod, dat) # In real research, R should be 5000 or even 10000 mc_out <- do_mc(fit, R = 100, seed = 1234) # Print the output of do_mc() mc_out simple_mediation_latent Sample Dataset: A Simple Latent Mediation Model Description Generated from a simple mediation model among three latent factors, fx, fm, and fy, each with three indicators. Usage simple_mediation_latent Format A data frame with 200 rows and 11 variables: x1 Indicator of fx. Numeric. x2 Indicator of fx. Numeric. x3 Indicator of fx. Numeric. m1 Indicator of fm. Numeric. m2 Indicator of fm. Numeric. m3 Indicator of fm. Numeric. y1 Indicator of fy. Numeric. y2 Indicator of fy. Numeric. y3 Indicator of fy. Numeric. Details The model: fx =~ x1 + x2 + x3 fm =~ m1 + m2 + m3 fy =~ y1 + y2 + y3 fm ~ a * fx fy ~ b * fm + cp * fx indirect := a * b subsetting_cond_indirect_effects Extraction Methods for ’cond_indirect_effects’ Outputs Description For subsetting a ’cond_indirect_effects’-class object. Usage ## S3 method for class 'cond_indirect_effects' x[i, j, drop = if (missing(i)) TRUE else length(j) == 1] Arguments x A ’cond_indirect_effects’-class object. i A numeric vector of row number(s), a character vector of row name(s), or a logical vector of row(s) to be selected. j A numeric vector of column number(s), a character vector of column name(s), or a logical vector of column(s) to be selected. drop Whether to drop a dimension if it has only one row/column. Details Customized [ for ’cond_indirect_effects’-class objects, to ensure that these operations work as they would on a data frame object, while information specific to conditional effects is modified correctly. Value A ’cond_indirect_effects’-class object. See cond_indirect_effects() for details on this class. 
Examples library(lavaan) dat <- modmed_x1m3w4y1 mod <- " m1 ~ x + w1 + x:w1 m2 ~ m1 y ~ m2 + x + w4 + m2:w4 " fit <- sem(mod, dat, meanstructure = TRUE, fixed.x = FALSE, se = "none", baseline = FALSE) est <- parameterEstimates(fit) # Examples for cond_indirect(): # Conditional effects from x to m1 when w1 is equal to each of the levels out1 <- cond_indirect_effects(x = "x", y = "m1", wlevels = "w1", fit = fit) out1[2, ] # Conditional indirect effect from x through m1 and m2 to y, # when w1 and w4 are equal to each of the levels out2 <- cond_indirect_effects(x = "x", y = "y", m = c("m1", "m2"), wlevels = c("w1", "w4"), fit = fit) out2[c(1, 3), ] subsetting_wlevels Extraction Methods for a ’wlevels’-class Object Description For subsetting a ’wlevels’-class object. Attributes related to the levels will be preserved if appropriate. Usage ## S3 method for class 'wlevels' x[i, j, drop = if (missing(i)) TRUE else length(j) == 1] ## S3 replacement method for class 'wlevels' x[i, j] <- value ## S3 replacement method for class 'wlevels' x[[i, j]] <- value Arguments x A ’wlevels’-class object. i A numeric vector of row number(s), a character vector of row name(s), or a logical vector of row(s) to be selected. j A numeric vector of column number(s), a character vector of column name(s), or a logical vector of column(s) to be selected. drop Whether to drop a dimension if it has only one row/column. value Ignored. Details Customized [ for ’wlevels’-class objects, to ensure that these operations work as they would on a data frame object, while information specific to a wlevels-class object is modified correctly. The assignment methods [<- and [[<- for wlevels-class objects will raise an error. This class of objects should be created by mod_levels() or related functions. Subsetting the output of mod_levels() is possible but not recommended. It is more reliable to generate the levels using mod_levels() and related functions. 
Nevertheless, there are situations in which subsetting is preferred. Value A ’wlevels’-class object. See mod_levels() and merge_mod_levels() for details on this class. See Also mod_levels(), mod_levels_list(), and merge_mod_levels() Examples data(data_med_mod_ab) dat <- data_med_mod_ab # Form the levels from a list of lm() outputs lm_m <- lm(m ~ x*w1 + c1 + c2, dat) lm_y <- lm(y ~ m*w2 + x + w1 + c1 + c2, dat) lm_out <- lm2list(lm_m, lm_y) w1_levels <- mod_levels(lm_out, w = "w1") w1_levels w1_levels[2, ] w1_levels[c(2, 3), ] dat <- data_med_mod_serial_cat lm_m1 <- lm(m1 ~ x*w1 + c1 + c2, dat) lm_y <- lm(y ~ m1 + x + w1 + c1 + c2, dat) lm_out <- lm2list(lm_m1, lm_y) w1gp_levels <- mod_levels(lm_out, w = "w1") w1gp_levels w1gp_levels[2, ] w1gp_levels[3, ] merged_levels <- merge_mod_levels(w1_levels, w1gp_levels) merged_levels merged_levels[4:6, ] merged_levels[1:3, c(2, 3)] merged_levels[c(1, 4, 7), 1, drop = FALSE] summary.lm_list Summary of an lm_list-Class Object Description The summary of content of the output of lm2list(). Usage ## S3 method for class 'lm_list' summary(object, ...) ## S3 method for class 'summary_lm_list' print(x, digits = 3, ...) Arguments object The output of lm2list(). ... Other arguments. Not used. x An object of class summary_lm_list. digits The number of significant digits in printing numerical results. Value summary.lm_list() returns a summary_lm_list-class object, which is a list of the summary() outputs of the lm() outputs stored. print.summary_lm_list() returns x invisibly. Called for its side effect. Functions • print(summary_lm_list): Print method for output of summary for lm_list. 
Examples data(data_serial_parallel) lm_m11 <- lm(m11 ~ x + c1 + c2, data_serial_parallel) lm_m12 <- lm(m12 ~ m11 + x + c1 + c2, data_serial_parallel) lm_m2 <- lm(m2 ~ x + c1 + c2, data_serial_parallel) lm_y <- lm(y ~ m11 + m12 + m2 + x + c1 + c2, data_serial_parallel) # Join them to form a lm_list-class object lm_serial_parallel <- lm2list(lm_m11, lm_m12, lm_m2, lm_y) lm_serial_parallel summary(lm_serial_parallel) terms.lm_from_lavaan Model Terms of an ’lm_from_lavaan’-Class Object Description It extracts the terms object from an lm_from_lavaan-class object. Usage ## S3 method for class 'lm_from_lavaan' terms(x, ...) Arguments x An ’lm_from_lavaan’-class object. ... Additional arguments. Ignored. Details A method for lm_from_lavaan-class objects that converts a regression model for a variable in a lavaan model to a formula object. This function simply calls stats::terms() on the formula object to extract the predictors of a variable. Value A terms-class object. See terms.object for details. See Also terms.object, lm_from_lavaan_list() Examples library(lavaan) data(data_med) mod <- " m ~ a * x + c1 + c2 y ~ b * m + x + c1 + c2 " fit <- sem(mod, data_med, fixed.x = FALSE) fit_list <- lm_from_lavaan_list(fit) terms(fit_list$m) terms(fit_list$y) total_indirect_effect Total Indirect Effect Between Two Variables Description Compute the total indirect effect between two variables in the paths estimated by many_indirect_effects(). Usage total_indirect_effect(object, x, y) Arguments object The output of many_indirect_effects(), or a list of indirect-class objects. x Character. The name of the x variable. All paths starting from x will be included. y Character. The name of the y variable. All paths ending at y will be included. Details It extracts the indirect-class objects of relevant paths and then adds the indirect effects together using the + operator. Value An indirect-class object. 
See Also many_indirect_effects() Examples library(lavaan) data(data_serial_parallel) mod <- " m11 ~ x + c1 + c2 m12 ~ m11 + x + c1 + c2 m2 ~ x + c1 + c2 y ~ m12 + m2 + m11 + x + c1 + c2 " fit <- sem(mod, data_serial_parallel, fixed.x = FALSE) # All indirect paths, control variables excluded paths <- all_indirect_paths(fit, exclude = c("c1", "c2")) paths # Indirect effect estimates out <- many_indirect_effects(paths, fit = fit) out # Total indirect effect from x to y total_indirect_effect(out, x = "x", y = "y")
github.com/syyongx/php2go
README [¶](#section-readme) --- ### PHP2Go [![GoDoc](https://godoc.org/github.com/syyongx/php2go?status.svg)](https://godoc.org/github.com/syyongx/php2go) [![Go Report Card](https://goreportcard.com/badge/github.com/syyongx/php2go)](https://goreportcard.com/report/github.com/syyongx/php2go) [![MIT licensed](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/syyongx/php2go/blob/v0.9.8/LICENSE) Use Golang to implement PHP's common built-in functions. About 140+ functions have been implemented. #### Install ``` go get github.com/syyongx/php2go ``` #### Requirements Go 1.10 or above. #### PHP Functions ##### Date/Time Functions ``` time() strtotime() date() checkdate() sleep() usleep() ``` ##### String Functions ``` strpos() stripos() strrpos() strripos() str_replace() ucfirst() lcfirst() ucwords() substr() strrev() number_format() chunk_split() str_word_count() wordwrap() strlen() mb_strlen() str_repeat() strstr() strtr() str_shuffle() trim() ltrim() rtrim() explode() strtoupper() strtolower() chr() ord() nl2br() json_encode() json_decode() addslashes() stripslashes() quotemeta() htmlentities() html_entity_decode() md5() md5_file() sha1() sha1_file() crc32() levenshtein() similar_text() soundex() parse_str() ``` ##### URL Functions ``` base64_encode() base64_decode() parse_url() urlencode() urldecode() rawurlencode() rawurldecode() http_build_query() ``` ##### Array(Slice/Map) Functions ``` array_fill() array_flip() array_keys() array_values() array_merge() array_chunk() array_pad() array_slice() array_rand() array_column() array_push() array_pop() array_unshift() array_shift() array_key_exists() array_combine() array_reverse() implode() in_array() ``` ##### Mathematical Functions ``` abs() rand() round() floor() ceil() pi() max() min() decbin() bindec() hex2bin() bin2hex() dechex() hexdec() decoct() Octdec() base_convert() is_nan() ``` ##### CSPRNG Functions ``` random_bytes() random_int() ``` ##### Directory/Filesystem Functions ``` stat() 
pathinfo() file_exists() is_file() is_dir() filesize() file_put_contents() file_get_contents() unlink() delete() copy() is_readable() is_writeable() rename() touch() mkdir() getcwd() realpath() basename() chmod() chown() fclose() filemtime() fgetcsv() glob() ``` ##### Variable handling Functions ``` empty() is_numeric() ``` ##### Program execution Functions ``` exec() system() passthru() ``` ##### Network Functions ``` gethostname() gethostbyname() gethostbynamel() gethostbyaddr() ip2long() long2ip() ``` ##### Misc. Functions ``` echo() uniqid() exit() die() getenv() putenv() memory_get_usage() memory_get_peak_usage() version_compare() zip_open() Ternary(condition bool, trueVal, falseVal interface{}) interface{} ``` #### LICENSE PHP2Go source code is licensed under the [MIT](https://github.com/syyongx/php2go/raw/master/LICENSE) Licence. Documentation [¶](#section-documentation) --- [Rendered for](https://go.dev/about#build-context) linux/amd64 windows/amd64 darwin/amd64 js/wasm ### Index [¶](#pkg-index) * [func Abs(number float64) float64](#Abs) * [func Addslashes(str string) string](#Addslashes) * [func ArrayChunk(s []interface{}, size int) [][]interface{}](#ArrayChunk) * [func ArrayColumn(input map[string]map[string]interface{}, columnKey string) []interface{}](#ArrayColumn) * [func ArrayCombine(s1, s2 []interface{}) map[interface{}]interface{}](#ArrayCombine) * [func ArrayFill(startIndex int, num uint, value interface{}) map[int]interface{}](#ArrayFill) * [func ArrayFlip(m map[interface{}]interface{}) map[interface{}]interface{}](#ArrayFlip) * [func ArrayKeyExists(key interface{}, m map[interface{}]interface{}) bool](#ArrayKeyExists) * [func ArrayKeys(elements map[interface{}]interface{}) []interface{}](#ArrayKeys) * [func ArrayMerge(ss ...[]interface{}) []interface{}](#ArrayMerge) * [func ArrayPad(s []interface{}, size int, val interface{}) []interface{}](#ArrayPad) * [func ArrayPop(s *[]interface{}) interface{}](#ArrayPop) * [func ArrayPush(s *[]interface{}, 
elements ...interface{}) int](#ArrayPush) * [func ArrayRand(elements []interface{}) []interface{}](#ArrayRand) * [func ArrayReverse(s []interface{}) []interface{}](#ArrayReverse) * [func ArrayShift(s *[]interface{}) interface{}](#ArrayShift) * [func ArraySlice(s []interface{}, offset, length uint) []interface{}](#ArraySlice) * [func ArrayUnshift(s *[]interface{}, elements ...interface{}) int](#ArrayUnshift) * [func ArrayValues(elements map[interface{}]interface{}) []interface{}](#ArrayValues) * [func Base64Decode(str string) (string, error)](#Base64Decode) * [func Base64Encode(str string) string](#Base64Encode) * [func BaseConvert(number string, frombase, tobase int) (string, error)](#BaseConvert) * [func Basename(path string) string](#Basename) * [func Bin2hex(str string) (string, error)](#Bin2hex) * [func Bindec(str string) (string, error)](#Bindec) * [func Ceil(value float64) float64](#Ceil) * [func Checkdate(month, day, year int) bool](#Checkdate) * [func Chmod(filename string, mode os.FileMode) bool](#Chmod) * [func Chown(filename string, uid, gid int) bool](#Chown) * [func Chr(ascii int) string](#Chr) * [func ChunkSplit(body string, chunklen uint, end string) string](#ChunkSplit) * [func Copy(source, dest string) (bool, error)](#Copy) * [func Crc32(str string) uint32](#Crc32) * [func Date(format string, timestamp int64) string](#Date) * [func Decbin(number int64) string](#Decbin) * [func Dechex(number int64) string](#Dechex) * [func Decoct(number int64) string](#Decoct) * [func Delete(filename string) error](#Delete) * [func Die(status int)](#Die) * [func DiskFreeSpace(directory string) (uint64, error)](#DiskFreeSpace) * [func DiskTotalSpace(directory string) (uint64, error)](#DiskTotalSpace) * [func Echo(args ...interface{})](#Echo) * [func Empty(val interface{}) bool](#Empty) * [func Exec(command string, output *[]string, returnVar *int) string](#Exec) * [func Exit(status int)](#Exit) * [func Explode(delimiter, str string) []string](#Explode) * [func 
Fclose(handle *os.File) error](#Fclose) * [func Fgetcsv(handle *os.File, length int, delimiter rune) ([][]string, error)](#Fgetcsv) * [func FileExists(filename string) bool](#FileExists) * [func FileGetContents(filename string) (string, error)](#FileGetContents) * [func FilePutContents(filename string, data string, mode os.FileMode) error](#FilePutContents) * [func FileSize(filename string) (int64, error)](#FileSize) * [func Filemtime(filename string) (int64, error)](#Filemtime) * [func Floor(value float64) float64](#Floor) * [func Getcwd() (string, error)](#Getcwd) * [func Getenv(varname string) string](#Getenv) * [func Gethostbyaddr(ipAddress string) (string, error)](#Gethostbyaddr) * [func Gethostbyname(hostname string) (string, error)](#Gethostbyname) * [func Gethostbynamel(hostname string) ([]string, error)](#Gethostbynamel) * [func Gethostname() (string, error)](#Gethostname) * [func Glob(pattern string) ([]string, error)](#Glob) * [func HTMLEntityDecode(str string) string](#HTMLEntityDecode) * [func HTTPBuildQuery(queryData url.Values) string](#HTTPBuildQuery) * [func Hex2bin(data string) (string, error)](#Hex2bin) * [func Hexdec(str string) (int64, error)](#Hexdec) * [func Htmlentities(str string) string](#Htmlentities) * [func IP2long(ipAddress string) uint32](#IP2long) * [func Implode(glue string, pieces []string) string](#Implode) * [func InArray(needle interface{}, haystack interface{}) bool](#InArray) * [func IsDir(filename string) (bool, error)](#IsDir) * [func IsFile(filename string) bool](#IsFile) * [func IsNan(val float64) bool](#IsNan) * [func IsNumeric(val interface{}) bool](#IsNumeric) * [func IsReadable(filename string) bool](#IsReadable) * [func IsWriteable(filename string) bool](#IsWriteable) * [func JSONDecode(data []byte, val interface{}) error](#JSONDecode) * [func JSONEncode(val interface{}) ([]byte, error)](#JSONEncode) * [func Lcfirst(str string) string](#Lcfirst) * [func Levenshtein(str1, str2 string, costIns, costRep, costDel int) 
int](#Levenshtein) * [func Long2ip(properAddress uint32) string](#Long2ip) * [func Ltrim(str string, characterMask ...string) string](#Ltrim) * [func Max(nums ...float64) float64](#Max) * [func MbStrlen(str string) int](#MbStrlen) * [func Md5(str string) string](#Md5) * [func Md5File(path string) (string, error)](#Md5File) * [func MemoryGetPeakUsage(realUsage bool) uint64](#MemoryGetPeakUsage) * [func MemoryGetUsage(realUsage bool) uint64](#MemoryGetUsage) * [func Min(nums ...float64) float64](#Min) * [func Mkdir(filename string, mode os.FileMode) error](#Mkdir) * [func Nl2br(str string, isXhtml bool) string](#Nl2br) * [func NumberFormat(number float64, decimals uint, decPoint, thousandsSep string) string](#NumberFormat) * [func Octdec(str string) (int64, error)](#Octdec) * [func Ord(char string) int](#Ord) * [func Pack(order binary.ByteOrder, data interface{}) (string, error)](#Pack) * [func ParseStr(encodedString string, result map[string]interface{}) error](#ParseStr) * [func ParseURL(str string, component int) (map[string]string, error)](#ParseURL) * [func Passthru(command string, returnVar *int)](#Passthru) * [func Pathinfo(path string, options int) map[string]string](#Pathinfo) * [func Pi() float64](#Pi) * [func Putenv(setting string) error](#Putenv) * [func Quotemeta(str string) string](#Quotemeta) * [func Rand(min, max int) int](#Rand) * [func RandomBytes(length int) ([]byte, error)](#RandomBytes) * [func RandomInt(min, max int) (int, error)](#RandomInt) * [func Rawurldecode(str string) (string, error)](#Rawurldecode) * [func Rawurlencode(str string) string](#Rawurlencode) * [func Realpath(path string) (string, error)](#Realpath) * [func Rename(oldname, newname string) error](#Rename) * [func Round(value float64, precision int) float64](#Round) * [func Rtrim(str string, characterMask ...string) string](#Rtrim) * [func Sha1(str string) string](#Sha1) * [func Sha1File(path string) (string, error)](#Sha1File) * [func SimilarText(first, second string, percent 
*float64) int](#SimilarText) * [func Sleep(t int64)](#Sleep) * [func Soundex(str string) string](#Soundex) * [func Stat(filename string) (os.FileInfo, error)](#Stat) * [func StrRepeat(input string, multiplier int) string](#StrRepeat) * [func StrReplace(search, replace, subject string, count int) string](#StrReplace) * [func StrShuffle(str string) string](#StrShuffle) * [func StrWordCount(str string) []string](#StrWordCount) * [func Stripos(haystack, needle string, offset int) int](#Stripos) * [func Stripslashes(str string) string](#Stripslashes) * [func Strlen(str string) int](#Strlen) * [func Strpos(haystack, needle string, offset int) int](#Strpos) * [func Strrev(str string) string](#Strrev) * [func Strripos(haystack, needle string, offset int) int](#Strripos) * [func Strrpos(haystack, needle string, offset int) int](#Strrpos) * [func Strstr(haystack string, needle string) string](#Strstr) * [func Strtolower(str string) string](#Strtolower) * [func Strtotime(format, strtime string) (int64, error)](#Strtotime) * [func Strtoupper(str string) string](#Strtoupper) * [func Strtr(haystack string, params ...interface{}) string](#Strtr) * [func Substr(str string, start uint, length int) string](#Substr) * [func System(command string, returnVar *int) string](#System) * [func Ternary(condition bool, trueVal, falseVal interface{}) interface{}](#Ternary) * [func Time() int64](#Time) * [func Touch(filename string) (bool, error)](#Touch) * [func Trim(str string, characterMask ...string) string](#Trim) * [func URLDecode(str string) (string, error)](#URLDecode) * [func URLEncode(str string) string](#URLEncode) * [func Ucfirst(str string) string](#Ucfirst) * [func Ucwords(str string) string](#Ucwords) * [func Umask(mask int) int](#Umask) * [func Uniqid(prefix string) string](#Uniqid) * [func Unlink(filename string) error](#Unlink) * [func Unpack(order binary.ByteOrder, data string) (interface{}, error)](#Unpack) * [func Usleep(t int64)](#Usleep) * [func VersionCompare(version1, 
version2, operator string) bool](#VersionCompare) * [func Wordwrap(str string, width uint, br string, cut bool) string](#Wordwrap) * [func ZipOpen(filename string) (*zip.ReadCloser, error)](#ZipOpen) ### Constants [¶](#pkg-constants) This section is empty. ### Variables [¶](#pkg-variables) This section is empty. ### Functions [¶](#pkg-functions) #### func [Abs](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1321) [¶](#Abs) ``` func Abs(number [float64](/builtin#float64)) [float64](/builtin#float64) ``` Abs abs() #### func [Addslashes](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L728) [¶](#Addslashes) ``` func Addslashes(str [string](/builtin#string)) [string](/builtin#string) ``` Addslashes addslashes() #### func [ArrayChunk](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1147) [¶](#ArrayChunk) ``` func ArrayChunk(s []interface{}, size [int](/builtin#int)) [][]interface{} ``` ArrayChunk array_chunk() #### func [ArrayColumn](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1208) [¶](#ArrayColumn) ``` func ArrayColumn(input map[[string](/builtin#string)]map[[string](/builtin#string)]interface{}, columnKey [string](/builtin#string)) []interface{} ``` ArrayColumn array_column() #### func [ArrayCombine](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1262) [¶](#ArrayCombine) ``` func ArrayCombine(s1, s2 []interface{}) map[interface{}]interface{} ``` ArrayCombine array_combine() #### func [ArrayFill](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1094) [¶](#ArrayFill) ``` func ArrayFill(startIndex [int](/builtin#int), num [uint](/builtin#uint), value interface{}) map[[int](/builtin#int)]interface{} ``` ArrayFill array_fill() #### func [ArrayFlip](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1105) [¶](#ArrayFlip) ``` func ArrayFlip(m map[interface{}]interface{}) map[interface{}]interface{} ``` ArrayFlip array_flip() #### func [ArrayKeyExists](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1256) [¶](#ArrayKeyExists) 
``` func ArrayKeyExists(key interface{}, m map[interface{}]interface{}) [bool](/builtin#bool) ``` ArrayKeyExists array_key_exists() #### func [ArrayKeys](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1114) [¶](#ArrayKeys) ``` func ArrayKeys(elements map[interface{}]interface{}) []interface{} ``` ArrayKeys array_keys() #### func [ArrayMerge](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1134) [¶](#ArrayMerge) ``` func ArrayMerge(ss ...[]interface{}) []interface{} ``` ArrayMerge array_merge() #### func [ArrayPad](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1166) [¶](#ArrayPad) ``` func ArrayPad(s []interface{}, size [int](/builtin#int), val interface{}) []interface{} ``` ArrayPad array_pad() #### func [ArrayPop](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1227) [¶](#ArrayPop) ``` func ArrayPop(s *[]interface{}) interface{} ``` ArrayPop array_pop() Pop the element off the end of slice #### func [ArrayPush](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1220) [¶](#ArrayPush) ``` func ArrayPush(s *[]interface{}, elements ...interface{}) [int](/builtin#int) ``` ArrayPush array_push() Push one or more elements onto the end of slice #### func [ArrayRand](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1198) [¶](#ArrayRand) ``` func ArrayRand(elements []interface{}) []interface{} ``` ArrayRand array_rand() #### func [ArrayReverse](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1274) [¶](#ArrayReverse) ``` func ArrayReverse(s []interface{}) []interface{} ``` ArrayReverse array_reverse() #### func [ArrayShift](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1246) [¶](#ArrayShift) ``` func ArrayShift(s *[]interface{}) interface{} ``` ArrayShift array_shift() Shift an element off the beginning of slice #### func [ArraySlice](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1186) [¶](#ArraySlice) ``` func ArraySlice(s []interface{}, offset, length [uint](/builtin#uint)) []interface{} ``` ArraySlice array_slice() 
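The slice helpers above (ArrayPush, ArrayPop, ArrayShift, ArrayUnshift) take a pointer to the slice so the caller's slice header can be mutated in place, as PHP's array functions do. A minimal stdlib-only sketch of the same semantics, using hypothetical lowercase names rather than the library's actual implementation:

```go
package main

import "fmt"

// arrayPush appends elements and returns the new length,
// mirroring the ArrayPush(s *[]interface{}, ...) int signature above.
func arrayPush(s *[]interface{}, elements ...interface{}) int {
	*s = append(*s, elements...)
	return len(*s)
}

// arrayPop removes and returns the last element (nil if empty),
// mirroring ArrayPop.
func arrayPop(s *[]interface{}) interface{} {
	if len(*s) == 0 {
		return nil
	}
	last := (*s)[len(*s)-1]
	*s = (*s)[:len(*s)-1]
	return last
}

// arrayShift removes and returns the first element (nil if empty),
// mirroring ArrayShift.
func arrayShift(s *[]interface{}) interface{} {
	if len(*s) == 0 {
		return nil
	}
	first := (*s)[0]
	*s = (*s)[1:]
	return first
}

func main() {
	s := []interface{}{"a", "b"}
	n := arrayPush(&s, "c", "d") // s is now [a b c d]
	fmt.Println(n)               // 4
	fmt.Println(arrayPop(&s))    // d
	fmt.Println(arrayShift(&s))  // a
	fmt.Println(s)               // [b c]
}
```

Passing `*[]interface{}` is what lets these helpers return the popped value while also shrinking the caller's slice, which a plain `[]interface{}` parameter could not do.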
#### func [ArrayUnshift](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1239) [¶](#ArrayUnshift) ``` func ArrayUnshift(s *[]interface{}, elements ...interface{}) [int](/builtin#int) ``` ArrayUnshift array_unshift() Prepend one or more elements to the beginning of a slice #### func [ArrayValues](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1124) [¶](#ArrayValues) ``` func ArrayValues(elements map[interface{}]interface{}) []interface{} ``` ArrayValues array_values() #### func [Base64Decode](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1076) [¶](#Base64Decode) ``` func Base64Decode(str [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error)) ``` Base64Decode base64_decode() #### func [Base64Encode](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1071) [¶](#Base64Encode) ``` func Base64Encode(str [string](/builtin#string)) [string](/builtin#string) ``` Base64Encode base64_encode() #### func [BaseConvert](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1450) [¶](#BaseConvert) ``` func BaseConvert(number [string](/builtin#string), frombase, tobase [int](/builtin#int)) ([string](/builtin#string), [error](/builtin#error)) ``` BaseConvert base_convert() #### func [Basename](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1669) [¶](#Basename) ``` func Basename(path [string](/builtin#string)) [string](/builtin#string) ``` Basename basename() #### func [Bin2hex](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1411) [¶](#Bin2hex) ``` func Bin2hex(str [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error)) ``` Bin2hex bin2hex() #### func [Bindec](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1393) [¶](#Bindec) ``` func Bindec(str [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error)) ``` Bindec bindec() #### func [Ceil](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1354) [¶](#Ceil) ``` func Ceil(value [float64](/builtin#float64)) 
[float64](/builtin#float64) ``` Ceil ceil() #### func [Checkdate](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L65) [¶](#Checkdate) ``` func Checkdate(month, day, year [int](/builtin#int)) [bool](/builtin#bool) ``` Checkdate checkdate() Validate a Gregorian date #### func [Chmod](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1674) [¶](#Chmod) ``` func Chmod(filename [string](/builtin#string), mode [os](/os).[FileMode](/os#FileMode)) [bool](/builtin#bool) ``` Chmod chmod() #### func [Chown](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1679) [¶](#Chown) ``` func Chown(filename [string](/builtin#string), uid, gid [int](/builtin#int)) [bool](/builtin#bool) ``` Chown chown() #### func [Chr](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L674) [¶](#Chr) ``` func Chr(ascii [int](/builtin#int)) [string](/builtin#string) ``` Chr chr() #### func [ChunkSplit](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L463) [¶](#ChunkSplit) ``` func ChunkSplit(body [string](/builtin#string), chunklen [uint](/builtin#uint), end [string](/builtin#string)) [string](/builtin#string) ``` ChunkSplit chunk_split() #### func [Copy](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1599) [¶](#Copy) ``` func Copy(source, dest [string](/builtin#string)) ([bool](/builtin#bool), [error](/builtin#error)) ``` Copy copy() #### func [Crc32](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L844) [¶](#Crc32) ``` func Crc32(str [string](/builtin#string)) [uint32](/builtin#uint32) ``` Crc32 crc32() #### func [Date](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L59) [¶](#Date) ``` func Date(format [string](/builtin#string), timestamp [int64](/builtin#int64)) [string](/builtin#string) ``` Date date() Date("02/01/2006 15:04:05 PM", 1524799394) Note: the behavior is inconsistent with php's date function #### func [Decbin](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1388) [¶](#Decbin) ``` func Decbin(number [int64](/builtin#int64)) 
[string](/builtin#string) ``` Decbin decbin() #### func [Dechex](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1430) [¶](#Dechex) ``` func Dechex(number [int64](/builtin#int64)) [string](/builtin#string) ``` Dechex dechex() #### func [Decoct](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1440) [¶](#Decoct) ``` func Decoct(number [int64](/builtin#int64)) [string](/builtin#string) ``` Decoct decoct() #### func [Delete](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1594) [¶](#Delete) ``` func Delete(filename [string](/builtin#string)) [error](/builtin#error) ``` Delete delete() #### func [Die](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L2041) [¶](#Die) ``` func Die(status [int](/builtin#int)) ``` Die die() #### func [DiskFreeSpace](https://github.com/syyongx/php2go/blob/v0.9.8/php_unix.go#L17) [¶](#DiskFreeSpace) ``` func DiskFreeSpace(directory [string](/builtin#string)) ([uint64](/builtin#uint64), [error](/builtin#error)) ``` DiskFreeSpace disk_free_space() #### func [DiskTotalSpace](https://github.com/syyongx/php2go/blob/v0.9.8/php_unix.go#L27) [¶](#DiskTotalSpace) ``` func DiskTotalSpace(directory [string](/builtin#string)) ([uint64](/builtin#uint64), [error](/builtin#error)) ``` DiskTotalSpace disk_total_space() #### func [Echo](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L2025) [¶](#Echo) ``` func Echo(args ...interface{}) ``` Echo echo #### func [Empty](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1718) [¶](#Empty) ``` func Empty(val interface{}) [bool](/builtin#bool) ``` Empty empty() #### func [Exec](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1807) [¶](#Exec) ``` func Exec(command [string](/builtin#string), output *[][string](/builtin#string), returnVar *[int](/builtin#int)) [string](/builtin#string) ``` Exec exec() returnVar, 0: succ; 1: fail Return the last line from the result of the command. 
command format eg: ``` "ls -a" "/bin/bash -c \"ls -a\"" ``` #### func [Exit](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L2036) [¶](#Exit) ``` func Exit(status [int](/builtin#int)) ``` Exit exit() #### func [Explode](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L669) [¶](#Explode) ``` func Explode(delimiter, str [string](/builtin#string)) [][string](/builtin#string) ``` Explode explode() #### func [Fclose](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1684) [¶](#Fclose) ``` func Fclose(handle *[os](/os).[File](/os#File)) [error](/builtin#error) ``` Fclose fclose() #### func [Fgetcsv](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1703) [¶](#Fgetcsv) ``` func Fgetcsv(handle *[os](/os).[File](/os#File), length [int](/builtin#int), delimiter [rune](/builtin#rune)) ([][][string](/builtin#string), [error](/builtin#error)) ``` Fgetcsv fgetcsv() #### func [FileExists](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1541) [¶](#FileExists) ``` func FileExists(filename [string](/builtin#string)) [bool](/builtin#bool) ``` FileExists file_exists() #### func [FileGetContents](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1583) [¶](#FileGetContents) ``` func FileGetContents(filename [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error)) ``` FileGetContents file_get_contents() #### func [FilePutContents](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1578) [¶](#FilePutContents) ``` func FilePutContents(filename [string](/builtin#string), data [string](/builtin#string), mode [os](/os).[FileMode](/os#FileMode)) [error](/builtin#error) ``` FilePutContents file_put_contents() #### func [FileSize](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1569) [¶](#FileSize) ``` func FileSize(filename [string](/builtin#string)) ([int64](/builtin#int64), [error](/builtin#error)) ``` FileSize filesize() #### func [Filemtime](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1689) [¶](#Filemtime) ``` func 
Filemtime(filename [string](/builtin#string)) ([int64](/builtin#int64), [error](/builtin#error)) ``` Filemtime filemtime() #### func [Floor](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1349) [¶](#Floor) ``` func Floor(value [float64](/builtin#float64)) [float64](/builtin#float64) ``` Floor floor() #### func [Getcwd](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1658) [¶](#Getcwd) ``` func Getcwd() ([string](/builtin#string), [error](/builtin#error)) ``` Getcwd getcwd() #### func [Getenv](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L2046) [¶](#Getenv) ``` func Getenv(varname [string](/builtin#string)) [string](/builtin#string) ``` Getenv getenv() #### func [Gethostbyaddr](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1995) [¶](#Gethostbyaddr) ``` func Gethostbyaddr(ipAddress [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error)) ``` Gethostbyaddr gethostbyaddr() Get the Internet host name corresponding to a given IP address #### func [Gethostbyname](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1964) [¶](#Gethostbyname) ``` func Gethostbyname(hostname [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error)) ``` Gethostbyname gethostbyname() Get the IPv4 address corresponding to a given Internet host name #### func [Gethostbynamel](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1979) [¶](#Gethostbynamel) ``` func Gethostbynamel(hostname [string](/builtin#string)) ([][string](/builtin#string), [error](/builtin#error)) ``` Gethostbynamel gethostbynamel() Get a list of IPv4 addresses corresponding to a given Internet host name #### func [Gethostname](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1958) [¶](#Gethostname) ``` func Gethostname() ([string](/builtin#string), [error](/builtin#error)) ``` Gethostname gethostname() #### func [Glob](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1711) [¶](#Glob) ``` func Glob(pattern [string](/builtin#string)) 
([][string](/builtin#string), [error](/builtin#error)) ``` Glob glob() #### func [HTMLEntityDecode](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L777) [¶](#HTMLEntityDecode) ``` func HTMLEntityDecode(str [string](/builtin#string)) [string](/builtin#string) ``` HTMLEntityDecode html_entity_decode() #### func [HTTPBuildQuery](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1066) [¶](#HTTPBuildQuery) ``` func HTTPBuildQuery(queryData [url](/net/url).[Values](/net/url#Values)) [string](/builtin#string) ``` HTTPBuildQuery http_build_query() #### func [Hex2bin](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1402) [¶](#Hex2bin) ``` func Hex2bin(data [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error)) ``` Hex2bin hex2bin() #### func [Hexdec](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1435) [¶](#Hexdec) ``` func Hexdec(str [string](/builtin#string)) ([int64](/builtin#int64), [error](/builtin#error)) ``` Hexdec hexdec() #### func [Htmlentities](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L772) [¶](#Htmlentities) ``` func Htmlentities(str [string](/builtin#string)) [string](/builtin#string) ``` Htmlentities htmlentities() #### func [IP2long](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L2005) [¶](#IP2long) ``` func IP2long(ipAddress [string](/builtin#string)) [uint32](/builtin#uint32) ``` IP2long ip2long() IPv4 #### func [Implode](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1282) [¶](#Implode) ``` func Implode(glue [string](/builtin#string), pieces [][string](/builtin#string)) [string](/builtin#string) ``` Implode implode() #### func [InArray](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1296) [¶](#InArray) ``` func InArray(needle interface{}, haystack interface{}) [bool](/builtin#bool) ``` InArray in_array() haystack supported types: slice, array or map #### func [IsDir](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1559) [¶](#IsDir) ``` func IsDir(filename 
[string](/builtin#string)) ([bool](/builtin#bool), [error](/builtin#error)) ``` IsDir is_dir() #### func [IsFile](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1550) [¶](#IsFile) ``` func IsFile(filename [string](/builtin#string)) [bool](/builtin#bool) ``` IsFile is_file() #### func [IsNan](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1459) [¶](#IsNan) ``` func IsNan(val [float64](/builtin#float64)) [bool](/builtin#bool) ``` IsNan is_nan() #### func [IsNumeric](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1747) [¶](#IsNumeric) ``` func IsNumeric(val interface{}) [bool](/builtin#bool) ``` IsNumeric is_numeric() Numeric strings consist of an optional sign, any number of digits, an optional decimal part and an optional exponential part. Thus +0123.45e6 is a valid numeric value. PHP's is_numeric() does not accept hexadecimal strings (e.g. 0xf4c3b00c), but this IsNumeric does. #### func [IsReadable](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1618) [¶](#IsReadable) ``` func IsReadable(filename [string](/builtin#string)) [bool](/builtin#bool) ``` IsReadable is_readable() #### func [IsWriteable](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1628) [¶](#IsWriteable) ``` func IsWriteable(filename [string](/builtin#string)) [bool](/builtin#bool) ``` IsWriteable is_writeable() #### func [JSONDecode](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L718) [¶](#JSONDecode) ``` func JSONDecode(data [][byte](/builtin#byte), val interface{}) [error](/builtin#error) ``` JSONDecode json_decode() #### func [JSONEncode](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L723) [¶](#JSONEncode) ``` func JSONEncode(val interface{}) ([][byte](/builtin#byte), [error](/builtin#error)) ``` JSONEncode json_encode() #### func [Lcfirst](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L198) [¶](#Lcfirst) ``` func Lcfirst(str [string](/builtin#string)) [string](/builtin#string) ``` Lcfirst lcfirst() #### func
[Levenshtein](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L852) [¶](#Levenshtein) ``` func Levenshtein(str1, str2 [string](/builtin#string), costIns, costRep, costDel [int](/builtin#int)) [int](/builtin#int) ``` Levenshtein levenshtein() costIns: Defines the cost of insertion. costRep: Defines the cost of replacement. costDel: Defines the cost of deletion. #### func [Long2ip](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L2015) [¶](#Long2ip) ``` func Long2ip(properAddress [uint32](/builtin#uint32)) [string](/builtin#string) ``` Long2ip long2ip() IPv4 #### func [Ltrim](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L653) [¶](#Ltrim) ``` func Ltrim(str [string](/builtin#string), characterMask ...[string](/builtin#string)) [string](/builtin#string) ``` Ltrim ltrim() #### func [Max](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1364) [¶](#Max) ``` func Max(nums ...[float64](/builtin#float64)) [float64](/builtin#float64) ``` Max max() #### func [MbStrlen](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L546) [¶](#MbStrlen) ``` func MbStrlen(str [string](/builtin#string)) [int](/builtin#int) ``` MbStrlen mb_strlen() #### func [Md5](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L782) [¶](#Md5) ``` func Md5(str [string](/builtin#string)) [string](/builtin#string) ``` Md5 md5() #### func [Md5File](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L789) [¶](#Md5File) ``` func Md5File(path [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error)) ``` Md5File md5_file() #### func [MemoryGetPeakUsage](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L2070) [¶](#MemoryGetPeakUsage) added in v0.9.5 ``` func MemoryGetPeakUsage(realUsage [bool](/builtin#bool)) [uint64](/builtin#uint64) ``` MemoryGetPeakUsage memory_get_peak_usage() return in bytes #### func [MemoryGetUsage](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L2062) [¶](#MemoryGetUsage) ``` func MemoryGetUsage(realUsage [bool](/builtin#bool)) 
[uint64](/builtin#uint64) ``` MemoryGetUsage memory_get_usage() return in bytes #### func [Min](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1376) [¶](#Min) ``` func Min(nums ...[float64](/builtin#float64)) [float64](/builtin#float64) ``` Min min() #### func [Mkdir](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1653) [¶](#Mkdir) ``` func Mkdir(filename [string](/builtin#string), mode [os](/os).[FileMode](/os#FileMode)) [error](/builtin#error) ``` Mkdir mkdir() #### func [Nl2br](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L686) [¶](#Nl2br) ``` func Nl2br(str [string](/builtin#string), isXhtml [bool](/builtin#bool)) [string](/builtin#string) ``` Nl2br nl2br() \n\r, \r\n, \r, \n #### func [NumberFormat](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L420) [¶](#NumberFormat) ``` func NumberFormat(number [float64](/builtin#float64), decimals [uint](/builtin#uint), decPoint, thousandsSep [string](/builtin#string)) [string](/builtin#string) ``` NumberFormat number_format() decimals: Sets the number of decimal points. decPoint: Sets the separator for the decimal point. thousandsSep: Sets the thousands' separator. 
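The NumberFormat parameters above mirror PHP's number_format(). As a rough illustration of those semantics — a hypothetical, stdlib-only sketch, not the php2go implementation (note it inherits Go's round-half-to-even behavior from strconv.FormatFloat, whereas PHP rounds halves away from zero):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// numberFormat sketches the number_format() semantics documented above:
// decimals is the number of decimal places, decPoint the decimal
// separator, thousandsSep the grouping separator. Hypothetical helper,
// not the php2go implementation.
func numberFormat(number float64, decimals uint, decPoint, thousandsSep string) string {
	s := strconv.FormatFloat(number, 'f', int(decimals), 64)
	neg := strings.HasPrefix(s, "-")
	if neg {
		s = s[1:]
	}
	intPart, fracPart := s, ""
	if i := strings.IndexByte(s, '.'); i >= 0 {
		intPart, fracPart = s[:i], s[i+1:]
	}
	// Group the integer digits in threes, right to left.
	var groups []string
	for len(intPart) > 3 {
		groups = append([]string{intPart[len(intPart)-3:]}, groups...)
		intPart = intPart[:len(intPart)-3]
	}
	groups = append([]string{intPart}, groups...)
	out := strings.Join(groups, thousandsSep)
	if decimals > 0 {
		out += decPoint + fracPart
	}
	if neg {
		out = "-" + out
	}
	return out
}

func main() {
	fmt.Println(numberFormat(1234567.891, 2, ".", ",")) // 1,234,567.89
}
```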
#### func [Octdec](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1445) [¶](#Octdec) ``` func Octdec(str [string](/builtin#string)) ([int64](/builtin#int64), [error](/builtin#error)) ``` Octdec octdec() #### func [Ord](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L679) [¶](#Ord) ``` func Ord(char [string](/builtin#string)) [int](/builtin#int) ``` Ord ord() #### func [Pack](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L2287) [¶](#Pack) ``` func Pack(order [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder), data interface{}) ([string](/builtin#string), [error](/builtin#error)) ``` Pack pack() #### func [ParseStr](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L276) [¶](#ParseStr) ``` func ParseStr(encodedString [string](/builtin#string), result map[[string](/builtin#string)]interface{}) [error](/builtin#error) ``` ParseStr parse_str() f1=m&f2=n -> map[f1:m f2:n] f[a]=m&f[b]=n -> map[f:map[a:m b:n]] f[a][a]=m&f[a][b]=n -> map[f:map[a:map[a:m b:n]]] f[]=m&f[]=n -> map[f:[m n]] f[a][]=m&f[a][]=n -> map[f:map[a:[m n]]] f[][]=m&f[][]=n -> map[f:[map[]]] // Currently does not support nested slice. f=m&f[a]=n -> error // This is not the same as PHP.
a .[[b=c -> map[a___[b:c] #### func [ParseURL](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1009) [¶](#ParseURL) ``` func ParseURL(str [string](/builtin#string), component [int](/builtin#int)) (map[[string](/builtin#string)][string](/builtin#string), [error](/builtin#error)) ``` ParseURL parse_url() Parse a URL and return its components -1: all; 1: scheme; 2: host; 4: port; 8: user; 16: pass; 32: path; 64: query; 128: fragment #### func [Passthru](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1920) [¶](#Passthru) ``` func Passthru(command [string](/builtin#string), returnVar *[int](/builtin#int)) ``` Passthru passthru() returnVar, 0: succ; 1: fail #### func [Pathinfo](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1503) [¶](#Pathinfo) ``` func Pathinfo(path [string](/builtin#string), options [int](/builtin#int)) map[[string](/builtin#string)][string](/builtin#string) ``` Pathinfo pathinfo() -1: all; 1: dirname; 2: basename; 4: extension; 8: filename Usage: Pathinfo("/home/go/path/src/php2go/php2go.go", 1|2|4|8) #### func [Pi](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1359) [¶](#Pi) ``` func Pi() [float64](/builtin#float64) ``` Pi pi() #### func [Putenv](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L2052) [¶](#Putenv) ``` func Putenv(setting [string](/builtin#string)) [error](/builtin#error) ``` Putenv putenv() The setting, like "FOO=BAR" #### func [Quotemeta](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L759) [¶](#Quotemeta) ``` func Quotemeta(str [string](/builtin#string)) [string](/builtin#string) ``` Quotemeta quotemeta() #### func [Rand](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1327) [¶](#Rand) ``` func Rand(min, max [int](/builtin#int)) [int](/builtin#int) ``` Rand rand() Range: [0, 2147483647] #### func [RandomBytes](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1466) [¶](#RandomBytes) added in v0.9.7 ``` func RandomBytes(length [int](/builtin#int)) ([][byte](/builtin#byte), 
[error](/builtin#error)) ``` RandomBytes random_bytes() #### func [RandomInt](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1477) [¶](#RandomInt) added in v0.9.7 ``` func RandomInt(min, max [int](/builtin#int)) ([int](/builtin#int), [error](/builtin#error)) ``` RandomInt random_int() #### func [Rawurldecode](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1061) [¶](#Rawurldecode) ``` func Rawurldecode(str [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error)) ``` Rawurldecode rawurldecode() #### func [Rawurlencode](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1056) [¶](#Rawurlencode) ``` func Rawurlencode(str [string](/builtin#string)) [string](/builtin#string) ``` Rawurlencode rawurlencode() #### func [Realpath](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1664) [¶](#Realpath) ``` func Realpath(path [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error)) ``` Realpath realpath() #### func [Rename](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1638) [¶](#Rename) ``` func Rename(oldname, newname [string](/builtin#string)) [error](/builtin#error) ``` Rename rename() #### func [Round](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1343) [¶](#Round) ``` func Round(value [float64](/builtin#float64), precision [int](/builtin#int)) [float64](/builtin#float64) ``` Round round() #### func [Rtrim](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L661) [¶](#Rtrim) ``` func Rtrim(str [string](/builtin#string), characterMask ...[string](/builtin#string)) [string](/builtin#string) ``` Rtrim rtrim() #### func [Sha1](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L826) [¶](#Sha1) ``` func Sha1(str [string](/builtin#string)) [string](/builtin#string) ``` Sha1 sha1() #### func [Sha1File](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L833) [¶](#Sha1File) ``` func Sha1File(path [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error)) ``` Sha1File 
sha1_file() #### func [SimilarText](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L901) [¶](#SimilarText) ``` func SimilarText(first, second [string](/builtin#string), percent *[float64](/builtin#float64)) [int](/builtin#int) ``` SimilarText similar_text() #### func [Sleep](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L89) [¶](#Sleep) ``` func Sleep(t [int64](/builtin#int64)) ``` Sleep sleep() #### func [Soundex](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L947) [¶](#Soundex) ``` func Soundex(str [string](/builtin#string)) [string](/builtin#string) ``` Soundex soundex() Calculate the soundex key of a string. #### func [Stat](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1495) [¶](#Stat) ``` func Stat(filename [string](/builtin#string)) ([os](/os).[FileInfo](/os#FileInfo), [error](/builtin#error)) ``` Stat stat() #### func [StrRepeat](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L551) [¶](#StrRepeat) ``` func StrRepeat(input [string](/builtin#string), multiplier [int](/builtin#int)) [string](/builtin#string) ``` StrRepeat str_repeat() #### func [StrReplace](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L174) [¶](#StrReplace) ``` func StrReplace(search, replace, subject [string](/builtin#string), count [int](/builtin#int)) [string](/builtin#string) ``` StrReplace str_replace() #### func [StrShuffle](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L634) [¶](#StrShuffle) ``` func StrShuffle(str [string](/builtin#string)) [string](/builtin#string) ``` StrShuffle str_shuffle() #### func [StrWordCount](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L486) [¶](#StrWordCount) ``` func StrWordCount(str [string](/builtin#string)) [][string](/builtin#string) ``` StrWordCount str_word_count() #### func [Stripos](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L118) [¶](#Stripos) ``` func Stripos(haystack, needle [string](/builtin#string), offset [int](/builtin#int)) [int](/builtin#int) ``` Stripos stripos() #### func 
[Stripslashes](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L741) [¶](#Stripslashes) ``` func Stripslashes(str [string](/builtin#string)) [string](/builtin#string) ``` Stripslashes stripslashes() #### func [Strlen](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L541) [¶](#Strlen) ``` func Strlen(str [string](/builtin#string)) [int](/builtin#int) ``` Strlen strlen() #### func [Strpos](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L101) [¶](#Strpos) ``` func Strpos(haystack, needle [string](/builtin#string), offset [int](/builtin#int)) [int](/builtin#int) ``` Strpos strpos() #### func [Strrev](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L259) [¶](#Strrev) ``` func Strrev(str [string](/builtin#string)) [string](/builtin#string) ``` Strrev strrev() #### func [Strripos](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L155) [¶](#Strripos) ``` func Strripos(haystack, needle [string](/builtin#string), offset [int](/builtin#int)) [int](/builtin#int) ``` Strripos strripos() #### func [Strrpos](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L136) [¶](#Strrpos) ``` func Strrpos(haystack, needle [string](/builtin#string), offset [int](/builtin#int)) [int](/builtin#int) ``` Strrpos strrpos() #### func [Strstr](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L556) [¶](#Strstr) ``` func Strstr(haystack [string](/builtin#string), needle [string](/builtin#string)) [string](/builtin#string) ``` Strstr strstr() #### func [Strtolower](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L184) [¶](#Strtolower) ``` func Strtolower(str [string](/builtin#string)) [string](/builtin#string) ``` Strtolower strtolower() #### func [Strtotime](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L48) [¶](#Strtotime) ``` func Strtotime(format, strtime [string](/builtin#string)) ([int64](/builtin#int64), [error](/builtin#error)) ``` Strtotime strtotime() Strtotime("02/01/2006 15:04:05", "02/01/2016 15:04:05") == 1451747045 Strtotime("3 04 PM", "8 41 
PM") == -62167144740 #### func [Strtoupper](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L179) [¶](#Strtoupper) ``` func Strtoupper(str [string](/builtin#string)) [string](/builtin#string) ``` Strtoupper strtoupper() #### func [Strtr](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L573) [¶](#Strtr) ``` func Strtr(haystack [string](/builtin#string), params ...interface{}) [string](/builtin#string) ``` Strtr strtr() If the parameter length is 1, type is: map[string]string Strtr("baab", map[string]string{"ab": "01"}) will return "ba01" If the parameter length is 2, type is: string, string Strtr("baab", "ab", "01") will return "1001", a => 0; b => 1. #### func [Substr](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L241) [¶](#Substr) ``` func Substr(str [string](/builtin#string), start [uint](/builtin#uint), length [int](/builtin#int)) [string](/builtin#string) ``` Substr substr() #### func [System](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1847) [¶](#System) ``` func System(command [string](/builtin#string), returnVar *[int](/builtin#int)) [string](/builtin#string) ``` System system() returnVar, 0: succ; 1: fail Returns the last line of the command output on success, and "" on failure. 
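Strtr's two-string form described above maps characters positionally (from[i] is replaced by to[i]). A stdlib-only sketch of just that form — a hypothetical helper, not the php2go code; the map[string]string form additionally does substring replacement, which this sketch omits:

```go
package main

import (
	"fmt"
	"strings"
)

// strtrPairs sketches only the two-string form of Strtr shown above:
// each rune of from maps to the rune of to at the same index.
// Hypothetical helper, not the php2go implementation.
func strtrPairs(s, from, to string) string {
	m := make(map[rune]rune)
	fr, tr := []rune(from), []rune(to)
	for i := range fr {
		if i < len(tr) {
			m[fr[i]] = tr[i]
		}
	}
	// Replace mapped runes, leaving everything else untouched.
	return strings.Map(func(r rune) rune {
		if out, ok := m[r]; ok {
			return out
		}
		return r
	}, s)
}

func main() {
	fmt.Println(strtrPairs("baab", "ab", "01")) // 1001
}
```

This matches the documented example: Strtr("baab", "ab", "01") yields "1001" because a => 0 and b => 1.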
#### func [Ternary](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L2311) [¶](#Ternary) ``` func Ternary(condition [bool](/builtin#bool), trueVal, falseVal interface{}) interface{} ``` Ternary Ternary expression max := Ternary(a > b, a, b).(int) #### func [Time](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L41) [¶](#Time) ``` func Time() [int64](/builtin#int64) ``` Time time() #### func [Touch](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1643) [¶](#Touch) ``` func Touch(filename [string](/builtin#string)) ([bool](/builtin#bool), [error](/builtin#error)) ``` Touch touch() #### func [Trim](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L645) [¶](#Trim) ``` func Trim(str [string](/builtin#string), characterMask ...[string](/builtin#string)) [string](/builtin#string) ``` Trim trim() #### func [URLDecode](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1051) [¶](#URLDecode) ``` func URLDecode(str [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error)) ``` URLDecode urldecode() #### func [URLEncode](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1046) [¶](#URLEncode) ``` func URLEncode(str [string](/builtin#string)) [string](/builtin#string) ``` URLEncode urlencode() #### func [Ucfirst](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L189) [¶](#Ucfirst) ``` func Ucfirst(str [string](/builtin#string)) [string](/builtin#string) ``` Ucfirst ucfirst() #### func [Ucwords](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L207) [¶](#Ucwords) ``` func Ucwords(str [string](/builtin#string)) [string](/builtin#string) ``` Ucwords ucwords() #### func [Umask](https://github.com/syyongx/php2go/blob/v0.9.8/php_unix.go#L12) [¶](#Umask) ``` func Umask(mask [int](/builtin#int)) [int](/builtin#int) ``` Umask umask() #### func [Uniqid](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L2030) [¶](#Uniqid) ``` func Uniqid(prefix [string](/builtin#string)) [string](/builtin#string) ``` Uniqid uniqid() #### func 
[Unlink](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L1589) [¶](#Unlink) ``` func Unlink(filename [string](/builtin#string)) [error](/builtin#error) ``` Unlink unlink() #### func [Unpack](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L2298) [¶](#Unpack) ``` func Unpack(order [binary](/encoding/binary).[ByteOrder](/encoding/binary#ByteOrder), data [string](/builtin#string)) (interface{}, [error](/builtin#error)) ``` Unpack unpack() #### func [Usleep](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L94) [¶](#Usleep) ``` func Usleep(t [int64](/builtin#int64)) ``` Usleep usleep() #### func [VersionCompare](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L2084) [¶](#VersionCompare) ``` func VersionCompare(version1, version2, operator [string](/builtin#string)) [bool](/builtin#bool) ``` VersionCompare version_compare() The possible operators are: <, lt, <=, le, >, gt, >=, ge, ==, =, eq, !=, <>, ne. Special version strings are handled in the following order: (any string not found) < dev < alpha = a < beta = b < RC = rc < # < pl = p Usage: VersionCompare("1.2.3-alpha", "1.2.3RC7", ">=") VersionCompare("1.2.3-beta", "1.2.3pl", "lt") VersionCompare("1.1_dev", "1.2any", "eq") #### func [Wordwrap](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L491) [¶](#Wordwrap) ``` func Wordwrap(str [string](/builtin#string), width [uint](/builtin#uint), br [string](/builtin#string), cut [bool](/builtin#bool)) [string](/builtin#string) ``` Wordwrap wordwrap() #### func [ZipOpen](https://github.com/syyongx/php2go/blob/v0.9.8/php.go#L2282) [¶](#ZipOpen) ``` func ZipOpen(filename [string](/builtin#string)) (*[zip](/archive/zip).[ReadCloser](/archive/zip#ReadCloser), [error](/builtin#error)) ``` ZipOpen zip_open() ### Types [¶](#pkg-types) This section is empty.
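The special-string ordering that VersionCompare documents can be sketched with a small comparator. This is an approximate, stdlib-only illustration — not the php2go algorithm: it tokenizes on separators and digit/letter boundaries, compares token classes in the order quoted above, and simplifies PHP's handling of unequal-length versions by treating missing trailing tokens as the number 0.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"unicode"
)

// tokens splits a version string on ./-/_/+ and on digit<->letter
// boundaries, so "1.2.3RC7" becomes ["1" "2" "3" "RC" "7"].
func tokens(v string) []string {
	var out []string
	var cur strings.Builder
	flush := func() {
		if cur.Len() > 0 {
			out = append(out, cur.String())
			cur.Reset()
		}
	}
	prevDigit := false
	for _, r := range v {
		if r == '.' || r == '-' || r == '_' || r == '+' {
			flush()
		} else {
			if cur.Len() > 0 && prevDigit != unicode.IsDigit(r) {
				flush()
			}
			cur.WriteRune(r)
		}
		prevDigit = unicode.IsDigit(r)
	}
	flush()
	return out
}

// rank assigns each token its class in the documented ordering:
// (any string not found) < dev < alpha = a < beta = b < RC = rc < # < pl = p.
func rank(tok string) int {
	switch strings.ToLower(tok) {
	case "dev":
		return 1
	case "alpha", "a":
		return 2
	case "beta", "b":
		return 3
	case "rc":
		return 4
	case "pl", "p":
		return 6
	}
	if _, err := strconv.Atoi(tok); err == nil {
		return 5 // "#": a plain number
	}
	return 0 // any string not found in the table
}

// versionCompare returns -1, 0 or 1.
func versionCompare(v1, v2 string) int {
	a, b := tokens(v1), tokens(v2)
	for i := 0; i < len(a) || i < len(b); i++ {
		ta, tb := "0", "0"
		if i < len(a) {
			ta = a[i]
		}
		if i < len(b) {
			tb = b[i]
		}
		ra, rb := rank(ta), rank(tb)
		if ra != rb {
			if ra < rb {
				return -1
			}
			return 1
		}
		if ra == 5 { // both numeric: compare as integers
			na, _ := strconv.Atoi(ta)
			nb, _ := strconv.Atoi(tb)
			if na != nb {
				if na < nb {
					return -1
				}
				return 1
			}
		}
	}
	return 0
}

func main() {
	fmt.Println(versionCompare("1.2.3-alpha", "1.2.3RC7")) // alpha sorts before RC
	fmt.Println(versionCompare("1.2.3-beta", "1.2.3pl"))   // beta sorts before pl
}
```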
synoptReg
cran
R
Package ‘synoptReg’ October 14, 2022 Type Package Title Synoptic Climate Classification and Spatial Regionalization of Environmental Data Version 1.2.1 Depends R (>= 3.5) Description Set of functions to compute different types of synoptic classification methods and for analysing their effect on environmental variables. More information about the methods used in Lemus-Canovas et al. 2019 <DOI:10.1016/j.atmosres.2019.01.018>, Martin-Vide et al. 2008 <DOI:10.5194/asr-2-99-2008>, Jenkinson and Collison 1977. License GPL (>= 3) Maintainer <NAME> <<EMAIL>> URL <https://lemuscanovas.github.io/synoptreg/> BugReports https://github.com/lemuscanovas/synoptReg/issues Encoding UTF-8 LazyData true Imports dplyr, ggplot2, lubridate, magrittr, sf, rnaturalearth, rnaturalearthdata, metR, raster, RNCEP, stringr, tidyr, tibble, kohonen Suggests maptools, ncdf4, rgeos, udunits2, gridExtra NeedsCompilation no RoxygenNote 7.1.1 Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-0925-3827>), <NAME> [ctb] (<https://orcid.org/0000-0002-5516-6396>) Repository CRAN Date/Publication 2021-04-21 23:20:02 UTC R topics documented: ct2env, download_ncep, get_lamb_points, lamb_clas, mslp, pca_decision, pcp, plot_lamb_scheme, raster_pca, regionalization, som_clas, synoptclas, tidy_nc, z500 ct2env Establishing the relationship between CT and an environmental variable Description This function applies the approach: "circulation types to environment". Usage ct2env(x, clas, fun = mean, out = "data.frame") Arguments x data.frame. A data.frame containing the environmental data (i.e. precipitation, temperature, PM10, etc.) with the following variables: lon, lat, time, value, anom_value. See tidy_nc. clas data.frame. A data.frame of the synoptic classification (time and WT) obtained from the synoptclas function. fun function. A function to be applied to the environmental variable for each WT. out character.
Choose between "data.frame" (default) or "raster". Value a data.frame or a Raster Stack containing the environmental grids based on the weather types. Examples # Load data (mslp or precp_grid) data(mslp) data(z500) # Tidying our atmospheric variables (500 hPa geopotential height # and mean sea level pressure) together. # Time subset between two dates atm_data1 <- tidy_nc(x = list(mslp,z500), name_vars = c("mslp","z500")) # S-mode classification smode_clas <- synoptclas(atm_data1, ncomp = 6) # ct2env (precipitation example) ct2env(x = pcp, clas = smode_clas$clas, fun = mean, out = "data.frame") download_ncep Download NCEP/NCAR data Description Weather Data from NCEP/NCAR Reanalysis via RNCEP package Usage download_ncep( var = "slp", level = "surface", month_range = c(1, 12), year_range = c(2010, 2017), lat_range = c(30, 60), lon_range = c(-30, 10), dailymean = TRUE, hour = NULL, reanalysis2 = TRUE, save_download = TRUE, file_name = NULL ) Arguments var slp ’sea level pressure’ (default) for more variables see help of ?NCEP.gather level surface (default) month_range min,max month c(1,12) (default) year_range min,max year c(2010,2017) (default) lat_range min,max latitude c(30, 60) (default) lon_range min,max longitude c(-30, 10) (default) dailymean daily average of the variable retrieved. Default TRUE. hour One hour of the following: 0, 6, 12 or 18. reanalysis2 Logical. Default TRUE. Variables are downloaded from the NCEP-DOE Reanalysis 2. If FALSE, data are downloaded from NCEP/NCAR Reanalysis 1. save_download Logical. Default TRUE. Do you want to save the downloaded data into an RDS file? file_name character. Provide a name for the file downloaded.
Value a data.frame with the following variables: lon, lat, time, value Examples ## Not run: #Daily mean air temperature 2m for 2017 #ta_data <- download_ncep(year_range=2017) #Air temperature 2m at 06:00 for 2017 #ta_data_h6 <- download_ncep(year_range=2017,dailymean = FALSE,hour=6) ## End(Not run) get_lamb_points Determine the 16 grid points for the Lamb classification Description Compute the 16 pairs of coordinates necessary for using the objective version of the Lamb method Usage get_lamb_points(x, y) Arguments x longitude coordinate of the central point of the scheme. y latitude coordinate of the central point of the scheme. Value a data.frame with the 16 points of coordinates. Examples points <- get_lamb_points(x = -5, y = 40) points lamb_clas Objective Lamb Weather Type Classification Description Calculates the classification of the main weather types for the 16 points defined in get_lamb_points. Wind-flow characteristics are computed for the daily pressure field according to the rules proposed by the original Jenkinson and Collison classification (see Jenkinson and Collison, 1977; Jones et al., 2013) (1), and to the rules proposed by Trigo and DaCamara, 2000 (2). Usage lamb_clas(points, mslp, U = FALSE, thr = c(6, 6)) Arguments points 16 pairs of coordinates obtained from get_lamb_points. mslp Mean Sea Level pressure gridded data. U Logical. If T, Jones et al. 2013 approach is applied, maintaining the U-type in the classification. If F, U is removed as detailed in Trigo and DaCamara, 2000. thr threshold used for Unclassified days (total shear vorticity and total flow, respectively). Default c(6,6). Value A list with: • A data.frame containing the dates and the weather types. • A data frame containing the gridded data grouped by circulation types. References <NAME>., <NAME> (1977) An initial climatology of gales over the North Sea Synoptic Climatology Branch Memorandum, No. 62. Meteorological Office: Bracknell, England. <NAME>., <NAME>., <NAME>.
(1993) A comparison of Lamb circulation types with an objective classification scheme Int. J. Climatol. 13: 655–663. <NAME>., <NAME>, <NAME>. (2013) Lamb weather types derived from Reanalysis products Int. J. Climatol. 33: 1129–1139. <NAME>., <NAME>. (2000) Circulation weather types and their impact on the precipitation regime in Portugal Int. J. Climatol. 20: 1559-1581. See Also get_lamb_points Examples data(mslp) points <- get_lamb_points(x = 5,y = 40) lamb_clas(points = points, mslp = mslp) mslp Mean Sea Level pressure data Description Data from the NCEP/NCAR Reanalysis 1 (https://psl.noaa.gov/data/gridded/data.ncep.reanalysis.html). This data corresponds to daily values of mean sea level pressure with 2.5 x 2.5º of spatial resolution from January 2000 to December 2002. Usage data(mslp) Format A data.frame with the following variables: lon,lat,time,value. geographical area: -10,30,30,60 time period: 2000-01-01 to 2002-12-31 units: Pascals References Kalnay et al. (1996) The NCEP/NCAR 40-year reanalysis project, Bull. Amer. Meteor. Soc., 77, 437-470, 1996 Examples data(mslp) pca_decision PCA decision Description pca_decision plots the explained variances against the number of the principal component. In addition, it returns all the information about the PCA performance. Usage pca_decision(x, ncomp = 30, norm = T, matrix_mode = "S-mode") Arguments x data.frame. A data.frame with the following variables: lon, lat, time, value, anom_value. See tidy_nc. ncomp integer. Number of principal components to show/retain norm logical. Default TRUE. norm = TRUE is recommended for classifying two or more variables. matrix_mode character. The mode of matrix to use. Choose between S-mode and T-mode Value a list with: • A list with class princomp containing all the results of the PCA • A data frame containing the main results of the ncomp selected (standard deviation, proportion of variance and cumulative variance).
• A ggplot2 object to visualize the scree test Note To perform the PCA the x must contain more rows than columns. In addition, x cannot contain NA values. See Also tidy_nc Examples # Load data (mslp or precp_grid) data(mslp) data(z500) # Tidying our atmospheric variables (500 hPa geopotential height # and mean sea level pressure) together. # Time subset between two dates atm_data1 <- tidy_nc(x = list(mslp,z500)) # Deciding on the number of PC to retain info <- pca_decision(atm_data1) pcp Daily precipitation grid of Balearic Islands (Spain) Description Data from the SPREAD data set downloaded from the Spanish National Research Council (CSIC) (http://spread.csic.es/info.html). This data corresponds to daily values of precipitation with a spatial resolution of 5 x 5 km from January 2000 to December 2010 Usage data(pcp) Format A data.frame with the following variables: lon,lat,time,value. geographical area: Balearic Islands time period: 2000-01-01 to 2010-12-31 units: mm*10 coordinates reference system: +proj=utm +zone=30 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs References Serrano-Notivoli et al. (2017) SPREAD: a high-resolution daily gridded precipitation dataset for Spain, an extreme events frequency and intensity overview. Earth Syst. Sci. Data, 9, 721-738, 2017, https://doi.org/10.5194/essd-9-721-2017 Examples data(pcp) plot_lamb_scheme Plot Lamb Scheme Description Visualize the Lamb Scheme Usage plot_lamb_scheme(points) Arguments points points obtained from the get_lamb_points function. Value a ggplot map. Examples points <- get_lamb_points(x = -5, y = 40) plot_lamb_scheme(points) raster_pca Raster PCA Description Perform a Principal Component Analysis on a RasterStack Usage raster_pca(raststack, aggregate = 0, focal = 0) Arguments raststack Raster Stack. aggregate Integer. Aggregation factor based on function aggregate of raster package. focal Integer. Smooth filter based on function focal of raster package.
Value a list with: • A raster stack containing the results of the PCA • A data frame containing the main results of the PCA (standard deviation, proportion of variance and cumulative variance) regionalization Environmental regionalization Description Perform an unsupervised clustering of the Raster Stack Usage regionalization(raststack, centers, iter.max = 100, nstart = 100) Arguments raststack Raster Stack. centers Integer. Number of clusters. iter.max Integer. The maximum number of iterations allowed. Default 100. nstart Integer. How many random sets should be chosen? Default 100. Value a list with: • A raster with the final regionalization • A list with the results of the K-means performance • A raster displaying a pseudo-MAE error based on the difference between each pixel value and its respective centroid • A numeric pseudo-MAE mean value for the entire map som_clas Self-Organizing Maps classification Description som_clas allows to perform a SOM synoptic classification Usage som_clas( x, xdim, ydim, iter = 2000, alpha = c(0.05, 0.01), dist.fcts = "euclidean", mode = "online", cores = 1, norm = T ) Arguments x data.frame. A data.frame with the following variables: lon, lat, time, value, anom_value. See tidy_nc. xdim Integer. X dimension of the grid. See somgrid from kohonen package. ydim Integer. Y dimension of the grid. See somgrid from kohonen package. iter integer. Number of iterations. alpha vector. learning rate. See som from kohonen package for details. dist.fcts character. vector of distance functions to be used for the individual data layers. See som from kohonen package for details. mode character. Type of learning algorithm. Default "online". See kohonen package for details. cores Integer. Parallel processing only available for "pbatch" algorithm. norm logical. Default TRUE. norm = TRUE is recommended for classifying two or more variables. Value A list with: • A data.frame containing the dates and the weather types.
• A data frame containing the gridded data grouped by circulation types.
• An object of class kohonen with all the components returned by the function som

References
<NAME>. and Buydens, L. (2007) Self- and Super-organizing Maps in R: The kohonen Package. Journal of Statistical Software, 21(5), 1 - 19.

See Also
tidy_nc

Examples
# Load data
data(z500)
# Tidying our atmospheric variables (500 hPa geopotential height).
z500_tidy <- tidy_nc(x = list(z500),
                     name_vars = c("z500"))
# SOM classification
som_cl <- som_clas(z500_tidy,
                   xdim = 4,
                   ydim = 4,
                   iter = 200)

synoptclas              PCA Synoptic classification

Description
synoptclas allows you to perform several types of synoptic classification approaches based on one or several atmospheric variables (e.g. mean sea level pressure, geopotential height at 500 hPa, etc.).

Usage
synoptclas(x, ncomp, norm = T, matrix_mode = "S-mode", extreme_scores = 2)

Arguments
x    data.frame. A data.frame with the following variables: lon, lat, time, value, anom_value. See tidy_nc.
ncomp    Integer. Number of components to be retained.
norm    logical. Default TRUE. norm = TRUE is recommended for classifying two or more variables.
matrix_mode    character. The mode of matrix to use. Choose between "S-mode" and "T-mode".
extreme_scores    Integer. Definition of the extreme score threshold (Esteban et al., 2005). Default is 2. Only applicable for matrix_mode = "S-mode".

Details
The matrix_mode argument allows the user to conduct different types of synoptic classifications depending on the objective. To perform a synoptic classification of a long and continuous series, set matrix_mode = "S-mode". When the PCA is applied to a matrix in S-mode, the variables are the grid points (lon, lat) and the observations are the days (time series), so the linear relationships that the PCA establishes are between the time series of the grid points.
One of the results obtained from the PCA is the "scores", which indicate the degree of representativeness of each day for each of the principal components. However, the scores do not allow us to directly obtain the weather type (WT) classification, since one day can be represented by several principal components. For this reason, a clustering method is required to assign each day to a specific WT based on the multivariate coordinates provided by the "scores". Before using a clustering method, a VARIMAX rotation is performed on the retained principal components, with the aim of redistributing the variance of these components. With the rotated components, the scores are used to apply the extreme scores method (Esteban et al., 2005). The scores show the degree of representativeness associated with the variation modes of each principal component, i.e., the classification of each day to its most representative centroid. Thus, the extreme scores method uses the scores > 2 and < -2, establishing a positive and a negative phase for each principal component. The extreme scores procedure establishes the number of groups and their centroids in order to apply the K-means method without iterations. Conversely, to perform a synoptic classification of specific events (e.g. flood events, extreme temperature events, etc.), set matrix_mode = "T-mode". In this case, the variables are the days (time series) and the observations are the grid points. The relationships established in this case are between each daily gridded map. For this reason, the eigenvalues (correlations) allow us to associate each day with a WT without using a clustering method, as in the case of the S-mode matrix.

Value
A list with:
• A data.frame containing the dates and the weather types. If "T-mode" is selected, two classifications are returned (absolute and positive/negative classification).
• A data frame containing the gridded data grouped by circulation types. If "T-mode" is selected, 3 classifications are returned (absolute correlation, maximum positive correlation, and positive/negative classification). In addition, p-values of a t-test computed on the anomalies, comparing them to 0 with conf.level = 0.95, are returned.

References
<NAME>., <NAME>., Martín-Vide, J. Atmospheric circulation patterns related to heavy snowfall days in Andorra, Pyrenees. Int. J. Climatol. 25: 319-329. doi:10.1002/joc.1103

See Also
pca_decision

Examples
# Load data (mslp or precp_grid)
data(mslp)
data(z500)
# Tidying our atmospheric variables (500 hPa geopotential height
# and mean sea level pressure) together.
atm_data1 <- tidy_nc(x = list(mslp,z500),
                     name_vars = c("mslp","z500"))
# S-mode classification
smode_cl <- synoptclas(atm_data1, ncomp = 6)
# Time subset using a vector of dates of interest
dates_int <- c("2000-01-25","2000-04-01","2000-07-14","2001-05-08","2002-12-20")
atm_data2 <- tidy_nc(x = list(mslp,z500),
                     time_subset = dates_int,
                     name_vars = c("mslp","z500"))
# T-mode classification
tmode_cl <- synoptclas(atm_data2, ncomp = 2, matrix_mode = "T-mode")

tidy_nc                 Set the time period and the geographical extension, as well as compute the anomaly of the atmospheric variable/s

Description
This function allows you to subset the time series and geographical area of your atmospheric variable. In addition, even if no argument is given, the anomaly of the atmospheric variable/s will be computed. The anomaly value is provided in order to facilitate the visualization of the results after using the synoptclas function. It is mandatory to pass the data through tidy_nc even if you do not want to change the time period or the geographical extension.

Usage
tidy_nc(
  x,
  time_subset = NULL,
  geo_subset = NULL,
  monthly_subset = NULL,
  name_vars = NULL
)

Arguments
x    data.frame. A data.frame with the following variables: lon, lat, time, value. The same structure returned when using download_ncep.
time_subset    vector. Starting and ending date, or a vector of dates of interest.
geo_subset    vector. A vector providing the xmin, xmax, ymin, ymax.
monthly_subset    an integer or a vector of integers. Number of the month/s desired.
name_vars    character or a vector of characters. Name of the atmospheric variable/s. If the name is not specified, they will be coded as integers.

Value
A data.frame with the following variables: lon, lat, time, value, anom_value

See Also
download_ncep

Examples
# Load data (mslp or precp_grid)
data(mslp)
data(z500)
# Tidying our atmospheric variables (500 hPa geopotential height
# and mean sea level pressure) together.
# Time subset between two dates
atm_data1 <- tidy_nc(x = list(mslp,z500),
                     time_subset = c("2000-05-01","2001-04-30"))
# Time subset using a vector of dates of interest. Including a geographical crop
dates_int <- c("2000-01-25","2000-04-01","2000-07-14","2001-05-08","2002-12-20")
atm_data1 <- tidy_nc(x = list(mslp,z500),
                     time_subset = dates_int,
                     geo_subset = c(-20,10,30,50),
                     name_vars = c("mslp","z500")) # following the list sequence

z500                    500 hPa Geopotential Height

Description
Data from the NCEP/NCAR Reanalysis 1 (https://psl.noaa.gov/data/gridded/data.ncep.reanalysis.html). This data corresponds to global daily values of 500 hPa geopotential height with a spatial resolution of 2.5 x 2.5° from January 2000 to December 2002.

Usage
data(z500)

Format
A data.frame with the following variables: lon, lat, time, value.
geographical area: -10,30,30,60
time period: 2000-01-01 to 2002-12-31
units: meters

References
Kalnay et al., The NCEP/NCAR 40-year reanalysis project, Bull. Amer. Meteor. Soc., 77, 437-470, 1996

Examples
data(z500)
gopkg.in/sarulabs/di.v2
README [¶](#section-readme) --- ![DI](https://raw.githubusercontent.com/sarulabs/assets/master/di/logo.png) Dependency injection framework for go programs (golang). DI handles the life cycle of the objects in your application. It creates them when they are needed, resolves their dependencies and closes them properly when they are no longer used. If you do not know if DI could help improve your application, learn more about dependency injection and dependency injection containers: * [What is a dependency injection container and why use one ?](https://www.sarulabs.com/post/2/2018-06-12/what-is-a-dependency-injection-container-and-why-use-one.html) There is also an [Examples](#readme-examples) section at the end of the documentation. DI is focused on performance. It does not rely on reflection. ### Table of contents [![Build Status](https://travis-ci.org/sarulabs/di.svg?branch=master)](https://travis-ci.org/sarulabs/di) [![GoDoc](https://godoc.org/github.com/sarulabs/di?status.svg)](http://godoc.org/github.com/sarulabs/di) [![Test Coverage](https://api.codeclimate.com/v1/badges/5af97cbfd6e4fe7257e3/test_coverage)](https://codeclimate.com/github/sarulabs/di/test_coverage) [![Maintainability](https://api.codeclimate.com/v1/badges/5af97cbfd6e4fe7257e3/maintainability)](https://codeclimate.com/github/sarulabs/di/maintainability) [![codebeat](https://codebeat.co/badges/d6095401-7dcf-4f63-ab75-7fac5c6aa898)](https://codebeat.co/projects/github-com-sarulabs-di) [![goreport](https://goreportcard.com/badge/github.com/sarulabs/di)](https://goreportcard.com/report/github.com/sarulabs/di) * [Basic usage](#readme-basic-usage) + [Object definition](#readme-object-definition) + [Object retrieval](#readme-object-retrieval) + [Definitions and dependencies](#readme-definitions-and-dependencies) * [Scopes](#readme-scopes) + [The principle](#readme-the-principle) + [Scopes in practice](#readme-scopes-in-practice) + [Scopes and dependencies](#readme-scopes-and-dependencies) * [Container
deletion](#readme-container-deletion) * [Methods to retrieve an object](#readme-methods-to-retrieve-an-object) + [Get](#readme-get) + [SafeGet](#readme-safeget) + [Fill](#readme-fill) * [Unscoped retrieval](#readme-unscoped-retrieval) * [Panic in Build and Close functions](#readme-panic-in-build-and-close-functions) * [HTTP helpers](#readme-http-helpers) * [Examples](#readme-examples) * [Migration from v1](#readme-migration-from-v1) ### Basic usage #### Object definition A Definition contains at least the `Name` of the object and a `Build` function to create the object. ``` di.Def{ Name: "my-object", Build: func(ctn di.Container) (interface{}, error) { return &MyObject{}, nil }, } ``` The definition can be added to a Builder with the `Add` method: ``` builder, _ := di.NewBuilder() builder.Add(di.Def{ Name: "my-object", Build: func(ctn di.Container) (interface{}, error) { return &MyObject{}, nil }, }) ``` #### Object retrieval Once the definitions have been added to a Builder, the Builder can generate a `Container`. This Container will provide the objects defined in the Builder. ``` ctn := builder.Build() // create the container obj := ctn.Get("my-object").(*MyObject) // retrieve the object ``` The `Get` method returns an `interface{}`. You need to cast the interface before using the object. The objects are stored as singletons in the Container. You will retrieve the exact same object every time you call the `Get` method on the same Container. The `Build` function will only be called once. #### Definitions and dependencies The `Build` function can also use the `Get` method of the Container. That allows you to build objects that depend on other objects defined in the Container. ``` di.Def{ Name: "object-with-dependency", Build: func(ctn di.Container) (interface{}, error) { return &MyObjectWithDependency{ Object: ctn.Get("my-object").(*MyObject), }, nil }, } ``` You can not create a cycle in the definitions (A needs B and B needs A).
If that happens, an error will be returned at the time of the creation of the object. ### Scopes #### The principle Definitions can also have a scope. Scopes can be useful in request-based applications, like a web application. ``` di.Def{ Name: "my-object", Scope: di.Request, Build: func(ctn di.Container) (interface{}, error) { return &MyObject{}, nil }, } ``` The available scopes are defined when the Builder is created: ``` builder, err := di.NewBuilder(di.App, di.Request) ``` Scopes are defined from the more generic to the more specific (e.g. `App` ≻ `Request` ≻ `SubRequest`). If no scope is given to `NewBuilder`, the Builder is created with the three default scopes: `di.App`, `di.Request` and `di.SubRequest`. These scopes should be enough almost all the time. The containers belong to one of these scopes. A container may have a parent in a more generic scope and children in a more specific scope. The Builder generates a Container in the most generic scope. Then the Container can generate children in the next scope thanks to the `SubContainer` method. A container is only able to build objects defined in its own scope, but it can retrieve objects in a more generic scope thanks to its parent. For example, a `Request` container can retrieve an `App` object, but an `App` container can not retrieve a `Request` object. If a Definition does not have a scope, the most generic scope will be used. #### Scopes in practice ``` // Create a Builder with the default scopes (App, Request, SubRequest). builder, _ := di.NewBuilder() // Define an object in the App scope. builder.Add(di.Def{ Name: "app-object", Scope: di.App, // this line is optional, di.App is the default scope Build: func(ctn di.Container) (interface{}, error) { return &MyObject{}, nil }, }) // Define an object in the Request scope.
builder.Add(di.Def{ Name: "request-object", Scope: di.Request, Build: func(ctn di.Container) (interface{}, error) { return &MyObject{}, nil }, }) // Build creates a Container in the most generic scope (App). app := builder.Build() // The App Container can create sub-containers in the Request scope. req1, _ := app.SubContainer() req2, _ := app.SubContainer() // app-object can be retrieved from the three containers. // The retrieved objects are the same: o1 == o2 == o3. // The object is stored in app. o1 := app.Get("app-object").(*MyObject) o2 := req1.Get("app-object").(*MyObject) o3 := req2.Get("app-object").(*MyObject) // request-object can only be retrieved from req1 and req2. // The retrieved objects are not the same: o4 != o5. // o4 is stored in req1, and o5 is stored in req2. o4 := req1.Get("request-object").(*MyObject) o5 := req2.Get("request-object").(*MyObject) ``` More graphically, the containers could be represented like this: ![](https://raw.githubusercontent.com/sarulabs/assets/master/di/scopes.jpg) The `App` container can only get the `App` object. A `Request` container or a `SubRequest` container can get either the `App` object or the `Request` object, possibly by using their parent. The objects are built and stored in containers that have the same scope. They are only created when they are requested. #### Scopes and dependencies If an object depends on other objects defined in the container, the scopes of the dependencies must be either equal or more generic compared to the object scope. For example the following definitions are not valid: ``` di.Def{ Name: "request-object", Scope: di.Request, Build: func(ctn di.Container) (interface{}, error) { return &MyObject{}, nil }, } di.Def{ Name: "object-with-dependency", Scope: di.App, // NOT ALLOWED !!! 
should be di.Request or di.SubRequest Build: func(ctn di.Container) (interface{}, error) { return &MyObjectWithDependency{ Object: ctn.Get("request-object").(*MyObject), }, nil }, } ``` ### Container deletion A definition can also have a `Close` function. ``` di.Def{ Name: "my-object", Scope: di.App, Build: func(ctn di.Container) (interface{}, error) { return &MyObject{}, nil }, Close: func(obj interface{}) error { // assuming that MyObject has a Close method that returns an error return obj.(*MyObject).Close() }, } ``` This function is called when the `Delete` method is called on a Container. ``` // Create the Container. app := builder.Build() // Retrieve an object. obj := app.Get("my-object").(*MyObject) // Delete the Container, the Close function will be called on obj. app.Delete() ``` Delete closes all the objects stored in the Container. Once a Container has been deleted, it becomes unusable. It is important to always use `Delete` even if the object definitions do not have a `Close` function. It allows you to free the memory taken by the Container. There are actually two delete methods: `Delete` and `DeleteWithSubContainers`. `DeleteWithSubContainers` deletes the children of the Container and then the Container. It does this right away. `Delete` is a softer approach. It does not delete the children of the Container. Actually it does not delete the Container as long as it still has a child alive. So you have to call `Delete` on all the children. The parent Container will be deleted when `Delete` is called on the last child. You probably want to use `Delete` and close the children manually. `DeleteWithSubContainers` can cause errors if the parent is deleted while its children are still used. ### Methods to retrieve an object When a container is asked to retrieve an object, it starts by checking if the object has already been created. If it has, the container returns the already built instance of the object.
Otherwise it uses the Build function of the associated definition to create the object. It returns the object, but also keeps a reference to be able to return the same instance if the object is requested again. A container can only build objects defined in the same scope. If the container is asked to retrieve an object that belongs to a different scope, it forwards the request to its parent. There are three methods to retrieve an object: `Get`, `SafeGet` and `Fill`. #### Get `Get` returns an interface that can be cast afterwards. If the object can not be created, the `Get` function panics. ``` obj := ctn.Get("my-object").(*MyObject) ``` #### SafeGet `Get` is an easy way to retrieve an object. The problem is that it can panic. If it is a problem for you, you can use `SafeGet`. Instead of panicking, it returns an error. ``` objectInterface, err := ctn.SafeGet("my-object") object, ok := objectInterface.(*MyObject) ``` #### Fill The third and last method to retrieve an object is `Fill`. Like `SafeGet`, it returns an error if something goes wrong, but it may be more practical in some situations. It uses reflection to fill the given object. Using reflection makes it slower than `SafeGet`. ``` var object *MyObject err := ctn.Fill("my-object", &object) ``` ### Unscoped retrieval The previous methods can retrieve an object defined in the same scope or a more generic one. If you need an object defined in a more specific scope, you need to create a sub-container to retrieve it. For example, an `App` container can not create a `Request` object. A `Request` container should be created to retrieve the `Request` object. It is logical but not always very practical. `UnscopedGet`, `UnscopedSafeGet` and `UnscopedFill` work like `Get`, `SafeGet` and `Fill` but can retrieve objects defined in a more specific scope. To do so, they generate sub-containers that can only be accessed internally by these three methods.
To remove these containers without deleting the current container, you can call the `Clean` method. ``` builder, _ := di.NewBuilder() builder.Add(di.Def{ Name: "request-object", Scope: di.Request, Build: func(ctn di.Container) (interface{}, error) { return &MyObject{}, nil }, Close: func(obj interface{}) error { return obj.(*MyObject).Close() }, }) app := builder.Build() // app can retrieve a request-object with unscoped methods. obj := app.UnscopedGet("request-object").(*MyObject) // Once the objects created with unscoped methods are no longer used, // you can call the Clean method. In this case, the Close function // will be called on the object. app.Clean() ``` ### Panic in Build and Close functions Panics in `Build` and `Close` functions of a definition are recovered and converted into errors. In particular that allows you to use the `Get` method in a `Build` function. ### HTTP helpers DI includes some elements to ease its integration in a web application. The `HTTPMiddleware` function can be used to inject a container in an `http.Request`. ``` // create an App container builder, _ := NewBuilder() builder.Add(/* some definitions */) app := builder.Build() handlerWithDiMiddleware := di.HTTPMiddleware(handler, app, func(msg string) { logger.Error(msg) // use your own logger here, it is used to log container deletion errors }) ``` For each `http.Request`, a sub-container of the `app` container is created. It is deleted at the end of the http request. The container can be used in the handler: ``` handler := func(w http.ResponseWriter, r *http.Request) { // retrieve the Request container with the C function ctn := di.C(r) obj := ctn.Get("object").(*MyObject) // there is a shortcut to do that obj := di.Get(r, "object").(*MyObject) } ``` The handler and the middleware can panic. Do not forget to use another middleware to recover from the panic and log the errors. 
### Examples The [sarulabs/di-example](https://github.com/sarulabs/di-example) repository is a good example to understand how DI can be used in a web application. More explanations about this repository can be found in this blog post: * [How to write a REST API in Go with DI](https://www.sarulabs.com/post/3/2018-08-02/how-to-write-a-rest-api-in-go-with-di.html) If you do not have time to check this repository, here is a shorter example that does not use the HTTP helpers. It does not handle the errors to be more concise. ``` package main import ( "context" "database/sql" "net/http" "github.com/sarulabs/di" _ "github.com/go-sql-driver/mysql" ) func main() { app := createApp() defer app.Delete() http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) { // Create a request and delete it once it has been handled. // Deleting the request will close the connection. request, _ := app.SubContainer() defer request.Delete() handler(w, r, request) }) http.ListenAndServe(":8080", nil) } func createApp() di.Container { builder, _ := di.NewBuilder() builder.Add([]di.Def{ { // Define the connection pool in the App scope. // There will be one for the whole application. Name: "mysql-pool", Scope: di.App, Build: func(ctn di.Container) (interface{}, error) { db, err := sql.Open("mysql", "user:password@/") db.SetMaxOpenConns(1) return db, err }, Close: func(obj interface{}) error { return obj.(*sql.DB).Close() }, }, { // Define the connection in the Request scope. // Each request will use its own connection. Name: "mysql", Scope: di.Request, Build: func(ctn di.Container) (interface{}, error) { pool := ctn.Get("mysql-pool").(*sql.DB) return pool.Conn(context.Background()) }, Close: func(obj interface{}) error { return obj.(*sql.Conn).Close() }, }, }...) // Returns the app Container. return builder.Build() } func handler(w http.ResponseWriter, r *http.Request, ctn di.Container) { // Retrieve the connection. 
conn := ctn.Get("mysql").(*sql.Conn) var variable, value string row := conn.QueryRowContext(context.Background(), "SHOW STATUS WHERE `variable_name` = 'Threads_connected'") row.Scan(&variable, &value) // Display how many connections are opened. // As the connection is closed when the request is deleted, // the value should not be higher than the number set with db.SetMaxOpenConns(1). w.Write([]byte(variable + ": " + value)) } ``` ### Migration from v1 DI `v2` improves error handling. It should also be faster. Migrating to `v2` is highly recommended and should not be too difficult. There should not be any more changes in the API for a long time. ##### Renamed elements Some elements have been renamed. A `Context` is now a `Container`. The Context methods `SubContext`, `NastySafeGet`, `NastyGet`, `NastyFill` have been renamed. Their new names are `SubContainer`, `UnscopedSafeGet`, `UnscopedGet`, and `UnscopedFill`. `Definition` is now `Def`. The `AddDefinition` method of the Builder is now `Add` and can take more than one definition at a time. Definition `Tags` have been removed. ##### Errors The `Close` function in a definition now returns an `error`. The Container methods `Clean`, `Delete` and `DeleteWithSubContainers` also return an `error`. ##### Get The `Get` method used to return `nil` if it could not retrieve the object. Now it panics with the error. ##### Logger The `Logger` does not exist anymore. The errors are now directly handled by the retrieval functions. ``` // remove this line if you have it builder.Logger = ... ``` ##### Builder.Set The `Set` method of the builder does not exist anymore. You should use the `Add` method and a `Def`.
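To close the README part: the deferred deletion described in the Container deletion section (a parent on which `Delete` was called is only closed once its last child has been deleted) can be pictured as reference counting. The following is a minimal, self-contained sketch of that bookkeeping; the `node` type and its methods are illustrative and not part of the di package.

```go
package main

import "fmt"

// node models the parent/child bookkeeping described in the
// "Container deletion" section: a parent flagged for deletion is
// only closed once its last child has been deleted.
// Conceptual sketch only, not di's actual implementation.
type node struct {
	parent   *node
	children int
	pending  bool // delete was requested while children were still alive
	closed   bool
}

func (n *node) newChild() *node {
	n.children++
	return &node{parent: n}
}

func (n *node) delete() {
	if n.children > 0 {
		n.pending = true // defer deletion until the last child is gone
		return
	}
	n.closed = true
	if p := n.parent; p != nil {
		p.children--
		if p.pending && p.children == 0 {
			p.delete() // the last child releases the pending parent
		}
	}
}

func main() {
	app := &node{}
	req1 := app.newChild()
	req2 := app.newChild()

	app.delete()            // deferred: req1 and req2 are still alive
	fmt.Println(app.closed) // false
	req1.delete()
	req2.delete()           // last child: app is now closed too
	fmt.Println(app.closed) // true
}
```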
Documentation [¶](#section-documentation) --- ### Index [¶](#pkg-index) * [Constants](#pkg-constants) * [Variables](#pkg-variables) * [func Get(i interface{}, name string) interface{}](#Get) * [func HTTPMiddleware(h http.HandlerFunc, app Container, logFunc func(msg string)) http.HandlerFunc](#HTTPMiddleware) * [type Builder](#Builder) * + [func NewBuilder(scopes ...string) (*Builder, error)](#NewBuilder) * + [func (b *Builder) Add(defs ...Def) error](#Builder.Add) + [func (b *Builder) Build() Container](#Builder.Build) + [func (b *Builder) Definitions() DefMap](#Builder.Definitions) + [func (b *Builder) IsDefined(name string) bool](#Builder.IsDefined) + [func (b *Builder) Scopes() ScopeList](#Builder.Scopes) * [type Container](#Container) * [type ContainerKey](#ContainerKey) * [type Def](#Def) * [type DefMap](#DefMap) * + [func (m DefMap) Copy() DefMap](#DefMap.Copy) * [type ScopeList](#ScopeList) * + [func (l ScopeList) Contains(scope string) bool](#ScopeList.Contains) + [func (l ScopeList) Copy() ScopeList](#ScopeList.Copy) + [func (l ScopeList) ParentScopes(scope string) ScopeList](#ScopeList.ParentScopes) + [func (l ScopeList) SubScopes(scope string) ScopeList](#ScopeList.SubScopes) ### Constants [¶](#pkg-constants) ``` const App = "app" ``` App is the name of the application scope. ``` const Request = "request" ``` Request is the name of the request scope. ``` const SubRequest = "subrequest" ``` SubRequest is the name of the subrequest scope. ### Variables [¶](#pkg-variables) ``` var C = func(i interface{}) [Container](#Container) { if c, ok := i.([Container](#Container)); ok { return c } r, ok := i.(*[http](/net/http).[Request](/net/http#Request)) if !ok { [panic](/builtin#panic)("could not get the container with C()") } c, ok := r.Context().Value([ContainerKey](#ContainerKey)("di")).([Container](#Container)) if !ok { [panic](/builtin#panic)("could not get the container from the given *http.Request") } return c } ``` C retrieves a Container from an interface. 
The function panics if the Container can not be retrieved. The interface can be: * a Container * an *http.Request containing a Container in its context.Context for the ContainerKey("di") key. The function can be changed to match the needs of your application. ### Functions [¶](#pkg-functions) #### func [Get](https://github.com/sarulabs/di/blob/v2.0.0/http.go#L72) [¶](#Get) ``` func Get(i interface{}, name [string](/builtin#string)) interface{} ``` Get is a shortcut for C(i).Get(name). #### func [HTTPMiddleware](https://github.com/sarulabs/di/blob/v2.0.0/http.go#L23) [¶](#HTTPMiddleware) ``` func HTTPMiddleware(h [http](/net/http).[HandlerFunc](/net/http#HandlerFunc), app [Container](#Container), logFunc func(msg [string](/builtin#string))) [http](/net/http).[HandlerFunc](/net/http#HandlerFunc) ``` HTTPMiddleware adds a container in the request context. The container injected in each request is a new sub-container of the app container given as a parameter. It can panic, so it should be used with another middleware to recover from the panic, and to log the error. It uses logFunc, a function that logs the errors occurring during the deletion of the container. ### Types [¶](#pkg-types) #### type [Builder](https://github.com/sarulabs/di/blob/v2.0.0/builder.go#L12) [¶](#Builder) ``` type Builder struct { // contains filtered or unexported fields } ``` Builder can be used to create a Container. The Builder should be created with NewBuilder. Then you can add definitions with the Add method, and finally build the Container with the Build method. #### func [NewBuilder](https://github.com/sarulabs/di/blob/v2.0.0/builder.go#L23) [¶](#NewBuilder) ``` func NewBuilder(scopes ...[string](/builtin#string)) (*[Builder](#Builder), [error](/builtin#error)) ``` NewBuilder is the only way to create a working Builder. It initializes a Builder with a list of scopes. The scopes are ordered from the most generic to the most specific.
If no scope is provided, the default scopes are used: [App, Request, SubRequest] It can return an error if the scopes are not valid. #### func (*Builder) [Add](https://github.com/sarulabs/di/blob/v2.0.0/builder.go#L75) [¶](#Builder.Add) ``` func (b *[Builder](#Builder)) Add(defs ...[Def](#Def)) [error](/builtin#error) ``` Add adds one or more definitions in the Builder. It returns an error if a definition can not be added. #### func (*Builder) [Build](https://github.com/sarulabs/di/blob/v2.0.0/builder.go#L119) [¶](#Builder.Build) ``` func (b *[Builder](#Builder)) Build() [Container](#Container) ``` Build creates a Container in the most generic scope with all the definitions registered in the Builder. #### func (*Builder) [Definitions](https://github.com/sarulabs/di/blob/v2.0.0/builder.go#L63) [¶](#Builder.Definitions) ``` func (b *[Builder](#Builder)) Definitions() [DefMap](#DefMap) ``` Definitions returns a map with all the object definitions registered with the Add method. The key of the map is the name of the Definition. #### func (*Builder) [IsDefined](https://github.com/sarulabs/di/blob/v2.0.0/builder.go#L68) [¶](#Builder.IsDefined) ``` func (b *[Builder](#Builder)) IsDefined(name [string](/builtin#string)) [bool](/builtin#bool) ``` IsDefined returns true if there is a definition with the given name. #### func (*Builder) [Scopes](https://github.com/sarulabs/di/blob/v2.0.0/builder.go#L56) [¶](#Builder.Scopes) ``` func (b *[Builder](#Builder)) Scopes() [ScopeList](#ScopeList) ``` Scopes returns the list of available scopes. #### type [Container](https://github.com/sarulabs/di/blob/v2.0.0/containerInterface.go#L12) [¶](#Container) ``` type Container interface { // Definitions returns the map of the available definitions ordered by name. // These definitions represent all the objects that this Container can build. Definitions() map[[string](/builtin#string)][Def](#Def) // Scope returns the Container scope.
Scope() [string](/builtin#string) // Scopes returns the list of available scopes. Scopes() [][string](/builtin#string) // ParentScopes returns the list of scopes that are more generic than the Container scope. ParentScopes() [][string](/builtin#string) // SubScopes returns the list of scopes that are more specific than the Container scope. SubScopes() [][string](/builtin#string) // Parent returns the parent Container. Parent() [Container](#Container) // SubContainer creates a new Container in the next sub-scope // that will have this Container as parent. SubContainer() ([Container](#Container), [error](/builtin#error)) // SafeGet retrieves an object from the Container. // The object has to belong to this scope or a more generic one. // If the object does not already exist, it is created and saved in the Container. // If the object can not be created, it returns an error. SafeGet(name [string](/builtin#string)) (interface{}, [error](/builtin#error)) // Get is similar to SafeGet but it does not return the error. // Instead it panics. Get(name [string](/builtin#string)) interface{} // Fill is similar to SafeGet but it does not return the object. // Instead it fills the provided object with the value returned by SafeGet. // The provided object must be a pointer to the value returned by SafeGet. Fill(name [string](/builtin#string), dst interface{}) [error](/builtin#error) // UnscopedSafeGet retrieves an object from the Container, like SafeGet. // The difference is that the object can be retrieved // even if it belongs to a more specific scope. // To do so, UnscopedSafeGet creates a sub-container. // When the created object is no longer needed, // it is important to use the Clean method to delete this sub-container. UnscopedSafeGet(name [string](/builtin#string)) (interface{}, [error](/builtin#error)) // UnscopedGet is similar to UnscopedSafeGet but it does not return the error. // Instead it panics. 
UnscopedGet(name [string](/builtin#string)) interface{} // UnscopedFill is similar to UnscopedSafeGet but copies the object in dst instead of returning it. UnscopedFill(name [string](/builtin#string), dst interface{}) [error](/builtin#error) // Clean deletes the sub-container created by UnscopedSafeGet, UnscopedGet or UnscopedFill. Clean() [error](/builtin#error) // DeleteWithSubContainers takes all the objects saved in this Container // and calls the Close function of their Definition on them. // It will also call DeleteWithSubContainers on each child and remove its reference in the parent Container. // After deletion, the Container can no longer be used. // The sub-containers are deleted even if they are still used in other goroutines. // It can cause errors. You may want to use the Delete method instead. DeleteWithSubContainers() [error](/builtin#error) // Delete works like DeleteWithSubContainers if the Container does not have any child. // But if the Container has sub-containers, it will not be deleted right away. // The deletion only occurs when all the sub-containers have been deleted manually. // So you have to call Delete or DeleteWithSubContainers on all the sub-containers. Delete() [error](/builtin#error) // IsClosed returns true if the Container has been deleted. IsClosed() [bool](/builtin#bool) } ``` Container represents a dependency injection container. To create a Container, you should use a Builder or another Container. A Container has a scope and may have a parent in a more generic scope and children in a more specific scope. Objects can be retrieved from the Container. If the requested object does not already exist in the Container, it is built thanks to the object definition. The following attempts to get this object will return the same object. 
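The build-once, return-the-same-object behavior described above can be sketched with a minimal, self-contained container. This is an illustrative sketch of the semantics only, not the di implementation; the `def`/`container` types here are simplified stand-ins:

```go
package main

import "fmt"

// def is a simplified stand-in for di.Def: a name plus a build function.
type def struct {
	name  string
	build func() interface{}
}

// container caches built objects so a definition is only built once.
type container struct {
	defs    map[string]def
	objects map[string]interface{}
}

func newContainer(defs ...def) *container {
	c := &container{defs: map[string]def{}, objects: map[string]interface{}{}}
	for _, d := range defs {
		c.defs[d.name] = d
	}
	return c
}

// SafeGet mirrors the documented behavior: if the object does not already
// exist it is created and saved; later calls return the same object.
func (c *container) SafeGet(name string) (interface{}, error) {
	if obj, ok := c.objects[name]; ok {
		return obj, nil // already built: the same object is returned
	}
	d, ok := c.defs[name]
	if !ok {
		return nil, fmt.Errorf("could not find a definition for %q", name)
	}
	obj := d.build()
	c.objects[name] = obj
	return obj, nil
}

func main() {
	builds := 0
	ctn := newContainer(def{name: "conn", build: func() interface{} {
		builds++
		return &struct{ id int }{id: builds}
	}})
	a, _ := ctn.SafeGet("conn")
	b, _ := ctn.SafeGet("conn")
	fmt.Println(builds == 1, a == b) // built once, same instance both times
}
```

The real library layers scopes, parents, and Close functions on top of this caching core.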
#### type [ContainerKey](https://github.com/sarulabs/di/blob/v2.0.0/http.go#L11) [¶](#ContainerKey) ``` type ContainerKey [string](/builtin#string) ``` ContainerKey is a type that can be used to store a container in the context.Context of an http.Request. By default, it is used in the C function and the HTTPMiddleware. #### type [Def](https://github.com/sarulabs/di/blob/v2.0.0/definition.go#L4) [¶](#Def) ``` type Def struct { Name [string](/builtin#string) Scope [string](/builtin#string) Build func(ctn [Container](#Container)) (interface{}, [error](/builtin#error)) Close func(obj interface{}) [error](/builtin#error) } ``` Def contains information to build and close an object inside a Container. #### type [DefMap](https://github.com/sarulabs/di/blob/v2.0.0/definition.go#L12) [¶](#DefMap) ``` type DefMap map[[string](/builtin#string)][Def](#Def) ``` DefMap is a collection of Def ordered by name. #### func (DefMap) [Copy](https://github.com/sarulabs/di/blob/v2.0.0/definition.go#L15) [¶](#DefMap.Copy) ``` func (m [DefMap](#DefMap)) Copy() [DefMap](#DefMap) ``` Copy returns a copy of the DefMap. #### type [ScopeList](https://github.com/sarulabs/di/blob/v2.0.0/scope.go#L13) [¶](#ScopeList) ``` type ScopeList [][string](/builtin#string) ``` ScopeList is a slice of scope. #### func (ScopeList) [Contains](https://github.com/sarulabs/di/blob/v2.0.0/scope.go#L49) [¶](#ScopeList.Contains) ``` func (l [ScopeList](#ScopeList)) Contains(scope [string](/builtin#string)) [bool](/builtin#bool) ``` Contains returns true if the ScopeList contains the given scope. #### func (ScopeList) [Copy](https://github.com/sarulabs/di/blob/v2.0.0/scope.go#L16) [¶](#ScopeList.Copy) ``` func (l [ScopeList](#ScopeList)) Copy() [ScopeList](#ScopeList) ``` Copy returns a copy of the ScopeList. 
#### func (ScopeList) [ParentScopes](https://github.com/sarulabs/di/blob/v2.0.0/scope.go#L23) [¶](#ScopeList.ParentScopes) ``` func (l [ScopeList](#ScopeList)) ParentScopes(scope [string](/builtin#string)) [ScopeList](#ScopeList) ``` ParentScopes returns the scopes before the one given as parameter. #### func (ScopeList) [SubScopes](https://github.com/sarulabs/di/blob/v2.0.0/scope.go#L36) [¶](#ScopeList.SubScopes) ``` func (l [ScopeList](#ScopeList)) SubScopes(scope [string](/builtin#string)) [ScopeList](#ScopeList) ``` SubScopes returns the scopes after the one given as parameter.
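The ParentScopes/SubScopes semantics above can be illustrated with a small, self-contained sketch. This is not the library code; it only assumes the documented ordering, where more generic scopes come first (e.g. [App, Request, SubRequest]):

```go
package main

import "fmt"

// ScopeList sketch: an ordered slice of scopes, most generic first.
type ScopeList []string

// ParentScopes returns the scopes before the given one (more generic).
func (l ScopeList) ParentScopes(scope string) ScopeList {
	for i, s := range l {
		if s == scope {
			return append(ScopeList{}, l[:i]...)
		}
	}
	return ScopeList{}
}

// SubScopes returns the scopes after the given one (more specific).
func (l ScopeList) SubScopes(scope string) ScopeList {
	for i, s := range l {
		if s == scope {
			return append(ScopeList{}, l[i+1:]...)
		}
	}
	return ScopeList{}
}

// Contains reports whether the list holds the given scope.
func (l ScopeList) Contains(scope string) bool {
	for _, s := range l {
		if s == scope {
			return true
		}
	}
	return false
}

func main() {
	scopes := ScopeList{"app", "request", "subrequest"}
	fmt.Println(scopes.ParentScopes("request")) // the scopes before "request"
	fmt.Println(scopes.SubScopes("request"))    // the scopes after "request"
	fmt.Println(scopes.Contains("request"))
}
```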
Beaker === [![License](https://img.shields.io/github/license/voxpupuli/beaker.svg)](https://github.com/voxpupuli/beaker/blob/master/LICENSE) [![Test](https://github.com/voxpupuli/beaker/actions/workflows/test.yml/badge.svg)](https://github.com/voxpupuli/beaker/actions/workflows/test.yml) [![codecov](https://codecov.io/gh/voxpupuli/beaker/branch/master/graph/badge.svg?token=Mypkl78hvK)](https://codecov.io/gh/voxpupuli/beaker) [![Release](https://github.com/voxpupuli/beaker/actions/workflows/release.yml/badge.svg)](https://github.com/voxpupuli/beaker/actions/workflows/release.yml) [![RubyGem Version](https://img.shields.io/gem/v/beaker.svg)](https://rubygems.org/gems/beaker) [![RubyGem Downloads](https://img.shields.io/gem/dt/beaker.svg)](https://rubygems.org/gems/beaker) [![Donated by Puppet Inc](https://img.shields.io/badge/donated%20by-Puppet%20Inc-fb7047.svg)](#transfer-notice) Beaker is a test harness focused on acceptance testing via interactions between multiple (virtual) machines. It provides platform abstraction between different Systems Under Test (SUTs), and it can also be used as a virtual machine provisioner - setting up machines, running any commands on those machines, and then exiting. Beaker runs tests written in Ruby, and provides additional Domain-Specific Language (DSL) methods. This gives you access to all standard Ruby along with acceptance testing specific commands. Installation === See [Beaker Installation](docs/tutorials/installation.md). Documentation === Documentation for Beaker can be found in this repository in [the docs/ folder](docs/README.md). Table of Contents --- * [Tutorials](docs/tutorials) take you by the hand through the steps to setup a beaker run. Start here if you’re new to Beaker or test development. * [Concepts](docs/concepts) discuss key topics and concepts at a fairly high level and provide useful background information and explanation. 
* [Rubydocs](http://rubydoc.info/github/puppetlabs/beaker/frames) contain the technical reference for APIs and other aspects of Beaker. They describe how it works and how to use it but assume that you have a basic understanding of key concepts. * [How-to guides](docs/how_to) are recipes. They guide you through the steps involved in addressing key problems and use-cases. They are more advanced than tutorials and assume some knowledge of how Beaker works. Beaker Libraries === Beaker functionality has been extended through the use of libraries available as gems. See the [complete list](docs/concepts/beaker_libraries.md) for available gems. See the [beaker-template documentation](https://github.com/puppetlabs/beaker-template/blob/master/README.md) for documentation on creating beaker-libraries. Support & Issues === Please log tickets and issues at our [Beaker Issue Tracker](https://tickets.puppetlabs.com/issues/?jql=project%20%3D%20BKR). In addition, there is an active #puppet-dev channel on Freenode. For additional information on filing tickets, please check out our [CONTRIBUTOR doc](CONTRIBUTING.md), and for ticket lifecycle information, check out our [ticket process doc](docs/concepts/ticket_process.md). Contributing === If you'd like to contribute improvements to Beaker, please see [CONTRIBUTING](CONTRIBUTING.md). Maintainers === For information on project maintainers, please check out our [CODEOWNERS doc](CODEOWNERS). Transfer Notice --- This plugin was originally authored by [Puppet Inc](http://puppet.com). The maintainer preferred that Puppet Community take ownership of the module for future improvement and maintenance. Existing pull requests and issues were transferred over; please fork and continue to contribute here. Previously: <https://github.com/puppetlabs/beaker> License --- This gem is licensed under the Apache-2 license. 
Release information --- To make a new release, please do: * Update the version in the gemspec file * Install gems with `bundle install --with release --path .vendor` * Generate the changelog with `bundle exec rake changelog` * Check that the new version matches the closed issues/PRs in the changelog * Create a PR with it * After it has been merged, push a tag. GitHub Actions will do the actual release to rubygems.org and GitHub Packages
Package ‘Require’
May 22, 2023

Type: Package
Title: Installing and Loading R Packages for Reproducible Workflows
Description: A single key function, 'Require', that makes rerun-tolerant versions of 'install.packages' and `require` for CRAN packages, packages no longer on CRAN (i.e., archived), specific versions of packages, and GitHub packages. This approach is developed to create reproducible workflows that are flexible and fast enough to use while in development stages, while able to build snapshots once a stable package collection is found. As with other functions in a reproducible workflow, this package emphasizes functions that return the same result whether it is the first or subsequent times running the function, with subsequent times being sufficiently fast that they can be run every time without undue waiting burden on the user or developer.
URL: https://Require.predictiveecology.org, https://github.com/PredictiveEcology/Require
Date: 2023-05-22
Version: 0.3.1
Depends: R (>= 4.0)
Imports: data.table (>= 1.10.4), methods, tools, utils
Suggests: covr, parallel, remotes, testit
Encoding: UTF-8
Language: en-CA
License: GPL-3
BugReports: https://github.com/PredictiveEcology/Require/issues
ByteCompile: yes
RoxygenNote: 7.2.3
NeedsCompilation: no
Author: <NAME> [aut, cre] (https://orcid.org/0000-0002-6914-8316), <NAME> [ctb] (https://orcid.org/0000-0001-7146-8135), Her Majesty the Queen in Right of Canada, as represented by the Minister of Natural Resources Canada [cph]
Maintainer: <NAME> <<EMAIL>>
Repository: CRAN
Date/Publication: 2023-05-22 21:00:02 UTC

R topics documented: Require-package, .downloadFileMasterMainAuth, archiveVersionsAvailable, availablePackagesOverride, availableVersionOK, checkPath, clearRequirePackageCache, DESCRIPTIONFileVersionV, detachAll, extractPkgName, getOptionRPackageCache, invertList, linkOrCopy, messageDF, modifyList…, normPath, paddedFloatToCha…, parseGitHu…, pkgDe…
pkgDepIfDepRemove…, pkgSnapshot, RequireCacheDi…, RequireOption…, rversion, setdiffName…, setLibPath…, setLinuxBinaryRep…, setu…, sourcePkg…, tempdir…, tempfile…, trimVersionNumber

Require-package Require: Installing and Loading R Packages for Reproducible Workflows

Description

A single key function, ’Require’, that makes rerun-tolerant versions of ’install.packages’ and ‘require‘ for CRAN packages, packages no longer on CRAN (i.e., archived), specific versions of packages, and GitHub packages. This approach is developed to create reproducible workflows that are flexible and fast enough to use while in development stages, while able to build snapshots once a stable package collection is found. As with other functions in a reproducible workflow, this package emphasizes functions that return the same result whether it is the first or subsequent times running the function, with subsequent times being sufficiently fast that they can be run every time without undue waiting burden on the user or developer.

This is an "all in one" function that will run install.packages for CRAN and GitHub (https://github.com/) packages and will install specific versions of each package if versions are specified either via an (in)equality (e.g., "glue (>=1.6.2)" or "glue (==1.6.2)" for an exact version) or with a packageVersionFile. If require = TRUE, the default, the function will then run require on all named packages that satisfy their version requirements. If packages are already installed (packages supplied), and their optional version numbers are satisfied, then the "install" component will be skipped.
Usage

Require(
  packages,
  packageVersionFile,
  libPaths,
  install_githubArgs = list(),
  install.packagesArgs = list(INSTALL_opts = "--no-multiarch"),
  standAlone = getOption("Require.standAlone", FALSE),
  install = getOption("Require.install", TRUE),
  require = getOption("Require.require", TRUE),
  repos = getOption("repos"),
  purge = getOption("Require.purge", FALSE),
  verbose = getOption("Require.verbose", FALSE),
  type = getOption("pkgType"),
  upgrade = FALSE,
  ...
)

Install(
  packages,
  packageVersionFile,
  libPaths,
  install_githubArgs = list(),
  install.packagesArgs = list(INSTALL_opts = "--no-multiarch"),
  standAlone = getOption("Require.standAlone", FALSE),
  install = TRUE,
  repos = getOption("repos"),
  purge = getOption("Require.purge", FALSE),
  verbose = getOption("Require.verbose", FALSE),
  type = getOption("pkgType"),
  upgrade = FALSE,
  ...
)

Arguments

packages: Character vector of packages to install via install.packages, then load (i.e., with library). If it is one package, it can be unquoted (as in require). In the case of a GitHub package, it will be assumed that the name of the repository is the name of the package. If this is not the case, then pass a named character vector here, where the names are the package names that could be different from the GitHub repository name.

packageVersionFile: Character string of a file name, or logical. If TRUE, then this function will load the default file, getOption("Require.packageVersionFile"). If this argument is provided, then this will override any packages passed to packages.

libPaths: The library path (or libraries) where all packages should be installed, and looked for to load (i.e., call library). This can be used to create isolated, stand-alone package installations, if used with standAlone = TRUE. Currently, the path supplied here will be prepended to .libPaths() (temporarily during this call to Require) if standAlone = FALSE, or will set (temporarily) .libPaths() to c(libPaths, tail(.libPaths(), 1)) to keep base packages.

install_githubArgs: Deprecated. Values passed here are merged with install.packagesArgs, with install.packagesArgs taking precedence if conflicting.

install.packagesArgs: List of optional named arguments, passed to install.packages. Default is only --no-multiarch, meaning that only the current architecture will be built and installed (e.g., 64 bit, not 32 bit, in many cases).

standAlone: Logical. If TRUE, all packages will be installed to and loaded from the libPaths only. NOTE: If TRUE, THIS WILL CHANGE THE USER'S .libPaths(), similar to e.g., the checkpoint package. If FALSE, then libPaths will be prepended to .libPaths() during the Require call, resulting in shared packages, i.e., it will include the user's default package folder(s). This can create dramatically faster installs if the user has a substantial number of the packages already in their personal library. Default FALSE, to minimize package installing.

install: Logical or "force". If FALSE, this will not try to install anything. If "force", then it will force installation of requested packages, mimicking a call to e.g., install.packages. If TRUE, the default, then this function will try to install any missing packages or dependencies.

require: Logical or character string. If TRUE, the default, then the function will attempt to call require on all requested packages, possibly after they are installed. If a character string, then it will only call require on those specific packages (i.e., it will install the ones listed in packages, but load only the packages listed in require).

repos: The remote repository (e.g., a CRAN mirror), passed to either install.packages, install_github or installVersions.

purge: Logical. Should all caches be purged? Default is getOption("Require.purge", FALSE). There is a lot of internal caching of results throughout the Require package. These caches help with speed and reduce calls to internet sources. However, sometimes these caches must be purged. The cached values are renewed when found to be too old, against the age limit. This maximum age can be set in seconds with the environment variable R_AVAILABLE_PACKAGES_CACHE_CONTROL_MAX_AGE, or if unset, defaults to 3600 (one hour – see utils::available.packages). Internally, there are calls to available.packages.

verbose: Numeric or logical indicating how verbose the function should be. If -1 or -2, then as little verbosity as possible. If 0 or FALSE, then minimal outputs; if 1 or TRUE, more outputs; 2, even more. NOTE: in the Require function, when verbose >= 2, the return object will have an attribute, attr(.., "Require"), which has lots of information about the processes of the installs.

type: See utils::install.packages.

upgrade: When FALSE, the default, will only upgrade a package when the version in the local library is not adequate for the version requirements of the packages. Note: for convenience, update can be used for this argument.

...: Passed to install.packages. Good candidates are e.g., type or dependencies. This can be used with install_githubArgs or install.packagesArgs, which give individual options for those 2 internal function calls.

Details

Install is the same as Require(..., require = FALSE), for convenience.

Value

Require is intended to replace base::require, thus it returns a logical, named vector indicating whether the named packages have been loaded. Because Require also has the ability to install packages, a return value of FALSE does not mean that it did not install correctly; rather, it means it did not attach with require, which could be because it did not install correctly, or also because e.g., require = FALSE.

standAlone will either put the Required packages and their dependencies all within the libPaths (if TRUE) or, if FALSE, will only install packages and their dependencies that are otherwise not installed in .libPaths()[1], i.e., the current active R package directory. Any packages or dependencies that are not yet installed will be installed in libPaths.

GitHub Package

Follows the remotes::install_github standard. As with remotes::install_github, it is not possible to specify a past version of a GitHub package unless that version is a tag or the user passes the SHA that had that package version. Similarly, if a developer does a local install, e.g., via pkgload::install, of an active project, this package will not be able to know of the GitHub state, and thus pkgSnapshot will not be able to recover this state as there is no SHA associated with a local installation. Use Require (or remotes::install_github) to create a record of the GitHub state.

Package Snapshots

To build a snapshot of the desired packages and their versions, first run Require with all packages, then pkgSnapshot. If a libPaths is used, it must be used in both functions.

Mutual Dependencies

This function works best if all required packages are called within one Require call, as all dependencies can be identified together, and all package versions will be addressed (if there are no conflicts), allowing a call to pkgSnapshot() to take a snapshot or "record" of the current collection of packages and versions.

Local Cache of Packages

When installing new packages, Require will put all source and binary files in an R-version-specific subfolder of getOption("Require.RPackageCache"), whose default is RPackageCache(), meaning it caches packages locally in a project-independent location, and will reuse them if needed. To turn off this feature, set options("Require.RPackageCache" = FALSE).

Note

For advanced use and diagnosis, the user can set verbose = TRUE or 1 or 2 (or via options("Require.verbose")). This will attach an attribute attr(obj, "Require") to the output of this function.

Author(s)

Maintainer: <NAME> <<EMAIL>> (ORCID)

Other contributors:
• <NAME> <<EMAIL>> (ORCID) [contributor]
• Her Majesty the Queen in Right of Canada, as represented by the Minister of Natural Resources Canada [copyright holder]

See Also

Useful links:
• https://Require.predictiveecology.org
• https://github.com/PredictiveEcology/Require
• Report bugs at https://github.com/PredictiveEcology/Require/issues

Examples

## Not run:
# simple usage, like conditional install.packages then library
opts <- Require:::.setupExample()
library(Require)
getCRANrepos(ind = 1)
Require("stats") # analogous to require(stats), but it checks for
# pkg dependencies, and installs them, if missing

if (Require:::.runLongExamples()) {
  # Install in a new local library (libPaths)
  tempPkgFolder <- file.path(tempdir(), "Packages")
  # use standAlone, means it will put it in libPaths, even if it already exists
  # in another local library (e.g., personal library)
  Install("crayon", libPaths = tempPkgFolder, standAlone = TRUE)

  # make a package version snapshot of installed packages
  tf <- tempfile()
  (pkgSnapshot(tf, standAlone = TRUE))

  # Change the libPaths to emulate a new computer or project
  tempPkgFolder <- file.path(tempdir(), "Packages2")

  # Reinstall and reload the exact version from previous
  Require(packageVersionFile = tf, libPaths = tempPkgFolder, standAlone = TRUE)

  # Mutual dependencies, only installs once -- e.g., curl
  tempPkgFolder <- file.path(tempdir(), "Packages")
  Install(c("remotes", "testit"), libPaths = tempPkgFolder, standAlone = TRUE)

  # Mutual dependencies, only installs once -- e.g., curl
  tempPkgFolder <- file.path(tempdir(), "Packages")
  Install(c("covr", "httr"), libPaths = tempPkgFolder, standAlone = TRUE)

  #####################################################################################
  # Isolated projects -- Use a project folder and pass to libPaths or set .libPaths() #
  #####################################################################################

  # GitHub packages
ProjectPackageFolder <- file.path(tempdir(), "ProjectA") Require("PredictiveEcology/fpCompare@development", libPaths = ProjectPackageFolder, standAlone = FALSE ) Install("PredictiveEcology/fpCompare@development", libPaths = ProjectPackageFolder, standAlone = TRUE ) # the latest version on GitHub ############################################################################ # Mixing and matching GitHub, CRAN, with and without version numbering ############################################################################ pkgs <- c( "remotes (<=2.4.1)", # old version "digest (>= 0.6.28)", # recent version "PredictiveEcology/fpCompare@a0260b8476b06628bba0ae73af3430cce9620ca0" # exact version ) Require::Require(pkgs, libPaths = ProjectPackageFolder) Require:::.cleanup(opts) } ## End(Not run) .downloadFileMasterMainAuth GITHUB_PAT-aware and main-master-aware download from GitHub Description Equivalent to utils::download.file, but taking the GITHUB_PAT environment variable and using it to access the Github url. Usage .downloadFileMasterMainAuth( url, destfile, need = "HEAD", verbose = getOption("Require.verbose"), verboseLevel = 2 ) Arguments url a character string (or longer vector for the "libcurl" method) naming the URL of a resource to be downloaded. destfile a character string (or vector, see the url argument) with the file path where the downloaded file is to be saved. Tilde-expansion is performed. need If specified, user can suggest which master or main or HEAD to try first. If unspecified, HEAD is used. verbose Numeric or logical indicating how verbose should the function be. If -1 or -2, then as little verbosity as possible. If 0 or FALSE, then minimal outputs; if 1 or TRUE, more outputs; 2 even more. NOTE: in Require function, when verbose >= 2, the return object will have an attribute: attr(.., "Require") which has lots of information about the processes of the installs. 
verboseLevel: A numeric indicating the verbose threshold (level) above which this message will show.

Value

This is called for its side effect, namely, the same as utils::download.file, but using a GITHUB_PAT, if it is in the environment, and trying both master and main if the actual url specifies either master or main and it does not exist.

archiveVersionsAvailable Available and archived versions

Description

These are wrappers around available.packages and also get the archived versions available on CRAN.

Usage

archiveVersionsAvailable(package, repos)

available.packagesCached(
  repos,
  purge,
  verbose = getOption("Require.verbose"),
  returnDataTable = TRUE,
  type
)

Arguments

package: A single package name (without version or GitHub specifications).

repos: The remote repository (e.g., a CRAN mirror), passed to either install.packages, install_github or installVersions.

purge: Logical. Should all caches be purged? Default is getOption("Require.purge", FALSE). There is a lot of internal caching of results throughout the Require package. These caches help with speed and reduce calls to internet sources. However, sometimes these caches must be purged. The cached values are renewed when found to be too old, against the age limit. This maximum age can be set in seconds with the environment variable R_AVAILABLE_PACKAGES_CACHE_CONTROL_MAX_AGE, or if unset, defaults to 3600 (one hour – see utils::available.packages). Internally, there are calls to available.packages.

verbose: Numeric or logical indicating how verbose the function should be. If -1 or -2, then as little verbosity as possible. If 0 or FALSE, then minimal outputs; if 1 or TRUE, more outputs; 2, even more. NOTE: in the Require function, when verbose >= 2, the return object will have an attribute, attr(.., "Require"), which has lots of information about the processes of the installs.

returnDataTable: Logical. If TRUE, the default, then the return is a data.table. Otherwise, it is a matrix, as per available.packages.

type: See utils::install.packages.

Details

archiveVersionsAvailable searches CRAN Archives for available versions. It has been borrowed from a sub-set of the code in a non-exported function: remotes:::download_version_url

availablePackagesOverride Create a custom "available.packages" object

Description

This is the mechanism by which install.packages determines which packages should be installed from where. With this override, we can indicate arbitrary repos, Package, File for each individual package.

Usage

availablePackagesOverride(toInstall, repos, purge, type = getOption("pkgType"))

Arguments

toInstall: A pkgDT object.

repos: The remote repository (e.g., a CRAN mirror), passed to either install.packages, install_github or installVersions.

purge: Logical. Should all caches be purged? Default is getOption("Require.purge", FALSE). There is a lot of internal caching of results throughout the Require package. These caches help with speed and reduce calls to internet sources. However, sometimes these caches must be purged. The cached values are renewed when found to be too old, against the age limit. This maximum age can be set in seconds with the environment variable R_AVAILABLE_PACKAGES_CACHE_CONTROL_MAX_AGE, or if unset, defaults to 3600 (one hour – see utils::available.packages). Internally, there are calls to available.packages.

type: See utils::install.packages.

availableVersionOK Needs VersionOnRepos, versionSpec and inequality columns

Description

Needs VersionOnRepos, versionSpec and inequality columns.

Usage

availableVersionOK(pkgDT)

Arguments

pkgDT: A pkgDT object.

checkPath Check directory path

Description

Checks the specified path to a directory for formatting consistencies, such as trailing slashes, etc.
Usage

checkPath(path, create)

## S4 method for signature 'character,logical'
checkPath(path, create)

## S4 method for signature 'character,missing'
checkPath(path)

## S4 method for signature '`NULL`,ANY'
checkPath(path)

## S4 method for signature 'missing,ANY'
checkPath()

Arguments

path: A character string corresponding to a directory path.

create: A logical indicating whether the path should be created if it does not exist. Default is FALSE.

Value

Character string denoting the cleaned up filepath.

Note

This will not work for paths to files. To check for existence of files, use file.exists(). To normalize a path to a file, use normPath() or normalizePath().

See Also

file.exists(), dir.create().

Examples

## normalize file paths
paths <- list("./aaa/zzz",
  "./aaa/zzz/",
  ".//aaa//zzz",
  ".//aaa//zzz/",
  ".\\\\aaa\\\\zzz",
  ".\\\\aaa\\\\zzz\\\\",
  file.path(".", "aaa", "zzz"))
checked <- normPath(paths)
length(unique(checked)) ## 1; all of the above are equivalent

## check to see if a path exists
tmpdir <- file.path(tempdir(), "example_checkPath")
dir.exists(tmpdir) ## FALSE
tryCatch(checkPath(tmpdir, create = FALSE), error = function(e) FALSE) ## FALSE
checkPath(tmpdir, create = TRUE)
dir.exists(tmpdir) ## TRUE
unlink(tmpdir, recursive = TRUE) # clean up

clearRequirePackageCache Clear Require Cache elements

Description

Clear Require Cache elements.

Usage

clearRequirePackageCache(
  packages,
  ask = interactive(),
  Rversion = rversion(),
  clearCranCache = FALSE,
  verbose = getOption("Require.verbose")
)

Arguments

packages: Either missing or a character vector of package names (currently cannot specify version number) to remove from the local Require Cache.

ask: Logical. If TRUE, then it will ask the user to confirm.

Rversion: An R version (major dot minor, e.g., "4.2"). Defaults to the current R version.

clearCranCache: Logical. 
If TRUE, then this will also clear the local crancache cache, which is only relevant if options(Require.useCranCache = TRUE), i.e., if Require is using the crancache cache also verbose Numeric or logical indicating how verbose should the function be. If -1 or -2, then as little verbosity as possible. If 0 or FALSE, then minimal outputs; if 1 or TRUE, more outputs; 2 even more. NOTE: in Require function, when verbose >= 2, the return object will have an attribute: attr(.., "Require") which has lots of information about the processes of the installs. DESCRIPTIONFileVersionV GitHub package tools Description A series of helpers to access and deal with GitHub packages Usage DESCRIPTIONFileVersionV(file, purge = getOption("Require.purge", FALSE)) DESCRIPTIONFileOtherV(file, other = "RemoteSha") getGitHubDESCRIPTION( pkg, purge = getOption("Require.purge", FALSE), verbose = getOption("Require.verbose") ) Arguments file A file path to a DESCRIPTION file purge Logical. Should all caches be purged? Default is getOption("Require.purge", FALSE). There is a lot of internal caching of results throughout the Require package. These help with speed and reduce calls to internet sources. However, sometimes these caches must be purged. The cached values are renewed when found to be too old, with the age limit. This maximum age can be set in seconds with the environment variable R_AVAILABLE_PACKAGES_CACHE_CONTROL_MAX_AGE, or if unset, defaults to 3600 (one hour – see utils::available.packages). Internally, there are calls to available.packages. other Any other keyword in a DESCRIPTION file that precedes a ":". The rest of the line will be retrieved. pkg A character string with a GitHub package specification (c.f. remotes) verbose Numeric or logical indicating how verbose should the function be. If -1 or -2, then as little verbosity as possible. If 0 or FALSE, then minimal outputs; if 1 or TRUE, more outputs; 2 even more. 
NOTE: in the Require function, when verbose >= 2, the return object will have an attribute, attr(.., "Require"), which has lots of information about the processes of the installs.

Details

getGitHubDESCRIPTION retrieves the DESCRIPTION file from GitHub.com.

detachAll Detach and unload all packages

Description

This uses pkgDepTopoSort internally so that the package dependency tree is determined, and then packages are unloaded in the reverse order. Some packages don't unload successfully for a variety of reasons. Several known packages that have this problem are identified internally and not unloaded. Currently, these are glue, rlang, ps, ellipsis, and processx.

Usage

detachAll(
  pkgs,
  dontTry = NULL,
  doSort = TRUE,
  verbose = getOption("Require.verbose")
)

Arguments

pkgs: A character vector of packages to detach. Will be topologically sorted unless doSort is FALSE.

dontTry: A character vector of packages to not try. This can be used by a user if they find a package fails in attempts to unload it, e.g., "ps".

doSort: If TRUE (the default), then the pkgs will be topologically sorted. If FALSE, then it won't. Useful if the pkgs are already sorted.

verbose: Numeric or logical indicating how verbose the function should be. If -1 or -2, then as little verbosity as possible. If 0 or FALSE, then minimal outputs; if 1 or TRUE, more outputs; 2, even more. NOTE: in the Require function, when verbose >= 2, the return object will have an attribute, attr(.., "Require"), which has lots of information about the processes of the installs.

Value

A numeric named vector, with names of the packages that were attempted. 2 means the package was successfully unloaded; 1 means it was tried but failed; 3 means it was in the search path and was detached and unloaded. 
extractPkgName Extract info from package character strings

Description
Cleans a character vector of non-package name related information (e.g., version)

Usage
extractPkgName(pkgs, filenames)
extractVersionNumber(pkgs, filenames)
extractInequality(pkgs)
extractPkgGitHub(pkgs)

Arguments
pkgs A character string vector of packages with or without GitHub path or versions
filenames Can be supplied instead of pkgs if it is a filename e.g., a .tar.gz or .zip that was downloaded from CRAN.

Value
Just the package names without extraneous info.

See Also
trimVersionNumber()

Examples
extractPkgName("Require (>=0.0.1)")
extractVersionNumber(c(
  "Require (<=0.0.1)",
  "PredictiveEcology/Require@development (<=0.0.4)"
))
extractInequality("Require (<=0.0.1)")
extractPkgGitHub("PredictiveEcology/Require")

getOptionRPackageCache Get the option for Require.RPackageCache

Description
First checks if an environment variable Require.RPackageCache is set and defines a path. If not set, checks whether the options("Require.RPackageCache") is set. If a character string, then it returns that. If TRUE, then use RequirePkgCacheDir(). If FALSE then returns NULL.

Usage
getOptionRPackageCache()

invertList Invert a 2-level list

Description
This is a simple version of purrr::transpose, only for lists with 2 levels.

Usage
invertList(l)

Arguments
l A list with 2 levels. If some levels are absent, they will be NULL

Value
A list with 2 levels deep, inverted from l

Examples
# create a 2-deep, 2 levels in first, 3 levels in second
a <- list(a = list(d = 1, e = 2:3, f = 4:6), b = list(d = 5, e = 55))
invertList(a) # creates 2-deep, now 3 levels outer --> 2 levels inner

linkOrCopy Create link to file, falling back to making a copy if linking fails.

Description
First try to create a hardlink to the file. If that fails, try a symbolic link (symlink) before falling back to copying the file. "File" here can mean a file or a directory.
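The hardlink → symlink → copy fallback chain described above can be sketched in base R. This is a simplified stand-in for linkOrCopy, not its actual implementation: it ignores directory handling and the allowSymlink argument (which by default skips the symlink step):

```r
# Sketch of link-with-fallback: try a hardlink, then a symlink,
# then a plain copy. Hypothetical helper for illustration only.
linkOrCopySketch <- function(from, to) {
  ok <- suppressWarnings(file.link(from, to))
  if (!ok) ok <- suppressWarnings(file.symlink(from, to))
  if (!ok) ok <- file.copy(from, to)
  ok
}

from <- tempfile(); writeLines("hello", from)
to <- tempfile()
linkOrCopySketch(from, to)
readLines(to)  # "hello"
```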
Usage
linkOrCopy(from, to, allowSymlink = FALSE)
fileRenameOrMove(from, to)

Arguments
from, to character vectors, containing file names or paths.
allowSymlink Logical. If FALSE, the default, then it will try file.link first, then file.copy, omitting the file.symlink step

messageDF Use message to print a clean square data structure

Description
Sends to message, but in a structured way so that a data.frame-like can be cleanly sent to messaging.
This will only show a message if the value of verbose is greater than the verboseLevel. This is mostly useful for developers of code who want to give users of their code easy access to how verbose their code will be. A developer of a function will place this messageVerbose internally, setting the verboseLevel according to how advanced they may want the message to be. 1 is a reasonable default for standard use, 0 would be for "a very important message for all users", 2 or above would be increasing levels of details for e.g., advanced use. If a user sets verbose to -1 with this numeric approach, they can avoid all messaging.

Usage
messageDF(df, round, verbose = getOption("Require.verbose"), verboseLevel = 1)

messageVerbose(..., verbose = getOption("Require.verbose"), verboseLevel = 1)

messageVerboseCounter(
  pre = "",
  post = "",
  verbose = getOption("Require.verbose"),
  verboseLevel = 1,
  counter = 1,
  total = 1,
  minCounter = 1
)

Arguments
df A data.frame, data.table, matrix
round An optional numeric to pass to round
verbose Numeric or logical indicating how verbose should the function be. If -1 or -2, then as little verbosity as possible. If 0 or FALSE, then minimal outputs; if 1 or TRUE, more outputs; 2 even more. NOTE: in Require function, when verbose >= 2, the return object will have an attribute: attr(.., "Require") which has lots of information about the processes of the installs.
verboseLevel A numeric indicating what verbose threshold (level) above which this message will show.
... Passed to install.packages.
Good candidates are e.g., type or dependencies. This can be used with install_githubArgs or install.packageArgs which give individual options for those 2 internal function calls.
pre A single text string to paste before the counter
post A single text string to paste after the counter
counter An integer indicating which iteration is being done
total An integer indicating the total number to be done.
minCounter An integer indicating the minimum (i.e., starting) value

Value
Used for side effects, namely messaging that can be turned on or off with different numeric values of verboseLevel. A user sets the verboseLevel for a particular message.

modifyList2 modifyList for multiple lists

Description
This calls utils::modifyList iteratively using base::Reduce, so it can handle >2 lists. The subsequent list elements that share a name will override previous list elements with that same name. It also will handle the case where any list is NULL. Note: default keep.null = TRUE, which is different from modifyList

Usage
modifyList2(..., keep.null = FALSE)
modifyList3(..., keep.null = TRUE)

Arguments
... One or more named lists.
keep.null If TRUE, NULL elements in val become NULL elements in x. Otherwise, the corresponding element, if present, is deleted from x.

Details
More or less a convenience around Reduce(modifyList, list(...)), with some checks, and the addition of keep.null = TRUE by default.

Note
modifyList3 retains the original behaviour of modifyList2 (prior to Oct 2022); however, it cannot retain NULL values in lists.

Examples
modifyList2(list(a = 1), list(a = 2, b = 2))
modifyList2(list(a = 1), NULL, list(a = 2, b = 2))
modifyList2(
  list(a = 1), list(x = NULL), list(a = 2, b = 2),
  list(a = 3, c = list(1:10))
)

normPath Normalize filepath

Description
Checks the specified filepath for formatting consistencies:
1. use slash instead of backslash;
2. do tilde etc. expansion;
3. remove trailing slash.
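The three normalization steps above can be sketched with base string operations. This is a simplified approximation for illustration, not normPath's actual implementation (which is an S4 generic with methods for character, list, NULL, missing, and logical inputs); it also collapses doubled slashes, which the normPath examples imply:

```r
# Sketch of the steps: tilde expansion, forward slashes, no trailing slash.
# Hypothetical helper; not the real normPath.
normPathSketch <- function(path) {
  p <- path.expand(path)     # 2. do tilde etc. expansion
  p <- gsub("\\\\", "/", p)  # 1. use slash instead of backslash
  p <- gsub("/+", "/", p)    # collapse doubled slashes
  sub("/$", "", p)           # 3. remove trailing slash
}

normPathSketch(c("./aaa/zzz/", ".//aaa//zzz", ".\\aaa\\zzz\\"))
# all three become "./aaa/zzz"
```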
Usage
normPath(path)

## S4 method for signature 'character'
normPath(path)

## S4 method for signature 'list'
normPath(path)

## S4 method for signature '`NULL`'
normPath(path)

## S4 method for signature 'missing'
normPath()

## S4 method for signature 'logical'
normPath(path)

Arguments
path A character vector of filepaths.

Value
Character vector of cleaned up filepaths.

Examples
## normalize file paths
paths <- list("./aaa/zzz",
  "./aaa/zzz/",
  ".//aaa//zzz",
  ".//aaa//zzz/",
  ".\\\\aaa\\\\zzz",
  ".\\\\aaa\\\\zzz\\\\",
  file.path(".", "aaa", "zzz")
)
checked <- normPath(paths)
length(unique(checked)) ## 1; all of the above are equivalent

## check to see if a path exists
tmpdir <- file.path(tempdir(), "example_checkPath")
dir.exists(tmpdir) ## FALSE
tryCatch(checkPath(tmpdir, create = FALSE), error = function(e) FALSE) ## FALSE
checkPath(tmpdir, create = TRUE)
dir.exists(tmpdir) ## TRUE
unlink(tmpdir, recursive = TRUE) # clean up

paddedFloatToChar Convert numeric to character with padding

Description
This will pad floating point numbers, right or left. For integers, either class integer or functionally integer (e.g., 1.0), it will not pad right of the decimal. For more specific control or to get exact padding right and left of decimal, try the stringi package. It will also not do any rounding. See examples.

Usage
paddedFloatToChar(x, padL = ceiling(log10(x + 1)), padR = 3, pad = "0")

Arguments
x numeric. Number to be converted to character with padding
padL numeric. Desired number of digits on left side of decimal. If not enough, pad will be used to pad.
padR numeric. Desired number of digits on right side of decimal. If not enough, pad will be used to pad.
pad character to use as padding (nchar(pad) == 1 must be TRUE). Currently, can be only "0" or " " (i.e., space).

Value
Character string representing the padded number.
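The padding behaviour described above can be approximated by padding the integer and fractional parts independently, so that (like paddedFloatToChar) nothing is ever rounded or truncated. The helper below is a hypothetical sketch, not the package's implementation:

```r
# Sketch: pad left of the decimal to padL and right of the decimal to
# padR, never rounding. Hypothetical helper for illustration.
padFloatSketch <- function(x, padL = 1, padR = 3, pad = "0") {
  parts <- strsplit(format(x, scientific = FALSE), ".", fixed = TRUE)[[1]]
  left  <- parts[1]
  right <- if (length(parts) > 1) parts[2] else ""
  left  <- paste0(strrep(pad, max(0, padL - nchar(left))), left)
  # pad out the right side, but never shorten it (i.e., no rounding)
  right <- paste0(right, strrep(pad, max(0, padR - nchar(right))))
  if (nzchar(right)) paste0(left, ".", right) else left
}

padFloatSketch(1.25, padL = 3, padR = 5)  # "001.25000"
padFloatSketch(1.25, padL = 3, padR = 1)  # "001.25" -- keeps 2 right of decimal
padFloatSketch(2)                         # "2" -- integers get no decimal part
```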
Author(s)
<NAME> and <NAME>

Examples
paddedFloatToChar(1.25)
paddedFloatToChar(1.25, padL = 3, padR = 5)
paddedFloatToChar(1.25, padL = 3, padR = 1) # no rounding, so keeps 2 right of decimal

parseGitHub Parse a GitHub package specification

Description
This converts a specification like PredictiveEcology/Require@development into separate columns, "Account", "Repo", "Branch", "GitSubFolder" (if there is one)

Usage
parseGitHub(pkgDT, verbose = getOption("Require.verbose"))

Arguments
pkgDT A pkgDT data.table.
verbose Numeric or logical indicating how verbose should the function be. If -1 or -2, then as little verbosity as possible. If 0 or FALSE, then minimal outputs; if 1 or TRUE, more outputs; 2 even more. NOTE: in Require function, when verbose >= 2, the return object will have an attribute: attr(.., "Require") which has lots of information about the processes of the installs.

Details
parseGitHub turns the single character string representation into 3 or 4: Account, Repo, Branch, SubFolder.

Value
parseGitHub returns a data.table with added columns.

pkgDep Determine package dependencies

Description
This will first look in local filesystem (in .libPaths()) and will use a local package to find its dependencies. If the package does not exist locally, including whether it is the correct version, then it will look in (currently) CRAN and its archives (if the current CRAN version is not the desired version to check). It will also look on GitHub if the package description is of the form of a GitHub package with format account/repo@branch or account/repo@commit. For this, it will attempt to get package dependencies from the GitHub ‘DESCRIPTION’ file.
This is intended to replace tools::package_dependencies or pkgDep in the miniCRAN package, but with modifications to allow multiple sources to be searched in the same function call.
pkgDep2 is a convenience wrapper of pkgDep that "goes one level in", i.e., the first order dependencies, and runs the pkgDep on those.
This is a wrapper around tools::dependsOnPkgs, but with the added option of sorted, which will sort them such that the packages at the top will have the least number of dependencies that are in pkgs. This is essentially a topological sort, but it is done heuristically. This can be used to e.g., detach or unloadNamespace packages in an order such that each package is detached or unloaded before its dependencies.

Usage
pkgDep(
  packages,
  libPath = .libPaths(),
  which = c("Depends", "Imports", "LinkingTo"),
  recursive = FALSE,
  depends,
  imports,
  suggests,
  linkingTo,
  repos = getOption("repos"),
  keepVersionNumber = TRUE,
  includeBase = FALSE,
  sort = TRUE,
  purge = getOption("Require.purge", FALSE),
  verbose = getOption("Require.verbose"),
  includeSelf = TRUE,
  type = getOption("pkgType")
)

pkgDep2(
  packages,
  recursive = TRUE,
  which = c("Depends", "Imports", "LinkingTo"),
  depends,
  imports,
  suggests,
  linkingTo,
  repos = getOption("repos"),
  sorted = TRUE,
  purge = getOption("Require.purge", FALSE),
  includeSelf = TRUE,
  verbose = getOption("Require.verbose")
)

pkgDepTopoSort(
  pkgs,
  deps,
  reverse = FALSE,
  topoSort = TRUE,
  libPath = .libPaths(),
  useAllInSearch = FALSE,
  returnFull = TRUE,
  recursive = TRUE,
  purge = getOption("Require.purge", FALSE),
  which = c("Depends", "Imports", "LinkingTo"),
  type = getOption("pkgType"),
  verbose = getOption("Require.verbose")
)

Arguments
packages Character vector of packages to install via install.packages, then load (i.e., with library). If it is one package, it can be unquoted (as in require). In the case of a GitHub package, it will be assumed that the name of the repository is the name of the package. If this is not the case, then pass a named character vector here, where the names are the package names that could be different than the GitHub repository name.
libPath A path to search for installed packages.
Defaults to .libPaths()
which A character vector listing the types of dependencies, a subset of c("Depends", "Imports", "LinkingTo", "Suggests", "Enhances"). Character string "all" is shorthand for that vector, character string "most" for the same vector without "Enhances".
recursive Logical. Should dependencies of dependencies be searched, recursively. NOTE: Dependencies of suggests will not be recursive. Default TRUE.
depends Logical. Include packages listed in "Depends". Default TRUE.
imports Logical. Include packages listed in "Imports". Default TRUE.
suggests Logical. Include packages listed in "Suggests". Default FALSE.
linkingTo Logical. Include packages listed in "LinkingTo". Default TRUE.
repos The remote repository (e.g., a CRAN mirror), passed to either install.packages, install_github or installVersions.
keepVersionNumber Logical. If TRUE, then the package dependencies returned will include version number. Default is FALSE
includeBase Logical. Should R base packages be included, specifically, those in tail(.libPaths(), 1)
sort Logical. If TRUE, the default, then the packages will be sorted alphabetically. If FALSE, the packages will not have a discernible order as they will be a concatenation of the possibly recursive package dependencies.
purge Logical. Should all caches be purged? Default is getOption("Require.purge", FALSE). There is a lot of internal caching of results throughout the Require package. These help with speed and reduce calls to internet sources. However, sometimes these caches must be purged. The cached values are renewed when found to be too old, with the age limit. This maximum age can be set in seconds with the environment variable R_AVAILABLE_PACKAGES_CACHE_CONTROL_MAX_AGE, or if unset, defaults to 3600 (one hour – see utils::available.packages). Internally, there are calls to available.packages.
verbose Numeric or logical indicating how verbose should the function be. If -1 or -2, then as little verbosity as possible.
If 0 or FALSE, then minimal outputs; if 1 or TRUE, more outputs; 2 even more. NOTE: in Require function, when verbose >= 2, the return object will have an attribute: attr(.., "Require") which has lots of information about the processes of the installs.
includeSelf Logical. If TRUE, the default, then the dependencies will include the package itself in the returned list elements, otherwise, only the "dependencies"
type See utils::install.packages
sorted Logical. If TRUE, the default, the packages will be sorted in the returned list from most number of dependencies to least.
pkgs A vector of package names to evaluate their reverse depends (i.e., the packages that use each of these packages)
deps An optional named list of (reverse) dependencies. If not supplied, then tools::dependsOnPkgs(..., recursive = TRUE) will be used
reverse Logical. If TRUE, then this will use tools::dependsOnPkgs to determine which packages depend on the pkgs
topoSort Logical. If TRUE, the default, then the returned list of packages will be in order with the least number of dependencies listed in pkgs at the top of the list.
useAllInSearch Logical. If TRUE, then all non-core R packages in search() will be appended to pkgs to allow those to also be identified
returnFull Logical. Primarily useful when reverse = TRUE. If TRUE, then all installed packages will be searched. If FALSE, the default, only packages that are currently in the search() path and passed in pkgs will be included in the possible reverse dependencies.

Value
A possibly ordered, named (with packages as names) list where list elements are the full reverse depends.

Note
tools::package_dependencies and pkgDep will differ under the following circumstances: 1. GitHub packages are not detected using tools::package_dependencies; 2. tools::package_dependencies does not detect the dependencies of base packages among themselves, e.g., methods depends on stats and graphics.
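The recursive lookup that pkgDep performs across sources can be illustrated on a toy, in-memory dependency table. The data and helper below are hypothetical; the real function queries the local library, CRAN, and GitHub rather than a hand-made list:

```r
# Toy recursive dependency resolution over a hand-made table.
deps <- list(
  A = c("B", "C"),  # A "Imports" B and C
  B = c("C", "D"),
  C = "D",
  D = character(0)
)

recursiveDeps <- function(pkg, deps) {
  seen <- character(0)
  todo <- deps[[pkg]]
  while (length(todo)) {
    p <- todo[1]; todo <- todo[-1]
    if (!p %in% seen) {
      seen <- c(seen, p)
      todo <- c(todo, deps[[p]])
    }
    # already-seen packages are skipped, so shared deps are counted once
  }
  sort(seen)  # pkgDep also sorts alphabetically when sort = TRUE
}

recursiveDeps("A", deps)  # "B" "C" "D"
```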
Examples
## Not run:
if (Require:::.runLongExamples()) {
  opts <- Require:::.setupExample()
  pkgDep("tidyverse", recursive = TRUE)

  # GitHub, local, and CRAN packages
  pkgDep(c("PredictiveEcology/reproducible", "Require", "plyr"))

  Require:::.cleanup(opts)
}
## End(Not run)

## Not run:
if (Require:::.runLongExamples()) {
  opts <- Require:::.setupExample()
  pkgDep2("reproducible")
  # much bigger one
  pkgDep2("tidyverse")
  Require:::.cleanup(opts)
}
## End(Not run)

## Not run:
if (Require:::.runLongExamples()) {
  opts <- Require:::.setupExample()
  pkgDepTopoSort(c("Require", "data.table"), reverse = TRUE)
  Require:::.cleanup(opts)
}
## End(Not run)

pkgDepIfDepRemoved Package dependencies when one or more packages removed

Description
This is primarily for package developers. It allows the testing of what the recursive dependencies would be if a package was removed from the immediate dependencies.

Usage
pkgDepIfDepRemoved(
  pkg = character(),
  depsRemoved = character(),
  verbose = getOption("Require.verbose")
)

Arguments
pkg A package name for which to test the dependencies
depsRemoved A vector of package names that are to be "removed" from the pkg immediate dependencies
verbose Numeric or logical indicating how verbose should the function be. If -1 or -2, then as little verbosity as possible. If 0 or FALSE, then minimal outputs; if 1 or TRUE, more outputs; 2 even more. NOTE: in Require function, when verbose >= 2, the return object will have an attribute: attr(.., "Require") which has lots of information about the processes of the installs.

Value
A list with 3 named lists Direct, Recursive and IfRemoved. Direct will show the top level direct dependencies, either Remaining or Removed. Recursive will show the full recursive dependencies, either Remaining or Removed. IfRemoved returns all package dependencies that are removed for each top level dependency. If a top level dependency is not listed in this final list, then it means that it is also a recursive dependency elsewhere, so its removal has no effect.
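The IfRemoved idea can be sketched on a toy table: compute the recursive dependencies with and without one direct dependency, and difference them. The helpers and data below are hypothetical illustrations, not the package's implementation:

```r
# Toy version of "what would be dropped if a direct dependency were removed".
deps <- list(A = c("B", "C"), B = "D", C = c("D", "E"),
             D = character(0), E = character(0))

recDeps <- function(pkg, deps) {
  seen <- character(0); todo <- deps[[pkg]]
  while (length(todo)) {
    p <- todo[1]; todo <- todo[-1]
    if (!p %in% seen) { seen <- c(seen, p); todo <- c(todo, deps[[p]]) }
  }
  seen
}

ifDepRemoved <- function(pkg, removed, deps) {
  full <- recDeps(pkg, deps)
  deps[[pkg]] <- setdiff(deps[[pkg]], removed)
  reduced <- recDeps(pkg, deps)
  sort(setdiff(full, reduced))  # dependencies that vanish with `removed` gone
}

ifDepRemoved("A", "C", deps)  # "C" "E" -- D survives because B still needs it
```

Note how D is not reported: it is also a recursive dependency via B, so removing C has no effect on it, which is exactly the behaviour the Value section describes.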
Examples
## Not run:
if (Require:::.runLongExamples()) {
  opts <- Require:::.setupExample()
  pkgDepIfDepRemoved("reproducible", "data.table")
  Require:::.cleanup(opts)
}
## End(Not run)

pkgSnapshot Take a snapshot of all the packages and version numbers

Description
This can be used later by Require to install or re-install the correct versions. See examples.

Usage
pkgSnapshot(
  packageVersionFile = getOption("Require.packageVersionFile"),
  libPaths = .libPaths(),
  standAlone = FALSE,
  purge = getOption("Require.purge", FALSE),
  exact = TRUE,
  includeBase = FALSE,
  verbose = getOption("Require.verbose")
)

pkgSnapshot2(
  packageVersionFile = getOption("Require.packageVersionFile"),
  libPaths,
  standAlone = FALSE,
  purge = getOption("Require.purge", FALSE),
  exact = TRUE,
  includeBase = FALSE,
  verbose = getOption("Require.verbose")
)

Arguments
packageVersionFile A filename to save the packages and their currently installed version numbers. Defaults to "packageVersions.txt". If this is specified to be NULL, the function will return the exact Require call needed to install all the packages at their current versions. This can be useful to add to a script to allow for reproducibility of a script.
libPaths The path to the local library where packages are installed. Defaults to .libPaths()[1].
standAlone Logical. If TRUE, all packages will be installed to and loaded from the libPaths only. NOTE: If TRUE, THIS WILL CHANGE THE USER’S .libPaths(), similar to e.g., the checkpoint package. If FALSE, then libPath will be prepended to .libPaths() during the Require call, resulting in shared packages, i.e., it will include the user’s default package folder(s). This can create dramatically faster installs if the user has a substantial number of the packages already in their personal library. Default FALSE to minimize package installing.
purge Logical. Should all caches be purged? Default is getOption("Require.purge", FALSE).
There is a lot of internal caching of results throughout the Require package. These help with speed and reduce calls to internet sources. However, sometimes these caches must be purged. The cached values are renewed when found to be too old, with the age limit. This maximum age can be set in seconds with the environment variable R_AVAILABLE_PACKAGES_CACHE_CONTROL_MAX_AGE, or if unset, defaults to 3600 (one hour – see utils::available.packages). Internally, there are calls to available.packages.
exact Logical. If TRUE, the default, then for GitHub packages, it will install the exact SHA, rather than the head of the account/repo@branch. For CRAN packages, it will install the exact version. If FALSE, then GitHub packages will identify their branch if that had been specified upon installation, not a SHA. If the package had been installed with reference to a SHA, then it will return the SHA as it does not know what branch it came from. Similarly, CRAN packages will report their version and specify with a >=, allowing a subsequent user to install with a minimum version number, as opposed to an exact version number.
includeBase Logical. Should R base packages be included, specifically, those in tail(.libPaths(), 1)
verbose Numeric or logical indicating how verbose should the function be. If -1 or -2, then as little verbosity as possible. If 0 or FALSE, then minimal outputs; if 1 or TRUE, more outputs; 2 even more. NOTE: in Require function, when verbose >= 2, the return object will have an attribute: attr(.., "Require") which has lots of information about the processes of the installs.

Details
A file is written with the package names and versions of all packages within libPaths. This can later be passed to Require.
pkgSnapshot2 returns a vector of package names and versions, with no file output. See examples.

Value
Will both write a file, and (invisibly) return a vector of packages with the version numbers.
This vector can be used directly in Require, though it should likely be used with require = FALSE to prevent attaching all the packages.

Examples
## Not run:
if (Require:::.runLongExamples()) {
  opts <- Require:::.setupExample()

  # install one archived version so that below does something interesting
  libForThisEx <- tempdir2("Example")
  Require("crayon (==1.5.1)", libPaths = libForThisEx, require = FALSE)

  # Normal use -- using the libForThisEx for example;
  # normally libPaths would be omitted to get all
  # packages in user or project library
  tf <- tempfile()

  # writes to getOption("Require.packageVersionFile")
  # within project; also returns a vector
  # of packages with version
  pkgs <- pkgSnapshot(
    packageVersionFile = tf,
    libPaths = libForThisEx
  )

  # Now move this file to another computer e.g. by committing in git,
  # emailing, googledrive
  # on next computer/project
  Require(packageVersionFile = tf, libPaths = libForThisEx)

  # Using pkgSnapshot2 to get the vector of packages and versions
  tf <- tempfile()
  pkgs <- pkgSnapshot2(
    packageVersionFile = tf,
    libPaths = libForThisEx
  )
  Require(pkgs, require = FALSE) # will install packages from previous line
  # (likely want require = FALSE
  # and not load them all)

  Require:::.cleanup(opts)
  unlink(getOption("Require.packageVersionFile"))
}
## End(Not run)

RequireCacheDir Path to (package) cache directory

Description
Sets (if create = TRUE) or gets the cache directory associated with the Require package.

Usage
RequireCacheDir(create)
RequirePkgCacheDir(create)

Arguments
create A logical indicating whether the path should be created if it does not exist. Default is FALSE.

Details
To set a different directory than the default, set the system variable: R_USER_CACHE_DIR = "somePath" and/or R_REQUIRE_PKG_CACHE = "somePath" e.g., in .Renviron file or Sys.setenv(). See Note below.

Value
If !is.null(getOptionRPackageCache()), i.e., a cache path exists, the cache directory will be created, with a README placed in the folder.
Otherwise, this function will just return the path of what the cache directory would be.

Note
Currently, there are 2 different Cache directories used by Require: RequireCacheDir and RequirePkgCacheDir. The RequirePkgCacheDir is intended to be a sub-directory of the RequireCacheDir. If you set Sys.setenv("R_USER_CACHE_DIR" = "somedir"), then both the package cache and cache dirs will be set, with the package cache a sub-directory. You can, however, set them independently if you set the "R_USER_CACHE_DIR" and "R_REQUIRE_PKG_CACHE" environment variables. The package cache can also be set with options("Require.RPackageCache" = "somedir").

RequireOptions Require options

Description
These provide top-level, powerful settings for a comprehensive reproducible workflow. See Details below.

Usage
RequireOptions()
getRequireOptions()

Details
RequireOptions() prints the default values of package options set at startup, which may have been changed (e.g., by the user) during the current session. getRequireOptions() prints the current values of package options.
Below are options that can be set with options("Require.xxx" = newValue), where xxx is one of the values below, and newValue is a new value to give the option. Sometimes these options can be placed in the user’s .Rprofile file so they persist between sessions.
The following options are likely of interest to most users:
install Default: TRUE. This is the default argument to Require, but does not affect Install. If this is FALSE, then no installations will be attempted, and missing packages will result in an error.
RPackageCache Default: getOptionRPackageCache(), which must be either a path or a logical. To turn off package caching, set this to FALSE. This can be set using an environment variable e.g.
Sys.setenv(R_REQUIRE_PKG_CACHE = "somePath"), or Sys.setenv(R_REQUIRE_PKG_CACHE = "TRUE"); if that is not set, then either a path or logical option (options(Require.RPackageCache = "somePath") or options(Require.RPackageCache = TRUE)). If TRUE, the default folder location RequirePkgCacheDir() will be used. If this is TRUE or a path is provided, then binary and source packages will be cached here. Subsequent downloads of the same package will use the local copy. Default is to have packages not be cached locally so each install of the same version will be from the original source, e.g., CRAN, GitHub.
otherPkgs Default: A character vector of packages that are generally more successful if installed from Source on Unix-alikes. Since there are repositories that offer binary package builds for Linux (e.g., RStudio Package Manager), the vector of package names indicated here will default to a standard CRAN repository, forcing a source install. See also spatialPkgs option, which does the same for spatial packages.
purge Default: FALSE. If set to TRUE, (almost) all internal caches used by Require will be deleted and rebuilt. This should not generally be necessary as it will automatically be deleted after (by default) 1 hour (set via R_AVAILABLE_PACKAGES_CACHE_CONTROL_MAX_AGE environment variable in seconds)
spatialPkgs Default: A character vector of packages that are generally more successful if installed from Source on Unix-alikes. Since there are repositories that offer binary package builds for Linux (e.g., RStudio Package Manager), the vector of package names indicated here will default to a standard CRAN repository, forcing a source install. See also otherPkgs option, which does the same for non-spatial packages.
useCranCache Default: FALSE. A user can optionally use the locally cached packages that are available due to a user’s use of the crancache package.
verbose Default: 1. See ?Require.
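Setting and reading these options follows the usual base R pattern, shown here with the Require.verbose option named above (options() returns the previous values, so a change can be undone):

```r
# Options are read with getOption() and set with options();
# the previous values are returned so they can be restored.
old <- options(Require.verbose = 2)
getOption("Require.verbose")  # 2

# Restore whatever was set before (possibly NULL)
options(old)
```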
rversions R versions

Description
Reference table of R versions and their release dates (2018 and later).

Usage
rversions

Format
An object of class data.frame with 21 rows and 2 columns.

Details
Update this as needed using rversions::r_versions():
# install.packages("rversions")
v = rversions::r_versions()
keep = which(as.Date(v$date, format = "%Y-%m-%d") > as.Date("2018-01-01", format = "%Y-%m-%d"))
dput(v[keep, c("version", "date")])

setdiffNamed Like setdiff, but takes into account names

Description
This will identify the elements in l1 that are not in l2. If missingFill is provided, then elements that are in l2, but not in l1 will be returned, assigning missingFill to their values. This might be NULL or "", i.e., some sort of empty value. This function will work on named lists, named vectors and likely on other named classes.

Usage
setdiffNamed(l1, l2, missingFill)

Arguments
l1 A named list or named vector
l2 A named list or named vector (must be same class as l1)
missingFill A value, such as NULL or "" or "missing" that will be given to the elements returned, that are in l2, but not in l1

Details
There are 3 types of differences that might occur with named elements: 1. a new named element, 2. a removed named element, and 3. a modified named element. This function captures all of these. In the case of unnamed elements, e.g., setdiff, the first two are not seen as differences, if the values are not different.

Value
A vector or list of the elements in l1 that are not in l2, and optionally the elements of l2 that are not in l1, with values set to missingFill

setLibPaths Set .libPaths

Description
This will set the .libPaths() by either adding a new path to it if standAlone = FALSE, or will concatenate c(libPath, tail(.libPaths(), 1)) if standAlone = TRUE. Currently, the default is to make this new .libPaths() "sticky", meaning it becomes associated with the current directory even through a restart of R. It does this by adding and/or updating the ‘.Rprofile’ file in the current directory.
If this current directory is a project, then the project will have the new .libPaths() associated with it, even through an R restart.

Usage
setLibPaths(
  libPaths,
  standAlone = TRUE,
  updateRprofile = getOption("Require.updateRprofile", FALSE),
  exact = FALSE,
  verbose = getOption("Require.verbose")
)

Arguments
libPaths A new path to append to, or replace all existing user components of .libPaths()
standAlone Logical. If TRUE, all packages will be installed to and loaded from the libPaths only. NOTE: If TRUE, THIS WILL CHANGE THE USER’S .libPaths(), similar to e.g., the checkpoint package. If FALSE, then libPath will be prepended to .libPaths() during the Require call, resulting in shared packages, i.e., it will include the user’s default package folder(s). This can create dramatically faster installs if the user has a substantial number of the packages already in their personal library. Default FALSE to minimize package installing.
updateRprofile Logical or Character string. If TRUE, then this function will put several lines of code in the current directory’s .Rprofile file setting up the package libraries for this and future sessions. If a character string, then this should be the path to an .Rprofile file. To reset back to normal, run setLibPaths() without a libPath. Default: getOption("Require.updateRprofile", FALSE), meaning FALSE, but it can be set with an option or within a single call.
exact Logical. This function will automatically append the R version number to the libPaths to maintain separate R package libraries for each R version on the system. There are some cases where this behaviour is not desirable. Set exact to TRUE to override this automatic appending and use the exact, unaltered libPaths. Default is FALSE
verbose Numeric or logical indicating how verbose should the function be. If -1 or -2, then as little verbosity as possible. If 0 or FALSE, then minimal outputs; if 1 or TRUE, more outputs; 2 even more.
NOTE: in Require function, when verbose >= 2, the return object will have an attribute: attr(.., "Require") which has lots of information about the processes of the installs.

Details
The details of this code were modified from https://github.com/milesmcbain. A different approach, likely not approved by CRAN, that also works is here: https://stackoverflow.com/a/36873741/3890027.

Value
The main point of this function is to set .libPaths(), which will be changed as a side effect of this function. As when setting options, this will return the previous state of .libPaths() allowing the user to reset easily.

Examples
## Not run:
if (Require:::.runLongExamples()) {
  opts <- Require:::.setupExample()

  origDir <- setwd(tempdir())
  td <- tempdir()
  setLibPaths(td) # set a new R package library locally
  setLibPaths() # reset it to original
  setwd(origDir)

  # Using standAlone = FALSE means that newly installed packages
  # will be installed
  # in the new package library, but loading packages can come
  # from any of the ones listed in .libPaths()

  # will have 2 or more paths
  otherLib <- file.path(td, "newProjectLib")
  setLibPaths(otherLib, standAlone = FALSE)
  # Can restart R, and changes will stay

  # remove the custom .libPaths()
  setLibPaths() # reset to previous; remove from .Rprofile
  # because libPath arg is empty

  Require:::.cleanup(opts)
  unlink(otherLib, recursive = TRUE)
}
## End(Not run)

setLinuxBinaryRepo Setup for binary Linux repositories

Description
Enable use of binary package builds for Linux from the RStudio Package Manager repo. This will set the repos option, affecting the current R session. It will put this binaryLinux in the first position. If the getOption("repos") is NULL, it will put backupCRAN in second position.

Usage
setLinuxBinaryRepo(
  binaryLinux = "https://packagemanager.posit.co/",
  backupCRAN = srcPackageURLOnCRAN
)

Arguments
binaryLinux A CRAN repository serving binary Linux packages.
backupCRAN   Used if there is no CRAN repository set.

setup    Setup a project library, cache, options

Description

setup and setupOff are currently deprecated. These may be re-created in a future version. In their place, a user can simply put .libPaths(libs, include.site = FALSE) in their .Rprofile file, where libs is the directory where the packages should be installed, and should be a folder with the R version number, e.g., derived by using checkLibPaths(libs).

Usage

setup(
  newLibPaths,
  RPackageFolders,
  RPackageCache = getOptionRPackageCache(),
  standAlone = getOption("Require.standAlone", TRUE),
  verbose = getOption("Require.verbose")
)

setupOff(removePackages = FALSE, verbose = getOption("Require.verbose"))

Arguments

newLibPaths      Same as RPackageFolders. This is for more consistent naming with Require(..., libPaths = ...).

RPackageFolders  One or more folders where R packages are installed to and loaded from. In the case of more than one folder provided, installation will only happen in the first one.

RPackageCache    See ?RequireOptions.

standAlone       Logical. If TRUE, all packages will be installed to and loaded from the libPaths only. NOTE: If TRUE, THIS WILL CHANGE THE USER'S .libPaths(), similar to e.g., the checkpoint package. If FALSE, then libPaths will be prepended to .libPaths() during the Require call, resulting in shared packages, i.e., it will include the user's default package folder(s). This can create dramatically faster installs if the user already has a substantial number of the packages in their personal library. Default FALSE, to minimize package installing.

verbose          Numeric or logical indicating how verbose the function should be. If -1 or -2, then as little verbosity as possible. If 0 or FALSE, then minimal outputs; if 1 or TRUE, more outputs; 2, even more. NOTE: in the Require function, when verbose >= 2, the return object will have an attribute, attr(.., "Require"), which has lots of information about the processes of the installs.

removePackages   Deprecated.
Please remove packages manually from the .libPaths().

sourcePkgs    A list of R packages that should likely be installed from source, not binary

Description

The list of R packages that Require installs from source on Linux, even if getOption("repos") is a binary repository. This list can be updated by the user by modifying the options Require.spatialPkgs or Require.otherPkgs. Default "force source only packages" are visible with RequireOptions().

Usage

sourcePkgs(additional = NULL, spatialPkgs = NULL, otherPkgs = NULL)

Arguments

additional   Any other packages to be added to the other 2 argument vectors.
spatialPkgs  A character vector of package names that focus on spatial analyses.
otherPkgs    A character vector of package names that often require system-specific compilation.

Value

A sorted concatenation of the 3 input parameters.

tempdir2    Make a temporary (sub-)directory

Description

Create a temporary subdirectory in .RequireTempPath(), or a temporary file in that temporary subdirectory.

Usage

tempdir2(
  sub = "",
  tempdir = getOption("Require.tempPath", .RequireTempPath()),
  create = TRUE
)

Arguments

sub      Character string, length 1. Can be a result of file.path("smth", "smth2") for nested temporary subdirectories.
tempdir  Optional character string where the temporary dir should be placed. Defaults to .RequireTempPath().
create   Logical. Should the directory be created. Default TRUE.

See Also

tempfile2()

tempfile2    Make a temporary subfile in a temporary (sub-)directory

Description

Make a temporary subfile in a temporary (sub-)directory.

Usage

tempfile2(
  sub = "",
  tempdir = getOption("Require.tempPath", .RequireTempPath()),
  ...
)

Arguments

sub      Character string, length 1. Can be a result of file.path("smth", "smth2") for nested temporary subdirectories.
tempdir  Optional character string where the temporary dir should be placed. Defaults to .RequireTempPath().
...
Passed to tempfile, e.g., fileext.

See Also

tempdir2()

trimVersionNumber    Trim version number off a compound package name

Description

The resulting string(s) will have only the name (including the github.com repository, if it exists).

Usage

trimVersionNumber(pkgs)

Arguments

pkgs  A character string vector of packages, with or without GitHub path or versions.

See Also

extractPkgName()

Examples

trimVersionNumber("PredictiveEcology/Require (<=0.0.1)")
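As a quick sketch of how these helpers fit together (a hypothetical session; it assumes the Require package is attached, and the names shown are purely illustrative):

```
library(Require)

## A nested temporary directory under Require's temp path
td <- tempdir2("scratch")                     # created, because create = TRUE
f  <- tempfile2("scratch", fileext = ".rds")  # "..." is passed to tempfile()

## Strip a version specification from a compound package name
trimVersionNumber("PredictiveEcology/Require (<=0.0.1)")
## [1] "PredictiveEcology/Require"
```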
## Plantains (Plantago): life form

Plantains are predominantly rosette plants; that is, the main feature of their shoot system is the shortened internodes of the vegetative shoots. However, in plants belonging to the genus _Psyllium_, lateral shortened vegetative and elongated generative shoots develop in the leaf axils. Among plants of the genus _Plantago_, a stem with elongated internodes and alternate (in contrast to _Psyllium_) leaf arrangement occurs rather rarely: in some Pacific arborescent species of the section _Palaeopsyllium_, in the Mediterranean _P. lagopus_ L. and _P. amplexicaulis_ Cav., and also in Near Eastern and American representatives of the sections _Albicans_ and _Gnaphaloides_ Barn. These species are not encountered in our territory. In most plants of the genus _Plantago_, the axis of the rosette shoot grows monopodially for a long time, forming (with the help of contractile adventitious roots?) an unspecialized epigeogenous rhizome, which in many plantains includes the upper part of the main root, the hypocotyl (?), and the lower internodes of the shoot, while the generative shoots are initiated in the axils of the dead leaves of the current year (?). The shortened rosette shoots of most of the species studied overwinter with green leaves. Species of plantain differ in the degree of branching of their vegetative shoots. Thus, in _P. major_ branching of the rhizome is observed extremely rarely, since, as a rule, it is soon followed by disintegration into separate particulae (?). We observed specimens of _P. uliginosa_ with branching rhizomes in collections from the territory of Leningrad Oblast and the Komi Republic (LEU), and exclusively from wet habitats. _P. media_ and especially _P. lanceolata_ branch considerably more strongly, while _P. maritima_ subsp. _subpolaris_ (Andreev) Tzvel. (in contrast to _P. schrenkii_ C. Koch) often forms huge (up to 2 m²) clumps, arising, apparently, through branching and the subsequent disintegration of a single individual. Annual plantains (_P. tenuiflora_, _P. minuta_) practically do not branch.

Footnote 1: Sometimes also 3/8 (?).

The anatomical features of the wood structure in plantains with lignifying stems are rather uniform (?) and can apparently be used to clarify the position of the whole family within the system of flowering plants. In representatives of the genus _Psyllium_ the leaf arrangement is decussate. Other species of plantain, according to our data, may also differ in the arrangement of leaves on the shoot, in particular in the number of developed green leaves and in the phyllotaxis formula (2/5 in _P. major_; 1/3 in _P. media_, _P. maritima_ and _P. lanceolata_). The lower leaves that die off during the season in some of the species studied (for example, in _P. maxima_ Juss. ex Jacq. and in tetraploid plants
[@zendeskgarden/react-accordions](#zendeskgardenreact-accordions-)
===

This package includes components related to accordions in the [Garden Design System](https://zendeskgarden.github.io/).

[Installation](#installation)
---

```
npm install @zendeskgarden/react-accordions

# Peer Dependencies - Also Required
npm install react react-dom styled-components @zendeskgarden/react-theming
```

[Usage](#usage)
---

### [Accordion](#accordion)

```
import { ThemeProvider } from '@zendeskgarden/react-theming';
import { Accordion } from '@zendeskgarden/react-accordions';

/**
 * Place a `ThemeProvider` at the root of your React application
 */
<ThemeProvider>
  <Accordion level={3}>
    <Accordion.Section>
      <Accordion.Header>
        <Accordion.Label>Turnip greens yarrow</Accordion.Label>
      </Accordion.Header>
      <Accordion.Panel>
        Turnip greens yarrow ricebean rutabaga endive cauliflower sea lettuce
        kohlrabi amaranth water spinach avocado daikon napa cabbage asparagus
        winter purslane kale.
      </Accordion.Panel>
    </Accordion.Section>
    <Accordion.Section>
      <Accordion.Header>
        <Accordion.Label>Corn amaranth salsify</Accordion.Label>
      </Accordion.Header>
      <Accordion.Panel>
        Corn amaranth salsify bunya nuts nori azuki bean chickweed potato bell
        pepper artichoke. Nori grape silver beet broccoli kombu beet greens
        fava bean potato quandong celery.
      </Accordion.Panel>
    </Accordion.Section>
    <Accordion.Section>
      <Accordion.Header>
        <Accordion.Label>Celery quandong swiss</Accordion.Label>
      </Accordion.Header>
      <Accordion.Panel>
        Celery quandong swiss chard chicory earthnut pea potato. Salsify taro
        catsear garlic gram celery bitterleaf wattle seed collard greens nori.
      </Accordion.Panel>
    </Accordion.Section>
  </Accordion>
</ThemeProvider>;
```

### [Stepper](#stepper)

```
import { ThemeProvider } from '@zendeskgarden/react-theming';
import { Stepper } from '@zendeskgarden/react-accordions';

/**
 * Place a `ThemeProvider` at the root of your React application
 */
<ThemeProvider>
  <Stepper>
    <Stepper.Step>
      <Stepper.Label>Brussels</Stepper.Label>
      <Stepper.Content>
        Brussels sprout coriander water chestnut gourd swiss chard wakame
        kohlrabi radish artichoke.
      </Stepper.Content>
    </Stepper.Step>
    <Stepper.Step>
      <Stepper.Label>Beetroot</Stepper.Label>
      <Stepper.Content>
        Beetroot carrot watercress. Corn amaranth salsify bunya nuts nori
        azuki bean turnip greens.
      </Stepper.Content>
    </Stepper.Step>
    <Stepper.Step>
      <Stepper.Label>Turnip</Stepper.Label>
      <Stepper.Content>
        Turnip chicory salsify pea sprouts fava bean. Dandelion zucchini
        burdock yarrow chickpea.
      </Stepper.Content>
    </Stepper.Step>
  </Stepper>
</ThemeProvider>;
```

Readme
---

### Keywords

* accordions
* components
* garden
* react
* zendesk
contextlib2 21.6.0 documentation

contextlib2 — Updated utilities for context management[¶](#module-contextlib2)
===

This module provides backports of features in the latest version of the standard library's [`contextlib`](https://docs.python.org/3/library/contextlib.html#module-contextlib) module to earlier Python versions. It also serves as a real world proving ground for potential future enhancements to that module.

Like [`contextlib`](https://docs.python.org/3/library/contextlib.html#module-contextlib), this module provides utilities for common tasks involving the `with` and `async with` statements.

Additions Relative to the Standard Library[¶](#additions-relative-to-the-standard-library)
---

This module is primarily a backport of the Python 3.10 version of [`contextlib`](https://docs.python.org/3/library/contextlib.html#module-contextlib) to earlier releases. The async context management features require asynchronous generator support in the language runtime, so the oldest supported version is now Python 3.6 (contextlib2 0.6.0 and earlier support older Python versions by omitting all asynchronous features).

This module is also a proving ground for new features not yet part of the standard library. There are currently no such features in the module.

Finally, this module contains some deprecated APIs which never graduated to standard library inclusion. These interfaces are no longer documented, but may still be present in the code (emitting `DeprecationWarning` if used).
Using the Module[¶](#using-the-module)
===

API Reference[¶](#api-reference)
---

Functions and classes provided:

*class* `AbstractContextManager`[¶](#AbstractContextManager)

An [abstract base class](https://docs.python.org/3/glossary.html#term-abstract-base-class) for classes that implement [`object.__enter__()`](https://docs.python.org/3/reference/datamodel.html#object.__enter__) and [`object.__exit__()`](https://docs.python.org/3/reference/datamodel.html#object.__exit__). A default implementation for [`object.__enter__()`](https://docs.python.org/3/reference/datamodel.html#object.__enter__) is provided which returns `self` while [`object.__exit__()`](https://docs.python.org/3/reference/datamodel.html#object.__exit__) is an abstract method which by default returns `None`. See also the definition of [Context Manager Types](https://docs.python.org/3/library/stdtypes.html#typecontextmanager).

New in version 0.6.0: Part of the standard library in Python 3.6 and later

*class* `AbstractAsyncContextManager`[¶](#AbstractAsyncContextManager)

An [abstract base class](https://docs.python.org/3/glossary.html#term-abstract-base-class) for classes that implement [`object.__aenter__()`](https://docs.python.org/3/reference/datamodel.html#object.__aenter__) and [`object.__aexit__()`](https://docs.python.org/3/reference/datamodel.html#object.__aexit__). A default implementation for [`object.__aenter__()`](https://docs.python.org/3/reference/datamodel.html#object.__aenter__) is provided which returns `self` while [`object.__aexit__()`](https://docs.python.org/3/reference/datamodel.html#object.__aexit__) is an abstract method which by default returns `None`. See also the definition of [Asynchronous Context Managers](https://docs.python.org/3/reference/datamodel.html#async-context-managers).
New in version 21.6.0: Part of the standard library in Python 3.7 and later

`@``contextmanager`[¶](#contextmanager)

This function is a [decorator](https://docs.python.org/3/glossary.html#term-decorator) that can be used to define a factory function for [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) statement context managers, without needing to create a class or separate `__enter__()` and `__exit__()` methods.

While many objects natively support use in with statements, sometimes a resource needs to be managed that isn't a context manager in its own right, and doesn't implement a `close()` method for use with `contextlib.closing`.

An abstract example would be the following to ensure correct resource management:

```
from contextlib import contextmanager

@contextmanager
def managed_resource(*args, **kwds):
    # Code to acquire resource, e.g.:
    resource = acquire_resource(*args, **kwds)
    try:
        yield resource
    finally:
        # Code to release resource, e.g.:
        release_resource(resource)

>>> with managed_resource(timeout=3600) as resource:
...     # Resource is released at the end of this block,
...     # even if code in the block raises an exception
```

The function being decorated must return a [generator](https://docs.python.org/3/glossary.html#term-generator)-iterator when called. This iterator must yield exactly one value, which will be bound to the targets in the [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) statement's `as` clause, if any.

At the point where the generator yields, the block nested in the [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) statement is executed. The generator is then resumed after the block is exited. If an unhandled exception occurs in the block, it is reraised inside the generator at the point where the yield occurred.
Thus, you can use a [`try`](https://docs.python.org/3/reference/compound_stmts.html#try)…[`except`](https://docs.python.org/3/reference/compound_stmts.html#except)…[`finally`](https://docs.python.org/3/reference/compound_stmts.html#finally) statement to trap the error (if any), or ensure that some cleanup takes place. If an exception is trapped merely in order to log it or to perform some action (rather than to suppress it entirely), the generator must reraise that exception. Otherwise the generator context manager will indicate to the `with` statement that the exception has been handled, and execution will resume with the statement immediately following the `with` statement.

[`contextmanager()`](#contextmanager) uses [`ContextDecorator`](#ContextDecorator) so the context managers it creates can be used as decorators as well as in [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) statements. When used as a decorator, a new generator instance is implicitly created on each function call (this allows the otherwise "one-shot" context managers created by [`contextmanager()`](#contextmanager) to meet the requirement that context managers support multiple invocations in order to be used as decorators).

`@``asynccontextmanager`[¶](#asynccontextmanager)

Similar to [`contextmanager()`](https://docs.python.org/3/library/contextlib.html#contextlib.contextmanager), but creates an [asynchronous context manager](https://docs.python.org/3/reference/datamodel.html#async-context-managers).

This function is a [decorator](https://docs.python.org/3/glossary.html#term-decorator) that can be used to define a factory function for [`async with`](https://docs.python.org/3/reference/compound_stmts.html#async-with) statement asynchronous context managers, without needing to create a class or separate `__aenter__()` and `__aexit__()` methods. It must be applied to an [asynchronous generator](https://docs.python.org/3/glossary.html#term-asynchronous-generator) function.
A simple example:

```
from contextlib import asynccontextmanager

@asynccontextmanager
async def get_connection():
    conn = await acquire_db_connection()
    try:
        yield conn
    finally:
        await release_db_connection(conn)

async def get_all_users():
    async with get_connection() as conn:
        return conn.query('SELECT ...')
```

New in version 21.6.0: Part of the standard library in Python 3.7 and later, enhanced in Python 3.10 and later to allow created async context managers to be used as async function decorators.

Context managers defined with [`asynccontextmanager()`](#asynccontextmanager) can be used either as decorators or with [`async with`](https://docs.python.org/3/reference/compound_stmts.html#async-with) statements:

```
import time
from contextlib import asynccontextmanager

@asynccontextmanager
async def timeit():
    now = time.monotonic()
    try:
        yield
    finally:
        print(f'it took {time.monotonic() - now}s to run')

@timeit()
async def main():
    # ... async code ...
```

When used as a decorator, a new generator instance is implicitly created on each function call. This allows the otherwise "one-shot" context managers created by [`asynccontextmanager()`](#asynccontextmanager) to meet the requirement that context managers support multiple invocations in order to be used as decorators.

`closing`(*thing*)[¶](#closing)

Return a context manager that closes *thing* upon completion of the block. This is basically equivalent to:

```
from contextlib import contextmanager

@contextmanager
def closing(thing):
    try:
        yield thing
    finally:
        thing.close()
```

And lets you write code like this:

```
from contextlib import closing
from urllib.request import urlopen

with closing(urlopen('http://www.python.org')) as page:
    for line in page:
        print(line)
```

without needing to explicitly close `page`. Even if an error occurs, `page.close()` will be called when the [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) block is exited.
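A tiny runnable sketch of that guarantee (the `Resource` class here is a hypothetical stand-in for any object with a `close()` method but no `__enter__`/`__exit__`):

```python
from contextlib import closing

class Resource:
    # Hypothetical object with a close() method, not a context manager
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

r = Resource()
with closing(r) as res:
    assert res is r   # closing() passes the object through as the target
print(r.closed)       # -> True: close() ran when the block exited
```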
*class* `aclosing`(*thing*)[¶](#aclosing)

Return an async context manager that calls the `aclose()` method of *thing* upon completion of the block. This is basically equivalent to:

```
from contextlib import asynccontextmanager

@asynccontextmanager
async def aclosing(thing):
    try:
        yield thing
    finally:
        await thing.aclose()
```

Significantly, `aclosing()` supports deterministic cleanup of async generators when they happen to exit early by [`break`](https://docs.python.org/3/reference/simple_stmts.html#break) or an exception. For example:

```
from contextlib import aclosing

async with aclosing(my_generator()) as values:
    async for value in values:
        if value == 42:
            break
```

This pattern ensures that the generator's async exit code is executed in the same context as its iterations (so that exceptions and context variables work as expected, and the exit code isn't run after the lifetime of some task it depends on).

New in version 21.6.0: Part of the standard library in Python 3.10 and later

`nullcontext`(*enter_result=None*)[¶](#nullcontext)

Return a context manager that returns *enter_result* from `__enter__`, but otherwise does nothing. It is intended to be used as a stand-in for an optional context manager, for example:

```
def myfunction(arg, ignore_exceptions=False):
    if ignore_exceptions:
        # Use suppress to ignore all exceptions.
        cm = contextlib.suppress(Exception)
    else:
        # Do not ignore any exceptions, cm has no effect.
        cm = contextlib.nullcontext()
    with cm:
        # Do something
```

An example using *enter_result*:

```
def process_file(file_or_path):
    if isinstance(file_or_path, str):
        # If string, open file
        cm = open(file_or_path)
    else:
        # Caller is responsible for closing file
        cm = nullcontext(file_or_path)

    with cm as file:
        # Perform processing on the file
```

It can also be used as a stand-in for [asynchronous context managers](https://docs.python.org/3/reference/datamodel.html#async-context-managers):

```
async def send_http(session=None):
    if not session:
        # If no http session, create it with aiohttp
        cm = aiohttp.ClientSession()
    else:
        # Caller is responsible for closing the session
        cm = nullcontext(session)

    async with cm as session:
        # Send http requests with session
```

New in version 0.6.0: Part of the standard library in Python 3.7 and later

Changed in version 21.6.0: Updated to Python 3.10 version with [asynchronous context manager](https://docs.python.org/3/glossary.html#term-asynchronous-context-manager) support

`suppress`(**exceptions*)[¶](#suppress)

Return a context manager that suppresses any of the specified exceptions if they occur in the body of a `with` statement and then resumes execution with the first statement following the end of the `with` statement.

As with any other mechanism that completely suppresses exceptions, this context manager should be used only to cover very specific errors where silently continuing with program execution is known to be the right thing to do.

For example:

```
from contextlib import suppress

with suppress(FileNotFoundError):
    os.remove('somefile.tmp')

with suppress(FileNotFoundError):
    os.remove('someotherfile.tmp')
```

This code is equivalent to:

```
try:
    os.remove('somefile.tmp')
except FileNotFoundError:
    pass

try:
    os.remove('someotherfile.tmp')
except FileNotFoundError:
    pass
```

This context manager is [reentrant](#reentrant-cms).
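Because `suppress` instances are reentrant, a single instance can be reused, even nested inside itself; a minimal sketch (the names are illustrative):

```python
from contextlib import suppress

ignore = suppress(ZeroDivisionError)   # one reusable, reentrant instance
results = []
with ignore:
    with ignore:                       # nested use of the same instance
        1 / 0
        results.append("inner")        # skipped: the division raised
    results.append("after inner")      # runs: inner `with` suppressed it
print(results)  # -> ['after inner']
```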
New in version 0.5: Part of the standard library in Python 3.4 and later

`redirect_stdout`(*new_target*)[¶](#redirect_stdout)

Context manager for temporarily redirecting [`sys.stdout`](https://docs.python.org/3/library/sys.html#sys.stdout) to another file or file-like object.

This tool adds flexibility to existing functions or classes whose output is hardwired to stdout.

For example, the output of [`help()`](https://docs.python.org/3/library/functions.html#help) normally is sent to *sys.stdout*. You can capture that output in a string by redirecting the output to an [`io.StringIO`](https://docs.python.org/3/library/io.html#io.StringIO) object. The replacement stream is returned from the `__enter__` method and so is available as the target of the [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) statement:

```
with redirect_stdout(io.StringIO()) as f:
    help(pow)
s = f.getvalue()
```

To send the output of [`help()`](https://docs.python.org/3/library/functions.html#help) to a file on disk, redirect the output to a regular file:

```
with open('help.txt', 'w') as f:
    with redirect_stdout(f):
        help(pow)
```

To send the output of [`help()`](https://docs.python.org/3/library/functions.html#help) to *sys.stderr*:

```
with redirect_stdout(sys.stderr):
    help(pow)
```

Note that the global side effect on [`sys.stdout`](https://docs.python.org/3/library/sys.html#sys.stdout) means that this context manager is not suitable for use in library code and most threaded applications. It also has no effect on the output of subprocesses. However, it is still a useful approach for many utility scripts.

This context manager is [reentrant](#reentrant-cms).
New in version 0.5: Part of the standard library in Python 3.4 and later

`redirect_stderr`(*new_target*)[¶](#redirect_stderr)

Similar to [`redirect_stdout()`](https://docs.python.org/3/library/contextlib.html#contextlib.redirect_stdout) but redirecting [`sys.stderr`](https://docs.python.org/3/library/sys.html#sys.stderr) to another file or file-like object.

This context manager is [reentrant](#reentrant-cms).

New in version 0.5: Part of the standard library in Python 3.5 and later

*class* `ContextDecorator`[¶](#ContextDecorator)

A base class that enables a context manager to also be used as a decorator.

Context managers inheriting from `ContextDecorator` have to implement `__enter__` and `__exit__` as normal. `__exit__` retains its optional exception handling even when used as a decorator.

`ContextDecorator` is used by [`contextmanager()`](#contextmanager), so you get this functionality automatically.

Example of `ContextDecorator`:

```
from contextlib import ContextDecorator

class mycontext(ContextDecorator):
    def __enter__(self):
        print('Starting')
        return self

    def __exit__(self, *exc):
        print('Finishing')
        return False

>>> @mycontext()
... def function():
...     print('The bit in the middle')
...
>>> function()
Starting
The bit in the middle
Finishing

>>> with mycontext():
...     print('The bit in the middle')
...
Starting
The bit in the middle
Finishing
```

This change is just syntactic sugar for any construct of the following form:

```
def f():
    with cm():
        # Do stuff
```

`ContextDecorator` lets you instead write:

```
@cm()
def f():
    # Do stuff
```

It makes it clear that the `cm` applies to the whole function, rather than just a piece of it (and saving an indentation level is nice, too).
Existing context managers that already have a base class can be extended by using `ContextDecorator` as a mixin class:

```
from contextlib import ContextDecorator

class mycontext(ContextBaseClass, ContextDecorator):
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False
```

Note

As the decorated function must be able to be called multiple times, the underlying context manager must support use in multiple [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) statements. If this is not the case, then the original construct with the explicit `with` statement inside the function should be used.

*class* `AsyncContextDecorator`[¶](#AsyncContextDecorator)

Similar to [`ContextDecorator`](#ContextDecorator) but only for asynchronous functions.

Example of `AsyncContextDecorator`:

```
from asyncio import run
from contextlib import AsyncContextDecorator

class mycontext(AsyncContextDecorator):
    async def __aenter__(self):
        print('Starting')
        return self

    async def __aexit__(self, *exc):
        print('Finishing')
        return False

>>> @mycontext()
... async def function():
...     print('The bit in the middle')
...
>>> run(function())
Starting
The bit in the middle
Finishing

>>> async def function():
...     async with mycontext():
...         print('The bit in the middle')
...
>>> run(function())
Starting
The bit in the middle
Finishing
```

New in version 21.6.0: Part of the standard library in Python 3.10 and later

*class* `ExitStack`[¶](#ExitStack)

A context manager that is designed to make it easy to programmatically combine other context managers and cleanup functions, especially those that are optional or otherwise driven by input data.
For example, a set of files may easily be handled in a single with statement as follows:

```
with ExitStack() as stack:
    files = [stack.enter_context(open(fname)) for fname in filenames]
    # All opened files will automatically be closed at the end of
    # the with statement, even if attempts to open files later
    # in the list raise an exception
```

Each instance maintains a stack of registered callbacks that are called in reverse order when the instance is closed (either explicitly or implicitly at the end of a [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) statement). Note that callbacks are *not* invoked implicitly when the context stack instance is garbage collected.

This stack model is used so that context managers that acquire their resources in their `__init__` method (such as file objects) can be handled correctly.

Since registered callbacks are invoked in the reverse order of registration, this ends up behaving as if multiple nested [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) statements had been used with the registered set of callbacks. This even extends to exception handling - if an inner callback suppresses or replaces an exception, then outer callbacks will be passed arguments based on that updated state.

This is a relatively low level API that takes care of the details of correctly unwinding the stack of exit callbacks. It provides a suitable foundation for higher level context managers that manipulate the exit stack in application specific ways.

New in version 0.4: Part of the standard library in Python 3.3 and later

`enter_context`(*cm*)[¶](#ExitStack.enter_context)

Enters a new context manager and adds its `__exit__()` method to the callback stack. The return value is the result of the context manager's own `__enter__()` method.
These context managers may suppress exceptions just as they normally would if used directly as part of a [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) statement. `push`(*exit*)[¶](#ExitStack.push) Adds a context manager’s `__exit__()` method to the callback stack. As `__enter__` is *not* invoked, this method can be used to cover part of an `__enter__()` implementation with a context manager’s own `__exit__()` method. If passed an object that is not a context manager, this method assumes it is a callback with the same signature as a context manager’s `__exit__()` method and adds it directly to the callback stack. By returning true values, these callbacks can suppress exceptions the same way context manager `__exit__()` methods can. The passed in object is returned from the function, allowing this method to be used as a function decorator. `callback`(*callback*, */*, **args*, ***kwds*)[¶](#ExitStack.callback) Accepts an arbitrary callback function and arguments and adds it to the callback stack. Unlike the other methods, callbacks added this way cannot suppress exceptions (as they are never passed the exception details). The passed in callback is returned from the function, allowing this method to be used as a function decorator. `pop_all`()[¶](#ExitStack.pop_all) Transfers the callback stack to a fresh [`ExitStack`](#ExitStack) instance and returns it. No callbacks are invoked by this operation - instead, they will now be invoked when the new stack is closed (either explicitly or implicitly at the end of a [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) statement). For example, a group of files can be opened as an “all or nothing” operation as follows: ``` with ExitStack() as stack: files = [stack.enter_context(open(fname)) for fname in filenames] # Hold onto the close method, but don't call it yet. 
close_files = stack.pop_all().close # If opening any file fails, all previously opened files will be # closed automatically. If all files are opened successfully, # they will remain open even after the with statement ends. # close_files() can then be invoked explicitly to close them all. ``` `close`()[¶](#ExitStack.close) Immediately unwinds the callback stack, invoking callbacks in the reverse order of registration. For any context managers and exit callbacks registered, the arguments passed in will indicate that no exception occurred. *class* `AsyncExitStack`[¶](#AsyncExitStack) An [asynchronous context manager](https://docs.python.org/3/reference/datamodel.html#async-context-managers), similar to [`ExitStack`](#ExitStack), that supports combining both synchronous and asynchronous context managers, as well as having coroutines for cleanup logic. The `close()` method is not implemented, [`aclose()`](#AsyncExitStack.aclose) must be used instead. `enter_async_context`(*cm*)[¶](#AsyncExitStack.enter_async_context) Similar to `enter_context()` but expects an asynchronous context manager. `push_async_exit`(*exit*)[¶](#AsyncExitStack.push_async_exit) Similar to `push()` but expects either an asynchronous context manager or a coroutine function. `push_async_callback`(*callback*, */*, **args*, ***kwds*)[¶](#AsyncExitStack.push_async_callback) Similar to `callback()` but expects a coroutine function. `aclose`()[¶](#AsyncExitStack.aclose) Similar to `close()` but properly handles awaitables. Continuing the example for [`asynccontextmanager()`](#asynccontextmanager): ``` async with AsyncExitStack() as stack: connections = [await stack.enter_async_context(get_connection()) for i in range(5)] # All opened connections will automatically be released at the end of # the async with statement, even if attempts to open a connection # later in the list raise an exception. 
``` New in version 21.6.0: Part of the standard library in Python 3.7 and later Examples and Recipes[¶](#examples-and-recipes) --- This section describes some examples and recipes for making effective use of the tools provided by [`contextlib`](https://docs.python.org/3/library/contextlib.html#module-contextlib). ### Supporting a variable number of context managers[¶](#supporting-a-variable-number-of-context-managers) The primary use case for [`ExitStack`](#ExitStack) is the one given in the class documentation: supporting a variable number of context managers and other cleanup operations in a single [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) statement. The variability may come from the number of context managers needed being driven by user input (such as opening a user specified collection of files), or from some of the context managers being optional: ``` with ExitStack() as stack: for resource in resources: stack.enter_context(resource) if need_special_resource(): special = acquire_special_resource() stack.callback(release_special_resource, special) # Perform operations that use the acquired resources ``` As shown, [`ExitStack`](#ExitStack) also makes it quite easy to use [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) statements to manage arbitrary resources that don’t natively support the context management protocol. ### Catching exceptions from `__enter__` methods[¶](#catching-exceptions-from-enter-methods) It is occasionally desirable to catch exceptions from an `__enter__` method implementation, *without* inadvertently catching exceptions from the [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) statement body or the context manager’s `__exit__` method. 
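To see why this matters, consider a toy context manager whose setup step can fail (a hypothetical sketch; `fragile` is not part of `contextlib`). Wrapping the whole `with` statement in `try`/`except` also swallows exceptions raised by the statement body:

```python
from contextlib import contextmanager

@contextmanager
def fragile(fail_on_enter):
    # Hypothetical context manager whose __enter__ step may fail.
    if fail_on_enter:
        raise OSError("failed during __enter__")
    yield "resource"

# Naive approach: the handler intended for setup failures
# also catches exceptions raised inside the with block.
caught = None
try:
    with fragile(False):
        raise ValueError("error from the body")
except (OSError, ValueError) as exc:
    caught = type(exc).__name__
```

Here `caught` ends up as `'ValueError'` even though the handler was only meant for setup failures.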
By using [`ExitStack`](#ExitStack) the steps in the context management protocol can be separated slightly in order to allow this: ``` stack = ExitStack() try: x = stack.enter_context(cm) except Exception: # handle __enter__ exception else: with stack: # Handle normal case ``` Actually needing to do this is likely to indicate that the underlying API should be providing a direct resource management interface for use with [`try`](https://docs.python.org/3/reference/compound_stmts.html#try)/[`except`](https://docs.python.org/3/reference/compound_stmts.html#except)/[`finally`](https://docs.python.org/3/reference/compound_stmts.html#finally) statements, but not all APIs are well designed in that regard. When a context manager is the only resource management API provided, then [`ExitStack`](#ExitStack) can make it easier to handle various situations that can’t be handled directly in a [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) statement. ### Cleaning up in an `__enter__` implementation[¶](#cleaning-up-in-an-enter-implementation) As noted in the documentation of [`ExitStack.push()`](#ExitStack.push), this method can be useful in cleaning up an already allocated resource if later steps in the `__enter__()` implementation fail. 
Here’s an example of doing this for a context manager that accepts resource acquisition and release functions, along with an optional validation function, and maps them to the context management protocol: ``` from contextlib import contextmanager, AbstractContextManager, ExitStack class ResourceManager(AbstractContextManager): def __init__(self, acquire_resource, release_resource, check_resource_ok=None): self.acquire_resource = acquire_resource self.release_resource = release_resource if check_resource_ok is None: def check_resource_ok(resource): return True self.check_resource_ok = check_resource_ok @contextmanager def _cleanup_on_error(self): with ExitStack() as stack: stack.push(self) yield # The validation check passed and didn't raise an exception # Accordingly, we want to keep the resource, and pass it # back to our caller stack.pop_all() def __enter__(self): resource = self.acquire_resource() with self._cleanup_on_error(): if not self.check_resource_ok(resource): msg = "Failed validation for {!r}" raise RuntimeError(msg.format(resource)) return resource def __exit__(self, *exc_details): # We don't need to duplicate any of our resource release logic self.release_resource() ``` ### Replacing any use of `try-finally` and flag variables[¶](#replacing-any-use-of-try-finally-and-flag-variables) A pattern you will sometimes see is a `try-finally` statement with a flag variable to indicate whether or not the body of the `finally` clause should be executed. In its simplest form (that can’t already be handled just by using an `except` clause instead), it looks something like this: ``` cleanup_needed = True try: result = perform_operation() if result: cleanup_needed = False finally: if cleanup_needed: cleanup_resources() ``` As with any `try` statement based code, this can cause problems for development and review, because the setup code and the cleanup code can end up being separated by arbitrarily long sections of code. 
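To make the drawback concrete, here is a runnable rendering of the flag-variable idiom above; `perform_operation`, `cleanup_resources` and the `log` list are stand-in names invented for illustration:

```python
# Runnable rendering of the flag-variable pattern described above.
log = []

def perform_operation():
    log.append("operation")
    return False  # pretend the operation failed, so cleanup is needed

def cleanup_resources():
    log.append("cleanup")

cleanup_needed = True
try:
    result = perform_operation()
    if result:
        cleanup_needed = False
finally:
    if cleanup_needed:
        cleanup_resources()
```

Since `perform_operation` reports failure here, the `finally` clause runs the cleanup.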
[`ExitStack`](#ExitStack) makes it possible to instead register a callback for execution at the end of a `with` statement, and then later decide to skip executing that callback: ``` from contextlib import ExitStack with ExitStack() as stack: stack.callback(cleanup_resources) result = perform_operation() if result: stack.pop_all() ``` This allows the intended cleanup behaviour to be made explicit up front, rather than requiring a separate flag variable. If a particular application uses this pattern a lot, it can be simplified even further by means of a small helper class: ``` from contextlib import ExitStack class Callback(ExitStack): def __init__(self, callback, /, *args, **kwds): super().__init__() self.callback(callback, *args, **kwds) def cancel(self): self.pop_all() with Callback(cleanup_resources) as cb: result = perform_operation() if result: cb.cancel() ``` If the resource cleanup isn't already neatly bundled into a standalone function, then it is still possible to use the decorator form of [`ExitStack.callback()`](#ExitStack.callback) to declare the resource cleanup in advance: ``` from contextlib import ExitStack with ExitStack() as stack: @stack.callback def cleanup_resources(): ... result = perform_operation() if result: stack.pop_all() ``` Due to the way the decorator protocol works, a callback function declared this way cannot take any parameters. Instead, any resources to be released must be accessed as closure variables. ### Using a context manager as a function decorator[¶](#using-a-context-manager-as-a-function-decorator) [`ContextDecorator`](#ContextDecorator) makes it possible to use a context manager in both an ordinary `with` statement and also as a function decorator. For example, it is sometimes useful to wrap functions or groups of statements with a logger that can track the time of entry and time of exit.
Rather than writing both a function decorator and a context manager for the task, inheriting from [`ContextDecorator`](#ContextDecorator) provides both capabilities in a single definition: ``` from contextlib import ContextDecorator import logging logging.basicConfig(level=logging.INFO) class track_entry_and_exit(ContextDecorator): def __init__(self, name): self.name = name def __enter__(self): logging.info('Entering: %s', self.name) def __exit__(self, exc_type, exc, exc_tb): logging.info('Exiting: %s', self.name) ``` Instances of this class can be used as both a context manager: ``` with track_entry_and_exit('widget loader'): print('Some time consuming activity goes here') load_widget() ``` And also as a function decorator: ``` @track_entry_and_exit('widget loader') def activity(): print('Some time consuming activity goes here') load_widget() ``` Note that there is one additional limitation when using context managers as function decorators: there’s no way to access the return value of `__enter__()`. If that value is needed, then it is still necessary to use an explicit `with` statement. See also [**PEP 343**](https://www.python.org/dev/peps/pep-0343) - The “with” statement The specification, background, and examples for the Python [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) statement. Single use, reusable and reentrant context managers[¶](#single-use-reusable-and-reentrant-context-managers) --- Most context managers are written in a way that means they can only be used effectively in a [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) statement once. These single use context managers must be created afresh each time they’re used - attempting to use them a second time will trigger an exception or otherwise not work correctly. 
This common limitation means that it is generally advisable to create context managers directly in the header of the [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) statement where they are used (as shown in all of the usage examples above). Files are an example of effectively single use context managers, since the first [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) statement will close the file, preventing any further IO operations using that file object. Context managers created using [`contextmanager()`](#contextmanager) are also single use context managers, and will complain about the underlying generator failing to yield if an attempt is made to use them a second time: ``` >>> from contextlib import contextmanager >>> @contextmanager ... def singleuse(): ... print("Before") ... yield ... print("After") ... >>> cm = singleuse() >>> with cm: ... pass ... Before After >>> with cm: ... pass ... Traceback (most recent call last): ... RuntimeError: generator didn't yield ``` ### Reentrant context managers[¶](#reentrant-context-managers) More sophisticated context managers may be “reentrant”. These context managers can not only be used in multiple [`with`](https://docs.python.org/3/reference/compound_stmts.html#with) statements, but may also be used *inside* a `with` statement that is already using the same context manager. [`threading.RLock`](https://docs.python.org/3/library/threading.html#threading.RLock) is an example of a reentrant context manager, as are [`suppress()`](#suppress) and [`redirect_stdout()`](#redirect_stdout). Here’s a very simple example of reentrant use: ``` >>> from contextlib import redirect_stdout >>> from io import StringIO >>> stream = StringIO() >>> write_to_stream = redirect_stdout(stream) >>> with write_to_stream: ... print("This is written to the stream rather than stdout") ... with write_to_stream: ... print("This is also written to the stream") ... 
>>> print("This is written directly to stdout") This is written directly to stdout >>> print(stream.getvalue()) This is written to the stream rather than stdout This is also written to the stream ``` Real world examples of reentrancy are more likely to involve multiple functions calling each other and hence be far more complicated than this example. Note also that being reentrant is *not* the same thing as being thread safe. [`redirect_stdout()`](#redirect_stdout), for example, is definitely not thread safe, as it makes a global modification to the system state by binding [`sys.stdout`](https://docs.python.org/3/library/sys.html#sys.stdout) to a different stream. ### Reusable context managers[¶](#reusable-context-managers) Distinct from both single use and reentrant context managers are “reusable” context managers (or, to be completely explicit, “reusable, but not reentrant” context managers, since reentrant context managers are also reusable). These context managers support being used multiple times, but will fail (or otherwise not work correctly) if the specific context manager instance has already been used in a containing with statement. [`threading.Lock`](https://docs.python.org/3/library/threading.html#threading.Lock) is an example of a reusable, but not reentrant, context manager (for a reentrant lock, it is necessary to use [`threading.RLock`](https://docs.python.org/3/library/threading.html#threading.RLock) instead). Another example of a reusable, but not reentrant, context manager is [`ExitStack`](#ExitStack), as it invokes *all* currently registered callbacks when leaving any with statement, regardless of where those callbacks were added: ``` >>> from contextlib import ExitStack >>> stack = ExitStack() >>> with stack: ... stack.callback(print, "Callback: from first context") ... print("Leaving first context") ... Leaving first context Callback: from first context >>> with stack: ... stack.callback(print, "Callback: from second context") ... 
print("Leaving second context") ... Leaving second context Callback: from second context >>> with stack: ... stack.callback(print, "Callback: from outer context") ... with stack: ... stack.callback(print, "Callback: from inner context") ... print("Leaving inner context") ... print("Leaving outer context") ... Leaving inner context Callback: from inner context Callback: from outer context Leaving outer context ``` As the output from the example shows, reusing a single stack object across multiple with statements works correctly, but attempting to nest them will cause the stack to be cleared at the end of the innermost with statement, which is unlikely to be desirable behaviour. Using separate [`ExitStack`](#ExitStack) instances instead of reusing a single instance avoids that problem: ``` >>> from contextlib import ExitStack >>> with ExitStack() as outer_stack: ... outer_stack.callback(print, "Callback: from outer context") ... with ExitStack() as inner_stack: ... inner_stack.callback(print, "Callback: from inner context") ... print("Leaving inner context") ... print("Leaving outer context") ... Leaving inner context Callback: from inner context Leaving outer context Callback: from outer context ``` Obtaining the Module[¶](#obtaining-the-module) === This module can be installed directly from the [Python Package Index](http://pypi.python.org) with [pip](http://www.pip-installer.org): ``` pip install contextlib2 ``` Alternatively, you can download and unpack it manually from the [contextlib2 PyPI page](http://pypi.python.org/pypi/contextlib2). There are no operating system or distribution specific versions of this module - it is a pure Python module that should work on all platforms. Supported Python versions are currently 3.6+. Development and Support[¶](#development-and-support) --- contextlib2 is developed and maintained on [GitHub](https://github.com/jazzband/contextlib2). 
Problems and suggested improvements can be posted to the [issue tracker](https://github.com/jazzband/contextlib2/issues). Release History[¶](#release-history) --- ### 21.6.0 (2021-06-27)[¶](#id1) * License update: due to the inclusion of type hints from the `typeshed` project, the `contextlib2` project is now under a combination of the Python Software License (existing license) and the Apache License 2.0 (`typeshed` license) * Switched to calendar based versioning using a “year”-“month”-“serial” scheme, rather than continuing with pre-1.0 semantic versioning * Due to the inclusion of asynchronous features from Python 3.7+, the minimum supported Python version is now Python 3.6 ([#29](https://github.com/jazzband/contextlib2/issues/29)) * Synchronised with the Python 3.10 version of contextlib ([#12](https://github.com/jazzband/contextlib2/issues/12)), making the following new features available on Python 3.6+: + `asynccontextmanager` (added in Python 3.7, enhanced in Python 3.10) + `aclosing` (added in Python 3.10) + `AbstractAsyncContextManager` (added in Python 3.7) + `AsyncContextDecorator` (added in Python 3.10) + `AsyncExitStack` (added in Python 3.7) + async support in `nullcontext` (Python 3.10) * `contextlib2` now includes an adapted copy of the `contextlib` type hints from `typeshed` (the adaptation removes the Python version dependencies from the API definition) ([#33](https://github.com/jazzband/contextlib2/issues/33)) * To incorporate the type hints stub file and the `py.typed` marker file, `contextlib2` is now installed as a package rather than as a module * Updates to the default compatibility testing matrix: + Added: CPython 3.9, CPython 3.10 + Dropped: CPython 2.7, CPython 3.5, PyPy2 ### 0.6.0.post1 (2019-10-10)[¶](#post1-2019-10-10) * Issue [#24](https://github.com/jazzband/contextlib2/issues/24): Correctly update NEWS.rst for the 0.6.0 release.
### 0.6.0 (2019-09-21)[¶](#id2) * Issue [#16](https://github.com/jazzband/contextlib2/issues/16): Backport AbstractContextManager from Python 3.6 and nullcontext from Python 3.7 (patch by <NAME>) ### 0.5.5 (2017-04-25)[¶](#id3) * Issue [#13](https://github.com/jazzband/contextlib2/issues/13): `setup.py` now falls back to plain `distutils` if `setuptools` is not available (patch by <NAME>) * Updates to the default compatibility testing matrix: + Added: PyPy3, CPython 3.6 (maintenance), CPython 3.7 (development) + Dropped: CPython 3.3 ### 0.5.4 (2016-07-31)[¶](#id4) * Thanks to the welcome efforts of <NAME>, contextlib2 is now a [Jazzband](<https://jazzband.co/>) project! This means that I (<NAME>) am no longer a single point of failure for backports of future contextlib updates to earlier Python versions. * Issue [#7](https://github.com/jazzband/contextlib2/issues/7): Backported fix for CPython issue [#27122](http://bugs.python.org/issue27122), preventing a potential infinite loop on Python 3.5 when handling `RuntimeError` (CPython updates by <NAME> & <NAME>) ### 0.5.3 (2016-05-02)[¶](#id5) * `ExitStack` now correctly handles context managers implemented as old-style classes in Python 2.x (such as `codecs.StreamReader` and `codecs.StreamWriter`) * `setup.py` has been migrated to setuptools and configured to emit a universal wheel file by default ### 0.5.2 (2016-05-02)[¶](#id6) * development migrated from BitBucket to GitHub * `redirect_stream`, `redirect_stdout`, `redirect_stderr` and `suppress` now explicitly inherit from `object`, ensuring compatibility with `ExitStack` when run under Python 2.x (patch contributed by <NAME>). 
* `MANIFEST.in` is now included in the published sdist, ensuring the archive can be precisely recreated even without access to the original source repo (patch contributed by <NAME>) ### 0.5.1 (2016-01-13)[¶](#id7) * Python 2.6 compatibility restored (patch contributed by <NAME>) * README converted back to reStructuredText formatting ### 0.5.0 (2016-01-12)[¶](#id8) * Updated to include all features from the Python 3.4 and 3.5 releases of contextlib (also includes some `ExitStack` enhancements made following the integration into the standard library for Python 3.3) * The legacy `ContextStack` and `ContextDecorator.refresh_cm` APIs are no longer documented and emit `DeprecationWarning` when used * Python 2.6, 3.2 and 3.3 have been dropped from compatibility testing * tox is now supported for local version compatibility testing (patch by <NAME>) ### 0.4.0 (2012-05-05)[¶](#id9) * (BitBucket) Issue #8: Replace ContextStack with ExitStack (old ContextStack API retained for backwards compatibility) * Fall back to unittest2 if unittest is missing required functionality ### 0.3.1 (2012-01-17)[¶](#id10) * (BitBucket) Issue #7: Add MANIFEST.in so PyPI package contains all relevant files (patch contributed by <NAME>) ### 0.3 (2012-01-04)[¶](#id11) * (BitBucket) Issue #5: ContextStack.register no longer pointlessly returns the wrapped function * (BitBucket) Issue #2: Add examples and recipes section to docs * (BitBucket) Issue #3: ContextStack.register_exit() now accepts objects with __exit__ attributes in addition to accepting exit callbacks directly * (BitBucket) Issue #1: Add ContextStack.preserve() to move all registered callbacks to a new ContextStack object * Wrapped callbacks now expose __wrapped__ (for direct callbacks) or __self__ (for context manager methods) attributes to aid in introspection * Moved version number to a VERSION.txt file (read by both docs and setup.py) * Added NEWS.rst (and incorporated into documentation) ### 0.2 (2011-12-15)[¶](#id12) * Renamed
CleanupManager to ContextStack (hopefully before anyone started using the module for anything, since I didn’t alias the old name at all) ### 0.1 (2011-12-13)[¶](#id13) * Initial release as a backport module * Added CleanupManager (based on a [Python feature request](http://bugs.python.org/issue13585)) * Added ContextDecorator.refresh_cm() (based on a [Python tracker issue](http://bugs.python.org/issue11647)) Indices and tables[¶](#indices-and-tables) === * [Index](genindex.html) * [Search Page](search.html)
active_link_to === [![Gem Version](https://img.shields.io/gem/v/active_link_to.svg?style=flat)](http://rubygems.org/gems/active_link_to) [![Gem Downloads](https://img.shields.io/gem/dt/active_link_to.svg?style=flat)](http://rubygems.org/gems/active_link_to) [![Build Status](https://img.shields.io/travis/comfy/active_link_to.svg?style=flat)](https://travis-ci.org/comfy/active_link_to) Creates a link tag of the given name using a URL created by the set of options. Please see documentation for [link_to](http://api.rubyonrails.org/classes/ActionView/Helpers/UrlHelper.html#method-i-link_to), as `active_link_to` is basically a wrapper for it. This method accepts an optional :active parameter that dictates whether the given link will have an extra css class attached that marks it as 'active'. Install --- When installing for Rails 3/4/5 applications add this to the Gemfile: `gem 'active_link_to'` and run `bundle install`. For older Rails apps add `config.gem 'active_link_to'` in config/environment.rb and run `rake gems:install`. Or just check out this repo into the /vendor/plugins directory. Super Simple Example --- Here's a link that will have a class attached if it happens to be rendered on a page with path `/users` or any child of that page, like `/users/123` ``` active_link_to 'Users', '/users' # => <a href="/users" class="active">Users</a> ``` This is exactly the same as: ``` active_link_to 'Users', '/users', active: :inclusive # => <a href="/users" class="active">Users</a> ``` Active Options --- Here's a list of available options that can be used as the `:active` value ``` * Boolean -> true | false * Symbol -> :exclusive | :inclusive | :exact * Regex -> /regex/ * Controller/Action Pair -> [[:controller], [:action_a, :action_b]] * Controller/Specific Action Pair -> [controller: :action_a, controller_b: :action_b] * Hash -> { param_a: 1, param_b: 2 } ``` More Examples --- Most of the functionality of `active_link_to` depends on the current URL.
Specifically, the `request.original_fullpath` value. We covered the basic example already, so let's try something more fun. We want to highlight a link that matches the immediate URL, but not its children. Most commonly used for 'home' links. ``` # For URL: /users will be active active_link_to 'Users', users_path, active: :exclusive # => <a href="/users" class="active">Users</a> ``` ``` # But for URL: /users/123 it will not be active active_link_to 'Users', users_path, active: :exclusive # => <a href="/users">Users</a> ``` If we need to set a link to be active based on some regular expression, we can do that as well. Let's activate links whose URLs begin with 'use': ``` active_link_to 'Users', users_path, active: /^\/use/ ``` If we need to set a link to be active based on an exact match, for example on a filter made via a query string, we can do that as well: ``` active_link_to 'Users', users_path(role_eq: 'admin'), active: :exact ``` What if we need to mark a link active for all URLs that match a particular controller, or action, or both? Or any number of those at the same time?
Sure, why not: ``` # For matching multiple controllers and actions: active_link_to 'User Edit', edit_user_path(@user), active: [['people', 'news'], ['show', 'edit']] # For matching specific controllers and actions: active_link_to 'User Edit', edit_user_path(@user), active: [people: :show, news: :edit] # for matching all actions under given controllers: active_link_to 'User Edit', edit_user_path(@user), active: [['people', 'news'], []] # for matching all controllers for a particular action active_link_to 'User Edit', edit_user_path(@user), active: [[], ['edit']] ``` Sometimes it should be as easy as giving the link a true or false value: ``` active_link_to 'Users', users_path, active: true ``` If we need to set a link to be active based on `params`, we can do that as well: ``` active_link_to 'Admin users', users_path(role_eq: 'admin'), active: { role_eq: 'admin' } ``` More Options --- You can specify active and inactive css classes for links: ``` active_link_to 'Users', users_path, class_active: 'enabled' # => <a href="/users" class="enabled">Users</a> active_link_to 'News', news_path, class_inactive: 'disabled' # => <a href="/news" class="disabled">News</a> ``` Sometimes you want to replace the link tag with a span if it's active: ``` active_link_to 'Users', users_path, active_disable: true # => <span class="active">Users</span> ``` If you are constructing a navigation menu it might be helpful to wrap links in another tag, like `<li>`: ``` active_link_to 'Users', users_path, wrap_tag: :li # => <li class="active"><a href="/users">Users</a></li> ``` You can specify css classes for the `wrap_tag`: ``` active_link_to 'Users', users_path, wrap_tag: :li, wrap_class: 'nav-item' # => <li class="nav-item active"><a href="/users">Users</a></li> ``` Helper Methods --- You may directly use methods that `active_link_to` relies on.
`is_active_link?` will return true or false based on the URL and value of the `:active` parameter: ``` is_active_link?(users_path, :inclusive) # => true ``` `active_link_to_class` will return the css class: ``` active_link_to_class(users_path, active: :inclusive) # => 'active' ``` ### Copyright Copyright (c) 2009-17 <NAME>. See LICENSE for details.
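The matching rules listed under Active Options can be sketched in plain Ruby. This is a simplified, hypothetical illustration of the decision logic, not the gem's actual implementation (`active_path?` is an invented name, and the controller/action and params forms are omitted):

```ruby
# Simplified, hypothetical sketch of :active matching (illustration only;
# not the gem's real code).
def active_path?(current_path, link_path, active)
  case active
  when :inclusive
    # active on the path itself and any child path
    current_path.start_with?(link_path)
  when :exclusive
    # active only on the exact path, ignoring any query string
    current_path.split('?').first == link_path
  when :exact
    # active only on an exact match, query string included
    current_path == link_path
  when Regexp
    active.match?(current_path)
  when true, false
    active
  end
end
```

In the real helper the current path comes from `request.original_fullpath`, and the controller/action and `params` forms are resolved against the current request, as described above.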
## Installation ## Introduction ## Contributing to CodeIgniter ## General Topics * General Topics * CodeIgniter URLs * Controllers * Reserved Names * Views * Models * Helpers * Using CodeIgniter Libraries * Creating Libraries * Using CodeIgniter Drivers * Creating Drivers * Creating Core System Classes * Creating Ancillary Classes * Hooks - Extending the Framework Core * Auto-loading Resources * Common Functions * Compatibility Functions * URI Routing * Error Handling * Caching * Profiling Your Application * Running via the CLI * Managing your Applications * Handling Multiple Environments * Alternate PHP Syntax for View Files * Security * PHP Style Guide ## Library Reference * Libraries * Benchmarking Class * Caching Driver * Calendaring Class * Shopping Cart Class * Config Class * Email Class * Encrypt Class * Encryption Library * File Uploading Class * Form Validation * FTP Class * Image Manipulation Class * Input Class * Javascript Class * Language Class * Loader Class * Migrations Class * Output Class * Pagination Class * Template Parser Class * Security Class * Session Library * HTML Table Class * Trackback Class * Typography Class * Unit Testing Class * URI Class * User Agent Class * XML-RPC and XML-RPC Server Classes * Zip Encoding Class ## Database Reference * Database Reference * Quick Start: Usage Examples * Database Configuration * Connecting to a Database * Running Queries * Generating Query Results * Query Helper Functions * Query Builder Class * Transactions * Getting MetaData * Custom Function Calls * Query Caching * Database Manipulation with Database Forge * Database Utilities Class * Database Driver Reference
Package ‘Kernelheaping’ (October 12, 2022)

Type: Package
Title: Kernel Density Estimation for Heaped and Rounded Data
Version: 2.3.0
Date: 2022-01-26
Depends: R (>= 2.15.0), MASS, ks, sparr
Imports: sp, plyr, dplyr, fastmatch, fitdistrplus, GB2, magrittr, mvtnorm
Author: <NAME> [aut, cre], <NAME> [aut], <NAME> [ctb]
Maintainer: <NAME> <<EMAIL>>
Description: In self-reported or anonymised data the user often encounters heaped data, i.e. data which are rounded (to a possibly different degree of coarseness). While this is mostly a minor problem in parametric density estimation, the bias can be very large for non-parametric methods such as kernel density estimation. This package implements a partly Bayesian algorithm treating the true unknown values as additional parameters and estimates the rounding parameters to give a corrected kernel density estimate. It supports various standard bandwidth selection methods. Varying rounding probabilities (depending on the true value) and asymmetric rounding are estimable as well: Gross, M. and Rendtel, U. (2016) (<doi:10.1093/jssam/smw011>). Additionally, bivariate non-parametric density estimation for rounded data, Gross, M. et al. (2016) (<doi:10.1111/rssa.12179>), as well as data aggregated on areas, is supported.
License: GPL-2 | GPL-3
RoxygenNote: 7.1.0
NeedsCompilation: no
Repository: CRAN
Date/Publication: 2022-01-26 18:42:52 UTC

R topics documented: createSim.Kernelheaping, dbivr, dclass, dheaping, dshape3dPro..., dshapebiv..., dshapebivrPro..., Kernelheaping, plot.bivroundin..., plot.Kernelheaping, sim.Kernelheaping, simSummary.Kernelheaping, students, summary.Kernelheaping, toOtherShap..., tracePlots

createSim.Kernelheaping: Create heaped data for Simulation

Description: Create heaped data for Simulation

Usage: createSim.Kernelheaping(n, distribution, rounds, thresholds, offset = 0, downbias = 0.5, Beta = 0, ...
)

Arguments:
n: sample size
distribution: name of the distribution where random sampling is available, e.g. "norm"
rounds: rounding values
thresholds: rounding thresholds (for Beta=0)
offset: certain value added to all observed random samples
downbias: bias parameter
Beta: acceleration parameter
...: additional attributes handed over to "rdistribution" (i.e. rnorm, rgamma, ...)

Value: List of heaped values, true values and input parameters

dbivr: Bivariate kernel density estimation for rounded data

Description: Bivariate kernel density estimation for rounded data

Usage: dbivr(xrounded, roundvalue, burnin = 2, samples = 5, adaptive = FALSE, gridsize = 200)

Arguments:
xrounded: rounded values from which to estimate bivariate density, matrix with 2 columns (x,y)
roundvalue: rounding value (side length of the square in which the true value lies around the rounded one)
burnin: burn-in sample size
samples: sampling iteration size
adaptive: set to TRUE for adaptive bandwidth
gridsize: number of evaluation grid points

Value: The function returns a list object with the following objects (besides all input objects):
Mestimates: kde object containing the corrected density estimate
gridx: vector grid on which density is evaluated (x)
gridy: vector grid on which density is evaluated (y)
resultDensity: array with estimated density for each iteration
resultX: matrix of true latent values X estimates
delaigle: matrix of Delaigle estimator estimates

Examples:
# Create Mu and Sigma -----------------------------------------------------------
mu1 <- c(0, 0)
mu2 <- c(5, 3)
mu3 <- c(-4, 1)
Sigma1 <- matrix(c(4, 3, 3, 4), 2, 2)
Sigma2 <- matrix(c(3, 0.5, 0.5, 1), 2, 2)
Sigma3 <- matrix(c(5, 4, 4, 6), 2, 2)
# Mixed Normal Distribution -------------------------------------------------------
mus <- rbind(mu1, mu2, mu3)
Sigmas <- rbind(Sigma1, Sigma2, Sigma3)
props <- c(1/3, 1/3, 1/3)
## Not run:
xtrue=rmvnorm.mixt(n=1000, mus=mus, Sigmas=Sigmas, props=props)
roundvalue=2
xrounded=plyr::round_any(xtrue,roundvalue)
est <-
dbivr(xrounded,roundvalue=roundvalue,burnin=5,samples=10)
# Plot corrected and naive distribution
plot(est,trueX=xtrue)
# For comparison: plot true density
dens=dmvnorm.mixt(x=expand.grid(est$Mestimates$eval.points[[1]],est$Mestimates$eval.points[[2]]), mus=mus, Sigmas=Sigmas, props=props)
dens=matrix(dens,nrow=length(est$gridx),ncol=length(est$gridy))
contour(dens,x=est$Mestimates$eval.points[[1]],y=est$Mestimates$eval.points[[2]], xlim=c(min(est$gridx),max(est$gridx)),ylim=c(min(est$gridy),max(est$gridy)),main="True Density")
## End(Not run)

dclass: Kernel density estimation for classified data

Description: Kernel density estimation for classified data

Usage: dclass(xclass, burnin = 2, samples = 5, boundary = FALSE, bw = "nrd0", evalpoints = 200, adjust = 1, dFunc = NULL)

Arguments:
xclass: classified values; matrix with two columns: lower and upper value
burnin: burn-in sample size
samples: sampling iteration size
boundary: TRUE for positive-only data (no positive density for negative values)
bw: bandwidth selector method, defaults to "nrd0"; see density for more options
evalpoints: number of evaluation grid points
adjust: as in density, the user can multiply the bandwidth by a certain factor such that bw=adjust*bw
dFunc: character, optional density (with "d", "p" and "q" functions) function name for parametric estimation such as "norm", "gamma" or "lnorm"

Value: The function returns a list object with the following objects (besides all input objects):
Mestimates: kde object containing the corrected density estimate
gridx: vector grid on which density is evaluated
resultDensity: matrix with estimated density for each iteration
resultX: matrix of true latent values X estimates

Examples:
x=rlnorm(500, meanlog = 8, sdlog = 1)
classes <- c(0,500,1000,1500,2000,2500,3000,4000,5000,6000,8000,10000,15000,Inf)
xclass <- cut(x,breaks=classes)
xclass <- cbind(classes[as.numeric(xclass)], classes[as.numeric(xclass) + 1])
densityEst <- dclass(xclass=xclass, burnin=20, samples=50,
evalpoints=1000)
plot(densityEst$Mestimates~densityEst$gridx ,lwd=2, type = "l")

dheaping Kernel density estimation for heaped data
Description
Kernel density estimation for heaped data
Usage
dheaping( xheaped, rounds, burnin = 5, samples = 10, setBias = FALSE, weights = NULL, bw = "nrd0", boundary = FALSE, unequal = FALSE, random = FALSE, adjust = 1, recall = F, recallParams = c(1/3, 1/3) )
Arguments
xheaped heaped values from which to estimate density of x
rounds rounding values, numeric vector of length >=1
burnin burn-in sample size
samples sampling iteration size
setBias if TRUE a rounding Bias parameter is estimated. For values above 0.5, the respondents are more prone to round down, while for values < 0.5 they are more likely to round up
weights optional numeric vector of sampling weights
bw bandwidth selector method, defaults to "nrd0"; see density for more options
boundary TRUE for positive-only data (no positive density for negative values)
unequal if TRUE a probit model is fitted for the rounding probabilities with log(true value) as regressor
random if TRUE a random effect probit model is fitted for rounding probabilities
adjust as in density, the user can multiply the bandwidth by a certain factor such that bw=adjust*bw
recall if TRUE a recall error is introduced to the heaping model
recallParams recall error model parameters nu and eta.
Default is c(1/3, 1/3) Value The function returns a list object with the following objects (besides all input objects): meanPostDensity Vector of Mean Posterior Density gridx Vector Grid on which density is evaluated resultDensity Matrix with Estimated Density for each iteration resultRR Matrix with rounding probability threshold values for each iteration (on probit scale) resultBias Vector with estimated Bias parameter for each iteration resultBeta Vector with estimated Beta parameter for each iteration resultX Matrix of true latent values X estimates Examples #Simple Rounding ---------------------------------------------------------- xtrue=rnorm(3000) xrounded=round(xtrue) est <- dheaping(xrounded,rounds=1,burnin=20,samples=50) plot(est,trueX=xtrue) ##################### #####Heaping ##################### #Real Data Example ---------------------------------------------------------- # Student learning hours per week data(students) xheaped <- as.numeric(na.omit(students$StudyHrs)) ## Not run: est <- dheaping(xheaped,rounds=c(1,2,5,10), boundary=TRUE, unequal=TRUE,burnin=20,samples=50) plot(est) summary(est) ## End(Not run) #Simulate Data ---------------------------------------------------------- Sim1 <- createSim.Kernelheaping(n=500, distribution="norm",rounds=c(1,10,100), thresholds=c(-0.5244005, 0.5244005), sd=100) ## Not run: est <- dheaping(Sim1$xheaped,rounds=Sim1$rounds) plot(est,trueX=Sim1$x) ## End(Not run) #Biased rounding Sim2 <- createSim.Kernelheaping(n=500, distribution="gamma",rounds=c(1,2,5,10), thresholds=c(-1.2815516, -0.6744898, 0.3853205),downbias=0.2, shape=4,scale=8,offset=45) ## Not run: est <- dheaping(Sim2$xheaped, rounds=Sim2$rounds, setBias=T, bw="SJ") plot(est, trueX=Sim2$x) summary(est) tracePlots(est) ## End(Not run) Sim3 <- createSim.Kernelheaping(n=500, distribution="gamma",rounds=c(1,2,5,10), thresholds=c(1.84, 2.64, 3.05), downbias=0.75, Beta=-0.5, shape=4, scale=8) ## Not run: est <- 
dheaping(Sim3$xheaped,rounds=Sim3$rounds,boundary=TRUE,unequal=TRUE,setBias=T)
plot(est,trueX=Sim3$x)
## End(Not run)

dshape3dProp 3d Kernel density estimation for data classified in polygons or shapes
Description
3d Kernel density estimation for data classified in polygons or shapes
Usage
dshape3dProp( data, burnin = 2, samples = 5, shapefile, gridsize = 200, boundary = FALSE, deleteShapes = NULL, fastWeights = TRUE, numChains = 1, numThreads = 1 )
Arguments
data data.frame with 5 columns: x-coordinate, y-coordinate (i.e. center of polygon), number of observations in area for the partial population, number of observations for the complete observations, and a third variable (numeric).
burnin burn-in sample size
samples sampling iteration size
shapefile shapefile with number of polygons equal to nrow(data) / length(unique(data[,5]))
gridsize number of evaluation grid points
boundary boundary corrected kernel density estimate?
deleteShapes shapefile containing areas without observations
fastWeights if TRUE weights for boundary estimation are only computed for the first 10 percent of samples to speed up computation
numChains number of chains of SEM algorithm
numThreads number of threads to be used (only applicable if more than one chain)

dshapebivr Bivariate Kernel density estimation for data classified in polygons or shapes
Description
Bivariate Kernel density estimation for data classified in polygons or shapes
Usage
dshapebivr( data, burnin = 2, samples = 5, adaptive = FALSE, shapefile, gridsize = 200, boundary = FALSE, deleteShapes = NULL, fastWeights = TRUE, numChains = 1, numThreads = 1 )
Arguments
data data.frame with 3 columns: x-coordinate, y-coordinate (i.e. center of polygon) and number of observations in area.
burnin burn-in sample size
samples sampling iteration size
adaptive TRUE for adaptive kernel density estimation
shapefile shapefile with number of polygons equal to nrow(data)
gridsize number of evaluation grid points
boundary boundary corrected kernel density estimate?
deleteShapes shapefile containing areas without observations
fastWeights if TRUE weights for boundary estimation are only computed for the first 10 percent of samples to speed up computation
numChains number of chains of SEM algorithm
numThreads number of threads to be used (only applicable if more than one chain)
Value
The function returns a list object with the following objects (besides all input objects):
Mestimates kde object containing the corrected density estimate
gridx Vector Grid of x-coordinates on which density is evaluated
gridy Vector Grid of y-coordinates on which density is evaluated
resultDensity Matrix with Estimated Density for each iteration
resultX Matrix of true latent values X estimates
Examples
## Not run:
library(maptools)
# Read Shapefile of Berlin Urban Planning Areas (download available from:
# https://www.statistik-berlin-brandenburg.de/opendata/RBS_OD_LOR_2015_12.zip)
Berlin <- rgdal::readOGR("X:/SomeDir/RBS_OD_LOR_2015_12.shp") # (from daten.berlin.de)
# Get Dataset of Berlin Population (download available from:
# https://www.statistik-berlin-brandenburg.de/opendata/EWR201512E_Matrix.csv)
data <- read.csv2("X:/SomeDir/EWR201512E_Matrix.csv")
# Form Dataset for Estimation Process
dataIn <- cbind(t(sapply(1:length(Berlin@polygons), function(x) Berlin@polygons[[x]]@labpt)), data$E_E65U80)
# Estimate Bivariate Density
Est <- dshapebivr(data = dataIn, burnin = 5, samples = 10, adaptive = FALSE, shapefile = Berlin, gridsize = 325, boundary = TRUE)
## End(Not run)
# Plot Density over Area:
## Not run:
breaks <- seq(1E-16,max(Est$Mestimates$estimate),length.out = 20)
image.plot(x=Est$Mestimates$eval.points[[1]],y=Est$Mestimates$eval.points[[2]], z=Est$Mestimates$estimate, asp=1,
breaks = breaks, col = colorRampPalette(brewer.pal(9,"YlOrRd"))(length(breaks)-1))
plot(Berlin, add=TRUE)
## End(Not run)

dshapebivrProp Bivariate Kernel density estimation for data classified in polygons or shapes
Description
Bivariate Kernel density estimation for data classified in polygons or shapes
Usage
dshapebivrProp( data, burnin = 2, samples = 5, adaptive = FALSE, shapefile, gridsize = 200, boundary = FALSE, deleteShapes = NULL, fastWeights = TRUE, numChains = 1, numThreads = 1 )
Arguments
data data.frame with 4 columns: x-coordinate, y-coordinate (i.e. center of polygon), number of observations in area for the partial population and number of observations for the complete observations.
burnin burn-in sample size
samples sampling iteration size
adaptive TRUE for adaptive kernel density estimation
shapefile shapefile with number of polygons equal to nrow(data)
gridsize number of evaluation grid points
boundary boundary corrected kernel density estimate?
deleteShapes shapefile containing areas without observations
fastWeights if TRUE weights for boundary estimation are only computed for the first 10 percent of samples to speed up computation
numChains number of chains of SEM algorithm
numThreads number of threads to be used (only applicable if more than one chain)
Examples
## Not run:
library(maptools)
# Read Shapefile of Berlin Urban Planning Areas (download available from:
# https://www.statistik-berlin-brandenburg.de/opendata/RBS_OD_LOR_2015_12.zip)
Berlin <- rgdal::readOGR("X:/SomeDir/RBS_OD_LOR_2015_12.shp") # (from daten.berlin.de)
# Get Dataset of Berlin Population (download available from:
# https://www.statistik-berlin-brandenburg.de/opendata/EWR201512E_Matrix.csv)
data <- read.csv2("X:/SomeDir/EWR201512E_Matrix.csv")
# Form Dataset for Estimation Process
dataIn <- cbind(t(sapply(1:length(Berlin@polygons), function(x) Berlin@polygons[[x]]@labpt)), data$E_E65U80, data$E_E)
# Estimate Bivariate Proportions (may take some minutes)
PropEst <- dshapebivrProp(data =
dataIn, burnin = 5, samples = 20, adaptive = FALSE, shapefile = Berlin, gridsize=325, numChains = 16, numThreads = 4)
## End(Not run)
# Plot Proportions over Area:
## Not run:
breaks <- seq(0,0.4,by=0.025)
image.plot(x=PropEst$Mestimates$eval.points[[1]],y=PropEst$Mestimates$eval.points[[2]], z=PropEst$proportion+1E-96, asp=1, breaks = breaks, col = colorRampPalette(brewer.pal(9,"YlOrRd"))(length(breaks)-1))
plot(Berlin, add=TRUE)
## End(Not run)

Kernelheaping Kernel Density Estimation for Heaped Data
Description
In self-reported or anonymized data the user often encounters heaped data, i.e. data which are rounded (to a possibly different degree of coarseness). While this is mostly a minor problem in parametric density estimation, the bias can be very large for non-parametric methods such as kernel density estimation. This package implements a partly Bayesian algorithm treating the true unknown values as additional parameters and estimates the rounding parameters to give a corrected kernel density estimate. It supports various standard bandwidth selection methods. Varying rounding probabilities (depending on the true value) and asymmetric rounding are estimable as well. Additionally, bivariate non-parametric density estimation for rounded data is supported.
Details
The most important function is dheaping. See the help and the attached examples on how to use the package.

plot.bivrounding Plot Kernel density estimate of heaped data naively and corrected by partly Bayesian model
Description
Plot Kernel density estimate of heaped data naively and corrected by partly Bayesian model
Usage
## S3 method for class 'bivrounding'
plot(x, trueX = NULL, ...)
Arguments
x bivrounding object produced by dbivr function
trueX optional, if true values X are known (in simulations, for example) the 'Oracle' density estimate is added as well
...
additional arguments given to standard plot function
Value
plot with Kernel density estimates (Naive, Corrected and True (if provided))

plot.Kernelheaping Plot Kernel density estimate of heaped data naively and corrected by partly Bayesian model
Description
Plot Kernel density estimate of heaped data naively and corrected by partly Bayesian model
Usage
## S3 method for class 'Kernelheaping'
plot(x, trueX = NULL, ...)
Arguments
x Kernelheaping object produced by dheaping function
trueX optional, if true values X are known (in simulations, for example) the 'Oracle' density estimate is added as well
... additional arguments given to standard plot function
Value
plot with Kernel density estimates (Naive, Corrected and True (if provided))

sim.Kernelheaping Simulation of heaping correction method
Description
Simulation of heaping correction method
Usage
sim.Kernelheaping( simRuns, n, distribution, rounds, thresholds, downbias = 0.5, setBias = FALSE, Beta = 0, unequal = FALSE, burnin = 5, samples = 10, bw = "nrd0", offset = 0, boundary = FALSE, adjust = 1, ... )
Arguments
simRuns number of simulation runs
n sample size
distribution name of the distribution where random sampling is available, e.g. "norm"
rounds rounding values, numeric vector of length >=1
thresholds rounding thresholds
downbias Bias parameter used in the simulation
setBias if TRUE a rounding Bias parameter is estimated.
For values above 0.5, the respondents are more prone to round down, while for values < 0.5 they are more likely to round up
Beta Parameter of the probit model for rounding probabilities used in simulation
unequal if TRUE a probit model is fitted for the rounding probabilities with log(true value) as regressor
burnin burn-in sample size
samples sampling iteration size
bw bandwidth selector method, defaults to "nrd0"; see density for more options
offset location shift parameter used in simulation
boundary TRUE for positive-only data (no positive density for negative values)
adjust as in density, the user can multiply the bandwidth by a certain factor such that bw=adjust*bw
... additional attributes handed over to createSim.Kernelheaping
Value
List of estimation results
Examples
## Not run:
Sims1 <- sim.Kernelheaping(simRuns=2, n=500, distribution="norm", rounds=c(1,10,100), thresholds=c(0.3,0.4,0.3), sd=100)
## End(Not run)

simSummary.Kernelheaping Simulation Summary
Description
Simulation Summary
Usage
simSummary.Kernelheaping(sim, coverage = 0.9)
Arguments
sim Simulation object returned from sim.Kernelheaping
coverage probability for computing coverage intervals
Value
list with summary statistics

students Student0405
Description
Data collected during 2004 and 2005 from students in statistics classes at a large state university in the northeastern United States.
Source
http://mathfaculty.fullerton.edu/mori/Math120/Data/readme
References
<NAME>., & <NAME>. (2011). Mind on statistics. Cengage Learning.

summary.Kernelheaping Prints some descriptive statistics (means and quantiles) for the estimated rounding, bias and acceleration (beta) parameters
Description
Prints some descriptive statistics (means and quantiles) for the estimated rounding, bias and acceleration (beta) parameters
Usage
## S3 method for class 'Kernelheaping'
summary(object, ...)
Arguments
object Kernelheaping object produced by dheaping function
...
unused Value Prints summary statistics toOtherShape Transfer observations to other shape Description Transfer observations to other shape Usage toOtherShape(Mestimates, shapefile) Arguments Mestimates Estimation object created by functions dshapebivr and dbivr shapefile The new shapefile for which the observations shall be transferred to Value The function returns the count, sd and 90 tracePlots Plots some trace plots for the rounding, bias and acceleration (beta) parameters Description Plots some trace plots for the rounding, bias and acceleration (beta) parameters Usage tracePlots(x, ...) Arguments x Kernelheaping object produced by dheaping function ... additional arguments given to standard plot function Value Prints summary statistics
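The bias the package corrects is easy to provoke in plain base R. The following sketch (base R only, no Kernelheaping functions) shows what happens when a naive kernel density estimate is applied to heaped data:

```r
# Base-R illustration of the heaping problem the package addresses:
# rounding concentrates mass on a coarse grid, so a naive kernel density
# estimate develops spikes at the rounding values.
set.seed(1)
xtrue   <- rnorm(3000)            # latent true values
xheaped <- round(xtrue)           # self-reported, rounded values

dens_true   <- density(xtrue)
dens_heaped <- density(xheaped)   # naive estimate, no heaping correction

# the heaped sample collapses onto a handful of distinct values ...
length(unique(xheaped))
# ... and the naive density spikes at those values
max(dens_heaped$y) > max(dens_true$y)
```

A corrected estimate via dheaping(xheaped, rounds=1) would recover a density close to dens_true, as shown in the dheaping examples above.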
Package ‘hsrecombi’
June 7, 2023
Type Package
Title Estimation of Recombination Rate and Maternal LD in Half-Sibs
Version 1.0.1
Date 2023-06-07
Description Paternal recombination rate and maternal linkage disequilibrium (LD) are estimated for pairs of biallelic markers such as single nucleotide polymorphisms (SNPs) from progeny genotypes and sire haplotypes. The implementation relies on paternal half-sib families. If maternal half-sib families are used, the roles of sire/dam are swapped. Multiple families can be considered. For parameter estimation, at least one sire has to be double heterozygous at the investigated pairs of SNPs. Based on recombination rates, genetic distances between markers can be estimated. Markers with unusually large recombination rate to markers in close proximity (i.e. putatively misplaced markers) shall be discarded in this derivation. A workflow description is attached as vignette. A pipeline is available at GitHub <https://github.com/wittenburg/hsrecombi>. Hampel, Teuscher, Gomez-Raya, <NAME> (2018) ``Estimation of recombination rate and maternal linkage disequilibrium in half-sibs'' <doi:10.3389/fgene.2018.00186>. Gomez-Raya (2012) ``Maximum likelihood estimation of linkage disequilibrium in half-sib families'' <doi:10.1534/genetics.111.137521>.
Depends R (>= 3.5.0)
Imports Rcpp (>= 1.0.3), hsphase, dplyr, data.table, rlist, quadprog, curl, Matrix
License GPL (>= 2)
Encoding UTF-8
LazyData true
LinkingTo Rcpp
RoxygenNote 7.2.3
Suggests knitr, rmarkdown, formatR, AlphaSimR (>= 0.13.0), doParallel, ggplot2
VignetteBuilder knitr
Language en-GB
NeedsCompilation yes
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-06-07 08:20:06 UTC
R topics documented: bestmapfun, checkCandidates, countNumbers, daughterSire, editraw, felsenstein, geneticPosition, genotype.chr, haldane, hapSire, hsrecombi, karlin, kosambi, LDHScpp, loglikfun, makehap, makehaplist, makehappm, map.chr, rao, rao.inv, startvalue, targetregion
bestmapfun Best fitting genetic-map function
Description
Approximation of mixing parameter of system of map functions
Usage
bestmapfun(theta, dist_M)
Arguments
theta vector of recombination rates
dist_M vector of genetic positions
Details
The genetic mapping function that fits best to the genetic data (recombination rate and genetic distances) is obtained from Rao's system of genetic-map functions. The corresponding mixing parameter is estimated via 1-dimensional constrained optimisation. See vignette for its application to estimated data.
Value
list (LEN 2)
mixing mixing parameter of system of genetic mapping functions
mse minimum value of target function (theta - dist_M)^2
References
<NAME>., <NAME>., <NAME>., <NAME>. & <NAME> (1977) A mapping function for man. Human Heredity 27: 99-104. doi: 10.1159/000152856
Examples
theta <- seq(0, 0.5, 0.01)
gendist <- -log(1 - 2 * theta) / 2
bestmapfun(theta, gendist)

checkCandidates Candidates for misplacement
Description
Search for SNPs with unusually large estimates of recombination rate
Usage
checkCandidates(final, map1, win = 30, quant = 0.99)
Arguments
final table of results produced by editraw with pairwise estimates of recombination rate between p SNPs within chromosome; minimum required data frame with columns SNP1, SNP2 and theta
map1 data.frame containing information on physical map, at least: SNP SNP ID; locus_Mb physical position in Mbp of SNP on chromosomes; Chr chromosome of SNP
win optional value for window size; default value 30
quant optional value; default value 0.99, see details
Details
Markers with unusually large estimates of recombination rate to close SNPs are candidates for misplacement in the underlying assembly.
The mean of recombination rate estimates with win subsequent or preceding markers is calculated, and those SNPs whose mean value exceeds the quant quantile are flagged as candidates which have to be manually curated! This can be done, for instance, by visual inspection of a correlation plot containing estimates of recombination rate in a selected region.
Value
vector of SNP IDs for further verification
References
<NAME>., <NAME>., <NAME>., <NAME>. & <NAME>. (2018) Estimation of recombination rate and maternal linkage disequilibrium in half-sibs. Frontiers in Genetics 9:186. doi: 10.3389/fgene.2018.00186
Examples
### test data
data(targetregion)
### make list for paternal half-sib families
hap <- makehaplist(daughterSire, hapSire)
### parameter estimates on a chromosome
res <- hsrecombi(hap, genotype.chr)
### post-processing to achieve final and valid set of estimates
final <- editraw(res, map.chr)
### check for candidates of misplacement
snp <- checkCandidates(final, map.chr)

countNumbers Count genotype combinations at 2 SNPs
Description
Count genotype combinations at 2 SNPs
Arguments
X integer matrix of genotypes
Value
count vector of counts of 9 possible genotypes at SNP pair

daughterSire targetregion: allocation of paternal half-sib families
Description
Vector of sire ID for each progeny
Usage
daughterSire
Format
An object of class integer of length 265.

editraw Editing results of hsrecombi
Description
Process raw results from hsrecombi, decide which of the two sets of estimates is more likely and prepare the list of final results
Usage
editraw(Roh, map1)
Arguments
Roh list of raw results from hsrecombi
map1 data.frame containing information on physical map, at least: SNP SNP ID; locus_Mb physical position in Mbp of SNP on chromosomes; Chr chromosome of SNP
Value
final table of results
SNP1 index of the first SNP
SNP2 index of the second
SNP D maternal LD
fAA frequency of maternal haplotype 1-1
fAB frequency of maternal haplotype 1-0
fBA frequency of maternal haplotype 0-1
fBB frequency of maternal haplotype 0-0
p1 Maternal allele frequency (allele 1) at SNP1
p2 Maternal allele frequency (allele 1) at SNP2
nfam1 size of genomic family 1
nfam2 size of genomic family 2
error 0 if computations were without error; 1 if EM algorithm did not converge
iteration number of EM iterations
theta paternal recombination rate
r2 r2 of maternal LD
logL value of log likelihood function
unimodal 1 if likelihood is unimodal; 0 if likelihood is bimodal
critical 0 if parameter estimates were unique; 1 if parameter estimates were obtained via a decision process
locus_Mb physical distance between SNPs in Mbp
Examples
### test data
data(targetregion)
### make list for paternal half-sib families
hap <- makehaplist(daughterSire, hapSire)
### parameter estimates on a chromosome
res <- hsrecombi(hap, genotype.chr)
### post-processing to achieve final and valid set of estimates
final <- editraw(res, map.chr)

felsenstein Felsenstein's genetic map function
Description
Calculation of genetic distances from recombination rates given an interference parameter
Usage
felsenstein(K, x, inverse = F)
Arguments
K parameter (numeric) corresponding to the intensity of crossover interference
x vector of recombination rates
inverse logical, if FALSE recombination rate is mapped to Morgan unit, if TRUE Morgan unit is mapped to recombination rate (default is FALSE)
Value
vector of genetic positions in Morgan units
References
Felsenstein, J. (1979) A mathematically tractable family of genetic mapping functions with different amounts of interference. Genetics 91:769-775.
Examples
felsenstein(0.1, seq(0, 0.5, 0.01))

geneticPosition Estimation of genetic position
Description
Estimation of genetic positions (in centimorgan)
Usage
geneticPosition(final, map1, exclude = NULL, threshold = 0.05)
Arguments
final table of results produced by editraw with pairwise estimates of recombination rate between p SNPs within chromosome; minimum required data frame with columns SNP1, SNP2 and theta
map1 data.frame containing information on physical map, at least: SNP SNP ID; locus_Mb physical position in Mbp of SNP on chromosomes; Chr chromosome of SNP
exclude optional vector (LEN < p) of SNP IDs to be excluded (e.g., candidates of misplaced SNPs; default NULL)
threshold optional value; recombination rates <= threshold are considered for smoothing approach assuming theta ~ Morgan (default 0.05)
Details
Smoothing of recombination rates (theta) <= 0.05 via quadratic optimization provides an approximation of genetic distances (in Morgan) between SNPs. The cumulative sum * 100 yields the genetic positions in cM. The minimization problem (theta - D d)^2 is solved s.t. d > 0, where d is the vector of genetic distances between adjacent markers but theta is not restricted to adjacent markers. The incidence matrix D contains 1's for those intervals contributing to the total distance relevant for each theta. Estimates of theta = 1e-6 are neglected as these values coincide with start values and indicate that (because of a very flat likelihood surface) no meaningful estimate of recombination rate has been obtained.
Value
list (LEN 2)
gen.cM vector (LEN p) of genetic positions of SNPs (in cM)
gen.Mb vector (LEN p) of physical positions of SNPs (in Mbp)
References
<NAME>. & <NAME>. (2020) Male recombination map of the autosomal genome in German Holstein. Genetics Selection Evolution 52:73.
doi: 10.1186/s12711-020-00593-z
Examples
### test data
data(targetregion)
### make list for paternal half-sib families
hap <- makehaplist(daughterSire, hapSire)
### parameter estimates on a chromosome
res <- hsrecombi(hap, genotype.chr)
### post-processing to achieve final and valid set of estimates
final <- editraw(res, map.chr)
### approximation of genetic positions
pos <- geneticPosition(final, map.chr)

genotype.chr targetregion: progeny genotypes
Description
matrix of progeny genotypes in target region on chromosome BTA1
Usage
genotype.chr
Format
An object of class matrix (inherits from array) with 265 rows and 200 columns.

haldane Haldane's genetic map function
Description
Calculation of genetic distances from recombination rates
Usage
haldane(x, inverse = F)
Arguments
x vector of recombination rates
inverse logical, if FALSE recombination rate is mapped to Morgan unit, if TRUE Morgan unit is mapped to recombination rate (default is FALSE)
Value
vector of genetic positions in Morgan units
References
<NAME> (1919) The combination of linkage values, and the calculation of distances between the loci of linked factors. J Genet 8: 299-309.
Examples
haldane(seq(0, 0.5, 0.01))

hapSire targetregion: sire haplotypes
Description
matrix of sire haplotypes in target region on chromosome BTA1
Usage
hapSire
Format
An object of class matrix (inherits from array) with 10 rows and 201 columns.
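Haldane's map function has a simple closed form, so its round trip between recombination rate and Morgan units is easy to verify. The following sketch re-implements the documented formula in plain R (it is not the package's haldane(), just the same mathematics):

```r
# Haldane's map function: d = -log(1 - 2*theta)/2 maps a recombination
# rate theta to Morgan units; the inverse is theta = (1 - exp(-2*d))/2.
haldane_sketch <- function(x, inverse = FALSE) {
  if (!inverse) -log(1 - 2 * x) / 2 else (1 - exp(-2 * x)) / 2
}

theta <- seq(0, 0.45, by = 0.05)
d <- haldane_sketch(theta)                           # genetic distances in Morgan
all.equal(haldane_sketch(d, inverse = TRUE), theta)  # round trip recovers theta: TRUE
# for small theta, distance and recombination rate nearly coincide
abs(haldane_sketch(0.01) - 0.01) < 1e-3
```

The same formula appears in the bestmapfun example above, where gendist is computed as -log(1 - 2 * theta) / 2.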
hsrecombi Estimation of recombination rate and maternal LD
Description
Wrapper function for estimating recombination rate and maternal linkage disequilibrium between intra-chromosomal SNP pairs by calling the EM algorithm
Usage
hsrecombi(hap, genotype.chr, exclude = NULL, only.adj = FALSE, prec = 1e-06)
Arguments
hap list (LEN 2) of lists: famID list (LEN number of sires) of vectors (LEN n.progeny) of progeny indices relating to lines in genotype matrix; sireHap list (LEN number of sires) of matrices (DIM 2 x p) of sire haplotypes (0, 1) on investigated chromosome
genotype.chr matrix (DIM n x p) of all progeny genotypes (0, 1, 2) on a chromosome with p SNPs; 9 indicates missing genotype
exclude vector (LEN < p) of SNP IDs (for filtering column names of genotype.chr) to be excluded from analysis (default NULL)
only.adj logical; if TRUE, recombination rate is calculated only between neighbouring markers
prec scalar; precision of estimation
Details
Paternal recombination rate and maternal linkage disequilibrium (LD) are estimated for pairs of biallelic markers (such as single nucleotide polymorphisms; SNPs) from progeny genotypes and sire haplotypes. At least one sire has to be double heterozygous at the investigated pairs of SNPs. All progeny are merged in two genomic families: (1) coupling phase family if sires are double heterozygous 0-0/1-1 and (2) repulsion phase family if sires are double heterozygous 0-1/1-0. It is currently recommended to process the chromosomes separately. If maternal half-sib families are used, the roles of sire/dam are swapped. Multiple families can be considered.
Value
list (LEN p - 1) of data.frames; for each SNP, parameters are estimated with all following SNPs; two solutions (prefix sln1 and sln2) are obtained for two runs of the EM algorithm
SNP1 ID of the first SNP
SNP2 ID of the second
SNP D maternal LD
fAA frequency of maternal haplotype 1-1
fAB frequency of maternal haplotype 1-0
fBA frequency of maternal haplotype 0-1
fBB frequency of maternal haplotype 0-0
p1 Maternal allele frequency (allele 1) at SNP1
p2 Maternal allele frequency (allele 1) at SNP2
nfam1 size of genomic family 1
nfam2 size of genomic family 2
error 0 if computations were without error; 1 if EM algorithm did not converge
iteration number of EM iterations
theta paternal recombination rate
r2 r2 of maternal LD
logL value of log likelihood function
unimodal 1 if likelihood is unimodal; 0 if likelihood is bimodal
critical 0 if parameter estimates are unique; 1 if parameter estimates at both solutions are valid, in which case a decision process follows in the post-processing function "editraw"
Afterwards, solutions are compared and processed with function editraw, yielding the final estimates for each valid pair of SNPs.
References
<NAME>., <NAME>., <NAME>., <NAME>. & <NAME>. (2018) Estimation of recombination rate and maternal linkage disequilibrium in half-sibs. Frontiers in Genetics 9:186. doi: 10.3389/fgene.2018.00186
<NAME>. (2012) Maximum likelihood estimation of linkage disequilibrium in half-sib families. Genetics 191:195-213.
Examples
### test data
data(targetregion)
### make list for paternal half-sib families
hap <- makehaplist(daughterSire, hapSire)
### parameter estimates on a chromosome
res <- hsrecombi(hap, genotype.chr)
### post-processing to achieve final and valid set of estimates
final <- editraw(res, map.chr)

karlin Liberman and Karlin's genetic map function
Description
Calculation of genetic distances from recombination rates given a parameter
Usage
karlin(N, x, inverse = F)
Arguments
N parameter (positive integer) required by the binomial model to assess the crossover count distribution; N = 1 corresponds to Morgan's map function
x vector of recombination rates
inverse logical, if FALSE recombination rate is mapped to Morgan unit, if TRUE Morgan unit is mapped to recombination rate (default is FALSE)
Value
vector of genetic positions in Morgan units
References
<NAME>. & <NAME>. (1984) Theoretical models of genetic map functions. Theor Popul Biol 25:331-346.
Examples
karlin(2, seq(0, 0.5, 0.01))

kosambi Kosambi's genetic map function
Description
Calculation of genetic distances from recombination rates
Usage
kosambi(x, inverse = F)
Arguments
x vector of recombination rates
inverse logical, if FALSE recombination rate is mapped to Morgan unit, if TRUE Morgan unit is mapped to recombination rate (default is FALSE)
Value
vector of genetic positions in Morgan units
References
<NAME>. (1944) The estimation of map distance from recombination values. Ann. Eugen. 12: 172-175.
Examples kosambi(seq(0, 0.5, 0.01)) LDHScpp Expectation Maximisation (EM) algorithm Description Expectation Maximisation (EM) algorithm Usage LDHScpp(XGF1, XGF2, fAA, fAB, fBA, theta, display, threshold) Arguments XGF1 integer matrix of progeny genotypes in genomic family 1 XGF2 integer matrix of progeny genotypes in genomic family 2 fAA frequency of maternal haplotype 1-1 fAB frequency of maternal haplotype 1-0 fBA frequency of maternal haplotype 0-1 theta paternal recombination rate display logical for displaying additional information threshold convergence criterion Value list of parameter estimates D maternal LD fAA frequency of maternal haplotype 1-1 fAB frequency of maternal haplotype 1-0 fBA frequency of maternal haplotype 0-1 fBB frequency of maternal haplotype 0-0 p1 Maternal allele frequency (allele 1) at 1. SNP p2 Maternal allele frequency (allele 1) at 2. SNP nfam1 size of genomic family 1 nfam2 size of genomic family 2 error 0 if computations were without error; 1 if EM algorithm did not converge iteration number of EM iterations theta paternal recombination rate r2 r2 of maternal LD logL value of log likelihood function loglikfun Calculate log-likelihood function Description Calculate log-likelihood function Arguments counts integer vector of observed 2-locus genotype fAA frequency of maternal haplotype 1-1 fAB frequency of maternal haplotype 1-0 fBA frequency of maternal haplotype 0-1 fBB frequency of maternal haplotype 0-0 theta paternal recombination rate Value lik value of log likelihood at parameter estimates makehap Make list of imputed sire haplotypes Description List of sire haplotypes is set up in the format required for hsrecombi. Sire haplotypes are imputed from progeny genotypes using R package hsphase. 
Usage
makehap(sireID, daughterSire, genotype.chr, nmin = 30, exclude = NULL)
Arguments
sireID vector (LEN N) of IDs of all sires
daughterSire vector (LEN n) of sire ID for each progeny
genotype.chr matrix (DIM n x p) of progeny genotypes (0, 1, 2) on a single chromosome with p SNPs; 9 indicates missing genotype
nmin scalar, minimum required number of progeny for proper imputation, default 30
exclude vector (LEN < p) of SNP indices to be excluded from analysis
Value
list (LEN 2) of lists. For each sire:
famID list (LEN N) of vectors (LEN n.progeny) of progeny indices relating to lines in genotype matrix
sireHap list (LEN N) of matrices (DIM 2 x p) of sire haplotypes (0, 1) on investigated chromosome
References
<NAME>., <NAME>., <NAME>., <NAME>. & <NAME>. (2014) hsphase: an R package for pedigree reconstruction, detection of recombination events, phasing and imputation of half-sib family groups. BMC Bioinformatics 15:172. https://CRAN.R-project.org/package=hsphase
Examples
data(targetregion)
hap <- makehap(unique(daughterSire), daughterSire, genotype.chr)

makehaplist Make list of sire haplotypes
Description
List of sire haplotypes is set up in the format required for hsrecombi. Haplotypes (obtained by external software) are provided.
Usage
makehaplist(daughterSire, hapSire, nmin = 1)
Arguments
daughterSire vector (LEN n) of sire ID for each progeny
hapSire matrix (DIM 2N x p + 1) of sire haplotypes at p SNPs; two lines per sire, the first column contains the sire ID
nmin scalar, minimum number of progeny required, default 1
Value
list (LEN 2) of lists.
For each sire:

famID      list (LEN N) of vectors (LEN n.progeny) of progeny indices relating to lines in the genotype matrix
sireHap    list (LEN N) of matrices (DIM 2 x p) of sire haplotypes (0, 1) on the investigated chromosome

Examples

data(targetregion)
hap <- makehaplist(daughterSire, hapSire)

makehappm               Make list of imputed haplotypes and estimate recombination rate

Description

The list of sire haplotypes is set up in the format required for hsrecombi. Sire haplotypes are imputed from progeny genotypes using the R package hsphase. Furthermore, recombination rate estimates between adjacent SNPs from hsphase are reported.

Usage

makehappm(sireID, daughterSire, genotype.chr, nmin = 30, exclude = NULL)

Arguments

sireID        vector (LEN N) of IDs of all sires
daughterSire  vector (LEN n) of sire ID for each progeny
genotype.chr  matrix (DIM n x p) of progeny genotypes (0, 1, 2) on a single chromosome with p SNPs; 9 indicates a missing genotype
nmin          scalar, minimum required number of progeny for proper imputation, default 30
exclude       vector (LEN < p) of SNP IDs (for filtering column names of genotype.chr) to be excluded from the analysis

Value

list (LEN 2) of lists. For each sire:

famID      list (LEN N) of vectors (LEN n.progeny) of progeny indices relating to lines in the genotype matrix
sireHap    list (LEN N) of matrices (DIM 2 x p) of sire haplotypes (0, 1) on the investigated chromosome
probRec    vector (LEN p - 1) of the proportion of recombinant progeny over all families between adjacent SNPs
numberRec  list (LEN N) of vectors (LEN n.progeny) of the number of recombination events per animal
gen        vector (LEN p) of genetic positions of SNPs (in cM)

References

<NAME>., <NAME>., <NAME>., <NAME>. & <NAME>. (2014) hsphase: an R package for pedigree reconstruction, detection of recombination events, phasing and imputation of half-sib family groups. BMC Bioinformatics 15:172.
https://CRAN.R-project.org/package=hsphase

Examples

data(targetregion)
hap <- makehappm(unique(daughterSire), daughterSire, genotype.chr, exclude = paste0('V', 301:310))

map.chr                 targetregion: physical map

Description

SNP marker map in the target region on chromosome BTA1 according to ARS-UCD1.2.

Usage

map.chr

Arguments

map.chr     data frame
SNP         SNP index
Chr         chromosome of SNP
locus_bp    physical position of SNP in bp
locus_Mb    physical position of SNP in Mbp
markername  official SNP name

Format

An object of class data.frame with 200 rows and 6 columns.

rao                     System of genetic-map functions

Description

Calculation of genetic distances from recombination rates given a mixing parameter.

Usage

rao(p, x, inverse = F)

Arguments

p        mixing parameter (see details); 0 <= p <= 1
x        vector of recombination rates
inverse  logical, if FALSE the recombination rate is mapped to Morgan units, if TRUE Morgan units are mapped to recombination rates (default is FALSE)

Details

Mixing parameter p=0 would match the Morgan, p=0.25 the Carter, p=0.5 the Kosambi and p=1 the Haldane map function. As an inverse of Rao's system of functions does not exist, NA will be produced if inverse = T. To approximate the inverse, call the function rao.inv(p, x).

Value

vector of genetic positions in Morgan units

References

<NAME>., <NAME>., <NAME>., <NAME>. & <NAME> (1977) A mapping function for man. Human Heredity 27: 99-104. doi: 10.1159/000152856

Examples

rao(0.25, seq(0, 0.5, 0.01))

rao.inv                 Approximation to the inverse of Rao's system of map functions

Description

Calculation of recombination rates from genetic distances given a mixing parameter.

Usage

rao.inv(p, x)

Arguments

p  mixing parameter (see details); 0 <= p <= 1
x  vector in Morgan units

Details

Mixing parameter p=0 would match the Morgan, p=0.25 the Carter, p=0.5 the Kosambi and p=1 the Haldane map function.

Value

vector of recombination rates

References

<NAME>., <NAME>., <NAME>., <NAME>. & <NAME> (1977) A mapping function for man. Human Heredity 27: 99-104.
doi: 10.1159/000152856

Examples

rao.inv(0.25, seq(0, 1, 0.1))

startvalue              Start values for maternal allele and haplotype frequencies

Description

Determine default start values for the Expectation Maximisation (EM) algorithm that is used to estimate the paternal recombination rate and maternal haplotype frequencies.

Usage

startvalue(Fam1, Fam2, Dd = 0, prec = 1e-06)

Arguments

Fam1  matrix (DIM n.progeny x 2) of progeny genotypes (0, 1, 2) of the genomic family with coupling phase sires (1) at the SNP pair
Fam2  matrix (DIM n.progeny x 2) of progeny genotypes (0, 1, 2) of the genomic family with repulsion phase sires (2) at the SNP pair
Dd    maternal LD, default 0
prec  minimum accepted start value for fAA, fAB, fBA; default 1e-6

Value

list (LEN 8)

fAA.start  frequency of maternal haplotype 1-1
fAB.start  frequency of maternal haplotype 1-0
fBA.start  frequency of maternal haplotype 0-1
p1         estimate of maternal allele frequency (allele 1) when the sire is heterozygous at SNP1
p2         estimate of maternal allele frequency (allele 1) when the sire is heterozygous at SNP2
L1         lower bound of maternal LD
L2         upper bound of maternal LD
critical   0 if parameter estimates are unique; 1 if parameter estimates at both solutions are valid

Examples

n1 <- 100
n2 <- 20
G1 <- matrix(ncol = 2, nrow = n1, sample(c(0:2), replace = TRUE, size = 2 * n1))
G2 <- matrix(ncol = 2, nrow = n2, sample(c(0:2), replace = TRUE, size = 2 * n2))
startvalue(G1, G2)

targetregion            Description of the targetregion data set

Description

The data set contains sire haplotypes, assignment of progeny to sires, progeny genotypes and physical map information in a target region. The raw data can be downloaded at the source given below. Then, executing the following R code leads to the data provided in targetregion.RData.

hapSire       matrix of sire haplotypes of each sire; 2 lines per sire; the first column contains the sire ID
daughterSire  vector of sire ID for each progeny
genotype.chr  matrix of progeny genotypes
map.chr       SNP marker map in the target region

Source

The data are available at RADAR doi: 10.22000/280

Examples

## Not run:
# download data from RADAR (requires about 1.4 GB)
url <- "https://www.radar-service.eu/radar-backend/archives/fqSPQoIvjtOGJlav/versions/1/content"
curl_download(url = url, 'tmp.tar')
untar('tmp.tar')
file.remove('tmp.tar')
path <- '10.22000-280/data/dataset'
## list of haplotypes of sires for each chromosome
load(file.path(path, 'sire_haplotypes.RData'))
## assign progeny to sire
daughterSire <- read.table(file.path(path, 'assign_to_family.txt'))[, 1]
## progeny genotypes
X <- as.matrix(read.table(file.path(path, 'XFam-ARS.txt')))
## physical and approximated genetic map
map <- read.table(file.path(path, 'map50K_ARS_reordered.txt'), header = T)
## select target region
chr <- 1
window <- 301:500
## map information of target region
map.chr <- map[map$Chr == chr, ][window, ]
## matrix of sire haplotypes in target region
hapSire <- rlist::list.rbind(haps[[chr]])
sireID <- 1:length(unique(daughterSire))
hapSire <- cbind(rep(sireID, each = 2), hapSire[, window])
## matrix of progeny genotypes
genotype.chr <- X[, map.chr$SNP]
colnames(genotype.chr) <- map.chr$SNP
save(list = c('genotype.chr', 'hapSire', 'map.chr', 'daughterSire'),
     file = 'targetregion.RData', compress = 'xz')
## End(Not run)
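Returning to the map functions documented above: given the documented signatures rao(p, x, inverse = F) and rao.inv(p, x), the relationship between the two can be illustrated with a small round-trip sketch. The parameter values are illustrative only, and since rao.inv() only approximates the non-existent analytic inverse, the recovered rates are approximate rather than exact.

```r
library(hsrecombi)

# recombination rates between 0 and 0.4
theta <- seq(0, 0.4, by = 0.05)

# recombination rate -> genetic distance (Morgan units),
# with Kosambi-like mixing (p = 0.5)
M <- rao(p = 0.5, x = theta)

# approximate inverse: genetic distance -> recombination rate
theta.back <- rao.inv(p = 0.5, x = M)

# round-trip error should be small but not exactly zero,
# because rao.inv() is an approximation
round(theta - theta.back, 3)
```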
github.com/jfrog/terraform-provider-artifactory/v6
README
---
[![JFrog logo](https://github.com/jfrog/terraform-provider-artifactory/raw/v6.37.0/.github/jfrog-logo-2022.svg "JFrog")](https://jfrog.com)

### Terraform Provider Artifactory

[![Actions Status](https://github.com/jfrog/terraform-provider-artifactory/workflows/release/badge.svg)](https://github.com/jfrog/terraform-provider-artifactory/actions) [![Go Report Card](https://goreportcard.com/badge/github.com/jfrog/terraform-provider-artifactory)](https://goreportcard.com/report/github.com/jfrog/terraform-provider-artifactory)

#### Releases

Current provider major release: **6.x**

See [CHANGELOG.md](https://github.com/jfrog/terraform-provider-artifactory/blob/v6.37.0/CHANGELOG.md) for full details.

#### Versions

We maintain two major versions of the Terraform provider: 6.x and 7.x. Version 6.x is compatible with Artifactory versions 7.49.x and below; version 7.x is only compatible with Artifactory 7.50.x and above, due to changes in the project's functionality.

#### Quick Start

Create a new Terraform file with `artifactory` resources.
Also see [sample.tf](https://github.com/jfrog/terraform-provider-artifactory/blob/v6.37.0/sample.tf):

HCL Example

```
# Required for Terraform 0.13 and up (https://www.terraform.io/upgrade-guides/0-13.html)
terraform {
  required_providers {
    artifactory = {
      source  = "registry.terraform.io/jfrog/artifactory"
      version = "6.6.1"
    }
  }
}

provider "artifactory" {
  // supply ARTIFACTORY_USERNAME, ARTIFACTORY_ACCESS_TOKEN, and ARTIFACTORY_URL as env vars
}

resource "artifactory_local_pypi_repository" "pypi-local" {
  key         = "pypi-local"
  description = "Repo created by Terraform Provider Artifactory"
}

resource "artifactory_artifact_webhook" "artifact-webhook" {
  key         = "artifact-webhook"
  event_types = ["deployed", "deleted", "moved", "copied"]
  criteria {
    any_local        = true
    any_remote       = false
    repo_keys        = [artifactory_local_pypi_repository.pypi-local.key]
    include_patterns = ["foo/**"]
    exclude_patterns = ["bar/**"]
  }
  url    = "http://tempurl.org/webhook"
  secret = "some-secret"
  proxy  = "proxy-key"
  custom_http_headers = {
    header-1 = "value-1"
    header-2 = "value-2"
  }
  depends_on = [artifactory_local_pypi_repository.pypi-local]
}
```

Initialize Terraform:

```
$ terraform init
```

Plan (or Apply):

```
$ terraform plan
```

#### Documentation

To use this provider in your Terraform module, follow the documentation on the [Terraform Registry](https://registry.terraform.io/providers/jfrog/artifactory/latest/docs).

#### License requirements

This provider requires access to Artifactory APIs, which are only available in the *licensed* pro and enterprise editions. You can determine which license you have by accessing the following URL: `${host}/artifactory/api/system/licenses/`

You can access it either via API or a web browser. It requires admin-level credentials, but it's one of the few APIs that will work without a license (side note: you can also install your license here with a `POST`).

```
$ curl -sL ${host}/artifactory/api/system/licenses/ | jq .
```

```
{
  "type" : "Enterprise Plus Trial",
  "validThrough" : "Jan 29, 2022",
  "licensedTo" : "JFrog Ltd"
}
```

The following 3 license types (`jq .type`) do **NOT** support APIs:

* Community Edition for C/C++
* JCR Edition
* OSS

#### Versioning

In general, this project follows the [Terraform Versioning Specification](https://www.terraform.io/plugin/sdkv2/best-practices/versioning#versioning-specification) as closely as we can for tagging releases of the package.

#### Developers Wiki

You can find building, testing and debugging information in the [Developers Wiki](https://github.com/jfrog/terraform-provider-artifactory/wiki) on GitHub.

#### Contributors

See the [contribution guide](https://github.com/jfrog/terraform-provider-artifactory/blob/v6.37.0/CONTRIBUTIONS.md).

#### License

Copyright (c) 2023 JFrog. Apache 2.0 licensed, see the [LICENSE](https://github.com/jfrog/terraform-provider-artifactory/blob/v6.37.0/LICENSE) file.
pyMTurkR
Package ‘pyMTurkR’

October 14, 2022

Type Package
Title A Client for the 'MTurk' Requester API
Version 1.1.5
Description Provides access to the latest 'Amazon Mechanical Turk' ('MTurk') <https://www.mturk.com> Requester API (version '2017-01-17'), replacing the now deprecated 'MTurkR' package.
License GPL-2
Encoding UTF-8
Imports reticulate, curl, stats, utils, XML
RoxygenNote 7.1.2
Suggests testthat (>= 2.1.0), covr, knitr, rmarkdown
NeedsCompilation no
Author <NAME> [aut] (https://twitter.com/tylerburleigh),
  <NAME> [aut] (<https://orcid.org/0000-0003-4097-6326>),
  <NAME> [ctb], <NAME> [ctb], <NAME> [ctb], <NAME> [ctb],
  <NAME> [cre]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2021-11-25 08:40:06 UTC

R topics documented:

pyMTurkR-package, AccountBalance, ApproveAssignment, AssignQualification, BlockWorker, ChangeHITType, CheckAWSKeys, ContactWorker, CreateHIT, CreateQualificationType, DisableHIT, DisposeQualificationType, emptydf, ExtendHIT, GenerateExternalQuestion, GenerateHITReviewPolicy, GenerateHITsFromTemplate, GenerateHTMLQuestion, GenerateNotification, GenerateQualificationRequirement, GetAssignment, GetBonuses, GetClient, GetHIT, GetHITsForQualificationType, GetQualificationRequests, GetQualifications, GetQualificationScore, GetQualificationType, GetReviewableHITs, GetReviewResultsForHIT, GrantBonus, GrantQualification, RegisterHITType, RejectAssignment, RevokeQualification, SearchHITs, SearchQualificationTypes, seconds, SendTestEventNotification, SetHITAsReviewing, SetHITTypeNotification, ToDataFrameAssignment, ToDataFrameBonusPayments, ToDataFrameHITs, ToDataFrameQualificationRequests, ToDataFrameQualificationRequirements, ToDataFrameQualifications, ToDataFrameQualificationTypes, ToDataFrameQuestionFormAnswers, ToDataFrameReviewableHITs,
ToDataFrameReviewResults, ToDataFrameWorkerBlock, UpdateQualificationScore, UpdateQualificationType

pyMTurkR-package        R Client for the MTurk Requester API

Description

This package provides access to the Amazon Mechanical Turk (MTurk) Requester API. The package provides users of the MTurk Requester User Interface with access to a variety of functions currently unavailable to them (the creation and maintenance of worker Qualifications, email notifications to workers through ContactWorker, automated reviewing of assignments using Review Policies, and streamlined bonus payments through GrantBonus). It also provides users with all functions available in the RUI directly in R, as well as a large number of other functions and a simple, interactive command-line tool for performing many operations.

Most users will find themselves using three principal functions: CreateHIT, GetAssignments, and ApproveAssignments, to create one or more HITs on the MTurk server, to retrieve completed assignments, and to approve assignments (and thus pay workers), respectively. As task complexity increases, additional functions are provided to handle worker qualifications, bonuses, emails to workers, automated review policies, bulk creation of HITs, and so forth.

Critically important: nothing in pyMTurkR will work during a given session without first setting AWS credentials. The easiest way to do this is to specify ‘AWS_ACCESS_KEY_ID’ and ‘AWS_SECRET_ACCESS_KEY’ environment variables using Sys.setenv() or by placing these values in an .Renviron file. Credentials can also be specified in an AWS CLI credentials file as described here.

This package is a reboot of the MTurkR package after the MTurk API was updated in June 2019 and rendered it obsolete. This package uses reticulate to wrap boto3, the AWS SDK for Python, and access the MTurk API functions.
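The credential setup described above can be sketched in a few lines. The key values shown are placeholders, not real credentials; CheckAWSKeys() and AccountBalance() are the package's own helpers for verifying the setup.

```r
library(pyMTurkR)

# Placeholder credentials -- substitute your own AWS keys,
# or put these values in an .Renviron file instead.
Sys.setenv(
  AWS_ACCESS_KEY_ID     = "AKIAXXXXXXXXXXXXXXXX",
  AWS_SECRET_ACCESS_KEY = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
)

# Verify that both environment variables are visible ...
CheckAWSKeys()

# ... and that the account is reachable.
AccountBalance()
```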
Author(s)

<NAME>

Maintainer: <NAME> <<EMAIL>>

References

Amazon Mechanical Turk
Amazon Mechanical Turk API Documentation

See Also

To get started using pyMTurkR, see the documentation for CreateHIT (for creating single tasks). For some tutorials on how to use MTurkR for specific use cases, see the following:

AccountBalance          Retrieve MTurk account balance

Description

Retrieves the amount of money (in US Dollars) in your MTurk account.

Usage

AccountBalance()

Details

AccountBalance takes no arguments. accountbalance(), get_account_balance() and getbalance() are aliases for AccountBalance.

Value

Returns a list of length 2: “AvailableBalance”, the balance of the account in US Dollars, and “RequestMetadata”, the metadata for the request. Note: the list is returned invisibly.

Author(s)

<NAME>, <NAME>

References

API Reference
MTurk Pricing Structure

Examples

## Not run:
AccountBalance()
## End(Not run)

ApproveAssignment       Approve Assignment(s)

Description

Approve one or more submitted assignments, or approve all assignments for a given HIT or HITType. Also allows you to approve a previously rejected assignment. This function spends money from your MTurk account.

Usage

ApproveAssignment(
  assignments,
  feedback = NULL,
  rejected = FALSE,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

assignments  A character string containing an AssignmentId, or a vector of multiple character strings containing multiple AssignmentIds, to approve.
feedback     An optional character string containing any feedback for a worker. This must have length 1 or length equal to the number of workers. Maximum of 1024 characters.
rejected     A logical indicating whether the assignment(s) had previously been rejected (default FALSE), or a vector of logicals.
verbose      Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

Approve assignments, by AssignmentId (as returned by GetAssignment) or by HITId or HITTypeId.
Must specify assignments. ApproveAllAssignments approves all assignments of a given HIT or HITType without first having to perform GetAssignment. ApproveAssignments() and approve() are aliases for ApproveAssignment. approveall() is an alias for ApproveAllAssignments.

Value

A data frame containing the list of AssignmentIds, feedback (if any), whether previous rejections were to be overridden, and whether or not each approval request was valid.

Author(s)

<NAME>, <NAME>

References

API Reference: Approve Assignment
API Reference: Approve Rejected Assignment

See Also

RejectAssignment

Examples

## Not run:
# Approve one assignment
ApproveAssignment(assignments = "26XXH0JPPSI23H54YVG7BKLEXAMPLE")
# Approve multiple assignments with the same feedback
ApproveAssignment(assignments = c("26XXH0JPPSI23H54YVG7BKLEXAMPLE1",
                                  "26XXH0JPPSI23H54YVG7BKLEXAMPLE2"),
                  feedback = "Great work!")
## End(Not run)

AssignQualification     Assign Qualification

Description

Assign a Qualification to one or more workers. The QualificationType should have already been created by CreateQualificationType, or the details of a new QualificationType can be specified atomically. This function also provides various options for automatically specifying the value of a worker’s QualificationScore based upon a worker’s statistics.

Usage

AssignQualification(
  qual = NULL,
  workers,
  value = 1,
  notify = FALSE,
  name = NULL,
  description = NULL,
  keywords = NULL,
  status = NULL,
  retry.delay = NULL,
  test = NULL,
  answerkey = NULL,
  test.duration = NULL,
  auto = NULL,
  auto.value = NULL,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

qual     A character string containing a QualificationTypeId.
workers  A character string containing a WorkerId, or a vector of character strings containing multiple WorkerIds.
value    A character string containing the value to be assigned to the worker(s) for the QualificationType.
notify   A logical indicating whether workers should be notified that they have been assigned the qualification.
Default is FALSE.

name           An optional character string specifying a name for a new QualificationType. This is visible to workers. Cannot be modified by UpdateQualificationType.
description    An optional character string specifying a longer description of the QualificationType. This is visible to workers. Maximum of 2000 characters.
keywords       An optional character string containing a comma-separated set of keywords by which workers can search for the QualificationType. Cannot be modified by UpdateQualificationType. Maximum of 1000 characters.
status         A character vector of “Active” or “Inactive”, indicating whether the QualificationType should be active and visible.
retry.delay    An optional time (in seconds) indicating how long workers have to wait before requesting the QualificationType after an initial rejection.
test           An optional character string consisting of a QuestionForm data structure, used as a test a worker must complete before the QualificationType is granted to them.
answerkey      An optional character string consisting of an AnswerKey data structure, used to automatically score the test.
test.duration  An optional time (in seconds) indicating how long workers have to complete the test.
auto           A logical indicating whether the Qualification is automatically granted to workers who request it. Default is FALSE.
auto.value     An optional parameter specifying the value that is automatically assigned to workers when they request it (if the Qualification is automatically granted).
verbose        Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

A very robust function to assign a Qualification to one or more workers. The simplest use of the function is to assign a Qualification of the specified value to one worker, but assignment to multiple workers is possible.
Workers can be assigned a Qualification previously created by CreateQualificationType, with the characteristics of a new QualificationType specified atomically, or a QualificationTypeId for a qualification created in the MTurk RUI. AssignQualifications(), assignqual() and AssociateQualificationWithWorker() are aliases.

Value

A data frame containing the list of workers, the QualificationTypeId, the value each worker was assigned, whether they were notified of their QualificationType assignment, and whether the request was valid.

Author(s)

<NAME>, <NAME>

References

API Reference

Examples

## Not run:
qual1 <- CreateQualificationType(name = "Worked for me before",
    description = "This qualification is for people who have worked for me before",
    status = "Active",
    keywords = "Worked for me before")
# assign qualification to single worker
AssignQualification(qual1$QualificationTypeId, "A1RO9UJNWXMU65", value = "50")
# delete the qualification
DeleteQualificationType(qual1)
# assign a new qualification (defined atomically)
AssignQualification(workers = "A1RO9UJNWXMU65",
    name = "Worked for me before",
    description = "This qualification is for people who have worked for me before",
    status = "Active",
    keywords = "Worked for me before")
## End(Not run)

BlockWorker             Block Worker(s)

Description

Block a worker. This prevents a worker from completing any HITs for you while they are blocked, but does not affect their ability to complete work for other requesters or affect their worker statistics.

Usage

BlockWorker(
  workers,
  reasons = NULL,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

workers  A character string containing a WorkerId, or a vector of character strings containing multiple WorkerIds.
reasons  A character string containing a reason for blocking a worker. This must have length 1 or length equal to the number of workers.
verbose  Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).
Details

BlockWorker prevents the specified worker from completing any of your HITs. BlockWorkers(), block() and CreateWorkerBlock() are aliases for BlockWorker. UnblockWorkers(), unblock(), and DeleteWorkerBlock() are aliases for UnblockWorker. blockedworkers() is an alias for GetBlockedWorkers.

Value

BlockWorker returns a data frame containing the list of workers, reasons (for blocking them), and whether the request to block was valid.

Author(s)

<NAME>, <NAME>

References

API Reference: Block

Examples

## Not run:
BlockWorker("A1RO9UJNWXMU65", reasons = "Did not follow HIT instructions.")
UnblockWorker("A1RO9UJNWXMU65")
## End(Not run)

ChangeHITType           Change HITType Properties of a HIT

Description

Change the HITType of a HIT from one HITType to another (e.g., to change the title, description, or qualification requirements associated with a HIT). This will cause a HIT to no longer be grouped with HITs of the previous HITType and instead be grouped with those of the new HITType. You cannot change the payment associated with a HIT without expiring the current HIT and creating a new one.

Usage

ChangeHITType(
  hit = NULL,
  old.hit.type = NULL,
  new.hit.type = NULL,
  title = NULL,
  description = NULL,
  reward = NULL,
  duration = NULL,
  keywords = NULL,
  auto.approval.delay = as.integer(2592000),
  qual.req = NULL,
  old.annotation = NULL,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

hit           An optional character string containing the HITId whose HITTypeId is to be changed, or a vector of character strings containing each of multiple HITIds to be changed. Must specify hit xor old.hit.type xor annotation.
old.hit.type  An optional character string containing the HITTypeId (or a vector of HITTypeIds) whose HITs are to be changed to the new HITTypeId. Must specify hit xor old.hit.type xor annotation.
new.hit.type  An optional character string specifying the new HITTypeId that this HIT should be visibly grouped with (and whose properties, e.g. reward amount, this HIT should inherit).
title                An optional character string containing the title for the HITType. All HITs of this HITType will be visibly grouped to workers according to this title.
description          An optional character string containing a description of the HITType. This is visible to workers.
reward               An optional character string containing the per-assignment reward amount, in U.S. Dollars (e.g., “0.15”).
duration             An optional character string containing the duration of each HIT, in seconds (for example, as returned by seconds).
keywords             An optional character string containing a comma-separated set of keywords by which workers can search for HITs of this HITType.
auto.approval.delay  An optional character string specifying the amount of time, in seconds (for example, as returned by seconds), before a submitted assignment is automatically approved.
qual.req             An optional character string containing a QualificationRequirement data structure, as returned by GenerateQualificationRequirement.
old.annotation       An optional character string specifying the value of the RequesterAnnotation field for a batch of HITs whose HITType should be changed. This can be used to change the HITType for all HITs from a “batch” created in the online Requester User Interface (RUI). To use a batch ID, the batch must be written in a character string of the form “BatchId:78382;”, where “78382” is the batch ID shown in the RUI. Must specify hit xor old.hit.type xor annotation.
verbose              Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

This function changes the HITType of a specified HIT (or multiple specific HITs, or all HITs of a specified HITType) to a new HITType. hit xor old.hit.type must be specified. Then, either a new HITTypeId can be specified or a new HITType can be created atomically by specifying the characteristics of the new HITType. changehittype() and UpdateHITTypeOfHIT() are aliases.
Value

A data frame listing the HITId of each HIT whose HITType was changed, its old HITTypeId and new HITTypeId, and whether the request for each HIT was valid.

Author(s)

<NAME>, <NAME>

References

API Reference

See Also

CreateHIT
RegisterHITType

Examples

## Not run:
hittype1 <- RegisterHITType(title = "10 Question Survey",
    description = "Complete a 10-question survey about news coverage and your opinions",
    reward = ".20",
    duration = seconds(hours = 1),
    keywords = "survey, questionnaire, politics")
a <- GenerateExternalQuestion("https://www.example.com/", "400")
hit <- CreateHIT(hit.type = hittype1$HITTypeId,
    assignments = 1,
    expiration = seconds(days = 1),
    question = a$string)
# change to HITType with new reward amount
hittype2 <- RegisterHITType(title = "10 Question Survey",
    description = "Complete a 10-question survey about news coverage and your opinions",
    reward = ".45",
    duration = seconds(hours = 1),
    keywords = "survey, questionnaire, politics")
ChangeHITType(hit = hit$HITId, new.hit.type = hittype2$HITTypeId)
# Change to new HITType, with arguments stated atomically
ChangeHITType(hit = hit$HITId,
    title = "10 Question Survey",
    description = "Complete a 10-question survey about news coverage and your opinions",
    reward = ".20",
    duration = seconds(hours = 1),
    keywords = "survey, questionnaire, politics")
# expire and dispose HIT
ExpireHIT(hit = hit$HITId)
DeleteHIT(hit = hit$HITId)
## End(Not run)

CheckAWSKeys            Helper function to check AWS Keys

Description

Checks for the existence of the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

Usage

CheckAWSKeys()

Value

A logical indicating whether AWS Keys were found as environment variables.

ContactWorker           Contact Worker(s)

Description

Contact one or more workers. This sends an email with a specified subject line and body text to one or more workers. This can be used to recontact workers in panel/longitudinal research or to send follow-up work.
Usage

ContactWorker(
  subjects,
  msgs,
  workers,
  batch = FALSE,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

subjects  A character string containing the subject line of an email, or a vector of character strings of length equal to the number of workers to be contacted, containing the subject line of the email for each worker. Maximum of 200 characters.
msgs      A character string containing the body text of an email, or a vector of character strings of length equal to the number of workers to be contacted, containing the body text of the email for each worker. Maximum of 4096 characters.
workers   A character string containing a WorkerId, or a vector of character strings containing multiple WorkerIds.
batch     A logical (default is FALSE), indicating whether workers should be contacted in batches of 100 (the maximum allowed by the API). This significantly reduces the time required to contact workers, but eliminates the ability to send customized messages to each worker.
verbose   Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

Send an email to one or more workers, either with a common subject and body text, or with a subject and body customized for each worker. In batch mode (when batch = TRUE), workers are contacted in batches of 100 with a single identical email. If one email fails (e.g., for one worker), the other emails should be sent successfully. That is to say, the request as a whole will be valid but will return additional information about which workers were not contacted. This information can be found in the MTurkR log file and by viewing the XML responses directly.

Note: It is only possible to contact workers who have performed work for you previously. When attempting to contact a worker who has not worked for you before, this function will indicate that the request was successful even though the email is not sent.
The function will return a value of “HardFailure” for Valid when this occurs. The printed results may therefore appear contradictory because MTurk reports that requests to contact these workers are Valid, but they are not actually contacted. In batch mode, this means that a batch will be valid but individual ineligible workers will be reported as not contacted.

ContactWorkers(), contact(), NotifyWorkers(), NotifyWorker(), and notify() are aliases.

Value

A data frame containing the list of workers, subjects, and messages, and whether the request to contact each of them was valid.

Author(s)

<NAME>, <NAME>

References

API Reference

Examples

## Not run:
a <- "Complete a follow-up survey for $.50"
b <- "Thanks for completing my HIT! I will pay a $.50 bonus if you complete a follow-up survey by Friday at 5:00pm. The survey can be completed at http://www.surveymonkey.com/s/pssurvey?c=A1RO9UEXAMPLE."
# contact one worker
c1 <- "A1RO9UEXAMPLE"
d <- ContactWorker(subjects = a, msgs = b, workers = c1)
# contact multiple workers in batch
c2 <- c("A1RO9EXAMPLE1", "A1RO9EXAMPLE2", "A1RO9EXAMPLE3")
e <- ContactWorker(subjects = a, msgs = b, workers = c2, batch = TRUE)
## End(Not run)

CreateHIT               Create HIT

Description

Create a single HIT. This is the most important function in the package. It creates a HIT based upon the specified parameters: (1) characteristics inherited from a HITType or specification of those parameters, and (2) some kind of Question data structure.

Usage

CreateHIT(
  hit.type = NULL,
  question = NULL,
  expiration,
  assignments = NULL,
  assignment.review.policy = NULL,
  hit.review.policy = NULL,
  annotation = NULL,
  unique.request.token = NULL,
  title = NULL,
  description = NULL,
  reward = NULL,
  duration = NULL,
  keywords = NULL,
  auto.approval.delay = NULL,
  qual.req = NULL,
  hitlayoutid = NULL,
  hitlayoutparameters = NULL,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

hit.type  An optional character string specifying the HITTypeId that this HIT should be generated from.
If used, the HIT will inherit title, description, keywords, reward, and other properties of the HITType. question A mandatory (unless hitlayoutid is specified) character string containing a QuestionForm, HTMLQuestion, or ExternalQuestion data structure. In lieu of a question parameter, a hitlayoutid and, optionally, hitlayoutparameters can be specified. expiration The time (in seconds) that the HIT should be available to workers. Must be between 30 and 31536000 seconds. assignments A character string specifying the number of assignments. assignment.review.policy An optional character string containing an Assignment-level ReviewPolicy data structure as returned by GenerateAssignmentReviewPolicy. hit.review.policy An optional character string containing a HIT-level ReviewPolicy data structure as returned by GenerateHITReviewPolicy. annotation An optional character string annotating the HIT. This is not visible to workers, but can be used as a label by which to identify the HIT from the API. unique.request.token An optional character string, included only for advanced users. It can be used to prevent creating a duplicate HIT. A HIT will not be created if a HIT was previously created (within a short time window) using the same unique.request.token. title A character string containing the title for the HITType. All HITs of this HITType will be visibly grouped to workers according to this title. Maximum of 128 characters. description A character string containing a description of the HITType. This is visible to workers. Maximum of 2000 characters. reward A character string containing the per-assignment reward amount, in U.S. Dollars (e.g., “0.15”). duration A character string containing the amount of time workers have to complete an assignment for HITs of this HITType, in seconds (for example, as returned by seconds). Minimum of 30 seconds and maximum of 365 days.
keywords An optional character string containing a comma-separated set of keywords by which workers can search for HITs of this HITType. Maximum of 1000 characters. auto.approval.delay An optional character string specifying the amount of time, in seconds (for example, as returned by seconds), before a submitted assignment is automatically approved. Maximum of 30 days. qual.req An optional list containing one or more QualificationRequirements, for example as returned by GenerateQualificationRequirement. hitlayoutid An optional character string including a HITLayoutId retrieved from a HIT “project” template generated in the Requester User Interface at ‘https://requester.mturk.com/creat If the HIT template includes variable placeholders, must also specify hitlayoutparameters. hitlayoutparameters Required if using a hitlayoutid with placeholder values. This must be a list of lists containing Name and String values. verbose Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose',TRUE). Details This function creates a new HIT and makes it available to workers. Characteristics of the HIT can either be specified by including a valid HITTypeId for “hit.type” or creating a new HITType by atomically specifying the characteristics of a new HITType. When creating a HIT, some kind of Question data structure must be specified. Either a QuestionForm, HTMLQuestion, or ExternalQuestion data structure can be specified for the question parameter or, if a HIT template created in the Requester User Interface (RUI) is being used, the appropriate hitlayoutid can be specified. If the HIT template contains variable placeholders, then the hitlayoutparameters should also be specified.
When creating ExternalQuestion HITs, the GenerateHITsFromTemplate function can emulate the HIT template functionality by converting a template .html file into a set of individual HIT .html files (that would also have to be uploaded to a web server) and executing CreateHIT for each of these external files with an appropriate ExternalQuestion data structure specified for the question parameter. Note that HIT and Assignment Review Policies are not currently supported. createhit(), create(), CreateHITWithHITType(), and createhitwithhittype() are aliases. Value A data frame containing the HITId and other details of the newly created HIT. Author(s) <NAME>, <NAME> References API Reference Examples ## Not run: CreateHIT(title = "Survey", description = "5 question survey", reward = "0.10", assignments = 1, expiration = seconds(days = 4), duration = seconds(hours = 1), keywords = "survey, questionnaire", question = GenerateExternalQuestion("https://www.example.com/","400")) ## End(Not run) CreateQualificationType Create QualificationType Description Create a QualificationType. This creates a QualificationType, but does not assign it to any workers. All characteristics of the QualificationType (except name and keywords) can be changed later with UpdateQualificationType. Usage CreateQualificationType( name, description, status, keywords = NULL, retry.delay = NULL, test = NULL, answerkey = NULL, test.duration = NULL, auto = NULL, auto.value = NULL, verbose = getOption("pyMTurkR.verbose", TRUE) ) Arguments name A name for the QualificationType. This is visible to workers. It cannot be modified by UpdateQualificationType. description A longer description of the QualificationType. This is visible to workers. Maximum of 2000 characters. status A character vector of “Active” or “Inactive”, indicating whether the QualificationType should be active and visible.
keywords An optional character string containing a comma-separated set of keywords by which workers can search for the QualificationType. Maximum 1000 characters. These cannot be modified by UpdateQualificationType. retry.delay An optional time (in seconds) indicating how long workers have to wait before requesting the QualificationType after an initial rejection. If not specified, retries are disabled and Workers can request a Qualification of this type only once, even if the Worker has not been granted the Qualification. test An optional character string consisting of a QuestionForm data structure, used as a test a worker must complete before the QualificationType is granted to them. answerkey An optional character string consisting of an AnswerKey data structure, used to automatically score the test. test.duration An optional time (in seconds) indicating how long workers have to complete the test. auto A logical indicating whether the Qualification is automatically granted to workers who request it. Default is NULL, meaning FALSE. auto.value An optional parameter specifying the value that is automatically assigned to workers when they request it (if the Qualification is automatically granted). verbose Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose',TRUE). Details A function to create a QualificationType. Active QualificationTypes are visible to workers and to other requesters. All characteristics of the QualificationType, other than the name and keywords, can later be modified by UpdateQualificationType. The QualificationType can then be used to assign Qualifications to workers with AssignQualification and invoked as QualificationRequirements in RegisterHITType and/or CreateHIT operations. createqual() is an alias. Value A data frame containing the QualificationTypeId and other details of the newly created QualificationType.
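The create-assign-require workflow described in Details can be sketched end to end. This is an illustrative sketch only: the WorkerId is a placeholder, and the AssignQualification and RegisterHITType argument names are taken from the package's other documented functions rather than from this section.

```r
## Not run:
# create a QualificationType, grant it to a (hypothetical) worker,
# then require it when registering a HITType
qual <- CreateQualificationType(name = "Past worker",
    description = "Granted to workers who completed prior HITs",
    status = "Active")

# assign the Qualification to a worker (placeholder WorkerId)
AssignQualification(qual = qual$QualificationTypeId,
    workers = "A1RO9UEXAMPLE", value = "1")

# invoke the Qualification as a requirement for a new HITType
req <- GenerateQualificationRequirement(
    list(list(QualificationTypeId = qual$QualificationTypeId,
              Comparator = "Exists",
              IntegerValues = 1)))
hittype <- RegisterHITType(title = "Follow-up survey",
    description = "Survey for past workers",
    reward = ".25",
    duration = seconds(hours = 1),
    keywords = "survey",
    qual.req = req)
## End(Not run)
```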
Author(s) <NAME>, <NAME> References API Reference Examples ## Not run: # Create a Qualification Type qual1 <- CreateQualificationType(name="Worked for me before", description="This qualification is for people who have worked for me before", status = "Active", keywords = "Worked for me before") DisposeQualificationType(qual1$QualificationTypeId) # Create a Qualification Type with a Qualification Test f <- system.file("templates/qualificationtest1.xml", package = "pyMTurkR") QuestionForm <- paste0(readLines(f, warn = FALSE), collapse = "") qual2 <- CreateQualificationType(name = "Qual0001", description = "This is a qualification", status = "Active", test = QuestionForm, test.duration = 30) DisposeQualificationType(qual2$QualificationTypeId) # Create a Qualification Type with a Qualification Test and Answer Key f <- system.file("templates/qualificationtest1.xml", package = "pyMTurkR") QuestionForm <- paste0(readLines(f, warn = FALSE), collapse = "") f <- system.file("templates/answerkey1.xml", package = "pyMTurkR") AnswerKey <- paste0(readLines(f, warn = FALSE), collapse = "") qual3 <- CreateQualificationType(name = "Qual0001", description = "This is a qualification", status = "Active", test = QuestionForm, test.duration = 30, answerkey = AnswerKey) DisposeQualificationType(qual3$QualificationTypeId) ## End(Not run) DisableHIT Disable/Expire or Delete HIT Description This function will allow you to expire a HIT early, which means it will no longer be available for new workers to accept. Optionally, when disabling the HIT you can approve all pending assignments and you can also try to delete the HIT. Usage DisableHIT( hit = NULL, hit.type = NULL, annotation = NULL, approve.pending.assignments = FALSE, skip.delete.prompt = FALSE, verbose = getOption("pyMTurkR.verbose", TRUE) ) Arguments hit A character string containing a HITId or a vector of character strings containing multiple HITIds. Must specify hit xor hit.type xor annotation.
hit.type An optional character string containing a HITTypeId (or a vector of HITTypeIds). Must specify hit xor hit.type xor annotation. annotation An optional character string specifying the value of the RequesterAnnotation field for a batch of HITs. This can be used to disable all HITs from a “batch” created in the online Requester User Interface (RUI). To use a batch ID, the batch must be written in a character string of the form “BatchId:78382;”, where “78382” is the batch ID shown in the RUI. Must specify hit xor hit.type xor annotation. approve.pending.assignments A logical indicating whether the pending assignments should be approved when the HIT is disabled. skip.delete.prompt A logical indicating whether to skip the prompt that asks you to confirm the delete operation. If TRUE, you will not be asked to confirm that you wish to delete the HITs. The prompt is a safeguard flag to protect the user from mistakenly deleting HITs. verbose Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose',TRUE). Details Be careful when deleting a HIT: this will also delete the assignment data! Calling this function with DeleteHIT(), deletehit(), DisposeHIT(), or disposehit() will result in deleting the HIT. The user will be prompted before continuing, unless skip.delete.prompt is TRUE. If you disable a HIT while workers are still working on an assignment, they will still be able to complete their task. DisposeHIT(), ExpireHIT(), DeleteHIT(), disablehit(), disposehit(), expirehit(), and deletehit() are aliases. Value A data frame containing a list of HITs and whether the request to disable each of them was valid.
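The approve.pending.assignments and skip.delete.prompt arguments described above can be sketched as follows; the HITId is a placeholder, and the snippet is illustrative rather than runnable as-is.

```r
## Not run:
# expire a HIT and approve any assignments still pending review
DisableHIT(hit = "2MQB727M0IGF304GJ16S1F4VEXAMPLE",
           approve.pending.assignments = TRUE)

# delete the same HIT via the DeleteHIT() alias, skipping the
# interactive confirmation prompt (use with care: this also
# deletes the assignment data)
DeleteHIT(hit = "2MQB727M0IGF304GJ16S1F4VEXAMPLE",
          skip.delete.prompt = TRUE)
## End(Not run)
```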
Author(s) <NAME>, <NAME> References API Reference: Update Expiration for HIT API Reference: Delete HIT Examples ## Not run: # Disable a single HIT hittype1 <- RegisterHITType(title = "10 Question Survey", description = "Complete a 10-question survey", reward = ".20", duration = seconds(hours=1), keywords = "survey, questionnaire, politics") a <- GenerateExternalQuestion("https://www.example.com/", "400") hit1 <- CreateHIT(hit.type = hittype1$HITTypeId, assignments = 1, expiration = seconds(days=1), question = a$string) DisableHIT(hit = hit1$HITId) # Disable all HITs of a given HITType DisableHIT(hit.type = hit1$HITTypeId) # Disable all HITs of a given batch from the RUI DisableHIT(annotation="BatchId:78382;") # Delete the HIT previously disabled DeleteHIT(hit = hit1$HITId) ## End(Not run) DisposeQualificationType Dispose QualificationType Description Dispose of a QualificationType. This deletes the QualificationType, Qualification scores for all workers, and all records thereof. Usage DisposeQualificationType(qual, verbose = getOption("pyMTurkR.verbose", TRUE)) Arguments qual A character string containing a QualificationTypeId. verbose Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose',TRUE). Details A function to dispose of a QualificationType that is no longer needed. It will dispose of the QualificationType and any HIT types that are associated with it. It does not revoke Qualifications already assigned to Workers. Any pending requests for this Qualification are automatically rejected. DisposeQualificationType(), disposequal(), and deletequal() are aliases. Value A data frame containing the QualificationTypeId and whether the request to dispose was valid.
Author(s) <NAME>, <NAME> References API Reference Examples ## Not run: qual1 <- CreateQualificationType(name = "Worked for me before", description = "This qualification is for people who have worked for me before", status = "Active", keywords = "Worked for me before") DisposeQualificationType(qual1$QualificationTypeId) ## End(Not run) emptydf Helper function that creates an empty data.frame Description Helper function that creates an empty data.frame Usage emptydf(nrow, ncol, names) Arguments nrow Number of rows ncol Number of columns names Names of the columns Value A data frame of NAs, with the given column names ExtendHIT Extend HIT Description Extend the time remaining on a HIT or the number of assignments available for the HIT. Usage ExtendHIT( hit = NULL, hit.type = NULL, annotation = NULL, add.assignments = NULL, add.seconds = NULL, unique.request.token = NULL, verbose = getOption("pyMTurkR.verbose", TRUE) ) Arguments hit An optional character string containing a HITId or a vector of character strings containing multiple HITIds. Must specify hit xor hit.type xor annotation. hit.type An optional character string containing a HITTypeId (or a vector of HITTypeIds). Must specify hit xor hit.type xor annotation. annotation An optional character string specifying the value of the RequesterAnnotation field for a batch of HITs. This can be used to extend all HITs from a “batch” created in the online Requester User Interface (RUI). To use a batch ID, the batch must be written in a character string of the form “BatchId:78382;”, where “78382” is the batch ID shown in the RUI. Must specify hit xor hit.type xor annotation. add.assignments An optional character string containing the number of assignments to add to the HIT. Must be between 1 and 1000000000. add.seconds An optional character string containing the amount of time to extend the HIT, in seconds (for example, returned by seconds). Must be between 1 hour (3600 seconds) and 365 days.
unique.request.token An optional character string, included only for advanced users. verbose Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose',TRUE). Details A useful function for adding time and/or additional assignments to a HIT. If the HIT is already expired, this reactivates the HIT for the specified amount of time. If all assignments have already been submitted, this reactivates the HIT with the specified number of assignments and previously specified expiration. Must specify a HITId xor a HITTypeId. If multiple HITs or a HITTypeId are specified, each HIT is extended by the specified amount. extend() is an alias. Value A data frame containing the HITId, assignment increment, time increment, and whether each extension request was valid. Author(s) <NAME>, <NAME> References API Reference: Update Expiration API Reference: Create Additional Assignments for HIT Examples ## Not run: a <- GenerateExternalQuestion("https://www.example.com/","400") hit1 <- CreateHIT(title = "Example", description = "Simple Example HIT", reward = ".01", expiration = seconds(days = 4), duration = seconds(hours = 1), keywords = "example", question = a$string) # add assignments ExtendHIT(hit = hit1$HITId, add.assignments = "20") # add time ExtendHIT(hit = hit1$HITId, add.seconds = seconds(days=1)) # add assignments and time ExtendHIT(hit = hit1$HITId, add.assignments = "20", add.seconds = seconds(days=1)) # cleanup DisableHIT(hit = hit1$HITId) ## End(Not run) ## Not run: # Extend all HITs of a given batch from the RUI ExtendHIT(annotation="BatchId:78382;", add.assignments = "20") ## End(Not run) GenerateExternalQuestion Generate ExternalQuestion Description Generate an ExternalQuestion data structure for use in the ‘Question’ parameter of the CreateHIT operation.
Usage GenerateExternalQuestion(url, frame.height = 400) Arguments url A character string containing the URL (served over HTTPS) of a HIT file stored anywhere other than the MTurk server. frame.height A character string containing the integer value (in pixels) of the frame height for the ExternalQuestion iframe. Details An ExternalQuestion is a HIT stored anywhere other than the MTurk server that is displayed to workers within an HTML iframe of the specified height. The URL should point to a page — likely an HTML form — that can retrieve several URL GET parameters for “AssignmentId” and “WorkerId”, which are attached by MTurk when opening the URL. Note: url must be HTTPS. Value A list containing xml.parsed, an XML data structure, string, the XML formatted as a character string, and url.encoded, a character string containing a URL query parameter-formatted ExternalQuestion data structure for use in the question parameter of CreateHIT. Author(s) <NAME>, <NAME> References API Reference See Also CreateHIT Examples ## Not run: a <- GenerateExternalQuestion(url="https://www.example.com/", frame.height="400") hit1 <- CreateHIT(title = "Survey", description = "5 question survey", reward = ".10", expiration = seconds(days = 4), duration = seconds(hours = 1), keywords = "survey, questionnaire", question = a$string) ExpireHIT(hit1$HITId) DisposeHIT(hit1$HITId) ## End(Not run) GenerateHITReviewPolicy Generate HIT and/or Assignment ReviewPolicies Description Generate a HIT ReviewPolicy and/or Assignment ReviewPolicy data structure for use in CreateHIT. Usage GenerateHITReviewPolicy(...) Arguments ... ReviewPolicy parameters passed as named arguments. Details Converts a list of ReviewPolicy parameters into a ReviewPolicy data structure. A ReviewPolicy works by testing whether an assignment or a set of assignments satisfies a particular condition. If that condition is satisfied, then specified actions are taken.
ReviewPolicies come in two “flavors”: Assignment-level ReviewPolicies take actions based on “known” answers to questions in the HIT and HIT-level ReviewPolicies take actions based on agreement among multiple assignments. It is possible to specify both Assignment-level and HIT-level ReviewPolicies for the same HIT. Assignment-level ReviewPolicies involve checking whether that assignment includes particular (“correct”) answers. For example, an assignment might be tested to see whether a correct answer is given to one question by each worker as a quality control measure. The ReviewPolicy works by checking whether a specified percentage of known answers are correct. So, if a ReviewPolicy specifies two known answers for a HIT and the worker gets one of those known answers correct, the ReviewPolicy scores the assignment at 50 (i.e., 50 percent). The ReviewPolicy can then be customized to take three kinds of actions depending on that score: ApproveIfKnownAnswerScoreIsAtLeast (approve the assignment automatically), RejectIfKnownAnswerScoreIsLessThan (reject the assignment automatically), and ExtendIfKnownAnswerScoreIsLessThan (add additional assignments and/or time to the HIT automatically). The various actions can be combined to, e.g., both reject an assignment and add further assignments if a score is below the threshold, or reject below a threshold and approve above, etc. HIT-level ReviewPolicies involve checking whether multiple assignments submitted for the same HIT “agree” with one another. Agreement here is very strict: answers must be exactly the same across assignments for them to be matched. As such, it is probably only appropriate to use closed-ended (e.g., multiple choice) questions for HIT-level ReviewPolicies; otherwise ReviewPolicy actions might be taken on irrelevant differences (e.g., word capitalization, spacing, etc.).
The ReviewPolicy works by checking whether answers to multiple assignments are the same (or at least whether a specified percentage of answers to a given question are the same). For example, if the goal is to categorize an image into one of three categories, the ReviewPolicy will check whether two of three workers agree on the categorization (known as the “HIT Agreement Score”, which is a percentage of all workers who agree). Depending on the value of the HIT Agreement Score, actions can be taken. As of October 2014, only one action can be taken: ExtendIfHITAgreementScoreIsLessThan (extending the HIT in assignments by the number of assignments specified in ExtendMaximumAssignments or time as specified in ExtendMinimumTimeInSeconds). Another agreement score (the “Worker Agreement Score”) measures the percentage of a worker’s responses that agree with other workers’ answers. Depending on the Worker Agreement Score, two actions can be taken: ApproveIfWorkerAgreementScoreIsAtLeast (to approve the assignment automatically) or RejectIfWorkerAgreementScoreIsLessThan (to reject the assignment automatically, with an optional reject reason supplied with RejectReason). A logical value (DisregardAssignmentIfRejected) specifies whether to exclude rejected assignments from the calculation of the HIT Agreement Score. Note: An optional DisregardAssignmentIfKnownAnswerScoreIsLessThan excludes assignments if those assignments score below a specified “known” answers threshold as determined by a separate Assignment-level ReviewPolicy. Value A dictionary object HITReviewPolicy or AssignmentReviewPolicy.
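The Extend* actions described in Details can be sketched in the same style; the parameter names are taken from the Details text above, but the threshold values and question identifiers here are illustrative placeholders.

```r
## Not run:
# Assignment-level policy: extend the HIT when a worker scores
# below 50 on the known answers, adding up to 2 assignments
# and 1 hour of time
listc <- list(AnswerKey = list("QuestionId1" = "B"),
              ExtendIfKnownAnswerScoreIsLessThan = 50,
              ExtendMaximumAssignments = 2,
              ExtendMinimumTimeInSeconds = 3600)
policyc <- do.call(GenerateAssignmentReviewPolicy, listc)

# HIT-level policy: extend the HIT when the HIT Agreement Score
# falls below 66
listd <- list(QuestionIds = c("Question1"),
              QuestionAgreementThreshold = 66,
              ExtendIfHITAgreementScoreIsLessThan = 66,
              ExtendMaximumAssignments = 2)
policyd <- do.call(GenerateHITReviewPolicy, listd)
## End(Not run)
```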
Author(s) <NAME>, <NAME> References API Reference: QuestionForm API Reference (ReviewPolicies) API Reference (Data Structure) Examples ## Not run: # Generate a HIT Review Policy with GenerateHITReviewPolicy lista <- list(QuestionIds = c("Question1", "Question2"), QuestionAgreementThreshold = 75, ApproveIfWorkerAgreementScoreIsAtLeast = 75, RejectIfWorkerAgreementScoreIsLessThan = 25) policya <- do.call(GenerateHITReviewPolicy, lista) # Manually define a HIT Review Policy policya <- dict( list( 'PolicyName' = 'SimplePlurality/2011-09-01', 'Parameters' = list( dict( 'Key' = 'QuestionIds', 'Values' = list( 'Question1', 'Question2' ) ), dict( 'Key' = 'QuestionAgreementThreshold', 'Values' = list( '75' ) ), dict( 'Key' = 'ApproveIfWorkerAgreementScoreIsAtLeast', 'Values' = list( '75' ) ), dict( 'Key' = 'RejectIfWorkerAgreementScoreIsLessThan', 'Values' = list( '25' ) ) ) )) # Generate an Assignment Review Policy with GenerateAssignmentReviewPolicy listb <- list(AnswerKey = list("QuestionId1" = "B", "QuestionId2" = "A"), ApproveIfKnownAnswerScoreIsAtLeast = 99) policyb <- do.call(GenerateAssignmentReviewPolicy, listb) # Manually define an Assignment Review Policy policyb <- dict( list( 'PolicyName' = 'ScoreMyKnownAnswers/2011-09-01', 'Parameters' = list( dict( 'Key' = 'AnswerKey', 'MapEntries' = list( dict( 'Key' = 'QuestionId1', 'Values' = list('B') ), dict( 'Key' = 'QuestionId2', 'Values' = list('A') ) ) ), dict( 'Key' = 'ApproveIfKnownAnswerScoreIsAtLeast', 'Values' = list( '99' ) ) ) )) ## End(Not run) GenerateHITsFromTemplate Generate HITs from a Template Description Generate individual HIT .html files from a local .html HIT template file, in the same fashion as the MTurk Requester User Interface (RUI).
Usage GenerateHITsFromTemplate( template, input, filenames = NULL, write.files = FALSE ) Arguments template A character string or filename for an .html HIT template input A data.frame containing one row for each HIT to be created and columns named identically to the placeholders in the HIT template file. Operation will fail if variable names do not correspond. filenames An optional list of filenames for the HITs to be created. Must be equal to the number of rows in input. write.files A logical specifying whether HIT .html files should be created and stored in the working directory or, alternatively, whether HITs should be returned as character vectors in a list. Details GenerateHITsFromTemplate generates individual HIT question content from a HIT template (containing placeholders for input data of the form ${variablename}). The tool provides functionality analogous to the MTurk RUI HIT template and can be performed on .html files generated therein. The HITs are returned as a list of character strings. If write.files = TRUE, a side effect occurs in the form of one or more .html files being written to the working directory, with filenames specified by the filenames option or, if filenames = NULL, of the form “NewHIT1.html”, “NewHIT2.html”, etc. Value A list containing a character string for each HIT generated from the template.
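The write.files side effect described in Details can be sketched as follows; the filenames are arbitrary placeholders, and the template path reuses the package's bundled example template.

```r
## Not run:
# write one .html file per row of `input` to the working
# directory instead of only returning the HITs as strings
temp <- system.file("templates/htmlquestion2.xml", package = "pyMTurkR")
a <- data.frame(hittitle = c("Title 1", "Title 2"),
                hitvariable = c("Text 1", "Text 2"),
                stringsAsFactors = FALSE)
GenerateHITsFromTemplate(template = temp, input = a,
                         filenames = c("hit1.html", "hit2.html"),
                         write.files = TRUE)
## End(Not run)
```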
Author(s) <NAME> References API Reference: Operation API Reference: ExternalQuestion Data Structure Examples ## Not run: # create/edit template HTML file # should have placeholders of the form `${varName}` for variable values temp <- system.file("templates/htmlquestion2.xml", package = "pyMTurkR") readLines(temp) # create/load data.frame of template variable values a <- data.frame(hittitle = c("HIT title 1","HIT title 2","HIT title 3"), hitvariable = c("HIT text 1","HIT text 2","HIT text 3"), stringsAsFactors=FALSE) # create HITs from template and data.frame values temps <- GenerateHITsFromTemplate(template = temp, input = a) # create HITs from template hittype1 <- RegisterHITType(title = "2 Question Survey", description = "Complete a 2-question survey", reward = ".20", duration = seconds(hours=1), keywords = "survey, questionnaire, politics") hits <- lapply(temps, function(x) { CreateHIT(hit.type = hittype1$HITTypeId, expiration = seconds(days = 1), assignments = 2, question = GenerateHTMLQuestion(x)$string) }) # cleanup ExpireHIT(hit.type = hittype1$HITTypeId) DisposeHIT(hit.type = hittype1$HITTypeId) ## End(Not run) GenerateHTMLQuestion Generate HTMLQuestion Description Generate an HTMLQuestion data structure for use in the ‘Question’ parameter of CreateHIT. Usage GenerateHTMLQuestion(character = NULL, file = NULL, frame.height = 450) Arguments character An optional character string from which to construct the HTMLQuestion data structure. file An optional character string containing a filename from which to construct the HTMLQuestion data structure. frame.height A character string containing the integer value (in pixels) of the frame height for the HTMLQuestion iframe. Details Must specify either character or file. 
To be valid, an HTMLQuestion data structure must be a complete XHTML document, including doctype declaration, head and body tags, and a complete HTML form (including the form tag with a submit URL, the assignmentId for the assignment as a form field, at least one substantive form field (can be hidden), and a submit button that posts to the external submit URL; see GenerateExternalQuestion). If you fail to include a complete form, workers will be unable to submit the HIT. See the API Documentation for a complete example. MTurkR comes pre-installed with several simple examples of HTMLQuestion HIT templates, which can be found by examining the ‘templates’ directory of the installed package directory. These examples include simple HTMLQuestion forms, as well as templates for categorization, linking to off-site surveys, and sentiment analysis. Note that the examples, while validated as complete, do not include CSS styling. Value A list containing xml.parsed, an XML data structure, string, the XML formatted as a character string, and url.encoded, a character string containing a URL query parameter-formatted HTMLQuestion data structure for use in the question parameter of CreateHIT. Author(s) <NAME>, <NAME> References API Reference See Also CreateHIT GenerateExternalQuestion Examples ## Not run: f <- system.file("templates/htmlquestion1.xml", package = "pyMTurkR") a <- GenerateHTMLQuestion(file=f) hit1 <- CreateHIT(title = "Survey", description = "5 question survey", reward = ".10", expiration = seconds(days = 4), duration = seconds(hours = 1), keywords = "survey, questionnaire", question = a$string) ExpireHIT(hit1$HITId) DisposeHIT(hit1$HITId) ## End(Not run) GenerateNotification Generate Notification Description Generate a HITType Notification data structure for use in SetHITTypeNotification.
Usage GenerateNotification( destination, transport = "Email", event.type, version = "2006-05-05" ) Arguments destination Currently, a character string containing a complete email address (if transport="Email"), the SQS URL (if transport="SQS"), or the SNS topic (if transport="SNS"). transport Only “Email”, “SQS” and “SNS” are supported. AWS recommends the use of the SQS transport. event.type A character string containing one of: AssignmentAccepted, AssignmentAbandoned, AssignmentReturned, AssignmentSubmitted, AssignmentRejected, AssignmentApproved, HITCreated, HITExtended, HITDisposed, HITReviewable, HITExpired (the default), or Ping. version Version of the HITType Notification API to use. Intended only for advanced users. Details Generate a Notification data structure for use in the notification option of SetHITTypeNotification. Value A dictionary object containing the Notification data structure. Author(s) <NAME>, <NAME> References API Reference API Reference: Concept See Also SetHITTypeNotification SendTestEventNotification GenerateQualificationRequirement Generate QualificationRequirement Description Generate a QualificationRequirement data structure for use with CreateHIT or RegisterHITType. Usage GenerateQualificationRequirement(quals) Arguments quals A list of lists of Qualification parameters. Each list contains: QualificationTypeId (string, REQUIRED), Comparator (string, REQUIRED), IntegerValues (vector of integers), LocaleValues (list containing Country = string, and optionally Subdivision = string), RequiredToPreview (logical), ActionsGuarded (string). See example below. Details A convenience function to translate the details of a QualificationRequirement into the necessary structure for use in the qual.req parameter of CreateHIT or RegisterHITType. The function accepts a list of lists of Qualification parameters.
Value Returns a special reticulated ‘tuple’ object. Author(s) <NAME>, <NAME> References API Reference See Also CreateHIT RegisterHITType Examples ## Not run: quals.list <- list( list(QualificationTypeId = "2F1KVCNHMVHV8E9PBUB2A4J79LU20F", Comparator = "Exists", IntegerValues = 1, RequiredToPreview = TRUE ), list(QualificationTypeId = "00000000000000000071", Comparator = "EqualTo", LocaleValues = list(Country = "US"), RequiredToPreview = TRUE ) ) GenerateQualificationRequirement(quals.list) -> qual.req ## End(Not run) GetAssignment Get Assignment(s) Description Get an assignment or multiple assignments for one or more HITs (or a HITType) as a data frame. Usage GetAssignment( assignment = NULL, hit = NULL, hit.type = NULL, annotation = NULL, status = NULL, results = as.integer(100), pagetoken = NULL, get.answers = FALSE, persist.on.error = FALSE, verbose = getOption("pyMTurkR.verbose", TRUE) ) Arguments assignment An optional character string specifying the AssignmentId of an assignment to return. Must specify assignment xor hit xor hit.type xor annotation. hit An optional character string specifying the HITId whose assignments are to be returned, or a vector of character strings specifying multiple HITIds all of whose assignments are to be returned. Must specify assignment xor hit xor hit.type xor annotation. hit.type An optional character string specifying the HITTypeId (or a vector of HITTypeIds) of one or more HITs whose assignments are to be returned. Must specify assignment xor hit xor hit.type xor annotation. annotation An optional character string specifying the value of the RequesterAnnotation field for a batch of HITs. This can be used to retrieve all assignments for all HITs from a “batch” created in the online Requester User Interface (RUI). To use a batch ID, the batch must be written in a character string of the form “BatchId:78382;”, where “78382” is the batch ID shown in the RUI. Must specify assignment xor hit xor hit.type xor annotation.
status  An optional vector of character strings (containing one or more of "Approved", "Rejected", "Submitted"), specifying whether only a subset of assignments should be returned. If NULL, all assignments are returned (the default). Only applies when hit or hit.type are specified; ignored otherwise.

results  An optional character string indicating how many results to fetch per page. Must be between 1 and 100. Most users can ignore this.

pagetoken  An optional character string indicating which page of search results to start at. Most users can ignore this.

get.answers  An optional logical indicating whether to also get the answers. If TRUE, the returned object is a list with Assignments and Answers.

persist.on.error  A boolean specifying whether to persist on an error. Errors can sometimes happen when the server times out, in cases where large numbers of Assignments are being retrieved.

verbose  Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

This function returns the requested assignments. The function must specify an AssignmentId xor a HITId xor a HITTypeId. If an AssignmentId is specified, only that assignment is returned. If a HIT or HITType is specified, default behavior is to return all assignments through a series of sequential (but invisible) API calls, meaning that returning large numbers of assignments (or assignments for a large number of HITs in a single request) may be time consuming.

GetAssignments(), assignment(), assignments(), and ListAssignmentsForHIT() are aliases.

Value

A data frame representing an assignment or multiple assignments for one or more HITs (or a HITType).
Author(s)

<NAME>, <NAME>

References

API Reference: GetAssignment
API Reference: ListAssignmentsForHIT

Examples

## Not run:
# get an assignment
GetAssignment(assignment = "26XXH0JPPSI23H54YVG7BKLEXAMPLE")

# get all assignments for a HIT
GetAssignment(hit = "2MQB727M0IGF304GJ16S1F4VE3AYDQ")

# get all assignments for a HITType
GetAssignment(hit.type = "2FFNCWYB49F9BBJWA4SJUNST5OFSOW")

# get all assignments for an online batch from the RUI
GetAssignment(annotation = "BatchId:78382;")
## End(Not run)

GetBonuses              Get Bonus Payments

Description

Get details of bonuses paid to workers, by HIT, HITType, Assignment, or Annotation.

Usage

GetBonuses(
  assignment = NULL,
  hit = NULL,
  hit.type = NULL,
  annotation = NULL,
  results = as.integer(100),
  pagetoken = NULL,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

assignment  An optional character string containing an AssignmentId whose bonuses should be returned. Must specify assignment xor hit xor hit.type xor annotation.

hit  An optional character string containing a HITId whose bonuses should be returned. Must specify assignment xor hit xor hit.type xor annotation.

hit.type  An optional character string containing a HITTypeId (or a vector of HITTypeIds) whose bonuses should be returned. Must specify assignment xor hit xor hit.type xor annotation.

annotation  An optional character string specifying the value of the RequesterAnnotation field for a batch of HITs. This can be used to retrieve bonuses for all HITs from a "batch" created in the online Requester User Interface (RUI). To use a batch ID, the batch must be written in a character string of the form "BatchId:78382;", where "78382" is the batch ID shown in the RUI. Must specify assignment xor hit xor hit.type xor annotation.

results  An optional character string indicating how many results to fetch per page. Must be between 1 and 100. Most users can ignore this.

pagetoken  An optional character string indicating which page of search results to start at.
Most users can ignore this.

verbose  Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

Retrieve bonuses previously paid to a specified HIT, HITType, Assignment, or Annotation.

bonuses(), getbonuses(), ListBonusPayments() and listbonuspayments() are aliases.

Value

A data frame containing the details of each bonus, specifically: AssignmentId, WorkerId, Amount, Reason, and GrantTime.

Author(s)

<NAME>, <NAME>

References

API Reference

See Also

GrantBonus

Examples

## Not run:
# Get bonuses for a given assignment
GetBonuses(assignment = "26XXH0JPPSI23H54YVG7BKLO82DHNU")

# Get all bonuses for a given HIT
GetBonuses(hit = "2MQB727M0IGF304GJ16S1F4VE3AYDQ")

# Get bonuses from all HITs of a given batch from the RUI
GetBonuses(annotation = "BatchId:78382;")
## End(Not run)

GetClient               Creates an MTurk Client using the AWS SDK for Python (Boto3)

Description

Create an API client. Only advanced users will likely need to use this function. CheckAWSKeys() is a helper function that checks if your AWS keys can be found.

Usage

GetClient(
  sandbox = getOption("pyMTurkR.sandbox", TRUE),
  restart.client = FALSE
)

Arguments

sandbox  A logical indicating whether the client should be in the sandbox environment or the live environment.

restart.client  A boolean that specifies whether to force the creation of a new client.

Details

StartClient() is an alias.

Value

No return value; called to populate pyMTurkR$Client.

Author(s)

<NAME>

References

AWS SDK for Python (Boto3)
Boto3 Docs

Examples

## Not run:
GetClient()
## End(Not run)

GetHIT                  Get HIT

Description

Retrieve various details of a HIT as a data frame.

Usage

GetHIT(hit, verbose = getOption("pyMTurkR.verbose", TRUE))

Arguments

hit  A character string specifying the HITId of the HIT to be retrieved.

verbose  Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).
Details

GetHIT retrieves characteristics of a HIT. HITStatus is a wrapper that retrieves the Number of Assignments Pending, Number of Assignments Available, and Number of Assignments Completed for the HIT(s), which is helpful for checking on the progress of currently available HITs.

gethit() and hit() are aliases for GetHIT. status() is an alias for HITStatus.

Value

A list of data frames of various details of a HIT.

Author(s)

<NAME>, <NAME>

References

API Reference

Examples

## Not run:
# register HITType
hittype <- RegisterHITType(title = "10 Question Survey",
                           description = "Complete a 10-question survey about news coverage and your opinions",
                           reward = ".20",
                           duration = seconds(hours = 1),
                           keywords = "survey, questionnaire, politics")
a <- GenerateExternalQuestion("http://www.example.com/", "400")
hit1 <- CreateHIT(hit.type = hittype$HITTypeId, question = a$string)

GetHIT(hit1$HITId)
HITStatus(hit1$HITId)

# cleanup
DisableHIT(hit1$HITId)
## End(Not run)

## Not run:
# Get the status of all HITs from a given batch from the RUI
HITStatus(annotation = "BatchId:78382;")
## End(Not run)

GetHITsForQualificationType
                        Get HITs by Qualification

Description

Retrieve HITs according to the QualificationTypes that are required to complete those HITs.

Usage

GetHITsForQualificationType(
  qual,
  results = as.integer(100),
  pagetoken = NULL,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

qual  A character string containing a QualificationTypeId.

results  An optional character string indicating how many results to fetch per page. Must be between 1 and 100. Most users can ignore this.

pagetoken  An optional character string indicating which page of search results to start at. Most users can ignore this.

verbose  Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

A function to retrieve HITs that require the specified QualificationType.

gethitsbyqual() and ListHITsForQualificationType() are aliases.
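Because qual is required, a call needs a QualificationTypeId. A minimal sketch, following the Usage above (the QualificationTypeId below is a hypothetical placeholder; one of your own can be found via SearchQualificationTypes):

```r
## Not run:
# Hypothetical QualificationTypeId -- substitute one of your own
qual.id <- "2F1KVCNHMVHV8E9PBUB2A4J79LU20F"
hits <- GetHITsForQualificationType(qual = qual.id)
## End(Not run)
```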
Value

A data frame containing the HITId and other requested characteristics of the qualifying HITs.

Author(s)

<NAME>, <NAME>

References

API Reference

Examples

## Not run:
GetHITsForQualificationType()
## End(Not run)

GetQualificationRequests
                        Get Qualification Requests

Description

Retrieve workers' requests for a QualificationType.

Usage

GetQualificationRequests(
  qual = NULL,
  results = as.integer(100),
  pagetoken = NULL,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

qual  An optional character string containing a QualificationTypeId to which the search should be restricted. If none is supplied, requests made for all QualificationTypes are returned.

results  An optional character string indicating how many results to fetch per page. Must be between 1 and 100. Most users can ignore this.

pagetoken  An optional character string indicating which page of search results to start at. Most users can ignore this.

verbose  Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

A function to retrieve pending Qualification Requests made by workers, either for a specified QualificationType or all QualificationTypes. Specifically, all active, custom QualificationTypes are visible to workers, and workers can request a QualificationType (e.g., when a HIT requires one they do not have). This function retrieves those requests so that they can be granted (with GrantQualification) or rejected (with RejectQualification).

qualrequests() and ListQualificationRequests() are aliases.

Value

A data frame containing the QualificationRequestId, WorkerId, and other information (e.g., Qualification Test results) for each request.
Author(s)

<NAME>, <NAME>

References

API Reference

See Also

GrantQualification
RejectQualification

Examples

## Not run:
GetQualificationRequests()

# Search for qualifications you own, then get requests for one of the quals
SearchQualificationTypes(must.be.owner = TRUE, verbose = FALSE) -> quals
quals$QualificationTypeId[[1]] -> qual1
GetQualificationRequests(qual1)
## End(Not run)

GetQualifications       Get Qualifications

Description

Get all Qualifications of a particular QualificationType assigned to Workers.

Usage

GetQualifications(
  qual,
  status = NULL,
  results = as.integer(100),
  pagetoken = NULL,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

qual  A character string containing a QualificationTypeId for a custom (i.e., not built-in) QualificationType.

status  An optional character string specifying whether only "Granted" or "Revoked" Qualifications should be returned.

results  An optional character string indicating how many results to fetch per page. Must be between 1 and 100. Most users can ignore this.

pagetoken  An optional character string indicating which page of search results to start at. Most users can ignore this.

verbose  Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

A function to retrieve Qualifications granted for the specified QualificationType. To retrieve a specific Qualification score (e.g., for one worker), use GetQualificationScore.

A practical use for this is with automatically granted QualificationTypes. After workers request and receive an automatically granted Qualification that is tied to one or more HITs, GetQualifications can be used to retrieve the WorkerIds for workers that are actively working on those HITs (even before they have submitted an assignment).

getquals() and ListWorkersWithQualificationType() are aliases.
Value

A data frame containing the QualificationTypeId, WorkerId, and Qualification scores of workers assigned the Qualification.

Author(s)

<NAME>, <NAME>

References

API Reference

See Also

GetQualificationScore
UpdateQualificationScore

Examples

## Not run:
qual1 <- AssignQualification(workers = "A1RO9UJNWXMU65",
                             name = "Worked for me before",
                             description = "This qualification is for people who have worked for me before",
                             status = "Active",
                             keywords = "Worked for me before")

GetQualifications(qual1$QualificationTypeId)
RevokeQualification(qual1$QualificationTypeId, qual1$WorkerId)
GetQualifications(qual1$QualificationTypeId, status = "Revoked")

DisposeQualificationType(qual1$QualificationTypeId)
## End(Not run)

GetQualificationScore   Get a Worker's Qualification Score

Description

Get a Worker's score for a specific Qualification. You can only retrieve scores for custom QualificationTypes.

Usage

GetQualificationScore(
  qual,
  workers,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

qual  A character string containing a QualificationTypeId for a custom QualificationType.

workers  A character string containing a WorkerId, or a vector of character strings containing multiple WorkerIds, whose Qualification Scores you want to retrieve.

verbose  Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

A function to retrieve one or more scores for a specified QualificationType. To retrieve all Qualifications of a given QualificationType, use GetQualifications instead.

Both qual and workers can be vectors. If qual is not length 1 or the same length as workers, an error will occur.

qualscore() is an alias.

Value

A data frame containing the QualificationTypeId, WorkerId, time the qualification was granted, the Qualification score, a column indicating the status of the qualification, and a column indicating whether the API request was valid.
Author(s)

<NAME>, <NAME>

References

API Reference

See Also

UpdateQualificationScore
GetQualifications

Examples

## Not run:
qual1 <- AssignQualification(workers = "A1RO9UJNWXMU65",
                             name = "Worked for me before",
                             description = "This qualification is for people who have worked for me before",
                             status = "Active",
                             keywords = "Worked for me before")

GetQualificationScore(qual1$QualificationTypeId, qual1$WorkerId)

# cleanup
DisposeQualificationType(qual1$QualificationTypeId)
## End(Not run)

GetQualificationType    Get QualificationType

Description

Get the details of a Qualification Type.

Usage

GetQualificationType(qual, verbose = getOption("pyMTurkR.verbose", TRUE))

Arguments

qual  A character string containing a QualificationTypeId.

verbose  Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

Retrieve characteristics of a specified QualificationType (as originally specified by CreateQualificationType).

qualtype() is an alias.

Value

A data frame containing the QualificationTypeId of the specified QualificationType and other details as specified in the request.

Author(s)

<NAME>, <NAME>

References

API Reference

Examples

## Not run:
qual1 <- CreateQualificationType(name = "Worked for me before",
                                 description = "This qualification is for people who have worked for me before",
                                 status = "Active",
                                 keywords = "Worked for me before")

GetQualificationType(qual1$QualificationTypeId)

DisposeQualificationType(qual1$QualificationTypeId)
## End(Not run)

GetReviewableHITs       Get Reviewable HITs

Description

Get HITs that are currently reviewable.

Usage

GetReviewableHITs(
  hit.type = NULL,
  status = "Reviewable",
  results = as.integer(100),
  pagetoken = NULL,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

hit.type  An optional character string containing a HITTypeId to consider when looking for reviewable HITs.
status  An optional character string of either "Reviewable" or "Reviewing", limiting the search to HITs with either status.

results  An optional character string indicating how many results to fetch per page. Must be between 1 and 100. Most users can ignore this.

pagetoken  An optional character string indicating which page of search results to start at. Most users can ignore this.

verbose  Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

A simple function to return the HITIds of HITs currently in "Reviewable" or "Reviewing" status. To retrieve additional details about each of these HITs, see GetHIT. This is an alternative to SearchHITs.

reviewable() is an alias.

Value

A data frame containing HITIds and Requester Annotations.

Author(s)

<NAME>, <NAME>. Leeper

References

API Reference

Examples

## Not run:
GetReviewableHITs()
## End(Not run)

GetReviewResultsForHIT  Get ReviewPolicy Results for a HIT

Description

Get HIT- and/or Assignment-level ReviewPolicy Results for a HIT.

Usage

GetReviewResultsForHIT(
  hit,
  policy.level = NULL,
  results = as.integer(100),
  pagetoken = NULL,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

hit  A character string containing a HITId.

policy.level  Either HIT or Assignment. If NULL (the default), all data for both policy levels is retrieved.

results  An optional character string indicating how many results to fetch per page. Must be between 1 and 100. Most users can ignore this.

pagetoken  An optional character string indicating which page of search results to start at. Most users can ignore this.

verbose  Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

A simple function to return the results of a ReviewPolicy. This is intended only for advanced users, who should reference MTurk documentation for further information or see the notes in GenerateHITReviewPolicy.
reviewresults and ListReviewPolicyResultsForHIT are aliases.

Value

A four-element list containing up to four named data frames, depending on what ReviewPolicy (or ReviewPolicies) were attached to the HIT and whether results or actions are requested: AssignmentReviewResult, AssignmentReviewAction, HITReviewResult, and/or HITReviewAction.

Author(s)

<NAME>, <NAME>

References

API Reference
API Reference (ReviewPolicies)
API Reference (Data Structure)

See Also

CreateHIT
GenerateHITReviewPolicy

GrantBonus              Pay Bonus to Worker

Description

Pay a bonus to one or more workers. This function spends money from your MTurk account and will fail if insufficient funds are available.

Usage

GrantBonus(
  workers,
  assignments,
  amounts,
  reasons,
  skip.prompt = FALSE,
  unique.request.token = NULL,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

workers  A character string containing a WorkerId, or a vector of character strings containing multiple WorkerIds.

assignments  A character string containing an AssignmentId for an assignment performed by that worker, or a vector of character strings containing the AssignmentId for an assignment performed by each of the workers specified in workers.

amounts  A character string containing an amount (in U.S. Dollars) to bonus the worker(s), or a vector (of length equal to the number of workers) of character strings containing the amount to be paid to each worker.

reasons  A character string containing a reason for bonusing the worker(s), or a vector (of length equal to the number of workers) of character strings containing the reason to bonus each worker. The reason is visible to each worker and is sent via email.

skip.prompt  A logical indicating whether to skip the prompt that asks you to continue when duplicate AssignmentIds are found. If TRUE, you will not be asked to confirm. The prompt is a safeguard flag to protect the user from mistakenly paying a bonus twice.
unique.request.token  An optional character string, included only for advanced users. It can be used to prevent resending a bonus. A bonus will not be granted if a bonus was previously granted (within a short time window) using the same unique.request.token.

verbose  Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

A simple function to grant a bonus to one or more workers. The function is somewhat picky in that it requires a WorkerId, the AssignmentId for an assignment that worker has completed, an amount, and a reason for the bonus, for each bonus to be paid. Optionally, the amount and reason can be specified as single (character string) values, which will be used for each bonus.

bonus(), paybonus(), and sendbonus() are aliases.

Value

A data frame containing the WorkerId, AssignmentId, amount, reason, and whether each request to bonus was valid.

Author(s)

<NAME>, <NAME>

References

API Reference

See Also

GetBonuses

Examples

## Not run:
# Grant a single bonus
a <- "A1RO9UEXAMPLE"
b <- "26XXH0JPPSI23H54YVG7BKLEXAMPLE"
c <- ".50"
d <- "Thanks for your great work on my HITs!\nHope to work with you, again!"
GrantBonus(workers = a, assignments = b, amounts = c, reasons = d)
## End(Not run)

## Not run:
# Grant bonuses to multiple workers
a <- c("A1RO9EXAMPLE1", "A1RO9EXAMPLE2", "A1RO9EXAMPLE3")
b <- c("26XXH0JPPSI23H54YVG7BKLEXAMPLE1",
       "26XXH0JPPSI23H54YVG7BKLEXAMPLE2",
       "26XXH0JPPSI23H54YVG7BKLEXAMPLE3")
c <- c(".50", ".10", ".25")
d <- "Thanks for your great work on my HITs!"
GrantBonus(workers = a, assignments = b, amounts = c, reasons = d)
## End(Not run)

GrantQualification      Grant/Accept or Reject a Qualification Request

Description

Grant/accept or reject a worker's request for a Qualification.
Usage

GrantQualification(
  qual.requests,
  values,
  reason = NULL,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

qual.requests  A character string containing a QualificationRequestId (for example, returned by GetQualificationRequests), or a vector of QualificationRequestIds.

values  A character string containing the value of the Qualification to be assigned to the worker, or a vector of values of length equal to the number of QualificationRequests.

reason  An optional character string, or vector of character strings of length equal to length of the qual.requests parameter, supplying each worker with a reason for rejecting their request for the Qualification. Workers will see this message. Maximum of 1024 characters.

verbose  Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

Qualifications are publicly visible to workers on the MTurk website and workers can request Qualifications (e.g., when a HIT requires a QualificationType that they have not been assigned). QualificationRequests can be retrieved via GetQualificationRequests. GrantQualification grants the specified qualification requests. Requests can be rejected with RejectQualifications.

Note that granting a qualification may have the consequence of modifying a worker's existing qualification score. For example, if a worker already has a score of 100 on a given QualificationType and then requests the same QualificationType, a GrantQualification action might increase or decrease that worker's qualification score.

Similarly, rejecting a qualification is not the same as revoking a worker's Qualification. For example, if a worker already has a score of 100 on a given QualificationType and then requests the same QualificationType, a RejectQualification leaves the worker's existing Qualification in place. Use RevokeQualification to entirely remove a worker's Qualification.
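The retrieve-then-grant workflow described above can be sketched as follows, following the Usage section (the Qualification value of "100" is an arbitrary illustration, not a required value):

```r
## Not run:
# Retrieve pending requests, then grant each with a score of 100
reqs <- GetQualificationRequests()
GrantQualification(qual.requests = reqs$QualificationRequestId,
                   values = rep("100", nrow(reqs)))
## End(Not run)
```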
GrantQualifications(), grantqual(), AcceptQualificationRequest() and acceptrequest() are aliases; RejectQualifications() and rejectrequest() are aliases.

Value

A data frame containing the QualificationRequestId, reason for rejection (if applicable; only for RejectQualification), and whether each request was valid.

Author(s)

<NAME>, <NAME>

References

API Reference: AcceptQualificationRequest

See Also

GetQualificationRequests

RegisterHITType         Register a HITType

Description

Register a HITType on MTurk, in order to create one or more HITs that will show up as a group to workers.

Usage

RegisterHITType(
  title,
  description,
  reward,
  duration,
  keywords = NULL,
  auto.approval.delay = as.integer(2592000),
  qual.req = NULL,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

title  A character string containing the title for the HITType. All HITs of this HITType will be visibly grouped to workers according to this title. Maximum of 128 characters.

description  A character string containing a description of the HITType. This is visible to workers. Maximum of 2000 characters.

reward  A character string containing the per-assignment reward amount, in U.S. Dollars (e.g., "0.15").

duration  A character string containing the amount of time workers have to complete an assignment for HITs of this HITType, in seconds (for example, as returned by seconds). Minimum of 30 seconds and maximum of 365 days.

keywords  An optional character string containing a comma-separated set of keywords by which workers can search for HITs of this HITType. Maximum of 1000 characters.

auto.approval.delay  An optional character string specifying the amount of time, in seconds (for example, as returned by seconds), before a submitted assignment is automatically approved. Maximum of 30 days.

qual.req  An optional character string containing one or more QualificationRequirements data structures, for example as returned by GenerateQualificationRequirement.
verbose  Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

All HITs of a given HITType are visibly grouped together for workers and share common properties (e.g., reward amount, QualificationRequirements). This function registers a HITType in the MTurk system, which can then be used when creating individual HITs. If a requester wants to change these properties for a specific HIT, the HIT should be changed to a new HITType (see ChangeHITType).

hittype(), CreateHITType(), and createhittype() are aliases.

Value

A two-column data frame containing the HITTypeId of the newly registered HITType and an indicator for whether the registration request was valid.

Author(s)

<NAME>, <NAME>

References

API Reference: Operation

See Also

CreateHIT
ChangeHITType

Examples

## Not run:
RegisterHITType(title = "10 Question Survey",
                description = "Complete a 10-question survey about news coverage and your opinions",
                reward = ".20",
                duration = seconds(hours = 1),
                keywords = "survey, questionnaire, politics")
## End(Not run)

RejectAssignment        Reject Assignment

Description

Reject a Worker's assignment (or multiple assignments) submitted for a HIT. Feedback should be provided for why an assignment was rejected.

Usage

RejectAssignment(
  assignments,
  feedback,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

assignments  A character string containing an AssignmentId, or a vector of multiple character strings containing multiple AssignmentIds, to reject.

feedback  A character string containing any feedback for a worker. This must have length 1 or length equal to the number of workers.

verbose  Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

Reject assignments, by AssignmentId (as returned by GetAssignment).
More advanced functionality to quickly reject many or all assignments (a la ApproveAllAssignments) is intentionally not provided.

RejectAssignments() and reject() are aliases.

Value

A data frame containing the list of AssignmentIds, feedback (if any), and whether or not each rejection request was valid.

Author(s)

<NAME>, <NAME>

References

API Reference

See Also

ApproveAssignment

Examples

## Not run:
RejectAssignment(assignments = "26XXH0JPPSI23H54YVG7BKLEXAMPLE")
## End(Not run)

RevokeQualification     Revoke a Qualification from a Worker

Description

Revoke a Qualification from a worker or multiple workers. This deletes their qualification score and any record thereof.

Usage

RevokeQualification(
  qual,
  workers,
  reasons = NULL,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

qual  A character string containing a QualificationTypeId.

workers  A character string containing a WorkerId, or a vector of character strings containing multiple WorkerIds.

reasons  An optional character string, or vector of character strings of length equal to length of the workers parameter, supplying each worker with a reason for revoking their Qualification. Workers will see this message.

verbose  Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

A simple function to revoke a Qualification assigned to one or more workers.

RevokeQualifications(), revokequal() and DisassociateQualificationFromWorker() are aliases.

Value

A data frame containing the QualificationTypeId, WorkerId, reason (if applicable), and whether each request was valid.
Author(s)

<NAME>, <NAME>

References

API Reference

See Also

GrantQualification
RejectQualification

Examples

## Not run:
qual1 <- AssignQualification(workers = "A1RO9UJNWXMU65",
                             name = "Worked for me before",
                             description = "This qualification is for people who have worked for me before",
                             status = "Active",
                             keywords = "Worked for me before")

RevokeQualification(qual = qual1$QualificationTypeId,
                    workers = qual1$WorkerId,
                    reasons = "No longer needed")

DisposeQualificationType(qual1$QualificationTypeId)
## End(Not run)

SearchHITs              Search your HITs

Description

Search for your HITs and return those HITs as R objects.

Usage

SearchHITs(
  return.pages = NULL,
  results = as.integer(100),
  pagetoken = NULL,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

return.pages  An integer indicating how many pages of results should be returned.

results  An optional character string indicating how many results to fetch per page. Must be between 1 and 100. Most users can ignore this.

pagetoken  An optional character string indicating which page of search results to start at. Most users can ignore this.

verbose  Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

Retrieve your current HITs (and, optionally, characteristics thereof).

searchhits(), ListHITs(), and listhits() are aliases.

Value

A list containing data frames of HITs and Qualification Requirements.

Author(s)

<NAME>, <NAME>

References

API Reference

Examples

## Not run:
SearchHITs()
## End(Not run)

SearchQualificationTypes
                        Search Qualification Types

Description

Search for Qualification Types.
Usage

SearchQualificationTypes(
  search.query = NULL,
  must.be.requestable = FALSE,
  must.be.owner = FALSE,
  results = as.integer(100),
  return.pages = NULL,
  pagetoken = NULL,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

search.query  An optional character string to use as a search query.

must.be.requestable  A boolean indicating whether the Qualification must be requestable by Workers or not.

must.be.owner  A boolean indicating whether to search only the Qualifications you own / created, or to search all Qualifications. Defaults to FALSE.

results  An optional character string indicating how many results to fetch per page. Must be between 1 and 100. Most users can ignore this.

return.pages  An integer indicating how many pages of results should be returned.

pagetoken  An optional character string indicating which page of search results to start at. Most users can ignore this.

verbose  Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

This function will search Qualification Types. It can search through the Qualifications you created, or through all the Qualifications that exist.

SearchQuals(), searchquals(), ListQualificationTypes(), listquals(), and ListQuals() are aliases.

Value

A data frame of Qualification Types.

Author(s)

<NAME>

References

API Reference

Examples

## Not run:
SearchQuals()
## End(Not run)

seconds                 Convert arbitrary times to seconds

Description

A convenience function to convert arbitrary numbers of days, hours, minutes, and/or seconds into seconds.

Usage

seconds(days = NULL, hours = NULL, minutes = NULL, seconds = NULL)

Arguments

days  An optional number of days.

hours  An optional number of hours.

minutes  An optional number of minutes.

seconds  An optional number of seconds.

Details

A convenience function to convert arbitrary numbers of days, hours, minutes, and/or seconds into seconds. For example, to be used in setting a HIT expiration time.
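The conversion is simple arithmetic; for instance, assuming the documented behavior of returning an integer number of seconds:

```r
# 1 hour = 3600 seconds
seconds(hours = 1)

# 1 day + 2 hours + 30 minutes = 86400 + 7200 + 1800 = 95400 seconds
seconds(days = 1, hours = 2, minutes = 30)
```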
MTurk only accepts times (e.g., for HIT expirations, or the duration of assignments) in seconds. This function returns an integer value equal to the number of seconds of the input, and can be used atomically within other MTurkR calls (e.g., CreateHIT).

Value

An integer equal to the requested amount of time in seconds.

Author(s)

<NAME>

SendTestEventNotification
                        Test a Notification

Description

Test a HITType Notification, for example, to try out a HITType Notification before creating a HIT.

Usage

SendTestEventNotification(
  notification,
  test.event.type,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

notification  A dictionary object Notification structure (e.g., returned by GenerateNotification).

test.event.type  A character string containing one of: AssignmentAccepted, AssignmentAbandoned, AssignmentReturned, AssignmentSubmitted, AssignmentRejected, AssignmentApproved, HITCreated, HITExtended, HITDisposed, HITReviewable, HITExpired (the default), or Ping.

verbose  Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

Test a Notification configuration. The test mimics whatever the Notification configuration will do when the event described in test.event.type occurs. For example, if a Notification has been configured to send an email any time an Assignment is Submitted, testing for an AssignmentSubmitted event should trigger an email. Similarly, testing for an AssignmentReturned event should do nothing.

notificationtest is an alias.

Value

A data frame containing the notification, the event type, and details on whether the request was valid. As a side effect, a notification will be sent to the configured destination (either an email or an SQS queue).
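The test described above can be sketched by pairing GenerateNotification with SendTestEventNotification, following the Usage section (the email address is a hypothetical placeholder):

```r
## Not run:
# Build a Notification structure, then fire a test event at it
# (placeholder email address -- substitute your own)
n <- GenerateNotification("requester@example.com",
                          event.type = "AssignmentSubmitted")
SendTestEventNotification(notification = n,
                          test.event.type = "AssignmentSubmitted")
## End(Not run)
```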
Author(s)

<NAME>, <NAME>

References

API Reference

See Also

SetHITTypeNotification

Examples

## Not run:
hittype <- RegisterHITType(title="10 Question Survey",
                           description = "Complete a 10-question survey",
                           reward = ".20",
                           duration = seconds(hours = 1),
                           keywords = "survey, questionnaire, politics")
a <- GenerateNotification("<EMAIL>", event.type = "HITExpired")
SetHITTypeNotification(hit.type = hittype$HITTypeId,
                       notification = a,
                       active = TRUE)

## End(Not run)

SetHITAsReviewing       Set HIT as “Reviewing”

Description

Update the review status of a HIT, from “Reviewable” to “Reviewing” or the reverse.

Usage

SetHITAsReviewing(
  hit = NULL,
  hit.type = NULL,
  annotation = NULL,
  revert = FALSE,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

hit        An optional character string containing a HITId, or a vector of character strings containing HITIds, whose statuses are to be changed. Must specify hit xor hit.type xor annotation.
hit.type   An optional character string specifying a HITTypeId (or a vector of HITTypeIds), all the HITs of which should be set as “Reviewing” (or the reverse). Must specify hit xor hit.type xor annotation.
annotation An optional character string specifying the value of the RequesterAnnotation field for a batch of HITs. This can be used to set the review status of all HITs from a “batch” created in the online Requester User Interface (RUI). To use a batch ID, the batch must be written in a character string of the form “BatchId:78382;”, where “78382” is the batch ID shown in the RUI. Must specify hit xor hit.type xor annotation.
revert     An optional logical to revert the HIT from “Reviewing” to “Reviewable”.
verbose    Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

A function to change the status of one or more HITs (or all HITs of a given HITType) to “Reviewing” or the reverse. This affects what HITs are returned by GetReviewableHITs.
Must specify a HITId xor a HITTypeId xor an annotation.

reviewing() and UpdateHITReviewStatus() are aliases.

Value

A data frame containing HITId, status, and whether the request to change the status of each was valid.

Author(s)

<NAME>, <NAME>

References

API Reference

See Also

GetReviewableHITs

Examples

## Not run:
a <- GenerateExternalQuestion("https://www.example.com/", "400")
hit1 <- CreateHIT(hit.type = "2FFNCWYB49F9BBJWA4SJUNST5OFSOW",
                  question = a$string,
                  expiration = seconds(hours = 1))
SetHITAsReviewing(hit1$HITId)

# cleanup
DisableHIT(hit1$HITId)

## End(Not run)

SetHITTypeNotification  Configure a HITType Notification

Description

Configure a notification to be sent when specific actions occur for the specified HITType.

Usage

SetHITTypeNotification(
  hit.type,
  notification = NULL,
  active = NULL,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

hit.type     A character string specifying the HITTypeId of the HITType for which notifications are being configured.
notification An optional dictionary object Notification structure (e.g., returned by GenerateNotification).
active       A logical indicating whether the Notification is active or inactive.
verbose      Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

Configure a notification to be sent to the requester whenever an event (specified in the Notification object) occurs. This is useful, for example, to enable email notifications about when assignments are submitted or HITs are completed, or for other HIT-related events. Email notifications are useful for small projects, but configuring notifications to use the Amazon Simple Queue Service (SQS) is more reliable for large projects and allows automated processing of notifications.

setnotification() is an alias.

Value

A data frame containing details of the Notification and whether or not the request was successfully executed by MTurk.
Once configured, events will trigger a side effect in the form of a notification sent to the specified transport (either an email address or SQS queue). That notification will contain the following details: EventType, EventTime, HITTypeId, HITId, and (if applicable) AssignmentId.

Note that the ’Notification’ column in this data frame is a dictionary object coerced into a character type. This cannot be used again directly as a notification parameter, but it can be used to re-construct the dictionary object.

Author(s)

<NAME>, <NAME>

References

API Reference: Operation
API Reference: Concept

See Also

GenerateNotification SendTestEventNotification

Examples

## Not run:
# setup email notification
hittype <- RegisterHITType(title = "10 Question Survey",
                           description = "Complete a 10-question survey",
                           reward = ".20",
                           duration = seconds(hours = 1),
                           keywords = "survey, questionnaire, politics")
a <- GenerateNotification("<EMAIL>", "Email", "AssignmentAccepted")
SetHITTypeNotification(hit.type = hittype$HITTypeId,
                       notification = a,
                       active = TRUE)

# send test notification
SendTestEventNotification(a, test.event.type = "AssignmentAccepted")

## End(Not run)

ToDataFrameAssignment   ToDataFrameAssignment

Description

Get a list of assignment and answer information for an assignment.

Usage

ToDataFrameAssignment(assignment)

Arguments

assignment An assignment.

Value

A list of data frames, for assignment information and answers.

ToDataFrameBonusPayments  ToDataFrameBonusPayments

Description

ToDataFrameBonusPayments

Usage

ToDataFrameBonusPayments(bonuses)

Arguments

bonuses Bonuses.

Value

A data frame of Bonus payment information.

ToDataFrameHITs  ToDataFrameHITs

Description

Convert a list of HITs to a data frame.

Usage

ToDataFrameHITs(hits)

Arguments

hits HITs.

Value

A data frame of information on HITs, one per row.
ToDataFrameQualificationRequests  ToDataFrameQualificationRequests

Description

ToDataFrameQualificationRequests

Usage

ToDataFrameQualificationRequests(requests)

Arguments

requests Requests.

Value

A data frame of Qualification Request information.

ToDataFrameQualificationRequirements  ToDataFrameQualificationRequirements

Description

ToDataFrameQualificationRequirements

Usage

ToDataFrameQualificationRequirements(hits)

Arguments

hits HITs.

Value

A data frame of Qualification Requirements for the given HITs.

ToDataFrameQualifications  ToDataFrameQualifications

Description

ToDataFrameQualifications

Usage

ToDataFrameQualifications(quals)

Arguments

quals Qualifications.

Value

A data frame of qualification information.

ToDataFrameQualificationTypes  ToDataFrameQualificationTypes

Description

ToDataFrameQualificationTypes

Usage

ToDataFrameQualificationTypes(quals)

Arguments

quals Qualifications.

Value

A data frame of Qualification Types.

ToDataFrameQuestionFormAnswers  ToDataFrameQuestionFormAnswers

Description

ToDataFrameQuestionFormAnswers

Usage

ToDataFrameQuestionFormAnswers(assignment, answers)

Arguments

assignment An assignment.
answers    Answers.

Value

A data frame of Answer information for the assignment.

ToDataFrameReviewableHITs  ToDataFrameReviewableHITs

Description

ToDataFrameReviewableHITs

Usage

ToDataFrameReviewableHITs(hits)

Arguments

hits HITs.

Value

A data frame of reviewable HIT information.

ToDataFrameReviewResults  ToDataFrameReviewResults

Description

ToDataFrameReviewResults

Usage

ToDataFrameReviewResults(results)

Arguments

results Results.

Value

A list of data frames of Assignment Reviews/Actions and HIT Reviews/Actions.

ToDataFrameWorkerBlock  ToDataFrameWorkerBlock

Description

ToDataFrameWorkerBlock

Usage

ToDataFrameWorkerBlock(workers)

Arguments

workers Workers.

Value

A data frame of blocked workers.

UpdateQualificationScore  Update a worker’s score for a QualificationType

Description

Update a worker’s score for a QualificationType that you created.
Scores for built-in QualificationTypes (e.g., location, worker statistics) cannot be updated.

Usage

UpdateQualificationScore(
  qual,
  workers,
  values = NULL,
  increment = NULL,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

qual      A character string containing a QualificationTypeId.
workers   A character string containing a WorkerId, or a vector of character strings containing multiple WorkerIds.
values    A character string containing an integer value to be assigned to the worker, or a vector of character strings containing integer values to be assigned to each worker (and thus must have length equal to the number of workers).
increment An optional character string specifying, in lieu of “values”, the amount that each worker’s current QualificationScore should be increased (or decreased).
verbose   Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

A function to update the Qualification score assigned to one or more workers for the specified custom QualificationType. The simplest use is to specify a QualificationTypeId, a WorkerId, and a value to be assigned to the worker. Scores for multiple workers can be updated in one request. Additionally, the increment parameter allows you to increase (or decrease) each of the specified workers’ scores by the specified amount. This might be useful, for example, to keep a QualificationType that records how many of a specific style of HIT a worker has completed and increase the value of each worker’s score by 1 after they complete a HIT.

This function will only affect workers who already have a score for the QualificationType. If a worker is given who does not already have a score, they will not be modified.

updatequalscore() is an alias.

Value

A data frame containing the QualificationTypeId, WorkerId, Qualification score, and whether the request to update each was valid.
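The vectorized form described above can be sketched as follows (the QualificationTypeId and WorkerIds are placeholders):

```r
## Not run:
# Assign different scores to several workers in one request;
# values must have the same length as workers
UpdateQualificationScore("3XXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
                         workers = c("A1AAAAAAAAAAAA", "A2BBBBBBBBBBBB"),
                         values = c("80", "90"))

## End(Not run)
```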
Author(s)

<NAME>, <NAME>

References

API Reference

See Also

GetQualificationScore GetQualifications

Examples

## Not run:
qual1 <- CreateQualificationType(name="Worked for me before",
                                 description="This qualification is for people who have worked for me before",
                                 status = "Active",
                                 keywords="Worked for me before")
AssignQualification(qual1$QualificationTypeId, "A1RO9UJNWXMU65", value="50")
UpdateQualificationScore(qual1$QualificationTypeId, "A1RO9UJNWXMU65", values="95")
UpdateQualificationScore(qual1$QualificationTypeId, "A1RO9UJNWXMU65", increment="1")
DisposeQualificationType(qual1$QualificationTypeId)

## End(Not run)

UpdateQualificationType  Update a Worker QualificationType

Description

Update characteristics of a QualificationType.

Usage

UpdateQualificationType(
  qual,
  description = NULL,
  status = NULL,
  retry.delay = NULL,
  test = NULL,
  answerkey = NULL,
  test.duration = NULL,
  auto = NULL,
  auto.value = NULL,
  verbose = getOption("pyMTurkR.verbose", TRUE)
)

Arguments

qual          A character string containing a QualificationTypeId.
description   A longer description of the QualificationType. This is visible to workers. Maximum of 2000 characters.
status        A character vector of “Active” or “Inactive”, indicating whether the QualificationType should be active and visible.
retry.delay   An optional time (in seconds) indicating how long workers have to wait before requesting the QualificationType after an initial rejection. If not specified, retries are disabled and Workers can request a Qualification of this type only once, even if the Worker has not been granted the Qualification.
test          An optional character string consisting of a QuestionForm data structure, used as a test that a worker must complete before the QualificationType is granted to them.
answerkey     An optional character string consisting of an AnswerKey data structure, used to automatically score the test.
test.duration An optional time (in seconds) indicating how long workers have to complete the test.
auto       A logical indicating whether the Qualification is automatically granted to workers who request it. Default is NULL, meaning FALSE.
auto.value An optional parameter specifying the value that is automatically assigned to workers when they request it (if the Qualification is automatically granted).
verbose    Optionally print the results of the API request to the standard output. Default is taken from getOption('pyMTurkR.verbose', TRUE).

Details

A function to update the characteristics of a QualificationType. Name and keywords cannot be modified after a QualificationType is created.

updatequal() is an alias.

Value

A data frame containing the QualificationTypeId of the updated QualificationType and other details as specified in the request.

Author(s)

<NAME>, <NAME>

References

API Reference

See Also

GetQualificationType CreateQualificationType DisposeQualificationType SearchQualificationTypes

Examples

## Not run:
qual1 <- CreateQualificationType(name="Worked for me before",
                                 description="This qualification is for people who have worked for me before",
                                 status = "Active",
                                 keywords="Worked for me before")
qual2 <- UpdateQualificationType(qual1$QualificationTypeId,
                                 description="This qualification is for everybody!",
                                 auto=TRUE,
                                 auto.value="5")
DisposeQualificationType(qual1$QualificationTypeId)

## End(Not run)
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance
README [¶](#section-readme) --- ### Azure M365 Security and Compliance Module for Go [![PkgGoDev](https://pkg.go.dev/badge/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance)](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance) The `armm365securityandcompliance` module provides operations for working with Azure M365 Security and Compliance. [Source code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance) ### Getting started #### Prerequisites * an [Azure subscription](https://azure.microsoft.com/free/) * Go 1.18 or above (You can download and install the latest version of Go from [here](https://go.dev/doc/install). It will replace the existing Go on your machine. If you want to install multiple Go versions on the same machine, you can refer to this [doc](https://go.dev/doc/manage-install).) #### Install the package This project uses [Go modules](https://github.com/golang/go/wiki/Modules) for versioning and dependency management. Install the Azure M365 Security and Compliance module: ``` go get github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance ``` #### Authorization When creating a client, you will need to provide a credential for authenticating with Azure M365 Security and Compliance. The `azidentity` module provides facilities for various ways of authenticating with Azure including client/secret, certificate, managed identity, and more. ``` cred, err := azidentity.NewDefaultAzureCredential(nil) ``` For more information on authentication, please see the documentation for `azidentity` at [pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity). 
#### Client Factory The Azure M365 Security and Compliance module consists of one or more clients. We provide a client factory that can be used to create any client in this module. ``` clientFactory, err := armm365securityandcompliance.NewClientFactory(<subscription ID>, cred, nil) ``` You can use `ClientOptions` in package `github.com/Azure/azure-sdk-for-go/sdk/azcore/arm` to set the endpoint to connect with public and sovereign clouds as well as Azure Stack. For more information, please see the documentation for `azcore` at [pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore). ``` options := arm.ClientOptions { ClientOptions: azcore.ClientOptions { Cloud: cloud.AzureChina, }, } clientFactory, err := armm365securityandcompliance.NewClientFactory(<subscription ID>, cred, &options) ``` #### Clients A client groups a set of related APIs, providing access to its functionality. Create one or more clients to access the APIs you require using the client factory. ``` client := clientFactory.NewPrivateEndpointConnectionsForMIPPolicySyncClient() ``` #### Provide Feedback If you encounter bugs or have suggestions, please [open an issue](https://github.com/Azure/azure-sdk-for-go/issues) and assign the `M365 Security and Compliance` label. ### Contributing This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit <https://cla.microsoft.com>. When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA. 
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [<EMAIL>](mailto:<EMAIL>) with any additional questions or comments. Documentation [¶](#section-documentation) --- ### Index [¶](#pkg-index) * [type ClientFactory](#ClientFactory) * + [func NewClientFactory(subscriptionID string, credential azcore.TokenCredential, ...) (*ClientFactory, error)](#NewClientFactory) * + [func (c *ClientFactory) NewOperationResultsClient() *OperationResultsClient](#ClientFactory.NewOperationResultsClient) + [func (c *ClientFactory) NewOperationsClient() *OperationsClient](#ClientFactory.NewOperationsClient) + [func (c *ClientFactory) NewPrivateEndpointConnectionsAdtAPIClient() *PrivateEndpointConnectionsAdtAPIClient](#ClientFactory.NewPrivateEndpointConnectionsAdtAPIClient) + [func (c *ClientFactory) NewPrivateEndpointConnectionsCompClient() *PrivateEndpointConnectionsCompClient](#ClientFactory.NewPrivateEndpointConnectionsCompClient) + [func (c *ClientFactory) NewPrivateEndpointConnectionsForEDMClient() *PrivateEndpointConnectionsForEDMClient](#ClientFactory.NewPrivateEndpointConnectionsForEDMClient) + [func (c *ClientFactory) NewPrivateEndpointConnectionsForMIPPolicySyncClient() *PrivateEndpointConnectionsForMIPPolicySyncClient](#ClientFactory.NewPrivateEndpointConnectionsForMIPPolicySyncClient) + [func (c *ClientFactory) NewPrivateEndpointConnectionsForSCCPowershellClient() *PrivateEndpointConnectionsForSCCPowershellClient](#ClientFactory.NewPrivateEndpointConnectionsForSCCPowershellClient) + [func (c *ClientFactory) NewPrivateEndpointConnectionsSecClient() *PrivateEndpointConnectionsSecClient](#ClientFactory.NewPrivateEndpointConnectionsSecClient) + [func (c *ClientFactory) NewPrivateLinkResourcesAdtAPIClient() *PrivateLinkResourcesAdtAPIClient](#ClientFactory.NewPrivateLinkResourcesAdtAPIClient) + 
[func (c *ClientFactory) NewPrivateLinkResourcesClient() *PrivateLinkResourcesClient](#ClientFactory.NewPrivateLinkResourcesClient) + [func (c *ClientFactory) NewPrivateLinkResourcesCompClient() *PrivateLinkResourcesCompClient](#ClientFactory.NewPrivateLinkResourcesCompClient) + [func (c *ClientFactory) NewPrivateLinkResourcesForMIPPolicySyncClient() *PrivateLinkResourcesForMIPPolicySyncClient](#ClientFactory.NewPrivateLinkResourcesForMIPPolicySyncClient) + [func (c *ClientFactory) NewPrivateLinkResourcesForSCCPowershellClient() *PrivateLinkResourcesForSCCPowershellClient](#ClientFactory.NewPrivateLinkResourcesForSCCPowershellClient) + [func (c *ClientFactory) NewPrivateLinkResourcesSecClient() *PrivateLinkResourcesSecClient](#ClientFactory.NewPrivateLinkResourcesSecClient) + [func (c *ClientFactory) NewPrivateLinkServicesForEDMUploadClient() *PrivateLinkServicesForEDMUploadClient](#ClientFactory.NewPrivateLinkServicesForEDMUploadClient) + [func (c *ClientFactory) NewPrivateLinkServicesForM365ComplianceCenterClient() *PrivateLinkServicesForM365ComplianceCenterClient](#ClientFactory.NewPrivateLinkServicesForM365ComplianceCenterClient) + [func (c *ClientFactory) NewPrivateLinkServicesForM365SecurityCenterClient() *PrivateLinkServicesForM365SecurityCenterClient](#ClientFactory.NewPrivateLinkServicesForM365SecurityCenterClient) + [func (c *ClientFactory) NewPrivateLinkServicesForMIPPolicySyncClient() *PrivateLinkServicesForMIPPolicySyncClient](#ClientFactory.NewPrivateLinkServicesForMIPPolicySyncClient) + [func (c *ClientFactory) NewPrivateLinkServicesForO365ManagementActivityAPIClient() *PrivateLinkServicesForO365ManagementActivityAPIClient](#ClientFactory.NewPrivateLinkServicesForO365ManagementActivityAPIClient) + [func (c *ClientFactory) NewPrivateLinkServicesForSCCPowershellClient() *PrivateLinkServicesForSCCPowershellClient](#ClientFactory.NewPrivateLinkServicesForSCCPowershellClient) + [func (c *ClientFactory) NewServicesClient() 
*ServicesClient](#ClientFactory.NewServicesClient) * [type CreatedByType](#CreatedByType) * + [func PossibleCreatedByTypeValues() []CreatedByType](#PossibleCreatedByTypeValues) * [type ErrorDetails](#ErrorDetails) * + [func (e ErrorDetails) MarshalJSON() ([]byte, error)](#ErrorDetails.MarshalJSON) + [func (e *ErrorDetails) UnmarshalJSON(data []byte) error](#ErrorDetails.UnmarshalJSON) * [type ErrorDetailsInternal](#ErrorDetailsInternal) * + [func (e ErrorDetailsInternal) MarshalJSON() ([]byte, error)](#ErrorDetailsInternal.MarshalJSON) + [func (e *ErrorDetailsInternal) UnmarshalJSON(data []byte) error](#ErrorDetailsInternal.UnmarshalJSON) * [type Kind](#Kind) * + [func PossibleKindValues() []Kind](#PossibleKindValues) * [type ManagedServiceIdentityType](#ManagedServiceIdentityType) * + [func PossibleManagedServiceIdentityTypeValues() []ManagedServiceIdentityType](#PossibleManagedServiceIdentityTypeValues) * [type Operation](#Operation) * + [func (o Operation) MarshalJSON() ([]byte, error)](#Operation.MarshalJSON) + [func (o *Operation) UnmarshalJSON(data []byte) error](#Operation.UnmarshalJSON) * [type OperationDisplay](#OperationDisplay) * + [func (o OperationDisplay) MarshalJSON() ([]byte, error)](#OperationDisplay.MarshalJSON) + [func (o *OperationDisplay) UnmarshalJSON(data []byte) error](#OperationDisplay.UnmarshalJSON) * [type OperationListResult](#OperationListResult) * + [func (o OperationListResult) MarshalJSON() ([]byte, error)](#OperationListResult.MarshalJSON) + [func (o *OperationListResult) UnmarshalJSON(data []byte) error](#OperationListResult.UnmarshalJSON) * [type OperationResultStatus](#OperationResultStatus) * + [func PossibleOperationResultStatusValues() []OperationResultStatus](#PossibleOperationResultStatusValues) * [type OperationResultsClient](#OperationResultsClient) * + [func NewOperationResultsClient(subscriptionID string, credential azcore.TokenCredential, ...) 
(*OperationResultsClient, error)](#NewOperationResultsClient) * + [func (client *OperationResultsClient) Get(ctx context.Context, locationName string, operationResultID string, ...) (OperationResultsClientGetResponse, error)](#OperationResultsClient.Get) * [type OperationResultsClientGetOptions](#OperationResultsClientGetOptions) * [type OperationResultsClientGetResponse](#OperationResultsClientGetResponse) * [type OperationResultsDescription](#OperationResultsDescription) * + [func (o OperationResultsDescription) MarshalJSON() ([]byte, error)](#OperationResultsDescription.MarshalJSON) + [func (o *OperationResultsDescription) UnmarshalJSON(data []byte) error](#OperationResultsDescription.UnmarshalJSON) * [type OperationsClient](#OperationsClient) * + [func NewOperationsClient(credential azcore.TokenCredential, options *arm.ClientOptions) (*OperationsClient, error)](#NewOperationsClient) * + [func (client *OperationsClient) NewListPager(options *OperationsClientListOptions) *runtime.Pager[OperationsClientListResponse]](#OperationsClient.NewListPager) * [type OperationsClientListOptions](#OperationsClientListOptions) * [type OperationsClientListResponse](#OperationsClientListResponse) * [type PrivateEndpoint](#PrivateEndpoint) * + [func (p PrivateEndpoint) MarshalJSON() ([]byte, error)](#PrivateEndpoint.MarshalJSON) + [func (p *PrivateEndpoint) UnmarshalJSON(data []byte) error](#PrivateEndpoint.UnmarshalJSON) * [type PrivateEndpointConnection](#PrivateEndpointConnection) * + [func (p PrivateEndpointConnection) MarshalJSON() ([]byte, error)](#PrivateEndpointConnection.MarshalJSON) + [func (p *PrivateEndpointConnection) UnmarshalJSON(data []byte) error](#PrivateEndpointConnection.UnmarshalJSON) * [type PrivateEndpointConnectionListResult](#PrivateEndpointConnectionListResult) * + [func (p PrivateEndpointConnectionListResult) MarshalJSON() ([]byte, error)](#PrivateEndpointConnectionListResult.MarshalJSON) + [func (p *PrivateEndpointConnectionListResult) 
UnmarshalJSON(data []byte) error](#PrivateEndpointConnectionListResult.UnmarshalJSON) * [type PrivateEndpointConnectionProperties](#PrivateEndpointConnectionProperties) * + [func (p PrivateEndpointConnectionProperties) MarshalJSON() ([]byte, error)](#PrivateEndpointConnectionProperties.MarshalJSON) + [func (p *PrivateEndpointConnectionProperties) UnmarshalJSON(data []byte) error](#PrivateEndpointConnectionProperties.UnmarshalJSON) * [type PrivateEndpointConnectionProvisioningState](#PrivateEndpointConnectionProvisioningState) * + [func PossiblePrivateEndpointConnectionProvisioningStateValues() []PrivateEndpointConnectionProvisioningState](#PossiblePrivateEndpointConnectionProvisioningStateValues) * [type PrivateEndpointConnectionsAdtAPIClient](#PrivateEndpointConnectionsAdtAPIClient) * + [func NewPrivateEndpointConnectionsAdtAPIClient(subscriptionID string, credential azcore.TokenCredential, ...) (*PrivateEndpointConnectionsAdtAPIClient, error)](#NewPrivateEndpointConnectionsAdtAPIClient) * + [func (client *PrivateEndpointConnectionsAdtAPIClient) BeginCreateOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, ...) (*runtime.Poller[PrivateEndpointConnectionsAdtAPIClientCreateOrUpdateResponse], ...)](#PrivateEndpointConnectionsAdtAPIClient.BeginCreateOrUpdate) + [func (client *PrivateEndpointConnectionsAdtAPIClient) BeginDelete(ctx context.Context, resourceGroupName string, resourceName string, ...) (*runtime.Poller[PrivateEndpointConnectionsAdtAPIClientDeleteResponse], error)](#PrivateEndpointConnectionsAdtAPIClient.BeginDelete) + [func (client *PrivateEndpointConnectionsAdtAPIClient) Get(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateEndpointConnectionsAdtAPIClientGetResponse, error)](#PrivateEndpointConnectionsAdtAPIClient.Get) + [func (client *PrivateEndpointConnectionsAdtAPIClient) NewListByServicePager(resourceGroupName string, resourceName string, ...) 
*runtime.Pager[PrivateEndpointConnectionsAdtAPIClientListByServiceResponse]](#PrivateEndpointConnectionsAdtAPIClient.NewListByServicePager) * [type PrivateEndpointConnectionsAdtAPIClientBeginCreateOrUpdateOptions](#PrivateEndpointConnectionsAdtAPIClientBeginCreateOrUpdateOptions) * [type PrivateEndpointConnectionsAdtAPIClientBeginDeleteOptions](#PrivateEndpointConnectionsAdtAPIClientBeginDeleteOptions) * [type PrivateEndpointConnectionsAdtAPIClientCreateOrUpdateResponse](#PrivateEndpointConnectionsAdtAPIClientCreateOrUpdateResponse) * [type PrivateEndpointConnectionsAdtAPIClientDeleteResponse](#PrivateEndpointConnectionsAdtAPIClientDeleteResponse) * [type PrivateEndpointConnectionsAdtAPIClientGetOptions](#PrivateEndpointConnectionsAdtAPIClientGetOptions) * [type PrivateEndpointConnectionsAdtAPIClientGetResponse](#PrivateEndpointConnectionsAdtAPIClientGetResponse) * [type PrivateEndpointConnectionsAdtAPIClientListByServiceOptions](#PrivateEndpointConnectionsAdtAPIClientListByServiceOptions) * [type PrivateEndpointConnectionsAdtAPIClientListByServiceResponse](#PrivateEndpointConnectionsAdtAPIClientListByServiceResponse) * [type PrivateEndpointConnectionsCompClient](#PrivateEndpointConnectionsCompClient) * + [func NewPrivateEndpointConnectionsCompClient(subscriptionID string, credential azcore.TokenCredential, ...) (*PrivateEndpointConnectionsCompClient, error)](#NewPrivateEndpointConnectionsCompClient) * + [func (client *PrivateEndpointConnectionsCompClient) BeginCreateOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, ...) (*runtime.Poller[PrivateEndpointConnectionsCompClientCreateOrUpdateResponse], ...)](#PrivateEndpointConnectionsCompClient.BeginCreateOrUpdate) + [func (client *PrivateEndpointConnectionsCompClient) BeginDelete(ctx context.Context, resourceGroupName string, resourceName string, ...) 
(*runtime.Poller[PrivateEndpointConnectionsCompClientDeleteResponse], error)](#PrivateEndpointConnectionsCompClient.BeginDelete) + [func (client *PrivateEndpointConnectionsCompClient) Get(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateEndpointConnectionsCompClientGetResponse, error)](#PrivateEndpointConnectionsCompClient.Get) + [func (client *PrivateEndpointConnectionsCompClient) NewListByServicePager(resourceGroupName string, resourceName string, ...) *runtime.Pager[PrivateEndpointConnectionsCompClientListByServiceResponse]](#PrivateEndpointConnectionsCompClient.NewListByServicePager) * [type PrivateEndpointConnectionsCompClientBeginCreateOrUpdateOptions](#PrivateEndpointConnectionsCompClientBeginCreateOrUpdateOptions) * [type PrivateEndpointConnectionsCompClientBeginDeleteOptions](#PrivateEndpointConnectionsCompClientBeginDeleteOptions) * [type PrivateEndpointConnectionsCompClientCreateOrUpdateResponse](#PrivateEndpointConnectionsCompClientCreateOrUpdateResponse) * [type PrivateEndpointConnectionsCompClientDeleteResponse](#PrivateEndpointConnectionsCompClientDeleteResponse) * [type PrivateEndpointConnectionsCompClientGetOptions](#PrivateEndpointConnectionsCompClientGetOptions) * [type PrivateEndpointConnectionsCompClientGetResponse](#PrivateEndpointConnectionsCompClientGetResponse) * [type PrivateEndpointConnectionsCompClientListByServiceOptions](#PrivateEndpointConnectionsCompClientListByServiceOptions) * [type PrivateEndpointConnectionsCompClientListByServiceResponse](#PrivateEndpointConnectionsCompClientListByServiceResponse) * [type PrivateEndpointConnectionsForEDMClient](#PrivateEndpointConnectionsForEDMClient) * + [func NewPrivateEndpointConnectionsForEDMClient(subscriptionID string, credential azcore.TokenCredential, ...) 
(*PrivateEndpointConnectionsForEDMClient, error)](#NewPrivateEndpointConnectionsForEDMClient)
  + [func (client *PrivateEndpointConnectionsForEDMClient) BeginCreateOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, ...) (*runtime.Poller[PrivateEndpointConnectionsForEDMClientCreateOrUpdateResponse], ...)](#PrivateEndpointConnectionsForEDMClient.BeginCreateOrUpdate)
  + [func (client *PrivateEndpointConnectionsForEDMClient) BeginDelete(ctx context.Context, resourceGroupName string, resourceName string, ...) (*runtime.Poller[PrivateEndpointConnectionsForEDMClientDeleteResponse], error)](#PrivateEndpointConnectionsForEDMClient.BeginDelete)
  + [func (client *PrivateEndpointConnectionsForEDMClient) Get(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateEndpointConnectionsForEDMClientGetResponse, error)](#PrivateEndpointConnectionsForEDMClient.Get)
  + [func (client *PrivateEndpointConnectionsForEDMClient) NewListByServicePager(resourceGroupName string, resourceName string, ...) *runtime.Pager[PrivateEndpointConnectionsForEDMClientListByServiceResponse]](#PrivateEndpointConnectionsForEDMClient.NewListByServicePager)
* [type PrivateEndpointConnectionsForEDMClientBeginCreateOrUpdateOptions](#PrivateEndpointConnectionsForEDMClientBeginCreateOrUpdateOptions)
* [type PrivateEndpointConnectionsForEDMClientBeginDeleteOptions](#PrivateEndpointConnectionsForEDMClientBeginDeleteOptions)
* [type PrivateEndpointConnectionsForEDMClientCreateOrUpdateResponse](#PrivateEndpointConnectionsForEDMClientCreateOrUpdateResponse)
* [type PrivateEndpointConnectionsForEDMClientDeleteResponse](#PrivateEndpointConnectionsForEDMClientDeleteResponse)
* [type PrivateEndpointConnectionsForEDMClientGetOptions](#PrivateEndpointConnectionsForEDMClientGetOptions)
* [type PrivateEndpointConnectionsForEDMClientGetResponse](#PrivateEndpointConnectionsForEDMClientGetResponse)
* [type PrivateEndpointConnectionsForEDMClientListByServiceOptions](#PrivateEndpointConnectionsForEDMClientListByServiceOptions)
* [type PrivateEndpointConnectionsForEDMClientListByServiceResponse](#PrivateEndpointConnectionsForEDMClientListByServiceResponse)
* [type PrivateEndpointConnectionsForMIPPolicySyncClient](#PrivateEndpointConnectionsForMIPPolicySyncClient)
  + [func NewPrivateEndpointConnectionsForMIPPolicySyncClient(subscriptionID string, credential azcore.TokenCredential, ...) (*PrivateEndpointConnectionsForMIPPolicySyncClient, error)](#NewPrivateEndpointConnectionsForMIPPolicySyncClient)
  + [func (client *PrivateEndpointConnectionsForMIPPolicySyncClient) BeginCreateOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, ...) (...)](#PrivateEndpointConnectionsForMIPPolicySyncClient.BeginCreateOrUpdate)
  + [func (client *PrivateEndpointConnectionsForMIPPolicySyncClient) BeginDelete(ctx context.Context, resourceGroupName string, resourceName string, ...) (...)](#PrivateEndpointConnectionsForMIPPolicySyncClient.BeginDelete)
  + [func (client *PrivateEndpointConnectionsForMIPPolicySyncClient) Get(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateEndpointConnectionsForMIPPolicySyncClientGetResponse, error)](#PrivateEndpointConnectionsForMIPPolicySyncClient.Get)
  + [func (client *PrivateEndpointConnectionsForMIPPolicySyncClient) NewListByServicePager(resourceGroupName string, resourceName string, ...) ...](#PrivateEndpointConnectionsForMIPPolicySyncClient.NewListByServicePager)
* [type PrivateEndpointConnectionsForMIPPolicySyncClientBeginCreateOrUpdateOptions](#PrivateEndpointConnectionsForMIPPolicySyncClientBeginCreateOrUpdateOptions)
* [type PrivateEndpointConnectionsForMIPPolicySyncClientBeginDeleteOptions](#PrivateEndpointConnectionsForMIPPolicySyncClientBeginDeleteOptions)
* [type PrivateEndpointConnectionsForMIPPolicySyncClientCreateOrUpdateResponse](#PrivateEndpointConnectionsForMIPPolicySyncClientCreateOrUpdateResponse)
* [type PrivateEndpointConnectionsForMIPPolicySyncClientDeleteResponse](#PrivateEndpointConnectionsForMIPPolicySyncClientDeleteResponse)
* [type PrivateEndpointConnectionsForMIPPolicySyncClientGetOptions](#PrivateEndpointConnectionsForMIPPolicySyncClientGetOptions)
* [type PrivateEndpointConnectionsForMIPPolicySyncClientGetResponse](#PrivateEndpointConnectionsForMIPPolicySyncClientGetResponse)
* [type PrivateEndpointConnectionsForMIPPolicySyncClientListByServiceOptions](#PrivateEndpointConnectionsForMIPPolicySyncClientListByServiceOptions)
* [type PrivateEndpointConnectionsForMIPPolicySyncClientListByServiceResponse](#PrivateEndpointConnectionsForMIPPolicySyncClientListByServiceResponse)
* [type PrivateEndpointConnectionsForSCCPowershellClient](#PrivateEndpointConnectionsForSCCPowershellClient)
  + [func NewPrivateEndpointConnectionsForSCCPowershellClient(subscriptionID string, credential azcore.TokenCredential, ...) (*PrivateEndpointConnectionsForSCCPowershellClient, error)](#NewPrivateEndpointConnectionsForSCCPowershellClient)
  + [func (client *PrivateEndpointConnectionsForSCCPowershellClient) BeginCreateOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, ...) (...)](#PrivateEndpointConnectionsForSCCPowershellClient.BeginCreateOrUpdate)
  + [func (client *PrivateEndpointConnectionsForSCCPowershellClient) BeginDelete(ctx context.Context, resourceGroupName string, resourceName string, ...) (...)](#PrivateEndpointConnectionsForSCCPowershellClient.BeginDelete)
  + [func (client *PrivateEndpointConnectionsForSCCPowershellClient) Get(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateEndpointConnectionsForSCCPowershellClientGetResponse, error)](#PrivateEndpointConnectionsForSCCPowershellClient.Get)
  + [func (client *PrivateEndpointConnectionsForSCCPowershellClient) NewListByServicePager(resourceGroupName string, resourceName string, ...) ...](#PrivateEndpointConnectionsForSCCPowershellClient.NewListByServicePager)
* [type PrivateEndpointConnectionsForSCCPowershellClientBeginCreateOrUpdateOptions](#PrivateEndpointConnectionsForSCCPowershellClientBeginCreateOrUpdateOptions)
* [type PrivateEndpointConnectionsForSCCPowershellClientBeginDeleteOptions](#PrivateEndpointConnectionsForSCCPowershellClientBeginDeleteOptions)
* [type PrivateEndpointConnectionsForSCCPowershellClientCreateOrUpdateResponse](#PrivateEndpointConnectionsForSCCPowershellClientCreateOrUpdateResponse)
* [type PrivateEndpointConnectionsForSCCPowershellClientDeleteResponse](#PrivateEndpointConnectionsForSCCPowershellClientDeleteResponse)
* [type PrivateEndpointConnectionsForSCCPowershellClientGetOptions](#PrivateEndpointConnectionsForSCCPowershellClientGetOptions)
* [type PrivateEndpointConnectionsForSCCPowershellClientGetResponse](#PrivateEndpointConnectionsForSCCPowershellClientGetResponse)
* [type PrivateEndpointConnectionsForSCCPowershellClientListByServiceOptions](#PrivateEndpointConnectionsForSCCPowershellClientListByServiceOptions)
* [type PrivateEndpointConnectionsForSCCPowershellClientListByServiceResponse](#PrivateEndpointConnectionsForSCCPowershellClientListByServiceResponse)
* [type PrivateEndpointConnectionsSecClient](#PrivateEndpointConnectionsSecClient)
  + [func NewPrivateEndpointConnectionsSecClient(subscriptionID string, credential azcore.TokenCredential, ...) (*PrivateEndpointConnectionsSecClient, error)](#NewPrivateEndpointConnectionsSecClient)
  + [func (client *PrivateEndpointConnectionsSecClient) BeginCreateOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, ...) (*runtime.Poller[PrivateEndpointConnectionsSecClientCreateOrUpdateResponse], ...)](#PrivateEndpointConnectionsSecClient.BeginCreateOrUpdate)
  + [func (client *PrivateEndpointConnectionsSecClient) BeginDelete(ctx context.Context, resourceGroupName string, resourceName string, ...) (*runtime.Poller[PrivateEndpointConnectionsSecClientDeleteResponse], error)](#PrivateEndpointConnectionsSecClient.BeginDelete)
  + [func (client *PrivateEndpointConnectionsSecClient) Get(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateEndpointConnectionsSecClientGetResponse, error)](#PrivateEndpointConnectionsSecClient.Get)
  + [func (client *PrivateEndpointConnectionsSecClient) NewListByServicePager(resourceGroupName string, resourceName string, ...) *runtime.Pager[PrivateEndpointConnectionsSecClientListByServiceResponse]](#PrivateEndpointConnectionsSecClient.NewListByServicePager)
* [type PrivateEndpointConnectionsSecClientBeginCreateOrUpdateOptions](#PrivateEndpointConnectionsSecClientBeginCreateOrUpdateOptions)
* [type PrivateEndpointConnectionsSecClientBeginDeleteOptions](#PrivateEndpointConnectionsSecClientBeginDeleteOptions)
* [type PrivateEndpointConnectionsSecClientCreateOrUpdateResponse](#PrivateEndpointConnectionsSecClientCreateOrUpdateResponse)
* [type PrivateEndpointConnectionsSecClientDeleteResponse](#PrivateEndpointConnectionsSecClientDeleteResponse)
* [type PrivateEndpointConnectionsSecClientGetOptions](#PrivateEndpointConnectionsSecClientGetOptions)
* [type PrivateEndpointConnectionsSecClientGetResponse](#PrivateEndpointConnectionsSecClientGetResponse)
* [type PrivateEndpointConnectionsSecClientListByServiceOptions](#PrivateEndpointConnectionsSecClientListByServiceOptions)
* [type PrivateEndpointConnectionsSecClientListByServiceResponse](#PrivateEndpointConnectionsSecClientListByServiceResponse)
* [type PrivateEndpointServiceConnectionStatus](#PrivateEndpointServiceConnectionStatus)
  + [func PossiblePrivateEndpointServiceConnectionStatusValues() []PrivateEndpointServiceConnectionStatus](#PossiblePrivateEndpointServiceConnectionStatusValues)
* [type PrivateLinkResource](#PrivateLinkResource)
  + [func (p PrivateLinkResource) MarshalJSON() ([]byte, error)](#PrivateLinkResource.MarshalJSON)
  + [func (p *PrivateLinkResource) UnmarshalJSON(data []byte) error](#PrivateLinkResource.UnmarshalJSON)
* [type PrivateLinkResourceListResult](#PrivateLinkResourceListResult)
  + [func (p PrivateLinkResourceListResult) MarshalJSON() ([]byte, error)](#PrivateLinkResourceListResult.MarshalJSON)
  + [func (p *PrivateLinkResourceListResult) UnmarshalJSON(data []byte) error](#PrivateLinkResourceListResult.UnmarshalJSON)
* [type PrivateLinkResourceProperties](#PrivateLinkResourceProperties)
  + [func (p PrivateLinkResourceProperties) MarshalJSON() ([]byte, error)](#PrivateLinkResourceProperties.MarshalJSON)
  + [func (p *PrivateLinkResourceProperties) UnmarshalJSON(data []byte) error](#PrivateLinkResourceProperties.UnmarshalJSON)
* [type PrivateLinkResourcesAdtAPIClient](#PrivateLinkResourcesAdtAPIClient)
  + [func NewPrivateLinkResourcesAdtAPIClient(subscriptionID string, credential azcore.TokenCredential, ...) (*PrivateLinkResourcesAdtAPIClient, error)](#NewPrivateLinkResourcesAdtAPIClient)
  + [func (client *PrivateLinkResourcesAdtAPIClient) Get(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateLinkResourcesAdtAPIClientGetResponse, error)](#PrivateLinkResourcesAdtAPIClient.Get)
  + [func (client *PrivateLinkResourcesAdtAPIClient) ListByService(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateLinkResourcesAdtAPIClientListByServiceResponse, error)](#PrivateLinkResourcesAdtAPIClient.ListByService)
* [type PrivateLinkResourcesAdtAPIClientGetOptions](#PrivateLinkResourcesAdtAPIClientGetOptions)
* [type PrivateLinkResourcesAdtAPIClientGetResponse](#PrivateLinkResourcesAdtAPIClientGetResponse)
* [type PrivateLinkResourcesAdtAPIClientListByServiceOptions](#PrivateLinkResourcesAdtAPIClientListByServiceOptions)
* [type PrivateLinkResourcesAdtAPIClientListByServiceResponse](#PrivateLinkResourcesAdtAPIClientListByServiceResponse)
* [type PrivateLinkResourcesClient](#PrivateLinkResourcesClient)
  + [func NewPrivateLinkResourcesClient(subscriptionID string, credential azcore.TokenCredential, ...) (*PrivateLinkResourcesClient, error)](#NewPrivateLinkResourcesClient)
  + [func (client *PrivateLinkResourcesClient) Get(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateLinkResourcesClientGetResponse, error)](#PrivateLinkResourcesClient.Get)
  + [func (client *PrivateLinkResourcesClient) ListByService(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateLinkResourcesClientListByServiceResponse, error)](#PrivateLinkResourcesClient.ListByService)
* [type PrivateLinkResourcesClientGetOptions](#PrivateLinkResourcesClientGetOptions)
* [type PrivateLinkResourcesClientGetResponse](#PrivateLinkResourcesClientGetResponse)
* [type PrivateLinkResourcesClientListByServiceOptions](#PrivateLinkResourcesClientListByServiceOptions)
* [type PrivateLinkResourcesClientListByServiceResponse](#PrivateLinkResourcesClientListByServiceResponse)
* [type PrivateLinkResourcesCompClient](#PrivateLinkResourcesCompClient)
  + [func NewPrivateLinkResourcesCompClient(subscriptionID string, credential azcore.TokenCredential, ...) (*PrivateLinkResourcesCompClient, error)](#NewPrivateLinkResourcesCompClient)
  + [func (client *PrivateLinkResourcesCompClient) Get(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateLinkResourcesCompClientGetResponse, error)](#PrivateLinkResourcesCompClient.Get)
  + [func (client *PrivateLinkResourcesCompClient) ListByService(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateLinkResourcesCompClientListByServiceResponse, error)](#PrivateLinkResourcesCompClient.ListByService)
* [type PrivateLinkResourcesCompClientGetOptions](#PrivateLinkResourcesCompClientGetOptions)
* [type PrivateLinkResourcesCompClientGetResponse](#PrivateLinkResourcesCompClientGetResponse)
* [type PrivateLinkResourcesCompClientListByServiceOptions](#PrivateLinkResourcesCompClientListByServiceOptions)
* [type PrivateLinkResourcesCompClientListByServiceResponse](#PrivateLinkResourcesCompClientListByServiceResponse)
* [type PrivateLinkResourcesForMIPPolicySyncClient](#PrivateLinkResourcesForMIPPolicySyncClient)
  + [func NewPrivateLinkResourcesForMIPPolicySyncClient(subscriptionID string, credential azcore.TokenCredential, ...) (*PrivateLinkResourcesForMIPPolicySyncClient, error)](#NewPrivateLinkResourcesForMIPPolicySyncClient)
  + [func (client *PrivateLinkResourcesForMIPPolicySyncClient) Get(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateLinkResourcesForMIPPolicySyncClientGetResponse, error)](#PrivateLinkResourcesForMIPPolicySyncClient.Get)
  + [func (client *PrivateLinkResourcesForMIPPolicySyncClient) ListByService(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateLinkResourcesForMIPPolicySyncClientListByServiceResponse, error)](#PrivateLinkResourcesForMIPPolicySyncClient.ListByService)
* [type PrivateLinkResourcesForMIPPolicySyncClientGetOptions](#PrivateLinkResourcesForMIPPolicySyncClientGetOptions)
* [type PrivateLinkResourcesForMIPPolicySyncClientGetResponse](#PrivateLinkResourcesForMIPPolicySyncClientGetResponse)
* [type PrivateLinkResourcesForMIPPolicySyncClientListByServiceOptions](#PrivateLinkResourcesForMIPPolicySyncClientListByServiceOptions)
* [type PrivateLinkResourcesForMIPPolicySyncClientListByServiceResponse](#PrivateLinkResourcesForMIPPolicySyncClientListByServiceResponse)
* [type PrivateLinkResourcesForSCCPowershellClient](#PrivateLinkResourcesForSCCPowershellClient)
  + [func NewPrivateLinkResourcesForSCCPowershellClient(subscriptionID string, credential azcore.TokenCredential, ...) (*PrivateLinkResourcesForSCCPowershellClient, error)](#NewPrivateLinkResourcesForSCCPowershellClient)
  + [func (client *PrivateLinkResourcesForSCCPowershellClient) Get(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateLinkResourcesForSCCPowershellClientGetResponse, error)](#PrivateLinkResourcesForSCCPowershellClient.Get)
  + [func (client *PrivateLinkResourcesForSCCPowershellClient) ListByService(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateLinkResourcesForSCCPowershellClientListByServiceResponse, error)](#PrivateLinkResourcesForSCCPowershellClient.ListByService)
* [type PrivateLinkResourcesForSCCPowershellClientGetOptions](#PrivateLinkResourcesForSCCPowershellClientGetOptions)
* [type PrivateLinkResourcesForSCCPowershellClientGetResponse](#PrivateLinkResourcesForSCCPowershellClientGetResponse)
* [type PrivateLinkResourcesForSCCPowershellClientListByServiceOptions](#PrivateLinkResourcesForSCCPowershellClientListByServiceOptions)
* [type PrivateLinkResourcesForSCCPowershellClientListByServiceResponse](#PrivateLinkResourcesForSCCPowershellClientListByServiceResponse)
* [type PrivateLinkResourcesSecClient](#PrivateLinkResourcesSecClient)
  + [func NewPrivateLinkResourcesSecClient(subscriptionID string, credential azcore.TokenCredential, ...) (*PrivateLinkResourcesSecClient, error)](#NewPrivateLinkResourcesSecClient)
  + [func (client *PrivateLinkResourcesSecClient) Get(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateLinkResourcesSecClientGetResponse, error)](#PrivateLinkResourcesSecClient.Get)
  + [func (client *PrivateLinkResourcesSecClient) ListByService(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateLinkResourcesSecClientListByServiceResponse, error)](#PrivateLinkResourcesSecClient.ListByService)
* [type PrivateLinkResourcesSecClientGetOptions](#PrivateLinkResourcesSecClientGetOptions)
* [type PrivateLinkResourcesSecClientGetResponse](#PrivateLinkResourcesSecClientGetResponse)
* [type PrivateLinkResourcesSecClientListByServiceOptions](#PrivateLinkResourcesSecClientListByServiceOptions)
* [type PrivateLinkResourcesSecClientListByServiceResponse](#PrivateLinkResourcesSecClientListByServiceResponse)
* [type PrivateLinkServiceConnectionState](#PrivateLinkServiceConnectionState)
  + [func (p PrivateLinkServiceConnectionState) MarshalJSON() ([]byte, error)](#PrivateLinkServiceConnectionState.MarshalJSON)
  + [func (p *PrivateLinkServiceConnectionState) UnmarshalJSON(data []byte) error](#PrivateLinkServiceConnectionState.UnmarshalJSON)
* [type PrivateLinkServicesForEDMUploadClient](#PrivateLinkServicesForEDMUploadClient)
  + [func NewPrivateLinkServicesForEDMUploadClient(subscriptionID string, credential azcore.TokenCredential, ...) (*PrivateLinkServicesForEDMUploadClient, error)](#NewPrivateLinkServicesForEDMUploadClient)
  + [func (client *PrivateLinkServicesForEDMUploadClient) BeginCreateOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, ...) (*runtime.Poller[PrivateLinkServicesForEDMUploadClientCreateOrUpdateResponse], ...)](#PrivateLinkServicesForEDMUploadClient.BeginCreateOrUpdate)
  + [func (client *PrivateLinkServicesForEDMUploadClient) BeginUpdate(ctx context.Context, resourceGroupName string, resourceName string, ...) (*runtime.Poller[PrivateLinkServicesForEDMUploadClientUpdateResponse], error)](#PrivateLinkServicesForEDMUploadClient.BeginUpdate)
  + [func (client *PrivateLinkServicesForEDMUploadClient) Get(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateLinkServicesForEDMUploadClientGetResponse, error)](#PrivateLinkServicesForEDMUploadClient.Get)
  + [func (client *PrivateLinkServicesForEDMUploadClient) NewListByResourceGroupPager(resourceGroupName string, ...) ...](#PrivateLinkServicesForEDMUploadClient.NewListByResourceGroupPager)
  + [func (client *PrivateLinkServicesForEDMUploadClient) NewListPager(options *PrivateLinkServicesForEDMUploadClientListOptions) *runtime.Pager[PrivateLinkServicesForEDMUploadClientListResponse]](#PrivateLinkServicesForEDMUploadClient.NewListPager)
* [type PrivateLinkServicesForEDMUploadClientBeginCreateOrUpdateOptions](#PrivateLinkServicesForEDMUploadClientBeginCreateOrUpdateOptions)
* [type PrivateLinkServicesForEDMUploadClientBeginUpdateOptions](#PrivateLinkServicesForEDMUploadClientBeginUpdateOptions)
* [type PrivateLinkServicesForEDMUploadClientCreateOrUpdateResponse](#PrivateLinkServicesForEDMUploadClientCreateOrUpdateResponse)
* [type PrivateLinkServicesForEDMUploadClientGetOptions](#PrivateLinkServicesForEDMUploadClientGetOptions)
* [type PrivateLinkServicesForEDMUploadClientGetResponse](#PrivateLinkServicesForEDMUploadClientGetResponse)
* [type PrivateLinkServicesForEDMUploadClientListByResourceGroupOptions](#PrivateLinkServicesForEDMUploadClientListByResourceGroupOptions)
* [type PrivateLinkServicesForEDMUploadClientListByResourceGroupResponse](#PrivateLinkServicesForEDMUploadClientListByResourceGroupResponse)
* [type PrivateLinkServicesForEDMUploadClientListOptions](#PrivateLinkServicesForEDMUploadClientListOptions)
* [type PrivateLinkServicesForEDMUploadClientListResponse](#PrivateLinkServicesForEDMUploadClientListResponse)
* [type PrivateLinkServicesForEDMUploadClientUpdateResponse](#PrivateLinkServicesForEDMUploadClientUpdateResponse)
* [type PrivateLinkServicesForEDMUploadDescription](#PrivateLinkServicesForEDMUploadDescription)
  + [func (p PrivateLinkServicesForEDMUploadDescription) MarshalJSON() ([]byte, error)](#PrivateLinkServicesForEDMUploadDescription.MarshalJSON)
  + [func (p *PrivateLinkServicesForEDMUploadDescription) UnmarshalJSON(data []byte) error](#PrivateLinkServicesForEDMUploadDescription.UnmarshalJSON)
* [type PrivateLinkServicesForEDMUploadDescriptionListResult](#PrivateLinkServicesForEDMUploadDescriptionListResult)
  + [func (p PrivateLinkServicesForEDMUploadDescriptionListResult) MarshalJSON() ([]byte, error)](#PrivateLinkServicesForEDMUploadDescriptionListResult.MarshalJSON)
  + [func (p *PrivateLinkServicesForEDMUploadDescriptionListResult) UnmarshalJSON(data []byte) error](#PrivateLinkServicesForEDMUploadDescriptionListResult.UnmarshalJSON)
* [type PrivateLinkServicesForM365ComplianceCenterClient](#PrivateLinkServicesForM365ComplianceCenterClient)
  + [func NewPrivateLinkServicesForM365ComplianceCenterClient(subscriptionID string, credential azcore.TokenCredential, ...) (*PrivateLinkServicesForM365ComplianceCenterClient, error)](#NewPrivateLinkServicesForM365ComplianceCenterClient)
  + [func (client *PrivateLinkServicesForM365ComplianceCenterClient) BeginCreateOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, ...) (...)](#PrivateLinkServicesForM365ComplianceCenterClient.BeginCreateOrUpdate)
  + [func (client *PrivateLinkServicesForM365ComplianceCenterClient) BeginDelete(ctx context.Context, resourceGroupName string, resourceName string, ...) (...)](#PrivateLinkServicesForM365ComplianceCenterClient.BeginDelete)
  + [func (client *PrivateLinkServicesForM365ComplianceCenterClient) BeginUpdate(ctx context.Context, resourceGroupName string, resourceName string, ...) (...)](#PrivateLinkServicesForM365ComplianceCenterClient.BeginUpdate)
  + [func (client *PrivateLinkServicesForM365ComplianceCenterClient) Get(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateLinkServicesForM365ComplianceCenterClientGetResponse, error)](#PrivateLinkServicesForM365ComplianceCenterClient.Get)
  + [func (client *PrivateLinkServicesForM365ComplianceCenterClient) NewListByResourceGroupPager(resourceGroupName string, ...) ...](#PrivateLinkServicesForM365ComplianceCenterClient.NewListByResourceGroupPager)
  + [func (client *PrivateLinkServicesForM365ComplianceCenterClient) NewListPager(options *PrivateLinkServicesForM365ComplianceCenterClientListOptions) *runtime.Pager[PrivateLinkServicesForM365ComplianceCenterClientListResponse]](#PrivateLinkServicesForM365ComplianceCenterClient.NewListPager)
* [type PrivateLinkServicesForM365ComplianceCenterClientBeginCreateOrUpdateOptions](#PrivateLinkServicesForM365ComplianceCenterClientBeginCreateOrUpdateOptions)
* [type PrivateLinkServicesForM365ComplianceCenterClientBeginDeleteOptions](#PrivateLinkServicesForM365ComplianceCenterClientBeginDeleteOptions)
* [type PrivateLinkServicesForM365ComplianceCenterClientBeginUpdateOptions](#PrivateLinkServicesForM365ComplianceCenterClientBeginUpdateOptions)
* [type PrivateLinkServicesForM365ComplianceCenterClientCreateOrUpdateResponse](#PrivateLinkServicesForM365ComplianceCenterClientCreateOrUpdateResponse)
* [type PrivateLinkServicesForM365ComplianceCenterClientDeleteResponse](#PrivateLinkServicesForM365ComplianceCenterClientDeleteResponse)
* [type PrivateLinkServicesForM365ComplianceCenterClientGetOptions](#PrivateLinkServicesForM365ComplianceCenterClientGetOptions)
* [type PrivateLinkServicesForM365ComplianceCenterClientGetResponse](#PrivateLinkServicesForM365ComplianceCenterClientGetResponse)
* [type PrivateLinkServicesForM365ComplianceCenterClientListByResourceGroupOptions](#PrivateLinkServicesForM365ComplianceCenterClientListByResourceGroupOptions)
* [type PrivateLinkServicesForM365ComplianceCenterClientListByResourceGroupResponse](#PrivateLinkServicesForM365ComplianceCenterClientListByResourceGroupResponse)
* [type PrivateLinkServicesForM365ComplianceCenterClientListOptions](#PrivateLinkServicesForM365ComplianceCenterClientListOptions)
* [type PrivateLinkServicesForM365ComplianceCenterClientListResponse](#PrivateLinkServicesForM365ComplianceCenterClientListResponse)
* [type PrivateLinkServicesForM365ComplianceCenterClientUpdateResponse](#PrivateLinkServicesForM365ComplianceCenterClientUpdateResponse)
* [type PrivateLinkServicesForM365ComplianceCenterDescription](#PrivateLinkServicesForM365ComplianceCenterDescription)
  + [func (p PrivateLinkServicesForM365ComplianceCenterDescription) MarshalJSON() ([]byte, error)](#PrivateLinkServicesForM365ComplianceCenterDescription.MarshalJSON)
  + [func (p *PrivateLinkServicesForM365ComplianceCenterDescription) UnmarshalJSON(data []byte) error](#PrivateLinkServicesForM365ComplianceCenterDescription.UnmarshalJSON)
* [type PrivateLinkServicesForM365ComplianceCenterDescriptionListResult](#PrivateLinkServicesForM365ComplianceCenterDescriptionListResult)
  + [func (p PrivateLinkServicesForM365ComplianceCenterDescriptionListResult) MarshalJSON() ([]byte, error)](#PrivateLinkServicesForM365ComplianceCenterDescriptionListResult.MarshalJSON)
  + [func (p *PrivateLinkServicesForM365ComplianceCenterDescriptionListResult) UnmarshalJSON(data []byte) error](#PrivateLinkServicesForM365ComplianceCenterDescriptionListResult.UnmarshalJSON)
* [type PrivateLinkServicesForM365SecurityCenterClient](#PrivateLinkServicesForM365SecurityCenterClient)
  + [func NewPrivateLinkServicesForM365SecurityCenterClient(subscriptionID string, credential azcore.TokenCredential, ...) (*PrivateLinkServicesForM365SecurityCenterClient, error)](#NewPrivateLinkServicesForM365SecurityCenterClient)
  + [func (client *PrivateLinkServicesForM365SecurityCenterClient) BeginCreateOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, ...) (...)](#PrivateLinkServicesForM365SecurityCenterClient.BeginCreateOrUpdate)
  + [func (client *PrivateLinkServicesForM365SecurityCenterClient) BeginDelete(ctx context.Context, resourceGroupName string, resourceName string, ...) (*runtime.Poller[PrivateLinkServicesForM365SecurityCenterClientDeleteResponse], ...)](#PrivateLinkServicesForM365SecurityCenterClient.BeginDelete)
  + [func (client *PrivateLinkServicesForM365SecurityCenterClient) BeginUpdate(ctx context.Context, resourceGroupName string, resourceName string, ...) (*runtime.Poller[PrivateLinkServicesForM365SecurityCenterClientUpdateResponse], ...)](#PrivateLinkServicesForM365SecurityCenterClient.BeginUpdate)
  + [func (client *PrivateLinkServicesForM365SecurityCenterClient) Get(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateLinkServicesForM365SecurityCenterClientGetResponse, error)](#PrivateLinkServicesForM365SecurityCenterClient.Get)
  + [func (client *PrivateLinkServicesForM365SecurityCenterClient) NewListByResourceGroupPager(resourceGroupName string, ...) ...](#PrivateLinkServicesForM365SecurityCenterClient.NewListByResourceGroupPager)
  + [func (client *PrivateLinkServicesForM365SecurityCenterClient) NewListPager(options *PrivateLinkServicesForM365SecurityCenterClientListOptions) *runtime.Pager[PrivateLinkServicesForM365SecurityCenterClientListResponse]](#PrivateLinkServicesForM365SecurityCenterClient.NewListPager)
* [type PrivateLinkServicesForM365SecurityCenterClientBeginCreateOrUpdateOptions](#PrivateLinkServicesForM365SecurityCenterClientBeginCreateOrUpdateOptions)
* [type PrivateLinkServicesForM365SecurityCenterClientBeginDeleteOptions](#PrivateLinkServicesForM365SecurityCenterClientBeginDeleteOptions)
* [type PrivateLinkServicesForM365SecurityCenterClientBeginUpdateOptions](#PrivateLinkServicesForM365SecurityCenterClientBeginUpdateOptions)
* [type PrivateLinkServicesForM365SecurityCenterClientCreateOrUpdateResponse](#PrivateLinkServicesForM365SecurityCenterClientCreateOrUpdateResponse)
* [type PrivateLinkServicesForM365SecurityCenterClientDeleteResponse](#PrivateLinkServicesForM365SecurityCenterClientDeleteResponse)
* [type PrivateLinkServicesForM365SecurityCenterClientGetOptions](#PrivateLinkServicesForM365SecurityCenterClientGetOptions)
* [type PrivateLinkServicesForM365SecurityCenterClientGetResponse](#PrivateLinkServicesForM365SecurityCenterClientGetResponse)
* [type PrivateLinkServicesForM365SecurityCenterClientListByResourceGroupOptions](#PrivateLinkServicesForM365SecurityCenterClientListByResourceGroupOptions)
* [type PrivateLinkServicesForM365SecurityCenterClientListByResourceGroupResponse](#PrivateLinkServicesForM365SecurityCenterClientListByResourceGroupResponse)
* [type PrivateLinkServicesForM365SecurityCenterClientListOptions](#PrivateLinkServicesForM365SecurityCenterClientListOptions)
* [type PrivateLinkServicesForM365SecurityCenterClientListResponse](#PrivateLinkServicesForM365SecurityCenterClientListResponse)
* [type PrivateLinkServicesForM365SecurityCenterClientUpdateResponse](#PrivateLinkServicesForM365SecurityCenterClientUpdateResponse)
* [type PrivateLinkServicesForM365SecurityCenterDescription](#PrivateLinkServicesForM365SecurityCenterDescription)
  + [func (p PrivateLinkServicesForM365SecurityCenterDescription) MarshalJSON() ([]byte, error)](#PrivateLinkServicesForM365SecurityCenterDescription.MarshalJSON)
  + [func (p *PrivateLinkServicesForM365SecurityCenterDescription) UnmarshalJSON(data []byte) error](#PrivateLinkServicesForM365SecurityCenterDescription.UnmarshalJSON)
* [type PrivateLinkServicesForM365SecurityCenterDescriptionListResult](#PrivateLinkServicesForM365SecurityCenterDescriptionListResult)
  + [func (p PrivateLinkServicesForM365SecurityCenterDescriptionListResult) MarshalJSON() ([]byte, error)](#PrivateLinkServicesForM365SecurityCenterDescriptionListResult.MarshalJSON)
  + [func (p *PrivateLinkServicesForM365SecurityCenterDescriptionListResult) UnmarshalJSON(data []byte) error](#PrivateLinkServicesForM365SecurityCenterDescriptionListResult.UnmarshalJSON)
* [type PrivateLinkServicesForMIPPolicySyncClient](#PrivateLinkServicesForMIPPolicySyncClient)
  + [func NewPrivateLinkServicesForMIPPolicySyncClient(subscriptionID string, credential azcore.TokenCredential, ...) (*PrivateLinkServicesForMIPPolicySyncClient, error)](#NewPrivateLinkServicesForMIPPolicySyncClient)
  + [func (client *PrivateLinkServicesForMIPPolicySyncClient) BeginCreateOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, ...) (...)](#PrivateLinkServicesForMIPPolicySyncClient.BeginCreateOrUpdate)
  + [func (client *PrivateLinkServicesForMIPPolicySyncClient) BeginDelete(ctx context.Context, resourceGroupName string, resourceName string, ...) (*runtime.Poller[PrivateLinkServicesForMIPPolicySyncClientDeleteResponse], ...)](#PrivateLinkServicesForMIPPolicySyncClient.BeginDelete)
  + [func (client *PrivateLinkServicesForMIPPolicySyncClient) BeginUpdate(ctx context.Context, resourceGroupName string, resourceName string, ...) (*runtime.Poller[PrivateLinkServicesForMIPPolicySyncClientUpdateResponse], ...)](#PrivateLinkServicesForMIPPolicySyncClient.BeginUpdate)
  + [func (client *PrivateLinkServicesForMIPPolicySyncClient) Get(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateLinkServicesForMIPPolicySyncClientGetResponse, error)](#PrivateLinkServicesForMIPPolicySyncClient.Get)
  + [func (client *PrivateLinkServicesForMIPPolicySyncClient) NewListByResourceGroupPager(resourceGroupName string, ...) ...](#PrivateLinkServicesForMIPPolicySyncClient.NewListByResourceGroupPager)
  + [func (client *PrivateLinkServicesForMIPPolicySyncClient) NewListPager(options *PrivateLinkServicesForMIPPolicySyncClientListOptions) *runtime.Pager[PrivateLinkServicesForMIPPolicySyncClientListResponse]](#PrivateLinkServicesForMIPPolicySyncClient.NewListPager)
* [type PrivateLinkServicesForMIPPolicySyncClientBeginCreateOrUpdateOptions](#PrivateLinkServicesForMIPPolicySyncClientBeginCreateOrUpdateOptions)
* [type PrivateLinkServicesForMIPPolicySyncClientBeginDeleteOptions](#PrivateLinkServicesForMIPPolicySyncClientBeginDeleteOptions)
* [type PrivateLinkServicesForMIPPolicySyncClientBeginUpdateOptions](#PrivateLinkServicesForMIPPolicySyncClientBeginUpdateOptions)
* [type PrivateLinkServicesForMIPPolicySyncClientCreateOrUpdateResponse](#PrivateLinkServicesForMIPPolicySyncClientCreateOrUpdateResponse)
* [type PrivateLinkServicesForMIPPolicySyncClientDeleteResponse](#PrivateLinkServicesForMIPPolicySyncClientDeleteResponse)
* [type PrivateLinkServicesForMIPPolicySyncClientGetOptions](#PrivateLinkServicesForMIPPolicySyncClientGetOptions)
* [type PrivateLinkServicesForMIPPolicySyncClientGetResponse](#PrivateLinkServicesForMIPPolicySyncClientGetResponse)
* [type PrivateLinkServicesForMIPPolicySyncClientListByResourceGroupOptions](#PrivateLinkServicesForMIPPolicySyncClientListByResourceGroupOptions)
* [type PrivateLinkServicesForMIPPolicySyncClientListByResourceGroupResponse](#PrivateLinkServicesForMIPPolicySyncClientListByResourceGroupResponse)
* [type PrivateLinkServicesForMIPPolicySyncClientListOptions](#PrivateLinkServicesForMIPPolicySyncClientListOptions)
* [type PrivateLinkServicesForMIPPolicySyncClientListResponse](#PrivateLinkServicesForMIPPolicySyncClientListResponse)
* [type PrivateLinkServicesForMIPPolicySyncClientUpdateResponse](#PrivateLinkServicesForMIPPolicySyncClientUpdateResponse)
* [type PrivateLinkServicesForMIPPolicySyncDescription](#PrivateLinkServicesForMIPPolicySyncDescription)
  + [func (p PrivateLinkServicesForMIPPolicySyncDescription) MarshalJSON() ([]byte, error)](#PrivateLinkServicesForMIPPolicySyncDescription.MarshalJSON)
  + [func (p *PrivateLinkServicesForMIPPolicySyncDescription) UnmarshalJSON(data []byte) error](#PrivateLinkServicesForMIPPolicySyncDescription.UnmarshalJSON)
* [type PrivateLinkServicesForMIPPolicySyncDescriptionListResult](#PrivateLinkServicesForMIPPolicySyncDescriptionListResult)
  + [func (p PrivateLinkServicesForMIPPolicySyncDescriptionListResult) MarshalJSON() ([]byte, error)](#PrivateLinkServicesForMIPPolicySyncDescriptionListResult.MarshalJSON)
  + [func (p *PrivateLinkServicesForMIPPolicySyncDescriptionListResult) UnmarshalJSON(data []byte) error](#PrivateLinkServicesForMIPPolicySyncDescriptionListResult.UnmarshalJSON)
* [type PrivateLinkServicesForO365ManagementActivityAPIClient](#PrivateLinkServicesForO365ManagementActivityAPIClient)
  + [func NewPrivateLinkServicesForO365ManagementActivityAPIClient(subscriptionID string, credential azcore.TokenCredential, ...) (*PrivateLinkServicesForO365ManagementActivityAPIClient, error)](#NewPrivateLinkServicesForO365ManagementActivityAPIClient)
  + [func (client *PrivateLinkServicesForO365ManagementActivityAPIClient) BeginCreateOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, ...) (...)](#PrivateLinkServicesForO365ManagementActivityAPIClient.BeginCreateOrUpdate)
  + [func (client *PrivateLinkServicesForO365ManagementActivityAPIClient) BeginDelete(ctx context.Context, resourceGroupName string, resourceName string, ...) (...)](#PrivateLinkServicesForO365ManagementActivityAPIClient.BeginDelete)
  + [func (client *PrivateLinkServicesForO365ManagementActivityAPIClient) BeginUpdate(ctx context.Context, resourceGroupName string, resourceName string, ...) (...)](#PrivateLinkServicesForO365ManagementActivityAPIClient.BeginUpdate)
  + [func (client *PrivateLinkServicesForO365ManagementActivityAPIClient) Get(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateLinkServicesForO365ManagementActivityAPIClientGetResponse, error)](#PrivateLinkServicesForO365ManagementActivityAPIClient.Get)
  + [func (client *PrivateLinkServicesForO365ManagementActivityAPIClient) NewListByResourceGroupPager(resourceGroupName string, ...) ...](#PrivateLinkServicesForO365ManagementActivityAPIClient.NewListByResourceGroupPager)
  + [func (client *PrivateLinkServicesForO365ManagementActivityAPIClient) NewListPager(options *PrivateLinkServicesForO365ManagementActivityAPIClientListOptions) ...](#PrivateLinkServicesForO365ManagementActivityAPIClient.NewListPager)
* [type PrivateLinkServicesForO365ManagementActivityAPIClientBeginCreateOrUpdateOptions](#PrivateLinkServicesForO365ManagementActivityAPIClientBeginCreateOrUpdateOptions)
* [type PrivateLinkServicesForO365ManagementActivityAPIClientBeginDeleteOptions](#PrivateLinkServicesForO365ManagementActivityAPIClientBeginDeleteOptions)
* [type PrivateLinkServicesForO365ManagementActivityAPIClientBeginUpdateOptions](#PrivateLinkServicesForO365ManagementActivityAPIClientBeginUpdateOptions)
* [type PrivateLinkServicesForO365ManagementActivityAPIClientCreateOrUpdateResponse](#PrivateLinkServicesForO365ManagementActivityAPIClientCreateOrUpdateResponse)
* [type PrivateLinkServicesForO365ManagementActivityAPIClientDeleteResponse](#PrivateLinkServicesForO365ManagementActivityAPIClientDeleteResponse)
* [type PrivateLinkServicesForO365ManagementActivityAPIClientGetOptions](#PrivateLinkServicesForO365ManagementActivityAPIClientGetOptions)
* [type PrivateLinkServicesForO365ManagementActivityAPIClientGetResponse](#PrivateLinkServicesForO365ManagementActivityAPIClientGetResponse)
* [type PrivateLinkServicesForO365ManagementActivityAPIClientListByResourceGroupOptions](#PrivateLinkServicesForO365ManagementActivityAPIClientListByResourceGroupOptions)
* [type PrivateLinkServicesForO365ManagementActivityAPIClientListByResourceGroupResponse](#PrivateLinkServicesForO365ManagementActivityAPIClientListByResourceGroupResponse)
* [type PrivateLinkServicesForO365ManagementActivityAPIClientListOptions](#PrivateLinkServicesForO365ManagementActivityAPIClientListOptions)
* [type
PrivateLinkServicesForO365ManagementActivityAPIClientListResponse](#PrivateLinkServicesForO365ManagementActivityAPIClientListResponse) * [type PrivateLinkServicesForO365ManagementActivityAPIClientUpdateResponse](#PrivateLinkServicesForO365ManagementActivityAPIClientUpdateResponse) * [type PrivateLinkServicesForO365ManagementActivityAPIDescription](#PrivateLinkServicesForO365ManagementActivityAPIDescription) * + [func (p PrivateLinkServicesForO365ManagementActivityAPIDescription) MarshalJSON() ([]byte, error)](#PrivateLinkServicesForO365ManagementActivityAPIDescription.MarshalJSON) + [func (p *PrivateLinkServicesForO365ManagementActivityAPIDescription) UnmarshalJSON(data []byte) error](#PrivateLinkServicesForO365ManagementActivityAPIDescription.UnmarshalJSON) * [type PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult](#PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult) * + [func (p PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult) MarshalJSON() ([]byte, error)](#PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult.MarshalJSON) + [func (p *PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult) UnmarshalJSON(data []byte) error](#PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult.UnmarshalJSON) * [type PrivateLinkServicesForSCCPowershellClient](#PrivateLinkServicesForSCCPowershellClient) * + [func NewPrivateLinkServicesForSCCPowershellClient(subscriptionID string, credential azcore.TokenCredential, ...) (*PrivateLinkServicesForSCCPowershellClient, error)](#NewPrivateLinkServicesForSCCPowershellClient) * + [func (client *PrivateLinkServicesForSCCPowershellClient) BeginCreateOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, ...) (...)](#PrivateLinkServicesForSCCPowershellClient.BeginCreateOrUpdate) + [func (client *PrivateLinkServicesForSCCPowershellClient) BeginDelete(ctx context.Context, resourceGroupName string, resourceName string, ...) 
(*runtime.Poller[PrivateLinkServicesForSCCPowershellClientDeleteResponse], ...)](#PrivateLinkServicesForSCCPowershellClient.BeginDelete) + [func (client *PrivateLinkServicesForSCCPowershellClient) BeginUpdate(ctx context.Context, resourceGroupName string, resourceName string, ...) (*runtime.Poller[PrivateLinkServicesForSCCPowershellClientUpdateResponse], ...)](#PrivateLinkServicesForSCCPowershellClient.BeginUpdate) + [func (client *PrivateLinkServicesForSCCPowershellClient) Get(ctx context.Context, resourceGroupName string, resourceName string, ...) (PrivateLinkServicesForSCCPowershellClientGetResponse, error)](#PrivateLinkServicesForSCCPowershellClient.Get) + [func (client *PrivateLinkServicesForSCCPowershellClient) NewListByResourceGroupPager(resourceGroupName string, ...) ...](#PrivateLinkServicesForSCCPowershellClient.NewListByResourceGroupPager) + [func (client *PrivateLinkServicesForSCCPowershellClient) NewListPager(options *PrivateLinkServicesForSCCPowershellClientListOptions) *runtime.Pager[PrivateLinkServicesForSCCPowershellClientListResponse]](#PrivateLinkServicesForSCCPowershellClient.NewListPager) * [type PrivateLinkServicesForSCCPowershellClientBeginCreateOrUpdateOptions](#PrivateLinkServicesForSCCPowershellClientBeginCreateOrUpdateOptions) * [type PrivateLinkServicesForSCCPowershellClientBeginDeleteOptions](#PrivateLinkServicesForSCCPowershellClientBeginDeleteOptions) * [type PrivateLinkServicesForSCCPowershellClientBeginUpdateOptions](#PrivateLinkServicesForSCCPowershellClientBeginUpdateOptions) * [type PrivateLinkServicesForSCCPowershellClientCreateOrUpdateResponse](#PrivateLinkServicesForSCCPowershellClientCreateOrUpdateResponse) * [type PrivateLinkServicesForSCCPowershellClientDeleteResponse](#PrivateLinkServicesForSCCPowershellClientDeleteResponse) * [type PrivateLinkServicesForSCCPowershellClientGetOptions](#PrivateLinkServicesForSCCPowershellClientGetOptions) * [type 
PrivateLinkServicesForSCCPowershellClientGetResponse](#PrivateLinkServicesForSCCPowershellClientGetResponse) * [type PrivateLinkServicesForSCCPowershellClientListByResourceGroupOptions](#PrivateLinkServicesForSCCPowershellClientListByResourceGroupOptions) * [type PrivateLinkServicesForSCCPowershellClientListByResourceGroupResponse](#PrivateLinkServicesForSCCPowershellClientListByResourceGroupResponse) * [type PrivateLinkServicesForSCCPowershellClientListOptions](#PrivateLinkServicesForSCCPowershellClientListOptions) * [type PrivateLinkServicesForSCCPowershellClientListResponse](#PrivateLinkServicesForSCCPowershellClientListResponse) * [type PrivateLinkServicesForSCCPowershellClientUpdateResponse](#PrivateLinkServicesForSCCPowershellClientUpdateResponse) * [type PrivateLinkServicesForSCCPowershellDescription](#PrivateLinkServicesForSCCPowershellDescription) * + [func (p PrivateLinkServicesForSCCPowershellDescription) MarshalJSON() ([]byte, error)](#PrivateLinkServicesForSCCPowershellDescription.MarshalJSON) + [func (p *PrivateLinkServicesForSCCPowershellDescription) UnmarshalJSON(data []byte) error](#PrivateLinkServicesForSCCPowershellDescription.UnmarshalJSON) * [type PrivateLinkServicesForSCCPowershellDescriptionListResult](#PrivateLinkServicesForSCCPowershellDescriptionListResult) * + [func (p PrivateLinkServicesForSCCPowershellDescriptionListResult) MarshalJSON() ([]byte, error)](#PrivateLinkServicesForSCCPowershellDescriptionListResult.MarshalJSON) + [func (p *PrivateLinkServicesForSCCPowershellDescriptionListResult) UnmarshalJSON(data []byte) error](#PrivateLinkServicesForSCCPowershellDescriptionListResult.UnmarshalJSON) * [type ProvisioningState](#ProvisioningState) * + [func PossibleProvisioningStateValues() []ProvisioningState](#PossibleProvisioningStateValues) * [type PublicNetworkAccess](#PublicNetworkAccess) * + [func PossiblePublicNetworkAccessValues() []PublicNetworkAccess](#PossiblePublicNetworkAccessValues) * [type Resource](#Resource) * + [func (r 
Resource) MarshalJSON() ([]byte, error)](#Resource.MarshalJSON) + [func (r *Resource) UnmarshalJSON(data []byte) error](#Resource.UnmarshalJSON) * [type ServiceAccessPolicyEntry](#ServiceAccessPolicyEntry) * + [func (s ServiceAccessPolicyEntry) MarshalJSON() ([]byte, error)](#ServiceAccessPolicyEntry.MarshalJSON) + [func (s *ServiceAccessPolicyEntry) UnmarshalJSON(data []byte) error](#ServiceAccessPolicyEntry.UnmarshalJSON) * [type ServiceAuthenticationConfigurationInfo](#ServiceAuthenticationConfigurationInfo) * + [func (s ServiceAuthenticationConfigurationInfo) MarshalJSON() ([]byte, error)](#ServiceAuthenticationConfigurationInfo.MarshalJSON) + [func (s *ServiceAuthenticationConfigurationInfo) UnmarshalJSON(data []byte) error](#ServiceAuthenticationConfigurationInfo.UnmarshalJSON) * [type ServiceCorsConfigurationInfo](#ServiceCorsConfigurationInfo) * + [func (s ServiceCorsConfigurationInfo) MarshalJSON() ([]byte, error)](#ServiceCorsConfigurationInfo.MarshalJSON) + [func (s *ServiceCorsConfigurationInfo) UnmarshalJSON(data []byte) error](#ServiceCorsConfigurationInfo.UnmarshalJSON) * [type ServiceCosmosDbConfigurationInfo](#ServiceCosmosDbConfigurationInfo) * + [func (s ServiceCosmosDbConfigurationInfo) MarshalJSON() ([]byte, error)](#ServiceCosmosDbConfigurationInfo.MarshalJSON) + [func (s *ServiceCosmosDbConfigurationInfo) UnmarshalJSON(data []byte) error](#ServiceCosmosDbConfigurationInfo.UnmarshalJSON) * [type ServiceExportConfigurationInfo](#ServiceExportConfigurationInfo) * + [func (s ServiceExportConfigurationInfo) MarshalJSON() ([]byte, error)](#ServiceExportConfigurationInfo.MarshalJSON) + [func (s *ServiceExportConfigurationInfo) UnmarshalJSON(data []byte) error](#ServiceExportConfigurationInfo.UnmarshalJSON) * [type ServicesClient](#ServicesClient) * + [func NewServicesClient(subscriptionID string, credential azcore.TokenCredential, ...) 
(*ServicesClient, error)](#NewServicesClient) * + [func (client *ServicesClient) BeginDelete(ctx context.Context, resourceGroupName string, resourceName string, ...) (*runtime.Poller[ServicesClientDeleteResponse], error)](#ServicesClient.BeginDelete) * [type ServicesClientBeginDeleteOptions](#ServicesClientBeginDeleteOptions) * [type ServicesClientDeleteResponse](#ServicesClientDeleteResponse) * [type ServicesPatchDescription](#ServicesPatchDescription) * + [func (s ServicesPatchDescription) MarshalJSON() ([]byte, error)](#ServicesPatchDescription.MarshalJSON) + [func (s *ServicesPatchDescription) UnmarshalJSON(data []byte) error](#ServicesPatchDescription.UnmarshalJSON) * [type ServicesProperties](#ServicesProperties) * + [func (s ServicesProperties) MarshalJSON() ([]byte, error)](#ServicesProperties.MarshalJSON) + [func (s *ServicesProperties) UnmarshalJSON(data []byte) error](#ServicesProperties.UnmarshalJSON) * [type ServicesPropertiesUpdateParameters](#ServicesPropertiesUpdateParameters) * + [func (s ServicesPropertiesUpdateParameters) MarshalJSON() ([]byte, error)](#ServicesPropertiesUpdateParameters.MarshalJSON) + [func (s *ServicesPropertiesUpdateParameters) UnmarshalJSON(data []byte) error](#ServicesPropertiesUpdateParameters.UnmarshalJSON) * [type ServicesResource](#ServicesResource) * + [func (s ServicesResource) MarshalJSON() ([]byte, error)](#ServicesResource.MarshalJSON) + [func (s *ServicesResource) UnmarshalJSON(data []byte) error](#ServicesResource.UnmarshalJSON) * [type ServicesResourceIdentity](#ServicesResourceIdentity) * + [func (s ServicesResourceIdentity) MarshalJSON() ([]byte, error)](#ServicesResourceIdentity.MarshalJSON) + [func (s *ServicesResourceIdentity) UnmarshalJSON(data []byte) error](#ServicesResourceIdentity.UnmarshalJSON) * [type SystemData](#SystemData) * + [func (s SystemData) MarshalJSON() ([]byte, error)](#SystemData.MarshalJSON) + [func (s *SystemData) UnmarshalJSON(data []byte) error](#SystemData.UnmarshalJSON) #### 
Examples [¶](#pkg-examples) * [OperationResultsClient.Get](#example-OperationResultsClient.Get) * [OperationsClient.NewListPager (ListComplianceCenterOperations)](#example-OperationsClient.NewListPager-ListComplianceCenterOperations) * [OperationsClient.NewListPager (ListEdmUploadOperations)](#example-OperationsClient.NewListPager-ListEdmUploadOperations) * [OperationsClient.NewListPager (ListManagementApiOperations)](#example-OperationsClient.NewListPager-ListManagementApiOperations) * [OperationsClient.NewListPager (ListMipPolicySyncOperations)](#example-OperationsClient.NewListPager-ListMipPolicySyncOperations) * [OperationsClient.NewListPager (ListOperations)](#example-OperationsClient.NewListPager-ListOperations) * [OperationsClient.NewListPager (ListSccPowershellOperations)](#example-OperationsClient.NewListPager-ListSccPowershellOperations) * [OperationsClient.NewListPager (ListSecurityCenterOperations)](#example-OperationsClient.NewListPager-ListSecurityCenterOperations) * [PrivateEndpointConnectionsAdtAPIClient.BeginCreateOrUpdate](#example-PrivateEndpointConnectionsAdtAPIClient.BeginCreateOrUpdate) * [PrivateEndpointConnectionsAdtAPIClient.BeginDelete](#example-PrivateEndpointConnectionsAdtAPIClient.BeginDelete) * [PrivateEndpointConnectionsAdtAPIClient.Get](#example-PrivateEndpointConnectionsAdtAPIClient.Get) * [PrivateEndpointConnectionsAdtAPIClient.NewListByServicePager](#example-PrivateEndpointConnectionsAdtAPIClient.NewListByServicePager) * [PrivateEndpointConnectionsCompClient.BeginCreateOrUpdate](#example-PrivateEndpointConnectionsCompClient.BeginCreateOrUpdate) * [PrivateEndpointConnectionsCompClient.BeginDelete](#example-PrivateEndpointConnectionsCompClient.BeginDelete) * [PrivateEndpointConnectionsCompClient.Get](#example-PrivateEndpointConnectionsCompClient.Get) * [PrivateEndpointConnectionsCompClient.NewListByServicePager](#example-PrivateEndpointConnectionsCompClient.NewListByServicePager) * 
[PrivateEndpointConnectionsForEDMClient.BeginCreateOrUpdate](#example-PrivateEndpointConnectionsForEDMClient.BeginCreateOrUpdate) * [PrivateEndpointConnectionsForEDMClient.BeginDelete](#example-PrivateEndpointConnectionsForEDMClient.BeginDelete) * [PrivateEndpointConnectionsForEDMClient.Get](#example-PrivateEndpointConnectionsForEDMClient.Get) * [PrivateEndpointConnectionsForEDMClient.NewListByServicePager](#example-PrivateEndpointConnectionsForEDMClient.NewListByServicePager) * [PrivateEndpointConnectionsForMIPPolicySyncClient.BeginCreateOrUpdate](#example-PrivateEndpointConnectionsForMIPPolicySyncClient.BeginCreateOrUpdate) * [PrivateEndpointConnectionsForMIPPolicySyncClient.BeginDelete](#example-PrivateEndpointConnectionsForMIPPolicySyncClient.BeginDelete) * [PrivateEndpointConnectionsForMIPPolicySyncClient.Get](#example-PrivateEndpointConnectionsForMIPPolicySyncClient.Get) * [PrivateEndpointConnectionsForMIPPolicySyncClient.NewListByServicePager](#example-PrivateEndpointConnectionsForMIPPolicySyncClient.NewListByServicePager) * [PrivateEndpointConnectionsForSCCPowershellClient.BeginCreateOrUpdate](#example-PrivateEndpointConnectionsForSCCPowershellClient.BeginCreateOrUpdate) * [PrivateEndpointConnectionsForSCCPowershellClient.BeginDelete](#example-PrivateEndpointConnectionsForSCCPowershellClient.BeginDelete) * [PrivateEndpointConnectionsForSCCPowershellClient.Get](#example-PrivateEndpointConnectionsForSCCPowershellClient.Get) * [PrivateEndpointConnectionsForSCCPowershellClient.NewListByServicePager](#example-PrivateEndpointConnectionsForSCCPowershellClient.NewListByServicePager) * [PrivateEndpointConnectionsSecClient.BeginCreateOrUpdate](#example-PrivateEndpointConnectionsSecClient.BeginCreateOrUpdate) * [PrivateEndpointConnectionsSecClient.BeginDelete](#example-PrivateEndpointConnectionsSecClient.BeginDelete) * [PrivateEndpointConnectionsSecClient.Get](#example-PrivateEndpointConnectionsSecClient.Get) * 
[PrivateEndpointConnectionsSecClient.NewListByServicePager](#example-PrivateEndpointConnectionsSecClient.NewListByServicePager) * [PrivateLinkResourcesAdtAPIClient.Get](#example-PrivateLinkResourcesAdtAPIClient.Get) * [PrivateLinkResourcesAdtAPIClient.ListByService](#example-PrivateLinkResourcesAdtAPIClient.ListByService) * [PrivateLinkResourcesClient.Get](#example-PrivateLinkResourcesClient.Get) * [PrivateLinkResourcesClient.ListByService](#example-PrivateLinkResourcesClient.ListByService) * [PrivateLinkResourcesCompClient.Get](#example-PrivateLinkResourcesCompClient.Get) * [PrivateLinkResourcesCompClient.ListByService](#example-PrivateLinkResourcesCompClient.ListByService) * [PrivateLinkResourcesForMIPPolicySyncClient.Get](#example-PrivateLinkResourcesForMIPPolicySyncClient.Get) * [PrivateLinkResourcesForMIPPolicySyncClient.ListByService](#example-PrivateLinkResourcesForMIPPolicySyncClient.ListByService) * [PrivateLinkResourcesForSCCPowershellClient.Get](#example-PrivateLinkResourcesForSCCPowershellClient.Get) * [PrivateLinkResourcesForSCCPowershellClient.ListByService](#example-PrivateLinkResourcesForSCCPowershellClient.ListByService) * [PrivateLinkResourcesSecClient.Get](#example-PrivateLinkResourcesSecClient.Get) * [PrivateLinkResourcesSecClient.ListByService](#example-PrivateLinkResourcesSecClient.ListByService) * [PrivateLinkServicesForEDMUploadClient.BeginCreateOrUpdate (CreateOrUpdateAServiceWithAllParameters)](#example-PrivateLinkServicesForEDMUploadClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithAllParameters) * [PrivateLinkServicesForEDMUploadClient.BeginCreateOrUpdate (CreateOrUpdateAServiceWithMinimumParameters)](#example-PrivateLinkServicesForEDMUploadClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithMinimumParameters) * [PrivateLinkServicesForEDMUploadClient.BeginUpdate](#example-PrivateLinkServicesForEDMUploadClient.BeginUpdate) * [PrivateLinkServicesForEDMUploadClient.Get](#example-PrivateLinkServicesForEDMUploadClient.Get) * 
[PrivateLinkServicesForEDMUploadClient.NewListByResourceGroupPager](#example-PrivateLinkServicesForEDMUploadClient.NewListByResourceGroupPager) * [PrivateLinkServicesForEDMUploadClient.NewListPager](#example-PrivateLinkServicesForEDMUploadClient.NewListPager) * [PrivateLinkServicesForM365ComplianceCenterClient.BeginCreateOrUpdate (CreateOrUpdateAServiceWithAllParameters)](#example-PrivateLinkServicesForM365ComplianceCenterClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithAllParameters) * [PrivateLinkServicesForM365ComplianceCenterClient.BeginCreateOrUpdate (CreateOrUpdateAServiceWithMinimumParameters)](#example-PrivateLinkServicesForM365ComplianceCenterClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithMinimumParameters) * [PrivateLinkServicesForM365ComplianceCenterClient.BeginDelete](#example-PrivateLinkServicesForM365ComplianceCenterClient.BeginDelete) * [PrivateLinkServicesForM365ComplianceCenterClient.BeginUpdate](#example-PrivateLinkServicesForM365ComplianceCenterClient.BeginUpdate) * [PrivateLinkServicesForM365ComplianceCenterClient.Get](#example-PrivateLinkServicesForM365ComplianceCenterClient.Get) * [PrivateLinkServicesForM365ComplianceCenterClient.NewListByResourceGroupPager](#example-PrivateLinkServicesForM365ComplianceCenterClient.NewListByResourceGroupPager) * [PrivateLinkServicesForM365ComplianceCenterClient.NewListPager](#example-PrivateLinkServicesForM365ComplianceCenterClient.NewListPager) * [PrivateLinkServicesForM365SecurityCenterClient.BeginCreateOrUpdate (CreateOrUpdateAServiceWithAllParameters)](#example-PrivateLinkServicesForM365SecurityCenterClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithAllParameters) * [PrivateLinkServicesForM365SecurityCenterClient.BeginCreateOrUpdate (CreateOrUpdateAServiceWithMinimumParameters)](#example-PrivateLinkServicesForM365SecurityCenterClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithMinimumParameters) * 
[PrivateLinkServicesForM365SecurityCenterClient.BeginDelete](#example-PrivateLinkServicesForM365SecurityCenterClient.BeginDelete) * [PrivateLinkServicesForM365SecurityCenterClient.BeginUpdate](#example-PrivateLinkServicesForM365SecurityCenterClient.BeginUpdate) * [PrivateLinkServicesForM365SecurityCenterClient.Get](#example-PrivateLinkServicesForM365SecurityCenterClient.Get) * [PrivateLinkServicesForM365SecurityCenterClient.NewListByResourceGroupPager](#example-PrivateLinkServicesForM365SecurityCenterClient.NewListByResourceGroupPager) * [PrivateLinkServicesForM365SecurityCenterClient.NewListPager](#example-PrivateLinkServicesForM365SecurityCenterClient.NewListPager) * [PrivateLinkServicesForMIPPolicySyncClient.BeginCreateOrUpdate (CreateOrUpdateAServiceWithAllParameters)](#example-PrivateLinkServicesForMIPPolicySyncClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithAllParameters) * [PrivateLinkServicesForMIPPolicySyncClient.BeginCreateOrUpdate (CreateOrUpdateAServiceWithMinimumParameters)](#example-PrivateLinkServicesForMIPPolicySyncClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithMinimumParameters) * [PrivateLinkServicesForMIPPolicySyncClient.BeginDelete](#example-PrivateLinkServicesForMIPPolicySyncClient.BeginDelete) * [PrivateLinkServicesForMIPPolicySyncClient.BeginUpdate](#example-PrivateLinkServicesForMIPPolicySyncClient.BeginUpdate) * [PrivateLinkServicesForMIPPolicySyncClient.Get](#example-PrivateLinkServicesForMIPPolicySyncClient.Get) * [PrivateLinkServicesForMIPPolicySyncClient.NewListByResourceGroupPager](#example-PrivateLinkServicesForMIPPolicySyncClient.NewListByResourceGroupPager) * [PrivateLinkServicesForMIPPolicySyncClient.NewListPager](#example-PrivateLinkServicesForMIPPolicySyncClient.NewListPager) * [PrivateLinkServicesForO365ManagementActivityAPIClient.BeginCreateOrUpdate (CreateOrUpdateAServiceWithAllParameters)](#example-PrivateLinkServicesForO365ManagementActivityAPIClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithAllParameters) * 
[PrivateLinkServicesForO365ManagementActivityAPIClient.BeginCreateOrUpdate (CreateOrUpdateAServiceWithMinimumParameters)](#example-PrivateLinkServicesForO365ManagementActivityAPIClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithMinimumParameters) * [PrivateLinkServicesForO365ManagementActivityAPIClient.BeginDelete](#example-PrivateLinkServicesForO365ManagementActivityAPIClient.BeginDelete) * [PrivateLinkServicesForO365ManagementActivityAPIClient.BeginUpdate](#example-PrivateLinkServicesForO365ManagementActivityAPIClient.BeginUpdate) * [PrivateLinkServicesForO365ManagementActivityAPIClient.Get](#example-PrivateLinkServicesForO365ManagementActivityAPIClient.Get) * [PrivateLinkServicesForO365ManagementActivityAPIClient.NewListByResourceGroupPager](#example-PrivateLinkServicesForO365ManagementActivityAPIClient.NewListByResourceGroupPager) * [PrivateLinkServicesForO365ManagementActivityAPIClient.NewListPager](#example-PrivateLinkServicesForO365ManagementActivityAPIClient.NewListPager) * [PrivateLinkServicesForSCCPowershellClient.BeginCreateOrUpdate (CreateOrUpdateAServiceWithAllParameters)](#example-PrivateLinkServicesForSCCPowershellClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithAllParameters) * [PrivateLinkServicesForSCCPowershellClient.BeginCreateOrUpdate (CreateOrUpdateAServiceWithMinimumParameters)](#example-PrivateLinkServicesForSCCPowershellClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithMinimumParameters) * [PrivateLinkServicesForSCCPowershellClient.BeginDelete](#example-PrivateLinkServicesForSCCPowershellClient.BeginDelete) * [PrivateLinkServicesForSCCPowershellClient.BeginUpdate](#example-PrivateLinkServicesForSCCPowershellClient.BeginUpdate) * [PrivateLinkServicesForSCCPowershellClient.Get](#example-PrivateLinkServicesForSCCPowershellClient.Get) * [PrivateLinkServicesForSCCPowershellClient.NewListByResourceGroupPager](#example-PrivateLinkServicesForSCCPowershellClient.NewListByResourceGroupPager) * 
[PrivateLinkServicesForSCCPowershellClient.NewListPager](#example-PrivateLinkServicesForSCCPowershellClient.NewListPager) * [ServicesClient.BeginDelete](#example-ServicesClient.BeginDelete) ### Constants [¶](#pkg-constants) This section is empty. ### Variables [¶](#pkg-variables) This section is empty. ### Functions [¶](#pkg-functions) This section is empty. ### Types [¶](#pkg-types) #### type [ClientFactory](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L19) [¶](#ClientFactory) added in v0.6.0 ``` type ClientFactory struct { // contains filtered or unexported fields } ``` ClientFactory is a client factory used to create any client in this module. Don't use this type directly, use NewClientFactory instead. #### func [NewClientFactory](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L30) [¶](#NewClientFactory) added in v0.6.0 ``` func NewClientFactory(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[ClientFactory](#ClientFactory), [error](/builtin#error)) ``` NewClientFactory creates a new instance of ClientFactory with the specified values. The parameter values will be propagated to any client created from this factory. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. 
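The constructor parameters above can be exercised with a short program. This is a minimal sketch, not a definitive implementation: it assumes the `azidentity` module for the credential (as the `credential` parameter description suggests), and the subscription ID, resource group, and resource name are placeholder values you must supply.

```go
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance"
)

func main() {
	// DefaultAzureCredential tries environment variables, managed identity,
	// and the Azure CLI in turn; any azcore.TokenCredential works here.
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}

	// "<subscription-id>" is a placeholder; pass nil options to accept defaults.
	factory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client factory: %v", err)
	}

	// Every client in the module is created from the factory rather than
	// constructed directly, e.g. the ServicesClient.
	client := factory.NewServicesClient()

	// BeginDelete starts a long-running operation; PollUntilDone blocks
	// until the service deletion completes or fails.
	poller, err := client.BeginDelete(context.TODO(), "<resource-group>", "<resource-name>", nil)
	if err != nil {
		log.Fatalf("failed to begin delete: %v", err)
	}
	if _, err := poller.PollUntilDone(context.TODO(), nil); err != nil {
		log.Fatalf("delete did not complete: %v", err)
	}
}
```

Creating all clients through one factory means the subscription ID, credential, and client options are configured once and propagated, which is why the docs advise against constructing `ClientFactory` directly.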
#### func (*ClientFactory) [NewOperationResultsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L46) [¶](#ClientFactory.NewOperationResultsClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewOperationResultsClient() *[OperationResultsClient](#OperationResultsClient) ``` #### func (*ClientFactory) [NewOperationsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L41) [¶](#ClientFactory.NewOperationsClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewOperationsClient() *[OperationsClient](#OperationsClient) ``` #### func (*ClientFactory) [NewPrivateEndpointConnectionsAdtAPIClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L106) [¶](#ClientFactory.NewPrivateEndpointConnectionsAdtAPIClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewPrivateEndpointConnectionsAdtAPIClient() *[PrivateEndpointConnectionsAdtAPIClient](#PrivateEndpointConnectionsAdtAPIClient) ``` #### func (*ClientFactory) [NewPrivateEndpointConnectionsCompClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L76) [¶](#ClientFactory.NewPrivateEndpointConnectionsCompClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewPrivateEndpointConnectionsCompClient() 
*[PrivateEndpointConnectionsCompClient](#PrivateEndpointConnectionsCompClient) ``` #### func (*ClientFactory) [NewPrivateEndpointConnectionsForEDMClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L61) [¶](#ClientFactory.NewPrivateEndpointConnectionsForEDMClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewPrivateEndpointConnectionsForEDMClient() *[PrivateEndpointConnectionsForEDMClient](#PrivateEndpointConnectionsForEDMClient) ``` #### func (*ClientFactory) [NewPrivateEndpointConnectionsForMIPPolicySyncClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L136) [¶](#ClientFactory.NewPrivateEndpointConnectionsForMIPPolicySyncClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewPrivateEndpointConnectionsForMIPPolicySyncClient() *[PrivateEndpointConnectionsForMIPPolicySyncClient](#PrivateEndpointConnectionsForMIPPolicySyncClient) ``` #### func (*ClientFactory) [NewPrivateEndpointConnectionsForSCCPowershellClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L121) [¶](#ClientFactory.NewPrivateEndpointConnectionsForSCCPowershellClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewPrivateEndpointConnectionsForSCCPowershellClient() *[PrivateEndpointConnectionsForSCCPowershellClient](#PrivateEndpointConnectionsForSCCPowershellClient) ``` #### func (*ClientFactory) 
[NewPrivateEndpointConnectionsSecClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L91) [¶](#ClientFactory.NewPrivateEndpointConnectionsSecClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewPrivateEndpointConnectionsSecClient() *[PrivateEndpointConnectionsSecClient](#PrivateEndpointConnectionsSecClient) ``` #### func (*ClientFactory) [NewPrivateLinkResourcesAdtAPIClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L111) [¶](#ClientFactory.NewPrivateLinkResourcesAdtAPIClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewPrivateLinkResourcesAdtAPIClient() *[PrivateLinkResourcesAdtAPIClient](#PrivateLinkResourcesAdtAPIClient) ``` #### func (*ClientFactory) [NewPrivateLinkResourcesClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L66) [¶](#ClientFactory.NewPrivateLinkResourcesClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewPrivateLinkResourcesClient() *[PrivateLinkResourcesClient](#PrivateLinkResourcesClient) ``` #### func (*ClientFactory) [NewPrivateLinkResourcesCompClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L81) [¶](#ClientFactory.NewPrivateLinkResourcesCompClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewPrivateLinkResourcesCompClient() 
*[PrivateLinkResourcesCompClient](#PrivateLinkResourcesCompClient) ``` #### func (*ClientFactory) [NewPrivateLinkResourcesForMIPPolicySyncClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L141) [¶](#ClientFactory.NewPrivateLinkResourcesForMIPPolicySyncClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewPrivateLinkResourcesForMIPPolicySyncClient() *[PrivateLinkResourcesForMIPPolicySyncClient](#PrivateLinkResourcesForMIPPolicySyncClient) ``` #### func (*ClientFactory) [NewPrivateLinkResourcesForSCCPowershellClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L126) [¶](#ClientFactory.NewPrivateLinkResourcesForSCCPowershellClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewPrivateLinkResourcesForSCCPowershellClient() *[PrivateLinkResourcesForSCCPowershellClient](#PrivateLinkResourcesForSCCPowershellClient) ``` #### func (*ClientFactory) [NewPrivateLinkResourcesSecClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L96) [¶](#ClientFactory.NewPrivateLinkResourcesSecClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewPrivateLinkResourcesSecClient() *[PrivateLinkResourcesSecClient](#PrivateLinkResourcesSecClient) ``` #### func (*ClientFactory) 
[NewPrivateLinkServicesForEDMUploadClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L51) [¶](#ClientFactory.NewPrivateLinkServicesForEDMUploadClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewPrivateLinkServicesForEDMUploadClient() *[PrivateLinkServicesForEDMUploadClient](#PrivateLinkServicesForEDMUploadClient) ``` #### func (*ClientFactory) [NewPrivateLinkServicesForM365ComplianceCenterClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L71) [¶](#ClientFactory.NewPrivateLinkServicesForM365ComplianceCenterClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewPrivateLinkServicesForM365ComplianceCenterClient() *[PrivateLinkServicesForM365ComplianceCenterClient](#PrivateLinkServicesForM365ComplianceCenterClient) ``` #### func (*ClientFactory) [NewPrivateLinkServicesForM365SecurityCenterClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L86) [¶](#ClientFactory.NewPrivateLinkServicesForM365SecurityCenterClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewPrivateLinkServicesForM365SecurityCenterClient() *[PrivateLinkServicesForM365SecurityCenterClient](#PrivateLinkServicesForM365SecurityCenterClient) ``` #### func (*ClientFactory) 
[NewPrivateLinkServicesForMIPPolicySyncClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L131) [¶](#ClientFactory.NewPrivateLinkServicesForMIPPolicySyncClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewPrivateLinkServicesForMIPPolicySyncClient() *[PrivateLinkServicesForMIPPolicySyncClient](#PrivateLinkServicesForMIPPolicySyncClient) ``` #### func (*ClientFactory) [NewPrivateLinkServicesForO365ManagementActivityAPIClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L101) [¶](#ClientFactory.NewPrivateLinkServicesForO365ManagementActivityAPIClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewPrivateLinkServicesForO365ManagementActivityAPIClient() *[PrivateLinkServicesForO365ManagementActivityAPIClient](#PrivateLinkServicesForO365ManagementActivityAPIClient) ``` #### func (*ClientFactory) [NewPrivateLinkServicesForSCCPowershellClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L116) [¶](#ClientFactory.NewPrivateLinkServicesForSCCPowershellClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewPrivateLinkServicesForSCCPowershellClient() *[PrivateLinkServicesForSCCPowershellClient](#PrivateLinkServicesForSCCPowershellClient) ``` #### func (*ClientFactory) 
[NewServicesClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/client_factory.go#L56) [¶](#ClientFactory.NewServicesClient) added in v0.6.0 ``` func (c *[ClientFactory](#ClientFactory)) NewServicesClient() *[ServicesClient](#ServicesClient) ``` #### type [CreatedByType](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/constants.go#L18) [¶](#CreatedByType) ``` type CreatedByType [string](/builtin#string) ``` CreatedByType - The type of identity that created the resource. ``` const ( CreatedByTypeApplication [CreatedByType](#CreatedByType) = "Application" CreatedByTypeKey [CreatedByType](#CreatedByType) = "Key" CreatedByTypeManagedIdentity [CreatedByType](#CreatedByType) = "ManagedIdentity" CreatedByTypeUser [CreatedByType](#CreatedByType) = "User" ) ``` #### func [PossibleCreatedByTypeValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/constants.go#L28) [¶](#PossibleCreatedByTypeValues) ``` func PossibleCreatedByTypeValues() [][CreatedByType](#CreatedByType) ``` PossibleCreatedByTypeValues returns the possible values for the CreatedByType const type. #### type [ErrorDetails](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L15) [¶](#ErrorDetails) ``` type ErrorDetails struct { // Object containing error details. Error *[ErrorDetailsInternal](#ErrorDetailsInternal) } ``` ErrorDetails - Error details. 
#### func (ErrorDetails) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L20) [¶](#ErrorDetails.MarshalJSON) added in v0.6.0 ``` func (e [ErrorDetails](#ErrorDetails)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type ErrorDetails. #### func (*ErrorDetails) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L27) [¶](#ErrorDetails.UnmarshalJSON) added in v0.6.0 ``` func (e *[ErrorDetails](#ErrorDetails)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type ErrorDetails. #### type [ErrorDetailsInternal](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L21) [¶](#ErrorDetailsInternal) ``` type ErrorDetailsInternal struct { // READ-ONLY; The error code. Code *[string](/builtin#string) // READ-ONLY; The error message. Message *[string](/builtin#string) // READ-ONLY; The target of the particular error. Target *[string](/builtin#string) } ``` ErrorDetailsInternal - Error details. 
#### func (ErrorDetailsInternal) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L47) [¶](#ErrorDetailsInternal.MarshalJSON) added in v0.6.0 ``` func (e [ErrorDetailsInternal](#ErrorDetailsInternal)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type ErrorDetailsInternal. #### func (*ErrorDetailsInternal) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L56) [¶](#ErrorDetailsInternal.UnmarshalJSON) added in v0.6.0 ``` func (e *[ErrorDetailsInternal](#ErrorDetailsInternal)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type ErrorDetailsInternal. #### type [Kind](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/constants.go#L38) [¶](#Kind) ``` type Kind [string](/builtin#string) ``` Kind - The kind of the service. ``` const ( KindFhir [Kind](#Kind) = "fhir" KindFhirStu3 [Kind](#Kind) = "fhir-Stu3" KindFhirR4 [Kind](#Kind) = "fhir-R4" ) ``` #### func [PossibleKindValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/constants.go#L47) [¶](#PossibleKindValues) ``` func PossibleKindValues() [][Kind](#Kind) ``` PossibleKindValues returns the possible values for the Kind const type. 
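The `Possible*Values` helpers are convenient for validating input before a request is sent. The sketch below mirrors the generated `Kind` constants locally so it runs standalone; in real code you would call `armm365securityandcompliance.PossibleKindValues()` directly, and the `isValidKind` helper is our own, not part of the SDK.

```go
package main

import "fmt"

// Kind mirrors the generated string-based enum so this sketch is standalone.
type Kind string

const (
	KindFhir     Kind = "fhir"
	KindFhirStu3 Kind = "fhir-Stu3"
	KindFhirR4   Kind = "fhir-R4"
)

// PossibleKindValues mirrors the generated helper of the same name.
func PossibleKindValues() []Kind {
	return []Kind{KindFhir, KindFhirStu3, KindFhirR4}
}

// isValidKind reports whether s is one of the known service kinds.
func isValidKind(s string) bool {
	for _, k := range PossibleKindValues() {
		if Kind(s) == k {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isValidKind("fhir-R4")) // true
	fmt.Println(isValidKind("bogus"))   // false
}
```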
#### type [ManagedServiceIdentityType](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/constants.go#L56) [¶](#ManagedServiceIdentityType) ``` type ManagedServiceIdentityType [string](/builtin#string) ``` ManagedServiceIdentityType - Type of identity being specified, currently SystemAssigned and None are allowed. ``` const ( ManagedServiceIdentityTypeNone [ManagedServiceIdentityType](#ManagedServiceIdentityType) = "None" ManagedServiceIdentityTypeSystemAssigned [ManagedServiceIdentityType](#ManagedServiceIdentityType) = "SystemAssigned" ) ``` #### func [PossibleManagedServiceIdentityTypeValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/constants.go#L64) [¶](#PossibleManagedServiceIdentityTypeValues) ``` func PossibleManagedServiceIdentityTypeValues() [][ManagedServiceIdentityType](#ManagedServiceIdentityType) ``` PossibleManagedServiceIdentityTypeValues returns the possible values for the ManagedServiceIdentityType const type. #### type [Operation](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L33) [¶](#Operation) ``` type Operation struct { // READ-ONLY; The information displayed about the operation. Display *[OperationDisplay](#OperationDisplay) // READ-ONLY; Indicates whether the operation is a data action IsDataAction *[bool](/builtin#bool) // READ-ONLY; Operation name: {provider}/{resource}/{read | write | action | delete} Name *[string](/builtin#string) // READ-ONLY; Default value is 'user,system'. Origin *[string](/builtin#string) } ``` Operation - Service REST API operation. 
#### func (Operation) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L82) [¶](#Operation.MarshalJSON) added in v0.6.0 ``` func (o [Operation](#Operation)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type Operation. #### func (*Operation) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L92) [¶](#Operation.UnmarshalJSON) added in v0.6.0 ``` func (o *[Operation](#Operation)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type Operation. #### type [OperationDisplay](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L48) [¶](#OperationDisplay) ``` type OperationDisplay struct { // READ-ONLY; Friendly description for the operation, Description *[string](/builtin#string) // READ-ONLY; Name of the operation Operation *[string](/builtin#string) // READ-ONLY; Service provider: Microsoft.M365SecurityAndCompliance Provider *[string](/builtin#string) // READ-ONLY; Resource Type: Services Resource *[string](/builtin#string) } ``` OperationDisplay - The object that represents the operation. 
#### func (OperationDisplay) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L121) [¶](#OperationDisplay.MarshalJSON) added in v0.6.0 ``` func (o [OperationDisplay](#OperationDisplay)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type OperationDisplay. #### func (*OperationDisplay) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L131) [¶](#OperationDisplay.UnmarshalJSON) added in v0.6.0 ``` func (o *[OperationDisplay](#OperationDisplay)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type OperationDisplay. #### type [OperationListResult](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L64) [¶](#OperationListResult) ``` type OperationListResult struct { // READ-ONLY; The link used to get the next page of service description objects. NextLink *[string](/builtin#string) // READ-ONLY; A list of service operations supported by the Microsoft.M365SecurityAndCompliance resource provider. Value []*[Operation](#Operation) } ``` OperationListResult - A list of service operations. It contains a list of operations and a URL link to get the next set of results. 
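The `NextLink`/`Value` shape is the standard ARM list contract: each page carries its items plus a link to the next page, and the SDK's `runtime.Pager` (see `OperationsClient.NewListPager` below) follows that link for you. A standalone sketch of the underlying loop, using local stand-in types and a fake `fetchPage` in place of real HTTP calls (all names here are illustrative, not SDK API):

```go
package main

import "fmt"

// Local stand-ins for the generated model types, so the sketch runs standalone.
type Operation struct{ Name string }

type OperationListResult struct {
	Value    []*Operation
	NextLink string // empty when there are no more pages
}

// fetchPage simulates one GET against the list endpoint (or a NextLink URL).
func fetchPage(link string) OperationListResult {
	pages := map[string]OperationListResult{
		"":      {Value: []*Operation{{Name: "read"}, {Name: "write"}}, NextLink: "page2"},
		"page2": {Value: []*Operation{{Name: "delete"}}, NextLink: ""},
	}
	return pages[link]
}

// collectAll follows NextLink until it is empty -- the loop the SDK's
// pager performs for you behind More()/NextPage().
func collectAll() []string {
	var names []string
	link := ""
	for {
		page := fetchPage(link)
		for _, op := range page.Value {
			names = append(names, op.Name)
		}
		if page.NextLink == "" {
			return names
		}
		link = page.NextLink
	}
}

func main() {
	fmt.Println(collectAll()) // [read write delete]
}
```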
#### func (OperationListResult) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L160) [¶](#OperationListResult.MarshalJSON) ``` func (o [OperationListResult](#OperationListResult)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type OperationListResult. #### func (*OperationListResult) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L168) [¶](#OperationListResult.UnmarshalJSON) added in v0.6.0 ``` func (o *[OperationListResult](#OperationListResult)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type OperationListResult. #### type [OperationResultStatus](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/constants.go#L72) [¶](#OperationResultStatus) ``` type OperationResultStatus [string](/builtin#string) ``` OperationResultStatus - The status of the operation being performed. 
``` const ( OperationResultStatusCanceled [OperationResultStatus](#OperationResultStatus) = "Canceled" OperationResultStatusFailed [OperationResultStatus](#OperationResultStatus) = "Failed" OperationResultStatusRequested [OperationResultStatus](#OperationResultStatus) = "Requested" OperationResultStatusRunning [OperationResultStatus](#OperationResultStatus) = "Running" OperationResultStatusSucceeded [OperationResultStatus](#OperationResultStatus) = "Succeeded" ) ``` #### func [PossibleOperationResultStatusValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/constants.go#L83) [¶](#PossibleOperationResultStatusValues) ``` func PossibleOperationResultStatusValues() [][OperationResultStatus](#OperationResultStatus) ``` PossibleOperationResultStatusValues returns the possible values for the OperationResultStatus const type. #### type [OperationResultsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/operationresults_client.go#L26) [¶](#OperationResultsClient) ``` type OperationResultsClient struct { // contains filtered or unexported fields } ``` OperationResultsClient contains the methods for the OperationResults group. Don't use this type directly, use NewOperationResultsClient() instead. 
#### func [NewOperationResultsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/operationresults_client.go#L35) [¶](#NewOperationResultsClient) ``` func NewOperationResultsClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[OperationResultsClient](#OperationResultsClient), [error](/builtin#error)) ``` NewOperationResultsClient creates a new instance of OperationResultsClient with the specified values. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. #### func (*OperationResultsClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/operationresults_client.go#L54) [¶](#OperationResultsClient.Get) ``` func (client *[OperationResultsClient](#OperationResultsClient)) Get(ctx [context](/context).[Context](/context#Context), locationName [string](/builtin#string), operationResultID [string](/builtin#string), options *[OperationResultsClientGetOptions](#OperationResultsClientGetOptions)) ([OperationResultsClientGetResponse](#OperationResultsClientGetResponse), [error](/builtin#error)) ``` Get - Get the operation result for a long running operation. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * locationName - The location of the operation. 
* operationResultID - The ID of the operation result to get. * options - OperationResultsClientGetOptions contains the optional parameters for the OperationResultsClient.Get method. Example [¶](#example-OperationResultsClient.Get) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/OperationResultsGet.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewOperationResultsClient().Get(ctx, "westus", "exampleid", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.OperationResultsDescription = armm365securityandcompliance.OperationResultsDescription{ // Name: to.Ptr("servicename"), // ID: to.Ptr("/subscriptions/subid/providers/Microsoft.M365SecurityAndCompliance/locations/westus/operationresults/exampleid"), // Properties: map[string]any{ // }, // StartTime: to.Ptr("2020-01-11T06:03:30.2716301Z"), // Status: to.Ptr(armm365securityandcompliance.OperationResultStatusRequested), // } } ``` ``` Output: ``` #### type [OperationResultsClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L73) [¶](#OperationResultsClientGetOptions) added in v0.2.0 ``` type OperationResultsClientGetOptions struct { } ``` OperationResultsClientGetOptions contains the optional parameters for the OperationResultsClient.Get method. #### type [OperationResultsClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L13) [¶](#OperationResultsClientGetResponse) added in v0.2.0 ``` type OperationResultsClientGetResponse struct { [OperationResultsDescription](#OperationResultsDescription) } ``` OperationResultsClientGetResponse contains the response from method OperationResultsClient.Get. #### type [OperationResultsDescription](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L78) [¶](#OperationResultsDescription) ``` type OperationResultsDescription struct { // Additional properties of the operation result. Properties [any](/builtin#any) // READ-ONLY; The ID of the operation returned.
ID *[string](/builtin#string) // READ-ONLY; The name of the operation result. Name *[string](/builtin#string) // READ-ONLY; The time that the operation was started. StartTime *[string](/builtin#string) // READ-ONLY; The status of the operation being performed. Status *[OperationResultStatus](#OperationResultStatus) } ``` OperationResultsDescription - The properties indicating the operation result of an operation on a service. #### func (OperationResultsDescription) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L191) [¶](#OperationResultsDescription.MarshalJSON) added in v0.6.0 ``` func (o [OperationResultsDescription](#OperationResultsDescription)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type OperationResultsDescription. #### func (*OperationResultsDescription) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L202) [¶](#OperationResultsDescription.UnmarshalJSON) added in v0.6.0 ``` func (o *[OperationResultsDescription](#OperationResultsDescription)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type OperationResultsDescription. 
#### type [OperationsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/operations_client.go#L23) [¶](#OperationsClient) ``` type OperationsClient struct { // contains filtered or unexported fields } ``` OperationsClient contains the methods for the Operations group. Don't use this type directly, use NewOperationsClient() instead. #### func [NewOperationsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/operations_client.go#L30) [¶](#NewOperationsClient) ``` func NewOperationsClient(credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[OperationsClient](#OperationsClient), [error](/builtin#error)) ``` NewOperationsClient creates a new instance of OperationsClient with the specified values. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. 
#### func (*OperationsClient) [NewListPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/operations_client.go#L45) [¶](#OperationsClient.NewListPager) added in v0.4.0 ``` func (client *[OperationsClient](#OperationsClient)) NewListPager(options *[OperationsClientListOptions](#OperationsClientListOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[OperationsClientListResponse](#OperationsClientListResponse)] ``` NewListPager - Lists all of the available M365SecurityAndCompliance REST API operations. Generated from API version 2021-03-25-preview * options - OperationsClientListOptions contains the optional parameters for the OperationsClient.NewListPager method. Example (ListComplianceCenterOperations) [¶](#example-OperationsClient.NewListPager-ListComplianceCenterOperations) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ComplianceCenterOperationsList.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewOperationsClient().NewListPager(nil) for pager.More() { page, err := pager.NextPage(ctx) if err != 
nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. // page.OperationListResult = armm365securityandcompliance.OperationListResult{ // Value: []*armm365securityandcompliance.Operation{ // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/read"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/write"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/delete"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/privateEndpointConnections/read"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/privateEndpointConnections/write"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/privateEndpointConnections/delete"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: 
to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/privateLinkResources/read"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }}, // } } } ``` ``` Output: ``` Example (ListEdmUploadOperations) [¶](#example-OperationsClient.NewListPager-ListEdmUploadOperations) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/EdmUploadOperationsList.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewOperationsClient().NewListPager(nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
// page.OperationListResult = armm365securityandcompliance.OperationListResult{ // Value: []*armm365securityandcompliance.Operation{ // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/read"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/write"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/delete"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/privateEndpointConnections/read"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/privateEndpointConnections/write"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/privateEndpointConnections/delete"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/privateLinkResources/read"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }}, // } } } ``` Example (ListManagementApiOperations) [¶](#example-OperationsClient.NewListPager-ListManagementApiOperations) Generated from example definition:
<https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ManagementAPIOperationsList.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewOperationsClient().NewListPager(nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// page.OperationListResult = armm365securityandcompliance.OperationListResult{ // Value: []*armm365securityandcompliance.Operation{ // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/read"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/write"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/delete"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/privateEndpointConnections/read"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/privateEndpointConnections/write"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/privateEndpointConnections/delete"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/privateLinkResources/read"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }}, // } } } ``` Example (ListMipPolicySyncOperations) [¶](#example-OperationsClient.NewListPager-ListMipPolicySyncOperations) Generated from example definition:
<https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/MIPPolicySyncOperationsList.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewOperationsClient().NewListPager(nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// page.OperationListResult = armm365securityandcompliance.OperationListResult{ // Value: []*armm365securityandcompliance.Operation{ // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/read"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/write"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/delete"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/privateEndpointConnections/read"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/privateEndpointConnections/write"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/privateEndpointConnections/delete"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/privateLinkResources/read"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }}, // } } } ``` Example (ListOperations) [¶](#example-OperationsClient.NewListPager-ListOperations) Generated from example definition:
<https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/OperationsList.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewOperationsClient().NewListPager(nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// page.OperationListResult = armm365securityandcompliance.OperationListResult{ // Value: []*armm365securityandcompliance.Operation{ // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/locations/operationresults/read"), // Display: &armm365securityandcompliance.OperationDisplay{ // Description: to.Ptr("Get the status of an asynchronous operation"), // Operation: to.Ptr("read"), // Provider: to.Ptr("Microsoft.M365SecurityAndCompliance"), // Resource: to.Ptr("operationresults"), // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/Operations/read"), // Display: &armm365securityandcompliance.OperationDisplay{ // Description: to.Ptr("Get the list of operations supported by this Resource Provider."), // Operation: to.Ptr("read"), // Provider: to.Ptr("Microsoft.M365SecurityAndCompliance"), // Resource: to.Ptr("operations"), // }, // Origin: to.Ptr("user,system"), // }}, // } } } ``` Example (ListSccPowershellOperations) [¶](#example-OperationsClient.NewListPager-ListSccPowershellOperations) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SCCPowershellOperationsList.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewOperationsClient().NewListPager(nil) for pager.More()
{ page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. // page.OperationListResult = armm365securityandcompliance.OperationListResult{ // Value: []*armm365securityandcompliance.Operation{ // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/read"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/write"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/delete"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/privateEndpointConnections/read"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/privateEndpointConnections/write"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/privateEndpointConnections/delete"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: 
to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/privateLinkResources/read"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }}, // } } } ``` Example (ListSecurityCenterOperations) [¶](#example-OperationsClient.NewListPager-ListSecurityCenterOperations) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SecurityCenterOperationsList.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewOperationsClient().NewListPager(nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
// page.OperationListResult = armm365securityandcompliance.OperationListResult{ // Value: []*armm365securityandcompliance.Operation{ // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/read"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/write"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/delete"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/privateEndpointConnections/read"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/privateEndpointConnections/write"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/privateEndpointConnections/delete"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }, // { // Name: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/privateLinkResources/read"), // Display: &armm365securityandcompliance.OperationDisplay{ // }, // Origin: to.Ptr("user,system"), // }}, // } } } ``` #### type
[OperationsClientListOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L96) [¶](#OperationsClientListOptions) added in v0.2.0 ``` type OperationsClientListOptions struct { } ``` OperationsClientListOptions contains the optional parameters for the OperationsClient.NewListPager method. #### type [OperationsClientListResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L18) [¶](#OperationsClientListResponse) added in v0.2.0 ``` type OperationsClientListResponse struct { [OperationListResult](#OperationListResult) } ``` OperationsClientListResponse contains the response from method OperationsClient.NewListPager. #### type [PrivateEndpoint](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L101) [¶](#PrivateEndpoint) ``` type PrivateEndpoint struct { // READ-ONLY; The ARM identifier for Private Endpoint ID *[string](/builtin#string) } ``` PrivateEndpoint - The Private Endpoint resource. #### func (PrivateEndpoint) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L234) [¶](#PrivateEndpoint.MarshalJSON) added in v0.6.0 ``` func (p [PrivateEndpoint](#PrivateEndpoint)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type PrivateEndpoint. 
#### func (*PrivateEndpoint) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L241) [¶](#PrivateEndpoint.UnmarshalJSON) added in v0.6.0 ``` func (p *[PrivateEndpoint](#PrivateEndpoint)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type PrivateEndpoint. #### type [PrivateEndpointConnection](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L107) [¶](#PrivateEndpointConnection) ``` type PrivateEndpointConnection struct { // Resource properties. Properties *[PrivateEndpointConnectionProperties](#PrivateEndpointConnectionProperties) // READ-ONLY; Fully qualified resource ID for the resource. Ex - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName} ID *[string](/builtin#string) // READ-ONLY; The name of the resource Name *[string](/builtin#string) // READ-ONLY; Required property for system data SystemData *[SystemData](#SystemData) // READ-ONLY; The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts" Type *[string](/builtin#string) } ``` PrivateEndpointConnection - The Private Endpoint Connection resource. 
#### func (PrivateEndpointConnection) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L261) [¶](#PrivateEndpointConnection.MarshalJSON) added in v0.6.0 ``` func (p [PrivateEndpointConnection](#PrivateEndpointConnection)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type PrivateEndpointConnection. #### func (*PrivateEndpointConnection) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L272) [¶](#PrivateEndpointConnection.UnmarshalJSON) added in v0.6.0 ``` func (p *[PrivateEndpointConnection](#PrivateEndpointConnection)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type PrivateEndpointConnection. #### type [PrivateEndpointConnectionListResult](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L125) [¶](#PrivateEndpointConnectionListResult) ``` type PrivateEndpointConnectionListResult struct { // Array of private endpoint connections Value []*[PrivateEndpointConnection](#PrivateEndpointConnection) // READ-ONLY; The URL to get the next set of results. 
NextLink *[string](/builtin#string) } ``` PrivateEndpointConnectionListResult - List of private endpoint connections associated with the specified service instance #### func (PrivateEndpointConnectionListResult) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L304) [¶](#PrivateEndpointConnectionListResult.MarshalJSON) ``` func (p [PrivateEndpointConnectionListResult](#PrivateEndpointConnectionListResult)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type PrivateEndpointConnectionListResult. #### func (*PrivateEndpointConnectionListResult) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L312) [¶](#PrivateEndpointConnectionListResult.UnmarshalJSON) added in v0.6.0 ``` func (p *[PrivateEndpointConnectionListResult](#PrivateEndpointConnectionListResult)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type PrivateEndpointConnectionListResult. #### type [PrivateEndpointConnectionProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L134) [¶](#PrivateEndpointConnectionProperties) ``` type PrivateEndpointConnectionProperties struct { // REQUIRED; A collection of information about the state of the connection between service consumer and provider.
PrivateLinkServiceConnectionState *[PrivateLinkServiceConnectionState](#PrivateLinkServiceConnectionState) // The private endpoint resource. PrivateEndpoint *[PrivateEndpoint](#PrivateEndpoint) // READ-ONLY; The provisioning state of the private endpoint connection resource. ProvisioningState *[PrivateEndpointConnectionProvisioningState](#PrivateEndpointConnectionProvisioningState) } ``` PrivateEndpointConnectionProperties - Properties of the private endpoint connection. #### func (PrivateEndpointConnectionProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L335) [¶](#PrivateEndpointConnectionProperties.MarshalJSON) added in v0.6.0 ``` func (p [PrivateEndpointConnectionProperties](#PrivateEndpointConnectionProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type PrivateEndpointConnectionProperties. #### func (*PrivateEndpointConnectionProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L344) [¶](#PrivateEndpointConnectionProperties.UnmarshalJSON) added in v0.6.0 ``` func (p *[PrivateEndpointConnectionProperties](#PrivateEndpointConnectionProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type PrivateEndpointConnectionProperties.
#### type [PrivateEndpointConnectionProvisioningState](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/constants.go#L94) [¶](#PrivateEndpointConnectionProvisioningState) ``` type PrivateEndpointConnectionProvisioningState [string](/builtin#string) ``` PrivateEndpointConnectionProvisioningState - The current provisioning state. ``` const ( PrivateEndpointConnectionProvisioningStateCreating [PrivateEndpointConnectionProvisioningState](#PrivateEndpointConnectionProvisioningState) = "Creating" PrivateEndpointConnectionProvisioningStateDeleting [PrivateEndpointConnectionProvisioningState](#PrivateEndpointConnectionProvisioningState) = "Deleting" PrivateEndpointConnectionProvisioningStateFailed [PrivateEndpointConnectionProvisioningState](#PrivateEndpointConnectionProvisioningState) = "Failed" PrivateEndpointConnectionProvisioningStateSucceeded [PrivateEndpointConnectionProvisioningState](#PrivateEndpointConnectionProvisioningState) = "Succeeded" ) ``` #### func [PossiblePrivateEndpointConnectionProvisioningStateValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/constants.go#L104) [¶](#PossiblePrivateEndpointConnectionProvisioningStateValues) ``` func PossiblePrivateEndpointConnectionProvisioningStateValues() [][PrivateEndpointConnectionProvisioningState](#PrivateEndpointConnectionProvisioningState) ``` PossiblePrivateEndpointConnectionProvisioningStateValues returns the possible values for the PrivateEndpointConnectionProvisioningState const type. 
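The `Possible...Values` helper exists so callers can validate or enumerate the string-backed constants without hard-coding them. A self-contained sketch of that pattern, using a local stand-in type that re-declares the same four state strings (it is not the SDK type):

```go
package main

import "fmt"

// provisioningState is a local stand-in mirroring the SDK's string-backed
// enum pattern for PrivateEndpointConnectionProvisioningState.
type provisioningState string

const (
	stateCreating  provisioningState = "Creating"
	stateDeleting  provisioningState = "Deleting"
	stateFailed    provisioningState = "Failed"
	stateSucceeded provisioningState = "Succeeded"
)

// possibleValues mirrors PossiblePrivateEndpointConnectionProvisioningStateValues:
// it returns every defined constant.
func possibleValues() []provisioningState {
	return []provisioningState{stateCreating, stateDeleting, stateFailed, stateSucceeded}
}

// isKnownState reports whether a raw string from a service response
// matches one of the defined constants.
func isKnownState(raw string) bool {
	for _, v := range possibleValues() {
		if string(v) == raw {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isKnownState("Succeeded")) // true
	fmt.Println(isKnownState("Pending"))   // false
}
```

With the real SDK the check is the same loop over `armm365securityandcompliance.PossiblePrivateEndpointConnectionProvisioningStateValues()`; note the service may still return values a given SDK version does not list, so treat an unknown string as data, not an error.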
#### type [PrivateEndpointConnectionsAdtAPIClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsadtapi_client.go#L26) [¶](#PrivateEndpointConnectionsAdtAPIClient) ``` type PrivateEndpointConnectionsAdtAPIClient struct { // contains filtered or unexported fields } ``` PrivateEndpointConnectionsAdtAPIClient contains the methods for the PrivateEndpointConnectionsAdtAPI group. Don't use this type directly, use NewPrivateEndpointConnectionsAdtAPIClient() instead. #### func [NewPrivateEndpointConnectionsAdtAPIClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsadtapi_client.go#L35) [¶](#NewPrivateEndpointConnectionsAdtAPIClient) ``` func NewPrivateEndpointConnectionsAdtAPIClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[PrivateEndpointConnectionsAdtAPIClient](#PrivateEndpointConnectionsAdtAPIClient), [error](/builtin#error)) ``` NewPrivateEndpointConnectionsAdtAPIClient creates a new instance of PrivateEndpointConnectionsAdtAPIClient with the specified values. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. 
#### func (*PrivateEndpointConnectionsAdtAPIClient) [BeginCreateOrUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsadtapi_client.go#L57) [¶](#PrivateEndpointConnectionsAdtAPIClient.BeginCreateOrUpdate) ``` func (client *[PrivateEndpointConnectionsAdtAPIClient](#PrivateEndpointConnectionsAdtAPIClient)) BeginCreateOrUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateEndpointConnectionName [string](/builtin#string), properties [PrivateEndpointConnection](#PrivateEndpointConnection), options *[PrivateEndpointConnectionsAdtAPIClientBeginCreateOrUpdateOptions](#PrivateEndpointConnectionsAdtAPIClientBeginCreateOrUpdateOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateEndpointConnectionsAdtAPIClientCreateOrUpdateResponse](#PrivateEndpointConnectionsAdtAPIClientCreateOrUpdateResponse)], [error](/builtin#error)) ``` BeginCreateOrUpdate - Update the state of the specified private endpoint connection associated with the service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateEndpointConnectionName - The name of the private endpoint connection associated with the Azure resource * properties - The private endpoint connection properties. * options - PrivateEndpointConnectionsAdtAPIClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateEndpointConnectionsAdtAPIClient.BeginCreateOrUpdate method. 
Example [¶](#example-PrivateEndpointConnectionsAdtAPIClient.BeginCreateOrUpdate) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ManagementAPIServiceCreatePrivateEndpointConnection.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateEndpointConnectionsAdtAPIClient().BeginCreateOrUpdate(ctx, "rgname", "service1", "myConnection", armm365securityandcompliance.PrivateEndpointConnection{ Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ Description: to.Ptr("Auto-Approved"), Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), }, }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateEndpointConnection = armm365securityandcompliance.PrivateEndpointConnection{ // Name: to.Ptr("myConnection"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/privateEndpointConnections"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/service1/privateEndpointConnections/myConnection"), // Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ // PrivateEndpoint: &armm365securityandcompliance.PrivateEndpoint{ // ID: to.Ptr("/subscriptions/c80fb759-c965-4c6a-9110-9b2b2d038882/resourceGroups/myResourceGroup/providers/Microsoft.Network/privateEndpoints/peexample01"), // }, // PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ // Description: to.Ptr("Auto-Approved"), // ActionsRequired: to.Ptr("None"), // Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.PrivateEndpointConnectionProvisioningStateSucceeded), // }, // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // } } ``` ``` Output: ``` #### func (*PrivateEndpointConnectionsAdtAPIClient)
[BeginDelete](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsadtapi_client.go#L129) [¶](#PrivateEndpointConnectionsAdtAPIClient.BeginDelete) ``` func (client *[PrivateEndpointConnectionsAdtAPIClient](#PrivateEndpointConnectionsAdtAPIClient)) BeginDelete(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateEndpointConnectionName [string](/builtin#string), options *[PrivateEndpointConnectionsAdtAPIClientBeginDeleteOptions](#PrivateEndpointConnectionsAdtAPIClientBeginDeleteOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateEndpointConnectionsAdtAPIClientDeleteResponse](#PrivateEndpointConnectionsAdtAPIClientDeleteResponse)], [error](/builtin#error)) ``` BeginDelete - Deletes a private endpoint connection. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateEndpointConnectionName - The name of the private endpoint connection associated with the Azure resource * options - PrivateEndpointConnectionsAdtAPIClientBeginDeleteOptions contains the optional parameters for the PrivateEndpointConnectionsAdtAPIClient.BeginDelete method. 
Example [¶](#example-PrivateEndpointConnectionsAdtAPIClient.BeginDelete) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ManagementAPIServiceDeletePrivateEndpointConnection.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateEndpointConnectionsAdtAPIClient().BeginDelete(ctx, "rgname", "service1", "myConnection", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } _, err = poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } } ``` ``` Output: ``` #### func (*PrivateEndpointConnectionsAdtAPIClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsadtapi_client.go#L201) [¶](#PrivateEndpointConnectionsAdtAPIClient.Get) ``` func (client *[PrivateEndpointConnectionsAdtAPIClient](#PrivateEndpointConnectionsAdtAPIClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateEndpointConnectionName [string](/builtin#string), options
*[PrivateEndpointConnectionsAdtAPIClientGetOptions](#PrivateEndpointConnectionsAdtAPIClientGetOptions)) ([PrivateEndpointConnectionsAdtAPIClientGetResponse](#PrivateEndpointConnectionsAdtAPIClientGetResponse), [error](/builtin#error)) ``` Get - Gets the specified private endpoint connection associated with the service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateEndpointConnectionName - The name of the private endpoint connection associated with the Azure resource * options - PrivateEndpointConnectionsAdtAPIClientGetOptions contains the optional parameters for the PrivateEndpointConnectionsAdtAPIClient.Get method. Example [¶](#example-PrivateEndpointConnectionsAdtAPIClient.Get) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ManagementAPIServiceGetPrivateEndpointConnection.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateEndpointConnectionsAdtAPIClient().Get(ctx, "rgname", "service1", "myConnection", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. 
We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. // res.PrivateEndpointConnection = armm365securityandcompliance.PrivateEndpointConnection{ // Name: to.Ptr("myConnection"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/privateEndpointConnections"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/service1/privateEndpointConnections/myConnection"), // Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ // PrivateEndpoint: &armm365securityandcompliance.PrivateEndpoint{ // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Network/privateEndpoints/peexample01"), // }, // PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ // Description: to.Ptr("Auto-Approved"), // ActionsRequired: to.Ptr("None"), // Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), // }, // }, // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // } } ``` ``` Output: ``` #### func (*PrivateEndpointConnectionsAdtAPIClient)
[NewListByServicePager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsadtapi_client.go#L262) [¶](#PrivateEndpointConnectionsAdtAPIClient.NewListByServicePager) added in v0.4.0 ``` func (client *[PrivateEndpointConnectionsAdtAPIClient](#PrivateEndpointConnectionsAdtAPIClient)) NewListByServicePager(resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateEndpointConnectionsAdtAPIClientListByServiceOptions](#PrivateEndpointConnectionsAdtAPIClientListByServiceOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[PrivateEndpointConnectionsAdtAPIClientListByServiceResponse](#PrivateEndpointConnectionsAdtAPIClientListByServiceResponse)] ``` NewListByServicePager - Lists all private endpoint connections for a service. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * options - PrivateEndpointConnectionsAdtAPIClientListByServiceOptions contains the optional parameters for the PrivateEndpointConnectionsAdtAPIClient.NewListByServicePager method. 
Example [¶](#example-PrivateEndpointConnectionsAdtAPIClient.NewListByServicePager) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ManagementAPIServiceListPrivateEndpointConnections.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewPrivateEndpointConnectionsAdtAPIClient().NewListByServicePager("rgname", "service1", nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// page.PrivateEndpointConnectionListResult = armm365securityandcompliance.PrivateEndpointConnectionListResult{ // Value: []*armm365securityandcompliance.PrivateEndpointConnection{ // { // Name: to.Ptr("myConnection"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/privateEndpointConnections"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/service1/privateEndpointConnections/myConnection"), // Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ // PrivateEndpoint: &armm365securityandcompliance.PrivateEndpoint{ // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Network/privateEndpoints/peexample01"), // }, // PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ // Description: to.Ptr("Auto-Approved"), // ActionsRequired: to.Ptr("None"), // Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), // }, // }, // }}, // } } } ``` ``` Output: ``` #### type [PrivateEndpointConnectionsAdtAPIClientBeginCreateOrUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L147) [¶](#PrivateEndpointConnectionsAdtAPIClientBeginCreateOrUpdateOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsAdtAPIClientBeginCreateOrUpdateOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateEndpointConnectionsAdtAPIClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateEndpointConnectionsAdtAPIClient.BeginCreateOrUpdate method.
#### type [PrivateEndpointConnectionsAdtAPIClientBeginDeleteOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L154) [¶](#PrivateEndpointConnectionsAdtAPIClientBeginDeleteOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsAdtAPIClientBeginDeleteOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateEndpointConnectionsAdtAPIClientBeginDeleteOptions contains the optional parameters for the PrivateEndpointConnectionsAdtAPIClient.BeginDelete method. #### type [PrivateEndpointConnectionsAdtAPIClientCreateOrUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L23) [¶](#PrivateEndpointConnectionsAdtAPIClientCreateOrUpdateResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsAdtAPIClientCreateOrUpdateResponse struct { [PrivateEndpointConnection](#PrivateEndpointConnection) } ``` PrivateEndpointConnectionsAdtAPIClientCreateOrUpdateResponse contains the response from method PrivateEndpointConnectionsAdtAPIClient.BeginCreateOrUpdate. #### type [PrivateEndpointConnectionsAdtAPIClientDeleteResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L28) [¶](#PrivateEndpointConnectionsAdtAPIClientDeleteResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsAdtAPIClientDeleteResponse struct { } ``` PrivateEndpointConnectionsAdtAPIClientDeleteResponse contains the response from method PrivateEndpointConnectionsAdtAPIClient.BeginDelete. 
#### type [PrivateEndpointConnectionsAdtAPIClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L161) [¶](#PrivateEndpointConnectionsAdtAPIClientGetOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsAdtAPIClientGetOptions struct { } ``` PrivateEndpointConnectionsAdtAPIClientGetOptions contains the optional parameters for the PrivateEndpointConnectionsAdtAPIClient.Get method. #### type [PrivateEndpointConnectionsAdtAPIClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L33) [¶](#PrivateEndpointConnectionsAdtAPIClientGetResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsAdtAPIClientGetResponse struct { [PrivateEndpointConnection](#PrivateEndpointConnection) } ``` PrivateEndpointConnectionsAdtAPIClientGetResponse contains the response from method PrivateEndpointConnectionsAdtAPIClient.Get. #### type [PrivateEndpointConnectionsAdtAPIClientListByServiceOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L167) [¶](#PrivateEndpointConnectionsAdtAPIClientListByServiceOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsAdtAPIClientListByServiceOptions struct { } ``` PrivateEndpointConnectionsAdtAPIClientListByServiceOptions contains the optional parameters for the PrivateEndpointConnectionsAdtAPIClient.NewListByServicePager method. 
#### type [PrivateEndpointConnectionsAdtAPIClientListByServiceResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L38) [¶](#PrivateEndpointConnectionsAdtAPIClientListByServiceResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsAdtAPIClientListByServiceResponse struct { [PrivateEndpointConnectionListResult](#PrivateEndpointConnectionListResult) } ``` PrivateEndpointConnectionsAdtAPIClientListByServiceResponse contains the response from method PrivateEndpointConnectionsAdtAPIClient.NewListByServicePager. #### type [PrivateEndpointConnectionsCompClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionscomp_client.go#L26) [¶](#PrivateEndpointConnectionsCompClient) ``` type PrivateEndpointConnectionsCompClient struct { // contains filtered or unexported fields } ``` PrivateEndpointConnectionsCompClient contains the methods for the PrivateEndpointConnectionsComp group. Don't use this type directly, use NewPrivateEndpointConnectionsCompClient() instead. 
#### func [NewPrivateEndpointConnectionsCompClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionscomp_client.go#L35) [¶](#NewPrivateEndpointConnectionsCompClient) ``` func NewPrivateEndpointConnectionsCompClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[PrivateEndpointConnectionsCompClient](#PrivateEndpointConnectionsCompClient), [error](/builtin#error)) ``` NewPrivateEndpointConnectionsCompClient creates a new instance of PrivateEndpointConnectionsCompClient with the specified values. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. 
#### func (*PrivateEndpointConnectionsCompClient) [BeginCreateOrUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionscomp_client.go#L57) [¶](#PrivateEndpointConnectionsCompClient.BeginCreateOrUpdate) ``` func (client *[PrivateEndpointConnectionsCompClient](#PrivateEndpointConnectionsCompClient)) BeginCreateOrUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateEndpointConnectionName [string](/builtin#string), properties [PrivateEndpointConnection](#PrivateEndpointConnection), options *[PrivateEndpointConnectionsCompClientBeginCreateOrUpdateOptions](#PrivateEndpointConnectionsCompClientBeginCreateOrUpdateOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateEndpointConnectionsCompClientCreateOrUpdateResponse](#PrivateEndpointConnectionsCompClientCreateOrUpdateResponse)], [error](/builtin#error)) ``` BeginCreateOrUpdate - Update the state of the specified private endpoint connection associated with the service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateEndpointConnectionName - The name of the private endpoint connection associated with the Azure resource * properties - The private endpoint connection properties. * options - PrivateEndpointConnectionsCompClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateEndpointConnectionsCompClient.BeginCreateOrUpdate method. 
Example [¶](#example-PrivateEndpointConnectionsCompClient.BeginCreateOrUpdate) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ComplianceCenterServiceCreatePrivateEndpointConnection.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateEndpointConnectionsCompClient().BeginCreateOrUpdate(ctx, "rgname", "service1", "myConnection", armm365securityandcompliance.PrivateEndpointConnection{ Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ Description: to.Ptr("Auto-Approved"), Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), }, }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateEndpointConnection = armm365securityandcompliance.PrivateEndpointConnection{ // Name: to.Ptr("myConnection"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/privateEndpointConnections"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/service1/privateEndpointConnections/myConnection"), // Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ // PrivateEndpoint: &armm365securityandcompliance.PrivateEndpoint{ // ID: to.Ptr("/subscriptions/c80fb759-c965-4c6a-9110-9b2b2d038882/resourceGroups/myResourceGroup/providers/Microsoft.Network/privateEndpoints/peexample01"), // }, // PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ // Description: to.Ptr("Auto-Approved"), // ActionsRequired: to.Ptr("None"), // Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.PrivateEndpointConnectionProvisioningStateSucceeded), // }, // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // } } ``` ``` Output: ``` #### func (*PrivateEndpointConnectionsCompClient)
[BeginDelete](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionscomp_client.go#L129) [¶](#PrivateEndpointConnectionsCompClient.BeginDelete) ``` func (client *[PrivateEndpointConnectionsCompClient](#PrivateEndpointConnectionsCompClient)) BeginDelete(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateEndpointConnectionName [string](/builtin#string), options *[PrivateEndpointConnectionsCompClientBeginDeleteOptions](#PrivateEndpointConnectionsCompClientBeginDeleteOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateEndpointConnectionsCompClientDeleteResponse](#PrivateEndpointConnectionsCompClientDeleteResponse)], [error](/builtin#error)) ``` BeginDelete - Deletes a private endpoint connection. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateEndpointConnectionName - The name of the private endpoint connection associated with the Azure resource * options - PrivateEndpointConnectionsCompClientBeginDeleteOptions contains the optional parameters for the PrivateEndpointConnectionsCompClient.BeginDelete method. 
Example [¶](#example-PrivateEndpointConnectionsCompClient.BeginDelete) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ComplianceCenterServiceDeletePrivateEndpointConnection.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateEndpointConnectionsCompClient().BeginDelete(ctx, "rgname", "service1", "myConnection", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } _, err = poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } } ``` ``` Output: ``` #### func (*PrivateEndpointConnectionsCompClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionscomp_client.go#L201) [¶](#PrivateEndpointConnectionsCompClient.Get) ``` func (client *[PrivateEndpointConnectionsCompClient](#PrivateEndpointConnectionsCompClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateEndpointConnectionName [string](/builtin#string), options
*[PrivateEndpointConnectionsCompClientGetOptions](#PrivateEndpointConnectionsCompClientGetOptions)) ([PrivateEndpointConnectionsCompClientGetResponse](#PrivateEndpointConnectionsCompClientGetResponse), [error](/builtin#error)) ``` Get - Gets the specified private endpoint connection associated with the service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateEndpointConnectionName - The name of the private endpoint connection associated with the Azure resource * options - PrivateEndpointConnectionsCompClientGetOptions contains the optional parameters for the PrivateEndpointConnectionsCompClient.Get method. Example [¶](#example-PrivateEndpointConnectionsCompClient.Get) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ComplianceCenterServiceGetPrivateEndpointConnection.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateEndpointConnectionsCompClient().Get(ctx, "rgname", "service1", "myConnection", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. 
We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. // res.PrivateEndpointConnection = armm365securityandcompliance.PrivateEndpointConnection{ // Name: to.Ptr("myConnection"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/privateEndpointConnections"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/service1/privateEndpointConnections/myConnection"), // Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ // PrivateEndpoint: &armm365securityandcompliance.PrivateEndpoint{ // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Network/privateEndpoints/peexample01"), // }, // PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ // Description: to.Ptr("Auto-Approved"), // ActionsRequired: to.Ptr("None"), // Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), // }, // }, // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // } } ``` ``` Output: ``` #### func (*PrivateEndpointConnectionsCompClient)
[NewListByServicePager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionscomp_client.go#L262) [¶](#PrivateEndpointConnectionsCompClient.NewListByServicePager) added in v0.4.0 ``` func (client *[PrivateEndpointConnectionsCompClient](#PrivateEndpointConnectionsCompClient)) NewListByServicePager(resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateEndpointConnectionsCompClientListByServiceOptions](#PrivateEndpointConnectionsCompClientListByServiceOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[PrivateEndpointConnectionsCompClientListByServiceResponse](#PrivateEndpointConnectionsCompClientListByServiceResponse)] ``` NewListByServicePager - Lists all private endpoint connections for a service. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * options - PrivateEndpointConnectionsCompClientListByServiceOptions contains the optional parameters for the PrivateEndpointConnectionsCompClient.NewListByServicePager method. 
Example [¶](#example-PrivateEndpointConnectionsCompClient.NewListByServicePager) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ComplianceCenterServiceListPrivateEndpointConnections.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewPrivateEndpointConnectionsCompClient().NewListByServicePager("rgname", "service1", nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// page.PrivateEndpointConnectionListResult = armm365securityandcompliance.PrivateEndpointConnectionListResult{ // Value: []*armm365securityandcompliance.PrivateEndpointConnection{ // { // Name: to.Ptr("myConnection"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/privateEndpointConnections"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/service1/privateEndpointConnections/myConnection"), // Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ // PrivateEndpoint: &armm365securityandcompliance.PrivateEndpoint{ // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Network/privateEndpoints/peexample01"), // }, // PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ // Description: to.Ptr("Auto-Approved"), // ActionsRequired: to.Ptr("None"), // Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), // }, // }, // }}, // } } } ``` ``` Output: ``` #### type [PrivateEndpointConnectionsCompClientBeginCreateOrUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L173) [¶](#PrivateEndpointConnectionsCompClientBeginCreateOrUpdateOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsCompClientBeginCreateOrUpdateOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateEndpointConnectionsCompClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateEndpointConnectionsCompClient.BeginCreateOrUpdate method. 
#### type [PrivateEndpointConnectionsCompClientBeginDeleteOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L180) [¶](#PrivateEndpointConnectionsCompClientBeginDeleteOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsCompClientBeginDeleteOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateEndpointConnectionsCompClientBeginDeleteOptions contains the optional parameters for the PrivateEndpointConnectionsCompClient.BeginDelete method. #### type [PrivateEndpointConnectionsCompClientCreateOrUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L43) [¶](#PrivateEndpointConnectionsCompClientCreateOrUpdateResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsCompClientCreateOrUpdateResponse struct { [PrivateEndpointConnection](#PrivateEndpointConnection) } ``` PrivateEndpointConnectionsCompClientCreateOrUpdateResponse contains the response from method PrivateEndpointConnectionsCompClient.BeginCreateOrUpdate. #### type [PrivateEndpointConnectionsCompClientDeleteResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L48) [¶](#PrivateEndpointConnectionsCompClientDeleteResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsCompClientDeleteResponse struct { } ``` PrivateEndpointConnectionsCompClientDeleteResponse contains the response from method PrivateEndpointConnectionsCompClient.BeginDelete. 
#### type [PrivateEndpointConnectionsCompClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L187) [¶](#PrivateEndpointConnectionsCompClientGetOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsCompClientGetOptions struct { } ``` PrivateEndpointConnectionsCompClientGetOptions contains the optional parameters for the PrivateEndpointConnectionsCompClient.Get method. #### type [PrivateEndpointConnectionsCompClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L53) [¶](#PrivateEndpointConnectionsCompClientGetResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsCompClientGetResponse struct { [PrivateEndpointConnection](#PrivateEndpointConnection) } ``` PrivateEndpointConnectionsCompClientGetResponse contains the response from method PrivateEndpointConnectionsCompClient.Get. #### type [PrivateEndpointConnectionsCompClientListByServiceOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L193) [¶](#PrivateEndpointConnectionsCompClientListByServiceOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsCompClientListByServiceOptions struct { } ``` PrivateEndpointConnectionsCompClientListByServiceOptions contains the optional parameters for the PrivateEndpointConnectionsCompClient.NewListByServicePager method. 
#### type [PrivateEndpointConnectionsCompClientListByServiceResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L58) [¶](#PrivateEndpointConnectionsCompClientListByServiceResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsCompClientListByServiceResponse struct { [PrivateEndpointConnectionListResult](#PrivateEndpointConnectionListResult) } ``` PrivateEndpointConnectionsCompClientListByServiceResponse contains the response from method PrivateEndpointConnectionsCompClient.NewListByServicePager. #### type [PrivateEndpointConnectionsForEDMClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsforedm_client.go#L26) [¶](#PrivateEndpointConnectionsForEDMClient) ``` type PrivateEndpointConnectionsForEDMClient struct { // contains filtered or unexported fields } ``` PrivateEndpointConnectionsForEDMClient contains the methods for the PrivateEndpointConnectionsForEDM group. Don't use this type directly, use NewPrivateEndpointConnectionsForEDMClient() instead. 
#### func [NewPrivateEndpointConnectionsForEDMClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsforedm_client.go#L35) [¶](#NewPrivateEndpointConnectionsForEDMClient) ``` func NewPrivateEndpointConnectionsForEDMClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[PrivateEndpointConnectionsForEDMClient](#PrivateEndpointConnectionsForEDMClient), [error](/builtin#error)) ``` NewPrivateEndpointConnectionsForEDMClient creates a new instance of PrivateEndpointConnectionsForEDMClient with the specified values. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. 
#### func (*PrivateEndpointConnectionsForEDMClient) [BeginCreateOrUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsforedm_client.go#L57) [¶](#PrivateEndpointConnectionsForEDMClient.BeginCreateOrUpdate) ``` func (client *[PrivateEndpointConnectionsForEDMClient](#PrivateEndpointConnectionsForEDMClient)) BeginCreateOrUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateEndpointConnectionName [string](/builtin#string), properties [PrivateEndpointConnection](#PrivateEndpointConnection), options *[PrivateEndpointConnectionsForEDMClientBeginCreateOrUpdateOptions](#PrivateEndpointConnectionsForEDMClientBeginCreateOrUpdateOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateEndpointConnectionsForEDMClientCreateOrUpdateResponse](#PrivateEndpointConnectionsForEDMClientCreateOrUpdateResponse)], [error](/builtin#error)) ``` BeginCreateOrUpdate - Update the state of the specified private endpoint connection associated with the service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateEndpointConnectionName - The name of the private endpoint connection associated with the Azure resource * properties - The private endpoint connection properties. * options - PrivateEndpointConnectionsForEDMClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateEndpointConnectionsForEDMClient.BeginCreateOrUpdate method. 
Example [¶](#example-PrivateEndpointConnectionsForEDMClient.BeginCreateOrUpdate) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/EdmUploadServiceCreatePrivateEndpointConnection.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateEndpointConnectionsForEDMClient().BeginCreateOrUpdate(ctx, "rgname", "service1", "myConnection", armm365securityandcompliance.PrivateEndpointConnection{ Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ Description: to.Ptr("Auto-Approved"), Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), }, }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateEndpointConnection = armm365securityandcompliance.PrivateEndpointConnection{ // Name: to.Ptr("myConnection"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/privateEndpointConnections"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/service1/privateEndpointConnections/myConnection"), // Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ // PrivateEndpoint: &armm365securityandcompliance.PrivateEndpoint{ // ID: to.Ptr("/subscriptions/c80fb759-c965-4c6a-9110-9b2b2d038882/resourceGroups/myResourceGroup/providers/Microsoft.Network/privateEndpoints/peexample01"), // }, // PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ // Description: to.Ptr("Auto-Approved"), // ActionsRequired: to.Ptr("None"), // Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.PrivateEndpointConnectionProvisioningStateSucceeded), // }, // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // } } ``` ``` Output: ``` #### func (*PrivateEndpointConnectionsForEDMClient) 
[BeginDelete](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsforedm_client.go#L129) [¶](#PrivateEndpointConnectionsForEDMClient.BeginDelete) ``` func (client *[PrivateEndpointConnectionsForEDMClient](#PrivateEndpointConnectionsForEDMClient)) BeginDelete(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateEndpointConnectionName [string](/builtin#string), options *[PrivateEndpointConnectionsForEDMClientBeginDeleteOptions](#PrivateEndpointConnectionsForEDMClientBeginDeleteOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateEndpointConnectionsForEDMClientDeleteResponse](#PrivateEndpointConnectionsForEDMClientDeleteResponse)], [error](/builtin#error)) ``` BeginDelete - Deletes a private endpoint connection. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateEndpointConnectionName - The name of the private endpoint connection associated with the Azure resource * options - PrivateEndpointConnectionsForEDMClientBeginDeleteOptions contains the optional parameters for the PrivateEndpointConnectionsForEDMClient.BeginDelete method. 
Example [¶](#example-PrivateEndpointConnectionsForEDMClient.BeginDelete) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/EdmUploadServiceDeletePrivateEndpointConnection.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateEndpointConnectionsForEDMClient().BeginDelete(ctx, "rgname", "service1", "myConnection", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } _, err = poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } } ``` ``` Output: ``` #### func (*PrivateEndpointConnectionsForEDMClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsforedm_client.go#L201) [¶](#PrivateEndpointConnectionsForEDMClient.Get) ``` func (client *[PrivateEndpointConnectionsForEDMClient](#PrivateEndpointConnectionsForEDMClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateEndpointConnectionName [string](/builtin#string), options 
*[PrivateEndpointConnectionsForEDMClientGetOptions](#PrivateEndpointConnectionsForEDMClientGetOptions)) ([PrivateEndpointConnectionsForEDMClientGetResponse](#PrivateEndpointConnectionsForEDMClientGetResponse), [error](/builtin#error)) ``` Get - Gets the specified private endpoint connection associated with the service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateEndpointConnectionName - The name of the private endpoint connection associated with the Azure resource * options - PrivateEndpointConnectionsForEDMClientGetOptions contains the optional parameters for the PrivateEndpointConnectionsForEDMClient.Get method. Example [¶](#example-PrivateEndpointConnectionsForEDMClient.Get) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/EdmUploadServiceGetPrivateEndpointConnection.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateEndpointConnectionsForEDMClient().Get(ctx, "rgname", "service1", "myConnection", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. 
// We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. // res.PrivateEndpointConnection = armm365securityandcompliance.PrivateEndpointConnection{ // Name: to.Ptr("myConnection"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/privateEndpointConnections"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/service1/privateEndpointConnections/myConnection"), // Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ // PrivateEndpoint: &armm365securityandcompliance.PrivateEndpoint{ // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Network/privateEndpoints/peexample01"), // }, // PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ // Description: to.Ptr("Auto-Approved"), // ActionsRequired: to.Ptr("None"), // Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), // }, // }, // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // } } ``` ``` Output: ``` #### func (*PrivateEndpointConnectionsForEDMClient) 
[NewListByServicePager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsforedm_client.go#L262) [¶](#PrivateEndpointConnectionsForEDMClient.NewListByServicePager) added in v0.4.0 ``` func (client *[PrivateEndpointConnectionsForEDMClient](#PrivateEndpointConnectionsForEDMClient)) NewListByServicePager(resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateEndpointConnectionsForEDMClientListByServiceOptions](#PrivateEndpointConnectionsForEDMClientListByServiceOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[PrivateEndpointConnectionsForEDMClientListByServiceResponse](#PrivateEndpointConnectionsForEDMClientListByServiceResponse)] ``` NewListByServicePager - Lists all private endpoint connections for a service. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * options - PrivateEndpointConnectionsForEDMClientListByServiceOptions contains the optional parameters for the PrivateEndpointConnectionsForEDMClient.NewListByServicePager method. 
Example [¶](#example-PrivateEndpointConnectionsForEDMClient.NewListByServicePager) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/EdmUploadServiceListPrivateEndpointConnections.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewPrivateEndpointConnectionsForEDMClient().NewListByServicePager("rgname", "service1", nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// page.PrivateEndpointConnectionListResult = armm365securityandcompliance.PrivateEndpointConnectionListResult{ // Value: []*armm365securityandcompliance.PrivateEndpointConnection{ // { // Name: to.Ptr("myConnection"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/privateEndpointConnections"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/service1/privateEndpointConnections/myConnection"), // Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ // PrivateEndpoint: &armm365securityandcompliance.PrivateEndpoint{ // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Network/privateEndpoints/peexample01"), // }, // PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ // Description: to.Ptr("Auto-Approved"), // ActionsRequired: to.Ptr("None"), // Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), // }, // }, // }}, // } } } ``` ``` Output: ``` #### type [PrivateEndpointConnectionsForEDMClientBeginCreateOrUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L199) [¶](#PrivateEndpointConnectionsForEDMClientBeginCreateOrUpdateOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsForEDMClientBeginCreateOrUpdateOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateEndpointConnectionsForEDMClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateEndpointConnectionsForEDMClient.BeginCreateOrUpdate method. 
#### type [PrivateEndpointConnectionsForEDMClientBeginDeleteOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L206) [¶](#PrivateEndpointConnectionsForEDMClientBeginDeleteOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsForEDMClientBeginDeleteOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateEndpointConnectionsForEDMClientBeginDeleteOptions contains the optional parameters for the PrivateEndpointConnectionsForEDMClient.BeginDelete method. #### type [PrivateEndpointConnectionsForEDMClientCreateOrUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L63) [¶](#PrivateEndpointConnectionsForEDMClientCreateOrUpdateResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsForEDMClientCreateOrUpdateResponse struct { [PrivateEndpointConnection](#PrivateEndpointConnection) } ``` PrivateEndpointConnectionsForEDMClientCreateOrUpdateResponse contains the response from method PrivateEndpointConnectionsForEDMClient.BeginCreateOrUpdate. #### type [PrivateEndpointConnectionsForEDMClientDeleteResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L68) [¶](#PrivateEndpointConnectionsForEDMClientDeleteResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsForEDMClientDeleteResponse struct { } ``` PrivateEndpointConnectionsForEDMClientDeleteResponse contains the response from method PrivateEndpointConnectionsForEDMClient.BeginDelete. 
#### type [PrivateEndpointConnectionsForEDMClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L213) [¶](#PrivateEndpointConnectionsForEDMClientGetOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsForEDMClientGetOptions struct { } ``` PrivateEndpointConnectionsForEDMClientGetOptions contains the optional parameters for the PrivateEndpointConnectionsForEDMClient.Get method. #### type [PrivateEndpointConnectionsForEDMClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L73) [¶](#PrivateEndpointConnectionsForEDMClientGetResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsForEDMClientGetResponse struct { [PrivateEndpointConnection](#PrivateEndpointConnection) } ``` PrivateEndpointConnectionsForEDMClientGetResponse contains the response from method PrivateEndpointConnectionsForEDMClient.Get. #### type [PrivateEndpointConnectionsForEDMClientListByServiceOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L219) [¶](#PrivateEndpointConnectionsForEDMClientListByServiceOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsForEDMClientListByServiceOptions struct { } ``` PrivateEndpointConnectionsForEDMClientListByServiceOptions contains the optional parameters for the PrivateEndpointConnectionsForEDMClient.NewListByServicePager method. 
#### type [PrivateEndpointConnectionsForEDMClientListByServiceResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L78) [¶](#PrivateEndpointConnectionsForEDMClientListByServiceResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsForEDMClientListByServiceResponse struct { [PrivateEndpointConnectionListResult](#PrivateEndpointConnectionListResult) } ``` PrivateEndpointConnectionsForEDMClientListByServiceResponse contains the response from method PrivateEndpointConnectionsForEDMClient.NewListByServicePager. #### type [PrivateEndpointConnectionsForMIPPolicySyncClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsformippolicysync_client.go#L26) [¶](#PrivateEndpointConnectionsForMIPPolicySyncClient) ``` type PrivateEndpointConnectionsForMIPPolicySyncClient struct { // contains filtered or unexported fields } ``` PrivateEndpointConnectionsForMIPPolicySyncClient contains the methods for the PrivateEndpointConnectionsForMIPPolicySync group. Don't use this type directly, use NewPrivateEndpointConnectionsForMIPPolicySyncClient() instead. 
#### func [NewPrivateEndpointConnectionsForMIPPolicySyncClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsformippolicysync_client.go#L35) [¶](#NewPrivateEndpointConnectionsForMIPPolicySyncClient) ``` func NewPrivateEndpointConnectionsForMIPPolicySyncClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[PrivateEndpointConnectionsForMIPPolicySyncClient](#PrivateEndpointConnectionsForMIPPolicySyncClient), [error](/builtin#error)) ``` NewPrivateEndpointConnectionsForMIPPolicySyncClient creates a new instance of PrivateEndpointConnectionsForMIPPolicySyncClient with the specified values. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. 
#### func (*PrivateEndpointConnectionsForMIPPolicySyncClient) [BeginCreateOrUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsformippolicysync_client.go#L57) [¶](#PrivateEndpointConnectionsForMIPPolicySyncClient.BeginCreateOrUpdate) ``` func (client *[PrivateEndpointConnectionsForMIPPolicySyncClient](#PrivateEndpointConnectionsForMIPPolicySyncClient)) BeginCreateOrUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateEndpointConnectionName [string](/builtin#string), properties [PrivateEndpointConnection](#PrivateEndpointConnection), options *[PrivateEndpointConnectionsForMIPPolicySyncClientBeginCreateOrUpdateOptions](#PrivateEndpointConnectionsForMIPPolicySyncClientBeginCreateOrUpdateOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateEndpointConnectionsForMIPPolicySyncClientCreateOrUpdateResponse](#PrivateEndpointConnectionsForMIPPolicySyncClientCreateOrUpdateResponse)], [error](/builtin#error)) ``` BeginCreateOrUpdate - Update the state of the specified private endpoint connection associated with the service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateEndpointConnectionName - The name of the private endpoint connection associated with the Azure resource * properties - The private endpoint connection properties. 
* options - PrivateEndpointConnectionsForMIPPolicySyncClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateEndpointConnectionsForMIPPolicySyncClient.BeginCreateOrUpdate method. Example [¶](#example-PrivateEndpointConnectionsForMIPPolicySyncClient.BeginCreateOrUpdate) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/MIPPolicySyncServiceCreatePrivateEndpointConnection.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateEndpointConnectionsForMIPPolicySyncClient().BeginCreateOrUpdate(ctx, "rgname", "service1", "myConnection", armm365securityandcompliance.PrivateEndpointConnection{ Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ Description: to.Ptr("Auto-Approved"), Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), }, }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. 
_ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. // res.PrivateEndpointConnection = armm365securityandcompliance.PrivateEndpointConnection{ // Name: to.Ptr("myConnection"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/privateEndpointConnections"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/service1/privateEndpointConnections/myConnection"), // Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ // PrivateEndpoint: &armm365securityandcompliance.PrivateEndpoint{ // ID: to.Ptr("/subscriptions/<KEY>/resourceGroups/myResourceGroup/providers/Microsoft.Network/privateEndpoints/peexample01"), // }, // PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ // Description: to.Ptr("Auto-Approved"), // ActionsRequired: to.Ptr("None"), // Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.PrivateEndpointConnectionProvisioningStateSucceeded), // }, // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("fangsu"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("fangsu"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // } } ``` ``` Output: ``` #### func (*PrivateEndpointConnectionsForMIPPolicySyncClient) 
[BeginDelete](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsformippolicysync_client.go#L129) [¶](#PrivateEndpointConnectionsForMIPPolicySyncClient.BeginDelete) ``` func (client *[PrivateEndpointConnectionsForMIPPolicySyncClient](#PrivateEndpointConnectionsForMIPPolicySyncClient)) BeginDelete(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateEndpointConnectionName [string](/builtin#string), options *[PrivateEndpointConnectionsForMIPPolicySyncClientBeginDeleteOptions](#PrivateEndpointConnectionsForMIPPolicySyncClientBeginDeleteOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateEndpointConnectionsForMIPPolicySyncClientDeleteResponse](#PrivateEndpointConnectionsForMIPPolicySyncClientDeleteResponse)], [error](/builtin#error)) ``` BeginDelete - Deletes a private endpoint connection. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateEndpointConnectionName - The name of the private endpoint connection associated with the Azure resource * options - PrivateEndpointConnectionsForMIPPolicySyncClientBeginDeleteOptions contains the optional parameters for the PrivateEndpointConnectionsForMIPPolicySyncClient.BeginDelete method. 
Example [¶](#example-PrivateEndpointConnectionsForMIPPolicySyncClient.BeginDelete) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/MIPPolicySyncServiceDeletePrivateEndpointConnection.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateEndpointConnectionsForMIPPolicySyncClient().BeginDelete(ctx, "rgname", "service1", "myConnection", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } _, err = poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } } ``` ``` Output: ``` #### func (*PrivateEndpointConnectionsForMIPPolicySyncClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsformippolicysync_client.go#L201) [¶](#PrivateEndpointConnectionsForMIPPolicySyncClient.Get) ``` func (client *[PrivateEndpointConnectionsForMIPPolicySyncClient](#PrivateEndpointConnectionsForMIPPolicySyncClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateEndpointConnectionName 
[string](/builtin#string), options *[PrivateEndpointConnectionsForMIPPolicySyncClientGetOptions](#PrivateEndpointConnectionsForMIPPolicySyncClientGetOptions)) ([PrivateEndpointConnectionsForMIPPolicySyncClientGetResponse](#PrivateEndpointConnectionsForMIPPolicySyncClientGetResponse), [error](/builtin#error)) ``` Get - Gets the specified private endpoint connection associated with the service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateEndpointConnectionName - The name of the private endpoint connection associated with the Azure resource * options - PrivateEndpointConnectionsForMIPPolicySyncClientGetOptions contains the optional parameters for the PrivateEndpointConnectionsForMIPPolicySyncClient.Get method. Example [¶](#example-PrivateEndpointConnectionsForMIPPolicySyncClient.Get) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/MIPPolicySyncServiceGetPrivateEndpointConnection.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateEndpointConnectionsForMIPPolicySyncClient().Get(ctx, "rgname", "service1", "myConnection", nil) if err 
!= nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. // res.PrivateEndpointConnection = armm365securityandcompliance.PrivateEndpointConnection{ // Name: to.Ptr("myConnection"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/privateEndpointConnections"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/service1/privateEndpointConnections/myConnection"), // Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ // PrivateEndpoint: &armm365securityandcompliance.PrivateEndpoint{ // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Network/privateEndpoints/peexample01"), // }, // PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ // Description: to.Ptr("Auto-Approved"), // ActionsRequired: to.Ptr("None"), // Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), // }, // }, // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("fangsu"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("fangsu"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // } } ``` ``` Output: ``` #### func (*PrivateEndpointConnectionsForMIPPolicySyncClient) 
[NewListByServicePager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsformippolicysync_client.go#L262) [¶](#PrivateEndpointConnectionsForMIPPolicySyncClient.NewListByServicePager) added in v0.4.0 ``` func (client *[PrivateEndpointConnectionsForMIPPolicySyncClient](#PrivateEndpointConnectionsForMIPPolicySyncClient)) NewListByServicePager(resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateEndpointConnectionsForMIPPolicySyncClientListByServiceOptions](#PrivateEndpointConnectionsForMIPPolicySyncClientListByServiceOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[PrivateEndpointConnectionsForMIPPolicySyncClientListByServiceResponse](#PrivateEndpointConnectionsForMIPPolicySyncClientListByServiceResponse)] ``` NewListByServicePager - Lists all private endpoint connections for a service. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * options - PrivateEndpointConnectionsForMIPPolicySyncClientListByServiceOptions contains the optional parameters for the PrivateEndpointConnectionsForMIPPolicySyncClient.NewListByServicePager method. 
Example [¶](#example-PrivateEndpointConnectionsForMIPPolicySyncClient.NewListByServicePager) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/MIPPolicySyncServiceListPrivateEndpointConnections.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewPrivateEndpointConnectionsForMIPPolicySyncClient().NewListByServicePager("rgname", "service1", nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// page.PrivateEndpointConnectionListResult = armm365securityandcompliance.PrivateEndpointConnectionListResult{ // Value: []*armm365securityandcompliance.PrivateEndpointConnection{ // { // Name: to.Ptr("myConnection"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/privateEndpointConnections"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/service1/privateEndpointConnections/myConnection"), // Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ // PrivateEndpoint: &armm365securityandcompliance.PrivateEndpoint{ // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Network/privateEndpoints/peexample01"), // }, // PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ // Description: to.Ptr("Auto-Approved"), // ActionsRequired: to.Ptr("None"), // Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), // }, // }, // }}, // } } } ``` ``` Output: ``` #### type [PrivateEndpointConnectionsForMIPPolicySyncClientBeginCreateOrUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L225) [¶](#PrivateEndpointConnectionsForMIPPolicySyncClientBeginCreateOrUpdateOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsForMIPPolicySyncClientBeginCreateOrUpdateOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateEndpointConnectionsForMIPPolicySyncClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateEndpointConnectionsForMIPPolicySyncClient.BeginCreateOrUpdate method. 
#### type [PrivateEndpointConnectionsForMIPPolicySyncClientBeginDeleteOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L232) [¶](#PrivateEndpointConnectionsForMIPPolicySyncClientBeginDeleteOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsForMIPPolicySyncClientBeginDeleteOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateEndpointConnectionsForMIPPolicySyncClientBeginDeleteOptions contains the optional parameters for the PrivateEndpointConnectionsForMIPPolicySyncClient.BeginDelete method. #### type [PrivateEndpointConnectionsForMIPPolicySyncClientCreateOrUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L83) [¶](#PrivateEndpointConnectionsForMIPPolicySyncClientCreateOrUpdateResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsForMIPPolicySyncClientCreateOrUpdateResponse struct { [PrivateEndpointConnection](#PrivateEndpointConnection) } ``` PrivateEndpointConnectionsForMIPPolicySyncClientCreateOrUpdateResponse contains the response from method PrivateEndpointConnectionsForMIPPolicySyncClient.BeginCreateOrUpdate. 
#### type [PrivateEndpointConnectionsForMIPPolicySyncClientDeleteResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L88) [¶](#PrivateEndpointConnectionsForMIPPolicySyncClientDeleteResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsForMIPPolicySyncClientDeleteResponse struct { } ``` PrivateEndpointConnectionsForMIPPolicySyncClientDeleteResponse contains the response from method PrivateEndpointConnectionsForMIPPolicySyncClient.BeginDelete. #### type [PrivateEndpointConnectionsForMIPPolicySyncClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L239) [¶](#PrivateEndpointConnectionsForMIPPolicySyncClientGetOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsForMIPPolicySyncClientGetOptions struct { } ``` PrivateEndpointConnectionsForMIPPolicySyncClientGetOptions contains the optional parameters for the PrivateEndpointConnectionsForMIPPolicySyncClient.Get method. #### type [PrivateEndpointConnectionsForMIPPolicySyncClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L93) [¶](#PrivateEndpointConnectionsForMIPPolicySyncClientGetResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsForMIPPolicySyncClientGetResponse struct { [PrivateEndpointConnection](#PrivateEndpointConnection) } ``` PrivateEndpointConnectionsForMIPPolicySyncClientGetResponse contains the response from method PrivateEndpointConnectionsForMIPPolicySyncClient.Get. 
#### type [PrivateEndpointConnectionsForMIPPolicySyncClientListByServiceOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L245) [¶](#PrivateEndpointConnectionsForMIPPolicySyncClientListByServiceOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsForMIPPolicySyncClientListByServiceOptions struct { } ``` PrivateEndpointConnectionsForMIPPolicySyncClientListByServiceOptions contains the optional parameters for the PrivateEndpointConnectionsForMIPPolicySyncClient.NewListByServicePager method. #### type [PrivateEndpointConnectionsForMIPPolicySyncClientListByServiceResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L98) [¶](#PrivateEndpointConnectionsForMIPPolicySyncClientListByServiceResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsForMIPPolicySyncClientListByServiceResponse struct { [PrivateEndpointConnectionListResult](#PrivateEndpointConnectionListResult) } ``` PrivateEndpointConnectionsForMIPPolicySyncClientListByServiceResponse contains the response from method PrivateEndpointConnectionsForMIPPolicySyncClient.NewListByServicePager. 
#### type [PrivateEndpointConnectionsForSCCPowershellClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsforsccpowershell_client.go#L26) [¶](#PrivateEndpointConnectionsForSCCPowershellClient) ``` type PrivateEndpointConnectionsForSCCPowershellClient struct { // contains filtered or unexported fields } ``` PrivateEndpointConnectionsForSCCPowershellClient contains the methods for the PrivateEndpointConnectionsForSCCPowershell group. Don't use this type directly, use NewPrivateEndpointConnectionsForSCCPowershellClient() instead. #### func [NewPrivateEndpointConnectionsForSCCPowershellClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsforsccpowershell_client.go#L35) [¶](#NewPrivateEndpointConnectionsForSCCPowershellClient) ``` func NewPrivateEndpointConnectionsForSCCPowershellClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[PrivateEndpointConnectionsForSCCPowershellClient](#PrivateEndpointConnectionsForSCCPowershellClient), [error](/builtin#error)) ``` NewPrivateEndpointConnectionsForSCCPowershellClient creates a new instance of PrivateEndpointConnectionsForSCCPowershellClient with the specified values. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. 
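The generated examples above construct clients through `NewClientFactory`, but `NewPrivateEndpointConnectionsForSCCPowershellClient` can also be called directly, per the signature shown here. A minimal construction sketch (no network call is made; `<subscription-id>` is a placeholder):

```go
package main

import (
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	// Pass nil options to accept the defaults (cloud configuration, retry policy, etc.).
	client, err := armm365securityandcompliance.NewPrivateEndpointConnectionsForSCCPowershellClient("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	_ = client // call Get, BeginCreateOrUpdate, BeginDelete, or NewListByServicePager on it
}
```

Direct construction is convenient when only one client from the package is needed; the factory is preferable when several clients share the same subscription ID and options.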
#### func (*PrivateEndpointConnectionsForSCCPowershellClient) [BeginCreateOrUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsforsccpowershell_client.go#L57) [¶](#PrivateEndpointConnectionsForSCCPowershellClient.BeginCreateOrUpdate) ``` func (client *[PrivateEndpointConnectionsForSCCPowershellClient](#PrivateEndpointConnectionsForSCCPowershellClient)) BeginCreateOrUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateEndpointConnectionName [string](/builtin#string), properties [PrivateEndpointConnection](#PrivateEndpointConnection), options *[PrivateEndpointConnectionsForSCCPowershellClientBeginCreateOrUpdateOptions](#PrivateEndpointConnectionsForSCCPowershellClientBeginCreateOrUpdateOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateEndpointConnectionsForSCCPowershellClientCreateOrUpdateResponse](#PrivateEndpointConnectionsForSCCPowershellClientCreateOrUpdateResponse)], [error](/builtin#error)) ``` BeginCreateOrUpdate - Update the state of the specified private endpoint connection associated with the service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateEndpointConnectionName - The name of the private endpoint connection associated with the Azure resource * properties - The private endpoint connection properties. 
* options - PrivateEndpointConnectionsForSCCPowershellClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateEndpointConnectionsForSCCPowershellClient.BeginCreateOrUpdate method. Example [¶](#example-PrivateEndpointConnectionsForSCCPowershellClient.BeginCreateOrUpdate) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SCCPowershellServiceCreatePrivateEndpointConnection.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateEndpointConnectionsForSCCPowershellClient().BeginCreateOrUpdate(ctx, "rgname", "service1", "myConnection", armm365securityandcompliance.PrivateEndpointConnection{ Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ Description: to.Ptr("Auto-Approved"), Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), }, }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. 
_ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. // res.PrivateEndpointConnection = armm365securityandcompliance.PrivateEndpointConnection{ // Name: to.Ptr("myConnection"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/privateEndpointConnections"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/service1/privateEndpointConnections/myConnection"), // Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ // PrivateEndpoint: &armm365securityandcompliance.PrivateEndpoint{ // ID: to.Ptr("/subscriptions/c80fb759-c965-4c6a-9110-9b2b2d038882/resourceGroups/myResourceGroup/providers/Microsoft.Network/privateEndpoints/peexample01"), // }, // PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ // Description: to.Ptr("Auto-Approved"), // ActionsRequired: to.Ptr("None"), // Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.PrivateEndpointConnectionProvisioningStateSucceeded), // }, // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // } } ``` ``` Output: ``` #### func (*PrivateEndpointConnectionsForSCCPowershellClient) 
[BeginDelete](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsforsccpowershell_client.go#L129) [¶](#PrivateEndpointConnectionsForSCCPowershellClient.BeginDelete) ``` func (client *[PrivateEndpointConnectionsForSCCPowershellClient](#PrivateEndpointConnectionsForSCCPowershellClient)) BeginDelete(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateEndpointConnectionName [string](/builtin#string), options *[PrivateEndpointConnectionsForSCCPowershellClientBeginDeleteOptions](#PrivateEndpointConnectionsForSCCPowershellClientBeginDeleteOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateEndpointConnectionsForSCCPowershellClientDeleteResponse](#PrivateEndpointConnectionsForSCCPowershellClientDeleteResponse)], [error](/builtin#error)) ``` BeginDelete - Deletes a private endpoint connection. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateEndpointConnectionName - The name of the private endpoint connection associated with the Azure resource * options - PrivateEndpointConnectionsForSCCPowershellClientBeginDeleteOptions contains the optional parameters for the PrivateEndpointConnectionsForSCCPowershellClient.BeginDelete method. 
Example [¶](#example-PrivateEndpointConnectionsForSCCPowershellClient.BeginDelete) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SCCPowershellServiceDeletePrivateEndpointConnection.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateEndpointConnectionsForSCCPowershellClient().BeginDelete(ctx, "rgname", "service1", "myConnection", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } _, err = poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } } ``` ``` Output: ``` #### func (*PrivateEndpointConnectionsForSCCPowershellClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsforsccpowershell_client.go#L201) [¶](#PrivateEndpointConnectionsForSCCPowershellClient.Get) ``` func (client *[PrivateEndpointConnectionsForSCCPowershellClient](#PrivateEndpointConnectionsForSCCPowershellClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateEndpointConnectionName
[string](/builtin#string), options *[PrivateEndpointConnectionsForSCCPowershellClientGetOptions](#PrivateEndpointConnectionsForSCCPowershellClientGetOptions)) ([PrivateEndpointConnectionsForSCCPowershellClientGetResponse](#PrivateEndpointConnectionsForSCCPowershellClientGetResponse), [error](/builtin#error)) ``` Get - Gets the specified private endpoint connection associated with the service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateEndpointConnectionName - The name of the private endpoint connection associated with the Azure resource * options - PrivateEndpointConnectionsForSCCPowershellClientGetOptions contains the optional parameters for the PrivateEndpointConnectionsForSCCPowershellClient.Get method. Example [¶](#example-PrivateEndpointConnectionsForSCCPowershellClient.Get) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SCCPowershellServiceGetPrivateEndpointConnection.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateEndpointConnectionsForSCCPowershellClient().Get(ctx, "rgname", "service1", "myConnection", nil) if err 
!= nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. // res.PrivateEndpointConnection = armm365securityandcompliance.PrivateEndpointConnection{ // Name: to.Ptr("myConnection"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/privateEndpointConnections"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/service1/privateEndpointConnections/myConnection"), // Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ // PrivateEndpoint: &armm365securityandcompliance.PrivateEndpoint{ // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Network/privateEndpoints/peexample01"), // }, // PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ // Description: to.Ptr("Auto-Approved"), // ActionsRequired: to.Ptr("None"), // Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), // }, // }, // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // } } ``` ``` Output: ``` #### func (*PrivateEndpointConnectionsForSCCPowershellClient)
[NewListByServicePager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionsforsccpowershell_client.go#L262) [¶](#PrivateEndpointConnectionsForSCCPowershellClient.NewListByServicePager) added in v0.4.0 ``` func (client *[PrivateEndpointConnectionsForSCCPowershellClient](#PrivateEndpointConnectionsForSCCPowershellClient)) NewListByServicePager(resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateEndpointConnectionsForSCCPowershellClientListByServiceOptions](#PrivateEndpointConnectionsForSCCPowershellClientListByServiceOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[PrivateEndpointConnectionsForSCCPowershellClientListByServiceResponse](#PrivateEndpointConnectionsForSCCPowershellClientListByServiceResponse)] ``` NewListByServicePager - Lists all private endpoint connections for a service. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * options - PrivateEndpointConnectionsForSCCPowershellClientListByServiceOptions contains the optional parameters for the PrivateEndpointConnectionsForSCCPowershellClient.NewListByServicePager method. 
Example [¶](#example-PrivateEndpointConnectionsForSCCPowershellClient.NewListByServicePager) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SCCPowershellServiceListPrivateEndpointConnections.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewPrivateEndpointConnectionsForSCCPowershellClient().NewListByServicePager("rgname", "service1", nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// page.PrivateEndpointConnectionListResult = armm365securityandcompliance.PrivateEndpointConnectionListResult{ // Value: []*armm365securityandcompliance.PrivateEndpointConnection{ // { // Name: to.Ptr("myConnection"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/privateEndpointConnections"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/service1/privateEndpointConnections/myConnection"), // Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ // PrivateEndpoint: &armm365securityandcompliance.PrivateEndpoint{ // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Network/privateEndpoints/peexample01"), // }, // PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ // Description: to.Ptr("Auto-Approved"), // ActionsRequired: to.Ptr("None"), // Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), // }, // }, // }}, // } } } ``` ``` Output: ``` #### type [PrivateEndpointConnectionsForSCCPowershellClientBeginCreateOrUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L251) [¶](#PrivateEndpointConnectionsForSCCPowershellClientBeginCreateOrUpdateOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsForSCCPowershellClientBeginCreateOrUpdateOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateEndpointConnectionsForSCCPowershellClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateEndpointConnectionsForSCCPowershellClient.BeginCreateOrUpdate method.
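The ResumeToken field in these options types lets a long-running operation outlive the process that started it. The package's examples do not show this, so here is a hedged sketch of the pattern: it assumes a token was captured in an earlier run via the poller's ResumeToken method (provided by azcore's runtime.Poller) and persisted by the caller, and that passing it back rehydrates the poller instead of issuing a fresh request.

```go
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance"
)

// resumeCreateOrUpdate is an illustrative helper (not part of the SDK): it
// rehydrates a previously started BeginCreateOrUpdate poller from a saved
// token. savedToken is assumed to come from an earlier poller.ResumeToken()
// call made before the original process exited.
func resumeCreateOrUpdate(ctx context.Context, client *armm365securityandcompliance.PrivateEndpointConnectionsForSCCPowershellClient, savedToken string) {
	poller, err := client.BeginCreateOrUpdate(ctx, "rgname", "service1", "myConnection",
		// The request body is assumed to be ignored when resuming from a token.
		armm365securityandcompliance.PrivateEndpointConnection{},
		&armm365securityandcompliance.PrivateEndpointConnectionsForSCCPowershellClientBeginCreateOrUpdateOptions{
			ResumeToken: savedToken,
		})
	if err != nil {
		log.Fatalf("failed to resume the operation: %v", err)
	}
	if _, err = poller.PollUntilDone(ctx, nil); err != nil {
		log.Fatalf("failed to poll to completion: %v", err)
	}
}
```

The same ResumeToken field appears on every Begin* options type below, so the pattern carries over to BeginDelete and to the Sec client unchanged.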
#### type [PrivateEndpointConnectionsForSCCPowershellClientBeginDeleteOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L258) [¶](#PrivateEndpointConnectionsForSCCPowershellClientBeginDeleteOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsForSCCPowershellClientBeginDeleteOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateEndpointConnectionsForSCCPowershellClientBeginDeleteOptions contains the optional parameters for the PrivateEndpointConnectionsForSCCPowershellClient.BeginDelete method. #### type [PrivateEndpointConnectionsForSCCPowershellClientCreateOrUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L103) [¶](#PrivateEndpointConnectionsForSCCPowershellClientCreateOrUpdateResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsForSCCPowershellClientCreateOrUpdateResponse struct { [PrivateEndpointConnection](#PrivateEndpointConnection) } ``` PrivateEndpointConnectionsForSCCPowershellClientCreateOrUpdateResponse contains the response from method PrivateEndpointConnectionsForSCCPowershellClient.BeginCreateOrUpdate. 
#### type [PrivateEndpointConnectionsForSCCPowershellClientDeleteResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L108) [¶](#PrivateEndpointConnectionsForSCCPowershellClientDeleteResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsForSCCPowershellClientDeleteResponse struct { } ``` PrivateEndpointConnectionsForSCCPowershellClientDeleteResponse contains the response from method PrivateEndpointConnectionsForSCCPowershellClient.BeginDelete. #### type [PrivateEndpointConnectionsForSCCPowershellClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L265) [¶](#PrivateEndpointConnectionsForSCCPowershellClientGetOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsForSCCPowershellClientGetOptions struct { } ``` PrivateEndpointConnectionsForSCCPowershellClientGetOptions contains the optional parameters for the PrivateEndpointConnectionsForSCCPowershellClient.Get method. #### type [PrivateEndpointConnectionsForSCCPowershellClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L113) [¶](#PrivateEndpointConnectionsForSCCPowershellClientGetResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsForSCCPowershellClientGetResponse struct { [PrivateEndpointConnection](#PrivateEndpointConnection) } ``` PrivateEndpointConnectionsForSCCPowershellClientGetResponse contains the response from method PrivateEndpointConnectionsForSCCPowershellClient.Get. 
#### type [PrivateEndpointConnectionsForSCCPowershellClientListByServiceOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L271) [¶](#PrivateEndpointConnectionsForSCCPowershellClientListByServiceOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsForSCCPowershellClientListByServiceOptions struct { } ``` PrivateEndpointConnectionsForSCCPowershellClientListByServiceOptions contains the optional parameters for the PrivateEndpointConnectionsForSCCPowershellClient.NewListByServicePager method. #### type [PrivateEndpointConnectionsForSCCPowershellClientListByServiceResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L118) [¶](#PrivateEndpointConnectionsForSCCPowershellClientListByServiceResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsForSCCPowershellClientListByServiceResponse struct { [PrivateEndpointConnectionListResult](#PrivateEndpointConnectionListResult) } ``` PrivateEndpointConnectionsForSCCPowershellClientListByServiceResponse contains the response from method PrivateEndpointConnectionsForSCCPowershellClient.NewListByServicePager. #### type [PrivateEndpointConnectionsSecClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionssec_client.go#L26) [¶](#PrivateEndpointConnectionsSecClient) ``` type PrivateEndpointConnectionsSecClient struct { // contains filtered or unexported fields } ``` PrivateEndpointConnectionsSecClient contains the methods for the PrivateEndpointConnectionsSec group. 
Don't use this type directly, use NewPrivateEndpointConnectionsSecClient() instead. #### func [NewPrivateEndpointConnectionsSecClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionssec_client.go#L35) [¶](#NewPrivateEndpointConnectionsSecClient) ``` func NewPrivateEndpointConnectionsSecClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[PrivateEndpointConnectionsSecClient](#PrivateEndpointConnectionsSecClient), [error](/builtin#error)) ``` NewPrivateEndpointConnectionsSecClient creates a new instance of PrivateEndpointConnectionsSecClient with the specified values. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. 
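The examples in this package set optional model fields through the azcore `to.Ptr` helper. For readers unfamiliar with the idiom, here is a self-contained sketch of an equivalent generic helper; the local name `Ptr` and the sample values are illustrative, not part of the SDK.

```go
package main

import "fmt"

// Ptr mirrors the behavior of the azcore to.Ptr helper used throughout the
// examples: it returns a pointer to any value, which is convenient for
// populating optional pointer fields (e.g. *string, *Status) on SDK models
// without declaring a temporary variable for each one.
func Ptr[T any](v T) *T {
	return &v
}

func main() {
	desc := Ptr("Auto-Approved") // *string in one expression
	count := Ptr(42)             // works for any type via generics
	fmt.Println(*desc, *count)
}
```

Pointer fields let the SDK distinguish "field not set" (nil) from "field set to the zero value", which is why the models use them for optional properties.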
#### func (*PrivateEndpointConnectionsSecClient) [BeginCreateOrUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionssec_client.go#L57) [¶](#PrivateEndpointConnectionsSecClient.BeginCreateOrUpdate) ``` func (client *[PrivateEndpointConnectionsSecClient](#PrivateEndpointConnectionsSecClient)) BeginCreateOrUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateEndpointConnectionName [string](/builtin#string), properties [PrivateEndpointConnection](#PrivateEndpointConnection), options *[PrivateEndpointConnectionsSecClientBeginCreateOrUpdateOptions](#PrivateEndpointConnectionsSecClientBeginCreateOrUpdateOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateEndpointConnectionsSecClientCreateOrUpdateResponse](#PrivateEndpointConnectionsSecClientCreateOrUpdateResponse)], [error](/builtin#error)) ``` BeginCreateOrUpdate - Update the state of the specified private endpoint connection associated with the service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateEndpointConnectionName - The name of the private endpoint connection associated with the Azure resource * properties - The private endpoint connection properties. * options - PrivateEndpointConnectionsSecClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateEndpointConnectionsSecClient.BeginCreateOrUpdate method. 
Example [¶](#example-PrivateEndpointConnectionsSecClient.BeginCreateOrUpdate) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SecurityCenterServiceCreatePrivateEndpointConnection.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateEndpointConnectionsSecClient().BeginCreateOrUpdate(ctx, "rgname", "service1", "myConnection", armm365securityandcompliance.PrivateEndpointConnection{ Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ Description: to.Ptr("Auto-Approved"), Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), }, }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateEndpointConnection = armm365securityandcompliance.PrivateEndpointConnection{ // Name: to.Ptr("myConnection"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/privateEndpointConnections"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/service1/privateEndpointConnections/myConnection"), // Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ // PrivateEndpoint: &armm365securityandcompliance.PrivateEndpoint{ // ID: to.Ptr("/subscriptions/c<KEY>/resourceGroups/myResourceGroup/providers/Microsoft.Network/privateEndpoints/peexample01"), // }, // PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ // Description: to.Ptr("Auto-Approved"), // ActionsRequired: to.Ptr("None"), // Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.PrivateEndpointConnectionProvisioningStateSucceeded), // }, // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // } } ``` ``` Output: ``` #### func (*PrivateEndpointConnectionsSecClient)
[BeginDelete](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionssec_client.go#L129) [¶](#PrivateEndpointConnectionsSecClient.BeginDelete) ``` func (client *[PrivateEndpointConnectionsSecClient](#PrivateEndpointConnectionsSecClient)) BeginDelete(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateEndpointConnectionName [string](/builtin#string), options *[PrivateEndpointConnectionsSecClientBeginDeleteOptions](#PrivateEndpointConnectionsSecClientBeginDeleteOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateEndpointConnectionsSecClientDeleteResponse](#PrivateEndpointConnectionsSecClientDeleteResponse)], [error](/builtin#error)) ``` BeginDelete - Deletes a private endpoint connection. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateEndpointConnectionName - The name of the private endpoint connection associated with the Azure resource * options - PrivateEndpointConnectionsSecClientBeginDeleteOptions contains the optional parameters for the PrivateEndpointConnectionsSecClient.BeginDelete method. 
Example [¶](#example-PrivateEndpointConnectionsSecClient.BeginDelete) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SecurityCenterServiceDeletePrivateEndpointConnection.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateEndpointConnectionsSecClient().BeginDelete(ctx, "rgname", "service1", "myConnection", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } _, err = poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } } ``` ``` Output: ``` #### func (*PrivateEndpointConnectionsSecClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionssec_client.go#L201) [¶](#PrivateEndpointConnectionsSecClient.Get) ``` func (client *[PrivateEndpointConnectionsSecClient](#PrivateEndpointConnectionsSecClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateEndpointConnectionName [string](/builtin#string), options
*[PrivateEndpointConnectionsSecClientGetOptions](#PrivateEndpointConnectionsSecClientGetOptions)) ([PrivateEndpointConnectionsSecClientGetResponse](#PrivateEndpointConnectionsSecClientGetResponse), [error](/builtin#error)) ``` Get - Gets the specified private endpoint connection associated with the service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateEndpointConnectionName - The name of the private endpoint connection associated with the Azure resource * options - PrivateEndpointConnectionsSecClientGetOptions contains the optional parameters for the PrivateEndpointConnectionsSecClient.Get method. Example [¶](#example-PrivateEndpointConnectionsSecClient.Get) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SecurityCenterServiceGetPrivateEndpointConnection.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateEndpointConnectionsSecClient().Get(ctx, "rgname", "service1", "myConnection", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. 
We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. // res.PrivateEndpointConnection = armm365securityandcompliance.PrivateEndpointConnection{ // Name: to.Ptr("myConnection"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/privateEndpointConnections"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/service1/privateEndpointConnections/myConnection"), // Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ // PrivateEndpoint: &armm365securityandcompliance.PrivateEndpoint{ // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Network/privateEndpoints/peexample01"), // }, // PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ // Description: to.Ptr("Auto-Approved"), // ActionsRequired: to.Ptr("None"), // Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), // }, // }, // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // } } ``` ``` Output: ``` #### func (*PrivateEndpointConnectionsSecClient)
[NewListByServicePager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privateendpointconnectionssec_client.go#L262) [¶](#PrivateEndpointConnectionsSecClient.NewListByServicePager) added in v0.4.0 ``` func (client *[PrivateEndpointConnectionsSecClient](#PrivateEndpointConnectionsSecClient)) NewListByServicePager(resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateEndpointConnectionsSecClientListByServiceOptions](#PrivateEndpointConnectionsSecClientListByServiceOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[PrivateEndpointConnectionsSecClientListByServiceResponse](#PrivateEndpointConnectionsSecClientListByServiceResponse)] ``` NewListByServicePager - Lists all private endpoint connections for a service. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * options - PrivateEndpointConnectionsSecClientListByServiceOptions contains the optional parameters for the PrivateEndpointConnectionsSecClient.NewListByServicePager method. 
Example [¶](#example-PrivateEndpointConnectionsSecClient.NewListByServicePager) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SecurityCenterServiceListPrivateEndpointConnections.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewPrivateEndpointConnectionsSecClient().NewListByServicePager("rgname", "service1", nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use the page here. We use the blank identifier for demo purposes only. _ = v } // If the HTTP response code is 200 as defined in the example definition, your page structure would look as follows. Note that all the values in the output are placeholder values for demo purposes only. 
// page.PrivateEndpointConnectionListResult = armm365securityandcompliance.PrivateEndpointConnectionListResult{ // Value: []*armm365securityandcompliance.PrivateEndpointConnection{ // { // Name: to.Ptr("myConnection"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/privateEndpointConnections"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/service1/privateEndpointConnections/myConnection"), // Properties: &armm365securityandcompliance.PrivateEndpointConnectionProperties{ // PrivateEndpoint: &armm365securityandcompliance.PrivateEndpoint{ // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Network/privateEndpoints/peexample01"), // }, // PrivateLinkServiceConnectionState: &armm365securityandcompliance.PrivateLinkServiceConnectionState{ // Description: to.Ptr("Auto-Approved"), // ActionsRequired: to.Ptr("None"), // Status: to.Ptr(armm365securityandcompliance.PrivateEndpointServiceConnectionStatusApproved), // }, // }, // }}, // } } } ``` ``` Output: ``` #### type [PrivateEndpointConnectionsSecClientBeginCreateOrUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L277) [¶](#PrivateEndpointConnectionsSecClientBeginCreateOrUpdateOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsSecClientBeginCreateOrUpdateOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateEndpointConnectionsSecClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateEndpointConnectionsSecClient.BeginCreateOrUpdate method. 
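The ResumeToken field on the Begin* option types lets a long-running operation (LRO) started in one process be resumed later from a persisted token. A minimal mock sketch of that round trip, using locally declared stand-ins (`fakePoller`, `beginDelete`) rather than the real SDK poller and client types:

```go
package main

import "fmt"

// fakePoller is a stand-in for the SDK's runtime.Poller (illustration only);
// real pollers expose ResumeToken() so an in-flight LRO can be resumed later.
type fakePoller struct{ token string }

func (p *fakePoller) ResumeToken() (string, error) { return p.token, nil }

// beginDeleteOptions mirrors the shape of the generated Begin* options types.
type beginDeleteOptions struct{ ResumeToken string }

// beginDelete mimics a Begin* call: with a ResumeToken set, the operation
// picks up where it left off instead of starting a new one.
func beginDelete(opts *beginDeleteOptions) *fakePoller {
	if opts != nil && opts.ResumeToken != "" {
		return &fakePoller{token: opts.ResumeToken} // resumed LRO
	}
	return &fakePoller{token: "lro-token-123"} // fresh LRO
}

func main() {
	// First process: start the operation and persist the token somewhere durable.
	poller := beginDelete(nil)
	tok, _ := poller.ResumeToken()

	// A later process resumes the same operation from the saved token.
	resumed := beginDelete(&beginDeleteOptions{ResumeToken: tok})
	tok2, _ := resumed.ResumeToken()
	fmt.Println(tok == tok2) // true
}
```

With the real SDK, the same flow would use the client's BeginCreateOrUpdate/BeginDelete methods and pass the saved token via the options struct shown above.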
#### type [PrivateEndpointConnectionsSecClientBeginDeleteOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L284) [¶](#PrivateEndpointConnectionsSecClientBeginDeleteOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsSecClientBeginDeleteOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateEndpointConnectionsSecClientBeginDeleteOptions contains the optional parameters for the PrivateEndpointConnectionsSecClient.BeginDelete method. #### type [PrivateEndpointConnectionsSecClientCreateOrUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L123) [¶](#PrivateEndpointConnectionsSecClientCreateOrUpdateResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsSecClientCreateOrUpdateResponse struct { [PrivateEndpointConnection](#PrivateEndpointConnection) } ``` PrivateEndpointConnectionsSecClientCreateOrUpdateResponse contains the response from method PrivateEndpointConnectionsSecClient.BeginCreateOrUpdate. #### type [PrivateEndpointConnectionsSecClientDeleteResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L128) [¶](#PrivateEndpointConnectionsSecClientDeleteResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsSecClientDeleteResponse struct { } ``` PrivateEndpointConnectionsSecClientDeleteResponse contains the response from method PrivateEndpointConnectionsSecClient.BeginDelete. 
#### type [PrivateEndpointConnectionsSecClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L291) [¶](#PrivateEndpointConnectionsSecClientGetOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsSecClientGetOptions struct { } ``` PrivateEndpointConnectionsSecClientGetOptions contains the optional parameters for the PrivateEndpointConnectionsSecClient.Get method. #### type [PrivateEndpointConnectionsSecClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L133) [¶](#PrivateEndpointConnectionsSecClientGetResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsSecClientGetResponse struct { [PrivateEndpointConnection](#PrivateEndpointConnection) } ``` PrivateEndpointConnectionsSecClientGetResponse contains the response from method PrivateEndpointConnectionsSecClient.Get. #### type [PrivateEndpointConnectionsSecClientListByServiceOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L297) [¶](#PrivateEndpointConnectionsSecClientListByServiceOptions) added in v0.2.0 ``` type PrivateEndpointConnectionsSecClientListByServiceOptions struct { } ``` PrivateEndpointConnectionsSecClientListByServiceOptions contains the optional parameters for the PrivateEndpointConnectionsSecClient.NewListByServicePager method. 
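The generated response types above simply embed the corresponding model, so the model's fields are promoted onto the response. A minimal sketch of that promotion, using simplified local mirrors of the types (illustration only, not the SDK definitions):

```go
package main

import "fmt"

// Simplified local mirror of the SDK model, for illustration only.
type PrivateEndpointConnection struct {
	Name *string
}

// Like PrivateEndpointConnectionsSecClientGetResponse, the response type
// embeds the model, so its fields are promoted.
type getResponse struct {
	PrivateEndpointConnection
}

func main() {
	n := "myConnection"
	res := getResponse{PrivateEndpointConnection{Name: &n}}
	// Promoted field: res.Name is shorthand for res.PrivateEndpointConnection.Name.
	fmt.Println(*res.Name) // myConnection
}
```

This is why the generated examples can read `res.PrivateEndpointConnection` directly off the Get response.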
#### type [PrivateEndpointConnectionsSecClientListByServiceResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L138) [¶](#PrivateEndpointConnectionsSecClientListByServiceResponse) added in v0.2.0 ``` type PrivateEndpointConnectionsSecClientListByServiceResponse struct { [PrivateEndpointConnectionListResult](#PrivateEndpointConnectionListResult) } ``` PrivateEndpointConnectionsSecClientListByServiceResponse contains the response from method PrivateEndpointConnectionsSecClient.NewListByServicePager. #### type [PrivateEndpointServiceConnectionStatus](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/constants.go#L114) [¶](#PrivateEndpointServiceConnectionStatus) ``` type PrivateEndpointServiceConnectionStatus [string](/builtin#string) ``` PrivateEndpointServiceConnectionStatus - The private endpoint connection status. 
``` const ( PrivateEndpointServiceConnectionStatusApproved [PrivateEndpointServiceConnectionStatus](#PrivateEndpointServiceConnectionStatus) = "Approved" PrivateEndpointServiceConnectionStatusPending [PrivateEndpointServiceConnectionStatus](#PrivateEndpointServiceConnectionStatus) = "Pending" PrivateEndpointServiceConnectionStatusRejected [PrivateEndpointServiceConnectionStatus](#PrivateEndpointServiceConnectionStatus) = "Rejected" ) ``` #### func [PossiblePrivateEndpointServiceConnectionStatusValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/constants.go#L123) [¶](#PossiblePrivateEndpointServiceConnectionStatusValues) ``` func PossiblePrivateEndpointServiceConnectionStatusValues() [][PrivateEndpointServiceConnectionStatus](#PrivateEndpointServiceConnectionStatus) ``` PossiblePrivateEndpointServiceConnectionStatusValues returns the possible values for the PrivateEndpointServiceConnectionStatus const type. #### type [PrivateLinkResource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L302) [¶](#PrivateLinkResource) ``` type PrivateLinkResource struct { // Resource properties. Properties *[PrivateLinkResourceProperties](#PrivateLinkResourceProperties) // READ-ONLY; Fully qualified resource ID for the resource. Ex - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName} ID *[string](/builtin#string) // READ-ONLY; The name of the resource Name *[string](/builtin#string) // READ-ONLY; Required property for system data SystemData *[SystemData](#SystemData) // READ-ONLY; The type of the resource. E.g. 
"Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts" Type *[string](/builtin#string) } ``` PrivateLinkResource - A private link resource #### func (PrivateLinkResource) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L370) [¶](#PrivateLinkResource.MarshalJSON) added in v0.6.0 ``` func (p [PrivateLinkResource](#PrivateLinkResource)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type PrivateLinkResource. #### func (*PrivateLinkResource) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L381) [¶](#PrivateLinkResource.UnmarshalJSON) added in v0.6.0 ``` func (p *[PrivateLinkResource](#PrivateLinkResource)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type PrivateLinkResource. #### type [PrivateLinkResourceListResult](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L320) [¶](#PrivateLinkResourceListResult) ``` type PrivateLinkResourceListResult struct { // Array of private link resources Value []*[PrivateLinkResource](#PrivateLinkResource) // READ-ONLY; The URL to get the next set of results. 
NextLink *[string](/builtin#string) } ``` PrivateLinkResourceListResult - A list of private link resources #### func (PrivateLinkResourceListResult) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L413) [¶](#PrivateLinkResourceListResult.MarshalJSON) ``` func (p [PrivateLinkResourceListResult](#PrivateLinkResourceListResult)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type PrivateLinkResourceListResult. #### func (*PrivateLinkResourceListResult) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L421) [¶](#PrivateLinkResourceListResult.UnmarshalJSON) added in v0.6.0 ``` func (p *[PrivateLinkResourceListResult](#PrivateLinkResourceListResult)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type PrivateLinkResourceListResult. #### type [PrivateLinkResourceProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L329) [¶](#PrivateLinkResourceProperties) ``` type PrivateLinkResourceProperties struct { // The private link resource's private link DNS zone names. RequiredZoneNames []*[string](/builtin#string) // READ-ONLY; The private link resource group id. GroupID *[string](/builtin#string) // READ-ONLY; The private link resource required member names. RequiredMembers []*[string](/builtin#string) } ``` PrivateLinkResourceProperties - Properties of a private link resource. 
#### func (PrivateLinkResourceProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L444) [¶](#PrivateLinkResourceProperties.MarshalJSON) ``` func (p [PrivateLinkResourceProperties](#PrivateLinkResourceProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type PrivateLinkResourceProperties. #### func (*PrivateLinkResourceProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L453) [¶](#PrivateLinkResourceProperties.UnmarshalJSON) added in v0.6.0 ``` func (p *[PrivateLinkResourceProperties](#PrivateLinkResourceProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type PrivateLinkResourceProperties. #### type [PrivateLinkResourcesAdtAPIClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresourcesadtapi_client.go#L26) [¶](#PrivateLinkResourcesAdtAPIClient) ``` type PrivateLinkResourcesAdtAPIClient struct { // contains filtered or unexported fields } ``` PrivateLinkResourcesAdtAPIClient contains the methods for the PrivateLinkResourcesAdtAPI group. Don't use this type directly, use NewPrivateLinkResourcesAdtAPIClient() instead. 
#### func [NewPrivateLinkResourcesAdtAPIClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresourcesadtapi_client.go#L35) [¶](#NewPrivateLinkResourcesAdtAPIClient) ``` func NewPrivateLinkResourcesAdtAPIClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[PrivateLinkResourcesAdtAPIClient](#PrivateLinkResourcesAdtAPIClient), [error](/builtin#error)) ``` NewPrivateLinkResourcesAdtAPIClient creates a new instance of PrivateLinkResourcesAdtAPIClient with the specified values. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. 
#### func (*PrivateLinkResourcesAdtAPIClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresourcesadtapi_client.go#L56) [¶](#PrivateLinkResourcesAdtAPIClient.Get) ``` func (client *[PrivateLinkResourcesAdtAPIClient](#PrivateLinkResourcesAdtAPIClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), groupName [string](/builtin#string), options *[PrivateLinkResourcesAdtAPIClientGetOptions](#PrivateLinkResourcesAdtAPIClientGetOptions)) ([PrivateLinkResourcesAdtAPIClientGetResponse](#PrivateLinkResourcesAdtAPIClientGetResponse), [error](/builtin#error)) ``` Get - Gets a private link resource that needs to be created for a service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * groupName - The name of the private link resource group. * options - PrivateLinkResourcesAdtAPIClientGetOptions contains the optional parameters for the PrivateLinkResourcesAdtAPIClient.Get method. 
Example [¶](#example-PrivateLinkResourcesAdtAPIClient.Get) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ManagementAPIPrivateLinkResourceGet.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateLinkResourcesAdtAPIClient().Get(ctx, "rgname", "service1", "fhir", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use the response here. We use the blank identifier for demo purposes only. _ = res // If the HTTP response code is 200 as defined in the example definition, your response structure would look as follows. Note that all the values in the output are placeholder values for demo purposes only. 
// res.PrivateLinkResource = armm365securityandcompliance.PrivateLinkResource{ // Name: to.Ptr("fhir"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/privateLinkResources"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/service1/privateLinkResources/fhir"), // Properties: &armm365securityandcompliance.PrivateLinkResourceProperties{ // GroupID: to.Ptr("fhir"), // RequiredMembers: []*string{ // to.Ptr("fhir")}, // RequiredZoneNames: []*string{ // to.Ptr("privatelink.security.microsoft.com")}, // }, // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // } } ``` ``` Output: ``` #### func (*PrivateLinkResourcesAdtAPIClient) [ListByService](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresourcesadtapi_client.go#L118) [¶](#PrivateLinkResourcesAdtAPIClient.ListByService) ``` func (client *[PrivateLinkResourcesAdtAPIClient](#PrivateLinkResourcesAdtAPIClient)) ListByService(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateLinkResourcesAdtAPIClientListByServiceOptions](#PrivateLinkResourcesAdtAPIClientListByServiceOptions)) 
([PrivateLinkResourcesAdtAPIClientListByServiceResponse](#PrivateLinkResourcesAdtAPIClientListByServiceResponse), [error](/builtin#error)) ``` ListByService - Gets the private link resources that need to be created for a service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * options - PrivateLinkResourcesAdtAPIClientListByServiceOptions contains the optional parameters for the PrivateLinkResourcesAdtAPIClient.ListByService method. Example [¶](#example-PrivateLinkResourcesAdtAPIClient.ListByService) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ManagementAPIPrivateLinkResourcesListByService.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateLinkResourcesAdtAPIClient().ListByService(ctx, "rgname", "service1", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use the response here. We use the blank identifier for demo purposes only. _ = res // If the HTTP response code is 200 as defined in the example definition, your response structure would look as follows. 
Note that all the values in the output are placeholder values for demo purposes only. // res.PrivateLinkResourceListResult = armm365securityandcompliance.PrivateLinkResourceListResult{ // Value: []*armm365securityandcompliance.PrivateLinkResource{ // { // Name: to.Ptr("fhir"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/privateLinkResources"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/service1/privateLinkResources/fhir"), // Properties: &armm365securityandcompliance.PrivateLinkResourceProperties{ // GroupID: to.Ptr("fhir"), // RequiredMembers: []*string{ // to.Ptr("fhir")}, // RequiredZoneNames: []*string{ // to.Ptr("privatelink.compliance.microsoft.com")}, // }, // }}, // } } ``` ``` Output: ``` #### type [PrivateLinkResourcesAdtAPIClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L342) [¶](#PrivateLinkResourcesAdtAPIClientGetOptions) added in v0.2.0 ``` type PrivateLinkResourcesAdtAPIClientGetOptions struct { } ``` PrivateLinkResourcesAdtAPIClientGetOptions contains the optional parameters for the PrivateLinkResourcesAdtAPIClient.Get method. 
#### type [PrivateLinkResourcesAdtAPIClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L143) [¶](#PrivateLinkResourcesAdtAPIClientGetResponse) added in v0.2.0 ``` type PrivateLinkResourcesAdtAPIClientGetResponse struct { [PrivateLinkResource](#PrivateLinkResource) } ``` PrivateLinkResourcesAdtAPIClientGetResponse contains the response from method PrivateLinkResourcesAdtAPIClient.Get. #### type [PrivateLinkResourcesAdtAPIClientListByServiceOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L348) [¶](#PrivateLinkResourcesAdtAPIClientListByServiceOptions) added in v0.2.0 ``` type PrivateLinkResourcesAdtAPIClientListByServiceOptions struct { } ``` PrivateLinkResourcesAdtAPIClientListByServiceOptions contains the optional parameters for the PrivateLinkResourcesAdtAPIClient.ListByService method. #### type [PrivateLinkResourcesAdtAPIClientListByServiceResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L148) [¶](#PrivateLinkResourcesAdtAPIClientListByServiceResponse) added in v0.2.0 ``` type PrivateLinkResourcesAdtAPIClientListByServiceResponse struct { [PrivateLinkResourceListResult](#PrivateLinkResourceListResult) } ``` PrivateLinkResourcesAdtAPIClientListByServiceResponse contains the response from method PrivateLinkResourcesAdtAPIClient.ListByService. 
#### type [PrivateLinkResourcesClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresources_client.go#L26) [¶](#PrivateLinkResourcesClient) ``` type PrivateLinkResourcesClient struct { // contains filtered or unexported fields } ``` PrivateLinkResourcesClient contains the methods for the PrivateLinkResources group. Don't use this type directly, use NewPrivateLinkResourcesClient() instead. #### func [NewPrivateLinkResourcesClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresources_client.go#L35) [¶](#NewPrivateLinkResourcesClient) ``` func NewPrivateLinkResourcesClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[PrivateLinkResourcesClient](#PrivateLinkResourcesClient), [error](/builtin#error)) ``` NewPrivateLinkResourcesClient creates a new instance of PrivateLinkResourcesClient with the specified values. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. 
#### func (*PrivateLinkResourcesClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresources_client.go#L56) [¶](#PrivateLinkResourcesClient.Get) ``` func (client *[PrivateLinkResourcesClient](#PrivateLinkResourcesClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), groupName [string](/builtin#string), options *[PrivateLinkResourcesClientGetOptions](#PrivateLinkResourcesClientGetOptions)) ([PrivateLinkResourcesClientGetResponse](#PrivateLinkResourcesClientGetResponse), [error](/builtin#error)) ``` Get - Gets a private link resource that needs to be created for a service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * groupName - The name of the private link resource group. * options - PrivateLinkResourcesClientGetOptions contains the optional parameters for the PrivateLinkResourcesClient.Get method. 
Example [¶](#example-PrivateLinkResourcesClient.Get) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/EdmUploadPrivateLinkResourceGet.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateLinkResourcesClient().Get(ctx, "rgname", "service1", "fhir", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use the response here. We use the blank identifier for demo purposes only. _ = res // If the HTTP response code is 200 as defined in the example definition, your response structure would look as follows. Note that all the values in the output are placeholder values for demo purposes only. 
// res.PrivateLinkResource = armm365securityandcompliance.PrivateLinkResource{ // Name: to.Ptr("fhir"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/privateLinkResources"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/service1/privateLinkResources/fhir"), // Properties: &armm365securityandcompliance.PrivateLinkResourceProperties{ // GroupID: to.Ptr("fhir"), // RequiredMembers: []*string{ // to.Ptr("fhir")}, // RequiredZoneNames: []*string{ // to.Ptr("privatelink.security.microsoft.com")}, // }, // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // } } ``` #### func (*PrivateLinkResourcesClient) [ListByService](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresources_client.go#L118) [¶](#PrivateLinkResourcesClient.ListByService) ``` func (client *[PrivateLinkResourcesClient](#PrivateLinkResourcesClient)) ListByService(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateLinkResourcesClientListByServiceOptions](#PrivateLinkResourcesClientListByServiceOptions)) ([PrivateLinkResourcesClientListByServiceResponse](#PrivateLinkResourcesClientListByServiceResponse), 
[error](/builtin#error)) ``` ListByService - Gets the private link resources that need to be created for a service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * options - PrivateLinkResourcesClientListByServiceOptions contains the optional parameters for the PrivateLinkResourcesClient.ListByService method. Example [¶](#example-PrivateLinkResourcesClient.ListByService) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/EdmUploadPrivateLinkResourcesListByService.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateLinkResourcesClient().ListByService(ctx, "rgname", "service1", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkResourceListResult = armm365securityandcompliance.PrivateLinkResourceListResult{ // Value: []*armm365securityandcompliance.PrivateLinkResource{ // { // Name: to.Ptr("fhir"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/privateLinkResources"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/service1/privateLinkResources/fhir"), // Properties: &armm365securityandcompliance.PrivateLinkResourceProperties{ // GroupID: to.Ptr("fhir"), // RequiredMembers: []*string{ // to.Ptr("fhir")}, // RequiredZoneNames: []*string{ // to.Ptr("privatelink.compliance.microsoft.com")}, // }, // }}, // } } ``` #### type [PrivateLinkResourcesClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L353) [¶](#PrivateLinkResourcesClientGetOptions) added in v0.2.0 ``` type PrivateLinkResourcesClientGetOptions struct { } ``` PrivateLinkResourcesClientGetOptions contains the optional parameters for the PrivateLinkResourcesClient.Get method. #### type [PrivateLinkResourcesClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L153) [¶](#PrivateLinkResourcesClientGetResponse) added in v0.2.0 ``` type PrivateLinkResourcesClientGetResponse struct { [PrivateLinkResource](#PrivateLinkResource) } ``` PrivateLinkResourcesClientGetResponse contains the response from method PrivateLinkResourcesClient.Get. 
#### type [PrivateLinkResourcesClientListByServiceOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L359) [¶](#PrivateLinkResourcesClientListByServiceOptions) added in v0.2.0 ``` type PrivateLinkResourcesClientListByServiceOptions struct { } ``` PrivateLinkResourcesClientListByServiceOptions contains the optional parameters for the PrivateLinkResourcesClient.ListByService method. #### type [PrivateLinkResourcesClientListByServiceResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L158) [¶](#PrivateLinkResourcesClientListByServiceResponse) added in v0.2.0 ``` type PrivateLinkResourcesClientListByServiceResponse struct { [PrivateLinkResourceListResult](#PrivateLinkResourceListResult) } ``` PrivateLinkResourcesClientListByServiceResponse contains the response from method PrivateLinkResourcesClient.ListByService. #### type [PrivateLinkResourcesCompClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresourcescomp_client.go#L26) [¶](#PrivateLinkResourcesCompClient) ``` type PrivateLinkResourcesCompClient struct { // contains filtered or unexported fields } ``` PrivateLinkResourcesCompClient contains the methods for the PrivateLinkResourcesComp group. Don't use this type directly, use NewPrivateLinkResourcesCompClient() instead. 
#### func [NewPrivateLinkResourcesCompClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresourcescomp_client.go#L35) [¶](#NewPrivateLinkResourcesCompClient) ``` func NewPrivateLinkResourcesCompClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[PrivateLinkResourcesCompClient](#PrivateLinkResourcesCompClient), [error](/builtin#error)) ``` NewPrivateLinkResourcesCompClient creates a new instance of PrivateLinkResourcesCompClient with the specified values. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. #### func (*PrivateLinkResourcesCompClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresourcescomp_client.go#L56) [¶](#PrivateLinkResourcesCompClient.Get) ``` func (client *[PrivateLinkResourcesCompClient](#PrivateLinkResourcesCompClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), groupName [string](/builtin#string), options *[PrivateLinkResourcesCompClientGetOptions](#PrivateLinkResourcesCompClientGetOptions)) ([PrivateLinkResourcesCompClientGetResponse](#PrivateLinkResourcesCompClientGetResponse), [error](/builtin#error)) ``` Get - Gets a private link resource that needs to be created for a service. 
If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * groupName - The name of the private link resource group. * options - PrivateLinkResourcesCompClientGetOptions contains the optional parameters for the PrivateLinkResourcesCompClient.Get method. Example [¶](#example-PrivateLinkResourcesCompClient.Get) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ComplianceCenterPrivateLinkResourceGet.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateLinkResourcesCompClient().Get(ctx, "rgname", "service1", "fhir", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkResource = armm365securityandcompliance.PrivateLinkResource{ // Name: to.Ptr("fhir"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/privateLinkResources"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/service1/privateLinkResources/fhir"), // Properties: &armm365securityandcompliance.PrivateLinkResourceProperties{ // GroupID: to.Ptr("fhir"), // RequiredMembers: []*string{ // to.Ptr("fhir")}, // RequiredZoneNames: []*string{ // to.Ptr("privatelink.security.microsoft.com")}, // }, // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // } } ``` #### func (*PrivateLinkResourcesCompClient) [ListByService](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresourcescomp_client.go#L118) [¶](#PrivateLinkResourcesCompClient.ListByService) ``` func (client *[PrivateLinkResourcesCompClient](#PrivateLinkResourcesCompClient)) ListByService(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateLinkResourcesCompClientListByServiceOptions](#PrivateLinkResourcesCompClientListByServiceOptions)) 
([PrivateLinkResourcesCompClientListByServiceResponse](#PrivateLinkResourcesCompClientListByServiceResponse), [error](/builtin#error)) ``` ListByService - Gets the private link resources that need to be created for a service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * options - PrivateLinkResourcesCompClientListByServiceOptions contains the optional parameters for the PrivateLinkResourcesCompClient.ListByService method. Example [¶](#example-PrivateLinkResourcesCompClient.ListByService) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ComplianceCenterPrivateLinkResourcesListByService.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateLinkResourcesCompClient().ListByService(ctx, "rgname", "service1", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. 
Please pay attention that all the values in the output are fake values for just demo purposes. // res.PrivateLinkResourceListResult = armm365securityandcompliance.PrivateLinkResourceListResult{ // Value: []*armm365securityandcompliance.PrivateLinkResource{ // { // Name: to.Ptr("fhir"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/privateLinkResources"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/service1/privateLinkResources/fhir"), // Properties: &armm365securityandcompliance.PrivateLinkResourceProperties{ // GroupID: to.Ptr("fhir"), // RequiredMembers: []*string{ // to.Ptr("fhir")}, // RequiredZoneNames: []*string{ // to.Ptr("privatelink.compliance.microsoft.com")}, // }, // }}, // } } ``` #### type [PrivateLinkResourcesCompClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L364) [¶](#PrivateLinkResourcesCompClientGetOptions) added in v0.2.0 ``` type PrivateLinkResourcesCompClientGetOptions struct { } ``` PrivateLinkResourcesCompClientGetOptions contains the optional parameters for the PrivateLinkResourcesCompClient.Get method. #### type [PrivateLinkResourcesCompClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L163) [¶](#PrivateLinkResourcesCompClientGetResponse) added in v0.2.0 ``` type PrivateLinkResourcesCompClientGetResponse struct { [PrivateLinkResource](#PrivateLinkResource) } ``` PrivateLinkResourcesCompClientGetResponse contains the response from method PrivateLinkResourcesCompClient.Get. 
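The commented outputs above show every model field boxed behind a pointer via `to.Ptr` (e.g. `RequiredZoneNames []*string`). A small generic pair, one to box values and one to read them back with a nil fallback, captures how these fields are populated and consumed. This is a sketch: `Ptr` mirrors the SDK's `to.Ptr` helper, `Deref` and the `properties` struct are stand-ins defined here, not SDK API.

```go
package main

import "fmt"

// Ptr mirrors to.Ptr from the SDK's helper package: optional model fields
// are pointers, so literal values must be boxed before assignment.
func Ptr[T any](v T) *T { return &v }

// Deref reads a pointer field, returning the zero value when it is nil,
// which is a safe way to consume optional fields from a response.
func Deref[T any](p *T) T {
	if p == nil {
		var zero T
		return zero
	}
	return *p
}

// properties is a stand-in for PrivateLinkResourceProperties, keeping only
// the two field shapes seen in the example outputs.
type properties struct {
	GroupID           *string
	RequiredZoneNames []*string
}

func main() {
	props := properties{
		GroupID:           Ptr("fhir"),
		RequiredZoneNames: []*string{Ptr("privatelink.compliance.microsoft.com")},
	}
	fmt.Println(Deref(props.GroupID)) // fhir
	for _, zone := range props.RequiredZoneNames {
		fmt.Println(Deref(zone))
	}
}
```

Boxing through pointers is how the generated models distinguish "field absent" (nil) from "field set to the zero value".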
#### type [PrivateLinkResourcesCompClientListByServiceOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L370) [¶](#PrivateLinkResourcesCompClientListByServiceOptions) added in v0.2.0 ``` type PrivateLinkResourcesCompClientListByServiceOptions struct { } ``` PrivateLinkResourcesCompClientListByServiceOptions contains the optional parameters for the PrivateLinkResourcesCompClient.ListByService method. #### type [PrivateLinkResourcesCompClientListByServiceResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L168) [¶](#PrivateLinkResourcesCompClientListByServiceResponse) added in v0.2.0 ``` type PrivateLinkResourcesCompClientListByServiceResponse struct { [PrivateLinkResourceListResult](#PrivateLinkResourceListResult) } ``` PrivateLinkResourcesCompClientListByServiceResponse contains the response from method PrivateLinkResourcesCompClient.ListByService. #### type [PrivateLinkResourcesForMIPPolicySyncClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresourcesformippolicysync_client.go#L26) [¶](#PrivateLinkResourcesForMIPPolicySyncClient) ``` type PrivateLinkResourcesForMIPPolicySyncClient struct { // contains filtered or unexported fields } ``` PrivateLinkResourcesForMIPPolicySyncClient contains the methods for the PrivateLinkResourcesForMIPPolicySync group. Don't use this type directly, use NewPrivateLinkResourcesForMIPPolicySyncClient() instead. 
#### func [NewPrivateLinkResourcesForMIPPolicySyncClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresourcesformippolicysync_client.go#L35) [¶](#NewPrivateLinkResourcesForMIPPolicySyncClient) ``` func NewPrivateLinkResourcesForMIPPolicySyncClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[PrivateLinkResourcesForMIPPolicySyncClient](#PrivateLinkResourcesForMIPPolicySyncClient), [error](/builtin#error)) ``` NewPrivateLinkResourcesForMIPPolicySyncClient creates a new instance of PrivateLinkResourcesForMIPPolicySyncClient with the specified values. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. 
#### func (*PrivateLinkResourcesForMIPPolicySyncClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresourcesformippolicysync_client.go#L56) [¶](#PrivateLinkResourcesForMIPPolicySyncClient.Get) ``` func (client *[PrivateLinkResourcesForMIPPolicySyncClient](#PrivateLinkResourcesForMIPPolicySyncClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), groupName [string](/builtin#string), options *[PrivateLinkResourcesForMIPPolicySyncClientGetOptions](#PrivateLinkResourcesForMIPPolicySyncClientGetOptions)) ([PrivateLinkResourcesForMIPPolicySyncClientGetResponse](#PrivateLinkResourcesForMIPPolicySyncClientGetResponse), [error](/builtin#error)) ``` Get - Gets a private link resource that needs to be created for a service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * groupName - The name of the private link resource group. * options - PrivateLinkResourcesForMIPPolicySyncClientGetOptions contains the optional parameters for the PrivateLinkResourcesForMIPPolicySyncClient.Get method. 
Example [¶](#example-PrivateLinkResourcesForMIPPolicySyncClient.Get) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/MIPPolicySyncPrivateLinkResourceGet.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateLinkResourcesForMIPPolicySyncClient().Get(ctx, "rgname", "service1", "fhir", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkResource = armm365securityandcompliance.PrivateLinkResource{ // Name: to.Ptr("fhir"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/privateLinkResources"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/service1/privateLinkResources/fhir"), // Properties: &armm365securityandcompliance.PrivateLinkResourceProperties{ // GroupID: to.Ptr("fhir"), // RequiredMembers: []*string{ // to.Ptr("fhir")}, // RequiredZoneNames: []*string{ // to.Ptr("privatelink.security.microsoft.com")}, // }, // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("fangsu"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("fangsu"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // } } ``` #### func (*PrivateLinkResourcesForMIPPolicySyncClient) [ListByService](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresourcesformippolicysync_client.go#L118) [¶](#PrivateLinkResourcesForMIPPolicySyncClient.ListByService) ``` func (client *[PrivateLinkResourcesForMIPPolicySyncClient](#PrivateLinkResourcesForMIPPolicySyncClient)) ListByService(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options 
*[PrivateLinkResourcesForMIPPolicySyncClientListByServiceOptions](#PrivateLinkResourcesForMIPPolicySyncClientListByServiceOptions)) ([PrivateLinkResourcesForMIPPolicySyncClientListByServiceResponse](#PrivateLinkResourcesForMIPPolicySyncClientListByServiceResponse), [error](/builtin#error)) ``` ListByService - Gets the private link resources that need to be created for a service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * options - PrivateLinkResourcesForMIPPolicySyncClientListByServiceOptions contains the optional parameters for the PrivateLinkResourcesForMIPPolicySyncClient.ListByService method. Example [¶](#example-PrivateLinkResourcesForMIPPolicySyncClient.ListByService) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/MIPPolicySyncPrivateLinkResourcesListByService.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateLinkResourcesForMIPPolicySyncClient().ListByService(ctx, "rgname", "service1", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. 
We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. // res.PrivateLinkResourceListResult = armm365securityandcompliance.PrivateLinkResourceListResult{ // Value: []*armm365securityandcompliance.PrivateLinkResource{ // { // Name: to.Ptr("fhir"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/privateLinkResources"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/service1/privateLinkResources/fhir"), // Properties: &armm365securityandcompliance.PrivateLinkResourceProperties{ // GroupID: to.Ptr("fhir"), // RequiredMembers: []*string{ // to.Ptr("fhir")}, // RequiredZoneNames: []*string{ // to.Ptr("privatelink.compliance.microsoft.com")}, // }, // }}, // } } ``` #### type [PrivateLinkResourcesForMIPPolicySyncClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L376) [¶](#PrivateLinkResourcesForMIPPolicySyncClientGetOptions) added in v0.2.0 ``` type PrivateLinkResourcesForMIPPolicySyncClientGetOptions struct { } ``` PrivateLinkResourcesForMIPPolicySyncClientGetOptions contains the optional parameters for the PrivateLinkResourcesForMIPPolicySyncClient.Get method. 
#### type [PrivateLinkResourcesForMIPPolicySyncClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L173) [¶](#PrivateLinkResourcesForMIPPolicySyncClientGetResponse) added in v0.2.0 ``` type PrivateLinkResourcesForMIPPolicySyncClientGetResponse struct { [PrivateLinkResource](#PrivateLinkResource) } ``` PrivateLinkResourcesForMIPPolicySyncClientGetResponse contains the response from method PrivateLinkResourcesForMIPPolicySyncClient.Get. #### type [PrivateLinkResourcesForMIPPolicySyncClientListByServiceOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L382) [¶](#PrivateLinkResourcesForMIPPolicySyncClientListByServiceOptions) added in v0.2.0 ``` type PrivateLinkResourcesForMIPPolicySyncClientListByServiceOptions struct { } ``` PrivateLinkResourcesForMIPPolicySyncClientListByServiceOptions contains the optional parameters for the PrivateLinkResourcesForMIPPolicySyncClient.ListByService method. #### type [PrivateLinkResourcesForMIPPolicySyncClientListByServiceResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L178) [¶](#PrivateLinkResourcesForMIPPolicySyncClientListByServiceResponse) added in v0.2.0 ``` type PrivateLinkResourcesForMIPPolicySyncClientListByServiceResponse struct { [PrivateLinkResourceListResult](#PrivateLinkResourceListResult) } ``` PrivateLinkResourcesForMIPPolicySyncClientListByServiceResponse contains the response from method PrivateLinkResourcesForMIPPolicySyncClient.ListByService. 
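The response types above, such as PrivateLinkResourcesForMIPPolicySyncClientListByServiceResponse, contain nothing but an embedded model struct. Go's field promotion is why callers can write `res.Value` directly instead of naming the embedded type. The sketch below demonstrates that mechanic with stand-in types defined here, shaped like (but not identical to) the generated ones.

```go
package main

import "fmt"

// Stand-ins for the generated model types.
type privateLinkResource struct {
	Name *string
}

type privateLinkResourceListResult struct {
	Value []*privateLinkResource
}

// listByServiceResponse embeds the list result, mirroring how the generated
// *ListByServiceResponse types embed PrivateLinkResourceListResult.
type listByServiceResponse struct {
	privateLinkResourceListResult
}

func main() {
	name := "fhir"
	res := listByServiceResponse{
		privateLinkResourceListResult{Value: []*privateLinkResource{{Name: &name}}},
	}
	// Field promotion: Value is reachable directly on the response,
	// no res.PrivateLinkResourceListResult.Value needed.
	for _, r := range res.Value {
		fmt.Println(*r.Name) // fhir
	}
}
```

The same promotion applies to the Get responses, where the embedded PrivateLinkResource exposes Name, Type, ID, and Properties directly on the response value.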
#### type [PrivateLinkResourcesForSCCPowershellClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresourcesforsccpowershell_client.go#L26) [¶](#PrivateLinkResourcesForSCCPowershellClient) ``` type PrivateLinkResourcesForSCCPowershellClient struct { // contains filtered or unexported fields } ``` PrivateLinkResourcesForSCCPowershellClient contains the methods for the PrivateLinkResourcesForSCCPowershell group. Don't use this type directly, use NewPrivateLinkResourcesForSCCPowershellClient() instead. #### func [NewPrivateLinkResourcesForSCCPowershellClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresourcesforsccpowershell_client.go#L35) [¶](#NewPrivateLinkResourcesForSCCPowershellClient) ``` func NewPrivateLinkResourcesForSCCPowershellClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[PrivateLinkResourcesForSCCPowershellClient](#PrivateLinkResourcesForSCCPowershellClient), [error](/builtin#error)) ``` NewPrivateLinkResourcesForSCCPowershellClient creates a new instance of PrivateLinkResourcesForSCCPowershellClient with the specified values. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. 
#### func (*PrivateLinkResourcesForSCCPowershellClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresourcesforsccpowershell_client.go#L56) [¶](#PrivateLinkResourcesForSCCPowershellClient.Get) ``` func (client *[PrivateLinkResourcesForSCCPowershellClient](#PrivateLinkResourcesForSCCPowershellClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), groupName [string](/builtin#string), options *[PrivateLinkResourcesForSCCPowershellClientGetOptions](#PrivateLinkResourcesForSCCPowershellClientGetOptions)) ([PrivateLinkResourcesForSCCPowershellClientGetResponse](#PrivateLinkResourcesForSCCPowershellClientGetResponse), [error](/builtin#error)) ``` Get - Gets a private link resource that needs to be created for a service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * groupName - The name of the private link resource group. * options - PrivateLinkResourcesForSCCPowershellClientGetOptions contains the optional parameters for the PrivateLinkResourcesForSCCPowershellClient.Get method. 
Example [¶](#example-PrivateLinkResourcesForSCCPowershellClient.Get) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SCCPowershellPrivateLinkResourceGet.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateLinkResourcesForSCCPowershellClient().Get(ctx, "rgname", "service1", "fhir", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkResource = armm365securityandcompliance.PrivateLinkResource{ // Name: to.Ptr("fhir"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/privateLinkResources"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/service1/privateLinkResources/fhir"), // Properties: &armm365securityandcompliance.PrivateLinkResourceProperties{ // GroupID: to.Ptr("fhir"), // RequiredMembers: []*string{ // to.Ptr("fhir")}, // RequiredZoneNames: []*string{ // to.Ptr("privatelink.security.microsoft.com")}, // }, // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // } } ``` #### func (*PrivateLinkResourcesForSCCPowershellClient) [ListByService](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresourcesforsccpowershell_client.go#L118) [¶](#PrivateLinkResourcesForSCCPowershellClient.ListByService) ``` func (client *[PrivateLinkResourcesForSCCPowershellClient](#PrivateLinkResourcesForSCCPowershellClient)) ListByService(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateLinkResourcesForSCCPowershellClientListByServiceOptions](#PrivateLinkResourcesForSCCPowershellClientListByServiceOptions)) 
([PrivateLinkResourcesForSCCPowershellClientListByServiceResponse](#PrivateLinkResourcesForSCCPowershellClientListByServiceResponse), [error](/builtin#error)) ``` ListByService - Gets the private link resources that need to be created for a service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * options - PrivateLinkResourcesForSCCPowershellClientListByServiceOptions contains the optional parameters for the PrivateLinkResourcesForSCCPowershellClient.ListByService method. Example [¶](#example-PrivateLinkResourcesForSCCPowershellClient.ListByService) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SCCPowershellPrivateLinkResourcesListByService.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateLinkResourcesForSCCPowershellClient().ListByService(ctx, "rgname", "service1", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. We use blank identifier for just demo purposes. 
_ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. // res.PrivateLinkResourceListResult = armm365securityandcompliance.PrivateLinkResourceListResult{ // Value: []*armm365securityandcompliance.PrivateLinkResource{ // { // Name: to.Ptr("fhir"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/privateLinkResources"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/service1/privateLinkResources/fhir"), // Properties: &armm365securityandcompliance.PrivateLinkResourceProperties{ // GroupID: to.Ptr("fhir"), // RequiredMembers: []*string{ // to.Ptr("fhir")}, // RequiredZoneNames: []*string{ // to.Ptr("privatelink.compliance.microsoft.com")}, // }, // }}, // } } ``` #### type [PrivateLinkResourcesForSCCPowershellClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L388) [¶](#PrivateLinkResourcesForSCCPowershellClientGetOptions) added in v0.2.0 ``` type PrivateLinkResourcesForSCCPowershellClientGetOptions struct { } ``` PrivateLinkResourcesForSCCPowershellClientGetOptions contains the optional parameters for the PrivateLinkResourcesForSCCPowershellClient.Get method. 
#### type [PrivateLinkResourcesForSCCPowershellClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L183) [¶](#PrivateLinkResourcesForSCCPowershellClientGetResponse) added in v0.2.0 ``` type PrivateLinkResourcesForSCCPowershellClientGetResponse struct { [PrivateLinkResource](#PrivateLinkResource) } ``` PrivateLinkResourcesForSCCPowershellClientGetResponse contains the response from method PrivateLinkResourcesForSCCPowershellClient.Get. #### type [PrivateLinkResourcesForSCCPowershellClientListByServiceOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L394) [¶](#PrivateLinkResourcesForSCCPowershellClientListByServiceOptions) added in v0.2.0 ``` type PrivateLinkResourcesForSCCPowershellClientListByServiceOptions struct { } ``` PrivateLinkResourcesForSCCPowershellClientListByServiceOptions contains the optional parameters for the PrivateLinkResourcesForSCCPowershellClient.ListByService method. #### type [PrivateLinkResourcesForSCCPowershellClientListByServiceResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L188) [¶](#PrivateLinkResourcesForSCCPowershellClientListByServiceResponse) added in v0.2.0 ``` type PrivateLinkResourcesForSCCPowershellClientListByServiceResponse struct { [PrivateLinkResourceListResult](#PrivateLinkResourceListResult) } ``` PrivateLinkResourcesForSCCPowershellClientListByServiceResponse contains the response from method PrivateLinkResourcesForSCCPowershellClient.ListByService. 
#### type [PrivateLinkResourcesSecClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresourcessec_client.go#L26) [¶](#PrivateLinkResourcesSecClient) ``` type PrivateLinkResourcesSecClient struct { // contains filtered or unexported fields } ``` PrivateLinkResourcesSecClient contains the methods for the PrivateLinkResourcesSec group. Don't use this type directly, use NewPrivateLinkResourcesSecClient() instead. #### func [NewPrivateLinkResourcesSecClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresourcessec_client.go#L35) [¶](#NewPrivateLinkResourcesSecClient) ``` func NewPrivateLinkResourcesSecClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[PrivateLinkResourcesSecClient](#PrivateLinkResourcesSecClient), [error](/builtin#error)) ``` NewPrivateLinkResourcesSecClient creates a new instance of PrivateLinkResourcesSecClient with the specified values. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. 
#### func (*PrivateLinkResourcesSecClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresourcessec_client.go#L56) [¶](#PrivateLinkResourcesSecClient.Get) ``` func (client *[PrivateLinkResourcesSecClient](#PrivateLinkResourcesSecClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), groupName [string](/builtin#string), options *[PrivateLinkResourcesSecClientGetOptions](#PrivateLinkResourcesSecClientGetOptions)) ([PrivateLinkResourcesSecClientGetResponse](#PrivateLinkResourcesSecClientGetResponse), [error](/builtin#error)) ``` Get - Gets a private link resource that needs to be created for a service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * groupName - The name of the private link resource group. * options - PrivateLinkResourcesSecClientGetOptions contains the optional parameters for the PrivateLinkResourcesSecClient.Get method. 
Example [¶](#example-PrivateLinkResourcesSecClient.Get) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SecurityCenterPrivateLinkResourceGet.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateLinkResourcesSecClient().Get(ctx, "rgname", "service1", "fhir", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkResource = armm365securityandcompliance.PrivateLinkResource{ // Name: to.Ptr("fhir"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/privateLinkResources"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/service1/privateLinkResources/fhir"), // Properties: &armm365securityandcompliance.PrivateLinkResourceProperties{ // GroupID: to.Ptr("fhir"), // RequiredMembers: []*string{ // to.Ptr("fhir")}, // RequiredZoneNames: []*string{ // to.Ptr("privatelink.security.microsoft.com")}, // }, // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // } } ``` #### func (*PrivateLinkResourcesSecClient) [ListByService](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkresourcessec_client.go#L118) [¶](#PrivateLinkResourcesSecClient.ListByService) ``` func (client *[PrivateLinkResourcesSecClient](#PrivateLinkResourcesSecClient)) ListByService(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateLinkResourcesSecClientListByServiceOptions](#PrivateLinkResourcesSecClientListByServiceOptions)) 
([PrivateLinkResourcesSecClientListByServiceResponse](#PrivateLinkResourcesSecClientListByServiceResponse), [error](/builtin#error)) ``` ListByService - Gets the private link resources that need to be created for a service. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * options - PrivateLinkResourcesSecClientListByServiceOptions contains the optional parameters for the PrivateLinkResourcesSecClient.ListByService method. Example [¶](#example-PrivateLinkResourcesSecClient.ListByService) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SecurityCenterPrivateLinkResourcesListByService.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateLinkResourcesSecClient().ListByService(ctx, "rgname", "service1", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. 
Please pay attention that all the values in the output are fake values for just demo purposes. // res.PrivateLinkResourceListResult = armm365securityandcompliance.PrivateLinkResourceListResult{ // Value: []*armm365securityandcompliance.PrivateLinkResource{ // { // Name: to.Ptr("fhir"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/privateLinkResources"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/service1/privateLinkResources/fhir"), // Properties: &armm365securityandcompliance.PrivateLinkResourceProperties{ // GroupID: to.Ptr("fhir"), // RequiredMembers: []*string{ // to.Ptr("fhir")}, // RequiredZoneNames: []*string{ // to.Ptr("privatelink.compliance.microsoft.com")}, // }, // }}, // } } ``` #### type [PrivateLinkResourcesSecClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L399) [¶](#PrivateLinkResourcesSecClientGetOptions) added in v0.2.0 ``` type PrivateLinkResourcesSecClientGetOptions struct { } ``` PrivateLinkResourcesSecClientGetOptions contains the optional parameters for the PrivateLinkResourcesSecClient.Get method. #### type [PrivateLinkResourcesSecClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L193) [¶](#PrivateLinkResourcesSecClientGetResponse) added in v0.2.0 ``` type PrivateLinkResourcesSecClientGetResponse struct { [PrivateLinkResource](#PrivateLinkResource) } ``` PrivateLinkResourcesSecClientGetResponse contains the response from method PrivateLinkResourcesSecClient.Get. 
#### type [PrivateLinkResourcesSecClientListByServiceOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L405) [¶](#PrivateLinkResourcesSecClientListByServiceOptions) added in v0.2.0 ``` type PrivateLinkResourcesSecClientListByServiceOptions struct { } ``` PrivateLinkResourcesSecClientListByServiceOptions contains the optional parameters for the PrivateLinkResourcesSecClient.ListByService method. #### type [PrivateLinkResourcesSecClientListByServiceResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L198) [¶](#PrivateLinkResourcesSecClientListByServiceResponse) added in v0.2.0 ``` type PrivateLinkResourcesSecClientListByServiceResponse struct { [PrivateLinkResourceListResult](#PrivateLinkResourceListResult) } ``` PrivateLinkResourcesSecClientListByServiceResponse contains the response from method PrivateLinkResourcesSecClient.ListByService. #### type [PrivateLinkServiceConnectionState](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L411) [¶](#PrivateLinkServiceConnectionState) ``` type PrivateLinkServiceConnectionState struct { // A message indicating if changes on the service provider require any updates on the consumer. ActionsRequired *[string](/builtin#string) // The reason for approval/rejection of the connection. Description *[string](/builtin#string) // Indicates whether the connection has been Approved/Rejected/Removed by the owner of the service. 
Status *[PrivateEndpointServiceConnectionStatus](#PrivateEndpointServiceConnectionStatus) } ``` PrivateLinkServiceConnectionState - A collection of information about the state of the connection between service consumer and provider. #### func (PrivateLinkServiceConnectionState) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L479) [¶](#PrivateLinkServiceConnectionState.MarshalJSON) added in v0.6.0 ``` func (p [PrivateLinkServiceConnectionState](#PrivateLinkServiceConnectionState)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type PrivateLinkServiceConnectionState. #### func (*PrivateLinkServiceConnectionState) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L488) [¶](#PrivateLinkServiceConnectionState.UnmarshalJSON) added in v0.6.0 ``` func (p *[PrivateLinkServiceConnectionState](#PrivateLinkServiceConnectionState)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type PrivateLinkServiceConnectionState. 
#### type [PrivateLinkServicesForEDMUploadClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforedmupload_client.go#L26) [¶](#PrivateLinkServicesForEDMUploadClient) ``` type PrivateLinkServicesForEDMUploadClient struct { // contains filtered or unexported fields } ``` PrivateLinkServicesForEDMUploadClient contains the methods for the PrivateLinkServicesForEDMUpload group. Don't use this type directly, use NewPrivateLinkServicesForEDMUploadClient() instead. #### func [NewPrivateLinkServicesForEDMUploadClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforedmupload_client.go#L35) [¶](#NewPrivateLinkServicesForEDMUploadClient) ``` func NewPrivateLinkServicesForEDMUploadClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[PrivateLinkServicesForEDMUploadClient](#PrivateLinkServicesForEDMUploadClient), [error](/builtin#error)) ``` NewPrivateLinkServicesForEDMUploadClient creates a new instance of PrivateLinkServicesForEDMUploadClient with the specified values. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. 
#### func (*PrivateLinkServicesForEDMUploadClient) [BeginCreateOrUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforedmupload_client.go#L56) [¶](#PrivateLinkServicesForEDMUploadClient.BeginCreateOrUpdate) ``` func (client *[PrivateLinkServicesForEDMUploadClient](#PrivateLinkServicesForEDMUploadClient)) BeginCreateOrUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateLinkServicesForEDMUploadDescription [PrivateLinkServicesForEDMUploadDescription](#PrivateLinkServicesForEDMUploadDescription), options *[PrivateLinkServicesForEDMUploadClientBeginCreateOrUpdateOptions](#PrivateLinkServicesForEDMUploadClientBeginCreateOrUpdateOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateLinkServicesForEDMUploadClientCreateOrUpdateResponse](#PrivateLinkServicesForEDMUploadClientCreateOrUpdateResponse)], [error](/builtin#error)) ``` BeginCreateOrUpdate - Create or update the metadata of a privateLinkServicesForEDMUpload instance. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateLinkServicesForEDMUploadDescription - The service instance metadata. * options - PrivateLinkServicesForEDMUploadClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateLinkServicesForEDMUploadClient.BeginCreateOrUpdate method. 
Example (CreateOrUpdateAServiceWithAllParameters) [¶](#example-PrivateLinkServicesForEDMUploadClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithAllParameters) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/EdmUploadServiceCreate.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForEDMUploadClient().BeginCreateOrUpdate(ctx, "rg1", "service1", armm365securityandcompliance.PrivateLinkServicesForEDMUploadDescription{ Identity: &armm365securityandcompliance.ServicesResourceIdentity{ Type: to.Ptr(armm365securityandcompliance.ManagedServiceIdentityTypeSystemAssigned), }, Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), Location: to.Ptr("westus2"), Tags: map[string]*string{}, Properties: &armm365securityandcompliance.ServicesProperties{ AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ { ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), }, { ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), }}, AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ Audience: to.Ptr("https://azurehealthcareapis.com"), Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), 
SmartProxyEnabled: to.Ptr(true), }, CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ AllowCredentials: to.Ptr(false), Headers: []*string{ to.Ptr("*")}, MaxAge: to.Ptr[int64](1440), Methods: []*string{ to.Ptr("DELETE"), to.Ptr("GET"), to.Ptr("OPTIONS"), to.Ptr("PATCH"), to.Ptr("POST"), to.Ptr("PUT")}, Origins: []*string{ to.Ptr("*")}, }, CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), OfferThroughput: to.Ptr[int64](1000), }, ExportConfiguration: &armm365securityandcompliance.ServiceExportConfigurationInfo{ StorageAccountName: to.Ptr("existingStorageAccount"), }, PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{}, PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to poll the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForEDMUploadDescription = armm365securityandcompliance.PrivateLinkServicesForEDMUploadDescription{ // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/service1"), // Identity: &armm365securityandcompliance.ServicesResourceIdentity{ // Type: to.Ptr(armm365securityandcompliance.ManagedServiceIdentityTypeSystemAssigned), // PrincipalID: to.Ptr("03fe6ae0-952c-4e4b-954b-cc0364dd252e"), // TenantID: to.Ptr("72f988bf-86f1-41af-91ab-2d8cd011db47"), // }, // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US 2"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: 
&armm365securityandcompliance.ServiceCorsConfigurationInfo{
// 		AllowCredentials: to.Ptr(false),
// 		Headers: []*string{
// 			to.Ptr("*")},
// 		MaxAge: to.Ptr[int64](1440),
// 		Methods: []*string{
// 			to.Ptr("DELETE"),
// 			to.Ptr("GET"),
// 			to.Ptr("OPTIONS"),
// 			to.Ptr("PATCH"),
// 			to.Ptr("POST"),
// 			to.Ptr("PUT")},
// 		Origins: []*string{
// 			to.Ptr("*")},
// 	},
// 	CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{
// 		KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"),
// 		OfferThroughput: to.Ptr[int64](1000),
// 	},
// 	ExportConfiguration: &armm365securityandcompliance.ServiceExportConfigurationInfo{
// 		StorageAccountName: to.Ptr("existingStorageAccount"),
// 	},
// 	PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{
// 	},
// 	ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded),
// 	PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled),
// },
// }
}
```

Example (CreateOrUpdateAServiceWithMinimumParameters) [¶](#example-PrivateLinkServicesForEDMUploadClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithMinimumParameters)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/EdmUploadServiceCreateMinimum.json>

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err :=
armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForEDMUploadClient().BeginCreateOrUpdate(ctx, "rg1", "service2", armm365securityandcompliance.PrivateLinkServicesForEDMUploadDescription{ Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), Location: to.Ptr("westus2"), Tags: map[string]*string{}, Properties: &armm365securityandcompliance.ServicesProperties{ AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ { ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), }}, }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForEDMUploadDescription = armm365securityandcompliance.PrivateLinkServicesForEDMUploadDescription{ // Name: to.Ptr("service2"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/service2"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("westus2"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(false), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // }, // Methods: []*string{ // }, // Origins: []*string{ // }, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: 
[]*armm365securityandcompliance.PrivateEndpointConnection{
// 	},
// 	ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded),
// 	PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled),
// },
// }
}
```

#### func (*PrivateLinkServicesForEDMUploadClient) [BeginUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforedmupload_client.go#L307) [¶](#PrivateLinkServicesForEDMUploadClient.BeginUpdate)

```
func (client *[PrivateLinkServicesForEDMUploadClient](#PrivateLinkServicesForEDMUploadClient)) BeginUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), servicePatchDescription [ServicesPatchDescription](#ServicesPatchDescription), options *[PrivateLinkServicesForEDMUploadClientBeginUpdateOptions](#PrivateLinkServicesForEDMUploadClientBeginUpdateOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateLinkServicesForEDMUploadClientUpdateResponse](#PrivateLinkServicesForEDMUploadClientUpdateResponse)], [error](/builtin#error))
```

BeginUpdate - Update the metadata of a privateLinkServicesForEDMUpload instance. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2021-03-25-preview

* resourceGroupName - The name of the resource group that contains the service instance.
* resourceName - The name of the service instance.
* servicePatchDescription - The service instance metadata and security metadata.
* options - PrivateLinkServicesForEDMUploadClientBeginUpdateOptions contains the optional parameters for the PrivateLinkServicesForEDMUploadClient.BeginUpdate method.
Example [¶](#example-PrivateLinkServicesForEDMUploadClient.BeginUpdate) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/EdmUploadServicePatch.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForEDMUploadClient().BeginUpdate(ctx, "rg1", "service1", armm365securityandcompliance.ServicesPatchDescription{ Tags: map[string]*string{ "tag1": to.Ptr("value1"), "tag2": to.Ptr("value2"), }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForEDMUploadDescription = armm365securityandcompliance.PrivateLinkServicesForEDMUploadDescription{ // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/service1"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // "tag1": to.Ptr("value1"), // "tag2": to.Ptr("value2"), // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // 
to.Ptr("PATCH"),
// 			to.Ptr("POST"),
// 			to.Ptr("PUT")},
// 		Origins: []*string{
// 			to.Ptr("*")},
// 	},
// 	CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{
// 		KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"),
// 		OfferThroughput: to.Ptr[int64](1000),
// 	},
// 	PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{
// 	},
// 	ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded),
// 	PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled),
// },
// }
}
```

#### func (*PrivateLinkServicesForEDMUploadClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforedmupload_client.go#L123) [¶](#PrivateLinkServicesForEDMUploadClient.Get)

```
func (client *[PrivateLinkServicesForEDMUploadClient](#PrivateLinkServicesForEDMUploadClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateLinkServicesForEDMUploadClientGetOptions](#PrivateLinkServicesForEDMUploadClientGetOptions)) ([PrivateLinkServicesForEDMUploadClientGetResponse](#PrivateLinkServicesForEDMUploadClientGetResponse), [error](/builtin#error))
```

Get - Get the metadata of a privateLinkServicesForEDMUpload resource. If the operation fails it returns an *azcore.ResponseError type.

Generated from API version 2021-03-25-preview

* resourceGroupName - The name of the resource group that contains the service instance.
* resourceName - The name of the service instance.
* options - PrivateLinkServicesForEDMUploadClientGetOptions contains the optional parameters for the PrivateLinkServicesForEDMUploadClient.Get method.
Example [¶](#example-PrivateLinkServicesForEDMUploadClient.Get) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/EdmUploadServiceGet.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateLinkServicesForEDMUploadClient().Get(ctx, "rg1", "service1", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForEDMUploadDescription = armm365securityandcompliance.PrivateLinkServicesForEDMUploadDescription{ // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/service1"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: 
[]*string{
// 			to.Ptr("*")},
// 	},
// 	CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{
// 		KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"),
// 		OfferThroughput: to.Ptr[int64](1000),
// 	},
// 	PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{
// 	},
// 	ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded),
// 	PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled),
// },
// }
}
```

#### func (*PrivateLinkServicesForEDMUploadClient) [NewListByResourceGroupPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforedmupload_client.go#L239) [¶](#PrivateLinkServicesForEDMUploadClient.NewListByResourceGroupPager) added in v0.4.0

```
func (client *[PrivateLinkServicesForEDMUploadClient](#PrivateLinkServicesForEDMUploadClient)) NewListByResourceGroupPager(resourceGroupName [string](/builtin#string), options *[PrivateLinkServicesForEDMUploadClientListByResourceGroupOptions](#PrivateLinkServicesForEDMUploadClientListByResourceGroupOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[PrivateLinkServicesForEDMUploadClientListByResourceGroupResponse](#PrivateLinkServicesForEDMUploadClientListByResourceGroupResponse)]
```

NewListByResourceGroupPager - Get all the service instances in a resource group.

Generated from API version 2021-03-25-preview

* resourceGroupName - The name of the resource group that contains the service instance.
* options - PrivateLinkServicesForEDMUploadClientListByResourceGroupOptions contains the optional parameters for the PrivateLinkServicesForEDMUploadClient.NewListByResourceGroupPager method.
Example [¶](#example-PrivateLinkServicesForEDMUploadClient.NewListByResourceGroupPager) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/EdmUploadServiceListByResourceGroup.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewPrivateLinkServicesForEDMUploadClient().NewListByResourceGroupPager("rgname", nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// page.PrivateLinkServicesForEDMUploadDescriptionListResult = armm365securityandcompliance.PrivateLinkServicesForEDMUploadDescriptionListResult{ // Value: []*armm365securityandcompliance.PrivateLinkServicesForEDMUploadDescription{ // { // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/dddb8dcb-effb-4290-bb47-ce1e8440c729"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("westus"), // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: 
to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded),
// 	PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled),
// 	},
// }},
// }
	}
}
```

#### func (*PrivateLinkServicesForEDMUploadClient) [NewListPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforedmupload_client.go#L178) [¶](#PrivateLinkServicesForEDMUploadClient.NewListPager) added in v0.4.0

```
func (client *[PrivateLinkServicesForEDMUploadClient](#PrivateLinkServicesForEDMUploadClient)) NewListPager(options *[PrivateLinkServicesForEDMUploadClientListOptions](#PrivateLinkServicesForEDMUploadClientListOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[PrivateLinkServicesForEDMUploadClientListResponse](#PrivateLinkServicesForEDMUploadClientListResponse)]
```

NewListPager - Get all the privateLinkServicesForEDMUpload instances in a subscription.

Generated from API version 2021-03-25-preview

* options - PrivateLinkServicesForEDMUploadClientListOptions contains the optional parameters for the PrivateLinkServicesForEDMUploadClient.NewListPager method.
Example [¶](#example-PrivateLinkServicesForEDMUploadClient.NewListPager) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/EdmUploadServiceList.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewPrivateLinkServicesForEDMUploadClient().NewListPager(nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// page.PrivateLinkServicesForEDMUploadDescriptionListResult = armm365securityandcompliance.PrivateLinkServicesForEDMUploadDescriptionListResult{ // Value: []*armm365securityandcompliance.PrivateLinkServicesForEDMUploadDescription{ // { // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload"), // Etag: to.Ptr("etag"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForEDMUpload/service1"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US"), // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: 
to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded),
// 	PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled),
// 	},
// }},
// }
	}
}
```

#### type [PrivateLinkServicesForEDMUploadClientBeginCreateOrUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L424) [¶](#PrivateLinkServicesForEDMUploadClientBeginCreateOrUpdateOptions) added in v0.2.0

```
type PrivateLinkServicesForEDMUploadClientBeginCreateOrUpdateOptions struct {
	// Resumes the LRO from the provided token.
	ResumeToken [string](/builtin#string)
}
```

PrivateLinkServicesForEDMUploadClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateLinkServicesForEDMUploadClient.BeginCreateOrUpdate method.

#### type [PrivateLinkServicesForEDMUploadClientBeginUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L431) [¶](#PrivateLinkServicesForEDMUploadClientBeginUpdateOptions) added in v0.2.0

```
type PrivateLinkServicesForEDMUploadClientBeginUpdateOptions struct {
	// Resumes the LRO from the provided token.
	ResumeToken [string](/builtin#string)
}
```

PrivateLinkServicesForEDMUploadClientBeginUpdateOptions contains the optional parameters for the PrivateLinkServicesForEDMUploadClient.BeginUpdate method.
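The `ResumeToken` field in both option types supports the azcore long-running-operation resume pattern: capture an opaque token from a live poller with its `ResumeToken()` method, persist it, and pass it back in the options struct from another process to rehydrate the poller instead of re-issuing the request. Since the real flow needs ARM credentials, here is a stdlib-only sketch of the same save/resume protocol; `fakePoller`, `options`, and `beginUpdate` are hypothetical stand-ins for `runtime.Poller` and the SDK method, not SDK code:

```go
package main

import "fmt"

// options mirrors the shape of the Begin*Options structs: an empty
// ResumeToken starts a new operation, a non-empty one resumes an old one.
type options struct {
	ResumeToken string
}

// fakePoller is a hypothetical stand-in for runtime.Poller.
type fakePoller struct {
	token string
}

// ResumeToken returns an opaque string identifying the in-flight operation.
func (p *fakePoller) ResumeToken() (string, error) { return p.token, nil }

// beginUpdate starts a new LRO, or rehydrates one when opts.ResumeToken is set.
func beginUpdate(opts *options) *fakePoller {
	if opts != nil && opts.ResumeToken != "" {
		return &fakePoller{token: opts.ResumeToken} // resume the existing operation
	}
	return &fakePoller{token: "lro-42"} // start a fresh operation
}

func main() {
	// Process A: start the operation and persist the token.
	poller := beginUpdate(nil)
	token, _ := poller.ResumeToken()

	// Process B: resume from the persisted token.
	resumed := beginUpdate(&options{ResumeToken: token})
	fmt.Println(resumed.token == token) // the rehydrated poller tracks the same LRO
}
```

With the real client the shape is the same: save `poller.ResumeToken()` before the process exits, then call `BeginUpdate` again with that token in `PrivateLinkServicesForEDMUploadClientBeginUpdateOptions`.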
#### type [PrivateLinkServicesForEDMUploadClientCreateOrUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L203) [¶](#PrivateLinkServicesForEDMUploadClientCreateOrUpdateResponse) added in v0.2.0

```
type PrivateLinkServicesForEDMUploadClientCreateOrUpdateResponse struct {
	[PrivateLinkServicesForEDMUploadDescription](#PrivateLinkServicesForEDMUploadDescription)
}
```

PrivateLinkServicesForEDMUploadClientCreateOrUpdateResponse contains the response from method PrivateLinkServicesForEDMUploadClient.BeginCreateOrUpdate.

#### type [PrivateLinkServicesForEDMUploadClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L438) [¶](#PrivateLinkServicesForEDMUploadClientGetOptions) added in v0.2.0

```
type PrivateLinkServicesForEDMUploadClientGetOptions struct {
}
```

PrivateLinkServicesForEDMUploadClientGetOptions contains the optional parameters for the PrivateLinkServicesForEDMUploadClient.Get method.

#### type [PrivateLinkServicesForEDMUploadClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L208) [¶](#PrivateLinkServicesForEDMUploadClientGetResponse) added in v0.2.0

```
type PrivateLinkServicesForEDMUploadClientGetResponse struct {
	[PrivateLinkServicesForEDMUploadDescription](#PrivateLinkServicesForEDMUploadDescription)
}
```

PrivateLinkServicesForEDMUploadClientGetResponse contains the response from method PrivateLinkServicesForEDMUploadClient.Get.
#### type [PrivateLinkServicesForEDMUploadClientListByResourceGroupOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L444) [¶](#PrivateLinkServicesForEDMUploadClientListByResourceGroupOptions) added in v0.2.0

```
type PrivateLinkServicesForEDMUploadClientListByResourceGroupOptions struct {
}
```

PrivateLinkServicesForEDMUploadClientListByResourceGroupOptions contains the optional parameters for the PrivateLinkServicesForEDMUploadClient.NewListByResourceGroupPager method.

#### type [PrivateLinkServicesForEDMUploadClientListByResourceGroupResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L213) [¶](#PrivateLinkServicesForEDMUploadClientListByResourceGroupResponse) added in v0.2.0

```
type PrivateLinkServicesForEDMUploadClientListByResourceGroupResponse struct {
	[PrivateLinkServicesForEDMUploadDescriptionListResult](#PrivateLinkServicesForEDMUploadDescriptionListResult)
}
```

PrivateLinkServicesForEDMUploadClientListByResourceGroupResponse contains the response from method PrivateLinkServicesForEDMUploadClient.NewListByResourceGroupPager.

#### type [PrivateLinkServicesForEDMUploadClientListOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L450) [¶](#PrivateLinkServicesForEDMUploadClientListOptions) added in v0.2.0

```
type PrivateLinkServicesForEDMUploadClientListOptions struct {
}
```

PrivateLinkServicesForEDMUploadClientListOptions contains the optional parameters for the PrivateLinkServicesForEDMUploadClient.NewListPager method.
#### type [PrivateLinkServicesForEDMUploadClientListResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L218) [¶](#PrivateLinkServicesForEDMUploadClientListResponse) added in v0.2.0

```
type PrivateLinkServicesForEDMUploadClientListResponse struct {
	[PrivateLinkServicesForEDMUploadDescriptionListResult](#PrivateLinkServicesForEDMUploadDescriptionListResult)
}
```

PrivateLinkServicesForEDMUploadClientListResponse contains the response from method PrivateLinkServicesForEDMUploadClient.NewListPager.

#### type [PrivateLinkServicesForEDMUploadClientUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L223) [¶](#PrivateLinkServicesForEDMUploadClientUpdateResponse) added in v0.2.0

```
type PrivateLinkServicesForEDMUploadClientUpdateResponse struct {
	[PrivateLinkServicesForEDMUploadDescription](#PrivateLinkServicesForEDMUploadDescription)
}
```

PrivateLinkServicesForEDMUploadClientUpdateResponse contains the response from method PrivateLinkServicesForEDMUploadClient.BeginUpdate.

#### type [PrivateLinkServicesForEDMUploadDescription](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L455) [¶](#PrivateLinkServicesForEDMUploadDescription)

```
type PrivateLinkServicesForEDMUploadDescription struct {
	// REQUIRED; The kind of the service.
	Kind *[Kind](#Kind)

	// REQUIRED; The resource location.
	Location *[string](/builtin#string)

	// An etag associated with the resource, used for optimistic concurrency when editing it.
	Etag *[string](/builtin#string)

	// Setting indicating whether the service has a managed identity associated with it.
	Identity *[ServicesResourceIdentity](#ServicesResourceIdentity)

	// The common properties of a service.
	Properties *[ServicesProperties](#ServicesProperties)

	// The resource tags.
	Tags map[[string](/builtin#string)]*[string](/builtin#string)

	// READ-ONLY; The resource identifier.
	ID *[string](/builtin#string)

	// READ-ONLY; The resource name.
	Name *[string](/builtin#string)

	// READ-ONLY; Required property for system data
	SystemData *[SystemData](#SystemData)

	// READ-ONLY; The resource type.
	Type *[string](/builtin#string)
}
```

PrivateLinkServicesForEDMUploadDescription - The description of the service.

#### func (PrivateLinkServicesForEDMUploadDescription) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L514) [¶](#PrivateLinkServicesForEDMUploadDescription.MarshalJSON)

```
func (p [PrivateLinkServicesForEDMUploadDescription](#PrivateLinkServicesForEDMUploadDescription)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```

MarshalJSON implements the json.Marshaller interface for type PrivateLinkServicesForEDMUploadDescription.

#### func (*PrivateLinkServicesForEDMUploadDescription) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L530) [¶](#PrivateLinkServicesForEDMUploadDescription.UnmarshalJSON) added in v0.6.0

```
func (p *[PrivateLinkServicesForEDMUploadDescription](#PrivateLinkServicesForEDMUploadDescription)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON implements the json.Unmarshaller interface for type PrivateLinkServicesForEDMUploadDescription.

#### type [PrivateLinkServicesForEDMUploadDescriptionListResult](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L488) [¶](#PrivateLinkServicesForEDMUploadDescriptionListResult)

```
type PrivateLinkServicesForEDMUploadDescriptionListResult struct {
	// A list of service description objects.
	Value []*[PrivateLinkServicesForEDMUploadDescription](#PrivateLinkServicesForEDMUploadDescription)

	// READ-ONLY; The link used to get the next page of service description objects.
	NextLink *[string](/builtin#string)
}
```

PrivateLinkServicesForEDMUploadDescriptionListResult - A list of service description objects with a next link.
#### func (PrivateLinkServicesForEDMUploadDescriptionListResult) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L577) [¶](#PrivateLinkServicesForEDMUploadDescriptionListResult.MarshalJSON) ``` func (p [PrivateLinkServicesForEDMUploadDescriptionListResult](#PrivateLinkServicesForEDMUploadDescriptionListResult)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type PrivateLinkServicesForEDMUploadDescriptionListResult. #### func (*PrivateLinkServicesForEDMUploadDescriptionListResult) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L585) [¶](#PrivateLinkServicesForEDMUploadDescriptionListResult.UnmarshalJSON) added in v0.6.0 ``` func (p *[PrivateLinkServicesForEDMUploadDescriptionListResult](#PrivateLinkServicesForEDMUploadDescriptionListResult)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type PrivateLinkServicesForEDMUploadDescriptionListResult. 
#### type [PrivateLinkServicesForM365ComplianceCenterClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesform365compliancecenter_client.go#L26) [¶](#PrivateLinkServicesForM365ComplianceCenterClient) ``` type PrivateLinkServicesForM365ComplianceCenterClient struct { // contains filtered or unexported fields } ``` PrivateLinkServicesForM365ComplianceCenterClient contains the methods for the PrivateLinkServicesForM365ComplianceCenter group. Don't use this type directly, use NewPrivateLinkServicesForM365ComplianceCenterClient() instead. #### func [NewPrivateLinkServicesForM365ComplianceCenterClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesform365compliancecenter_client.go#L35) [¶](#NewPrivateLinkServicesForM365ComplianceCenterClient) ``` func NewPrivateLinkServicesForM365ComplianceCenterClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[PrivateLinkServicesForM365ComplianceCenterClient](#PrivateLinkServicesForM365ComplianceCenterClient), [error](/builtin#error)) ``` NewPrivateLinkServicesForM365ComplianceCenterClient creates a new instance of PrivateLinkServicesForM365ComplianceCenterClient with the specified values. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. 
#### func (*PrivateLinkServicesForM365ComplianceCenterClient) [BeginCreateOrUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesform365compliancecenter_client.go#L56) [¶](#PrivateLinkServicesForM365ComplianceCenterClient.BeginCreateOrUpdate) ``` func (client *[PrivateLinkServicesForM365ComplianceCenterClient](#PrivateLinkServicesForM365ComplianceCenterClient)) BeginCreateOrUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateLinkServicesForM365ComplianceCenterDescription [PrivateLinkServicesForM365ComplianceCenterDescription](#PrivateLinkServicesForM365ComplianceCenterDescription), options *[PrivateLinkServicesForM365ComplianceCenterClientBeginCreateOrUpdateOptions](#PrivateLinkServicesForM365ComplianceCenterClientBeginCreateOrUpdateOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateLinkServicesForM365ComplianceCenterClientCreateOrUpdateResponse](#PrivateLinkServicesForM365ComplianceCenterClientCreateOrUpdateResponse)], [error](/builtin#error)) ``` BeginCreateOrUpdate - Create or update the metadata of a privateLinkServicesForM365ComplianceCenter instance. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateLinkServicesForM365ComplianceCenterDescription - The service instance metadata. 
* options - PrivateLinkServicesForM365ComplianceCenterClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateLinkServicesForM365ComplianceCenterClient.BeginCreateOrUpdate method. Example (CreateOrUpdateAServiceWithAllParameters) [¶](#example-PrivateLinkServicesForM365ComplianceCenterClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithAllParameters) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ComplianceCenterServiceCreate.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForM365ComplianceCenterClient().BeginCreateOrUpdate(ctx, "rg1", "service1", armm365securityandcompliance.PrivateLinkServicesForM365ComplianceCenterDescription{ Identity: &armm365securityandcompliance.ServicesResourceIdentity{ Type: to.Ptr(armm365securityandcompliance.ManagedServiceIdentityTypeSystemAssigned), }, Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), Location: to.Ptr("westus2"), Tags: map[string]*string{}, Properties: &armm365securityandcompliance.ServicesProperties{ AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ { ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), }, { ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), }}, 
AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ Audience: to.Ptr("https://azurehealthcareapis.com"), Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), SmartProxyEnabled: to.Ptr(true), }, CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ AllowCredentials: to.Ptr(false), Headers: []*string{ to.Ptr("*")}, MaxAge: to.Ptr[int64](1440), Methods: []*string{ to.Ptr("DELETE"), to.Ptr("GET"), to.Ptr("OPTIONS"), to.Ptr("PATCH"), to.Ptr("POST"), to.Ptr("PUT")}, Origins: []*string{ to.Ptr("*")}, }, CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), OfferThroughput: to.Ptr[int64](1000), }, ExportConfiguration: &armm365securityandcompliance.ServiceExportConfigurationInfo{ StorageAccountName: to.Ptr("existingStorageAccount"), }, PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{}, PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForM365ComplianceCenterDescription = armm365securityandcompliance.PrivateLinkServicesForM365ComplianceCenterDescription{ // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/service1"), // Identity: &armm365securityandcompliance.ServicesResourceIdentity{ // Type: to.Ptr(armm365securityandcompliance.ManagedServiceIdentityTypeSystemAssigned), // PrincipalID: to.Ptr("03fe6ae0-952c-4e4b-954b-cc0364dd252e"), // TenantID: to.Ptr("72f988bf-86f1-41af-91ab-2d8cd011db47"), // }, // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US 2"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // 
CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // ExportConfiguration: &armm365securityandcompliance.ServiceExportConfigurationInfo{ // StorageAccountName: to.Ptr("existingStorageAccount"), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // } } ``` ``` Output: ``` Share Format Run Example (CreateOrUpdateAServiceWithMinimumParameters) [¶](#example-PrivateLinkServicesForM365ComplianceCenterClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithMinimumParameters) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ComplianceCenterServiceCreateMinimum.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := 
armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForM365ComplianceCenterClient().BeginCreateOrUpdate(ctx, "rg1", "service2", armm365securityandcompliance.PrivateLinkServicesForM365ComplianceCenterDescription{ Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), Location: to.Ptr("westus2"), Tags: map[string]*string{}, Properties: &armm365securityandcompliance.ServicesProperties{ AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ { ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), }}, }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForM365ComplianceCenterDescription = armm365securityandcompliance.PrivateLinkServicesForM365ComplianceCenterDescription{ // Name: to.Ptr("service2"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/service2"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("westus2"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(false), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // }, // Methods: []*string{ // }, // Origins: []*string{ // }, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // OfferThroughput: to.Ptr[int64](1000), // }, // 
PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // } } ``` ``` Output: ``` #### func (*PrivateLinkServicesForM365ComplianceCenterClient) [BeginDelete](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesform365compliancecenter_client.go#L123) [¶](#PrivateLinkServicesForM365ComplianceCenterClient.BeginDelete) ``` func (client *[PrivateLinkServicesForM365ComplianceCenterClient](#PrivateLinkServicesForM365ComplianceCenterClient)) BeginDelete(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateLinkServicesForM365ComplianceCenterClientBeginDeleteOptions](#PrivateLinkServicesForM365ComplianceCenterClientBeginDeleteOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateLinkServicesForM365ComplianceCenterClientDeleteResponse](#PrivateLinkServicesForM365ComplianceCenterClientDeleteResponse)], [error](/builtin#error)) ``` BeginDelete - Delete a service instance. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * options - PrivateLinkServicesForM365ComplianceCenterClientBeginDeleteOptions contains the optional parameters for the PrivateLinkServicesForM365ComplianceCenterClient.BeginDelete method.
Example [¶](#example-PrivateLinkServicesForM365ComplianceCenterClient.BeginDelete) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ComplianceCenterServiceDelete.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForM365ComplianceCenterClient().BeginDelete(ctx, "rg1", "service1", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } _, err = poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } } ``` ``` Output: ``` Share Format Run #### func (*PrivateLinkServicesForM365ComplianceCenterClient) [BeginUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesform365compliancecenter_client.go#L374) [¶](#PrivateLinkServicesForM365ComplianceCenterClient.BeginUpdate) ``` func (client *[PrivateLinkServicesForM365ComplianceCenterClient](#PrivateLinkServicesForM365ComplianceCenterClient)) BeginUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), servicePatchDescription 
[ServicesPatchDescription](#ServicesPatchDescription), options *[PrivateLinkServicesForM365ComplianceCenterClientBeginUpdateOptions](#PrivateLinkServicesForM365ComplianceCenterClientBeginUpdateOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateLinkServicesForM365ComplianceCenterClientUpdateResponse](#PrivateLinkServicesForM365ComplianceCenterClientUpdateResponse)], [error](/builtin#error)) ``` BeginUpdate - Update the metadata of a privateLinkServicesForM365ComplianceCenter instance. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * servicePatchDescription - The service instance metadata and security metadata. * options - PrivateLinkServicesForM365ComplianceCenterClientBeginUpdateOptions contains the optional parameters for the PrivateLinkServicesForM365ComplianceCenterClient.BeginUpdate method. 
Example [¶](#example-PrivateLinkServicesForM365ComplianceCenterClient.BeginUpdate) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ComplianceCenterServicePatch.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForM365ComplianceCenterClient().BeginUpdate(ctx, "rg1", "service1", armm365securityandcompliance.ServicesPatchDescription{ Tags: map[string]*string{ "tag1": to.Ptr("value1"), "tag2": to.Ptr("value2"), }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForM365ComplianceCenterDescription = armm365securityandcompliance.PrivateLinkServicesForM365ComplianceCenterDescription{ // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/service1"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // "tag1": to.Ptr("value1"), // "tag2": to.Ptr("value2"), // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // 
to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // } } ``` ``` Output: ``` Share Format Run #### func (*PrivateLinkServicesForM365ComplianceCenterClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesform365compliancecenter_client.go#L190) [¶](#PrivateLinkServicesForM365ComplianceCenterClient.Get) ``` func (client *[PrivateLinkServicesForM365ComplianceCenterClient](#PrivateLinkServicesForM365ComplianceCenterClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateLinkServicesForM365ComplianceCenterClientGetOptions](#PrivateLinkServicesForM365ComplianceCenterClientGetOptions)) ([PrivateLinkServicesForM365ComplianceCenterClientGetResponse](#PrivateLinkServicesForM365ComplianceCenterClientGetResponse), [error](/builtin#error)) ``` Get - Get the metadata of a privateLinkServicesForM365ComplianceCenter resource. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. 
* options - PrivateLinkServicesForM365ComplianceCenterClientGetOptions contains the optional parameters for the PrivateLinkServicesForM365ComplianceCenterClient.Get method. Example [¶](#example-PrivateLinkServicesForM365ComplianceCenterClient.Get) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ComplianceCenterServiceGet.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateLinkServicesForM365ComplianceCenterClient().Get(ctx, "rg1", "service1", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForM365ComplianceCenterDescription = armm365securityandcompliance.PrivateLinkServicesForM365ComplianceCenterDescription{ // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/service1"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // 
to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // } } ``` #### func (*PrivateLinkServicesForM365ComplianceCenterClient) [NewListByResourceGroupPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesform365compliancecenter_client.go#L306) [¶](#PrivateLinkServicesForM365ComplianceCenterClient.NewListByResourceGroupPager) added in v0.4.0 ``` func (client *[PrivateLinkServicesForM365ComplianceCenterClient](#PrivateLinkServicesForM365ComplianceCenterClient)) NewListByResourceGroupPager(resourceGroupName [string](/builtin#string), options *[PrivateLinkServicesForM365ComplianceCenterClientListByResourceGroupOptions](#PrivateLinkServicesForM365ComplianceCenterClientListByResourceGroupOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[PrivateLinkServicesForM365ComplianceCenterClientListByResourceGroupResponse](#PrivateLinkServicesForM365ComplianceCenterClientListByResourceGroupResponse)] ``` NewListByResourceGroupPager - Get all the service instances in a resource group. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. 
* options - PrivateLinkServicesForM365ComplianceCenterClientListByResourceGroupOptions contains the optional parameters for the PrivateLinkServicesForM365ComplianceCenterClient.NewListByResourceGroupPager method. Example [¶](#example-PrivateLinkServicesForM365ComplianceCenterClient.NewListByResourceGroupPager) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ComplianceCenterServiceListByResourceGroup.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewPrivateLinkServicesForM365ComplianceCenterClient().NewListByResourceGroupPager("rgname", nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// page.PrivateLinkServicesForM365ComplianceCenterDescriptionListResult = armm365securityandcompliance.PrivateLinkServicesForM365ComplianceCenterDescriptionListResult{ // Value: []*armm365securityandcompliance.PrivateLinkServicesForM365ComplianceCenterDescription{ // { // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/dddb8dcb-effb-4290-bb47-ce1e8440c729"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("westus"), // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ 
// }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // }}, // } } } ``` #### func (*PrivateLinkServicesForM365ComplianceCenterClient) [NewListPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesform365compliancecenter_client.go#L245) [¶](#PrivateLinkServicesForM365ComplianceCenterClient.NewListPager) added in v0.4.0 ``` func (client *[PrivateLinkServicesForM365ComplianceCenterClient](#PrivateLinkServicesForM365ComplianceCenterClient)) NewListPager(options *[PrivateLinkServicesForM365ComplianceCenterClientListOptions](#PrivateLinkServicesForM365ComplianceCenterClientListOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[PrivateLinkServicesForM365ComplianceCenterClientListResponse](#PrivateLinkServicesForM365ComplianceCenterClientListResponse)] ``` NewListPager - Get all the privateLinkServicesForM365ComplianceCenter instances in a subscription. Generated from API version 2021-03-25-preview * options - PrivateLinkServicesForM365ComplianceCenterClientListOptions contains the optional parameters for the PrivateLinkServicesForM365ComplianceCenterClient.NewListPager method. 
Example [¶](#example-PrivateLinkServicesForM365ComplianceCenterClient.NewListPager) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ComplianceCenterServiceList.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewPrivateLinkServicesForM365ComplianceCenterClient().NewListPager(nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// page.PrivateLinkServicesForM365ComplianceCenterDescriptionListResult = armm365securityandcompliance.PrivateLinkServicesForM365ComplianceCenterDescriptionListResult{ // Value: []*armm365securityandcompliance.PrivateLinkServicesForM365ComplianceCenterDescription{ // { // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter"), // Etag: to.Ptr("etag"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365ComplianceCenter/service1"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US"), // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: 
to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // }}, // } } } ``` #### type [PrivateLinkServicesForM365ComplianceCenterClientBeginCreateOrUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L498) [¶](#PrivateLinkServicesForM365ComplianceCenterClientBeginCreateOrUpdateOptions) added in v0.2.0 ``` type PrivateLinkServicesForM365ComplianceCenterClientBeginCreateOrUpdateOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateLinkServicesForM365ComplianceCenterClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateLinkServicesForM365ComplianceCenterClient.BeginCreateOrUpdate method. #### type [PrivateLinkServicesForM365ComplianceCenterClientBeginDeleteOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L505) [¶](#PrivateLinkServicesForM365ComplianceCenterClientBeginDeleteOptions) added in v0.2.0 ``` type PrivateLinkServicesForM365ComplianceCenterClientBeginDeleteOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateLinkServicesForM365ComplianceCenterClientBeginDeleteOptions contains the optional parameters for the PrivateLinkServicesForM365ComplianceCenterClient.BeginDelete method. 
#### type [PrivateLinkServicesForM365ComplianceCenterClientBeginUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L512) [¶](#PrivateLinkServicesForM365ComplianceCenterClientBeginUpdateOptions) added in v0.2.0 ``` type PrivateLinkServicesForM365ComplianceCenterClientBeginUpdateOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateLinkServicesForM365ComplianceCenterClientBeginUpdateOptions contains the optional parameters for the PrivateLinkServicesForM365ComplianceCenterClient.BeginUpdate method. #### type [PrivateLinkServicesForM365ComplianceCenterClientCreateOrUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L228) [¶](#PrivateLinkServicesForM365ComplianceCenterClientCreateOrUpdateResponse) added in v0.2.0 ``` type PrivateLinkServicesForM365ComplianceCenterClientCreateOrUpdateResponse struct { [PrivateLinkServicesForM365ComplianceCenterDescription](#PrivateLinkServicesForM365ComplianceCenterDescription) } ``` PrivateLinkServicesForM365ComplianceCenterClientCreateOrUpdateResponse contains the response from method PrivateLinkServicesForM365ComplianceCenterClient.BeginCreateOrUpdate. 
#### type [PrivateLinkServicesForM365ComplianceCenterClientDeleteResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L233) [¶](#PrivateLinkServicesForM365ComplianceCenterClientDeleteResponse) added in v0.2.0 ``` type PrivateLinkServicesForM365ComplianceCenterClientDeleteResponse struct { } ``` PrivateLinkServicesForM365ComplianceCenterClientDeleteResponse contains the response from method PrivateLinkServicesForM365ComplianceCenterClient.BeginDelete. #### type [PrivateLinkServicesForM365ComplianceCenterClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L519) [¶](#PrivateLinkServicesForM365ComplianceCenterClientGetOptions) added in v0.2.0 ``` type PrivateLinkServicesForM365ComplianceCenterClientGetOptions struct { } ``` PrivateLinkServicesForM365ComplianceCenterClientGetOptions contains the optional parameters for the PrivateLinkServicesForM365ComplianceCenterClient.Get method. #### type [PrivateLinkServicesForM365ComplianceCenterClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L238) [¶](#PrivateLinkServicesForM365ComplianceCenterClientGetResponse) added in v0.2.0 ``` type PrivateLinkServicesForM365ComplianceCenterClientGetResponse struct { [PrivateLinkServicesForM365ComplianceCenterDescription](#PrivateLinkServicesForM365ComplianceCenterDescription) } ``` PrivateLinkServicesForM365ComplianceCenterClientGetResponse contains the response from method PrivateLinkServicesForM365ComplianceCenterClient.Get. 
#### type [PrivateLinkServicesForM365ComplianceCenterClientListByResourceGroupOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L525) [¶](#PrivateLinkServicesForM365ComplianceCenterClientListByResourceGroupOptions) added in v0.2.0 ``` type PrivateLinkServicesForM365ComplianceCenterClientListByResourceGroupOptions struct { } ``` PrivateLinkServicesForM365ComplianceCenterClientListByResourceGroupOptions contains the optional parameters for the PrivateLinkServicesForM365ComplianceCenterClient.NewListByResourceGroupPager method. #### type [PrivateLinkServicesForM365ComplianceCenterClientListByResourceGroupResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L243) [¶](#PrivateLinkServicesForM365ComplianceCenterClientListByResourceGroupResponse) added in v0.2.0 ``` type PrivateLinkServicesForM365ComplianceCenterClientListByResourceGroupResponse struct { [PrivateLinkServicesForM365ComplianceCenterDescriptionListResult](#PrivateLinkServicesForM365ComplianceCenterDescriptionListResult) } ``` PrivateLinkServicesForM365ComplianceCenterClientListByResourceGroupResponse contains the response from method PrivateLinkServicesForM365ComplianceCenterClient.NewListByResourceGroupPager. 
#### type [PrivateLinkServicesForM365ComplianceCenterClientListOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L531) [¶](#PrivateLinkServicesForM365ComplianceCenterClientListOptions) added in v0.2.0 ``` type PrivateLinkServicesForM365ComplianceCenterClientListOptions struct { } ``` PrivateLinkServicesForM365ComplianceCenterClientListOptions contains the optional parameters for the PrivateLinkServicesForM365ComplianceCenterClient.NewListPager method. #### type [PrivateLinkServicesForM365ComplianceCenterClientListResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L248) [¶](#PrivateLinkServicesForM365ComplianceCenterClientListResponse) added in v0.2.0 ``` type PrivateLinkServicesForM365ComplianceCenterClientListResponse struct { [PrivateLinkServicesForM365ComplianceCenterDescriptionListResult](#PrivateLinkServicesForM365ComplianceCenterDescriptionListResult) } ``` PrivateLinkServicesForM365ComplianceCenterClientListResponse contains the response from method PrivateLinkServicesForM365ComplianceCenterClient.NewListPager. 
#### type [PrivateLinkServicesForM365ComplianceCenterClientUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L253) [¶](#PrivateLinkServicesForM365ComplianceCenterClientUpdateResponse) added in v0.2.0 ``` type PrivateLinkServicesForM365ComplianceCenterClientUpdateResponse struct { [PrivateLinkServicesForM365ComplianceCenterDescription](#PrivateLinkServicesForM365ComplianceCenterDescription) } ``` PrivateLinkServicesForM365ComplianceCenterClientUpdateResponse contains the response from method PrivateLinkServicesForM365ComplianceCenterClient.BeginUpdate. #### type [PrivateLinkServicesForM365ComplianceCenterDescription](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L536) [¶](#PrivateLinkServicesForM365ComplianceCenterDescription) ``` type PrivateLinkServicesForM365ComplianceCenterDescription struct { // REQUIRED; The kind of the service. Kind *[Kind](#Kind) // REQUIRED; The resource location. Location *[string](/builtin#string) // An etag associated with the resource, used for optimistic concurrency when editing it. Etag *[string](/builtin#string) // Setting indicating whether the service has a managed identity associated with it. Identity *[ServicesResourceIdentity](#ServicesResourceIdentity) // The common properties of a service. Properties *[ServicesProperties](#ServicesProperties) // The resource tags. Tags map[[string](/builtin#string)]*[string](/builtin#string) // READ-ONLY; The resource identifier. ID *[string](/builtin#string) // READ-ONLY; The resource name. Name *[string](/builtin#string) // READ-ONLY; Required property for system data SystemData *[SystemData](#SystemData) // READ-ONLY; The resource type. 
Type *[string](/builtin#string) } ``` PrivateLinkServicesForM365ComplianceCenterDescription - The description of the service. #### func (PrivateLinkServicesForM365ComplianceCenterDescription) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L608) [¶](#PrivateLinkServicesForM365ComplianceCenterDescription.MarshalJSON) ``` func (p [PrivateLinkServicesForM365ComplianceCenterDescription](#PrivateLinkServicesForM365ComplianceCenterDescription)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type PrivateLinkServicesForM365ComplianceCenterDescription. #### func (*PrivateLinkServicesForM365ComplianceCenterDescription) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L624) [¶](#PrivateLinkServicesForM365ComplianceCenterDescription.UnmarshalJSON) added in v0.6.0 ``` func (p *[PrivateLinkServicesForM365ComplianceCenterDescription](#PrivateLinkServicesForM365ComplianceCenterDescription)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type PrivateLinkServicesForM365ComplianceCenterDescription. 
#### type [PrivateLinkServicesForM365ComplianceCenterDescriptionListResult](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L569) [¶](#PrivateLinkServicesForM365ComplianceCenterDescriptionListResult) ``` type PrivateLinkServicesForM365ComplianceCenterDescriptionListResult struct { // A list of service description objects. Value []*[PrivateLinkServicesForM365ComplianceCenterDescription](#PrivateLinkServicesForM365ComplianceCenterDescription) // READ-ONLY; The link used to get the next page of service description objects. NextLink *[string](/builtin#string) } ``` PrivateLinkServicesForM365ComplianceCenterDescriptionListResult - A list of service description objects with a next link. #### func (PrivateLinkServicesForM365ComplianceCenterDescriptionListResult) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L671) [¶](#PrivateLinkServicesForM365ComplianceCenterDescriptionListResult.MarshalJSON) ``` func (p [PrivateLinkServicesForM365ComplianceCenterDescriptionListResult](#PrivateLinkServicesForM365ComplianceCenterDescriptionListResult)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type PrivateLinkServicesForM365ComplianceCenterDescriptionListResult. 
#### func (*PrivateLinkServicesForM365ComplianceCenterDescriptionListResult) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L679) [¶](#PrivateLinkServicesForM365ComplianceCenterDescriptionListResult.UnmarshalJSON) added in v0.6.0 ``` func (p *[PrivateLinkServicesForM365ComplianceCenterDescriptionListResult](#PrivateLinkServicesForM365ComplianceCenterDescriptionListResult)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type PrivateLinkServicesForM365ComplianceCenterDescriptionListResult. #### type [PrivateLinkServicesForM365SecurityCenterClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesform365securitycenter_client.go#L26) [¶](#PrivateLinkServicesForM365SecurityCenterClient) ``` type PrivateLinkServicesForM365SecurityCenterClient struct { // contains filtered or unexported fields } ``` PrivateLinkServicesForM365SecurityCenterClient contains the methods for the PrivateLinkServicesForM365SecurityCenter group. Don't use this type directly, use NewPrivateLinkServicesForM365SecurityCenterClient() instead. 
#### func [NewPrivateLinkServicesForM365SecurityCenterClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesform365securitycenter_client.go#L35) [¶](#NewPrivateLinkServicesForM365SecurityCenterClient) ``` func NewPrivateLinkServicesForM365SecurityCenterClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[PrivateLinkServicesForM365SecurityCenterClient](#PrivateLinkServicesForM365SecurityCenterClient), [error](/builtin#error)) ``` NewPrivateLinkServicesForM365SecurityCenterClient creates a new instance of PrivateLinkServicesForM365SecurityCenterClient with the specified values. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. 
#### func (*PrivateLinkServicesForM365SecurityCenterClient) [BeginCreateOrUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesform365securitycenter_client.go#L56) [¶](#PrivateLinkServicesForM365SecurityCenterClient.BeginCreateOrUpdate) ``` func (client *[PrivateLinkServicesForM365SecurityCenterClient](#PrivateLinkServicesForM365SecurityCenterClient)) BeginCreateOrUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateLinkServicesForM365SecurityCenterDescription [PrivateLinkServicesForM365SecurityCenterDescription](#PrivateLinkServicesForM365SecurityCenterDescription), options *[PrivateLinkServicesForM365SecurityCenterClientBeginCreateOrUpdateOptions](#PrivateLinkServicesForM365SecurityCenterClientBeginCreateOrUpdateOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateLinkServicesForM365SecurityCenterClientCreateOrUpdateResponse](#PrivateLinkServicesForM365SecurityCenterClientCreateOrUpdateResponse)], [error](/builtin#error)) ``` BeginCreateOrUpdate - Create or update the metadata of a privateLinkServicesForM365SecurityCenter instance. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateLinkServicesForM365SecurityCenterDescription - The service instance metadata. * options - PrivateLinkServicesForM365SecurityCenterClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateLinkServicesForM365SecurityCenterClient.BeginCreateOrUpdate method. 
Example (CreateOrUpdateAServiceWithAllParameters) [¶](#example-PrivateLinkServicesForM365SecurityCenterClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithAllParameters) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SecurityCenterServiceCreate.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForM365SecurityCenterClient().BeginCreateOrUpdate(ctx, "rg1", "service1", armm365securityandcompliance.PrivateLinkServicesForM365SecurityCenterDescription{ Identity: &armm365securityandcompliance.ServicesResourceIdentity{ Type: to.Ptr(armm365securityandcompliance.ManagedServiceIdentityTypeSystemAssigned), }, Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), Location: to.Ptr("westus2"), Tags: map[string]*string{}, Properties: &armm365securityandcompliance.ServicesProperties{ AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ { ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), }, { ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), }}, AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ Audience: to.Ptr("https://azurehealthcareapis.com"), Authority: 
to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), SmartProxyEnabled: to.Ptr(true), }, CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ AllowCredentials: to.Ptr(false), Headers: []*string{ to.Ptr("*")}, MaxAge: to.Ptr[int64](1440), Methods: []*string{ to.Ptr("DELETE"), to.Ptr("GET"), to.Ptr("OPTIONS"), to.Ptr("PATCH"), to.Ptr("POST"), to.Ptr("PUT")}, Origins: []*string{ to.Ptr("*")}, }, CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), OfferThroughput: to.Ptr[int64](1000), }, ExportConfiguration: &armm365securityandcompliance.ServiceExportConfigurationInfo{ StorageAccountName: to.Ptr("existingStorageAccount"), }, PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{}, PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForM365SecurityCenterDescription = armm365securityandcompliance.PrivateLinkServicesForM365SecurityCenterDescription{ // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/service1"), // Identity: &armm365securityandcompliance.ServicesResourceIdentity{ // Type: to.Ptr(armm365securityandcompliance.ManagedServiceIdentityTypeSystemAssigned), // PrincipalID: to.Ptr("03fe6ae0-952c-4e4b-954b-cc0364dd252e"), // TenantID: to.Ptr("72f988bf-86f1-41af-91ab-2d8cd011db47"), // }, // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US 2"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: 
&armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // ExportConfiguration: &armm365securityandcompliance.ServiceExportConfigurationInfo{ // StorageAccountName: to.Ptr("existingStorageAccount"), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // } } ``` ``` Output: ``` Example (CreateOrUpdateAServiceWithMinimumParameters) [¶](#example-PrivateLinkServicesForM365SecurityCenterClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithMinimumParameters) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SecurityCenterServiceCreateMinimum.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err :=
armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForM365SecurityCenterClient().BeginCreateOrUpdate(ctx, "rg1", "service2", armm365securityandcompliance.PrivateLinkServicesForM365SecurityCenterDescription{ Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), Location: to.Ptr("westus2"), Tags: map[string]*string{}, Properties: &armm365securityandcompliance.ServicesProperties{ AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ { ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), }}, }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForM365SecurityCenterDescription = armm365securityandcompliance.PrivateLinkServicesForM365SecurityCenterDescription{ // Name: to.Ptr("service2"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/service2"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("westus2"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(false), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // }, // Methods: []*string{ // }, // Origins: []*string{ // }, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: 
[]*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // } } ``` ``` Output: ``` #### func (*PrivateLinkServicesForM365SecurityCenterClient) [BeginDelete](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesform365securitycenter_client.go#L123) [¶](#PrivateLinkServicesForM365SecurityCenterClient.BeginDelete) ``` func (client *[PrivateLinkServicesForM365SecurityCenterClient](#PrivateLinkServicesForM365SecurityCenterClient)) BeginDelete(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateLinkServicesForM365SecurityCenterClientBeginDeleteOptions](#PrivateLinkServicesForM365SecurityCenterClientBeginDeleteOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateLinkServicesForM365SecurityCenterClientDeleteResponse](#PrivateLinkServicesForM365SecurityCenterClientDeleteResponse)], [error](/builtin#error)) ``` BeginDelete - Delete a service instance. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * options - PrivateLinkServicesForM365SecurityCenterClientBeginDeleteOptions contains the optional parameters for the PrivateLinkServicesForM365SecurityCenterClient.BeginDelete method.
Example [¶](#example-PrivateLinkServicesForM365SecurityCenterClient.BeginDelete) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SecurityCenterServiceDelete.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForM365SecurityCenterClient().BeginDelete(ctx, "rg1", "service1", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } _, err = poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } } ``` ``` Output: ``` #### func (*PrivateLinkServicesForM365SecurityCenterClient) [BeginUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesform365securitycenter_client.go#L374) [¶](#PrivateLinkServicesForM365SecurityCenterClient.BeginUpdate) ``` func (client *[PrivateLinkServicesForM365SecurityCenterClient](#PrivateLinkServicesForM365SecurityCenterClient)) BeginUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), servicePatchDescription [ServicesPatchDescription](#ServicesPatchDescription),
options *[PrivateLinkServicesForM365SecurityCenterClientBeginUpdateOptions](#PrivateLinkServicesForM365SecurityCenterClientBeginUpdateOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateLinkServicesForM365SecurityCenterClientUpdateResponse](#PrivateLinkServicesForM365SecurityCenterClientUpdateResponse)], [error](/builtin#error)) ``` BeginUpdate - Update the metadata of a privateLinkServicesForM365SecurityCenter instance. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * servicePatchDescription - The service instance metadata and security metadata. * options - PrivateLinkServicesForM365SecurityCenterClientBeginUpdateOptions contains the optional parameters for the PrivateLinkServicesForM365SecurityCenterClient.BeginUpdate method. 
Example [¶](#example-PrivateLinkServicesForM365SecurityCenterClient.BeginUpdate) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SecurityCenterServicePatch.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForM365SecurityCenterClient().BeginUpdate(ctx, "rg1", "service1", armm365securityandcompliance.ServicesPatchDescription{ Tags: map[string]*string{ "tag1": to.Ptr("value1"), "tag2": to.Ptr("value2"), }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForM365SecurityCenterDescription = armm365securityandcompliance.PrivateLinkServicesForM365SecurityCenterDescription{ // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/service1"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // "tag1": to.Ptr("value1"), // "tag2": to.Ptr("value2"), // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // 
to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // } } ``` ``` Output: ``` #### func (*PrivateLinkServicesForM365SecurityCenterClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesform365securitycenter_client.go#L190) [¶](#PrivateLinkServicesForM365SecurityCenterClient.Get) ``` func (client *[PrivateLinkServicesForM365SecurityCenterClient](#PrivateLinkServicesForM365SecurityCenterClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateLinkServicesForM365SecurityCenterClientGetOptions](#PrivateLinkServicesForM365SecurityCenterClientGetOptions)) ([PrivateLinkServicesForM365SecurityCenterClientGetResponse](#PrivateLinkServicesForM365SecurityCenterClientGetResponse), [error](/builtin#error)) ``` Get - Get the metadata of a privateLinkServicesForM365SecurityCenter resource. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance.
* options - PrivateLinkServicesForM365SecurityCenterClientGetOptions contains the optional parameters for the PrivateLinkServicesForM365SecurityCenterClient.Get method. Example [¶](#example-PrivateLinkServicesForM365SecurityCenterClient.Get) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SecurityCenterServiceGet.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateLinkServicesForM365SecurityCenterClient().Get(ctx, "rg1", "service1", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForM365SecurityCenterDescription = armm365securityandcompliance.PrivateLinkServicesForM365SecurityCenterDescription{ // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/service1"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), 
// to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // } } ``` ``` Output: ``` #### func (*PrivateLinkServicesForM365SecurityCenterClient) [NewListByResourceGroupPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesform365securitycenter_client.go#L306) [¶](#PrivateLinkServicesForM365SecurityCenterClient.NewListByResourceGroupPager) added in v0.4.0 ``` func (client *[PrivateLinkServicesForM365SecurityCenterClient](#PrivateLinkServicesForM365SecurityCenterClient)) NewListByResourceGroupPager(resourceGroupName [string](/builtin#string), options *[PrivateLinkServicesForM365SecurityCenterClientListByResourceGroupOptions](#PrivateLinkServicesForM365SecurityCenterClientListByResourceGroupOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[PrivateLinkServicesForM365SecurityCenterClientListByResourceGroupResponse](#PrivateLinkServicesForM365SecurityCenterClientListByResourceGroupResponse)] ``` NewListByResourceGroupPager - Get all the service instances in a resource group. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance.
* options - PrivateLinkServicesForM365SecurityCenterClientListByResourceGroupOptions contains the optional parameters for the PrivateLinkServicesForM365SecurityCenterClient.NewListByResourceGroupPager method. Example [¶](#example-PrivateLinkServicesForM365SecurityCenterClient.NewListByResourceGroupPager) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SecurityCenterServiceListByResourceGroup.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewPrivateLinkServicesForM365SecurityCenterClient().NewListByResourceGroupPager("rgname", nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// page.PrivateLinkServicesForM365SecurityCenterDescriptionListResult = armm365securityandcompliance.PrivateLinkServicesForM365SecurityCenterDescriptionListResult{ // Value: []*armm365securityandcompliance.PrivateLinkServicesForM365SecurityCenterDescription{ // { // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/dddb8dcb-effb-4290-bb47-ce1e8440c729"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("westus"), // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // 
ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // }}, // } } } ``` ``` Output: ``` #### func (*PrivateLinkServicesForM365SecurityCenterClient) [NewListPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesform365securitycenter_client.go#L245) [¶](#PrivateLinkServicesForM365SecurityCenterClient.NewListPager) added in v0.4.0 ``` func (client *[PrivateLinkServicesForM365SecurityCenterClient](#PrivateLinkServicesForM365SecurityCenterClient)) NewListPager(options *[PrivateLinkServicesForM365SecurityCenterClientListOptions](#PrivateLinkServicesForM365SecurityCenterClientListOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[PrivateLinkServicesForM365SecurityCenterClientListResponse](#PrivateLinkServicesForM365SecurityCenterClientListResponse)] ``` NewListPager - Get all the privateLinkServicesForM365SecurityCenter instances in a subscription. Generated from API version 2021-03-25-preview * options - PrivateLinkServicesForM365SecurityCenterClientListOptions contains the optional parameters for the PrivateLinkServicesForM365SecurityCenterClient.NewListPager method.
Example [¶](#example-PrivateLinkServicesForM365SecurityCenterClient.NewListPager) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SecurityCenterServiceList.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewPrivateLinkServicesForM365SecurityCenterClient().NewListPager(nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// page.PrivateLinkServicesForM365SecurityCenterDescriptionListResult = armm365securityandcompliance.PrivateLinkServicesForM365SecurityCenterDescriptionListResult{ // Value: []*armm365securityandcompliance.PrivateLinkServicesForM365SecurityCenterDescription{ // { // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter"), // Etag: to.Ptr("etag"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForM365SecurityCenter/service1"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US"), // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: 
to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // }}, // } } } ``` ``` Output: ``` #### type [PrivateLinkServicesForM365SecurityCenterClientBeginCreateOrUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L579) [¶](#PrivateLinkServicesForM365SecurityCenterClientBeginCreateOrUpdateOptions) added in v0.2.0 ``` type PrivateLinkServicesForM365SecurityCenterClientBeginCreateOrUpdateOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateLinkServicesForM365SecurityCenterClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateLinkServicesForM365SecurityCenterClient.BeginCreateOrUpdate method. #### type [PrivateLinkServicesForM365SecurityCenterClientBeginDeleteOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L586) [¶](#PrivateLinkServicesForM365SecurityCenterClientBeginDeleteOptions) added in v0.2.0 ``` type PrivateLinkServicesForM365SecurityCenterClientBeginDeleteOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateLinkServicesForM365SecurityCenterClientBeginDeleteOptions contains the optional parameters for the PrivateLinkServicesForM365SecurityCenterClient.BeginDelete method.
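The `ResumeToken` field in these `Begin*Options` structs lets a long-running operation (LRO) survive a process restart: the token obtained from a poller's `ResumeToken` method can be persisted and later passed back to the same `Begin*` method to rehydrate the poller instead of starting a new operation. The sketch below illustrates this standard azcore `runtime.Poller` pattern for `BeginDelete`; the `saveToken`/`loadToken` helpers are hypothetical glue code, not part of the SDK, and the sketch is not a complete runnable program.

``` // Sketch, not a complete program: checkpointing and resuming an LRO via
// ResumeToken. saveToken/loadToken are hypothetical persistence helpers.
poller, err := client.BeginDelete(ctx, "rg1", "service1", nil)
if err != nil {
	log.Fatal(err)
}
tok, err := poller.ResumeToken() // serializable checkpoint of the in-flight LRO
if err != nil {
	log.Fatal(err)
}
saveToken(tok) // persist to durable storage before the process exits

// Later, possibly in a new process: rehydrate the poller from the saved token.
poller, err = client.BeginDelete(ctx, "rg1", "service1",
	&armm365securityandcompliance.PrivateLinkServicesForM365SecurityCenterClientBeginDeleteOptions{
		ResumeToken: loadToken(),
	})
if err != nil {
	log.Fatal(err)
}
if _, err = poller.PollUntilDone(ctx, nil); err != nil {
	log.Fatal(err)
} ```

The same pattern applies to the `BeginCreateOrUpdate` and `BeginUpdate` options types below, each of which carries the same `ResumeToken` field.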
#### type [PrivateLinkServicesForM365SecurityCenterClientBeginUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L593) [¶](#PrivateLinkServicesForM365SecurityCenterClientBeginUpdateOptions) added in v0.2.0 ``` type PrivateLinkServicesForM365SecurityCenterClientBeginUpdateOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateLinkServicesForM365SecurityCenterClientBeginUpdateOptions contains the optional parameters for the PrivateLinkServicesForM365SecurityCenterClient.BeginUpdate method. #### type [PrivateLinkServicesForM365SecurityCenterClientCreateOrUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L258) [¶](#PrivateLinkServicesForM365SecurityCenterClientCreateOrUpdateResponse) added in v0.2.0 ``` type PrivateLinkServicesForM365SecurityCenterClientCreateOrUpdateResponse struct { [PrivateLinkServicesForM365SecurityCenterDescription](#PrivateLinkServicesForM365SecurityCenterDescription) } ``` PrivateLinkServicesForM365SecurityCenterClientCreateOrUpdateResponse contains the response from method PrivateLinkServicesForM365SecurityCenterClient.BeginCreateOrUpdate. 
#### type [PrivateLinkServicesForM365SecurityCenterClientDeleteResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L263) [¶](#PrivateLinkServicesForM365SecurityCenterClientDeleteResponse) added in v0.2.0 ``` type PrivateLinkServicesForM365SecurityCenterClientDeleteResponse struct { } ``` PrivateLinkServicesForM365SecurityCenterClientDeleteResponse contains the response from method PrivateLinkServicesForM365SecurityCenterClient.BeginDelete. #### type [PrivateLinkServicesForM365SecurityCenterClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L600) [¶](#PrivateLinkServicesForM365SecurityCenterClientGetOptions) added in v0.2.0 ``` type PrivateLinkServicesForM365SecurityCenterClientGetOptions struct { } ``` PrivateLinkServicesForM365SecurityCenterClientGetOptions contains the optional parameters for the PrivateLinkServicesForM365SecurityCenterClient.Get method. #### type [PrivateLinkServicesForM365SecurityCenterClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L268) [¶](#PrivateLinkServicesForM365SecurityCenterClientGetResponse) added in v0.2.0 ``` type PrivateLinkServicesForM365SecurityCenterClientGetResponse struct { [PrivateLinkServicesForM365SecurityCenterDescription](#PrivateLinkServicesForM365SecurityCenterDescription) } ``` PrivateLinkServicesForM365SecurityCenterClientGetResponse contains the response from method PrivateLinkServicesForM365SecurityCenterClient.Get. 
#### type [PrivateLinkServicesForM365SecurityCenterClientListByResourceGroupOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L606) [¶](#PrivateLinkServicesForM365SecurityCenterClientListByResourceGroupOptions) added in v0.2.0 ``` type PrivateLinkServicesForM365SecurityCenterClientListByResourceGroupOptions struct { } ``` PrivateLinkServicesForM365SecurityCenterClientListByResourceGroupOptions contains the optional parameters for the PrivateLinkServicesForM365SecurityCenterClient.NewListByResourceGroupPager method. #### type [PrivateLinkServicesForM365SecurityCenterClientListByResourceGroupResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L273) [¶](#PrivateLinkServicesForM365SecurityCenterClientListByResourceGroupResponse) added in v0.2.0 ``` type PrivateLinkServicesForM365SecurityCenterClientListByResourceGroupResponse struct { [PrivateLinkServicesForM365SecurityCenterDescriptionListResult](#PrivateLinkServicesForM365SecurityCenterDescriptionListResult) } ``` PrivateLinkServicesForM365SecurityCenterClientListByResourceGroupResponse contains the response from method PrivateLinkServicesForM365SecurityCenterClient.NewListByResourceGroupPager. 
#### type [PrivateLinkServicesForM365SecurityCenterClientListOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L612) [¶](#PrivateLinkServicesForM365SecurityCenterClientListOptions) added in v0.2.0 ``` type PrivateLinkServicesForM365SecurityCenterClientListOptions struct { } ``` PrivateLinkServicesForM365SecurityCenterClientListOptions contains the optional parameters for the PrivateLinkServicesForM365SecurityCenterClient.NewListPager method. #### type [PrivateLinkServicesForM365SecurityCenterClientListResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L278) [¶](#PrivateLinkServicesForM365SecurityCenterClientListResponse) added in v0.2.0 ``` type PrivateLinkServicesForM365SecurityCenterClientListResponse struct { [PrivateLinkServicesForM365SecurityCenterDescriptionListResult](#PrivateLinkServicesForM365SecurityCenterDescriptionListResult) } ``` PrivateLinkServicesForM365SecurityCenterClientListResponse contains the response from method PrivateLinkServicesForM365SecurityCenterClient.NewListPager. 
#### type [PrivateLinkServicesForM365SecurityCenterClientUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L283) [¶](#PrivateLinkServicesForM365SecurityCenterClientUpdateResponse) added in v0.2.0 ``` type PrivateLinkServicesForM365SecurityCenterClientUpdateResponse struct { [PrivateLinkServicesForM365SecurityCenterDescription](#PrivateLinkServicesForM365SecurityCenterDescription) } ``` PrivateLinkServicesForM365SecurityCenterClientUpdateResponse contains the response from method PrivateLinkServicesForM365SecurityCenterClient.BeginUpdate. #### type [PrivateLinkServicesForM365SecurityCenterDescription](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L617) [¶](#PrivateLinkServicesForM365SecurityCenterDescription) ``` type PrivateLinkServicesForM365SecurityCenterDescription struct { // REQUIRED; The kind of the service. Kind *[Kind](#Kind) // REQUIRED; The resource location. Location *[string](/builtin#string) // An etag associated with the resource, used for optimistic concurrency when editing it. Etag *[string](/builtin#string) // Setting indicating whether the service has a managed identity associated with it. Identity *[ServicesResourceIdentity](#ServicesResourceIdentity) // The common properties of a service. Properties *[ServicesProperties](#ServicesProperties) // The resource tags. Tags map[[string](/builtin#string)]*[string](/builtin#string) // READ-ONLY; The resource identifier. ID *[string](/builtin#string) // READ-ONLY; The resource name. Name *[string](/builtin#string) // READ-ONLY; Required property for system data SystemData *[SystemData](#SystemData) // READ-ONLY; The resource type. 
Type *[string](/builtin#string) } ``` PrivateLinkServicesForM365SecurityCenterDescription - The description of the service. #### func (PrivateLinkServicesForM365SecurityCenterDescription) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L702) [¶](#PrivateLinkServicesForM365SecurityCenterDescription.MarshalJSON) ``` func (p [PrivateLinkServicesForM365SecurityCenterDescription](#PrivateLinkServicesForM365SecurityCenterDescription)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type PrivateLinkServicesForM365SecurityCenterDescription. #### func (*PrivateLinkServicesForM365SecurityCenterDescription) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L718) [¶](#PrivateLinkServicesForM365SecurityCenterDescription.UnmarshalJSON) added in v0.6.0 ``` func (p *[PrivateLinkServicesForM365SecurityCenterDescription](#PrivateLinkServicesForM365SecurityCenterDescription)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type PrivateLinkServicesForM365SecurityCenterDescription. #### type [PrivateLinkServicesForM365SecurityCenterDescriptionListResult](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L650) [¶](#PrivateLinkServicesForM365SecurityCenterDescriptionListResult) ``` type PrivateLinkServicesForM365SecurityCenterDescriptionListResult struct { // A list of service description objects. 
Value []*[PrivateLinkServicesForM365SecurityCenterDescription](#PrivateLinkServicesForM365SecurityCenterDescription) // READ-ONLY; The link used to get the next page of service description objects. NextLink *[string](/builtin#string) } ``` PrivateLinkServicesForM365SecurityCenterDescriptionListResult - A list of service description objects with a next link. #### func (PrivateLinkServicesForM365SecurityCenterDescriptionListResult) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L765) [¶](#PrivateLinkServicesForM365SecurityCenterDescriptionListResult.MarshalJSON) ``` func (p [PrivateLinkServicesForM365SecurityCenterDescriptionListResult](#PrivateLinkServicesForM365SecurityCenterDescriptionListResult)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type PrivateLinkServicesForM365SecurityCenterDescriptionListResult. #### func (*PrivateLinkServicesForM365SecurityCenterDescriptionListResult) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L773) [¶](#PrivateLinkServicesForM365SecurityCenterDescriptionListResult.UnmarshalJSON) added in v0.6.0 ``` func (p *[PrivateLinkServicesForM365SecurityCenterDescriptionListResult](#PrivateLinkServicesForM365SecurityCenterDescriptionListResult)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type PrivateLinkServicesForM365SecurityCenterDescriptionListResult. 
#### type [PrivateLinkServicesForMIPPolicySyncClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesformippolicysync_client.go#L26) [¶](#PrivateLinkServicesForMIPPolicySyncClient) ``` type PrivateLinkServicesForMIPPolicySyncClient struct { // contains filtered or unexported fields } ``` PrivateLinkServicesForMIPPolicySyncClient contains the methods for the PrivateLinkServicesForMIPPolicySync group. Don't use this type directly, use NewPrivateLinkServicesForMIPPolicySyncClient() instead. #### func [NewPrivateLinkServicesForMIPPolicySyncClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesformippolicysync_client.go#L35) [¶](#NewPrivateLinkServicesForMIPPolicySyncClient) ``` func NewPrivateLinkServicesForMIPPolicySyncClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[PrivateLinkServicesForMIPPolicySyncClient](#PrivateLinkServicesForMIPPolicySyncClient), [error](/builtin#error)) ``` NewPrivateLinkServicesForMIPPolicySyncClient creates a new instance of PrivateLinkServicesForMIPPolicySyncClient with the specified values. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. 
#### func (*PrivateLinkServicesForMIPPolicySyncClient) [BeginCreateOrUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesformippolicysync_client.go#L56) [¶](#PrivateLinkServicesForMIPPolicySyncClient.BeginCreateOrUpdate) ``` func (client *[PrivateLinkServicesForMIPPolicySyncClient](#PrivateLinkServicesForMIPPolicySyncClient)) BeginCreateOrUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateLinkServicesForMIPPolicySyncDescription [PrivateLinkServicesForMIPPolicySyncDescription](#PrivateLinkServicesForMIPPolicySyncDescription), options *[PrivateLinkServicesForMIPPolicySyncClientBeginCreateOrUpdateOptions](#PrivateLinkServicesForMIPPolicySyncClientBeginCreateOrUpdateOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateLinkServicesForMIPPolicySyncClientCreateOrUpdateResponse](#PrivateLinkServicesForMIPPolicySyncClientCreateOrUpdateResponse)], [error](/builtin#error)) ``` BeginCreateOrUpdate - Create or update the metadata of a privateLinkServicesForMIPPolicySync instance. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateLinkServicesForMIPPolicySyncDescription - The service instance metadata. * options - PrivateLinkServicesForMIPPolicySyncClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateLinkServicesForMIPPolicySyncClient.BeginCreateOrUpdate method. 
Example (CreateOrUpdateAServiceWithAllParameters) [¶](#example-PrivateLinkServicesForMIPPolicySyncClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithAllParameters) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/MIPPolicySyncServiceCreate.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForMIPPolicySyncClient().BeginCreateOrUpdate(ctx, "rg1", "service1", armm365securityandcompliance.PrivateLinkServicesForMIPPolicySyncDescription{ Identity: &armm365securityandcompliance.ServicesResourceIdentity{ Type: to.Ptr(armm365securityandcompliance.ManagedServiceIdentityTypeSystemAssigned), }, Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), Location: to.Ptr("westus2"), Tags: map[string]*string{}, Properties: &armm365securityandcompliance.ServicesProperties{ AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ { ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), }, { ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), }}, AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ Audience: to.Ptr("https://azurehealthcareapis.com"), Authority: 
to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), SmartProxyEnabled: to.Ptr(true), }, CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ AllowCredentials: to.Ptr(false), Headers: []*string{ to.Ptr("*")}, MaxAge: to.Ptr[int64](1440), Methods: []*string{ to.Ptr("DELETE"), to.Ptr("GET"), to.Ptr("OPTIONS"), to.Ptr("PATCH"), to.Ptr("POST"), to.Ptr("PUT")}, Origins: []*string{ to.Ptr("*")}, }, CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), OfferThroughput: to.Ptr[int64](1000), }, ExportConfiguration: &armm365securityandcompliance.ServiceExportConfigurationInfo{ StorageAccountName: to.Ptr("existingStorageAccount"), }, PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{}, PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForMIPPolicySyncDescription = armm365securityandcompliance.PrivateLinkServicesForMIPPolicySyncDescription{ // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/service1"), // Identity: &armm365securityandcompliance.ServicesResourceIdentity{ // Type: to.Ptr(armm365securityandcompliance.ManagedServiceIdentityTypeSystemAssigned), // PrincipalID: to.Ptr("03fe6ae0-952c-4e4b-954b-cc0364dd252e"), // TenantID: to.Ptr("72f988bf-86f1-41af-91ab-2d8cd011db47"), // }, // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US 2"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("fangsu"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("fangsu"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: 
&armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // ExportConfiguration: &armm365securityandcompliance.ServiceExportConfigurationInfo{ // StorageAccountName: to.Ptr("existingStorageAccount"), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // } } ``` ``` Output: ``` Share Format Run Example (CreateOrUpdateAServiceWithMinimumParameters) [¶](#example-PrivateLinkServicesForMIPPolicySyncClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithMinimumParameters) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/MIPPolicySyncServiceCreateMinimum.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := 
armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForMIPPolicySyncClient().BeginCreateOrUpdate(ctx, "rg1", "service2", armm365securityandcompliance.PrivateLinkServicesForMIPPolicySyncDescription{ Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), Location: to.Ptr("westus2"), Tags: map[string]*string{}, Properties: &armm365securityandcompliance.ServicesProperties{ AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ { ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), }}, }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForMIPPolicySyncDescription = armm365securityandcompliance.PrivateLinkServicesForMIPPolicySyncDescription{ // Name: to.Ptr("service2"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/service2"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("westus2"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("fangsu"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("fangsu"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(false), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // }, // Methods: []*string{ // }, // Origins: []*string{ // }, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: 
[]*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // } } ``` ``` Output: ``` Share Format Run #### func (*PrivateLinkServicesForMIPPolicySyncClient) [BeginDelete](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesformippolicysync_client.go#L123) [¶](#PrivateLinkServicesForMIPPolicySyncClient.BeginDelete) ``` func (client *[PrivateLinkServicesForMIPPolicySyncClient](#PrivateLinkServicesForMIPPolicySyncClient)) BeginDelete(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateLinkServicesForMIPPolicySyncClientBeginDeleteOptions](#PrivateLinkServicesForMIPPolicySyncClientBeginDeleteOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateLinkServicesForMIPPolicySyncClientDeleteResponse](#PrivateLinkServicesForMIPPolicySyncClientDeleteResponse)], [error](/builtin#error)) ``` BeginDelete - Delete a service instance. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * options - PrivateLinkServicesForMIPPolicySyncClientBeginDeleteOptions contains the optional parameters for the PrivateLinkServicesForMIPPolicySyncClient.BeginDelete method. 
Example [¶](#example-PrivateLinkServicesForMIPPolicySyncClient.BeginDelete) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/MIPPolicySyncServiceDelete.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForMIPPolicySyncClient().BeginDelete(ctx, "rg1", "service1", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } _, err = poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } } ``` ``` Output: ``` Share Format Run #### func (*PrivateLinkServicesForMIPPolicySyncClient) [BeginUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesformippolicysync_client.go#L374) [¶](#PrivateLinkServicesForMIPPolicySyncClient.BeginUpdate) ``` func (client *[PrivateLinkServicesForMIPPolicySyncClient](#PrivateLinkServicesForMIPPolicySyncClient)) BeginUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), servicePatchDescription [ServicesPatchDescription](#ServicesPatchDescription), options 
*[PrivateLinkServicesForMIPPolicySyncClientBeginUpdateOptions](#PrivateLinkServicesForMIPPolicySyncClientBeginUpdateOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateLinkServicesForMIPPolicySyncClientUpdateResponse](#PrivateLinkServicesForMIPPolicySyncClientUpdateResponse)], [error](/builtin#error)) ``` BeginUpdate - Update the metadata of a privateLinkServicesForMIPPolicySync instance. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * servicePatchDescription - The service instance metadata and security metadata. * options - PrivateLinkServicesForMIPPolicySyncClientBeginUpdateOptions contains the optional parameters for the PrivateLinkServicesForMIPPolicySyncClient.BeginUpdate method. 
Example [¶](#example-PrivateLinkServicesForMIPPolicySyncClient.BeginUpdate) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/MIPPolicySyncServicePatch.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForMIPPolicySyncClient().BeginUpdate(ctx, "rg1", "service1", armm365securityandcompliance.ServicesPatchDescription{ Tags: map[string]*string{ "tag1": to.Ptr("value1"), "tag2": to.Ptr("value2"), }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForMIPPolicySyncDescription = armm365securityandcompliance.PrivateLinkServicesForMIPPolicySyncDescription{ // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/service1"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("fangsu"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("fangsu"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // "tag1": to.Ptr("value1"), // "tag2": to.Ptr("value2"), // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // 
to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // } } ``` #### func (*PrivateLinkServicesForMIPPolicySyncClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesformippolicysync_client.go#L190) [¶](#PrivateLinkServicesForMIPPolicySyncClient.Get) ``` func (client *[PrivateLinkServicesForMIPPolicySyncClient](#PrivateLinkServicesForMIPPolicySyncClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateLinkServicesForMIPPolicySyncClientGetOptions](#PrivateLinkServicesForMIPPolicySyncClientGetOptions)) ([PrivateLinkServicesForMIPPolicySyncClientGetResponse](#PrivateLinkServicesForMIPPolicySyncClientGetResponse), [error](/builtin#error)) ``` Get - Get the metadata of a privateLinkServicesForMIPPolicySync resource. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. 
* options - PrivateLinkServicesForMIPPolicySyncClientGetOptions contains the optional parameters for the PrivateLinkServicesForMIPPolicySyncClient.Get method. Example [¶](#example-PrivateLinkServicesForMIPPolicySyncClient.Get) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/MIPPolicySyncServiceGet.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateLinkServicesForMIPPolicySyncClient().Get(ctx, "rg1", "service1", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForMIPPolicySyncDescription = armm365securityandcompliance.PrivateLinkServicesForMIPPolicySyncDescription{ // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/service1"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("fangsu"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("fangsu"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // 
to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // } } ``` #### func (*PrivateLinkServicesForMIPPolicySyncClient) [NewListByResourceGroupPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesformippolicysync_client.go#L306) [¶](#PrivateLinkServicesForMIPPolicySyncClient.NewListByResourceGroupPager) added in v0.4.0 ``` func (client *[PrivateLinkServicesForMIPPolicySyncClient](#PrivateLinkServicesForMIPPolicySyncClient)) NewListByResourceGroupPager(resourceGroupName [string](/builtin#string), options *[PrivateLinkServicesForMIPPolicySyncClientListByResourceGroupOptions](#PrivateLinkServicesForMIPPolicySyncClientListByResourceGroupOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[PrivateLinkServicesForMIPPolicySyncClientListByResourceGroupResponse](#PrivateLinkServicesForMIPPolicySyncClientListByResourceGroupResponse)] ``` NewListByResourceGroupPager - Get all the service instances in a resource group. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. 
* options - PrivateLinkServicesForMIPPolicySyncClientListByResourceGroupOptions contains the optional parameters for the PrivateLinkServicesForMIPPolicySyncClient.NewListByResourceGroupPager method. Example [¶](#example-PrivateLinkServicesForMIPPolicySyncClient.NewListByResourceGroupPager) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/MIPPolicySyncServiceListByResourceGroup.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewPrivateLinkServicesForMIPPolicySyncClient().NewListByResourceGroupPager("rgname", nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// page.PrivateLinkServicesForMIPPolicySyncDescriptionListResult = armm365securityandcompliance.PrivateLinkServicesForMIPPolicySyncDescriptionListResult{ // Value: []*armm365securityandcompliance.PrivateLinkServicesForMIPPolicySyncDescription{ // { // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/dddb8dcb-effb-4290-bb47-ce1e8440c729"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("westus"), // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: 
to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // }}, // } } } ``` #### func (*PrivateLinkServicesForMIPPolicySyncClient) [NewListPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesformippolicysync_client.go#L245) [¶](#PrivateLinkServicesForMIPPolicySyncClient.NewListPager) added in v0.4.0 ``` func (client *[PrivateLinkServicesForMIPPolicySyncClient](#PrivateLinkServicesForMIPPolicySyncClient)) NewListPager(options *[PrivateLinkServicesForMIPPolicySyncClientListOptions](#PrivateLinkServicesForMIPPolicySyncClientListOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[PrivateLinkServicesForMIPPolicySyncClientListResponse](#PrivateLinkServicesForMIPPolicySyncClientListResponse)] ``` NewListPager - Get all the privateLinkServicesForMIPPolicySync instances in a subscription. Generated from API version 2021-03-25-preview * options - PrivateLinkServicesForMIPPolicySyncClientListOptions contains the optional parameters for the PrivateLinkServicesForMIPPolicySyncClient.NewListPager method. 
Example [¶](#example-PrivateLinkServicesForMIPPolicySyncClient.NewListPager) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/MIPPolicySyncServiceList.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewPrivateLinkServicesForMIPPolicySyncClient().NewListPager(nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// page.PrivateLinkServicesForMIPPolicySyncDescriptionListResult = armm365securityandcompliance.PrivateLinkServicesForMIPPolicySyncDescriptionListResult{ // Value: []*armm365securityandcompliance.PrivateLinkServicesForMIPPolicySyncDescription{ // { // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync"), // Etag: to.Ptr("etag"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForMIPPolicySync/service1"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US"), // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: 
to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // }}, // } } } ``` #### type [PrivateLinkServicesForMIPPolicySyncClientBeginCreateOrUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L660) [¶](#PrivateLinkServicesForMIPPolicySyncClientBeginCreateOrUpdateOptions) added in v0.2.0 ``` type PrivateLinkServicesForMIPPolicySyncClientBeginCreateOrUpdateOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateLinkServicesForMIPPolicySyncClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateLinkServicesForMIPPolicySyncClient.BeginCreateOrUpdate method. #### type [PrivateLinkServicesForMIPPolicySyncClientBeginDeleteOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L667) [¶](#PrivateLinkServicesForMIPPolicySyncClientBeginDeleteOptions) added in v0.2.0 ``` type PrivateLinkServicesForMIPPolicySyncClientBeginDeleteOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateLinkServicesForMIPPolicySyncClientBeginDeleteOptions contains the optional parameters for the PrivateLinkServicesForMIPPolicySyncClient.BeginDelete method. 
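The `ResumeToken` field in these `Begin*Options` structs lets a long-running operation survive a process restart: capture the token from the poller (the SDK's `runtime.Poller[T]` exposes a `ResumeToken` method), persist it, and pass it back in the options on a later `Begin*` call. The sketch below illustrates only that save/restore flow with hypothetical stand-in types (`poller`, `beginUpdate`, `resumeUpdate` are not SDK names):

```go
package main

import "fmt"

// poller is a toy stand-in for runtime.Poller[T]; its ResumeToken
// method serializes the operation state for a later process.
type poller struct{ state string }

func (p *poller) ResumeToken() (string, error) { return p.state, nil }

// beginUpdate stands in for client.BeginUpdate: it starts an LRO and
// returns a poller tracking it.
func beginUpdate() *poller { return &poller{state: "op-12345"} }

// resumeUpdate stands in for calling BeginUpdate again with
// Options{ResumeToken: tok}: it rebuilds the poller from the token.
func resumeUpdate(tok string) *poller { return &poller{state: tok} }

func main() {
	// First process: start the operation and persist the token.
	tok, err := beginUpdate().ResumeToken()
	if err != nil {
		panic(err)
	}

	// Later process: rebuild the poller from the saved token and
	// continue polling where the first process left off.
	p := resumeUpdate(tok)
	fmt.Println(p.state) // op-12345
}
```

With the real SDK, the resumed poller is then driven to completion with `PollUntilDone`, exactly as in the examples above.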
#### type [PrivateLinkServicesForMIPPolicySyncClientBeginUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L674) [¶](#PrivateLinkServicesForMIPPolicySyncClientBeginUpdateOptions) added in v0.2.0 ``` type PrivateLinkServicesForMIPPolicySyncClientBeginUpdateOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateLinkServicesForMIPPolicySyncClientBeginUpdateOptions contains the optional parameters for the PrivateLinkServicesForMIPPolicySyncClient.BeginUpdate method. #### type [PrivateLinkServicesForMIPPolicySyncClientCreateOrUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L288) [¶](#PrivateLinkServicesForMIPPolicySyncClientCreateOrUpdateResponse) added in v0.2.0 ``` type PrivateLinkServicesForMIPPolicySyncClientCreateOrUpdateResponse struct { [PrivateLinkServicesForMIPPolicySyncDescription](#PrivateLinkServicesForMIPPolicySyncDescription) } ``` PrivateLinkServicesForMIPPolicySyncClientCreateOrUpdateResponse contains the response from method PrivateLinkServicesForMIPPolicySyncClient.BeginCreateOrUpdate. 
#### type [PrivateLinkServicesForMIPPolicySyncClientDeleteResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L293) [¶](#PrivateLinkServicesForMIPPolicySyncClientDeleteResponse) added in v0.2.0 ``` type PrivateLinkServicesForMIPPolicySyncClientDeleteResponse struct { } ``` PrivateLinkServicesForMIPPolicySyncClientDeleteResponse contains the response from method PrivateLinkServicesForMIPPolicySyncClient.BeginDelete. #### type [PrivateLinkServicesForMIPPolicySyncClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L681) [¶](#PrivateLinkServicesForMIPPolicySyncClientGetOptions) added in v0.2.0 ``` type PrivateLinkServicesForMIPPolicySyncClientGetOptions struct { } ``` PrivateLinkServicesForMIPPolicySyncClientGetOptions contains the optional parameters for the PrivateLinkServicesForMIPPolicySyncClient.Get method. #### type [PrivateLinkServicesForMIPPolicySyncClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L298) [¶](#PrivateLinkServicesForMIPPolicySyncClientGetResponse) added in v0.2.0 ``` type PrivateLinkServicesForMIPPolicySyncClientGetResponse struct { [PrivateLinkServicesForMIPPolicySyncDescription](#PrivateLinkServicesForMIPPolicySyncDescription) } ``` PrivateLinkServicesForMIPPolicySyncClientGetResponse contains the response from method PrivateLinkServicesForMIPPolicySyncClient.Get. 
#### type [PrivateLinkServicesForMIPPolicySyncClientListByResourceGroupOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L687) [¶](#PrivateLinkServicesForMIPPolicySyncClientListByResourceGroupOptions) added in v0.2.0 ``` type PrivateLinkServicesForMIPPolicySyncClientListByResourceGroupOptions struct { } ``` PrivateLinkServicesForMIPPolicySyncClientListByResourceGroupOptions contains the optional parameters for the PrivateLinkServicesForMIPPolicySyncClient.NewListByResourceGroupPager method. #### type [PrivateLinkServicesForMIPPolicySyncClientListByResourceGroupResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L303) [¶](#PrivateLinkServicesForMIPPolicySyncClientListByResourceGroupResponse) added in v0.2.0 ``` type PrivateLinkServicesForMIPPolicySyncClientListByResourceGroupResponse struct { [PrivateLinkServicesForMIPPolicySyncDescriptionListResult](#PrivateLinkServicesForMIPPolicySyncDescriptionListResult) } ``` PrivateLinkServicesForMIPPolicySyncClientListByResourceGroupResponse contains the response from method PrivateLinkServicesForMIPPolicySyncClient.NewListByResourceGroupPager. 
#### type [PrivateLinkServicesForMIPPolicySyncClientListOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L693) [¶](#PrivateLinkServicesForMIPPolicySyncClientListOptions) added in v0.2.0 ``` type PrivateLinkServicesForMIPPolicySyncClientListOptions struct { } ``` PrivateLinkServicesForMIPPolicySyncClientListOptions contains the optional parameters for the PrivateLinkServicesForMIPPolicySyncClient.NewListPager method. #### type [PrivateLinkServicesForMIPPolicySyncClientListResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L308) [¶](#PrivateLinkServicesForMIPPolicySyncClientListResponse) added in v0.2.0 ``` type PrivateLinkServicesForMIPPolicySyncClientListResponse struct { [PrivateLinkServicesForMIPPolicySyncDescriptionListResult](#PrivateLinkServicesForMIPPolicySyncDescriptionListResult) } ``` PrivateLinkServicesForMIPPolicySyncClientListResponse contains the response from method PrivateLinkServicesForMIPPolicySyncClient.NewListPager. 
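All of these `*Response` types follow the same shape: each embeds its result model, so the model's fields are promoted onto the response (which is why the examples above can write `res.PrivateLinkServicesForMIPPolicySyncDescription`, or reach model fields directly on `res`). A minimal stand-in, not the SDK types, shows the promotion:

```go
package main

import "fmt"

// description stands in for the embedded result model.
type description struct{ Name *string }

// getResponse embeds the model, mirroring how
// PrivateLinkServicesForMIPPolicySyncClientGetResponse embeds
// PrivateLinkServicesForMIPPolicySyncDescription.
type getResponse struct {
	description
}

func main() {
	n := "service1"
	res := getResponse{description: description{Name: &n}}
	// The embedded field is addressable by its type name, and its
	// fields are also reachable directly on the response.
	fmt.Println(*res.description.Name, *res.Name) // service1 service1
}
```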
#### type [PrivateLinkServicesForMIPPolicySyncClientUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L313) [¶](#PrivateLinkServicesForMIPPolicySyncClientUpdateResponse) added in v0.2.0 ``` type PrivateLinkServicesForMIPPolicySyncClientUpdateResponse struct { [PrivateLinkServicesForMIPPolicySyncDescription](#PrivateLinkServicesForMIPPolicySyncDescription) } ``` PrivateLinkServicesForMIPPolicySyncClientUpdateResponse contains the response from method PrivateLinkServicesForMIPPolicySyncClient.BeginUpdate. #### type [PrivateLinkServicesForMIPPolicySyncDescription](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L698) [¶](#PrivateLinkServicesForMIPPolicySyncDescription) ``` type PrivateLinkServicesForMIPPolicySyncDescription struct { // REQUIRED; The kind of the service. Kind *[Kind](#Kind) // REQUIRED; The resource location. Location *[string](/builtin#string) // An etag associated with the resource, used for optimistic concurrency when editing it. Etag *[string](/builtin#string) // Setting indicating whether the service has a managed identity associated with it. Identity *[ServicesResourceIdentity](#ServicesResourceIdentity) // The common properties of a service. Properties *[ServicesProperties](#ServicesProperties) // The resource tags. Tags map[[string](/builtin#string)]*[string](/builtin#string) // READ-ONLY; The resource identifier. ID *[string](/builtin#string) // READ-ONLY; The resource name. Name *[string](/builtin#string) // READ-ONLY; Required property for system data SystemData *[SystemData](#SystemData) // READ-ONLY; The resource type. 
Type *[string](/builtin#string) } ``` PrivateLinkServicesForMIPPolicySyncDescription - The description of the service. #### func (PrivateLinkServicesForMIPPolicySyncDescription) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L796) [¶](#PrivateLinkServicesForMIPPolicySyncDescription.MarshalJSON) ``` func (p [PrivateLinkServicesForMIPPolicySyncDescription](#PrivateLinkServicesForMIPPolicySyncDescription)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type PrivateLinkServicesForMIPPolicySyncDescription. #### func (*PrivateLinkServicesForMIPPolicySyncDescription) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L812) [¶](#PrivateLinkServicesForMIPPolicySyncDescription.UnmarshalJSON) added in v0.6.0 ``` func (p *[PrivateLinkServicesForMIPPolicySyncDescription](#PrivateLinkServicesForMIPPolicySyncDescription)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type PrivateLinkServicesForMIPPolicySyncDescription. #### type [PrivateLinkServicesForMIPPolicySyncDescriptionListResult](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L731) [¶](#PrivateLinkServicesForMIPPolicySyncDescriptionListResult) ``` type PrivateLinkServicesForMIPPolicySyncDescriptionListResult struct { // A list of service description objects. 
Value []*[PrivateLinkServicesForMIPPolicySyncDescription](#PrivateLinkServicesForMIPPolicySyncDescription) // READ-ONLY; The link used to get the next page of service description objects. NextLink *[string](/builtin#string) } ``` PrivateLinkServicesForMIPPolicySyncDescriptionListResult - A list of service description objects with a next link. #### func (PrivateLinkServicesForMIPPolicySyncDescriptionListResult) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L859) [¶](#PrivateLinkServicesForMIPPolicySyncDescriptionListResult.MarshalJSON) ``` func (p [PrivateLinkServicesForMIPPolicySyncDescriptionListResult](#PrivateLinkServicesForMIPPolicySyncDescriptionListResult)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type PrivateLinkServicesForMIPPolicySyncDescriptionListResult. #### func (*PrivateLinkServicesForMIPPolicySyncDescriptionListResult) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L867) [¶](#PrivateLinkServicesForMIPPolicySyncDescriptionListResult.UnmarshalJSON) added in v0.6.0 ``` func (p *[PrivateLinkServicesForMIPPolicySyncDescriptionListResult](#PrivateLinkServicesForMIPPolicySyncDescriptionListResult)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type PrivateLinkServicesForMIPPolicySyncDescriptionListResult. 
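The `MarshalJSON`/`UnmarshalJSON` pairs above exist so these models round-trip cleanly through `encoding/json`, with nil pointer fields omitted from the payload. The sketch below uses a simplified, hypothetical stand-in struct (not the SDK model) whose pointer fields and tags mirror that behavior:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// description is an illustrative stand-in for the documented model;
// optional fields are pointers with omitempty, as in the SDK.
type description struct {
	Kind     *string            `json:"kind,omitempty"`
	Location *string            `json:"location,omitempty"`
	Tags     map[string]*string `json:"tags,omitempty"`
	ID       *string            `json:"id,omitempty"` // READ-ONLY: populated by the service
}

func ptr(s string) *string { return &s }

// roundTrip marshals a description and unmarshals it back, the same
// round trip the SDK's Marshal/Unmarshal pair supports.
func roundTrip(in description) description {
	data, err := json.Marshal(in)
	if err != nil {
		panic(err)
	}
	var out description
	if err := json.Unmarshal(data, &out); err != nil {
		panic(err)
	}
	return out
}

func main() {
	in := description{
		Kind:     ptr("fhir-R4"),
		Location: ptr("westus2"),
		Tags:     map[string]*string{"env": ptr("dev")},
	}
	out := roundTrip(in)
	fmt.Println(*out.Kind, *out.Location, *out.Tags["env"]) // fhir-R4 westus2 dev

	// Nil pointer fields are omitted entirely, so a zero value
	// marshals to an empty object.
	empty, _ := json.Marshal(description{})
	fmt.Println(string(empty)) // {}
}
```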
#### type [PrivateLinkServicesForO365ManagementActivityAPIClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforo365managementactivityapi_client.go#L26) [¶](#PrivateLinkServicesForO365ManagementActivityAPIClient) ``` type PrivateLinkServicesForO365ManagementActivityAPIClient struct { // contains filtered or unexported fields } ``` PrivateLinkServicesForO365ManagementActivityAPIClient contains the methods for the PrivateLinkServicesForO365ManagementActivityAPI group. Don't use this type directly, use NewPrivateLinkServicesForO365ManagementActivityAPIClient() instead. #### func [NewPrivateLinkServicesForO365ManagementActivityAPIClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforo365managementactivityapi_client.go#L35) [¶](#NewPrivateLinkServicesForO365ManagementActivityAPIClient) ``` func NewPrivateLinkServicesForO365ManagementActivityAPIClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[PrivateLinkServicesForO365ManagementActivityAPIClient](#PrivateLinkServicesForO365ManagementActivityAPIClient), [error](/builtin#error)) ``` NewPrivateLinkServicesForO365ManagementActivityAPIClient creates a new instance of PrivateLinkServicesForO365ManagementActivityAPIClient with the specified values. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. 
* options - pass nil to accept the default values. #### func (*PrivateLinkServicesForO365ManagementActivityAPIClient) [BeginCreateOrUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforo365managementactivityapi_client.go#L56) [¶](#PrivateLinkServicesForO365ManagementActivityAPIClient.BeginCreateOrUpdate) ``` func (client *[PrivateLinkServicesForO365ManagementActivityAPIClient](#PrivateLinkServicesForO365ManagementActivityAPIClient)) BeginCreateOrUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateLinkServicesForO365ManagementActivityAPIDescription [PrivateLinkServicesForO365ManagementActivityAPIDescription](#PrivateLinkServicesForO365ManagementActivityAPIDescription), options *[PrivateLinkServicesForO365ManagementActivityAPIClientBeginCreateOrUpdateOptions](#PrivateLinkServicesForO365ManagementActivityAPIClientBeginCreateOrUpdateOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateLinkServicesForO365ManagementActivityAPIClientCreateOrUpdateResponse](#PrivateLinkServicesForO365ManagementActivityAPIClientCreateOrUpdateResponse)], [error](/builtin#error)) ``` BeginCreateOrUpdate - Create or update the metadata of a privateLinkServicesForO365ManagementActivityAPI instance. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateLinkServicesForO365ManagementActivityAPIDescription - The service instance metadata. 
* options - PrivateLinkServicesForO365ManagementActivityAPIClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateLinkServicesForO365ManagementActivityAPIClient.BeginCreateOrUpdate method. Example (CreateOrUpdateAServiceWithAllParameters) [¶](#example-PrivateLinkServicesForO365ManagementActivityAPIClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithAllParameters) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ManagementAPIServiceCreate.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForO365ManagementActivityAPIClient().BeginCreateOrUpdate(ctx, "rg1", "service1", armm365securityandcompliance.PrivateLinkServicesForO365ManagementActivityAPIDescription{ Identity: &armm365securityandcompliance.ServicesResourceIdentity{ Type: to.Ptr(armm365securityandcompliance.ManagedServiceIdentityTypeSystemAssigned), }, Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), Location: to.Ptr("westus2"), Tags: map[string]*string{}, Properties: &armm365securityandcompliance.ServicesProperties{ AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ { ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), }, { ObjectID: 
to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), }}, AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ Audience: to.Ptr("https://azurehealthcareapis.com"), Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), SmartProxyEnabled: to.Ptr(true), }, CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ AllowCredentials: to.Ptr(false), Headers: []*string{ to.Ptr("*")}, MaxAge: to.Ptr[int64](1440), Methods: []*string{ to.Ptr("DELETE"), to.Ptr("GET"), to.Ptr("OPTIONS"), to.Ptr("PATCH"), to.Ptr("POST"), to.Ptr("PUT")}, Origins: []*string{ to.Ptr("*")}, }, CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), OfferThroughput: to.Ptr[int64](1000), }, ExportConfiguration: &armm365securityandcompliance.ServiceExportConfigurationInfo{ StorageAccountName: to.Ptr("existingStorageAccount"), }, PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{}, PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForO365ManagementActivityAPIDescription = armm365securityandcompliance.PrivateLinkServicesForO365ManagementActivityAPIDescription{ // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/service1"), // Identity: &armm365securityandcompliance.ServicesResourceIdentity{ // Type: to.Ptr(armm365securityandcompliance.ManagedServiceIdentityTypeSystemAssigned), // PrincipalID: to.Ptr("03fe6ae0-952c-4e4b-954b-cc0364dd252e"), // TenantID: to.Ptr("72f988bf-86f1-41af-91ab-2d8cd011db47"), // }, // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US 2"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), 
// }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // ExportConfiguration: &armm365securityandcompliance.ServiceExportConfigurationInfo{ // StorageAccountName: to.Ptr("existingStorageAccount"), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // } } ``` ``` Output: ``` Example (CreateOrUpdateAServiceWithMinimumParameters) [¶](#example-PrivateLinkServicesForO365ManagementActivityAPIClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithMinimumParameters) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ManagementAPIServiceCreateMinimum.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err :=
armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForO365ManagementActivityAPIClient().BeginCreateOrUpdate(ctx, "rg1", "service2", armm365securityandcompliance.PrivateLinkServicesForO365ManagementActivityAPIDescription{ Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), Location: to.Ptr("westus2"), Tags: map[string]*string{}, Properties: &armm365securityandcompliance.ServicesProperties{ AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ { ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), }}, }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForO365ManagementActivityAPIDescription = armm365securityandcompliance.PrivateLinkServicesForO365ManagementActivityAPIDescription{ // Name: to.Ptr("service2"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/service2"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("westus2"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(false), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // }, // Methods: []*string{ // }, // Origins: []*string{ // }, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // OfferThroughput: to.Ptr[int64](1000), // }, // 
PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // } } ``` ``` Output: ``` #### func (*PrivateLinkServicesForO365ManagementActivityAPIClient) [BeginDelete](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforo365managementactivityapi_client.go#L123) [¶](#PrivateLinkServicesForO365ManagementActivityAPIClient.BeginDelete) ``` func (client *[PrivateLinkServicesForO365ManagementActivityAPIClient](#PrivateLinkServicesForO365ManagementActivityAPIClient)) BeginDelete(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateLinkServicesForO365ManagementActivityAPIClientBeginDeleteOptions](#PrivateLinkServicesForO365ManagementActivityAPIClientBeginDeleteOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateLinkServicesForO365ManagementActivityAPIClientDeleteResponse](#PrivateLinkServicesForO365ManagementActivityAPIClientDeleteResponse)], [error](/builtin#error)) ``` BeginDelete - Delete a service instance. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * options - PrivateLinkServicesForO365ManagementActivityAPIClientBeginDeleteOptions contains the optional parameters for the PrivateLinkServicesForO365ManagementActivityAPIClient.BeginDelete method.
Example [¶](#example-PrivateLinkServicesForO365ManagementActivityAPIClient.BeginDelete) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ManagementAPIServiceDelete.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForO365ManagementActivityAPIClient().BeginDelete(ctx, "rg1", "service1", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } _, err = poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } } ``` ``` Output: ``` #### func (*PrivateLinkServicesForO365ManagementActivityAPIClient) [BeginUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforo365managementactivityapi_client.go#L374) [¶](#PrivateLinkServicesForO365ManagementActivityAPIClient.BeginUpdate) ``` func (client *[PrivateLinkServicesForO365ManagementActivityAPIClient](#PrivateLinkServicesForO365ManagementActivityAPIClient)) BeginUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), servicePatchDescription
[ServicesPatchDescription](#ServicesPatchDescription), options *[PrivateLinkServicesForO365ManagementActivityAPIClientBeginUpdateOptions](#PrivateLinkServicesForO365ManagementActivityAPIClientBeginUpdateOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateLinkServicesForO365ManagementActivityAPIClientUpdateResponse](#PrivateLinkServicesForO365ManagementActivityAPIClientUpdateResponse)], [error](/builtin#error)) ``` BeginUpdate - Update the metadata of a privateLinkServicesForO365ManagementActivityAPI instance. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * servicePatchDescription - The service instance metadata and security metadata. * options - PrivateLinkServicesForO365ManagementActivityAPIClientBeginUpdateOptions contains the optional parameters for the PrivateLinkServicesForO365ManagementActivityAPIClient.BeginUpdate method. 
Example [¶](#example-PrivateLinkServicesForO365ManagementActivityAPIClient.BeginUpdate) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ManagementAPIServicePatch.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForO365ManagementActivityAPIClient().BeginUpdate(ctx, "rg1", "service1", armm365securityandcompliance.ServicesPatchDescription{ Tags: map[string]*string{ "tag1": to.Ptr("value1"), "tag2": to.Ptr("value2"), }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForO365ManagementActivityAPIDescription = armm365securityandcompliance.PrivateLinkServicesForO365ManagementActivityAPIDescription{ // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/service1"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // "tag1": to.Ptr("value1"), // "tag2": to.Ptr("value2"), // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // 
to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // } } ``` ``` Output: ``` #### func (*PrivateLinkServicesForO365ManagementActivityAPIClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforo365managementactivityapi_client.go#L190) [¶](#PrivateLinkServicesForO365ManagementActivityAPIClient.Get) ``` func (client *[PrivateLinkServicesForO365ManagementActivityAPIClient](#PrivateLinkServicesForO365ManagementActivityAPIClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateLinkServicesForO365ManagementActivityAPIClientGetOptions](#PrivateLinkServicesForO365ManagementActivityAPIClientGetOptions)) ([PrivateLinkServicesForO365ManagementActivityAPIClientGetResponse](#PrivateLinkServicesForO365ManagementActivityAPIClientGetResponse), [error](/builtin#error)) ``` Get - Get the metadata of a privateLinkServicesForO365ManagementActivityAPI resource. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance.
* resourceName - The name of the service instance. * options - PrivateLinkServicesForO365ManagementActivityAPIClientGetOptions contains the optional parameters for the PrivateLinkServicesForO365ManagementActivityAPIClient.Get method. Example [¶](#example-PrivateLinkServicesForO365ManagementActivityAPIClient.Get) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ManagementAPIServiceGet.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateLinkServicesForO365ManagementActivityAPIClient().Get(ctx, "rg1", "service1", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForO365ManagementActivityAPIDescription = armm365securityandcompliance.PrivateLinkServicesForO365ManagementActivityAPIDescription{ // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/service1"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // 
to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // } } ``` ``` Output: ``` #### func (*PrivateLinkServicesForO365ManagementActivityAPIClient) [NewListByResourceGroupPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforo365managementactivityapi_client.go#L306) [¶](#PrivateLinkServicesForO365ManagementActivityAPIClient.NewListByResourceGroupPager) added in v0.4.0 ``` func (client *[PrivateLinkServicesForO365ManagementActivityAPIClient](#PrivateLinkServicesForO365ManagementActivityAPIClient)) NewListByResourceGroupPager(resourceGroupName [string](/builtin#string), options *[PrivateLinkServicesForO365ManagementActivityAPIClientListByResourceGroupOptions](#PrivateLinkServicesForO365ManagementActivityAPIClientListByResourceGroupOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[PrivateLinkServicesForO365ManagementActivityAPIClientListByResourceGroupResponse](#PrivateLinkServicesForO365ManagementActivityAPIClientListByResourceGroupResponse)] ``` NewListByResourceGroupPager - Get all the service instances in a resource group.
Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * options - PrivateLinkServicesForO365ManagementActivityAPIClientListByResourceGroupOptions contains the optional parameters for the PrivateLinkServicesForO365ManagementActivityAPIClient.NewListByResourceGroupPager method. Example [¶](#example-PrivateLinkServicesForO365ManagementActivityAPIClient.NewListByResourceGroupPager) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ManagementAPIServiceListByResourceGroup.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewPrivateLinkServicesForO365ManagementActivityAPIClient().NewListByResourceGroupPager("rgname", nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// page.PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult = armm365securityandcompliance.PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult{ // Value: []*armm365securityandcompliance.PrivateLinkServicesForO365ManagementActivityAPIDescription{ // { // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/dddb8dcb-effb-4290-bb47-ce1e8440c729"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("westus"), // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: 
[]*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // }}, // } } } ``` #### func (*PrivateLinkServicesForO365ManagementActivityAPIClient) [NewListPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforo365managementactivityapi_client.go#L245) [¶](#PrivateLinkServicesForO365ManagementActivityAPIClient.NewListPager) added in v0.4.0 ``` func (client *[PrivateLinkServicesForO365ManagementActivityAPIClient](#PrivateLinkServicesForO365ManagementActivityAPIClient)) NewListPager(options *[PrivateLinkServicesForO365ManagementActivityAPIClientListOptions](#PrivateLinkServicesForO365ManagementActivityAPIClientListOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[PrivateLinkServicesForO365ManagementActivityAPIClientListResponse](#PrivateLinkServicesForO365ManagementActivityAPIClientListResponse)] ``` NewListPager - Get all the privateLinkServicesForO365ManagementActivityAPI instances in a subscription. Generated from API version 2021-03-25-preview * options - PrivateLinkServicesForO365ManagementActivityAPIClientListOptions contains the optional parameters for the PrivateLinkServicesForO365ManagementActivityAPIClient.NewListPager method. 
Example [¶](#example-PrivateLinkServicesForO365ManagementActivityAPIClient.NewListPager) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/ManagementAPIServiceList.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewPrivateLinkServicesForO365ManagementActivityAPIClient().NewListPager(nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// page.PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult = armm365securityandcompliance.PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult{ // Value: []*armm365securityandcompliance.PrivateLinkServicesForO365ManagementActivityAPIDescription{ // { // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI"), // Etag: to.Ptr("etag"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForO365ManagementActivityAPI/service1"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US"), // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, 
// ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // }}, // } } } ``` #### type [PrivateLinkServicesForO365ManagementActivityAPIClientBeginCreateOrUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L741) [¶](#PrivateLinkServicesForO365ManagementActivityAPIClientBeginCreateOrUpdateOptions) added in v0.2.0 ``` type PrivateLinkServicesForO365ManagementActivityAPIClientBeginCreateOrUpdateOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateLinkServicesForO365ManagementActivityAPIClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateLinkServicesForO365ManagementActivityAPIClient.BeginCreateOrUpdate method. #### type [PrivateLinkServicesForO365ManagementActivityAPIClientBeginDeleteOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L748) [¶](#PrivateLinkServicesForO365ManagementActivityAPIClientBeginDeleteOptions) added in v0.2.0 ``` type PrivateLinkServicesForO365ManagementActivityAPIClientBeginDeleteOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateLinkServicesForO365ManagementActivityAPIClientBeginDeleteOptions contains the optional parameters for the PrivateLinkServicesForO365ManagementActivityAPIClient.BeginDelete method. 
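The `ResumeToken` field on these `Begin*` options exists so a long-running operation can survive a process restart: the azcore/runtime poller can emit an opaque token, which a later process passes back through the options to rebuild the poller and continue polling. Below is a minimal stdlib-only sketch of that idea with toy types; it is not the actual SDK implementation, and the token format here is invented purely for illustration:

```go
package main

import "fmt"

// toyLRO stands in for a long-running operation being polled.
type toyLRO struct {
	pollsDone int
	pollsNeed int
}

// resumeToken captures progress so polling can continue elsewhere.
// Real resume tokens are opaque strings produced by the poller.
func (o *toyLRO) resumeToken() string {
	return fmt.Sprintf("polls=%d/%d", o.pollsDone, o.pollsNeed)
}

// resumeLRO rebuilds the operation from a token, the way a new poller
// is built when ResumeToken is set on the Begin* options.
func resumeLRO(token string) *toyLRO {
	var done, need int
	fmt.Sscanf(token, "polls=%d/%d", &done, &need)
	return &toyLRO{pollsDone: done, pollsNeed: need}
}

// pollUntilDone mimics poller.PollUntilDone: poll until terminal state.
func (o *toyLRO) pollUntilDone() string {
	for o.pollsDone < o.pollsNeed {
		o.pollsDone++
	}
	return "Succeeded"
}

func main() {
	op := &toyLRO{pollsNeed: 3}
	op.pollsDone = 1          // partially polled, then the process exits
	tok := op.resumeToken()   // persist this token somewhere durable
	resumed := resumeLRO(tok) // later: rebuild from the token and finish
	fmt.Println(resumed.pollUntilDone()) // prints: Succeeded
}
```

With the real clients, the equivalent move is to persist the poller's resume token and later call the same `Begin*` method with the options' `ResumeToken` field set.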
#### type [PrivateLinkServicesForO365ManagementActivityAPIClientBeginUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L755) [¶](#PrivateLinkServicesForO365ManagementActivityAPIClientBeginUpdateOptions) added in v0.2.0 ``` type PrivateLinkServicesForO365ManagementActivityAPIClientBeginUpdateOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateLinkServicesForO365ManagementActivityAPIClientBeginUpdateOptions contains the optional parameters for the PrivateLinkServicesForO365ManagementActivityAPIClient.BeginUpdate method. #### type [PrivateLinkServicesForO365ManagementActivityAPIClientCreateOrUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L318) [¶](#PrivateLinkServicesForO365ManagementActivityAPIClientCreateOrUpdateResponse) added in v0.2.0 ``` type PrivateLinkServicesForO365ManagementActivityAPIClientCreateOrUpdateResponse struct { [PrivateLinkServicesForO365ManagementActivityAPIDescription](#PrivateLinkServicesForO365ManagementActivityAPIDescription) } ``` PrivateLinkServicesForO365ManagementActivityAPIClientCreateOrUpdateResponse contains the response from method PrivateLinkServicesForO365ManagementActivityAPIClient.BeginCreateOrUpdate. 
#### type [PrivateLinkServicesForO365ManagementActivityAPIClientDeleteResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L323) [¶](#PrivateLinkServicesForO365ManagementActivityAPIClientDeleteResponse) added in v0.2.0 ``` type PrivateLinkServicesForO365ManagementActivityAPIClientDeleteResponse struct { } ``` PrivateLinkServicesForO365ManagementActivityAPIClientDeleteResponse contains the response from method PrivateLinkServicesForO365ManagementActivityAPIClient.BeginDelete. #### type [PrivateLinkServicesForO365ManagementActivityAPIClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L762) [¶](#PrivateLinkServicesForO365ManagementActivityAPIClientGetOptions) added in v0.2.0 ``` type PrivateLinkServicesForO365ManagementActivityAPIClientGetOptions struct { } ``` PrivateLinkServicesForO365ManagementActivityAPIClientGetOptions contains the optional parameters for the PrivateLinkServicesForO365ManagementActivityAPIClient.Get method. 
#### type [PrivateLinkServicesForO365ManagementActivityAPIClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L328) [¶](#PrivateLinkServicesForO365ManagementActivityAPIClientGetResponse) added in v0.2.0 ``` type PrivateLinkServicesForO365ManagementActivityAPIClientGetResponse struct { [PrivateLinkServicesForO365ManagementActivityAPIDescription](#PrivateLinkServicesForO365ManagementActivityAPIDescription) } ``` PrivateLinkServicesForO365ManagementActivityAPIClientGetResponse contains the response from method PrivateLinkServicesForO365ManagementActivityAPIClient.Get. #### type [PrivateLinkServicesForO365ManagementActivityAPIClientListByResourceGroupOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L768) [¶](#PrivateLinkServicesForO365ManagementActivityAPIClientListByResourceGroupOptions) added in v0.2.0 ``` type PrivateLinkServicesForO365ManagementActivityAPIClientListByResourceGroupOptions struct { } ``` PrivateLinkServicesForO365ManagementActivityAPIClientListByResourceGroupOptions contains the optional parameters for the PrivateLinkServicesForO365ManagementActivityAPIClient.NewListByResourceGroupPager method. 
#### type [PrivateLinkServicesForO365ManagementActivityAPIClientListByResourceGroupResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L333) [¶](#PrivateLinkServicesForO365ManagementActivityAPIClientListByResourceGroupResponse) added in v0.2.0 ``` type PrivateLinkServicesForO365ManagementActivityAPIClientListByResourceGroupResponse struct { [PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult](#PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult) } ``` PrivateLinkServicesForO365ManagementActivityAPIClientListByResourceGroupResponse contains the response from method PrivateLinkServicesForO365ManagementActivityAPIClient.NewListByResourceGroupPager. #### type [PrivateLinkServicesForO365ManagementActivityAPIClientListOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L774) [¶](#PrivateLinkServicesForO365ManagementActivityAPIClientListOptions) added in v0.2.0 ``` type PrivateLinkServicesForO365ManagementActivityAPIClientListOptions struct { } ``` PrivateLinkServicesForO365ManagementActivityAPIClientListOptions contains the optional parameters for the PrivateLinkServicesForO365ManagementActivityAPIClient.NewListPager method. 
#### type [PrivateLinkServicesForO365ManagementActivityAPIClientListResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L338) [¶](#PrivateLinkServicesForO365ManagementActivityAPIClientListResponse) added in v0.2.0 ``` type PrivateLinkServicesForO365ManagementActivityAPIClientListResponse struct { [PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult](#PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult) } ``` PrivateLinkServicesForO365ManagementActivityAPIClientListResponse contains the response from method PrivateLinkServicesForO365ManagementActivityAPIClient.NewListPager. #### type [PrivateLinkServicesForO365ManagementActivityAPIClientUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L343) [¶](#PrivateLinkServicesForO365ManagementActivityAPIClientUpdateResponse) added in v0.2.0 ``` type PrivateLinkServicesForO365ManagementActivityAPIClientUpdateResponse struct { [PrivateLinkServicesForO365ManagementActivityAPIDescription](#PrivateLinkServicesForO365ManagementActivityAPIDescription) } ``` PrivateLinkServicesForO365ManagementActivityAPIClientUpdateResponse contains the response from method PrivateLinkServicesForO365ManagementActivityAPIClient.BeginUpdate. 
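All of the `NewList*Pager` methods above return a `runtime.Pager`, which is consumed with the same two-call protocol seen in the generated examples: loop while `More()` reports another page, then fetch it with `NextPage`. A self-contained toy pager (an illustrative stand-in, not the real `azcore/runtime` type) shows the shape of that loop:

```go
package main

import "fmt"

// toyPager mimics the shape of runtime.Pager[T]: More reports whether
// another page exists, NextPage returns it and advances the cursor.
type toyPager struct {
	pages [][]string
	next  int
}

func (p *toyPager) More() bool { return p.next < len(p.pages) }

func (p *toyPager) NextPage() ([]string, error) {
	if !p.More() {
		return nil, fmt.Errorf("no more pages")
	}
	page := p.pages[p.next]
	p.next++
	return page, nil
}

// collectAll drains a pager the same way the generated examples do:
// for pager.More() { page, err := pager.NextPage(ctx); ... }
func collectAll(p *toyPager) []string {
	var all []string
	for p.More() {
		page, err := p.NextPage()
		if err != nil {
			break
		}
		all = append(all, page...)
	}
	return all
}

func main() {
	p := &toyPager{pages: [][]string{{"service1", "service2"}, {"service3"}}}
	fmt.Println(collectAll(p)) // prints: [service1 service2 service3]
}
```

The real pager additionally takes a `context.Context` in `NextPage` and surfaces transport errors, but the control flow is the same.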
#### type [PrivateLinkServicesForO365ManagementActivityAPIDescription](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L779) [¶](#PrivateLinkServicesForO365ManagementActivityAPIDescription) ``` type PrivateLinkServicesForO365ManagementActivityAPIDescription struct { // REQUIRED; The kind of the service. Kind *[Kind](#Kind) // REQUIRED; The resource location. Location *[string](/builtin#string) // An etag associated with the resource, used for optimistic concurrency when editing it. Etag *[string](/builtin#string) // Setting indicating whether the service has a managed identity associated with it. Identity *[ServicesResourceIdentity](#ServicesResourceIdentity) // The common properties of a service. Properties *[ServicesProperties](#ServicesProperties) // The resource tags. Tags map[[string](/builtin#string)]*[string](/builtin#string) // READ-ONLY; The resource identifier. ID *[string](/builtin#string) // READ-ONLY; The resource name. Name *[string](/builtin#string) // READ-ONLY; Required property for system data SystemData *[SystemData](#SystemData) // READ-ONLY; The resource type. Type *[string](/builtin#string) } ``` PrivateLinkServicesForO365ManagementActivityAPIDescription - The description of the service. 
#### func (PrivateLinkServicesForO365ManagementActivityAPIDescription) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L890) [¶](#PrivateLinkServicesForO365ManagementActivityAPIDescription.MarshalJSON) ``` func (p [PrivateLinkServicesForO365ManagementActivityAPIDescription](#PrivateLinkServicesForO365ManagementActivityAPIDescription)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type PrivateLinkServicesForO365ManagementActivityAPIDescription. #### func (*PrivateLinkServicesForO365ManagementActivityAPIDescription) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L906) [¶](#PrivateLinkServicesForO365ManagementActivityAPIDescription.UnmarshalJSON) added in v0.6.0 ``` func (p *[PrivateLinkServicesForO365ManagementActivityAPIDescription](#PrivateLinkServicesForO365ManagementActivityAPIDescription)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type PrivateLinkServicesForO365ManagementActivityAPIDescription. #### type [PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L813) [¶](#PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult) ``` type PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult struct { // A list of service description objects. 
Value []*[PrivateLinkServicesForO365ManagementActivityAPIDescription](#PrivateLinkServicesForO365ManagementActivityAPIDescription) // READ-ONLY; The link used to get the next page of service description objects. NextLink *[string](/builtin#string) } ``` PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult - A list of service description objects with a next link. #### func (PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L953) [¶](#PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult.MarshalJSON) ``` func (p [PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult](#PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult. #### func (*PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L961) [¶](#PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult.UnmarshalJSON) added in v0.6.0 ``` func (p *[PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult](#PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type PrivateLinkServicesForO365ManagementActivityAPIDescriptionListResult. 
#### type [PrivateLinkServicesForSCCPowershellClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforsccpowershell_client.go#L26) [¶](#PrivateLinkServicesForSCCPowershellClient) ``` type PrivateLinkServicesForSCCPowershellClient struct { // contains filtered or unexported fields } ``` PrivateLinkServicesForSCCPowershellClient contains the methods for the PrivateLinkServicesForSCCPowershell group. Don't use this type directly, use NewPrivateLinkServicesForSCCPowershellClient() instead. #### func [NewPrivateLinkServicesForSCCPowershellClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforsccpowershell_client.go#L35) [¶](#NewPrivateLinkServicesForSCCPowershellClient) ``` func NewPrivateLinkServicesForSCCPowershellClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[PrivateLinkServicesForSCCPowershellClient](#PrivateLinkServicesForSCCPowershellClient), [error](/builtin#error)) ``` NewPrivateLinkServicesForSCCPowershellClient creates a new instance of PrivateLinkServicesForSCCPowershellClient with the specified values. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. 
#### func (*PrivateLinkServicesForSCCPowershellClient) [BeginCreateOrUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforsccpowershell_client.go#L56) [¶](#PrivateLinkServicesForSCCPowershellClient.BeginCreateOrUpdate) ``` func (client *[PrivateLinkServicesForSCCPowershellClient](#PrivateLinkServicesForSCCPowershellClient)) BeginCreateOrUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), privateLinkServicesForSCCPowershellDescription [PrivateLinkServicesForSCCPowershellDescription](#PrivateLinkServicesForSCCPowershellDescription), options *[PrivateLinkServicesForSCCPowershellClientBeginCreateOrUpdateOptions](#PrivateLinkServicesForSCCPowershellClientBeginCreateOrUpdateOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateLinkServicesForSCCPowershellClientCreateOrUpdateResponse](#PrivateLinkServicesForSCCPowershellClientCreateOrUpdateResponse)], [error](/builtin#error)) ``` BeginCreateOrUpdate - Create or update the metadata of a privateLinkServicesForSCCPowershell instance. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * privateLinkServicesForSCCPowershellDescription - The service instance metadata. * options - PrivateLinkServicesForSCCPowershellClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateLinkServicesForSCCPowershellClient.BeginCreateOrUpdate method. 
Example (CreateOrUpdateAServiceWithAllParameters) [¶](#example-PrivateLinkServicesForSCCPowershellClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithAllParameters) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SCCPowershellServiceCreate.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForSCCPowershellClient().BeginCreateOrUpdate(ctx, "rg1", "service1", armm365securityandcompliance.PrivateLinkServicesForSCCPowershellDescription{ Identity: &armm365securityandcompliance.ServicesResourceIdentity{ Type: to.Ptr(armm365securityandcompliance.ManagedServiceIdentityTypeSystemAssigned), }, Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), Location: to.Ptr("westus2"), Tags: map[string]*string{}, Properties: &armm365securityandcompliance.ServicesProperties{ AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ { ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), }, { ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), }}, AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ Audience: to.Ptr("https://azurehealthcareapis.com"), Authority: 
to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), SmartProxyEnabled: to.Ptr(true), }, CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ AllowCredentials: to.Ptr(false), Headers: []*string{ to.Ptr("*")}, MaxAge: to.Ptr[int64](1440), Methods: []*string{ to.Ptr("DELETE"), to.Ptr("GET"), to.Ptr("OPTIONS"), to.Ptr("PATCH"), to.Ptr("POST"), to.Ptr("PUT")}, Origins: []*string{ to.Ptr("*")}, }, CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), OfferThroughput: to.Ptr[int64](1000), }, ExportConfiguration: &armm365securityandcompliance.ServiceExportConfigurationInfo{ StorageAccountName: to.Ptr("existingStorageAccount"), }, PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{}, PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForSCCPowershellDescription = armm365securityandcompliance.PrivateLinkServicesForSCCPowershellDescription{ // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/service1"), // Identity: &armm365securityandcompliance.ServicesResourceIdentity{ // Type: to.Ptr(armm365securityandcompliance.ManagedServiceIdentityTypeSystemAssigned), // PrincipalID: to.Ptr("03fe6ae0-<KEY>"), // TenantID: to.Ptr("72f988bf-86f1-41af-91ab-2d8cd011db47"), // }, // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US 2"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: 
&armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // ExportConfiguration: &armm365securityandcompliance.ServiceExportConfigurationInfo{ // StorageAccountName: to.Ptr("existingStorageAccount"), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // } } ``` Example (CreateOrUpdateAServiceWithMinimumParameters) [¶](#example-PrivateLinkServicesForSCCPowershellClient.BeginCreateOrUpdate-CreateOrUpdateAServiceWithMinimumParameters) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SCCPowershellServiceCreateMinimum.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := 
armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForSCCPowershellClient().BeginCreateOrUpdate(ctx, "rg1", "service2", armm365securityandcompliance.PrivateLinkServicesForSCCPowershellDescription{ Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), Location: to.Ptr("westus2"), Tags: map[string]*string{}, Properties: &armm365securityandcompliance.ServicesProperties{ AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ { ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), }}, }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForSCCPowershellDescription = armm365securityandcompliance.PrivateLinkServicesForSCCPowershellDescription{ // Name: to.Ptr("service2"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/service2"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("westus2"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(false), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // }, // Methods: []*string{ // }, // Origins: []*string{ // }, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: 
[]*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // } } ``` #### func (*PrivateLinkServicesForSCCPowershellClient) [BeginDelete](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforsccpowershell_client.go#L123) [¶](#PrivateLinkServicesForSCCPowershellClient.BeginDelete) ``` func (client *[PrivateLinkServicesForSCCPowershellClient](#PrivateLinkServicesForSCCPowershellClient)) BeginDelete(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateLinkServicesForSCCPowershellClientBeginDeleteOptions](#PrivateLinkServicesForSCCPowershellClientBeginDeleteOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateLinkServicesForSCCPowershellClientDeleteResponse](#PrivateLinkServicesForSCCPowershellClientDeleteResponse)], [error](/builtin#error)) ``` BeginDelete - Delete a service instance. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * options - PrivateLinkServicesForSCCPowershellClientBeginDeleteOptions contains the optional parameters for the PrivateLinkServicesForSCCPowershellClient.BeginDelete method.
Example [¶](#example-PrivateLinkServicesForSCCPowershellClient.BeginDelete) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SCCPowershellServiceDelete.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForSCCPowershellClient().BeginDelete(ctx, "rg1", "service1", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } _, err = poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } } ``` #### func (*PrivateLinkServicesForSCCPowershellClient) [BeginUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforsccpowershell_client.go#L374) [¶](#PrivateLinkServicesForSCCPowershellClient.BeginUpdate) ``` func (client *[PrivateLinkServicesForSCCPowershellClient](#PrivateLinkServicesForSCCPowershellClient)) BeginUpdate(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), servicePatchDescription [ServicesPatchDescription](#ServicesPatchDescription), options
*[PrivateLinkServicesForSCCPowershellClientBeginUpdateOptions](#PrivateLinkServicesForSCCPowershellClientBeginUpdateOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[PrivateLinkServicesForSCCPowershellClientUpdateResponse](#PrivateLinkServicesForSCCPowershellClientUpdateResponse)], [error](/builtin#error)) ``` BeginUpdate - Update the metadata of a privateLinkServicesForSCCPowershell instance. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * servicePatchDescription - The service instance metadata and security metadata. * options - PrivateLinkServicesForSCCPowershellClientBeginUpdateOptions contains the optional parameters for the PrivateLinkServicesForSCCPowershellClient.BeginUpdate method. 
Example [¶](#example-PrivateLinkServicesForSCCPowershellClient.BeginUpdate) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SCCPowershellServicePatch.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } poller, err := clientFactory.NewPrivateLinkServicesForSCCPowershellClient().BeginUpdate(ctx, "rg1", "service1", armm365securityandcompliance.ServicesPatchDescription{ Tags: map[string]*string{ "tag1": to.Ptr("value1"), "tag2": to.Ptr("value2"), }, }, nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } res, err := poller.PollUntilDone(ctx, nil) if err != nil { log.Fatalf("failed to pull the result: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForSCCPowershellDescription = armm365securityandcompliance.PrivateLinkServicesForSCCPowershellDescription{ // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/service1"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // "tag1": to.Ptr("value1"), // "tag2": to.Ptr("value2"), // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), 
// to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // } } ``` #### func (*PrivateLinkServicesForSCCPowershellClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforsccpowershell_client.go#L190) [¶](#PrivateLinkServicesForSCCPowershellClient.Get) ``` func (client *[PrivateLinkServicesForSCCPowershellClient](#PrivateLinkServicesForSCCPowershellClient)) Get(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[PrivateLinkServicesForSCCPowershellClientGetOptions](#PrivateLinkServicesForSCCPowershellClientGetOptions)) ([PrivateLinkServicesForSCCPowershellClientGetResponse](#PrivateLinkServicesForSCCPowershellClientGetResponse), [error](/builtin#error)) ``` Get - Get the metadata of a privateLinkServicesForSCCPowershell resource. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * options - PrivateLinkServicesForSCCPowershellClientGetOptions contains the optional parameters for the PrivateLinkServicesForSCCPowershellClient.Get method.
Example [¶](#example-PrivateLinkServicesForSCCPowershellClient.Get) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SCCPowershellServiceGet.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } res, err := clientFactory.NewPrivateLinkServicesForSCCPowershellClient().Get(ctx, "rg1", "service1", nil) if err != nil { log.Fatalf("failed to finish the request: %v", err) } // You could use response here. We use blank identifier for just demo purposes. _ = res // If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// res.PrivateLinkServicesForSCCPowershellDescription = armm365securityandcompliance.PrivateLinkServicesForSCCPowershellDescription{ // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/service1"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US"), // SystemData: &armm365securityandcompliance.SystemData{ // CreatedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // CreatedBy: to.Ptr("sove"), // CreatedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // LastModifiedAt: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2021-03-24T13:30:28.958Z"); return t}()), // LastModifiedBy: to.Ptr("sove"), // LastModifiedByType: to.Ptr(armm365securityandcompliance.CreatedByTypeUser), // }, // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, 
// Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // } } ``` #### func (*PrivateLinkServicesForSCCPowershellClient) [NewListByResourceGroupPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforsccpowershell_client.go#L306) [¶](#PrivateLinkServicesForSCCPowershellClient.NewListByResourceGroupPager) added in v0.4.0 ``` func (client *[PrivateLinkServicesForSCCPowershellClient](#PrivateLinkServicesForSCCPowershellClient)) NewListByResourceGroupPager(resourceGroupName [string](/builtin#string), options *[PrivateLinkServicesForSCCPowershellClientListByResourceGroupOptions](#PrivateLinkServicesForSCCPowershellClientListByResourceGroupOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[PrivateLinkServicesForSCCPowershellClientListByResourceGroupResponse](#PrivateLinkServicesForSCCPowershellClientListByResourceGroupResponse)] ``` NewListByResourceGroupPager - Get all the service instances in a resource group. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance.
* options - PrivateLinkServicesForSCCPowershellClientListByResourceGroupOptions contains the optional parameters for the PrivateLinkServicesForSCCPowershellClient.NewListByResourceGroupPager method. Example [¶](#example-PrivateLinkServicesForSCCPowershellClient.NewListByResourceGroupPager) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SCCPowershellServiceListByResourceGroup.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewPrivateLinkServicesForSCCPowershellClient().NewListByResourceGroupPager("rgname", nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// page.PrivateLinkServicesForSCCPowershellDescriptionListResult = armm365securityandcompliance.PrivateLinkServicesForSCCPowershellDescriptionListResult{ // Value: []*armm365securityandcompliance.PrivateLinkServicesForSCCPowershellDescription{ // { // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell"), // Etag: to.Ptr("etagvalue"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/dddb8dcb-effb-4290-bb47-ce1e8440c729"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("westus"), // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: 
to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // }}, // } } } ``` #### func (*PrivateLinkServicesForSCCPowershellClient) [NewListPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/privatelinkservicesforsccpowershell_client.go#L245) [¶](#PrivateLinkServicesForSCCPowershellClient.NewListPager) added in v0.4.0 ``` func (client *[PrivateLinkServicesForSCCPowershellClient](#PrivateLinkServicesForSCCPowershellClient)) NewListPager(options *[PrivateLinkServicesForSCCPowershellClientListOptions](#PrivateLinkServicesForSCCPowershellClientListOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[PrivateLinkServicesForSCCPowershellClientListResponse](#PrivateLinkServicesForSCCPowershellClientListResponse)] ``` NewListPager - Get all the privateLinkServicesForSCCPowershell instances in a subscription. Generated from API version 2021-03-25-preview * options - PrivateLinkServicesForSCCPowershellClientListOptions contains the optional parameters for the PrivateLinkServicesForSCCPowershellClient.NewListPager method.
Example [¶](#example-PrivateLinkServicesForSCCPowershellClient.NewListPager) Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/SCCPowershellServiceList.json> ``` package main import ( "context" "log" "github.com/Azure/azure-sdk-for-go/sdk/azidentity" "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance" ) func main() { cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) } ctx := context.Background() clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil) if err != nil { log.Fatalf("failed to create client: %v", err) } pager := clientFactory.NewPrivateLinkServicesForSCCPowershellClient().NewListPager(nil) for pager.More() { page, err := pager.NextPage(ctx) if err != nil { log.Fatalf("failed to advance page: %v", err) } for _, v := range page.Value { // You could use page here. We use blank identifier for just demo purposes. _ = v } // If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes. 
// page.PrivateLinkServicesForSCCPowershellDescriptionListResult = armm365securityandcompliance.PrivateLinkServicesForSCCPowershellDescriptionListResult{ // Value: []*armm365securityandcompliance.PrivateLinkServicesForSCCPowershellDescription{ // { // Name: to.Ptr("service1"), // Type: to.Ptr("Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell"), // Etag: to.Ptr("etag"), // ID: to.Ptr("/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.M365SecurityAndCompliance/privateLinkServicesForSCCPowershell/service1"), // Kind: to.Ptr(armm365securityandcompliance.KindFhirR4), // Location: to.Ptr("West US"), // Tags: map[string]*string{ // }, // Properties: &armm365securityandcompliance.ServicesProperties{ // AccessPolicies: []*armm365securityandcompliance.ServiceAccessPolicyEntry{ // { // ObjectID: to.Ptr("c487e7d1-3210-41a3-8ccc-e9372b78da47"), // }, // { // ObjectID: to.Ptr("5b307da8-43d4-492b-8b66-b0294ade872f"), // }}, // AuthenticationConfiguration: &armm365securityandcompliance.ServiceAuthenticationConfigurationInfo{ // Audience: to.Ptr("https://azurehealthcareapis.com"), // Authority: to.Ptr("https://login.microsoftonline.com/abfde7b2-df0f-47e6-aabf-2462b07508dc"), // SmartProxyEnabled: to.Ptr(true), // }, // CorsConfiguration: &armm365securityandcompliance.ServiceCorsConfigurationInfo{ // AllowCredentials: to.Ptr(false), // Headers: []*string{ // to.Ptr("*")}, // MaxAge: to.Ptr[int64](1440), // Methods: []*string{ // to.Ptr("DELETE"), // to.Ptr("GET"), // to.Ptr("OPTIONS"), // to.Ptr("PATCH"), // to.Ptr("POST"), // to.Ptr("PUT")}, // Origins: []*string{ // to.Ptr("*")}, // }, // CosmosDbConfiguration: &armm365securityandcompliance.ServiceCosmosDbConfigurationInfo{ // KeyVaultKeyURI: to.Ptr("https://my-vault.vault.azure.net/keys/my-key"), // OfferThroughput: to.Ptr[int64](1000), // }, // PrivateEndpointConnections: []*armm365securityandcompliance.PrivateEndpointConnection{ // }, // ProvisioningState: 
to.Ptr(armm365securityandcompliance.ProvisioningStateSucceeded), // PublicNetworkAccess: to.Ptr(armm365securityandcompliance.PublicNetworkAccessDisabled), // }, // }}, // } } } ``` #### type [PrivateLinkServicesForSCCPowershellClientBeginCreateOrUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L823) [¶](#PrivateLinkServicesForSCCPowershellClientBeginCreateOrUpdateOptions) added in v0.2.0 ``` type PrivateLinkServicesForSCCPowershellClientBeginCreateOrUpdateOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateLinkServicesForSCCPowershellClientBeginCreateOrUpdateOptions contains the optional parameters for the PrivateLinkServicesForSCCPowershellClient.BeginCreateOrUpdate method. #### type [PrivateLinkServicesForSCCPowershellClientBeginDeleteOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L830) [¶](#PrivateLinkServicesForSCCPowershellClientBeginDeleteOptions) added in v0.2.0 ``` type PrivateLinkServicesForSCCPowershellClientBeginDeleteOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateLinkServicesForSCCPowershellClientBeginDeleteOptions contains the optional parameters for the PrivateLinkServicesForSCCPowershellClient.BeginDelete method.
#### type [PrivateLinkServicesForSCCPowershellClientBeginUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L837) [¶](#PrivateLinkServicesForSCCPowershellClientBeginUpdateOptions) added in v0.2.0 ``` type PrivateLinkServicesForSCCPowershellClientBeginUpdateOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` PrivateLinkServicesForSCCPowershellClientBeginUpdateOptions contains the optional parameters for the PrivateLinkServicesForSCCPowershellClient.BeginUpdate method. #### type [PrivateLinkServicesForSCCPowershellClientCreateOrUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L348) [¶](#PrivateLinkServicesForSCCPowershellClientCreateOrUpdateResponse) added in v0.2.0 ``` type PrivateLinkServicesForSCCPowershellClientCreateOrUpdateResponse struct { [PrivateLinkServicesForSCCPowershellDescription](#PrivateLinkServicesForSCCPowershellDescription) } ``` PrivateLinkServicesForSCCPowershellClientCreateOrUpdateResponse contains the response from method PrivateLinkServicesForSCCPowershellClient.BeginCreateOrUpdate. 
#### type [PrivateLinkServicesForSCCPowershellClientDeleteResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L353) [¶](#PrivateLinkServicesForSCCPowershellClientDeleteResponse) added in v0.2.0 ``` type PrivateLinkServicesForSCCPowershellClientDeleteResponse struct { } ``` PrivateLinkServicesForSCCPowershellClientDeleteResponse contains the response from method PrivateLinkServicesForSCCPowershellClient.BeginDelete. #### type [PrivateLinkServicesForSCCPowershellClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L844) [¶](#PrivateLinkServicesForSCCPowershellClientGetOptions) added in v0.2.0 ``` type PrivateLinkServicesForSCCPowershellClientGetOptions struct { } ``` PrivateLinkServicesForSCCPowershellClientGetOptions contains the optional parameters for the PrivateLinkServicesForSCCPowershellClient.Get method. #### type [PrivateLinkServicesForSCCPowershellClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L358) [¶](#PrivateLinkServicesForSCCPowershellClientGetResponse) added in v0.2.0 ``` type PrivateLinkServicesForSCCPowershellClientGetResponse struct { [PrivateLinkServicesForSCCPowershellDescription](#PrivateLinkServicesForSCCPowershellDescription) } ``` PrivateLinkServicesForSCCPowershellClientGetResponse contains the response from method PrivateLinkServicesForSCCPowershellClient.Get. 
#### type [PrivateLinkServicesForSCCPowershellClientListByResourceGroupOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L850) [¶](#PrivateLinkServicesForSCCPowershellClientListByResourceGroupOptions) added in v0.2.0 ``` type PrivateLinkServicesForSCCPowershellClientListByResourceGroupOptions struct { } ``` PrivateLinkServicesForSCCPowershellClientListByResourceGroupOptions contains the optional parameters for the PrivateLinkServicesForSCCPowershellClient.NewListByResourceGroupPager method. #### type [PrivateLinkServicesForSCCPowershellClientListByResourceGroupResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L363) [¶](#PrivateLinkServicesForSCCPowershellClientListByResourceGroupResponse) added in v0.2.0 ``` type PrivateLinkServicesForSCCPowershellClientListByResourceGroupResponse struct { [PrivateLinkServicesForSCCPowershellDescriptionListResult](#PrivateLinkServicesForSCCPowershellDescriptionListResult) } ``` PrivateLinkServicesForSCCPowershellClientListByResourceGroupResponse contains the response from method PrivateLinkServicesForSCCPowershellClient.NewListByResourceGroupPager. 
#### type [PrivateLinkServicesForSCCPowershellClientListOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L856) [¶](#PrivateLinkServicesForSCCPowershellClientListOptions) added in v0.2.0 ``` type PrivateLinkServicesForSCCPowershellClientListOptions struct { } ``` PrivateLinkServicesForSCCPowershellClientListOptions contains the optional parameters for the PrivateLinkServicesForSCCPowershellClient.NewListPager method. #### type [PrivateLinkServicesForSCCPowershellClientListResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L368) [¶](#PrivateLinkServicesForSCCPowershellClientListResponse) added in v0.2.0 ``` type PrivateLinkServicesForSCCPowershellClientListResponse struct { [PrivateLinkServicesForSCCPowershellDescriptionListResult](#PrivateLinkServicesForSCCPowershellDescriptionListResult) } ``` PrivateLinkServicesForSCCPowershellClientListResponse contains the response from method PrivateLinkServicesForSCCPowershellClient.NewListPager. 
#### type [PrivateLinkServicesForSCCPowershellClientUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L373) [¶](#PrivateLinkServicesForSCCPowershellClientUpdateResponse) added in v0.2.0 ``` type PrivateLinkServicesForSCCPowershellClientUpdateResponse struct { [PrivateLinkServicesForSCCPowershellDescription](#PrivateLinkServicesForSCCPowershellDescription) } ``` PrivateLinkServicesForSCCPowershellClientUpdateResponse contains the response from method PrivateLinkServicesForSCCPowershellClient.BeginUpdate. #### type [PrivateLinkServicesForSCCPowershellDescription](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L861) [¶](#PrivateLinkServicesForSCCPowershellDescription) ``` type PrivateLinkServicesForSCCPowershellDescription struct { // REQUIRED; The kind of the service. Kind *[Kind](#Kind) // REQUIRED; The resource location. Location *[string](/builtin#string) // An etag associated with the resource, used for optimistic concurrency when editing it. Etag *[string](/builtin#string) // Setting indicating whether the service has a managed identity associated with it. Identity *[ServicesResourceIdentity](#ServicesResourceIdentity) // The common properties of a service. Properties *[ServicesProperties](#ServicesProperties) // The resource tags. Tags map[[string](/builtin#string)]*[string](/builtin#string) // READ-ONLY; The resource identifier. ID *[string](/builtin#string) // READ-ONLY; The resource name. Name *[string](/builtin#string) // READ-ONLY; Required property for system data SystemData *[SystemData](#SystemData) // READ-ONLY; The resource type. 
Type *[string](/builtin#string) } ``` PrivateLinkServicesForSCCPowershellDescription - The description of the service. #### func (PrivateLinkServicesForSCCPowershellDescription) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L984) [¶](#PrivateLinkServicesForSCCPowershellDescription.MarshalJSON) ``` func (p [PrivateLinkServicesForSCCPowershellDescription](#PrivateLinkServicesForSCCPowershellDescription)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type PrivateLinkServicesForSCCPowershellDescription. #### func (*PrivateLinkServicesForSCCPowershellDescription) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1000) [¶](#PrivateLinkServicesForSCCPowershellDescription.UnmarshalJSON) added in v0.6.0 ``` func (p *[PrivateLinkServicesForSCCPowershellDescription](#PrivateLinkServicesForSCCPowershellDescription)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type PrivateLinkServicesForSCCPowershellDescription. #### type [PrivateLinkServicesForSCCPowershellDescriptionListResult](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L894) [¶](#PrivateLinkServicesForSCCPowershellDescriptionListResult) ``` type PrivateLinkServicesForSCCPowershellDescriptionListResult struct { // A list of service description objects. 
Value []*[PrivateLinkServicesForSCCPowershellDescription](#PrivateLinkServicesForSCCPowershellDescription) // READ-ONLY; The link used to get the next page of service description objects. NextLink *[string](/builtin#string) } ``` PrivateLinkServicesForSCCPowershellDescriptionListResult - A list of service description objects with a next link. #### func (PrivateLinkServicesForSCCPowershellDescriptionListResult) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1047) [¶](#PrivateLinkServicesForSCCPowershellDescriptionListResult.MarshalJSON) ``` func (p [PrivateLinkServicesForSCCPowershellDescriptionListResult](#PrivateLinkServicesForSCCPowershellDescriptionListResult)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type PrivateLinkServicesForSCCPowershellDescriptionListResult. #### func (*PrivateLinkServicesForSCCPowershellDescriptionListResult) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1055) [¶](#PrivateLinkServicesForSCCPowershellDescriptionListResult.UnmarshalJSON) added in v0.6.0 ``` func (p *[PrivateLinkServicesForSCCPowershellDescriptionListResult](#PrivateLinkServicesForSCCPowershellDescriptionListResult)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type PrivateLinkServicesForSCCPowershellDescriptionListResult. 
#### type [ProvisioningState](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/constants.go#L132) [¶](#ProvisioningState) ``` type ProvisioningState [string](/builtin#string) ``` ProvisioningState - The provisioning state. ``` const ( ProvisioningStateAccepted [ProvisioningState](#ProvisioningState) = "Accepted" ProvisioningStateCanceled [ProvisioningState](#ProvisioningState) = "Canceled" ProvisioningStateCreating [ProvisioningState](#ProvisioningState) = "Creating" ProvisioningStateDeleting [ProvisioningState](#ProvisioningState) = "Deleting" ProvisioningStateDeprovisioned [ProvisioningState](#ProvisioningState) = "Deprovisioned" ProvisioningStateFailed [ProvisioningState](#ProvisioningState) = "Failed" ProvisioningStateSucceeded [ProvisioningState](#ProvisioningState) = "Succeeded" ProvisioningStateUpdating [ProvisioningState](#ProvisioningState) = "Updating" ProvisioningStateVerifying [ProvisioningState](#ProvisioningState) = "Verifying" ) ``` #### func [PossibleProvisioningStateValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/constants.go#L147) [¶](#PossibleProvisioningStateValues) ``` func PossibleProvisioningStateValues() [][ProvisioningState](#ProvisioningState) ``` PossibleProvisioningStateValues returns the possible values for the ProvisioningState const type. 
#### type [PublicNetworkAccess](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/constants.go#L162) [¶](#PublicNetworkAccess) ``` type PublicNetworkAccess [string](/builtin#string) ``` PublicNetworkAccess - Control permission for data plane traffic coming from public networks while private endpoint is enabled. ``` const ( PublicNetworkAccessDisabled [PublicNetworkAccess](#PublicNetworkAccess) = "Disabled" PublicNetworkAccessEnabled [PublicNetworkAccess](#PublicNetworkAccess) = "Enabled" ) ``` #### func [PossiblePublicNetworkAccessValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/constants.go#L170) [¶](#PossiblePublicNetworkAccessValues) ``` func PossiblePublicNetworkAccessValues() [][PublicNetworkAccess](#PublicNetworkAccess) ``` PossiblePublicNetworkAccessValues returns the possible values for the PublicNetworkAccess const type. #### type [Resource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L903) [¶](#Resource) ``` type Resource struct { // READ-ONLY; Fully qualified resource ID for the resource. Ex - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName} ID *[string](/builtin#string) // READ-ONLY; The name of the resource Name *[string](/builtin#string) // READ-ONLY; The type of the resource. E.g. 
"Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts" Type *[string](/builtin#string) } ``` Resource - Common fields that are returned in the response for all Azure Resource Manager resources #### func (Resource) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1078) [¶](#Resource.MarshalJSON) added in v0.6.0 ``` func (r [Resource](#Resource)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type Resource. #### func (*Resource) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1087) [¶](#Resource.UnmarshalJSON) added in v0.6.0 ``` func (r *[Resource](#Resource)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type Resource. #### type [ServiceAccessPolicyEntry](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L915) [¶](#ServiceAccessPolicyEntry) ``` type ServiceAccessPolicyEntry struct { // REQUIRED; An Azure AD object ID (User or Apps) that is allowed access to the FHIR service. ObjectID *[string](/builtin#string) } ``` ServiceAccessPolicyEntry - An access policy entry. 
#### func (ServiceAccessPolicyEntry) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1113) [¶](#ServiceAccessPolicyEntry.MarshalJSON) added in v0.6.0 ``` func (s [ServiceAccessPolicyEntry](#ServiceAccessPolicyEntry)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type ServiceAccessPolicyEntry. #### func (*ServiceAccessPolicyEntry) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1120) [¶](#ServiceAccessPolicyEntry.UnmarshalJSON) added in v0.6.0 ``` func (s *[ServiceAccessPolicyEntry](#ServiceAccessPolicyEntry)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type ServiceAccessPolicyEntry. 
#### type [ServiceAuthenticationConfigurationInfo](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L921) [¶](#ServiceAuthenticationConfigurationInfo) ``` type ServiceAuthenticationConfigurationInfo struct { // The audience url for the service Audience *[string](/builtin#string) // The authority url for the service Authority *[string](/builtin#string) // If the SMART on FHIR proxy is enabled SmartProxyEnabled *[bool](/builtin#bool) } ``` ServiceAuthenticationConfigurationInfo - Authentication configuration information #### func (ServiceAuthenticationConfigurationInfo) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1140) [¶](#ServiceAuthenticationConfigurationInfo.MarshalJSON) added in v0.6.0 ``` func (s [ServiceAuthenticationConfigurationInfo](#ServiceAuthenticationConfigurationInfo)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type ServiceAuthenticationConfigurationInfo. #### func (*ServiceAuthenticationConfigurationInfo) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1149) [¶](#ServiceAuthenticationConfigurationInfo.UnmarshalJSON) added in v0.6.0 ``` func (s *[ServiceAuthenticationConfigurationInfo](#ServiceAuthenticationConfigurationInfo)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type ServiceAuthenticationConfigurationInfo. 
#### type [ServiceCorsConfigurationInfo](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L933) [¶](#ServiceCorsConfigurationInfo) ``` type ServiceCorsConfigurationInfo struct { // If credentials are allowed via CORS. AllowCredentials *[bool](/builtin#bool) // The headers to be allowed via CORS. Headers []*[string](/builtin#string) // The max age to be allowed via CORS. MaxAge *[int64](/builtin#int64) // The methods to be allowed via CORS. Methods []*[string](/builtin#string) // The origins to be allowed via CORS. Origins []*[string](/builtin#string) } ``` ServiceCorsConfigurationInfo - The settings for the CORS configuration of the service instance. #### func (ServiceCorsConfigurationInfo) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1175) [¶](#ServiceCorsConfigurationInfo.MarshalJSON) ``` func (s [ServiceCorsConfigurationInfo](#ServiceCorsConfigurationInfo)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type ServiceCorsConfigurationInfo. #### func (*ServiceCorsConfigurationInfo) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1186) [¶](#ServiceCorsConfigurationInfo.UnmarshalJSON) added in v0.6.0 ``` func (s *[ServiceCorsConfigurationInfo](#ServiceCorsConfigurationInfo)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type ServiceCorsConfigurationInfo. 
#### type [ServiceCosmosDbConfigurationInfo](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L951) [¶](#ServiceCosmosDbConfigurationInfo) ``` type ServiceCosmosDbConfigurationInfo struct { // The URI of the customer-managed key for the backing database. KeyVaultKeyURI *[string](/builtin#string) // The provisioned throughput for the backing database. OfferThroughput *[int64](/builtin#int64) } ``` ServiceCosmosDbConfigurationInfo - The settings for the Cosmos DB database backing the service. #### func (ServiceCosmosDbConfigurationInfo) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1218) [¶](#ServiceCosmosDbConfigurationInfo.MarshalJSON) added in v0.6.0 ``` func (s [ServiceCosmosDbConfigurationInfo](#ServiceCosmosDbConfigurationInfo)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type ServiceCosmosDbConfigurationInfo. #### func (*ServiceCosmosDbConfigurationInfo) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1226) [¶](#ServiceCosmosDbConfigurationInfo.UnmarshalJSON) added in v0.6.0 ``` func (s *[ServiceCosmosDbConfigurationInfo](#ServiceCosmosDbConfigurationInfo)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type ServiceCosmosDbConfigurationInfo. 
#### type [ServiceExportConfigurationInfo](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L960) [¶](#ServiceExportConfigurationInfo) ``` type ServiceExportConfigurationInfo struct { // The name of the default export storage account. StorageAccountName *[string](/builtin#string) } ``` ServiceExportConfigurationInfo - Export operation configuration information #### func (ServiceExportConfigurationInfo) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1249) [¶](#ServiceExportConfigurationInfo.MarshalJSON) added in v0.6.0 ``` func (s [ServiceExportConfigurationInfo](#ServiceExportConfigurationInfo)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type ServiceExportConfigurationInfo. #### func (*ServiceExportConfigurationInfo) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1256) [¶](#ServiceExportConfigurationInfo.UnmarshalJSON) added in v0.6.0 ``` func (s *[ServiceExportConfigurationInfo](#ServiceExportConfigurationInfo)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type ServiceExportConfigurationInfo. 
#### type [ServicesClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/services_client.go#L26) [¶](#ServicesClient) ``` type ServicesClient struct { // contains filtered or unexported fields } ``` ServicesClient contains the methods for the Services group. Don't use this type directly, use NewServicesClient() instead. #### func [NewServicesClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/services_client.go#L35) [¶](#NewServicesClient) ``` func NewServicesClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[ServicesClient](#ServicesClient), [error](/builtin#error)) ``` NewServicesClient creates a new instance of ServicesClient with the specified values. * subscriptionID - The subscription identifier. * credential - used to authorize requests. Usually a credential from azidentity. * options - pass nil to accept the default values. 
#### func (*ServicesClient) [BeginDelete](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/services_client.go#L54) [¶](#ServicesClient.BeginDelete) ``` func (client *[ServicesClient](#ServicesClient)) BeginDelete(ctx [context](/context).[Context](/context#Context), resourceGroupName [string](/builtin#string), resourceName [string](/builtin#string), options *[ServicesClientBeginDeleteOptions](#ServicesClientBeginDeleteOptions)) (*[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Poller](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Poller)[[ServicesClientDeleteResponse](#ServicesClientDeleteResponse)], [error](/builtin#error)) ``` BeginDelete - Delete a service instance. If the operation fails it returns an *azcore.ResponseError type. Generated from API version 2021-03-25-preview * resourceGroupName - The name of the resource group that contains the service instance. * resourceName - The name of the service instance. * options - ServicesClientBeginDeleteOptions contains the optional parameters for the ServicesClient.BeginDelete method. 
Example [¶](#example-ServicesClient.BeginDelete)

Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/d55b8005f05b040b852c15e74a0f3e36494a15e1/specification/m365securityandcompliance/resource-manager/Microsoft.M365SecurityAndCompliance/preview/2021-03-25-preview/examples/EdmUploadServiceDelete.json>

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("failed to obtain a credential: %v", err)
	}
	ctx := context.Background()
	clientFactory, err := armm365securityandcompliance.NewClientFactory("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	poller, err := clientFactory.NewServicesClient().BeginDelete(ctx, "rg1", "service1", nil)
	if err != nil {
		log.Fatalf("failed to finish the request: %v", err)
	}
	_, err = poller.PollUntilDone(ctx, nil)
	if err != nil {
		log.Fatalf("failed to poll the result: %v", err)
	}
}
```

#### type [ServicesClientBeginDeleteOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L966) [¶](#ServicesClientBeginDeleteOptions) added in v0.2.0 ``` type ServicesClientBeginDeleteOptions struct { // Resumes the LRO from the provided token. ResumeToken [string](/builtin#string) } ``` ServicesClientBeginDeleteOptions contains the optional parameters for the ServicesClient.BeginDelete method.
#### type [ServicesClientDeleteResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/response_types.go#L378) [¶](#ServicesClientDeleteResponse) added in v0.2.0 ``` type ServicesClientDeleteResponse struct { } ``` ServicesClientDeleteResponse contains the response from method ServicesClient.BeginDelete. #### type [ServicesPatchDescription](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L972) [¶](#ServicesPatchDescription) ``` type ServicesPatchDescription struct { // The properties for updating a service instance. Properties *[ServicesPropertiesUpdateParameters](#ServicesPropertiesUpdateParameters) // Instance tags Tags map[[string](/builtin#string)]*[string](/builtin#string) } ``` ServicesPatchDescription - The description of the service. #### func (ServicesPatchDescription) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1276) [¶](#ServicesPatchDescription.MarshalJSON) ``` func (s [ServicesPatchDescription](#ServicesPatchDescription)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type ServicesPatchDescription. 
#### func (*ServicesPatchDescription) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1284) [¶](#ServicesPatchDescription.UnmarshalJSON) added in v0.6.0 ``` func (s *[ServicesPatchDescription](#ServicesPatchDescription)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type ServicesPatchDescription. #### type [ServicesProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L981) [¶](#ServicesProperties) ``` type ServicesProperties struct { // The access policies of the service instance. AccessPolicies []*[ServiceAccessPolicyEntry](#ServiceAccessPolicyEntry) // The authentication configuration for the service instance. AuthenticationConfiguration *[ServiceAuthenticationConfigurationInfo](#ServiceAuthenticationConfigurationInfo) // The settings for the CORS configuration of the service instance. CorsConfiguration *[ServiceCorsConfigurationInfo](#ServiceCorsConfigurationInfo) // The settings for the Cosmos DB database backing the service. CosmosDbConfiguration *[ServiceCosmosDbConfigurationInfo](#ServiceCosmosDbConfigurationInfo) // The settings for the export operation of the service instance. ExportConfiguration *[ServiceExportConfigurationInfo](#ServiceExportConfigurationInfo) // The list of private endpoint connections that are set up for this resource. PrivateEndpointConnections []*[PrivateEndpointConnection](#PrivateEndpointConnection) // Control permission for data plane traffic coming from public networks while private endpoint is enabled. 
PublicNetworkAccess *[PublicNetworkAccess](#PublicNetworkAccess) // READ-ONLY; The provisioning state. ProvisioningState *[ProvisioningState](#ProvisioningState) } ``` ServicesProperties - The properties of a service instance. #### func (ServicesProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1307) [¶](#ServicesProperties.MarshalJSON) ``` func (s [ServicesProperties](#ServicesProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type ServicesProperties. #### func (*ServicesProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1321) [¶](#ServicesProperties.UnmarshalJSON) added in v0.6.0 ``` func (s *[ServicesProperties](#ServicesProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type ServicesProperties. #### type [ServicesPropertiesUpdateParameters](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L1008) [¶](#ServicesPropertiesUpdateParameters) ``` type ServicesPropertiesUpdateParameters struct { // Control permission for data plane traffic coming from public networks while private endpoint is enabled. PublicNetworkAccess *[PublicNetworkAccess](#PublicNetworkAccess) } ``` ServicesPropertiesUpdateParameters - The properties for updating a service instance. 
#### func (ServicesPropertiesUpdateParameters) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1362) [¶](#ServicesPropertiesUpdateParameters.MarshalJSON) added in v0.6.0 ``` func (s [ServicesPropertiesUpdateParameters](#ServicesPropertiesUpdateParameters)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type ServicesPropertiesUpdateParameters. #### func (*ServicesPropertiesUpdateParameters) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1369) [¶](#ServicesPropertiesUpdateParameters.UnmarshalJSON) added in v0.6.0 ``` func (s *[ServicesPropertiesUpdateParameters](#ServicesPropertiesUpdateParameters)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type ServicesPropertiesUpdateParameters. #### type [ServicesResource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L1014) [¶](#ServicesResource) ``` type ServicesResource struct { // REQUIRED; The kind of the service. Kind *[Kind](#Kind) // REQUIRED; The resource location. Location *[string](/builtin#string) // An etag associated with the resource, used for optimistic concurrency when editing it. Etag *[string](/builtin#string) // Setting indicating whether the service has a managed identity associated with it. Identity *[ServicesResourceIdentity](#ServicesResourceIdentity) // The resource tags. 
Tags map[[string](/builtin#string)]*[string](/builtin#string) // READ-ONLY; The resource identifier. ID *[string](/builtin#string) // READ-ONLY; The resource name. Name *[string](/builtin#string) // READ-ONLY; Required property for system data SystemData *[SystemData](#SystemData) // READ-ONLY; The resource type. Type *[string](/builtin#string) } ``` ServicesResource - The common properties of a service. #### func (ServicesResource) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1389) [¶](#ServicesResource.MarshalJSON) ``` func (s [ServicesResource](#ServicesResource)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type ServicesResource. #### func (*ServicesResource) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1404) [¶](#ServicesResource.UnmarshalJSON) added in v0.6.0 ``` func (s *[ServicesResource](#ServicesResource)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type ServicesResource. #### type [ServicesResourceIdentity](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L1044) [¶](#ServicesResourceIdentity) ``` type ServicesResourceIdentity struct { // Type of identity being specified, currently SystemAssigned and None are allowed. Type *[ManagedServiceIdentityType](#ManagedServiceIdentityType) // READ-ONLY; The principal ID of the resource identity. 
PrincipalID *[string](/builtin#string) // READ-ONLY; The tenant ID of the resource. TenantID *[string](/builtin#string) } ``` ServicesResourceIdentity - Setting indicating whether the service has a managed identity associated with it. #### func (ServicesResourceIdentity) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1448) [¶](#ServicesResourceIdentity.MarshalJSON) added in v0.6.0 ``` func (s [ServicesResourceIdentity](#ServicesResourceIdentity)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type ServicesResourceIdentity. #### func (*ServicesResourceIdentity) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1457) [¶](#ServicesResourceIdentity.UnmarshalJSON) added in v0.6.0 ``` func (s *[ServicesResourceIdentity](#ServicesResourceIdentity)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type ServicesResourceIdentity. #### type [SystemData](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models.go#L1056) [¶](#SystemData) ``` type SystemData struct { // The timestamp of resource creation (UTC). CreatedAt *[time](/time).[Time](/time#Time) // The identity that created the resource. CreatedBy *[string](/builtin#string) // The type of identity that created the resource. 
CreatedByType *[CreatedByType](#CreatedByType) // The timestamp of resource last modification (UTC) LastModifiedAt *[time](/time).[Time](/time#Time) // The identity that last modified the resource. LastModifiedBy *[string](/builtin#string) // The type of identity that last modified the resource. LastModifiedByType *[CreatedByType](#CreatedByType) } ``` SystemData - Metadata pertaining to creation and last modification of the resource. #### func (SystemData) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1483) [¶](#SystemData.MarshalJSON) ``` func (s [SystemData](#SystemData)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalJSON implements the json.Marshaller interface for type SystemData. #### func (*SystemData) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/v0.6.1/sdk/resourcemanager/m365securityandcompliance/armm365securityandcompliance/models_serde.go#L1495) [¶](#SystemData.UnmarshalJSON) ``` func (s *[SystemData](#SystemData)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error) ``` UnmarshalJSON implements the json.Unmarshaller interface for type SystemData.
redix_sharding v0.1.0 API Reference === Modules --- [RedixSharding](RedixSharding.html) [RedixSharding.Utils](RedixSharding.Utils.html) redix_sharding v0.1.0 RedixSharding === Summary === [Functions](#functions) --- [child_spec(opts)](#child_spec/1) [command(cmd)](#command/1) [command(conn, command, opts \\ [])](#command/3) [pipeline(cmds)](#pipeline/1) [pipeline(conn, commands, opts \\ [])](#pipeline/3) [start_link(opts)](#start_link/1) Functions === child_spec(opts) command(cmd) command(conn, command, opts \\ []) pipeline(cmds) pipeline(conn, commands, opts \\ []) start_link(opts) redix_sharding v0.1.0 RedixSharding.Utils === Summary === [Functions](#functions) --- [eval_keys(keys)](#eval_keys/1) [mset_keys(keys)](#mset_keys/1) [shard_command(command, configs)](#shard_command/2) [special_commands()](#special_commands/0) [to_integer(i)](#to_integer/1) [to_string(s)](#to_string/1) [unsupported_commands()](#unsupported_commands/0) [zstore_keys(keys)](#zstore_keys/1) Functions === eval_keys(keys) mset_keys(keys) shard_command(command, configs) special_commands() to_integer(i) to_string(s) unsupported_commands() zstore_keys(keys)
SHARPy 1.1.1 documentation [SHARPy](index.html#document-index) --- Simulation of High Aspect Ratio planes in Python [SHARPy][¶](#simulation-of-high-aspect-ratio-planes-in-python-sharpy) === Welcome to SHARPy (Simulation of High Aspect Ratio aeroplanes in Python)! SHARPy is an aeroelastic analysis package currently under development at the Department of Aeronautics, Imperial College London. It can be used for the structural, aerodynamic, aeroelastic and flight dynamics analysis of flexible aircraft, flying wings and wind turbines. Amongst other [capabilities](./content/capabilities.html), it offers the following solutions to the user: * Static aerodynamic, structural and aeroelastic solutions * Finding trim conditions for aeroelastic configurations * Nonlinear, dynamic time-domain simulations under a large number of conditions such as: > + Prescribed trajectories. > + Free flight. > + Dynamic follower forces. > + Control inputs in thrust, control surface deflection… > + Arbitrary time-domain gusts, including non span-constant ones. > + Full 3D turbulent fields. > * Multibody dynamics with hinges, articulations and prescribed nodal motions. > + Applicable to wind turbines. > + Hinged aircraft. > + Catapult assisted takeoffs. > * Linear analysis > + Linearisation around a nonlinear equilibrium. > + Frequency response analysis. > + Asymptotic stability analysis. > * Model order reduction > + Krylov-subspace reduction methods. > + Balancing reduction methods. The modular design of SHARPy allows the simulation of complex aeroelastic cases involving very flexible aircraft. The structural solver supports very complex beam arrangements, while retaining geometrical nonlinearity. The UVLM solver features different wake modelling fidelities while supporting large lifting surface deformations in a native way. Detailed information on each of the solvers is presented in their respective documentation packages. 
Contents[¶](#contents) --- ### SHARPy Installation Guide[¶](#sharpy-installation-guide) **Last revision 3 February 2020** The following step-by-step tutorial will guide you through the installation process of SHARPy. #### Requirements[¶](#requirements) **Operating System Requirements** SHARPy is being developed and tested on the following operating systems: * CentOS 7 and CentOS 8 * Ubuntu 18.04 LTS * MacOS Mojave and Catalina It is also available on the vast majority of operating systems that are supported by Docker, including Windows! **Required Distributions** * Anaconda Python 3.7 * GCC 6.0 or higher (recommended), with C++ and Fortran support. **Recommended Software** You may find the applications below useful; we recommend you use them but cannot provide any direct support. * [HDFView](https://portal.hdfgroup.org/display/HDFVIEW/HDFView) to read and view `.h5` files. HDF5 is the SHARPy input file format. * [Paraview](https://www.paraview.org/) to visualise SHARPy’s output. **GitHub Repository** * [SHARPy](http://github.com/imperialcollegelondon/sharpy) SHARPy can be installed from the source code available on GitHub or you can get it packed in a Docker container. If what you want is to give it a go and run some static or simple dynamic cases (and you are familiar with Docker), we recommend the [Docker route](#using-sharpy-from-a-docker-container). If you want to check the code, modify it and compile the libraries with custom flags, build it from source (recommended). #### Building SHARPy from source (release or development builds)[¶](#building-sharpy-from-source-release-or-development-builds) SHARPy can be built from source so that you can get the latest release or (stable) development build. SHARPy depends on two external libraries, [xbeam](http://github.com/imperialcollegelondon/xbeam) and [UVLM](http://github.com/imperialcollegelondon/UVLM). 
These are included as submodules to SHARPy and therefore once you initialise SHARPy you will also automatically clone the relevant versions of each library. ##### Set up the folder structure[¶](#set-up-the-folder-structure) 1. Clone `sharpy` in your desired location, if you agree with the license in `license.txt` ``` git clone --recursive http://github.com/ImperialCollegeLondon/sharpy ``` The `--recursive` flag will also initialise and update the submodules SHARPy depends on, [xbeam](http://github.com/imperialcollegelondon/xbeam) and [UVLM](http://github.com/imperialcollegelondon/UVLM). 2. We will now set up the SHARPy environment that will install other required distributions. ##### Setting up the Python Environment[¶](#setting-up-the-python-environment) SHARPy uses the Anaconda package manager to provide the necessary Python packages. These are specified in an Anaconda environment that must be activated prior to compiling the xbeam and UVLM libraries or running any SHARPy cases. 1. If you do not have it, install the [Anaconda](https://conda.io/docs/) Python 3 distribution 2. Make sure your Python version is at least 3.7: ``` python --version ``` 3. Create the conda environment that SHARPy will use. Use the `environment_macos.yml` file instead of `environment_linux.yml` if you are installing SHARPy on Mac OS X ``` cd sharpy/utils conda env create -f environment_linux.yml cd ../.. ``` We also provide a light-weight environment with the minimum required dependencies. If you’d like to use it, create the conda environment using `environment_minimal.yml`. 4. Activate the `sharpy_env` conda environment ``` conda activate sharpy_env ``` You need to do this before you compile the `xbeam` and `uvlm` libraries, as some dependencies are included in the conda environment. If you would like to use the minimal environment you can run `conda activate sharpy_minimal`. 
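The requirements above (Python at least 3.7, plus `conda` and the GNU compilers used later in the build) can be checked programmatically before attempting to compile anything. The snippet below is not part of SHARPy; it is a small convenience check, and the tool list is only an assumption based on the steps in this guide:

```python
import shutil
import sys

def missing_tools(required=("conda", "cmake", "gfortran", "g++")):
    """Return the build prerequisites that cannot be found on PATH."""
    return [tool for tool in required if shutil.which(tool) is None]

# SHARPy requires Python >= 3.7
if sys.version_info < (3, 7):
    print("Python 3.7 or newer is required, found %s.%s" % sys.version_info[:2])

missing = missing_tools()
if missing:
    print("Install these before building SHARPy:", ", ".join(missing))
else:
    print("All build prerequisites found on PATH.")
```

Running this inside the activated `sharpy_env` environment also confirms that `conda` put the right interpreter first on your PATH.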
##### Quick install[¶](#quick-install) The quick install is geared towards getting the release build of SHARPy running as quickly and simply as possible. If you would like to install a develop build or modify the compilation settings of the libraries, skip to the next section. 1. Move into the cloned repository ``` cd sharpy ``` 2. Ensure that the SHARPy environment is active in the session. Your terminal prompt line should begin with ``` (sharpy_env) [usr@host] $ ``` If that is not the case, activate the environment. Otherwise xbeam and UVLM will not compile ``` conda activate sharpy_env ``` 3. Create a directory `build` that will be used during CMake’s building process and `cd` into it: ``` mkdir build cd build ``` 4. Prepare UVLM and xbeam for compilation using `gfortran` and `g++` in their release builds by running the command below. If you’d like to change compilers, see the Custom Installation. ``` cmake .. ``` 5. Compile the libraries ``` make install -j 4 ``` where the number after the `-j` flag specifies how many cores to use during installation. 6. Finally, load the SHARPy variables ``` source bin/sharpy_vars.sh ``` **You are ready to run SHARPy**. Continue reading the [Running SHARPy](#running-sharpy) section. ##### Custom installation[¶](#custom-installation) These steps will show you how to compile the xbeam and UVLM libraries such that you can modify the compilation settings to your taste. 1. Ensure that the SHARPy environment is loaded in your session ``` conda activate sharpy_env ``` 2. If you want to use SHARPy’s latest release, skip this step. If you would like to use the latest development work, you will need to check out the `develop` branch. For more info on how we structure our development and what branches are used for what kind of features, have a look at the [Contributing](contributing.html) page. ``` git checkout -b develop --track origin/develop ``` This command will check out the `develop` branch and set it to track the remote origin. 3. 
Run CMake with custom flags: 1. Choose your compilers for Fortran `FC` and C++ `CXX`, for instance ``` FC=gfortran CXX=g++ cmake .. ``` If you’d like to use the Intel compilers you can set them using: ``` FC=ifort CXX=icpc cmake .. ``` 2. To build the libraries in debug mode: ``` cmake -DCMAKE_BUILD_TYPE=Debug .. ``` 4. Compile the libraries and parallelise as you prefer ``` make install -j 4 ``` 5. This concludes the installation! Continue reading the [Running SHARPy](#running-sharpy) section. #### Using SHARPy from a Docker container[¶](#using-sharpy-from-a-docker-container) Docker containers are similar to lightweight virtual machines. The SHARPy container distributed through [Docker Hub](https://hub.docker.com/) is a CentOS 8 machine with the libraries compiled with `gfortran` and `g++` and an Anaconda Python distribution. Make sure your machine has Docker working. The instructions are here: [link](https://docs.docker.com/v17.09/engine/installation/). You might want to run a test in your terminal: ``` docker pull hello-world docker run hello-world ``` If this works, you’re good to go! First, obtain the SHARPy docker container: ``` docker pull fonsocarre/sharpy:stable ``` Now you can run it: ``` docker run --name sharpy -it fonsocarre/sharpy:stable ``` You should see a welcome dialog such as: ``` >>>> docker run -it fonsocarre/sharpy:stable SHARPy added to PATH from the directory: /sharpy_dir/bin === Welcome to the Docker image of SHARPy SHARPy is located in /sharpy_dir/ and the environment is already set up! Copyright Imperial College London. Released under BSD 3-Clause license. === SHARPy``` You are now good to go. It is important to note that a docker container runs as an independent operating system with no access to your hard drive. 
If you want to copy your own files, run the container and from another terminal run: ``` docker cp my_file.txt sharpy:/my_file.txt # copy from host to container docker cp sharpy:/my_file.txt my_file.txt # copy from container to host ``` The `sharpy:` part is the `--name` argument you wrote in the `docker run` command. You can run the test suite once inside the container as: ``` cd sharpy_dir python -m unittest ``` We make available two different releases: `stable` and `experimental`. The former is the latest SHARPy release. The latter is our latest development work, which includes new features but with a higher chance of encountering some bugs along the way. To obtain the experimental build, follow the instructions above replacing the `stable` tag with `experimental`. **Enjoy!** #### Running SHARPy[¶](#running-sharpy) In order to run SHARPy, you need to load the conda environment and load the SHARPy variables (so your computer knows where SHARPy is). Therefore, **before you run any SHARPy case**: 1. Activate the SHARPy conda environment ``` conda activate sharpy_env ``` 2. Load the SHARPy variables ``` source sharpy/bin/sharpy_vars.sh ``` You are now ready to run SHARPy cases from the terminal. ##### Automated tests[¶](#automated-tests) SHARPy uses unittests to verify the integrity of the code. These tests can be run from the `./sharpy` directory. ``` python -m unittest ``` The tests will run and you should see a success message. If you don’t… check the following options: * Check you are running the latest version. Running the following from the root directory should update to the latest release version: + `git pull` + `git submodule update --init --recursive` * If the tests don’t run, make sure you have followed the instructions correctly and that you managed to compile xbeam and UVLM. * If some tests fail, i.e. 
you get a message after the tests run saying that certain tests did not pass, please open an [issue](http://www.github.com/imperialcollegelondon/sharpy/issues) with the following information: + Operating system + Whether you did a Custom/quick install + UVLM and xbeam compiler of choice + A log of the tests that failed ##### The SHARPy Case Structure and input files[¶](#the-sharpy-case-structure-and-input-files) **Setting up a SHARPy case** SHARPy cases are usually structured in the following way: 1. A `generate_case.py` file: contains the setup of the problem like geometry, flight conditions etc. This script creates the output files that will then be used by SHARPy, namely: * The [structural](./casefiles.html#fem-file) `.fem.h5` file. * The [aerodynamic](./casefiles.html#aerodynamics-file) `.aero.h5` file. * [Simulation information](./casefiles.html#solver-configuration-file) and settings `.sharpy` file. * The dynamic forces file `.dyn.h5` (when required). * The linear input files `.lininput.h5` (when required). * The ROM settings file `.rom.h5` (when required). See the [chapter](./casefiles.html) on the case files for a detailed description of the contents of each one. Data is exchanged in binary format by means of `.h5` files that make the transmission efficient between the different languages of the required libraries. To view these `.h5` files, a viewer like [HDFView](https://portal.hdfgroup.org/display/support) is recommended. 2. The `h5` files contain the FEM, aerodynamics and dynamic condition data. They are later read by SHARPy. 3. The `.sharpy` file contains the settings for SHARPy and is the file that is passed to SHARPy. **To run a SHARPy case** SHARPy cases are therefore usually run in the following way: 1. Create a `generate_case.py` file following the provided templates. 2. Run it to produce the `.h5` files and the `.sharpy` files. ``` (sharpy_env) python generate_case.py ``` 3. Run SHARPy (ensure the environment is activated). 
``` (sharpy_env) sharpy case.sharpy ``` ###### Output[¶](#output) By default, the output is located in the `output` folder. The contents of the folder will typically be `beam` and `aero` folders, which contain the output data that can then be loaded in Paraview. These are the `.vtu` format files that can be used with [Paraview](https://www.paraview.org/). ##### Running (and modifying) a test case[¶](#running-and-modifiying-a-test-case) 1. This command generates the required files for running a static, clamped beam case that is used as part of code verification: ``` cd ../sharpy python ./tests/xbeam/geradin/generate_geradin.py ``` Now you should see a success message, and if you check the `./tests/xbeam/geradin/` folder, you should see two new files: * geradin_cardona.sharpy * geradin_cardona.fem.h5 Try to open the `sharpy` file with a plain text editor and have a quick look. The `sharpy` file is the main settings file. We’ll get deeper into this later. If you try to open the `fem.h5` file, you’ll get an error or something meaningless. This is because the structural data is stored in [HDF5](https://support.hdfgroup.org/HDF5/) format, which is compressed binary. 1. Run it (part 1) The `sharpy` call is: ``` # Make sure that the sharpy_env conda environment is active sharpy <path to solver file> ``` 2. Results (part 1) Since this is a test case, there is no output directly to screen. We will therefore change this setting first. In the `generate_geradin.py` file, look for the `SHARPy` setting `write_screen` and set it to `on`. This will output the progress of the execution to the terminal. We would also like to create a Paraview file to view the beam deformation. Append the post-processor `BeamPlot` to the end of the `SHARPy` setting `flow`, which is a list. This will run the post-processor and plot the beam in Paraview format with the settings specified in the `generate_geradin.py` file under `config['BeamPlot']`. 3. 
Run (part 2) Now that we have made these modifications, run the generation script again: ``` python ./tests/xbeam/geradin/generate_geradin.py ``` Check the solver file `geradin.sharpy` and look for the settings we just changed. Make sure they read what we wanted. You are now ready to run the case again: ``` # Make sure that the sharpy_env conda environment is active sharpy <path to solver file> ``` 4. Post-processing After a successful execution, you should see a long display of information in the terminal as the case is being executed. The deformed beam will have been written in a `.vtu` file and will be located in the `output/` folder (or wherever you specified in the settings), which you can open using Paraview. In the `output` directory you will also note a folder named `WriteVariablesTime` which outputs certain variables as a function of time to a `.dat` file. In this case, the beam tip position deflection and rotation are written. Check the values of those files and look for the following result: ``` Pos_def: 4.403530 0.000000 -2.159692 Psi_def: 0.000000 0.672006 0.000000 ``` FYI, the correct solution for this test case by Geradin and Cardona is `Delta R_3 = -2.159 m` and `Psi_2 = 0.6720 rad`. Congratulations, you’ve run your first case. You can now check the [Examples](examples.html) section for further cases. ### Capabilities[¶](#capabilities) This is just the tip of the iceberg: possibilities are nearly endless, and once you understand how SHARPy’s modular interface works, you will be capable of running very complex simulations. #### Very flexible aircraft nonlinear aeroelasticity[¶](#very-flexible-aircraft-nonlinear-aeroelasticity) The modular design of SHARPy allows the simulation of complex aeroelastic cases involving very flexible aircraft. The structural solver supports very complex beam arrangements, while retaining geometrical nonlinearity. 
The UVLM solver features different wake modelling fidelities while supporting large lifting surface deformations in a native way. Among the problems studied, a few interesting ones, in no particular order, are: * Catapult take-off analysis of a very flexible aircraft [[Paper]](https://arc.aiaa.org/doi/abs/10.2514/6.2019-2038). In this type of simulation, a PID controller was used in order to enforce displacements and velocities in a number of structural nodes (the clamping points). Then, several take-off strategies were studied in order to analyse the influence of the structural stiffness on this kind of procedure. This case is a very good example of the type of problem where nonlinear aeroelasticity is essential. *Catapult Takeoff of Flexible Aircraft* * Flight in a full 3D atmospheric boundary layer (to be published). A very flexible aircraft is flown immersed in a turbulent boundary layer obtained from HPC LES simulations. The results are compared against simpler turbulence models such as von Karman and Kaimal. Intermittency and coherence features in the LES field are absent or less remarkable in the synthetic turbulence fields. *HALE Aircraft in a Turbulent Field* * Lateral gust response of a realistic very flexible aircraft. For this problem (to be published), a realistic very flexible aircraft (University of Michigan X-HALE) model has been created in SHARPy and validated against their own aeroelastic solver for static and dynamic cases. A set of vertical and lateral gust responses have been simulated. *X-HALE* #### Wind turbine aeroelasticity[¶](#wind-turbine-aeroelasticity) SHARPy is suitable for simulating wind turbine aeroelasticity. On the structural side, it accounts for material anisotropy, which is needed to characterize composite blades, and for the geometrically non-linear deformations observed in current blades due to their increasing length and flexibility. 
Both rigid and flexible simulations can be performed, and the structural modes can be computed accounting for rotational effects (Campbell diagrams). The rotor-tower interaction is modelled through a multibody approach based on the theory of Lagrange multipliers. Finally, the tower base can be fixed or subjected to prescribed linear and angular velocities. On the aerodynamic side, the use of potential flow theory allows the characterization of flow unsteadiness at a reasonable computational cost. Specifically, steady and dynamic simulations can be performed. The steady simulations are carried out in a non-inertial frame of reference linked to the rotor under uniform steady wind with the assumption of a prescribed helicoidal wake. On the other hand, dynamic simulations can be enriched with a wide variety of incoming winds such as shear and yaw. Moreover, the wake shape can be freely computed under no assumptions, accounting for self-induction and wake expansion, or can be prescribed to a helicoidal shape for computational efficiency. *Wind Turbine* #### Model Order Reduction[¶](#model-order-reduction) Numerical models of physical phenomena require fine discretisations to show convergence and agreement with their real counterparts, and, in the case of SHARPy’s aeroelastic systems, hundreds of thousands of states are not uncommon. However, modern hardware or the use of these models for other applications such as controller synthesis may limit their size, and we must turn to model order reduction techniques to achieve lower dimensional representations that can then be used. SHARPy offers several model order reduction methods to reduce the initially large system to a lower dimension, attending to the user’s requirements of numerical efficiency or global error bound. 
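The projection idea behind these reduction methods, and the Krylov flavour described next, can be illustrated on a toy system. The sketch below is purely illustrative (a random stable state-space model and plain NumPy, not SHARPy's solvers): an orthonormal basis V of the Krylov subspace span{A⁻¹B, A⁻²B, ...} is built with the Arnoldi iteration, and the Galerkin projection Ar = VᵀAV, Br = VᵀB, Cr = CV yields a reduced model whose transfer function matches the full one at the expansion point s0 = 0.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 6  # full and reduced orders (illustrative sizes)

# A toy stable LTI system: x' = A x + B u, y = C x
A = -np.diag(np.linspace(1.0, 50.0, n)) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# Arnoldi iteration: orthonormal basis of K_r(A^{-1}, A^{-1} B),
# i.e. moment matching about the expansion point s0 = 0 (zero frequency)
V = np.zeros((n, r))
v = np.linalg.solve(A, B[:, 0])
V[:, 0] = v / np.linalg.norm(v)
for k in range(1, r):
    w = np.linalg.solve(A, V[:, k - 1])
    for j in range(k):                       # modified Gram-Schmidt
        w -= (V[:, j] @ w) * V[:, j]
    V[:, k] = w / np.linalg.norm(w)

# Galerkin projection onto span(V)
Ar, Br, Cr = V.T @ A @ V, V.T @ B, C @ V

def tf(A_, B_, C_, s):
    """Transfer function C (sI - A)^{-1} B evaluated at s."""
    return (C_ @ np.linalg.solve(s * np.eye(A_.shape[0]) - A_, B_))[0, 0]

# The reduced model reproduces the full response at low frequency
for w_ in (0.0, 0.5, 2.0):
    print(f"w={w_}: full={tf(A, B, C, 1j * w_):.6f}  rom={tf(Ar, Br, Cr, 1j * w_):.6f}")
```

As in the examples that follow, the match is exact at the expansion point and degrades as the evaluation frequency moves away from it.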
##### Krylov Methods for Model Order Reduction - Moment Matching[¶](#krylov-methods-for-model-order-reduction-moment-matching) Model reduction by moment matching can be seen as approximating a transfer function through a power series expansion about a user-defined point in the complex plane. The reduction by projection retains the moments between the full and reduced systems as long as the projection matrices span certain Krylov subspaces dependent on the expansion point and the system’s matrices. This can be taken advantage of in aeroelastic applications, where the interest resides in the low-frequency behaviour of the system: the ROM can be expanded about these low-frequency points, discarding accuracy higher up the frequency spectrum. ###### Example 1 - Aerodynamics - Frequency response of a high AR flat plate subject to a sinusoidal gust[¶](#example-1-aerodynamics-frequency-response-of-a-high-ar-flat-plate-subject-to-a-sinusoidal-gust) The objective is to compare SHARPy’s solution of a very high aspect ratio flat plate subject to a sinusoidal gust to the closed form solution obtained by Sears (1944 - Ref). SHARPy’s inherent 3D nature makes comparing results to the 2D solution require very high aspect ratio wings with fine discretisations, resulting in very large state space models. In this case, we would like to utilise a Krylov ROM to approximate the low-frequency behaviour and perform a frequency response analysis on the reduced system, since performing it on the full system would represent too much computational cost. The full order model was reduced utilising Krylov methods, in particular the Arnoldi iteration, with an expansion about zero frequency to produce the following result. *Sears Gust Bode Plot* As can be seen from the image above, the ROM approximates the low-frequency, quasi-steady state well and loses accuracy as the frequency is increased, just as intended. 
Still, perfect matching is never achieved, even at the expansion frequency, given the 3D nature of the wing compared to the 2D analytical solution. ###### Example 2 - Aeroelastics - Flutter analysis of a Goland wing with modal projection[¶](#example-2-aeroelastics-flutter-analysis-of-a-goland-wing-with-modal-projection) The Goland wing flutter example is presented next. The aerodynamic surface is finely discretised for the UVLM solution, resulting in not only a large state space but also in large input/output dimensionality. Therefore, to reduce the number of inputs and outputs, the UVLM is projected onto the structural mode shapes, the first four in this particular case. The resulting multi-input multi-output system (mode shapes -> UVLM -> modal forces) was subsequently reduced using Krylov methods aimed at MIMO systems, which use variations of the block Arnoldi iteration. Again, the expansion frequency selected was the zero frequency. As a sample, the transfer function from two inputs to two outputs is shown to illustrate the performance of the reduced model against the full order UVLM. *Goland Reduced Order Model Transfer Functions* The reduced aerodynamic model projected onto the modal shapes was then coupled to the linearised beam model, and the stability analysed against a change in velocity. Note that the UVLM model and its ROM are actually scaled to be independent of the freestream velocity, hence only one UVLM and one ROM need to be computed. The structural model needs to be updated at each test velocity, but this is a lot less costly in computational terms. The resulting stability of the aeroelastic system is plotted on the Argand diagram below with changing freestream velocity. *Goland Flutter* ### Publications[¶](#publications) SHARPy has been used in many technical papers that have been both published in journals and presented at conferences. 
Here we present a list of past papers which have used SHARPy for research purposes: #### 2020[¶](#id1) * <NAME>., <NAME>., & <NAME>. (2020). Realistic turbulence effects in low altitude dynamics of very flexible aircraft. In AIAA SciTech Forum (pp. 1–18). <https://doi.org/10.2514/6.2020-1187> * <NAME>., <NAME>., <NAME>., & <NAME>. (2020). Modal-Based Nonlinear Estimation and Control for Highly Flexible Aeroelastic Systems. In AIAA SciTech Forum (pp. 1–23). <https://doi.org/10.2514/6.2020-1192> * <NAME>., <NAME>., & <NAME>. (2020). Unsteady and three-dimensional aerodynamic effects on wind turbine rotor loads. In AIAA SciTech Forum. <https://doi.org/10.2514/6.2020-0991> #### 2019[¶](#id2) * <NAME>., <NAME>., <NAME>., & <NAME>. (2019). SHARPy: A dynamic aeroelastic simulation toolbox for very flexible aircraft and wind turbines. Journal of Open Source Software, 4(44), 1885. <https://doi.org/10.21105/joss.01885> * <NAME>., <NAME>., <NAME>., & <NAME>. (2019). Nonlinear Response of a Very Flexible Aircraft Under Lateral Gust. In International Forum on Aeroelasticity and Structural Dynamics. * <NAME>., & <NAME>. (2019). Efficient Time-Domain Simulations in Nonlinear Aeroelasticity. In AIAA Scitech Forum (pp. 1–20). <https://doi.org/10.2514/6.2019-2038> * <NAME>., & <NAME>. (2019). State-Space Realizations and Internal Balancing in Potential-Flow Aerodynamics with Arbitrary Kinematics. AIAA Journal, 57(6), 1–14. <https://doi.org/10.2514/1.J058153> ### Examples[¶](#examples) A set of SHARPy examples created with Jupyter Notebooks is provided for users to interact with and modify cases running on SHARPy. 
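The notebook below specifies the ROM interpolation point in reduced-frequency units and converts it with `frequency_continuous_w = 2 * u_inf * frequency_continuous_k / c_ref`, i.e. the semi-chord-based reduced frequency k = ω c / (2 U∞). A minimal helper pair illustrating that conversion (the function names are ours, not SHARPy's, and the flow values in the example are hypothetical):

```python
def k_to_omega(k, u_inf, c_ref):
    """Angular frequency [rad/s] from reduced frequency k = omega * c_ref / (2 * u_inf)."""
    return 2.0 * u_inf * k / c_ref

def omega_to_k(omega, u_inf, c_ref):
    """Semi-chord-based reduced frequency from angular frequency."""
    return omega * c_ref / (2.0 * u_inf)

c_ref = 1.8288  # Goland wing reference chord [m], as in the notebook
print(k_to_omega(0.5, u_inf=10.0, c_ref=c_ref))  # ~5.47 rad/s
```

With the notebook's choice of k = 0 the expansion point is simply ω = 0, regardless of chord and velocity.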
#### Flutter Analysis of a Goland Wing using the SHARPy Linear Solver[¶](#Flutter-Analysis-of-a-Goland-Wing-using-the-SHARPy-Linear-Solver) This is an example using SHARPy to find the flutter speed of a Goland wing by: * Calculating aerodynamic forces and deflections using a nonlinear solver * Linearising about this reference condition * Creating a reduced order model of the linearised aerodynamics * Evaluating the stability of the linearised aeroelastic system at different velocities ##### References[¶](#References) <NAME>., & <NAME>. (2019). State-Space Realizations and Internal Balancing in Potential-Flow Aerodynamics with Arbitrary Kinematics. AIAA Journal, 57(6), 1–14. <https://doi.org/10.2514/1.J058153> ###### Required Packages[¶](#Required-Packages) ``` [1]: ``` ``` import numpy as np import matplotlib.pyplot as plt import os import sys import cases.templates.flying_wings as wings # See this package for the Goland wing structural and aerodynamic definition import sharpy.sharpy_main # used to run SHARPy from Jupyter ``` ###### Problem Set-up[¶](#Problem-Set-up) ####### Velocity[¶](#Velocity) The UVLM is assembled in normalised time at a velocity of \(1 m/s\). The only matrices that then need updating with the freestream velocity are the structural matrices, which is significantly cheaper than updating the UVLM. ``` [2]: ``` ``` u_inf = 1. alpha_deg = 0. rho = 1.02 num_modes = 4 ``` ####### Discretisation[¶](#Discretisation) Note: To achieve convergence of the flutter results with the ones found in the literature, a significant discretisation may be required. If you are running this notebook for the first time, set `M = 4` initially to verify that your system can perform! ``` [3]: ``` ``` M = 16 N = 32 M_star_fact = 10 ``` ####### ROM[¶](#ROM) A moment-matching (Krylov subspace) model order reduction technique is employed. This ROM method offers the ability to interpolate the transfer functions at a desired point in the complex plane. 
See the ROM documentation pages for more info. Note: this ROM method matches the transfer function but does not guarantee stability. Therefore the resulting system may be unstable. These unstable modes may appear far in the right hand plane but will not affect the flutter speed calculations. ``` [4]: ``` ``` c_ref = 1.8288 # Goland wing reference chord. Used for frequency normalisation rom_settings = dict() rom_settings['algorithm'] = 'mimo_rational_arnoldi' # reduction algorithm rom_settings['r'] = 6 # Krylov subspace order frequency_continuous_k = np.array([0.]) # Interpolation point in the complex plane with reduced frequency units frequency_continuous_w = 2 * u_inf * frequency_continuous_k / c_ref rom_settings['frequency'] = frequency_continuous_w ``` ####### Case Admin[¶](#Case-Admin) ``` [5]: ``` ``` case_name = 'goland_cs' case_nlin_info = 'M%dN%dMs%d_nmodes%d' % (M, N, M_star_fact, num_modes) case_rom_info = 'rom_MIMORA_r%d_sig%04d_%04dj' % (rom_settings['r'], frequency_continuous_k[-1].real * 100, frequency_continuous_k[-1].imag * 100) case_name += case_nlin_info + case_rom_info route_test_dir = os.path.abspath('') print('The case to run will be: %s' % case_name) print('Case files will be saved in ./cases/%s' %case_name) print('Output files will be saved in ./output/%s/' %case_name) ``` ``` The case to run will be: goland_csM16N32Ms10_nmodes4rom_MIMORA_r6_sig0000_0000j Case files will be saved in ./cases/goland_csM16N32Ms10_nmodes4rom_MIMORA_r6_sig0000_0000j Output files will be saved in ./output/goland_csM16N32Ms10_nmodes4rom_MIMORA_r6_sig0000_0000j/ ``` ###### Simulation Set-Up[¶](#Simulation-Set-Up) ####### Goland Wing[¶](#Goland-Wing) `ws` is an instance of a Goland wing with a control surface. Reference the template file `cases.templates.flying_wings.GolandControlSurface` for more info on the geometrical, structural and aerodynamic definition of the Goland wing here used. 
```
[6]:
```

```
ws = wings.GolandControlSurface(M=M,
                                N=N,
                                Mstar_fact=M_star_fact,
                                u_inf=u_inf,
                                alpha=alpha_deg,
                                cs_deflection=[0, 0],
                                rho=rho,
                                sweep=0,
                                physical_time=2,
                                n_surfaces=2,
                                route=route_test_dir + '/cases',
                                case_name=case_name)

ws.clean_test_files()
ws.update_derived_params()
ws.set_default_config_dict()
ws.generate_aero_file()
ws.generate_fem_file()
```

```
Surface0
Surface1
```

####### Simulation Settings[¶](#Simulation-Settings)

The settings for each solver are now defined. For a detailed description of each, please refer to the respective documentation pages.

##### SHARPy Settings[¶](#SHARPy-Settings)

The most important setting is the `flow` list. It tells SHARPy which solvers to run and in which order.

```
[7]:
```

```
ws.config['SHARPy'] = {
    'flow': ['BeamLoader', 'AerogridLoader',
             'StaticCoupled',
             'AerogridPlot', 'BeamPlot',
             'Modal',
             'LinearAssembler',
             'FrequencyResponse',
             'AsymptoticStability',
             ],
    'case': ws.case_name,
    'route': ws.route,
    'write_screen': 'on',
    'write_log': 'on',
    'log_folder': route_test_dir + '/output/' + ws.case_name + '/',
    'log_file': ws.case_name + '.log'}
```

##### Beam Loader Settings[¶](#Beam-Loader-Settings)

```
[8]:
```

```
ws.config['BeamLoader'] = {
    'unsteady': 'off',
    'orientation': ws.quat}
```

##### Aerogrid Loader Settings[¶](#Aerogrid-Loader-Settings)

```
[9]:
```

```
ws.config['AerogridLoader'] = {
    'unsteady': 'off',
    'aligned_grid': 'on',
    'mstar': ws.Mstar_fact * ws.M,
    'freestream_dir': ws.u_inf_direction
}
```

##### Static Coupled Solver[¶](#Static-Coupled-Solver)

```
[10]:
```

```
ws.config['StaticCoupled'] = {
    'print_info': 'on',
    'max_iter': 200,
    'n_load_steps': 1,
    'tolerance': 1e-10,
    'relaxation_factor': 0.,
    'aero_solver': 'StaticUvlm',
    'aero_solver_settings': {
        'rho': ws.rho,
        'print_info': 'off',
        'horseshoe': 'off',
        'num_cores': 4,
        'n_rollup': 0,
        'rollup_dt': ws.dt,
        'rollup_aic_refresh': 1,
        'rollup_tolerance': 1e-4,
        'velocity_field_generator': 'SteadyVelocityField',
        'velocity_field_input': {
            'u_inf':
ws.u_inf, 'u_inf_direction': ws.u_inf_direction}}, 'structural_solver': 'NonLinearStatic', 'structural_solver_settings': {'print_info': 'off', 'max_iterations': 150, 'num_load_steps': 4, 'delta_curved': 1e-1, 'min_delta': 1e-10, 'gravity_on': 'on', 'gravity': 9.81}} ``` ##### AerogridPlot Settings[¶](#AerogridPlot-Settings) ``` [11]: ``` ``` ws.config['AerogridPlot'] = {'folder': route_test_dir + '/output/', 'include_rbm': 'off', 'include_applied_forces': 'on', 'minus_m_star': 0} ``` ##### BeamPlot Settings[¶](#BeamPlot-Settings) ``` [12]: ``` ``` ws.config['BeamPlot'] = {'folder': route_test_dir + '/output/', 'include_rbm': 'off', 'include_applied_forces': 'on'} ``` ##### Modal Solver Settings[¶](#Modal-Solver-Settings) ``` [13]: ``` ``` ws.config['Modal'] = {'folder': route_test_dir + '/output/', 'NumLambda': 20, 'rigid_body_modes': 'off', 'print_matrices': 'on', 'keep_linear_matrices': 'on', 'write_dat': 'off', 'rigid_modes_cg': 'off', 'continuous_eigenvalues': 'off', 'dt': 0, 'plot_eigenvalues': False, 'max_rotation_deg': 15., 'max_displacement': 0.15, 'write_modes_vtk': True, 'use_undamped_modes': True} ``` ##### Linear System Assembly Settings[¶](#Linear-System-Assembly-Settings) ``` [14]: ``` ``` ws.config['LinearAssembler'] = {'linear_system': 'LinearAeroelastic', 'linear_system_settings': { 'beam_settings': {'modal_projection': 'on', 'inout_coords': 'modes', 'discrete_time': 'on', 'newmark_damp': 0.5e-4, 'discr_method': 'newmark', 'dt': ws.dt, 'proj_modes': 'undamped', 'use_euler': 'off', 'num_modes': num_modes, 'print_info': 'on', 'gravity': 'on', 'remove_sym_modes': 'on', 'remove_dofs': []}, 'aero_settings': {'dt': ws.dt, 'ScalingDict': {'length': 0.5 * ws.c_ref, 'speed': u_inf, 'density': rho}, 'integr_order': 2, 'density': ws.rho, 'remove_predictor': 'on', 'use_sparse': 'on', 'rigid_body_motion': 'off', 'use_euler': 'off', 'remove_inputs': ['u_gust'], 'rom_method': ['Krylov'], 'rom_method_settings': {'Krylov': rom_settings}}, 'rigid_body_motion': 
False}} ``` ##### Asymptotic Stability Analysis Settings[¶](#Asymptotic-Stability-Analysis-Settings) ``` [15]: ``` ``` ws.config['AsymptoticStability'] = {'print_info': True, 'folder': route_test_dir + '/output/', 'velocity_analysis': [100, 180, 81], 'modes_to_plot': []} ``` ``` [16]: ``` ``` ws.config.write() ``` ###### Run SHARPy[¶](#Run-SHARPy) ``` [17]: ``` ``` sharpy.sharpy_main.main(['', ws.route + ws.case_name + '.sharpy']) ``` ``` --- ###### ## ## ### ######## ######## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #### ###### ######### ## ## ######## ######## ## ## ## ## ######### ## ## ## ## ## ## ## ## ## ## ## ## ## ## ###### ## ## ## ## ## ## ## ## --- Aeroelastics Lab, Aeronautics Department. Copyright (c), Imperial College London. All rights reserved. License available at https://github.com/imperialcollegelondon/sharpy Running SHARPy from /home/ng213/code/sharpy/docs/source/content/example_notebooks SHARPy being run is in /home/ng213/code/sharpy The branch being run is dev_examples The version and commit hash are: v0.1-1539-gd3ef7dd-d3ef7dd The available solvers on this session are: PreSharpy _BaseStructural AerogridLoader BeamLoader DynamicCoupled DynamicUVLM LinDynamicSim LinearAssembler Modal NoAero NonLinearDynamic NonLinearDynamicCoupledStep NonLinearDynamicMultibody NonLinearDynamicPrescribedStep NonLinearStatic NonLinearStaticMultibody PrescribedUvlm RigidDynamicPrescribedStep SHWUvlm StaticCoupled StaticCoupledRBM StaticTrim StaticUvlm StepLinearUVLM StepUvlm Trim Cleanup PickleData SaveData CreateSnapshot PlotFlowField StabilityDerivatives AeroForcesCalculator WriteVariablesTime AerogridPlot LiftDistribution StallCheck FrequencyResponse AsymptoticStability BeamPlot BeamLoads Generating an instance of BeamLoader Generating an instance of AerogridLoader Variable control_surface_deflection has no assigned value in the settings file. 
will default to the value: [] Variable control_surface_deflection_generator_settings has no assigned value in the settings file. will default to the value: [] The aerodynamic grid contains 2 surfaces Surface 0, M=16, N=16 Wake 0, M=160, N=16 Surface 1, M=16, N=16 Wake 1, M=160, N=16 In total: 512 bound panels In total: 5120 wake panels Total number of panels = 5632 Generating an instance of StaticCoupled Generating an instance of NonLinearStatic Variable newmark_damp has no assigned value in the settings file. will default to the value: c_double(0.0001) Variable relaxation_factor has no assigned value in the settings file. will default to the value: c_double(0.3) Variable dt has no assigned value in the settings file. will default to the value: c_double(0.01) Variable num_steps has no assigned value in the settings file. will default to the value: c_int(500) Variable initial_position has no assigned value in the settings file. will default to the value: [0. 0. 0.] Generating an instance of StaticUvlm Variable iterative_solver has no assigned value in the settings file. will default to the value: c_bool(False) Variable iterative_tol has no assigned value in the settings file. will default to the value: c_double(0.0001) Variable iterative_precond has no assigned value in the settings file. will default to the value: c_bool(False) |===|===|===|===|===|===|===|===|===| |iter |step | log10(res) | Fx | Fy | Fz | Mx | My | Mz | |===|===|===|===|===|===|===|===|===| | 0 | 0 | 0.00000 | -0.0000 | 0.0000 |-4271.0417| 0.0000 | 781.0842 | -0.0000 | | 1 | 0 | -11.88931 | 0.0000 | -0.0000 |-4271.0039| 0.0000 | 781.0906 | -0.0000 | Generating an instance of AerogridPlot Variable include_forward_motion has no assigned value in the settings file. will default to the value: c_bool(False) Variable include_unsteady_applied_forces has no assigned value in the settings file. will default to the value: c_bool(False) Variable name_prefix has no assigned value in the settings file. 
will default to the value: Variable u_inf has no assigned value in the settings file. will default to the value: c_double(0.0) Variable dt has no assigned value in the settings file. will default to the value: c_double(0.0) Variable include_velocities has no assigned value in the settings file. will default to the value: c_bool(False) Variable num_cores has no assigned value in the settings file. will default to the value: c_int(1) ...Finished Generating an instance of BeamPlot Variable include_FoR has no assigned value in the settings file. will default to the value: c_bool(False) Variable include_applied_moments has no assigned value in the settings file. will default to the value: c_bool(True) Variable name_prefix has no assigned value in the settings file. will default to the value: Variable output_rbm has no assigned value in the settings file. will default to the value: c_bool(True) ...Finished Generating an instance of Modal Variable print_info has no assigned value in the settings file. will default to the value: c_bool(True) Variable delta_curved has no assigned value in the settings file. will default to the value: c_double(0.01) Variable use_custom_timestep has no assigned value in the settings file. 
will default to the value: c_int(-1) Structural eigenvalues |===|===|===|===|===|===|===| | mode | eval_real | eval_imag | freq_n (Hz) | freq_d (Hz) | damping | period (s) | |===|===|===|===|===|===|===| | 0 | 0.000000 | 48.067396 | 7.650164 | 7.650164 | -0.000000 | 0.130716 | | 1 | 0.000000 | 48.067398 | 7.650164 | 7.650164 | -0.000000 | 0.130716 | | 2 | 0.000000 | 95.685736 | 15.228858 | 15.228858 | -0.000000 | 0.065665 | | 3 | 0.000000 | 95.685754 | 15.228861 | 15.228861 | -0.000000 | 0.065665 | | 4 | 0.000000 | 243.144471 | 38.697644 | 38.697644 | -0.000000 | 0.025841 | | 5 | 0.000000 | 243.144477 | 38.697645 | 38.697645 | -0.000000 | 0.025841 | | 6 | 0.000000 | 343.801136 | 54.717650 | 54.717650 | -0.000000 | 0.018276 | | 7 | 0.000000 | 343.801137 | 54.717650 | 54.717650 | -0.000000 | 0.018276 | | 8 | 0.000000 | 443.324608 | 70.557303 | 70.557303 | -0.000000 | 0.014173 | | 9 | 0.000000 | 443.324619 | 70.557304 | 70.557304 | -0.000000 | 0.014173 | | 10 | 0.000000 | 461.992869 | 73.528449 | 73.528449 | -0.000000 | 0.013600 | | 11 | 0.000000 | 461.992869 | 73.528449 | 73.528449 | -0.000000 | 0.013600 | | 12 | 0.000000 | 601.126871 | 95.672313 | 95.672313 | -0.000000 | 0.010452 | | 13 | 0.000000 | 601.126873 | 95.672313 | 95.672313 | -0.000000 | 0.010452 | | 14 | 0.000000 | 782.997645 | 124.617946 | 124.617946 | -0.000000 | 0.008025 | | 15 | 0.000000 | 782.997649 | 124.617946 | 124.617946 | -0.000000 | 0.008025 | | 16 | 0.000000 | 917.191257 | 145.975522 | 145.975522 | -0.000000 | 0.006850 | | 17 | 0.000000 | 917.191259 | 145.975523 | 145.975523 | -0.000000 | 0.006850 | | 18 | 0.000000 | 975.005694 | 155.176976 | 155.176976 | -0.000000 | 0.006444 | | 19 | 0.000000 | 975.005699 | 155.176977 | 155.176977 | -0.000000 | 0.006444 | Generating an instance of LinearAssembler Variable linearisation_tstep has no assigned value in the settings file. 
will default to the value: c_int(-1) Generating an instance of LinearAeroelastic Variable uvlm_filename has no assigned value in the settings file. will default to the value: Variable track_body has no assigned value in the settings file. will default to the value: c_bool(True) Variable use_euler has no assigned value in the settings file. will default to the value: c_bool(False) Generating an instance of LinearUVLM Variable gust_assembler has no assigned value in the settings file. will default to the value: Initialising Static linear UVLM solver class... ``` ``` /home/ng213/code/sharpy/sharpy/solvers/linearassembler.py:79: UserWarning: LinearAssembler solver under development warnings.warn('LinearAssembler solver under development') ``` ``` ...done in 1.41 sec Generating an instance of Krylov Variable print_info has no assigned value in the settings file. will default to the value: c_bool(True) Variable tangent_input_file has no assigned value in the settings file. will default to the value: Variable restart_arnoldi has no assigned value in the settings file. will default to the value: c_bool(False) Initialising Krylov Model Order Reduction State-space realisation of UVLM equations started... ``` ``` /home/ng213/code/sharpy/sharpy/linear/src/assembly.py:1256: SparseEfficiencyWarning: Changing the sparsity structure of a csc_matrix is expensive. lil_matrix is more efficient. C[iivec, N * (M - 1) + iivec] = 1.0 /home/ng213/code/sharpy/sharpy/linear/src/assembly.py:1259: SparseEfficiencyWarning: Changing the sparsity structure of a csc_matrix is expensive. lil_matrix is more efficient. 
C_star[mm * N + iivec, (mm - 1) * N + iivec] = 1.0 ``` ``` state-space model produced in form: h_{n+1} = A h_{n} + B u_{n} with: x_n = h_n + Bp u_n ...done in 19.06 sec Scaling UVLM system with reference time 0.914400s Non-dimensional time step set (0.125000) System scaled in 31.559722s Generating an instance of LinearBeam Warning, projecting system with damping onto undamped modes Linearising gravity terms... M = 7.26 kg X_CG A -> 0.00 -0.00 -0.00 Node 1 -> B -0.000 -0.116 0.000 -> A 0.116 0.381 -0.000 -> G 0.116 0.381 -0.000 Node mass: Matrix: 14.5125 Node 2 -> B -0.000 -0.116 0.000 -> A 0.116 0.762 -0.000 -> G 0.116 0.762 -0.000 Node mass: Matrix: 7.2563 Node 3 -> B -0.000 -0.116 0.000 -> A 0.116 1.143 -0.000 -> G 0.116 1.143 -0.000 Node mass: Matrix: 14.5125 Node 4 -> B -0.000 -0.116 0.000 -> A 0.116 1.524 -0.001 -> G 0.116 1.524 -0.001 Node mass: Matrix: 7.2563 Node 5 -> B -0.000 -0.116 0.000 -> A 0.116 1.905 -0.001 -> G 0.116 1.905 -0.001 Node mass: Matrix: 14.5125 Node 6 -> B -0.000 -0.116 0.000 -> A 0.116 2.286 -0.001 -> G 0.116 2.286 -0.001 Node mass: Matrix: 7.2563 Node 7 -> B -0.000 -0.116 0.000 -> A 0.116 2.667 -0.002 -> G 0.116 2.667 -0.002 Node mass: Matrix: 14.5125 Node 8 -> B -0.000 -0.116 0.000 -> A 0.116 3.048 -0.002 -> G 0.116 3.048 -0.002 Node mass: Matrix: 7.2563 Node 9 -> B -0.000 -0.116 0.000 -> A 0.116 3.429 -0.003 -> G 0.116 3.429 -0.003 Node mass: Matrix: 14.5125 Node 10 -> B -0.000 -0.116 0.000 -> A 0.116 3.810 -0.003 -> G 0.116 3.810 -0.003 Node mass: Matrix: 7.2563 Node 11 -> B -0.000 -0.116 0.000 -> A 0.116 4.191 -0.004 -> G 0.116 4.191 -0.004 Node mass: Matrix: 14.5125 Node 12 -> B -0.000 -0.116 0.000 -> A 0.116 4.572 -0.004 -> G 0.116 4.572 -0.004 Node mass: Matrix: 7.2563 Node 13 -> B -0.000 -0.116 0.000 -> A 0.116 4.953 -0.005 -> G 0.116 4.953 -0.005 Node mass: Matrix: 14.5125 Node 14 -> B -0.000 -0.116 0.000 -> A 0.116 5.334 -0.005 -> G 0.116 5.334 -0.005 Node mass: Matrix: 7.2563 Node 15 -> B -0.000 -0.116 0.000 -> A 0.116 5.715 
-0.006 -> G 0.116 5.715 -0.006 Node mass: Matrix: 14.5125 Node 16 -> B -0.000 -0.116 0.000 -> A 0.116 6.096 -0.006 -> G 0.116 6.096 -0.006 Node mass: Matrix: 3.6281 Node 17 -> B -0.000 -0.116 -0.000 -> A 0.116 -6.096 -0.006 -> G 0.116 -6.096 -0.006 Node mass: Matrix: 3.6281 Node 18 -> B -0.000 -0.116 -0.000 -> A 0.116 -5.715 -0.006 -> G 0.116 -5.715 -0.006 Node mass: Matrix: 14.5125 Node 19 -> B -0.000 -0.116 -0.000 -> A 0.116 -5.334 -0.005 -> G 0.116 -5.334 -0.005 Node mass: Matrix: 7.2563 Node 20 -> B -0.000 -0.116 -0.000 -> A 0.116 -4.953 -0.005 -> G 0.116 -4.953 -0.005 Node mass: Matrix: 14.5125 Node 21 -> B -0.000 -0.116 -0.000 -> A 0.116 -4.572 -0.004 -> G 0.116 -4.572 -0.004 Node mass: Matrix: 7.2563 Node 22 -> B -0.000 -0.116 -0.000 -> A 0.116 -4.191 -0.004 -> G 0.116 -4.191 -0.004 Node mass: Matrix: 14.5125 Node 23 -> B -0.000 -0.116 -0.000 -> A 0.116 -3.810 -0.003 -> G 0.116 -3.810 -0.003 Node mass: Matrix: 7.2563 Node 24 -> B -0.000 -0.116 -0.000 -> A 0.116 -3.429 -0.003 -> G 0.116 -3.429 -0.003 Node mass: Matrix: 14.5125 Node 25 -> B -0.000 -0.116 -0.000 -> A 0.116 -3.048 -0.002 -> G 0.116 -3.048 -0.002 Node mass: Matrix: 7.2563 Node 26 -> B -0.000 -0.116 -0.000 -> A 0.116 -2.667 -0.002 -> G 0.116 -2.667 -0.002 Node mass: Matrix: 14.5125 Node 27 -> B -0.000 -0.116 -0.000 -> A 0.116 -2.286 -0.002 -> G 0.116 -2.286 -0.002 Node mass: Matrix: 7.2563 Node 28 -> B -0.000 -0.116 -0.000 -> A 0.116 -1.905 -0.001 -> G 0.116 -1.905 -0.001 Node mass: Matrix: 14.5125 Node 29 -> B -0.000 -0.116 -0.000 -> A 0.116 -1.524 -0.001 -> G 0.116 -1.524 -0.001 Node mass: Matrix: 7.2563 Node 30 -> B -0.000 -0.116 -0.000 -> A 0.116 -1.143 -0.000 -> G 0.116 -1.143 -0.000 Node mass: Matrix: 14.5125 Node 31 -> B -0.000 -0.116 -0.000 -> A 0.116 -0.762 -0.000 -> G 0.116 -0.762 -0.000 Node mass: Matrix: 7.2563 Node 32 -> B -0.000 -0.116 -0.000 -> A 0.116 -0.381 -0.000 -> G 0.116 -0.381 -0.000 Node mass: Matrix: 14.5125 Updated the beam C, modal C and K matrices with the terms from the 
gravity linearisation Scaling beam according to reduced time... Setting the beam time step to (0.1250) Updating C and K matrices and natural frequencies with new normalised time... Model Order Reduction in progress... Moment Matching Krylov Model Reduction Construction Algorithm: mimo_rational_arnoldi Interpolation points: Krylov order: r = 6 Unstable ROM - 2 Eigenvalues with |r| > 1 mu = -3.202177 + 0.000000j mu = 1.062204 + 0.000000j System reduced from order 6656 to n = 36 states ...Completed Model Order Reduction in 2.51 s Aeroelastic system assembled: Aerodynamic states: 36 Structural states: 4 Total states: 40 Inputs: 8 Outputs: 6 Generating an instance of FrequencyResponse Variable load_fom has no assigned value in the settings file. will default to the value: Variable num_freqs has no assigned value in the settings file. will default to the value: c_int(50) Computing frequency response... Full order system: ``` ``` /home/ng213/anaconda3/envs/sharpy_env/lib/python3.7/site-packages/scipy/sparse/compressed.py:708: SparseEfficiencyWarning: Changing the sparsity structure of a csc_matrix is expensive. lil_matrix is more efficient. 
self[i, j] = values ``` ``` Computed the frequency response of the full order system in 26.673870 s Reduced order system: Computed the frequency response of the reduced order system in 0.002653 s Computing error in frequency response m = 0, p = 0 Error Magnitude -real-: log10(error) = -4.85 (-0.00 pct) at 1.09 rad/s Error Magnitude -imag-: log10(error) = -4.80 (-0.00 pct) at 1.07 rad/s m = 0, p = 1 Error Magnitude -real-: log10(error) = -4.93 (-0.00 pct) at 1.09 rad/s Error Magnitude -imag-: log10(error) = -5.17 (0.00 pct) at 1.09 rad/s m = 1, p = 0 Error Magnitude -real-: log10(error) = -3.98 (0.00 pct) at 1.09 rad/s Error Magnitude -imag-: log10(error) = -3.94 (0.00 pct) at 1.07 rad/s m = 1, p = 1 Error Magnitude -real-: log10(error) = -4.06 (0.00 pct) at 1.09 rad/s Error Magnitude -imag-: log10(error) = -4.30 (-0.00 pct) at 1.09 rad/s m = 2, p = 0 Error Magnitude -real-: log10(error) = -4.28 (-0.00 pct) at 1.09 rad/s Error Magnitude -imag-: log10(error) = -4.23 (-0.00 pct) at 1.07 rad/s m = 2, p = 1 Error Magnitude -real-: log10(error) = -4.35 (-0.00 pct) at 1.09 rad/s Error Magnitude -imag-: log10(error) = -4.61 (0.00 pct) at 1.09 rad/s m = 3, p = 0 Error Magnitude -real-: log10(error) = -4.16 (0.00 pct) at 1.09 rad/s Error Magnitude -imag-: log10(error) = -4.12 (0.01 pct) at 1.07 rad/s m = 3, p = 1 Error Magnitude -real-: log10(error) = -4.24 (-0.00 pct) at 1.09 rad/s Error Magnitude -imag-: log10(error) = -4.48 (-0.00 pct) at 1.09 rad/s m = 4, p = 0 Error Magnitude -real-: log10(error) = -3.67 (0.00 pct) at 1.09 rad/s Error Magnitude -imag-: log10(error) = -3.59 (0.00 pct) at 1.07 rad/s m = 4, p = 1 Error Magnitude -real-: log10(error) = -3.71 (0.00 pct) at 1.09 rad/s Error Magnitude -imag-: log10(error) = -4.00 (-0.00 pct) at 0.98 rad/s m = 5, p = 0 Error Magnitude -real-: log10(error) = -3.83 (0.00 pct) at 1.09 rad/s Error Magnitude -imag-: log10(error) = -3.75 (0.00 pct) at 1.07 rad/s m = 5, p = 1 Error Magnitude -real-: log10(error) = -3.87 (-0.00 pct) at 
1.09 rad/s Error Magnitude -imag-: log10(error) = -4.16 (-0.00 pct) at 0.98 rad/s Creating Quick plots of the frequency response Plots saved to ./output//goland_csM16N32Ms10_nmodes4rom_MIMORA_r6_sig0000_0000j/frequencyresponse/ Generating an instance of AsymptoticStability Variable reference_velocity has no assigned value in the settings file. will default to the value: c_double(1.0) Variable frequency_cutoff has no assigned value in the settings file. will default to the value: c_double(0.0) Variable export_eigenvalues has no assigned value in the settings file. will default to the value: c_bool(False) Variable display_root_locus has no assigned value in the settings file. will default to the value: c_bool(False) Variable num_evals has no assigned value in the settings file. will default to the value: c_int(200) Variable postprocessors has no assigned value in the settings file. will default to the value: [] Variable postprocessors_settings has no assigned value in the settings file. will default to the value: {} Dynamical System Eigenvalues |===|===|===|===|===|===|===| | mode | eval_real | eval_imag | freq_n (Hz) | freq_d (Hz) | damping | period (s) | |===|===|===|===|===|===|===| | 0 | 10.182550 | -27.485500 | 4.664997 | 4.374453 | -0.347396 | 0.228600 | | 1 | 0.527963 | -0.000000 | 0.084028 | 0.000000 | 1.000000 | inf | | 2 | -0.021585 | -24.315459 | 3.869927 | 3.869926 | 0.000888 | 0.258403 | | 3 | -0.021585 | 24.315459 | 3.869927 | 3.869926 | 0.000888 | 0.258403 | | 4 | -0.105973 | -21.319801 | 3.393194 | 3.393152 | 0.004971 | 0.294711 | | 5 | -0.105973 | 21.319801 | 3.393194 | 3.393152 | 0.004971 | 0.294711 | | 6 | -0.253786 | 0.000000 | 0.040391 | 0.000000 | 1.000000 | inf | | 7 | -0.372465 | -0.355725 | 0.081972 | 0.056615 | 0.723172 | 17.663059 | | 8 | -0.372465 | 0.355725 | 0.081972 | 0.056615 | 0.723172 | 17.663059 | | 9 | -0.405420 | 0.870139 | 0.152781 | 0.138487 | 0.422334 | 7.220896 | | 10 | -0.405420 | -0.870139 | 0.152781 | 0.138487 | 0.422334 | 
7.220896 | | 11 | -0.414445 | 0.000000 | 0.065961 | 0.000000 | 1.000000 | inf | | 12 | -0.433769 | -0.705316 | 0.131784 | 0.112255 | 0.523860 | 8.908329 | | 13 | -0.433769 | 0.705316 | 0.131784 | 0.112255 | 0.523860 | 8.908329 | | 14 | -0.584818 | 0.000000 | 0.093077 | 0.000000 | 1.000000 | inf | | 15 | -0.652779 | -0.945278 | 0.182832 | 0.150446 | 0.568242 | 6.646916 | | 16 | -0.652779 | 0.945278 | 0.182832 | 0.150446 | 0.568242 | 6.646916 | | 17 | -0.662253 | -0.344794 | 0.118830 | 0.054876 | 0.886985 | 18.223008 | | 18 | -0.662253 | 0.344794 | 0.118830 | 0.054876 | 0.886985 | 18.223008 | | 19 | -0.689479 | -0.611472 | 0.146671 | 0.097319 | 0.748162 | 10.275501 | | 20 | -0.689479 | 0.611472 | 0.146671 | 0.097319 | 0.748162 | 10.275501 | | 21 | -0.733120 | -0.423757 | 0.134769 | 0.067443 | 0.865775 | 14.827328 | | 22 | -0.733120 | 0.423757 | 0.134769 | 0.067443 | 0.865775 | 14.827328 | | 23 | -0.755676 | 0.287185 | 0.128662 | 0.045707 | 0.934772 | 21.878536 | | 24 | -0.755676 | -0.287185 | 0.128662 | 0.045707 | 0.934772 | 21.878536 | | 25 | -0.765019 | 0.043949 | 0.121957 | 0.006995 | 0.998354 | 142.964071 | | 26 | -0.765019 | -0.043949 | 0.121957 | 0.006995 | 0.998354 | 142.964071 | | 27 | -0.768680 | 0.696822 | 0.165125 | 0.110903 | 0.740889 | 9.016920 | | 28 | -0.768680 | -0.696822 | 0.165125 | 0.110903 | 0.740889 | 9.016920 | | 29 | -0.847888 | -0.177523 | 0.137872 | 0.028254 | 0.978777 | 35.393580 | | 30 | -0.847888 | 0.177523 | 0.137872 | 0.028254 | 0.978777 | 35.393580 | | 31 | -2.191875 | -0.000000 | 0.348848 | 0.000000 | 1.000000 | inf | | 32 | -3.035678 | 1.135842 | 0.515855 | 0.180775 | 0.936586 | 5.531743 | | 33 | -3.035678 | -1.135842 | 0.515855 | 0.180775 | 0.936586 | 5.531743 | | 34 | -4.223460 | 0.000000 | 0.672185 | 0.000000 | 1.000000 | inf | | 35 | -9.070692 | -0.000000 | 1.443645 | 0.000000 | 1.000000 | inf | | 36 | -26.578730 | 27.485500 | 6.085219 | 4.374453 | 0.695149 | 0.228600 | | 37 | -28.711479 | -0.000000 | 4.569574 | 0.000000 | 
1.000000 | inf | | 38 | -35.859976 | 27.485500 | 7.190899 | 4.374453 | 0.793683 | 0.228600 | | 39 | -36.744614 | 0.000000 | 5.848087 | 0.000000 | 1.000000 | inf | Velocity Asymptotic Stability Analysis Updating C and K matrices and natural frequencies with new normalised time... LTI u: 100.00 m/2 max. CT eig. real: 1018.304110 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 52.80 49.09 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 101.00 m/2 max. CT eig. real: 1028.487151 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 53.32 49.15 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 102.00 m/2 max. CT eig. real: 1038.670193 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 53.85 49.21 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 103.00 m/2 max. CT eig. real: 1048.853234 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 54.38 49.27 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 104.00 m/2 max. CT eig. real: 1059.036275 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 54.91 49.34 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 105.00 m/2 max. CT eig. real: 1069.219316 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 55.44 49.40 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 106.00 m/2 max. CT eig. real: 1079.402357 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 55.96 49.47 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 107.00 m/2 max. CT eig. real: 1089.585399 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 56.49 49.54 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 108.00 m/2 max. CT eig. 
real: 1099.768440 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 57.02 49.60 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 109.00 m/2 max. CT eig. real: 1109.951481 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 57.55 49.68 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 110.00 m/2 max. CT eig. real: 1120.134522 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 58.08 49.75 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 111.00 m/2 max. CT eig. real: 1130.317564 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 58.60 49.82 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 112.00 m/2 max. CT eig. real: 1140.500605 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 59.13 49.90 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 113.00 m/2 max. CT eig. real: 1150.683646 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 59.66 49.97 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 114.00 m/2 max. CT eig. real: 1160.866687 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 60.19 50.05 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 115.00 m/2 max. CT eig. real: 1171.049728 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 60.72 50.13 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 116.00 m/2 max. CT eig. real: 1181.232770 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 61.24 50.22 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 117.00 m/2 max. CT eig. 
real: 1191.415811 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 61.77 50.30 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 118.00 m/2 max. CT eig. real: 1201.598852 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 62.30 50.39 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 119.00 m/2 max. CT eig. real: 1211.781893 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 62.83 83.07 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 120.00 m/2 max. CT eig. real: 1221.964934 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 63.36 82.86 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 121.00 m/2 max. CT eig. real: 1232.147976 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 63.88 82.64 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 122.00 m/2 max. CT eig. real: 1242.331017 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 64.41 82.42 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 123.00 m/2 max. CT eig. real: 1252.514058 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 64.94 82.19 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 124.00 m/2 max. CT eig. real: 1262.697099 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 65.47 81.96 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 125.00 m/2 max. CT eig. real: 1272.880140 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 66.00 81.73 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 126.00 m/2 max. CT eig. 
real: 1283.063182 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 66.52 81.50 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 127.00 m/2 max. CT eig. real: 1293.246223 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 67.05 81.26 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 128.00 m/2 max. CT eig. real: 1303.429264 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 67.58 81.01 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 129.00 m/2 max. CT eig. real: 1313.612305 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 68.11 80.77 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 130.00 m/2 max. CT eig. real: 1323.795346 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 68.64 80.51 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 131.00 m/2 max. CT eig. real: 1333.978388 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 69.16 80.26 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 132.00 m/2 max. CT eig. real: 1344.161429 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 69.69 80.00 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 133.00 m/2 max. CT eig. real: 1354.344470 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 70.22 79.74 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 134.00 m/2 max. CT eig. real: 1364.527511 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 70.75 79.47 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 135.00 m/2 max. CT eig. 
real: 1374.710552 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 71.28 79.20 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 136.00 m/2 max. CT eig. real: 1384.893594 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 71.80 78.92 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 137.00 m/2 max. CT eig. real: 1395.076635 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 72.33 78.64 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 138.00 m/2 max. CT eig. real: 1405.259676 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 72.86 78.36 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 139.00 m/2 max. CT eig. real: 1415.442717 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 73.39 78.07 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 140.00 m/2 max. CT eig. real: 1425.625758 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 73.91 77.77 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 141.00 m/2 max. CT eig. real: 1435.808799 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 74.44 77.48 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 142.00 m/2 max. CT eig. real: 1445.991841 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 74.97 77.18 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 143.00 m/2 max. CT eig. real: 1456.174882 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 75.50 76.87 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 144.00 m/2 max. CT eig. 
real: 1466.357923 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 76.03 76.57 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 145.00 m/2 max. CT eig. real: 1476.540964 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 76.55 76.25 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 146.00 m/2 max. CT eig. real: 1486.724005 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 77.08 75.94 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 147.00 m/2 max. CT eig. real: 1496.907047 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 77.61 75.62 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 148.00 m/2 max. CT eig. real: 1507.090088 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 78.14 75.31 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 149.00 m/2 max. CT eig. real: 1517.273129 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 78.67 74.99 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 150.00 m/2 max. CT eig. real: 1527.456170 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 79.19 74.67 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 151.00 m/2 max. CT eig. real: 1537.639211 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 79.72 74.35 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 152.00 m/2 max. CT eig. real: 1547.822253 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 80.25 74.03 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 153.00 m/2 max. CT eig. 
real: 1558.005294 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 80.78 73.71 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 154.00 m/2 max. CT eig. real: 1568.188335 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 81.31 73.40 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 155.00 m/2 max. CT eig. real: 1578.371376 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 81.83 73.09 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 156.00 m/2 max. CT eig. real: 1588.554417 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 82.36 72.78 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 157.00 m/2 max. CT eig. real: 1598.737458 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 82.89 72.49 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 158.00 m/2 max. CT eig. real: 1608.920500 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 83.42 72.19 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 159.00 m/2 max. CT eig. real: 1619.103541 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 83.95 71.91 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 160.00 m/2 max. CT eig. real: 1629.286582 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 84.47 71.64 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 161.00 m/2 max. CT eig. real: 1639.469623 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 85.00 71.37 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 162.00 m/2 max. CT eig. 
real: 1649.652664 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 85.53 71.12 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 163.00 m/2 max. CT eig. real: 1659.835706 N unstab.: 002 Unstable aeroelastic natural frequency CT(rad/s): 86.06 70.87 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 164.00 m/2 max. CT eig. real: 1670.018747 N unstab.: 004 Unstable aeroelastic natural frequency CT(rad/s): 86.59 70.64 70.64 56.53 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 165.00 m/2 max. CT eig. real: 1680.201788 N unstab.: 004 Unstable aeroelastic natural frequency CT(rad/s): 87.11 70.41 70.41 56.63 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 166.00 m/2 max. CT eig. real: 1690.384829 N unstab.: 004 Unstable aeroelastic natural frequency CT(rad/s): 87.64 70.20 70.20 56.73 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 167.00 m/2 max. CT eig. real: 1700.567870 N unstab.: 004 Unstable aeroelastic natural frequency CT(rad/s): 88.17 69.99 69.99 56.82 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 168.00 m/2 max. CT eig. real: 1710.750911 N unstab.: 004 Unstable aeroelastic natural frequency CT(rad/s): 88.70 69.79 69.79 56.90 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 169.00 m/2 max. CT eig. real: 1720.933953 N unstab.: 004 Unstable aeroelastic natural frequency CT(rad/s): 89.23 69.61 69.61 56.97 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 170.00 m/2 max. CT eig. real: 1731.116994 N unstab.: 004 Unstable aeroelastic natural frequency CT(rad/s): 89.75 69.43 69.43 57.04 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 171.00 m/2 max. CT eig. 
real: 1741.300035 N unstab.: 004 Unstable aeroelastic natural frequency CT(rad/s): 90.28 69.26 69.26 57.10 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 172.00 m/2 max. CT eig. real: 1751.483076 N unstab.: 004 Unstable aeroelastic natural frequency CT(rad/s): 90.81 69.10 69.10 57.16 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 173.00 m/2 max. CT eig. real: 1761.666117 N unstab.: 004 Unstable aeroelastic natural frequency CT(rad/s): 91.34 68.94 68.94 57.21 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 174.00 m/2 max. CT eig. real: 1771.849159 N unstab.: 004 Unstable aeroelastic natural frequency CT(rad/s): 91.87 68.79 68.79 57.25 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 175.00 m/2 max. CT eig. real: 1782.032200 N unstab.: 004 Unstable aeroelastic natural frequency CT(rad/s): 92.39 68.65 68.65 57.29 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 176.00 m/2 max. CT eig. real: 1792.215241 N unstab.: 004 Unstable aeroelastic natural frequency CT(rad/s): 92.92 68.51 68.51 57.33 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 177.00 m/2 max. CT eig. real: 1802.398282 N unstab.: 004 Unstable aeroelastic natural frequency CT(rad/s): 93.45 68.38 68.38 57.36 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 178.00 m/2 max. CT eig. real: 1812.581323 N unstab.: 004 Unstable aeroelastic natural frequency CT(rad/s): 93.98 68.25 68.25 57.39 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 179.00 m/2 max. CT eig. real: 1822.764364 N unstab.: 004 Unstable aeroelastic natural frequency CT(rad/s): 94.51 68.13 68.13 57.41 Updating C and K matrices and natural frequencies with new normalised time... LTI u: 180.00 m/2 max. CT eig. 
real: 1832.947406 N unstab.: 004 Unstable aeroelastic natural frequency CT(rad/s): 95.03 68.01 68.01 57.43 Saving velocity analysis results... Successful FINISHED - Elapsed time = 87.4548716 seconds FINISHED - CPU process time = 158.0786466 seconds ``` ``` /home/ng213/code/sharpy/sharpy/postproc/asymptoticstability.py:171: UserWarning: Plotting modes is under development warn.warn('Plotting modes is under development') ``` ``` [17]: ``` ``` <sharpy.presharpy.presharpy.PreSharpy at 0x7f94ae37cf28> ``` ###### Analysis[¶](#Analysis) ####### Nonlinear equilibrium[¶](#Nonlinear-equilibrium) The nonlinear equilibrium condition can be visualised and analysed by opening, with Paraview, the files in the `/output/<case_name>/aero` and `/output/<case_name>/beam` folders to see the deflection and the aerodynamic forces acting on the wing. ####### Stability[¶](#Stability) The stability of the Goland wing is now analysed under changing free stream velocity. The aeroelastic system is projected onto 2 structural modes (1st bending and 1st torsion). The two modes are clearly separated at 100 m/s. As the speed is increased, the damping of the torsion mode decreases until it crosses the imaginary axis onto the right-hand plane and flutter begins. This flutter mode is a bending-torsion mode, as seen in the natural frequency plot, where the frequencies of the two modes coalesce.
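The flutter onset speed can also be estimated numerically from the saved eigenvalue sweep instead of being read off the plots. The sketch below is illustrative only (the helper `flutter_onset` and the sample data are made up for this example, not part of SHARPy): it returns the lowest free-stream velocity at which any eigenvalue real part has crossed into the right-hand plane.

```python
import numpy as np

def flutter_onset(u_inf, eigs_r):
    """Lowest velocity with at least one eigenvalue real part > 0,
    or None if the system is stable over the whole sweep."""
    unstable = u_inf[eigs_r > 0]
    return unstable.min() if unstable.size else None

# Illustrative data only: one mode whose damping crosses zero mid-sweep
u = np.arange(100., 200., 10.)       # sweep velocities [m/s]
re = np.linspace(-5., 4., u.size)    # real part of that mode's eigenvalue
print(flutter_onset(u, re))          # -> 160.0
```

Applied to the arrays loaded from the `velocity_analysis` file in the next cells, the same one-liner gives the onset of the bending-torsion flutter described above.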
``` [18]: ``` ``` file_name = './output/%s/stability/velocity_analysis_min1000_max1800_nvel0081.dat' % case_name velocity_analysis = np.loadtxt(file_name) u_inf = velocity_analysis[:, 0] eigs_r = velocity_analysis[:, 1] eigs_i = velocity_analysis[:, 2] ``` ``` [19]: ``` ``` fig = plt.figure() plt.scatter(eigs_r, eigs_i, c=u_inf, cmap='Blues') cbar = plt.colorbar() cbar.set_label(r'Free Stream Velocity, $u_\infty$ [m/s]') plt.grid() plt.xlim(-10, 10) plt.ylim(-150, 150) plt.xlabel(r'Real Part, $\lambda$ [rad/s]') plt.ylabel(r'Imag Part, $\lambda$ [rad/s]'); ``` ``` [20]: ``` ``` fig = plt.figure() natural_frequency = np.sqrt(eigs_r ** 2 + eigs_i ** 2) damping_ratio = eigs_r / natural_frequency cond = (eigs_r > -25) * (eigs_r < 10) * (natural_frequency < 100) # filter unwanted eigenvalues for this plot (mostly aero modes) plt.scatter(u_inf[cond], damping_ratio[cond], color='k', marker='s', s=9) plt.grid() plt.ylim(-0.25, 0.25) plt.xlabel(r'Free Stream Velocity, $u_\infty$ [m/s]') plt.ylabel(r'Damping Ratio, $\zeta$ [-]'); ``` ``` [21]: ``` ``` fig = plt.figure() cond = (eigs_r > -25) * (eigs_r < 10) # filter unwanted eigenvalues for this plot (mostly aero modes) plt.scatter(u_inf[cond], natural_frequency[cond], color='k', marker='s', s=9) plt.grid() plt.ylim(40, 100) plt.xlabel(r'Free Stream Velocity, $u_\infty$ [m/s]') plt.ylabel(r'Natural Frequency, $\omega_n$ [rad/s]'); ``` ``` [1]: ``` ``` %load_ext autoreload %autoreload 2 %matplotlib inline %config InlineBackend.figure_format = 'svg' import numpy as np import matplotlib.pyplot as plt from IPython.display import Image ``` #### T-Tail HALE Model tutorial[¶](#T-Tail-HALE-Model-tutorial) The HALE T-Tail model is intended as a representative example of a typical HALE configuration, with high flexibility and a high aspect ratio.
A geometry outline and a summary of the beam properties are given next: ``` [2]: ``` ``` url = 'https://raw.githubusercontent.com/ImperialCollegeLondon/sharpy/dev_doc/docs/source/content/example_notebooks/images/t-tail_geometry.png' Image(url=url, width=800) ``` ``` [2]: ``` ``` [3]: ``` ``` url = 'https://raw.githubusercontent.com/ImperialCollegeLondon/sharpy/dev_doc/docs/source/content/example_notebooks/images/t-tail_properties.png' Image(url=url, width=500) ``` ``` [3]: ``` This case is included in `tests/coupled/simple_HALE/`. When executed, the `generate_hale.py` file in that folder generates all the required SHARPy input files. This document is a step-by-step guide to processing that case. The T-Tail HALE model is subject to a 20% amplitude, spanwise-constant 1-cos gust. First, let's import SHARPy into our Python environment. ``` [4]: ``` ``` import sharpy import sharpy.sharpy_main as sharpy_main ``` Now the `generate_hale.py` file needs to be executed. ``` [5]: ``` ``` route_to_case = '../../../../cases/coupled/simple_HALE/' %run '../../../../cases/coupled/simple_HALE/generate_hale.py' ``` There should now be 3 new files, apart from the original `generate_hale.py`: ``` [6]: ``` ``` !ls ../../../../cases/coupled/simple_HALE/ ``` ``` generate_hale.py simple_HALE.aero.h5 simple_HALE.fem.h5 simple_HALE.sharpy ``` SHARPy can be run now.
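Before running the case, it may help to recall the shape of the 1-cos gust. The sketch below uses the standard definition, with the gust zero outside its length and \(w(x) = \frac{w_0}{2}\left(1 - \cos\frac{2\pi x}{L}\right)\) inside; the function name and the numbers are hypothetical (the actual gust parameters for this case are set inside `generate_hale.py`), and the 20% is taken here as the gust intensity relative to the free stream.

```python
import numpy as np

def one_minus_cos_gust(x, gust_length, gust_intensity, u_inf):
    """Standard '1-cos' vertical gust profile: zero outside the gust,
    0.5 * w0 * (1 - cos(2*pi*x/L)) inside, with w0 = intensity * u_inf."""
    w0 = gust_intensity * u_inf
    w = 0.5 * w0 * (1.0 - np.cos(2.0 * np.pi * x / gust_length))
    return np.where((x >= 0) & (x <= gust_length), w, 0.0)

# Hypothetical numbers: 20% intensity, 10 m gust length, 10 m/s free stream.
# The peak vertical velocity, w0 = 2 m/s, occurs at the gust mid-point.
x = np.linspace(-5.0, 15.0, 5)
print(one_minus_cos_gust(x, gust_length=10.0, gust_intensity=0.2, u_inf=10.0))
```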
In a terminal, after changing directory (`cd`) into the `simple_HALE` folder, the command would be: ``` sharpy simple_HALE.sharpy ``` From a Python console with `import sharpy` already run, the command is: ``` [7]: ``` ``` case_data = sharpy_main.main(['', route_to_case + 'simple_HALE.sharpy']) ``` ``` /home/ad214/run/code_doc/sharpy/sharpy/aero/utils/uvlmlib.py:230: RuntimeWarning: invalid value encountered in true_divide flightconditions.uinf_direction = np.ctypeslib.as_ctypes(ts_info.u_ext[0][:, 0, 0]/flightconditions.uinf) /home/ad214/run/code_doc/sharpy/sharpy/aero/utils/uvlmlib.py:347: RuntimeWarning: invalid value encountered in true_divide flightconditions.uinf_direction = np.ctypeslib.as_ctypes(ts_info.u_ext[0][:, 0, 0]/flightconditions.uinf) ``` The data structure returned from the call to `main` contains all the time-dependent variables for both the structural and aerodynamic solvers. `timestep_info` can be found in `case_data.structure` and `case_data.aero`. It is an array of custom-made structures containing the data of each solver at every time step. In the `.sharpy` file, we can see which solvers are run: ``` flow = ['BeamLoader', 'AerogridLoader', 'StaticTrim', 'BeamLoads', 'AerogridPlot', 'BeamPlot', 'DynamicCoupled', ] ``` In order: * BeamLoader: reads the `fem.h5` file and generates the structure for the beam solver. * AerogridLoader: reads the `aero.h5` file and generates the aerodynamic grid for the aerodynamic solver. * StaticTrim: performs a longitudinal trim (thrust, angle of attack and elevator deflection) using the StaticCoupled solver. * BeamLoads: calculates the internal beam loads for the static solution. * AerogridPlot: outputs the aerodynamic grid for the static solution. * BeamPlot: outputs the structural discretisation for the static solution. * DynamicCoupled: the main driver of the dynamic simulation; it executes the structural and aerodynamic solvers and couples them.
Every converged time step is followed by a BeamLoads, AerogridPlot and BeamPlot execution. ##### Structural data organisation[¶](#Structural-data-organisation) The `timestep_info` structure contains several relevant variables: * `for_pos`: position of the body-attached frame of reference in the inertial FoR. * `for_vel`: velocity (expressed in the body FoR) of the body FoR with respect to the inertial FoR. * `pos`: nodal positions in the A FoR. * `psi`: nodal rotations (from the material B FoR to the A FoR) in a Cartesian Rotation Vector parametrisation. * `applied_steady_forces`: nodal forces from the aero solver plus the externally applied forces. * `postproc_cell`: a dictionary that contains the variables generated by a postprocessor, such as the internal beam loads. The structural `timestep_info` also provides some useful methods: * `cag` and `cga` return \(C^{AG}\) and \(C^{GA}\), the rotation matrices between the body-attached (A) FoR and the inertial (G) FoR. * `glob_pos` rotates the `pos` variable to give the nodal positions in the inertial FoR. If `include_rbm = True` is passed, `for_pos` is added to them. ##### Aerodynamic data organisation[¶](#Aerodynamic-data-organisation) The aerodynamic datastructure can be found in `case_data.aero.timestep_info`. It contains useful variables, such as: * `dimensions` and `dimensions_star`: give the dimensions of every bound and wake surface, organised as `dimensions[i_surf, 0] = chordwise panels`, `dimensions[i_surf, 1] = spanwise panels`. * `zeta` and `zeta_star`: the \(G\) FoR coordinates of the bound and wake surface vertices, respectively. * `gamma` and `gamma_star`: the bound and wake vortex ring circulations, respectively.
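To illustrate how these variables fit together, the sketch below reproduces the kind of operation `glob_pos` performs. It is not the SHARPy API (the function `inertial_positions` is made up for this example): it rotates the A-frame nodal positions into the inertial frame with a rotation matrix such as \(C^{GA}\) and, optionally, adds the body-frame origin stored in `for_pos`.

```python
import numpy as np

def inertial_positions(pos, cga, for_pos, include_rbm=True):
    """Rotate (n_node, 3) A-frame nodal positions into the inertial frame G.

    `cga` is the 3x3 rotation matrix from A to G; `for_pos` holds the
    position of the A-frame origin in its first three entries."""
    glob = pos @ cga.T              # rotate each nodal position into G
    if include_rbm:
        glob = glob + for_pos[:3]   # add the body-frame origin
    return glob

# With an identity rotation the result is just `pos` shifted by `for_pos`
pos = np.array([[1., 0., 0.],
                [0., 2., 0.]])
for_pos = np.array([10., 0., -1., 0., 0., 0.])
print(inertial_positions(pos, np.eye(3), for_pos))
```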
##### Structural dynamics[¶](#Structural-dynamics) We can now plot the rigid body dynamics: *RBM trajectory* ``` [16]: ``` ``` fig, ax = plt.subplots(1, 1, figsize=(7, 4)) # extract information n_tsteps = len(case_data.structure.timestep_info) xz = np.zeros((n_tsteps, 2)) for it in range(n_tsteps): xz[it, 0] = -case_data.structure.timestep_info[it].for_pos[0] # the - is so that increasing time -> increasing x xz[it, 1] = case_data.structure.timestep_info[it].for_pos[2] ax.plot(xz[:, 0], xz[:, 1]) fig.suptitle('Longitudinal trajectory of the T-Tail model in a 20% 1-cos gust encounter') ax.set_xlabel('X [m]') ax.set_ylabel('Z [m]'); plt.show() ``` *RBM velocities* ``` [17]: ``` ``` fig, ax = plt.subplots(3, 1, figsize=(7, 6), sharex=True) ylabels = ['Vx [m/s]', 'Vy [m/s]', 'Vz [m/s]'] # extract information n_tsteps = len(case_data.structure.timestep_info) dt = case_data.settings['DynamicCoupled']['dt'].value time_vec = np.linspace(0, n_tsteps*dt, n_tsteps) for_vel = np.zeros((n_tsteps, 3)) for it in range(n_tsteps): for_vel[it, 0:3] = case_data.structure.timestep_info[it].for_vel[0:3] for idim in range(3): ax[idim].plot(time_vec, for_vel[:, idim]) ax[idim].set_ylabel(ylabels[idim]) ax[2].set_xlabel('time [s]') plt.subplots_adjust(hspace=0) fig.suptitle('Linear RBM velocities. 
T-Tail model in a 20% 1-cos gust encounter'); # ax.set_xlabel('X [m]') # ax.set_ylabel('Z [m]'); plt.show() ``` ``` [18]: ``` ``` fig, ax = plt.subplots(3, 1, figsize=(7, 6), sharex=True) ylabels = ['Roll rate [deg/s]', 'Pitch rate [deg/s]', 'Yaw rate [deg/s]'] # extract information n_tsteps = len(case_data.structure.timestep_info) dt = case_data.settings['DynamicCoupled']['dt'].value time_vec = np.linspace(0, n_tsteps*dt, n_tsteps) for_vel = np.zeros((n_tsteps, 3)) for it in range(n_tsteps): for_vel[it, 0:3] = case_data.structure.timestep_info[it].for_vel[3:6]*180/np.pi for idim in range(3): ax[idim].plot(time_vec, for_vel[:, idim]) ax[idim].set_ylabel(ylabels[idim]) ax[2].set_xlabel('time [s]') plt.subplots_adjust(hspace=0) fig.suptitle('Angular RBM velocities. T-Tail model in a 20% 1-cos gust encounter'); # ax.set_xlabel('X [m]') # ax.set_ylabel('Z [m]'); plt.show() ``` *Wing tip deformation* It is stored in `timestep_info` as `pos`. We need to find the correct node. ``` [19]: ``` ``` fig, ax = plt.subplots(1, 1, figsize=(6, 6)) ax.scatter(case_data.structure.ini_info.pos[:, 0], case_data.structure.ini_info.pos[:, 1]) ax.axis('equal') tip_node = np.argmax(case_data.structure.ini_info.pos[:, 1]) print('Wing tip node is the maximum Y one: ', tip_node) ax.scatter(case_data.structure.ini_info.pos[tip_node, 0], case_data.structure.ini_info.pos[tip_node, 1], color='red') plt.show() ``` ``` Wing tip node is the maximum Y one: 16 ``` We can plot now the `pos[tip_node,:]` variable: ``` [20]: ``` ``` fig, ax = plt.subplots(1, 1, figsize=(7, 3)) # extract information n_tsteps = len(case_data.structure.timestep_info) xz = np.zeros((n_tsteps, 2)) for it in range(n_tsteps): xz[it, 0] = case_data.structure.timestep_info[it].pos[tip_node, 0] xz[it, 1] = case_data.structure.timestep_info[it].pos[tip_node, 2] ax.plot(time_vec, xz[:, 1]) # fig.suptitle('Longitudinal trajectory of the T-Tail model in a 20% 1-cos gust encounter') ax.set_xlabel('time [s]') ax.set_ylabel('Vertical 
disp. [m]'); plt.show() ``` ``` [21]: ``` ``` fig, ax = plt.subplots(1, 1, figsize=(7, 3)) # extract information n_tsteps = len(case_data.structure.timestep_info) xz = np.zeros((n_tsteps, 2)) for it in range(n_tsteps): xz[it, 0] = case_data.structure.timestep_info[it].pos[tip_node, 0] xz[it, 1] = case_data.structure.timestep_info[it].pos[tip_node, 2] ax.plot(time_vec, xz[:, 0]) # fig.suptitle('Longitudinal trajectory of the T-Tail model in a 20% 1-cos gust encounter') ax.set_xlabel('time [s]') ax.set_ylabel('Horizontal disp. [m]\nPositive is aft'); plt.show() ``` *Wing root loads* The wing root loads can be extracted from the `postproc_cell` structure in `timestep_info`. ``` [22]: ``` ``` fig, ax = plt.subplots(3, 1, figsize=(7, 6), sharex=True) ylabels = ['Torsion [Nm2]', 'OOP [Nm2]', 'IP [Nm2]'] # extract information n_tsteps = len(case_data.structure.timestep_info) dt = case_data.settings['DynamicCoupled']['dt'].value time_vec = np.linspace(0, n_tsteps*dt, n_tsteps) loads = np.zeros((n_tsteps, 3)) for it in range(n_tsteps): loads[it, 0:3] = case_data.structure.timestep_info[it].postproc_cell['loads'][0, 3:6] for idim in range(3): ax[idim].plot(time_vec, loads[:, idim]) ax[idim].set_ylabel(ylabels[idim]) ax[2].set_xlabel('time [s]') plt.subplots_adjust(hspace=0) fig.suptitle('Wing root loads. T-Tail model in a 20% 1-cos gust encounter'); # ax.set_xlabel('X [m]') # ax.set_ylabel('Z [m]'); plt.show() ``` ###### Aerodynamic analysis[¶](#Aerodynamic-analysis) The aerodynamic analysis can, of course, be conducted in Python. However, the easiest way is to run the case yourself and open the files in `output/simple_HALE/beam` and `output/simple_HALE/aero` with [Paraview](https://www.paraview.org/).
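For a quick sanity check in Python, a rough strip-wise lift estimate can be obtained from the `gamma` circulations described earlier. This is a sketch under simplifying assumptions (steady flow, flat geometry; `strip_lift` is not a SHARPy function): for a lattice of vortex rings the chordwise ring circulations telescope, so the net bound circulation of spanwise strip \(j\) is the trailing-edge ring value, and the Kutta-Joukowski theorem then gives the strip lift as \(\rho\, u_\infty\, \Gamma_{TE,j}\, \Delta y_j\).

```python
import numpy as np

def strip_lift(gamma, rho, u_inf, dy):
    """Rough lift per spanwise strip of one lifting surface.

    gamma : (M, N) vortex ring circulations (chordwise x spanwise)
    dy    : (N,) spanwise widths of the strips
    The net bound circulation of strip j telescopes to the trailing-edge
    ring value gamma[-1, j]; Kutta-Joukowski then gives the strip lift."""
    return rho * u_inf * gamma[-1, :] * dy

# Illustrative numbers only (not output of this case)
gamma = np.array([[1.0, 0.8],
                  [1.5, 1.2],
                  [1.8, 1.4]])   # M=3 chordwise rings, N=2 spanwise strips
print(strip_lift(gamma, rho=1.225, u_inf=28.0, dy=np.ones(2)))
```

Summing the result over the strips of a surface gives a steady total-lift estimate that can be compared against the beam root loads above.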
``` [23]: ``` ``` url = 'https://raw.githubusercontent.com/ImperialCollegeLondon/sharpy/dev_doc/docs/source/content/example_notebooks/images/t-tail_solution.png' Image(url=url, width=600) ``` ``` [23]: ``` #### Asymptotic Stability of a Flying Wing in Cruise Trimmed Conditions[¶](#Asymptotic-Stability-of-a-Flying-Wing-in-Cruise-Trimmed-Conditions) A Horten flying wing is analysed. The nonlinear trim condition is found and the system is linearised. The eigenvalues of the linearised system are then used to evaluate the stability at the cruise trimmed flight conditions. ``` [1]: ``` ``` # required packages import sharpy.utils.algebra as algebra import sharpy.sharpy_main from cases.hangar.richards_wing import Baseline import numpy as np import configobj import matplotlib.pyplot as plt ``` ##### Flight Conditions[¶](#Flight-Conditions) Initial flight conditions. The values for angle of attack `alpha`, control surface deflection `cs_deflection` and `thrust` are only initial values. The values required for trim will be calculated by the `StaticTrim` routine ``` [2]: ``` ``` u_inf = 28 alpha_deg = 4.5135 cs_deflection = 0.1814 thrust = 5.5129 ``` ##### Discretisation[¶](#Discretisation) ``` [3]: ``` ``` M = 4 # chordwise panels N = 11 # spanwise panels Msf = 5 # wake length in chord numbers ``` ##### Create Horten Wing[¶](#Create-Horten-Wing) ``` [4]: ``` ``` ws = Baseline(M=M, N=N, Mstarfactor=Msf, u_inf=u_inf, rho=1.02, alpha_deg=alpha_deg, roll_deg=0, cs_deflection_deg=cs_deflection, thrust=thrust, physical_time=20, case_name='horten', case_name_format=4, case_remarks='M%gN%gMsf%g' % (M, N, Msf)) ws.set_properties() ws.initialise() ws.clean_test_files() ws.update_mass_stiffness(sigma=1., sigma_mass=2.5) ws.update_fem_prop() ws.generate_fem_file() ws.update_aero_properties() ws.generate_aero_file() ``` ``` 0 Section Mass: 11.88 Linear Mass: 11.88 Section Ixx: 1.8777 Section Iyy: 1.0137 Section Izz: 2.5496 Linear Ixx: 1.88 1 Section Mass: 10.99 Linear Mass: 10.99 Section 
Ixx: 1.4694 Section Iyy: 0.9345 Section Izz: 2.1501 Linear Ixx: 1.74 2 Section Mass: 10.10 Linear Mass: 10.10 Section Ixx: 1.1257 Section Iyy: 0.8561 Section Izz: 1.7993 Linear Ixx: 1.60 3 Section Mass: 9.21 Linear Mass: 9.21 Section Ixx: 0.8410 Section Iyy: 0.7783 Section Izz: 1.4933 Linear Ixx: 1.46 4 Section Mass: 8.32 Linear Mass: 8.32 Section Ixx: 0.6096 Section Iyy: 0.7011 Section Izz: 1.2280 Linear Ixx: 1.31 5 Section Mass: 7.43 Linear Mass: 7.43 Section Ixx: 0.4260 Section Iyy: 0.6246 Section Izz: 0.9996 Linear Ixx: 1.17 6 Section Mass: 6.54 Linear Mass: 6.54 Section Ixx: 0.2845 Section Iyy: 0.5485 Section Izz: 0.8040 Linear Ixx: 1.03 7 Section Mass: 5.64 Linear Mass: 5.64 Section Ixx: 0.1796 Section Iyy: 0.4728 Section Izz: 0.6374 Linear Ixx: 0.89 8 Section Mass: 4.75 Linear Mass: 4.75 Section Ixx: 0.1055 Section Iyy: 0.3975 Section Izz: 0.4959 Linear Ixx: 0.75 9 Section Mass: 3.86 Linear Mass: 3.86 Section Ixx: 0.0567 Section Iyy: 0.3226 Section Izz: 0.3753 Linear Ixx: 0.61 10 Section Mass: 2.97 Linear Mass: 2.97 Section Ixx: 0.0275 Section Iyy: 0.2479 Section Izz: 0.2719 Linear Ixx: 0.47 ``` ##### Simulation Information[¶](#Simulation-Information) The `flow` setting tells SHARPy which solvers to run and in which order. You may be surprised by the presence of the `DynamicCoupled` solver, but it is necessary to give an initial speed to the structure. This allows a proper linearisation of the structural and rigid body equations.
``` [5]: ``` ``` flow = ['BeamLoader', 'AerogridLoader', 'StaticTrim', 'BeamPlot', 'AerogridPlot', 'AeroForcesCalculator', 'DynamicCoupled', 'Modal', 'LinearAssembler', 'AsymptoticStability', 'StabilityDerivatives', ] ``` ###### SHARPy Settings[¶](#SHARPy-Settings) ``` [6]: ``` ``` settings = dict() settings['SHARPy'] = {'case': ws.case_name, 'route': ws.case_route, 'flow': flow, 'write_screen': 'on', 'write_log': 'on', 'log_folder': './output/' + ws.case_name + '/', 'log_file': ws.case_name + '.log'} ``` ###### Loaders[¶](#Loaders) ``` [7]: ``` ``` settings['BeamLoader'] = {'unsteady': 'off', 'orientation': algebra.euler2quat(np.array([ws.roll, ws.alpha, ws.beta]))} settings['AerogridLoader'] = {'unsteady': 'off', 'aligned_grid': 'on', 'mstar': int(ws.M * ws.Mstarfactor), 'freestream_dir': ['1', '0', '0'], 'control_surface_deflection': ['']} ``` ###### StaticCoupled Solver[¶](#StaticCoupled-Solver) ``` [8]: ``` ``` settings['StaticCoupled'] = {'print_info': 'on', 'structural_solver': 'NonLinearStatic', 'structural_solver_settings': {'print_info': 'off', 'max_iterations': 200, 'num_load_steps': 1, 'delta_curved': 1e-5, 'min_delta': ws.tolerance, 'gravity_on': 'on', 'gravity': 9.81}, 'aero_solver': 'StaticUvlm', 'aero_solver_settings': {'print_info': 'on', 'horseshoe': ws.horseshoe, 'num_cores': 4, 'n_rollup': int(0), 'rollup_dt': ws.dt, 'rollup_aic_refresh': 1, 'rollup_tolerance': 1e-4, 'velocity_field_generator': 'SteadyVelocityField', 'velocity_field_input': {'u_inf': ws.u_inf, 'u_inf_direction': [1., 0, 0]}, 'rho': ws.rho}, 'max_iter': 200, 'n_load_steps': 1, 'tolerance': ws.tolerance, 'relaxation_factor': 0.2} ``` ###### Trim solver[¶](#Trim-solver) ``` [9]: ``` ``` settings['StaticTrim'] = {'solver': 'StaticCoupled', 'solver_settings': settings['StaticCoupled'], 'thrust_nodes': ws.thrust_nodes, 'initial_alpha': ws.alpha, 'initial_deflection': ws.cs_deflection, 'initial_thrust': ws.thrust, 'max_iter': 200, 'fz_tolerance': 1e-2, 'fx_tolerance': 1e-2, 
'm_tolerance': 1e-2} ``` ###### Nonlinear Equilibrium Post-process[¶](#Nonlinear-Equilibrium-Post-process) ``` [10]: ``` ``` settings['AerogridPlot'] = {'folder': './output/', 'include_rbm': 'off', 'include_applied_forces': 'on', 'minus_m_star': 0, 'u_inf': ws.u_inf } settings['AeroForcesCalculator'] = {'folder': './output/', 'write_text_file': 'off', 'text_file_name': ws.case_name + '_aeroforces.csv', 'screen_output': 'on', 'unsteady': 'off', 'coefficients': True, 'q_ref': 0.5 * ws.rho * ws.u_inf ** 2, 'S_ref': 12.809, } settings['BeamPlot'] = {'folder': './output/', 'include_rbm': 'on', 'include_applied_forces': 'on', 'include_FoR': 'on'} ``` ###### DynamicCoupled Solver[¶](#DynamicCoupled-Solver) As mentioned before, a single time step of `DynamicCoupled` is required to give the structure the velocity required for the linearisation of the rigid body equations to be correct. Hence `n_time_steps = 1` ``` [11]: ``` ``` struct_solver_settings = {'print_info': 'off', 'initial_velocity_direction': [-1., 0., 0.], 'max_iterations': 950, 'delta_curved': 1e-6, 'min_delta': ws.tolerance, 'newmark_damp': 5e-3, 'gravity_on': True, 'gravity': 9.81, 'num_steps': ws.n_tstep, 'dt': ws.dt, 'initial_velocity': ws.u_inf * 1} step_uvlm_settings = {'print_info': 'on', 'horseshoe': ws.horseshoe, 'num_cores': 4, 'n_rollup': 1, 'convection_scheme': ws.wake_type, 'rollup_dt': ws.dt, 'rollup_aic_refresh': 1, 'rollup_tolerance': 1e-4, 'velocity_field_generator': 'SteadyVelocityField', 'velocity_field_input': {'u_inf': ws.u_inf * 0, 'u_inf_direction': [1., 0., 0.]}, 'rho': ws.rho, 'n_time_steps': ws.n_tstep, 'dt': ws.dt, 'gamma_dot_filtering': 3} settings['DynamicCoupled'] = {'print_info': 'on', 'structural_solver': 'NonLinearDynamicCoupledStep', 'structural_solver_settings': struct_solver_settings, 'aero_solver': 'StepUvlm', 'aero_solver_settings': step_uvlm_settings, 'fsi_substeps': 200, 'fsi_tolerance': ws.fsi_tolerance, 'relaxation_factor': ws.relaxation_factor, 'minimum_steps': 1, 
'relaxation_steps': 150, 'final_relaxation_factor': 0.5, 'n_time_steps': 1, 'dt': ws.dt, 'include_unsteady_force_contribution': 'off', } ``` ###### Modal Solver Settings[¶](#Modal-Solver-Settings) ``` [12]: ``` ``` settings['Modal'] = {'print_info': True, 'use_undamped_modes': True, 'NumLambda': 30, 'rigid_body_modes': True, 'write_modes_vtk': 'on', 'print_matrices': 'on', 'write_data': 'on', 'continuous_eigenvalues': 'off', 'dt': ws.dt, 'plot_eigenvalues': False, 'rigid_modes_cg': False} ``` ###### Linear Assembler Settings[¶](#Linear-Assembler-Settings) Note that for the assembly of the linear system, we replace the parametrisation of the orientation with Euler angles instead of quaternions. ``` [13]: ``` ``` settings['LinearAssembler'] = {'linear_system': 'LinearAeroelastic', 'linear_system_settings': { 'beam_settings': {'modal_projection': 'off', 'inout_coords': 'modes', 'discrete_time': True, 'newmark_damp': 0.5e-2, 'discr_method': 'newmark', 'dt': ws.dt, 'proj_modes': 'undamped', 'num_modes': 9, 'print_info': 'on', 'gravity': 'on', 'remove_dofs': []}, 'aero_settings': {'dt': ws.dt, 'integr_order': 2, 'density': ws.rho, 'remove_predictor': 'off', 'use_sparse': 'off', 'rigid_body_motion': 'on', 'remove_inputs': ['u_gust']}, 'rigid_body_motion': True, 'track_body': 'on', 'use_euler': 'on', 'linearisation_tstep': -1 }} ``` ###### Asymptotic Stability Post-processor[¶](#Asymptotic-Stability-Post-processor) ``` [14]: ``` ``` settings['AsymptoticStability'] = { 'print_info': 'on', 'frequency_cutoff': 0, 'export_eigenvalues': 'on', 'num_evals': 1000, 'folder': './output/'} ``` ###### Stability Derivatives Post-processor[¶](#Stability-Derivatives-Post-processor) ``` [15]: ``` ``` settings['StabilityDerivatives'] = {'u_inf': ws.u_inf, 'S_ref': 12.809, 'b_ref': ws.span, 'c_ref': 0.719} ``` ###### Write solver file[¶](#Write-solver-file) ``` [16]: ``` ``` config = configobj.ConfigObj() np.set_printoptions(precision=16) file_name = ws.case_route + '/' + ws.case_name + 
'.sharpy' config.filename = file_name for k, v in settings.items(): config[k] = v config.write() ``` ##### Run Simulation[¶](#Run-Simulation) ``` [17]: ``` ``` data = sharpy.sharpy_main.main(['', ws.case_route + '/' + ws.case_name + '.sharpy']) ``` ``` --- ###### ## ## ### ######## ######## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #### ###### ######### ## ## ######## ######## ## ## ## ## ######### ## ## ## ## ## ## ## ## ## ## ## ## ## ## ###### ## ## ## ## ## ## ## ## --- Aeroelastics Lab, Aeronautics Department. Copyright (c), Imperial College London. All rights reserved. License available at https://github.com/imperialcollegelondon/sharpy Running SHARPy from /home/ng213/code/sharpy/docs/source/content/example_notebooks SHARPy being run is in /home/ng213/code/sharpy The branch being run is dev_examples The version and commit hash are: v0.1-1590-g7385fe4-7385fe4 The available solvers on this session are: PreSharpy _BaseStructural AerogridLoader BeamLoader DynamicCoupled DynamicUVLM LinDynamicSim LinearAssembler Modal NoAero NonLinearDynamic NonLinearDynamicCoupledStep NonLinearDynamicMultibody NonLinearDynamicPrescribedStep NonLinearStatic NonLinearStaticMultibody PrescribedUvlm RigidDynamicPrescribedStep SHWUvlm StaticCoupled StaticCoupledRBM StaticTrim StaticUvlm StepLinearUVLM StepUvlm Trim Cleanup PickleData SaveData CreateSnapshot PlotFlowField StabilityDerivatives AeroForcesCalculator WriteVariablesTime AerogridPlot LiftDistribution StallCheck FrequencyResponse AsymptoticStability BeamPlot BeamLoads Generating an instance of BeamLoader Generating an instance of AerogridLoader Variable control_surface_deflection_generator_settings has no assigned value in the settings file. 
will default to the value: [] The aerodynamic grid contains 4 surfaces Surface 0, M=4, N=2 Wake 0, M=20, N=2 Surface 1, M=4, N=22 Wake 1, M=20, N=22 Surface 2, M=4, N=2 Wake 2, M=20, N=2 Surface 3, M=4, N=22 Wake 3, M=20, N=22 In total: 192 bound panels In total: 960 wake panels Total number of panels = 1152 Generating an instance of StaticTrim Variable print_info has no assigned value in the settings file. will default to the value: c_bool(True) Variable tail_cs_index has no assigned value in the settings file. will default to the value: c_int(0) Variable initial_angle_eps has no assigned value in the settings file. will default to the value: c_double(0.05) Variable initial_thrust_eps has no assigned value in the settings file. will default to the value: c_double(2.0) Variable relaxation_factor has no assigned value in the settings file. will default to the value: c_double(0.2) Generating an instance of StaticCoupled Generating an instance of NonLinearStatic Variable newmark_damp has no assigned value in the settings file. will default to the value: c_double(0.0001) Variable relaxation_factor has no assigned value in the settings file. will default to the value: c_double(0.3) Variable dt has no assigned value in the settings file. will default to the value: c_double(0.01) Variable num_steps has no assigned value in the settings file. will default to the value: c_int(500) Variable initial_position has no assigned value in the settings file. will default to the value: [0. 0. 0.] Generating an instance of StaticUvlm Variable iterative_solver has no assigned value in the settings file. will default to the value: c_bool(False) Variable iterative_tol has no assigned value in the settings file. will default to the value: c_double(0.0001) Variable iterative_precond has no assigned value in the settings file. 
will default to the value: c_bool(False)
|===|===|===|===|===|===|===|===|===|
|iter |step | log10(res) | Fx | Fy | Fz | Mx | My | Mz |
|===|===|===|===|===|===|===|===|===|
|===|===|===|===|===|===|===|===|===|===|
| iter | alpha | elev | thrust | Fx | Fy | Fz | Mx | My | Mz |
|===|===|===|===|===|===|===|===|===|===|
| 0 | 0 | 0.00000 | -0.1051 | -0.0000 | 0.0604 | -0.0000 | 1.0816 | -0.0000 |
| 1 | 0 | -7.62661 | -0.1119 | 0.0000 | 0.1274 | -0.0000 | 0.0026 | 0.0000 |
| 2 | 0 | -8.34118 | -0.0941 | -0.0000 | 0.0393 | 0.0000 | -0.0793 | 0.0000 |
| 3 | 0 | -9.29266 | -0.0870 | 0.0000 | 0.0007 | -0.0000 | -0.0089 | 0.0000 |
| 4 | 0 | -10.67443 | -0.0876 | 0.0000 | 0.0028 | 0.0000 | -0.0119 | 0.0000 |
| 5 | 0 | -10.87008 | -0.0878 | 0.0000 | 0.0039 | -0.0000 | -0.0138 | 0.0000 |
| 6 | 0 | -11.64423 | -0.0877 | -0.0000 | 0.0037 | 0.0000 | -0.0135 | -0.0000 |
| 7 | 0 | -12.87835 | -0.0877 | 0.0000 | 0.0037 | -0.0000 | -0.0135 | 0.0000 |
| 0 | 4.5135 | 0.1814 | 5.5129 | -0.0877 | 0.0000 | 0.0037 | -0.0000 | -0.0135 | 0.0000 |
| 0 | 0 | 0.00000 |-116.9870 | -0.0000 | 994.8061 | -0.0000 |-882.4096 | 0.0000 |
| 1 | 0 | -5.81338 | -81.0252 | -0.0000 | 944.6090 | -0.0000 |-802.5942 | -0.0000 |
| 2 | 0 | -6.69380 | -74.1802 | -0.0000 | 937.6597 | -0.0000 |-792.1302 | 0.0000 |
| 3 | 0 | -7.20553 | -75.0015 | 0.0000 | 939.7800 | -0.0000 |-795.7138 | -0.0000 |
| 4 | 0 | -8.63165 | -74.9725 | 0.0000 | 939.7033 | -0.0000 |-795.5808 | 0.0000 |
| 5 | 0 | -8.79020 | -74.9511 | 0.0000 | 939.6488 | 0.0000 |-795.4881 | 0.0000 |
| 6 | 0 | -9.56992 | -74.9546 | -0.0000 | 939.6578 | 0.0000 |-795.5035 | -0.0000 |
| 7 | 0 | -10.77969 | -74.9549 | 0.0000 | 939.6584 | -0.0000 |-795.5044 | 0.0000 |
| 8 | 0 | -10.98913 | -74.9547 | -0.0000 | 939.6580 | 0.0000 |-795.5039 | -0.0000 |
| 9 | 0 | -12.12706 | -74.9547 | -0.0000 | 939.6581 | 0.0000 |-795.5039 | -0.0000 |
| 0 | 7.3783 | -2.6834 | 5.5129 | -74.9547 | -0.0000 | 939.6581 | 0.0000 |-795.5039 | -0.0000 |
| 0 | 0 | 0.00000 | -32.9132 | -0.0000 | 371.4719 | -0.0000 |-902.7965 | -0.0000 |
| 1 | 0 | -5.48782 | -11.8207 | 0.0000 | 298.8468 | -0.0000 |-777.2958 | 0.0000 |
| 2 | 0 | -6.40702 | -8.5206 | 0.0000 | 289.8069 | -0.0000 |-761.4879 | -0.0000 |
| 3 | 0 | -6.85006 | -9.2671 | -0.0000 | 293.2383 | 0.0000 |-767.3401 | -0.0000 |
| 4 | 0 | -8.25284 | -9.2365 | 0.0000 | 293.1025 | -0.0000 |-767.1091 | 0.0000 |
| 5 | 0 | -8.43408 | -9.2166 | 0.0000 | 293.0132 | 0.0000 |-766.9570 | 0.0000 |
| 6 | 0 | -9.20313 | -9.2199 | -0.0000 | 293.0284 | -0.0000 |-766.9829 | -0.0000 |
| 7 | 0 | -10.44114 | -9.2201 | 0.0000 | 293.0293 | -0.0000 |-766.9844 | 0.0000 |
| 8 | 0 | -10.62337 | -9.2200 | -0.0000 | 293.0287 | 0.0000 |-766.9834 | -0.0000 |
| 9 | 0 | -11.73764 | -9.2200 | -0.0000 | 293.0287 | 0.0000 |-766.9835 | -0.0000 |
| 10 | 0 | -12.29756 | -9.2200 | 0.0000 | 293.0287 | -0.0000 |-766.9835 | 0.0000 |
| 0 | 4.5135 | 3.0462 | 5.5129 | -9.2200 | 0.0000 | 293.0287 | -0.0000 |-766.9835 | 0.0000 |
| 0 | 0 | 0.00000 | -4.1051 | -0.0000 | 0.0604 | -0.0000 | 1.0812 | -0.0000 |
| 1 | 0 | -7.62660 | -4.1120 | -0.0000 | 0.1274 | -0.0000 | 0.0023 | -0.0000 |
| 2 | 0 | -8.34116 | -4.0942 | -0.0000 | 0.0393 | 0.0000 | -0.0796 | -0.0000 |
| 3 | 0 | -9.29268 | -4.0871 | 0.0000 | 0.0007 | -0.0000 | -0.0093 | 0.0000 |
| 4 | 0 | -10.67446 | -4.0876 | 0.0000 | 0.0028 | 0.0000 | -0.0123 | 0.0000 |
| 5 | 0 | -10.86979 | -4.0878 | 0.0000 | 0.0039 | -0.0000 | -0.0142 | 0.0000 |
| 6 | 0 | -11.64229 | -4.0878 | 0.0000 | 0.0037 | -0.0000 | -0.0138 | 0.0000 |
| 7 | 0 | -12.86703 | -4.0878 | 0.0000 | 0.0037 | -0.0000 | -0.0138 | 0.0000 |
| 0 | 4.5135 | 0.1814 | 7.5129 | -4.0878 | 0.0000 | 0.0037 | -0.0000 | -0.0138 | 0.0000 |
| 0 | 0 | 0.00000 | -0.0165 | 0.0000 | 0.0499 | 0.0000 | 1.1009 | 0.0000 |
| 1 | 0 | -7.62629 | -0.0236 | 0.0000 | 0.1184 | 0.0000 | 0.0195 | 0.0000 |
| 2 | 0 | -8.34096 | -0.0058 | -0.0000 | 0.0305 | -0.0000 | -0.0628 | -0.0000 |
| 3 | 0 | -9.29199 | 0.0012 | 0.0000 | -0.0082 | 0.0000 | 0.0077 | 0.0000 |
| 4 | 0 | -10.67407 | 0.0007 | 0.0000 | -0.0061 | -0.0000 | 0.0047 | 0.0000 |
| 5 | 0 | -10.86912 | 0.0005 | -0.0000 | -0.0050 | 0.0000 | 0.0028 | -0.0000 |
| 6 | 0 | -11.64225 | 0.0005 | 0.0000 | -0.0052 | -0.0000 | 0.0031 | 0.0000 |
| 7 | 0 | -12.83721 | 0.0005 | -0.0000 | -0.0052 | 0.0000 | 0.0032 | -0.0000 |
| 1 | 4.5135 | 0.1814 | 5.4690 | 0.0005 | -0.0000 | -0.0052 | 0.0000 | 0.0032 | -0.0000 |
Generating an instance of BeamPlot
Variable include_applied_moments has no assigned value in the settings file.
will default to the value: c_bool(True)
Variable name_prefix has no assigned value in the settings file.
will default to the value:
Variable output_rbm has no assigned value in the settings file.
will default to the value: c_bool(True)
...Finished
Generating an instance of AerogridPlot
Variable include_forward_motion has no assigned value in the settings file.
will default to the value: c_bool(False)
Variable include_unsteady_applied_forces has no assigned value in the settings file.
will default to the value: c_bool(False)
Variable name_prefix has no assigned value in the settings file.
will default to the value:
Variable dt has no assigned value in the settings file.
will default to the value: c_double(0.0)
Variable include_velocities has no assigned value in the settings file.
will default to the value: c_bool(False)
Variable num_cores has no assigned value in the settings file.
will default to the value: c_int(1)
...Finished
Generating an instance of AeroForcesCalculator
---
tstep | fx_g | fy_g | fz_g | Cfx_g | Cfy_g | Cfz_g
0 | 1.090e+01 | -1.250e-07 | 1.835e+03 | 2.129e-03 | -2.441e-11 | 3.583e-01
...Finished
Generating an instance of DynamicCoupled
Variable structural_substeps has no assigned value in the settings file.
will default to the value: c_int(0)
Variable dynamic_relaxation has no assigned value in the settings file.
will default to the value: c_bool(False)
Variable postprocessors has no assigned value in the settings file.
will default to the value: []
Variable postprocessors_settings has no assigned value in the settings file.
will default to the value: {}
Variable controller_id has no assigned value in the settings file.
will default to the value: {}
Variable controller_settings has no assigned value in the settings file.
will default to the value: {}
Variable cleanup_previous_solution has no assigned value in the settings file.
will default to the value: c_bool(False)
Variable steps_without_unsteady_force has no assigned value in the settings file.
will default to the value: c_int(0)
Variable pseudosteps_ramp_unsteady_force has no assigned value in the settings file.
will default to the value: c_int(0)
Generating an instance of NonLinearDynamicCoupledStep
Variable num_load_steps has no assigned value in the settings file.
will default to the value: c_int(1)
Variable relaxation_factor has no assigned value in the settings file.
will default to the value: c_double(0.3)
Variable balancing has no assigned value in the settings file.
will default to the value: c_bool(False)
Generating an instance of StepUvlm
Variable iterative_solver has no assigned value in the settings file.
will default to the value: c_bool(False)
Variable iterative_tol has no assigned value in the settings file.
will default to the value: c_double(0.0001)
Variable iterative_precond has no assigned value in the settings file.
will default to the value: c_bool(False)
|===|===|===|===|===|===|===|===|
| ts | t | iter | struc ratio | iter time | residual vel | FoR_vel(x) | FoR_vel(z) |
|===|===|===|===|===|===|===|===|
```

```
/home/ng213/code/sharpy/sharpy/aero/utils/uvlmlib.py:230: RuntimeWarning: invalid value encountered in true_divide
  flightconditions.uinf_direction = np.ctypeslib.as_ctypes(ts_info.u_ext[0][:, 0, 0]/flightconditions.uinf)
```

```
| 1 | 0.0089 | 3 | 0.652648 | 0.921936 | -10.549271 |-2.791317e+01 |-2.203427e+00 |
...Finished
Generating an instance of Modal
Variable folder has no assigned value in the settings file.
will default to the value: ./output
Variable keep_linear_matrices has no assigned value in the settings file.
will default to the value: c_bool(True)
Variable write_dat has no assigned value in the settings file.
will default to the value: c_bool(True)
Variable delta_curved has no assigned value in the settings file.
will default to the value: c_double(0.01)
Variable max_rotation_deg has no assigned value in the settings file.
will default to the value: c_double(15.0)
Variable max_displacement has no assigned value in the settings file.
will default to the value: c_double(0.15)
Variable use_custom_timestep has no assigned value in the settings file.
will default to the value: c_int(-1)
Structural eigenvalues
|===|===|===|===|===|===|===|
| mode | eval_real | eval_imag | freq_n (Hz) | freq_d (Hz) | damping | period (s) |
|===|===|===|===|===|===|===|
```

```
/home/ng213/code/sharpy/sharpy/solvers/modal.py:284: UserWarning: Projecting a system with damping on undamped modal shapes
  'Projecting a system with damping on undamped modal shapes')
```

```
| 0 | -0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | inf |
| 1 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | inf |
| 2 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | inf |
| 3 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | inf |
| 4 | -0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | inf |
| 5 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | inf |
| 6 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | inf |
| 7 | -0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | inf |
| 8 | -0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | inf |
| 9 | -0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | inf |
| 10 | 0.000000 | 28.293939 | 4.503120 | 4.503120 | -0.000000 | 0.222068 |
| 11 | 0.000000 | 29.271318 | 4.658675 | 4.658675 | -0.000000 | 0.214653 |
| 12 | 0.000000 | 54.780234 | 8.718545 | 8.718545 | -0.000000 | 0.114698 |
| 13 | 0.000000 | 58.999779 | 9.390106 | 9.390106 | -0.000000 | 0.106495 |
| 14 | 0.000000 | 70.520741 | 11.223724 | 11.223724 | -0.000000 | 0.089097 |
| 15 | 0.000000 | 76.917111 | 12.241738 | 12.241738 | -0.000000 | 0.081688 |
| 16 | 0.000000 | 87.324076 | 13.898058 | 13.898058 | -0.000000 | 0.071952 |
| 17 | 0.000000 | 108.035577 | 17.194396 | 17.194396 | -0.000000 | 0.058158 |
| 18 | 0.000000 | 119.692139 | 19.049596 | 19.049596 | -0.000000 | 0.052495 |
| 19 | 0.000000 | 133.495187 | 21.246419 | 21.246419 | -0.000000 | 0.047067 |
| 20 | 0.000000 | 134.444788 | 21.397553 | 21.397553 | -0.000000 | 0.046734 |
| 21 | 0.000000 | 151.060442 | 24.042016 | 24.042016 | -0.000000 | 0.041594 |
| 22 | 0.000000 | 159.369020 | 25.364367 | 25.364367 | -0.000000 | 0.039425 |
| 23 | 0.000000 | 171.256102 | 27.256255 | 27.256255 | -0.000000 | 0.036689 |
| 24 | 0.000000 | 173.895881 | 27.676389 | 27.676389 | -0.000000 | 0.036132 |
| 25 | 0.000000 | 199.016557 | 31.674469 | 31.674469 | -0.000000 | 0.031571 |
| 26 | 0.000000 | 205.412581 | 32.692428 | 32.692428 | -0.000000 | 0.030588 |
| 27 | 0.000000 | 205.419531 | 32.693534 | 32.693534 | -0.000000 | 0.030587 |
| 28 | 0.000000 | 223.563796 | 35.581283 | 35.581283 | -0.000000 | 0.028105 |
| 29 | 0.000000 | 227.924750 | 36.275351 | 36.275351 | -0.000000 | 0.027567 |
Generating an instance of LinearAssembler
Variable linearisation_tstep has no assigned value in the settings file.
will default to the value: c_int(-1)
Generating an instance of LinearAeroelastic
Variable uvlm_filename has no assigned value in the settings file.
will default to the value:
Generating an instance of LinearUVLM
Variable ScalingDict has no assigned value in the settings file.
will default to the value: {}
Variable gust_assembler has no assigned value in the settings file.
will default to the value:
Variable rom_method has no assigned value in the settings file.
will default to the value: []
Variable rom_method_settings has no assigned value in the settings file.
will default to the value: {}
Variable length has no assigned value in the settings file.
will default to the value: 1.0
Variable speed has no assigned value in the settings file.
will default to the value: 1.0
Variable density has no assigned value in the settings file.
will default to the value: 1.0
Initialising Static linear UVLM solver class...
...done in 0.27 sec
State-space realisation of UVLM equations started...
state-space model produced in form:
x_{n+1} = A x_{n} + Bp u_{n+1}
...done in 2.44 sec
Generating an instance of LinearBeam
Variable remove_sym_modes has no assigned value in the settings file.
will default to the value: False
Warning, projecting system with damping onto undamped modes
Linearising gravity terms...
M = 187.12 kg
X_CG A -> 1.19 -0.00 0.01
Node 1 -> B 0.000 -0.089 -0.000 -> A 0.089 0.206 0.000 -> G 0.089 0.206 -0.007
Node mass: Matrix: 2.6141
Node 2 -> B -0.010 -0.019 -0.000 -> A 0.019 0.403 0.000 -> G 0.019 0.403 -0.001
Node mass: Matrix: 7.3672
Node 3 -> B -0.019 -0.087 -0.000 -> A 0.234 0.800 0.000 -> G 0.234 0.800 -0.018
Node mass: Matrix: 5.8780
Node 4 -> B -0.019 -0.084 -0.000 -> A 0.390 1.238 0.001 -> G 0.389 1.238 -0.030
Node mass: Matrix: 2.8288
Node 5 -> B -0.018 -0.081 -0.000 -> A 0.546 1.676 0.001 -> G 0.544 1.676 -0.041
Node mass: Matrix: 5.4372
Node 6 -> B -0.017 -0.078 -0.000 -> A 0.702 2.113 0.002 -> G 0.700 2.113 -0.053
Node mass: Matrix: 2.6084
Node 7 -> B -0.016 -0.074 -0.000 -> A 0.857 2.551 0.003 -> G 0.855 2.551 -0.064
Node mass: Matrix: 4.9963
Node 8 -> B -0.016 -0.071 -0.000 -> A 1.013 2.988 0.004 -> G 1.010 2.988 -0.076
Node mass: Matrix: 2.3879
Node 9 -> B -0.015 -0.068 -0.000 -> A 1.169 3.426 0.005 -> G 1.166 3.426 -0.087
Node mass: Matrix: 4.5555
Node 10 -> B -0.014 -0.065 -0.000 -> A 1.325 3.863 0.006 -> G 1.321 3.863 -0.098
Node mass: Matrix: 2.1675
Node 11 -> B -0.013 -0.061 -0.000 -> A 1.480 4.301 0.007 -> G 1.476 4.301 -0.109
Node mass: Matrix: 4.1146
Node 12 -> B -0.013 -0.058 -0.000 -> A 1.636 4.739 0.009 -> G 1.632 4.739 -0.120
Node mass: Matrix: 1.9471
Node 13 -> B -0.012 -0.055 -0.000 -> A 1.792 5.176 0.010 -> G 1.787 5.176 -0.131
Node mass: Matrix: 3.6738
Node 14 -> B -0.011 -0.052 -0.000 -> A 1.948 5.614 0.011 -> G 1.943 5.614 -0.142
Node mass: Matrix: 1.7267
Node 15 -> B -0.011 -0.048 -0.000 -> A 2.104 6.052 0.012 -> G 2.098 6.052 -0.153
Node mass: Matrix: 3.2329
Node 16 -> B -0.010 -0.045 -0.000 -> A 2.260 6.489 0.014 -> G 2.254 6.489 -0.164
Node mass: Matrix: 1.5062
Node 17 -> B -0.009 -0.042 -0.000 -> A 2.415 6.927 0.015 -> G 2.409 6.927 -0.175
Node mass: Matrix: 2.7921
Node 18 -> B -0.008 -0.039 -0.000 -> A 2.571 7.364 0.016 -> G 2.564 7.364 -0.186
Node mass: Matrix: 1.2858
Node 19 -> B -0.008 -0.035 -0.000 -> A 2.727 7.802 0.017 -> G 2.720 7.802 -0.197
Node mass: Matrix: 2.3512
Node 20 -> B -0.007 -0.032 -0.000 -> A 2.883 8.239 0.019 -> G 2.875 8.239 -0.208
Node mass: Matrix: 1.0654
Node 21 -> B -0.006 -0.028 -0.000 -> A 3.038 8.677 0.020 -> G 3.030 8.677 -0.219
Node mass: Matrix: 1.9104
Node 22 -> B -0.006 -0.026 -0.000 -> A 3.194 9.114 0.021 -> G 3.186 9.114 -0.230
Node mass: Matrix: 0.8450
Node 23 -> B -0.005 -0.022 -0.000 -> A 3.350 9.552 0.023 -> G 3.341 9.552 -0.241
Node mass: Matrix: 1.4695
Node 24 -> B -0.005 -0.022 -0.000 -> A 3.508 9.988 0.024 -> G 3.499 9.988 -0.252
Node mass: Matrix: 0.3674
Node 25 -> B 0.000 0.089 -0.000 -> A 0.089 -0.206 0.000 -> G 0.089 -0.206 -0.007
Node mass: Matrix: 2.6141
Node 26 -> B -0.010 0.019 -0.000 -> A 0.019 -0.403 0.000 -> G 0.019 -0.403 -0.001
Node mass: Matrix: 7.3672
Node 27 -> B -0.019 0.087 -0.000 -> A 0.234 -0.800 0.000 -> G 0.234 -0.800 -0.018
Node mass: Matrix: 5.8780
Node 28 -> B -0.019 0.084 -0.000 -> A 0.390 -1.238 0.001 -> G 0.389 -1.238 -0.030
Node mass: Matrix: 2.8288
Node 29 -> B -0.018 0.081 -0.000 -> A 0.546 -1.676 0.001 -> G 0.544 -1.676 -0.041
Node mass: Matrix: 5.4372
Node 30 -> B -0.017 0.078 -0.000 -> A 0.702 -2.113 0.002 -> G 0.700 -2.113 -0.053
Node mass: Matrix: 2.6084
Node 31 -> B -0.016 0.074 -0.000 -> A 0.857 -2.551 0.003 -> G 0.855 -2.551 -0.064
Node mass: Matrix: 4.9963
Node 32 -> B -0.016 0.071 -0.000 -> A 1.013 -2.988 0.004 -> G 1.010 -2.988 -0.076
Node mass: Matrix: 2.3879
Node 33 -> B -0.015 0.068 -0.000 -> A 1.169 -3.426 0.005 -> G 1.166 -3.426 -0.087
Node mass: Matrix: 4.5555
Node 34 -> B -0.014 0.065 -0.000 -> A 1.325 -3.863 0.006 -> G 1.321 -3.863 -0.098
Node mass: Matrix: 2.1675
Node 35 -> B -0.013 0.061 -0.000 -> A 1.480 -4.301 0.007 -> G 1.476 -4.301 -0.109
Node mass: Matrix: 4.1146
Node 36 -> B -0.013 0.058 -0.000 -> A 1.636 -4.739 0.009 -> G 1.632 -4.739 -0.120
Node mass: Matrix: 1.9471
Node 37 -> B -0.012 0.055 -0.000 -> A 1.792 -5.176 0.010 -> G 1.787 -5.176 -0.131
Node mass: Matrix: 3.6738
Node 38 -> B -0.011 0.052 -0.000 -> A 1.948 -5.614 0.011 -> G 1.943 -5.614 -0.142
Node mass: Matrix: 1.7267
Node 39 -> B -0.011 0.048 -0.000 -> A 2.104 -6.052 0.012 -> G 2.098 -6.052 -0.153
Node mass: Matrix: 3.2329
Node 40 -> B -0.010 0.045 -0.000 -> A 2.260 -6.489 0.014 -> G 2.254 -6.489 -0.164
Node mass: Matrix: 1.5062
Node 41 -> B -0.009 0.042 -0.000 -> A 2.415 -6.927 0.015 -> G 2.409 -6.927 -0.175
Node mass: Matrix: 2.7921
Node 42 -> B -0.008 0.039 -0.000 -> A 2.571 -7.364 0.016 -> G 2.564 -7.364 -0.186
Node mass: Matrix: 1.2858
Node 43 -> B -0.008 0.035 -0.000 -> A 2.727 -7.802 0.017 -> G 2.720 -7.802 -0.197
Node mass: Matrix: 2.3512
Node 44 -> B -0.007 0.032 -0.000 -> A 2.883 -8.239 0.019 -> G 2.875 -8.239 -0.208
Node mass: Matrix: 1.0654
Node 45 -> B -0.006 0.028 -0.000 -> A 3.038 -8.677 0.020 -> G 3.030 -8.677 -0.219
Node mass: Matrix: 1.9104
Node 46 -> B -0.006 0.026 -0.000 -> A 3.194 -9.114 0.021 -> G 3.186 -9.114 -0.230
Node mass: Matrix: 0.8450
Node 47 -> B -0.005 0.022 -0.000 -> A 3.350 -9.552 0.023 -> G 3.341 -9.552 -0.241
Node mass: Matrix: 1.4695
Node 48 -> B -0.005 0.022 -0.000 -> A 3.508 -9.988 0.024 -> G 3.499 -9.988 -0.252
Node mass: Matrix: 0.3674
Updated the beam C, modal C and K matrices with the terms from the gravity linearisation
Aeroelastic system assembled:
Aerodynamic states: 1536
Structural states: 594
Total states: 2130
Inputs: 893
Outputs: 891
Generating an instance of AsymptoticStability
Variable reference_velocity has no assigned value in the settings file.
will default to the value: c_double(1.0)
Variable display_root_locus has no assigned value in the settings file.
will default to the value: c_bool(False)
Variable velocity_analysis has no assigned value in the settings file.
will default to the value: []
Variable modes_to_plot has no assigned value in the settings file.
will default to the value: []
Variable postprocessors has no assigned value in the settings file.
will default to the value: []
Variable postprocessors_settings has no assigned value in the settings file.
will default to the value: {}
Dynamical System Eigenvalues
|===|===|===|===|===|===|===|
| mode | eval_real | eval_imag | freq_n (Hz) | freq_d (Hz) | damping | period (s) |
|===|===|===|===|===|===|===|
| 0 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | inf |
| 1 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | inf |
| 2 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | inf |
| 3 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | inf |
| 4 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | inf |
| 5 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | inf |
| 6 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | inf |
| 7 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | inf |
| 8 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | inf |
| 9 | -0.000000 | 0.000000 | 0.000000 | 0.000000 | 1.000000 | inf |
| 10 | -0.000884 | 0.321695 | 0.051200 | 0.051199 | 0.002749 | 19.531499 |
| 11 | -0.000884 | -0.321695 | 0.051200 | 0.051199 | 0.002749 | 19.531499 |
| 12 | -0.008471 | -0.391290 | 0.062290 | 0.062276 | 0.021644 | 16.057627 |
| 13 | -0.008471 | 0.391290 | 0.062290 | 0.062276 | 0.021644 | 16.057627 |
| 14 | -0.022506 | 0.000000 | 0.003582 | 0.000000 | 1.000000 | inf |
| 15 | -0.064807 | -53.732340 | 8.551774 | 8.551767 | 0.001206 | 0.116935 |
| 16 | -0.064807 | 53.732340 | 8.551774 | 8.551767 | 0.001206 | 0.116935 |
| 17 | -0.101946 | 68.319126 | 10.873339 | 10.873327 | 0.001492 | 0.091968 |
| 18 | -0.101946 | -68.319126 | 10.873339 | 10.873327 | 0.001492 | 0.091968 |
| 19 | -0.147587 | 83.265087 | 13.252071 | 13.252050 | 0.001772 | 0.075460 |
| 20 | -0.147587 | -83.265087 | 13.252071 | 13.252050 | 0.001772 | 0.075460 |
| 21 | -0.248703 | 109.925761 | 17.495273 | 17.495228 | 0.002262 | 0.057158 |
| 22 | -0.248703 | -109.925761 | 17.495273 | 17.495228 | 0.002262 | 0.057158 |
| 23 | -0.293471 | -120.387486 | 19.160320 | 19.160263 | 0.002438 | 0.052191 |
| 24 | -0.293471 | 120.387486 | 19.160320 | 19.160263 | 0.002438 | 0.052191 |
| 25 | -0.350319 | -132.903267 | 21.152285 | 21.152212 | 0.002636 | 0.047276 |
| 26 | -0.350319 | 132.903267 | 21.152285 | 21.152212 | 0.002636 | 0.047276 |
| 27 | -0.376400 | -138.516845 | 22.045722 | 22.045641 | 0.002717 | 0.045360 |
| 28 | -0.376400 | 138.516845 | 22.045722 | 22.045641 | 0.002717 | 0.045360 |
| 29 | -0.494445 | -162.714219 | 25.896892 | 25.896772 | 0.003039 | 0.038615 |
| 30 | -0.494445 | 162.714219 | 25.896892 | 25.896772 | 0.003039 | 0.038615 |
| 31 | -0.511650 | 166.238203 | 26.457757 | 26.457632 | 0.003078 | 0.037796 |
| 32 | -0.511650 | -166.238203 | 26.457757 | 26.457632 | 0.003078 | 0.037796 |
| 33 | -0.559180 | 175.709808 | 27.965226 | 27.965084 | 0.003182 | 0.035759 |
| 34 | -0.559180 | -175.709808 | 27.965226 | 27.965084 | 0.003182 | 0.035759 |
| 35 | -0.569755 | -177.873185 | 28.309542 | 28.309397 | 0.003203 | 0.035324 |
| 36 | -0.569755 | 177.873185 | 28.309542 | 28.309397 | 0.003203 | 0.035324 |
| 37 | -0.669914 | -197.999013 | 31.512702 | 31.512522 | 0.003383 | 0.031733 |
| 38 | -0.669914 | 197.999013 | 31.512702 | 31.512522 | 0.003383 | 0.031733 |
| 39 | -0.678424 | -199.782668 | 31.796582 | 31.796399 | 0.003396 | 0.031450 |
| 40 | -0.678424 | 199.782668 | 31.796582 | 31.796399 | 0.003396 | 0.031450 |
| 41 | -0.715684 | -207.440558 | 33.015387 | 33.015190 | 0.003450 | 0.030289 |
| 42 | -0.715684 | 207.440558 | 33.015387 | 33.015190 | 0.003450 | 0.030289 |
| 43 | -0.721193 | -208.622990 | 33.203578 | 33.203380 | 0.003457 | 0.030117 |
| 44 | -0.721193 | 208.622990 | 33.203578 | 33.203380 | 0.003457 | 0.030117 |
| 45 | -0.796838 | -224.809925 | 35.779836 | 35.779611 | 0.003544 | 0.027949 |
| 46 | -0.796838 | 224.809925 | 35.779836 | 35.779611 | 0.003544 | 0.027949 |
| 47 | -0.801462 | -225.851206 | 35.945562 | 35.945336 | 0.003549 | 0.027820 |
| 48 | -0.801462 | 225.851206 | 35.945562 | 35.945336 | 0.003549 | 0.027820 |
| 49 | -0.823221 | -257.880049 | 41.043094 | 41.042885 | 0.003192 | 0.024365 |
| 50 | -0.823221 | 257.880049 | 41.043094 | 41.042885 | 0.003192 | 0.024365 |
| 51 | -0.829849 | 232.223375 | 36.959734 | 36.959498 | 0.003573 | 0.027057 |
| 52 | -0.829849 | -232.223375 | 36.959734 | 36.959498 | 0.003573 | 0.027057 |
| 53 | -0.833132 | 232.986709 | 37.081224 | 37.080986 | 0.003576 | 0.026968 |
| 54 | -0.833132 | -232.986709 | 37.081224 | 37.080986 | 0.003576 | 0.026968 |
| 55 | -0.837695 | 252.830753 | 40.239485 | 40.239264 | 0.003313 | 0.024851 |
| 56 | -0.837695 | -252.830753 | 40.239485 | 40.239264 | 0.003313 | 0.024851 |
| 57 | -0.843057 | 274.636583 | 43.709976 | 43.709770 | 0.003070 | 0.022878 |
| 58 | -0.843057 | -274.636583 | 43.709976 | 43.709770 | 0.003070 | 0.022878 |
| 59 | -0.855992 | 264.468482 | 42.091687 | 42.091466 | 0.003237 | 0.023758 |
| 60 | -0.855992 | -264.468482 | 42.091687 | 42.091466 | 0.003237 | 0.023758 |
| 61 | -0.864725 | 271.184097 | 43.160509 | 43.160289 | 0.003189 | 0.023169 |
| 62 | -0.864725 | -271.184097 | 43.160509 | 43.160289 | 0.003189 | 0.023169 |
| 63 | -0.871326 | -283.421753 | 45.108186 | 45.107973 | 0.003074 | 0.022169 |
| 64 | -0.871326 | 283.421753 | 45.108186 | 45.107973 | 0.003074 | 0.022169 |
| 65 | -0.878446 | -267.336880 | 42.548216 | 42.547986 | 0.003286 | 0.023503 |
| 66 | -0.878446 | 267.336880 | 42.548216 | 42.547986 | 0.003286 | 0.023503 |
| 67 | -0.882869 | -280.833492 | 44.696259 | 44.696038 | 0.003144 | 0.022373 |
| 68 | -0.882869 | 280.833492 | 44.696259 | 44.696038 | 0.003144 | 0.022373 |
| 69 | -0.884024 | 245.027541 | 38.997598 | 38.997344 | 0.003608 | 0.025643 |
| 70 | -0.884024 | -245.027541 | 38.997598 | 38.997344 | 0.003608 | 0.025643 |
| 71 | -0.886589 | -245.661872 | 39.098556 | 39.098301 | 0.003609 | 0.025577 |
| 72 | -0.886589 | 245.661872 | 39.098556 | 39.098301 | 0.003609 | 0.025577 |
| 73 | -0.891211 | 288.915187 | 45.982499 | 45.982280 | 0.003085 | 0.021748 |
| 74 | -0.891211 | -288.915187 | 45.982499 | 45.982280 | 0.003085 | 0.021748 |
| 75 | -0.908699 | 251.206722 | 39.981053 | 39.980792 | 0.003617 | 0.025012 |
| 76 | -0.908699 | -251.206722 | 39.981053 | 39.980792 | 0.003617 | 0.025012 |
| 77 | -0.910251 | 251.606123 | 40.044620 | 40.044358 | 0.003618 | 0.024972 |
| 78 | -0.910251 | -251.606123 | 40.044620 | 40.044358 | 0.003618 | 0.024972 |
| 79 | -0.914189 | -241.156654 | 38.381549 | 38.381274 | 0.003791 | 0.026054 |
| 80 | -0.914189 | 241.156654 | 38.381549 | 38.381274 | 0.003791 | 0.026054 |
| 81 | -0.915396 | 290.517028 | 46.237451 | 46.237221 | 0.003151 | 0.021628 |
| 82 | -0.915396 | -290.517028 | 46.237451 | 46.237221 | 0.003151 | 0.021628 |
| 83 | -0.933975 | 278.955366 | 44.397374 | 44.397125 | 0.003348 | 0.022524 |
| 84 | -0.933975 | -278.955366 | 44.397374 | 44.397125 | 0.003348 | 0.022524 |
| 85 | -0.943144 | 260.320871 | 41.431625 | 41.431353 | 0.003623 | 0.024136 |
| 86 | -0.943144 | -260.320871 | 41.431625 | 41.431353 | 0.003623 | 0.024136 |
| 87 | -0.944542 | 260.700477 | 41.492042 | 41.491770 | 0.003623 | 0.024101 |
| 88 | -0.944542 | -260.700477 | 41.492042 | 41.491770 | 0.003623 | 0.024101 |
| 89 | -0.953005 | 294.814043 | 46.921357 | 46.921112 | 0.003233 | 0.021312 |
| 90 | -0.953005 | -294.814043 | 46.921357 | 46.921112 | 0.003233 | 0.021312 |
| 91 | -0.960628 | 295.652741 | 47.054843 | 47.054595 | 0.003249 | 0.021252 |
| 92 | -0.960628 | -295.652741 | 47.054843 | 47.054595 | 0.003249 | 0.021252 |
| 93 | -0.960976 | 265.315973 | 42.226626 | 42.226349 | 0.003622 | 0.023682 |
| 94 | -0.960976 | -265.315973 | 42.226626 | 42.226349 | 0.003622 | 0.023682 |
| 95 | -0.961740 | -300.017780 | 47.749558 | 47.749313 | 0.003206 | 0.020943 |
| 96 | -0.961740 | 300.017780 | 47.749558 | 47.749313 | 0.003206 | 0.020943 |
| 97 | -0.961940 | -265.596058 | 42.271203 | 42.270926 | 0.003622 | 0.023657 |
| 98 | -0.961940 | 265.596058 | 42.271203 | 42.270926 | 0.003622 | 0.023657 |
| 99 | -0.965384 | 266.582845 | 42.428256 | 42.427978 | 0.003621 | 0.023569 |
| 100 | -0.965384 | -266.582845 | 42.428256 | 42.427978 | 0.003621 | 0.023569 |
| 101 | -0.968899 | -235.916658 | 37.547619 | 37.547302 | 0.004107 | 0.026633 |
| 102 | -0.968899 | 235.916658 | 37.547619 | 37.547302 | 0.004107 | 0.026633 |
| 103 | -0.969126 | 301.069959 | 47.917020 | 47.916772 | 0.003219 | 0.020870 |
| 104 | -0.969126 | -301.069959 | 47.917020 | 47.916772 | 0.003219 | 0.020870 |
| 105 | -0.977774 | -281.665806 | 44.828775 | 44.828505 | 0.003471 | 0.022307 |
| 106 | -0.977774 | 281.665806 | 44.828775 | 44.828505 | 0.003471 | 0.022307 |
| 107 | -0.984431 | -272.268138 | 43.333103 | 43.332820 | 0.003616 | 0.023077 |
| 108 | -0.984431 | 272.268138 | 43.333103 | 43.332820 | 0.003616 | 0.023077 |
| 109 | -0.985003 | 251.399896 | 40.011843 | 40.011536 | 0.003918 | 0.024993 |
| 110 | -0.985003 | -251.399896 | 40.011843 | 40.011536 | 0.003918 | 0.024993 |
| 111 | -0.985198 | -272.494340 | 43.369105 | 43.368821 | 0.003615 | 0.023058 |
| 112 | -0.985198 | 272.494340 | 43.369105 | 43.368821 | 0.003615 | 0.023058 |
| 113 | -0.998111 | 276.558228 | 44.015896 | 44.015609 | 0.003609 | 0.022719 |
| 114 | -0.998111 | -276.558228 | 44.015896 | 44.015609 | 0.003609 | 0.022719 |
| 115 | -0.998802 | 276.771420 | 44.049826 | 44.049540 | 0.003609 | 0.022702 |
| 116 | -0.998802 | -276.771420 | 44.049826 | 44.049540 | 0.003609 | 0.022702 |
| 117 | -1.002609 | -296.732610 | 47.226731 | 47.226462 | 0.003379 | 0.021175 |
| 118 | -1.002609 | 296.732610 | 47.226731 | 47.226462 | 0.003379 | 0.021175 |
| 119 | -1.006593 | -246.344800 | 39.207320 | 39.206993 | 0.004086 | 0.025506 |
| 120 | -1.006593 | 246.344800 | 39.207320 | 39.206993 | 0.004086 | 0.025506 |
| 121 | -1.012564 | -297.793229 | 47.395538 | 47.395264 | 0.003400 | 0.021099 |
| 122 | -1.012564 | 297.793229 | 47.395538 | 47.395264 | 0.003400 | 0.021099 |
| 123 | -1.014283 | 306.354455 | 48.758093 | 48.757826 | 0.003311 | 0.020510 |
| 124 | -1.014283 | -306.354455 | 48.758093 | 48.757826 | 0.003311 | 0.020510 |
| 125 | -1.014361 | 281.941928 | 44.872742 | 44.872451 | 0.003598 | 0.022285 |
| 126 | -1.014361 | -281.941928 | 44.872742 | 44.872451 | 0.003598 | 0.022285 |
| 127 | -1.014860 | 282.102785 | 44.898343 | 44.898053 | 0.003597 | 0.022273 |
| 128 | -1.014860 | -282.102785 | 44.898343 | 44.898053 | 0.003597 | 0.022273 |
| 129 | -1.017175 | -306.227153 | 48.737834 | 48.737565 | 0.003322 | 0.020518 |
| 130 | -1.017175 | 306.227153 | 48.737834 | 48.737565 | 0.003322 | 0.020518 |
| 131 | -1.017450 | 57.533524 | 9.158176 | 9.156745 | 0.017682 | 0.109209 |
| 132 | -1.017450 | -57.533524 | 9.158176 | 9.156745 | 0.017682 | 0.109209 |
| 133 | -1.021012 | 310.599833 | 49.433766 | 49.433499 | 0.003287 | 0.020229 |
| 134 | -1.021012 | -310.599833 | 49.433766 | 49.433499 | 0.003287 | 0.020229 |
| 135 | -1.021452 | 310.492397 | 49.416667 | 49.416400 | 0.003290 | 0.020236 |
| 136 | -1.021452 | -310.492397 | 49.416667 | 49.416400 | 0.003290 | 0.020236 |
| 137 | -1.022875 | -284.908060 | 45.344818 | 45.344526 | 0.003590 | 0.022053 |
| 138 | -1.022875 | 284.908060 | 45.344818 | 45.344526 | 0.003590 | 0.022053 |
| 139 | -1.023294 | 285.048653 | 45.367195 | 45.366902 | 0.003590 | 0.022043 |
| 140 | -1.023294 | -285.048653 | 45.367195 | 45.366902 | 0.003590 | 0.022043 |
| 141 | -1.036388 | -289.874565 | 46.135265 | 46.134970 | 0.003575 | 0.021676 |
| 142 | -1.036388 | 289.874565 | 46.135265 | 46.134970 | 0.003575 | 0.021676 |
| 143 | -1.036747 | -290.000394 | 46.155291 | 46.154996 | 0.003575 | 0.021666 |
| 144 | -1.036747 | 290.000394 | 46.155291 | 46.154996 | 0.003575 | 0.021666 |
| 145 | -1.040407 | 291.418922 | 46.381058 | 46.380762 | 0.003570 | 0.021561 |
| 146 | -1.040407 | -291.418922 | 46.381058 | 46.380762 | 0.003570 | 0.021561 |
| 147 | -1.040753 | 291.545493 | 46.401202 | 46.400906 | 0.003570 | 0.021551 |
| 148 | -1.040753 | -291.545493 | 46.401202 | 46.400906 | 0.003570 | 0.021551 |
| 149 | -1.041948 | 304.815949 | 48.513248 | 48.512965 | 0.003418 | 0.020613 |
| 150 | -1.041948 | -304.815949 | 48.513248 | 48.512965 | 0.003418 | 0.020613 |
| 151 | -1.042964 | -314.673618 | 50.082137 | 50.081862 | 0.003314 | 0.019967 |
| 152 | -1.042964 | 314.673618 | 50.082137 | 50.081862 | 0.003314 | 0.019967 |
| 153 | -1.043286 | -307.905151 | 49.004908 | 49.004627 | 0.003388 | 0.020406 |
| 154 | -1.043286 | 307.905151 | 49.004908 | 49.004627 | 0.003388 | 0.020406 |
| 155 | -1.044327 | 308.342971 | 49.074590 | 49.074308 | 0.003387 | 0.020377 |
| 156 | -1.044327 | -308.342971 | 49.074590 | 49.074308 | 0.003387 | 0.020377 |
| 157 | -1.046009 | -304.875396 | 48.522712 | 48.522426 | 0.003431 | 0.020609 |
| 158 | -1.046009 | 304.875396 | 48.522712 | 48.522426 | 0.003431 | 0.020609 |
| 159 | -1.047589 | 314.548401 | 50.062210 | 50.061933 | 0.003330 | 0.019975 |
| 160 | -1.047589 | -314.548401 | 50.062210 | 50.061933 | 0.003330 | 0.019975 |
| 161 | -1.049126 | -306.882777 | 48.842196 | 48.841911 | 0.003419 | 0.020474 |
| 162 | -1.049126 | 306.882777 | 48.842196 | 48.841911 | 0.003419 | 0.020474 |
| 163 | -1.050232 | 295.362748 | 47.008739 | 47.008441 | 0.003556 | 0.021273 |
| 164 | -1.050232 | -295.362748 | 47.008739 | 47.008441 | 0.003556 | 0.021273 |
| 165 | -1.050553 | -295.497323 | 47.030157 | 47.029860 | 0.003555 | 0.021263 |
| 166 | -1.050553 | 295.497323 | 47.030157 | 47.029860 | 0.003555 | 0.021263 |
| 167 | -1.051548 | -295.907799 | 47.095486 | 47.095189 | 0.003554 | 0.021234 |
| 168 | -1.051548 | 295.907799 | 47.095486 | 47.095189 | 0.003554 | 0.021234 |
| 169 | -1.051948 | 296.064745 | 47.120465 | 47.120168 | 0.003553 | 0.021222 |
| 170 | -1.051948 | -296.064745 | 47.120465 | 47.120168 | 0.003553 | 0.021222 |
| 171 | -1.054264 | -306.923573 | 48.848692 | 48.848404 | 0.003435 | 0.020471 |
| 172 | -1.054264 | 306.923573 | 48.848692 | 48.848404 | 0.003435 | 0.020471 |
| 173 | -1.054520 | -318.465849 | 50.685692 | 50.685414 | 0.003311 | 0.019730 |
| 174 | -1.054520 | 318.465849 | 50.685692 | 50.685414 | 0.003311 | 0.019730 |
| 175 | -1.055288 | -318.515081 | 50.693528 | 50.693250 | 0.003313 | 0.019726 |
| 176 | -1.055288 | 318.515081 | 50.693528 | 50.693250 | 0.003313 | 0.019726 |
| 177 | -1.057440 | 314.897307 | 50.117746 | 50.117463 | 0.003358 | 0.019953 |
| 178 | -1.057440 | -314.897307 | 50.117746 | 50.117463 | 0.003358 | 0.019953 |
| 179 | -1.058199 | -309.920136 | 49.325609 | 49.325322 | 0.003414 | 0.020274 |
| 180 | -1.058199 | 309.920136 | 49.325609 | 49.325322 | 0.003414 | 0.020274 |
| 181 | -1.059262 | 299.214097 | 47.621701 | 47.621403 | 0.003540 | 0.020999 |
| 182 | -1.059262 | -299.214097 | 47.621701 | 47.621403 | 0.003540 | 0.020999 |
| 183 | -1.059504 | -299.314232 | 47.637638 | 47.637340 | 0.003540 | 0.020992 |
| 184 | -1.059504 | 299.314232 | 47.637638 | 47.637340 | 0.003540 | 0.020992 |
| 185 | -1.060555 | -299.787498 | 47.712961 | 47.712662 | 0.003538 | 0.020959 |
| 186 | -1.060555 | 299.787498 | 47.712961 | 47.712662 | 0.003538 | 0.020959 |
| 187 | -1.060658 | -309.965877 | 49.332890 | 49.332602 | 0.003422 | 0.020271 |
| 188 | -1.060658 | 309.965877 | 49.332890 | 49.332602 | 0.003422 | 0.020271 |
| 189 | -1.060760 | 299.897460 | 47.730462 | 47.730163 | 0.003537 | 0.020951 |
| 190 | -1.060760 | -299.897460 | 47.730462 | 47.730163 | 0.003537 | 0.020951 |
| 191 | -1.064660 | 315.237918 | 50.171959 | 50.171673 | 0.003377 | 0.019932 |
| 192 | -1.064660 | -315.237918 | 50.171959 | 50.171673 | 0.003377 | 0.019932 |
| 193 | -1.065948 | -313.067380 | 49.826510 | 49.826221 | 0.003405 | 0.020070 |
| 194 | -1.065948 | 313.067380 | 49.826510 | 49.826221 | 0.003405 | 0.020070 |
| 195 | -1.066182 | 312.991749 | 49.814473 | 49.814184 | 0.003406 | 0.020075 |
| 196 | -1.066182 | -312.991749 | 49.814473 | 49.814184 | 0.003406 | 0.020075 |
| 197 | -1.070206 | 304.267646 | 48.425999 | 48.425700 | 0.003517 | 0.020650 |
| 198 | -1.070206 | -304.267646 | 48.425999 | 48.425700 | 0.003517 | 0.020650 |
|
199 | -1.070296 | 304.308945 | 48.432572 | 48.432273 | 0.003517 | 0.020647 | | 200 | -1.070296 | -304.308945 | 48.432572 | 48.432273 | 0.003517 | 0.020647 | | 201 | -1.071416 | -304.858400 | 48.520021 | 48.519721 | 0.003514 | 0.020610 | | 202 | -1.071416 | 304.858400 | 48.520021 | 48.519721 | 0.003514 | 0.020610 | | 203 | -1.071537 | -304.918001 | 48.529507 | 48.529207 | 0.003514 | 0.020606 | | 204 | -1.071537 | 304.918001 | 48.529507 | 48.529207 | 0.003514 | 0.020606 | | 205 | -1.071722 | -321.513275 | 51.170711 | 51.170427 | 0.003333 | 0.019543 | | 206 | -1.071722 | 321.513275 | 51.170711 | 51.170427 | 0.003333 | 0.019543 | | 207 | -1.072773 | -321.531559 | 51.173622 | 51.173337 | 0.003336 | 0.019541 | | 208 | -1.072773 | 321.531559 | 51.173622 | 51.173337 | 0.003336 | 0.019541 | | 209 | -1.073011 | 316.543066 | 50.379683 | 50.379394 | 0.003390 | 0.019849 | | 210 | -1.073011 | -316.543066 | 50.379683 | 50.379394 | 0.003390 | 0.019849 | | 211 | -1.073122 | -316.594455 | 50.387862 | 50.387573 | 0.003390 | 0.019846 | | 212 | -1.073122 | 316.594455 | 50.387862 | 50.387573 | 0.003390 | 0.019846 | | 213 | -1.075633 | 324.572398 | 51.657585 | 51.657302 | 0.003314 | 0.019358 | | 214 | -1.075633 | -324.572398 | 51.657585 | 51.657302 | 0.003314 | 0.019358 | | 215 | -1.076300 | -324.607705 | 51.663205 | 51.662921 | 0.003316 | 0.019356 | | 216 | -1.076300 | 324.607705 | 51.663205 | 51.662921 | 0.003316 | 0.019356 | | 217 | -1.078724 | 320.140186 | 50.952182 | 50.951893 | 0.003370 | 0.019626 | | 218 | -1.078724 | -320.140186 | 50.952182 | 50.951893 | 0.003370 | 0.019626 | | 219 | -1.078925 | 319.933158 | 50.919233 | 50.918944 | 0.003372 | 0.019639 | | 220 | -1.078925 | -319.933158 | 50.919233 | 50.918944 | 0.003372 | 0.019639 | | 221 | -1.080029 | 309.288686 | 49.225123 | 49.224823 | 0.003492 | 0.020315 | | 222 | -1.080029 | -309.288686 | 49.225123 | 49.224823 | 0.003492 | 0.020315 | | 223 | -1.080630 | 309.618139 | 49.277557 | 49.277257 | 0.003490 | 0.020293 | | 224 | 
-1.080630 | -309.618139 | 49.277557 | 49.277257 | 0.003490 | 0.020293 | | 225 | -1.080975 | 309.794320 | 49.305598 | 49.305297 | 0.003489 | 0.020282 | | 226 | -1.080975 | -309.794320 | 49.305598 | 49.305297 | 0.003489 | 0.020282 | | 227 | -1.081038 | 309.828767 | 49.311080 | 49.310780 | 0.003489 | 0.020280 | | 228 | -1.081038 | -309.828767 | 49.311080 | 49.310780 | 0.003489 | 0.020280 | | 229 | -1.082433 | 310.602445 | 49.434215 | 49.433914 | 0.003485 | 0.020229 | | 230 | -1.082433 | -310.602445 | 49.434215 | 49.433914 | 0.003485 | 0.020229 | | 231 | -1.084088 | 322.662762 | 51.353663 | 51.353373 | 0.003360 | 0.019473 | | 232 | -1.084088 | -322.662762 | 51.353663 | 51.353373 | 0.003360 | 0.019473 | | 233 | -1.084503 | -322.669036 | 51.354662 | 51.354372 | 0.003361 | 0.019473 | | 234 | -1.084503 | 322.669036 | 51.354662 | 51.354372 | 0.003361 | 0.019473 | | 235 | -1.085005 | -319.745514 | 50.889372 | 50.889079 | 0.003393 | 0.019651 | | 236 | -1.085005 | 319.745514 | 50.889372 | 50.889079 | 0.003393 | 0.019651 | | 237 | -1.085145 | 319.750916 | 50.890232 | 50.889939 | 0.003394 | 0.019650 | | 238 | -1.085145 | -319.750916 | 50.890232 | 50.889939 | 0.003394 | 0.019650 | | 239 | -1.086265 | -329.094966 | 52.377376 | 52.377091 | 0.003301 | 0.019092 | | 240 | -1.086265 | 329.094966 | 52.377376 | 52.377091 | 0.003301 | 0.019092 | | 241 | -1.086372 | -326.932381 | 52.033192 | 52.032904 | 0.003323 | 0.019219 | | 242 | -1.086372 | 326.932381 | 52.033192 | 52.032904 | 0.003323 | 0.019219 | | 243 | -1.086417 | 329.063936 | 52.372437 | 52.372152 | 0.003302 | 0.019094 | | 244 | -1.086417 | -329.063936 | 52.372437 | 52.372152 | 0.003302 | 0.019094 | | 245 | -1.087051 | 313.245575 | 49.854882 | 49.854582 | 0.003470 | 0.020058 | | 246 | -1.087051 | -313.245575 | 49.854882 | 49.854582 | 0.003470 | 0.020058 | | 247 | -1.087414 | 313.457338 | 49.888585 | 49.888285 | 0.003469 | 0.020045 | | 248 | -1.087414 | -313.457338 | 49.888585 | 49.888285 | 0.003469 | 0.020045 | | 249 | -1.087499 | 
313.508013 | 49.896650 | 49.896350 | 0.003469 | 0.020042 | | 250 | -1.087499 | -313.508013 | 49.896650 | 49.896350 | 0.003469 | 0.020042 | | 251 | -1.087543 | 313.538425 | 49.901490 | 49.901190 | 0.003469 | 0.020040 | | 252 | -1.087543 | -313.538425 | 49.901490 | 49.901190 | 0.003469 | 0.020040 | | 253 | -1.087735 | 326.918220 | 52.030939 | 52.030651 | 0.003327 | 0.019219 | | 254 | -1.087735 | -326.918220 | 52.030939 | 52.030651 | 0.003327 | 0.019219 | | 255 | -1.089451 | 314.692046 | 50.085095 | 50.084795 | 0.003462 | 0.019966 | | 256 | -1.089451 | -314.692046 | 50.085095 | 50.084795 | 0.003462 | 0.019966 | | 257 | -1.089795 | -330.845038 | 52.655909 | 52.655623 | 0.003294 | 0.018991 | | 258 | -1.089795 | 330.845038 | 52.655909 | 52.655623 | 0.003294 | 0.018991 | | 259 | -1.091434 | -330.806905 | 52.649841 | 52.649554 | 0.003299 | 0.018994 | | 260 | -1.091434 | 330.806905 | 52.649841 | 52.649554 | 0.003299 | 0.018994 | | 261 | -1.091577 | 323.712470 | 51.520733 | 51.520440 | 0.003372 | 0.019410 | | 262 | -1.091577 | -323.712470 | 51.520733 | 51.520440 | 0.003372 | 0.019410 | | 263 | -1.091720 | -327.088590 | 52.058056 | 52.057766 | 0.003338 | 0.019209 | | 264 | -1.091720 | 327.088590 | 52.058056 | 52.057766 | 0.003338 | 0.019209 | | 265 | -1.091742 | -323.846011 | 51.541986 | 51.541693 | 0.003371 | 0.019402 | | 266 | -1.091742 | 323.846011 | 51.541986 | 51.541693 | 0.003371 | 0.019402 | | 267 | -1.092874 | 327.210801 | 52.077507 | 52.077216 | 0.003340 | 0.019202 | | 268 | -1.092874 | -327.210801 | 52.077507 | 52.077216 | 0.003340 | 0.019202 | | 269 | -1.092906 | -316.871195 | 50.431917 | 50.431617 | 0.003449 | 0.019829 | | 270 | -1.092906 | 316.871195 | 50.431917 | 50.431617 | 0.003449 | 0.019829 | | 271 | -1.092933 | 316.892539 | 50.435314 | 50.435014 | 0.003449 | 0.019827 | | 272 | -1.092933 | -316.892539 | 50.435314 | 50.435014 | 0.003449 | 0.019827 | | 273 | -1.092962 | 316.907185 | 50.437645 | 50.437345 | 0.003449 | 0.019827 | | 274 | -1.092962 | -316.907185 
| 50.437645 | 50.437345 | 0.003449 | 0.019827 | | 275 | -1.092985 | -316.926030 | 50.440644 | 50.440344 | 0.003449 | 0.019825 | | 276 | -1.092985 | 316.926030 | 50.440644 | 50.440344 | 0.003449 | 0.019825 | | 277 | -1.093283 | -332.332221 | 52.892602 | 52.892316 | 0.003290 | 0.018906 | | 278 | -1.093283 | 332.332221 | 52.892602 | 52.892316 | 0.003290 | 0.018906 | | 279 | -1.093458 | 332.343157 | 52.894343 | 52.894056 | 0.003290 | 0.018906 | | 280 | -1.093458 | -332.343157 | 52.894343 | 52.894056 | 0.003290 | 0.018906 | | 281 | -1.094656 | 325.174138 | 51.753365 | 51.753071 | 0.003366 | 0.019323 | | 282 | -1.094656 | -325.174138 | 51.753365 | 51.753071 | 0.003366 | 0.019323 | | 283 | -1.095311 | 325.124568 | 51.745476 | 51.745182 | 0.003369 | 0.019325 | | 284 | -1.095311 | -325.124568 | 51.745476 | 51.745182 | 0.003369 | 0.019325 | | 285 | -1.096829 | -332.066508 | 52.850314 | 52.850026 | 0.003303 | 0.018921 | | 286 | -1.096829 | 332.066508 | 52.850314 | 52.850026 | 0.003303 | 0.018921 | | 287 | -1.097490 | -319.989534 | 50.928216 | 50.927916 | 0.003430 | 0.019636 | | 288 | -1.097490 | 319.989534 | 50.928216 | 50.927916 | 0.003430 | 0.019636 | | 289 | -1.097493 | 319.994348 | 50.928982 | 50.928682 | 0.003430 | 0.019635 | | 290 | -1.097493 | -319.994348 | 50.928982 | 50.928682 | 0.003430 | 0.019635 | | 291 | -1.097519 | -320.011922 | 50.931779 | 50.931479 | 0.003430 | 0.019634 | | 292 | -1.097519 | 320.011922 | 50.931779 | 50.931479 | 0.003430 | 0.019634 | | 293 | -1.097543 | -320.026563 | 50.934109 | 50.933809 | 0.003430 | 0.019633 | | 294 | -1.097543 | 320.026563 | 50.934109 | 50.933809 | 0.003430 | 0.019633 | | 295 | -1.098669 | -331.385423 | 52.741918 | 52.741628 | 0.003315 | 0.018960 | | 296 | -1.098669 | 331.385423 | 52.741918 | 52.741628 | 0.003315 | 0.018960 | | 297 | -1.100859 | -326.436651 | 51.954302 | 51.954007 | 0.003372 | 0.019248 | | 298 | -1.100859 | 326.436651 | 51.954302 | 51.954007 | 0.003372 | 0.019248 | | 299 | -1.101053 | -326.436812 | 51.954328 
| 51.954032 | 0.003373 | 0.019248 | | 300 | -1.101053 | 326.436812 | 51.954328 | 51.954032 | 0.003373 | 0.019248 | | 301 | -1.101292 | 322.822286 | 51.379062 | 51.378763 | 0.003411 | 0.019463 | | 302 | -1.101292 | -322.822286 | 51.379062 | 51.378763 | 0.003411 | 0.019463 | | 303 | -1.101293 | 322.823898 | 51.379318 | 51.379019 | 0.003411 | 0.019463 | | 304 | -1.101293 | -322.823898 | 51.379318 | 51.379019 | 0.003411 | 0.019463 | | 305 | -1.101309 | 322.834412 | 51.380991 | 51.380692 | 0.003411 | 0.019463 | | 306 | -1.101309 | -322.834412 | 51.380991 | 51.380692 | 0.003411 | 0.019463 | | 307 | -1.101331 | 322.849724 | 51.383428 | 51.383129 | 0.003411 | 0.019462 | | 308 | -1.101331 | -322.849724 | 51.383428 | 51.383129 | 0.003411 | 0.019462 | | 309 | -1.102161 | -330.407444 | 52.586271 | 52.585978 | 0.003336 | 0.019016 | | 310 | -1.102161 | 330.407444 | 52.586271 | 52.585978 | 0.003336 | 0.019016 | | 311 | -1.102206 | -328.348838 | 52.258635 | 52.258341 | 0.003357 | 0.019136 | | 312 | -1.102206 | 328.348838 | 52.258635 | 52.258341 | 0.003357 | 0.019136 | | 313 | -1.102281 | -328.322944 | 52.254514 | 52.254220 | 0.003357 | 0.019137 | | 314 | -1.102281 | 328.322944 | 52.254514 | 52.254220 | 0.003357 | 0.019137 | | 315 | -1.103460 | 330.498141 | 52.600706 | 52.600413 | 0.003339 | 0.019011 | | 316 | -1.103460 | -330.498141 | 52.600706 | 52.600413 | 0.003339 | 0.019011 | | 317 | -1.104404 | -325.357719 | 51.782588 | 51.782289 | 0.003394 | 0.019312 | | 318 | -1.104404 | 325.357719 | 51.782588 | 51.782289 | 0.003394 | 0.019312 | | 319 | -1.104408 | -325.362658 | 51.783374 | 51.783075 | 0.003394 | 0.019311 | | 320 | -1.104408 | 325.362658 | 51.783374 | 51.783075 | 0.003394 | 0.019311 | | 321 | -1.104432 | 325.380976 | 51.786289 | 51.785991 | 0.003394 | 0.019310 | | 322 | -1.104432 | -325.380976 | 51.786289 | 51.785991 | 0.003394 | 0.019310 | | 323 | -1.104447 | -325.392512 | 51.788125 | 51.787827 | 0.003394 | 0.019310 | | 324 | -1.104447 | 325.392512 | 51.788125 | 51.787827 
| 0.003394 | 0.019310 | | 325 | -1.104643 | 329.605095 | 52.458575 | 52.458280 | 0.003351 | 0.019063 | | 326 | -1.104643 | -329.605095 | 52.458575 | 52.458280 | 0.003351 | 0.019063 | | 327 | -1.104721 | 329.588858 | 52.455991 | 52.455696 | 0.003352 | 0.019064 | | 328 | -1.104721 | -329.588858 | 52.455991 | 52.455696 | 0.003352 | 0.019064 | | 329 | -1.106744 | -327.432658 | 52.112824 | 52.112526 | 0.003380 | 0.019189 | | 330 | -1.106744 | 327.432658 | 52.112824 | 52.112526 | 0.003380 | 0.019189 | | 331 | -1.106932 | 327.609429 | 52.140958 | 52.140660 | 0.003379 | 0.019179 | | 332 | -1.106932 | -327.609429 | 52.140958 | 52.140660 | 0.003379 | 0.019179 | | 333 | -1.106956 | -327.628565 | 52.144003 | 52.143706 | 0.003379 | 0.019178 | | 334 | -1.106956 | 327.628565 | 52.144003 | 52.143706 | 0.003379 | 0.019178 | | 335 | -1.106973 | 327.644199 | 52.146491 | 52.146194 | 0.003379 | 0.019177 | | 336 | -1.106973 | -327.644199 | 52.146491 | 52.146194 | 0.003379 | 0.019177 | | 337 | -1.107211 | -327.869981 | 52.182426 | 52.182128 | 0.003377 | 0.019164 | | 338 | -1.107211 | 327.869981 | 52.182426 | 52.182128 | 0.003377 | 0.019164 | | 339 | -1.107565 | 331.532240 | 52.765289 | 52.764995 | 0.003341 | 0.018952 | | 340 | -1.107565 | -331.532240 | 52.765289 | 52.764995 | 0.003341 | 0.018952 | | 341 | -1.107916 | -333.461804 | 53.072387 | 53.072094 | 0.003322 | 0.018842 | | 342 | -1.107916 | 333.461804 | 53.072387 | 53.072094 | 0.003322 | 0.018842 | | 343 | -1.108211 | 331.513313 | 52.762277 | 52.761983 | 0.003343 | 0.018953 | | 344 | -1.108211 | -331.513313 | 52.762277 | 52.761983 | 0.003343 | 0.018953 | | 345 | -1.108216 | -333.455247 | 53.071344 | 53.071051 | 0.003323 | 0.018843 | | 346 | -1.108216 | 333.455247 | 53.071344 | 53.071051 | 0.003323 | 0.018843 | | 347 | -1.108440 | 329.055252 | 52.371067 | 52.370770 | 0.003369 | 0.019095 | | 348 | -1.108440 | -329.055252 | 52.371067 | 52.370770 | 0.003369 | 0.019095 | | 349 | -1.108951 | 329.564068 | 52.452047 | 52.451750 | 0.003365 | 
0.019065 | | 350 | -1.108951 | -329.564068 | 52.452047 | 52.451750 | 0.003365 | 0.019065 | | 351 | -1.108969 | 329.582680 | 52.455010 | 52.454713 | 0.003365 | 0.019064 | | 352 | -1.108969 | -329.582680 | 52.455010 | 52.454713 | 0.003365 | 0.019064 | | 353 | -1.108985 | 329.604673 | 52.458510 | 52.458213 | 0.003365 | 0.019063 | | 354 | -1.108985 | -329.604673 | 52.458510 | 52.458213 | 0.003365 | 0.019063 | | 355 | -1.109008 | 329.624636 | 52.461687 | 52.461390 | 0.003364 | 0.019062 | | 356 | -1.109008 | -329.624636 | 52.461687 | 52.461390 | 0.003364 | 0.019062 | | 357 | -1.109656 | -334.561471 | 53.247405 | 53.247112 | 0.003317 | 0.018780 | | 358 | -1.109656 | 334.561471 | 53.247405 | 53.247112 | 0.003317 | 0.018780 | | 359 | -1.109672 | -334.545407 | 53.244848 | 53.244555 | 0.003317 | 0.018781 | | 360 | -1.109672 | 334.545407 | 53.244848 | 53.244555 | 0.003317 | 0.018781 | | 361 | -1.110193 | -333.282365 | 53.043830 | 53.043536 | 0.003331 | 0.018852 | | 362 | -1.110193 | 333.282365 | 53.043830 | 53.043536 | 0.003331 | 0.018852 | | 363 | -1.110333 | -331.008434 | 52.681925 | 52.681628 | 0.003354 | 0.018982 | | 364 | -1.110333 | 331.008434 | 52.681925 | 52.681628 | 0.003354 | 0.018982 | | 365 | -1.110341 | 331.018034 | 52.683453 | 52.683156 | 0.003354 | 0.018981 | | 366 | -1.110341 | -331.018034 | 52.683453 | 52.683156 | 0.003354 | 0.018981 | | 367 | -1.110385 | -333.294673 | 53.045789 | 53.045495 | 0.003332 | 0.018852 | | 368 | -1.110385 | 333.294673 | 53.045789 | 53.045495 | 0.003332 | 0.018852 | | 369 | -1.110518 | 331.208602 | 52.713783 | 52.713486 | 0.003353 | 0.018970 | | 370 | -1.110518 | -331.208602 | 52.713783 | 52.713486 | 0.003353 | 0.018970 | | 371 | -1.110520 | 331.214085 | 52.714655 | 52.714359 | 0.003353 | 0.018970 | | 372 | -1.110520 | -331.214085 | 52.714655 | 52.714359 | 0.003353 | 0.018970 | | 373 | -1.110649 | 331.351732 | 52.736562 | 52.736266 | 0.003352 | 0.018962 | | 374 | -1.110649 | -331.351732 | 52.736562 | 52.736266 | 0.003352 | 0.018962 | 
| 375 | -1.110652 | -331.363928 | 52.738503 | 52.738207 | 0.003352 | 0.018962 | | 376 | -1.110652 | 331.363928 | 52.738503 | 52.738207 | 0.003352 | 0.018962 | | 377 | -1.111317 | -332.101213 | 52.855846 | 52.855550 | 0.003346 | 0.018919 | | 378 | -1.111317 | 332.101213 | 52.855846 | 52.855550 | 0.003346 | 0.018919 | | 379 | -1.111358 | -332.145988 | 52.862972 | 52.862676 | 0.003346 | 0.018917 | | 380 | -1.111358 | 332.145988 | 52.862972 | 52.862676 | 0.003346 | 0.018917 | | 381 | -1.111730 | -332.575059 | 52.931260 | 52.930965 | 0.003343 | 0.018893 | | 382 | -1.111730 | 332.575059 | 52.931260 | 52.930965 | 0.003343 | 0.018893 | | 383 | -1.111730 | 332.577756 | 52.931690 | 52.931394 | 0.003343 | 0.018892 | | 384 | -1.111730 | -332.577756 | 52.931690 | 52.931394 | 0.003343 | 0.018892 | | 385 | -1.111733 | -332.582902 | 52.932509 | 52.932213 | 0.003343 | 0.018892 | | 386 | -1.111733 | 332.582902 | 52.932509 | 52.932213 | 0.003343 | 0.018892 | | 387 | -1.111733 | 332.581750 | 52.932325 | 52.932029 | 0.003343 | 0.018892 | | 388 | -1.111733 | -332.581750 | 52.932325 | 52.932029 | 0.003343 | 0.018892 | | 389 | -1.111895 | -335.183640 | 53.346427 | 53.346133 | 0.003317 | 0.018746 | | 390 | -1.111895 | 335.183640 | 53.346427 | 53.346133 | 0.003317 | 0.018746 | | 391 | -1.111918 | -335.185684 | 53.346752 | 53.346459 | 0.003317 | 0.018745 | | 392 | -1.111918 | 335.185684 | 53.346752 | 53.346459 | 0.003317 | 0.018745 | | 393 | -1.111977 | -337.042524 | 53.642276 | 53.641984 | 0.003299 | 0.018642 | | 394 | -1.111977 | 337.042524 | 53.642276 | 53.641984 | 0.003299 | 0.018642 | | 395 | -1.111984 | 337.038066 | 53.641566 | 53.641274 | 0.003299 | 0.018642 | | 396 | -1.111984 | -337.038066 | 53.641566 | 53.641274 | 0.003299 | 0.018642 | | 397 | -1.113433 | 336.128317 | 53.496777 | 53.496483 | 0.003313 | 0.018693 | | 398 | -1.113433 | -336.128317 | 53.496777 | 53.496483 | 0.003313 | 0.018693 | | 399 | -1.113437 | 336.131297 | 53.497251 | 53.496957 | 0.003312 | 0.018693 | | 400 | 
-1.113437 | -336.131297 | 53.497251 | 53.496957 | 0.003312 | 0.018693 | | 401 | -1.113648 | -338.105396 | 53.811437 | 53.811145 | 0.003294 | 0.018584 | | 402 | -1.113648 | 338.105396 | 53.811437 | 53.811145 | 0.003294 | 0.018584 | | 403 | -1.113653 | -338.102774 | 53.811020 | 53.810728 | 0.003294 | 0.018584 | | 404 | -1.113653 | 338.102774 | 53.811020 | 53.810728 | 0.003294 | 0.018584 | | 405 | -1.113881 | 335.273046 | 53.360657 | 53.360363 | 0.003322 | 0.018741 | | 406 | -1.113881 | -335.273046 | 53.360657 | 53.360363 | 0.003322 | 0.018741 | | 407 | -1.114271 | 335.810669 | 53.446222 | 53.445928 | 0.003318 | 0.018710 | | 408 | -1.114271 | -335.810669 | 53.446222 | 53.445928 | 0.003318 | 0.018710 | | 409 | -1.114682 | -337.420697 | 53.702465 | 53.702172 | 0.003304 | 0.018621 | | 410 | -1.114682 | 337.420697 | 53.702465 | 53.702172 | 0.003304 | 0.018621 | | 411 | -1.114685 | -337.421129 | 53.702534 | 53.702241 | 0.003304 | 0.018621 | | 412 | -1.114685 | 337.421129 | 53.702534 | 53.702241 | 0.003304 | 0.018621 | | 413 | -1.114745 | -339.534726 | 54.038921 | 54.038630 | 0.003283 | 0.018505 | | 414 | -1.114745 | 339.534726 | 54.038921 | 54.038630 | 0.003283 | 0.018505 | | 415 | -1.114754 | 339.527234 | 54.037729 | 54.037438 | 0.003283 | 0.018506 | | 416 | -1.114754 | -339.527234 | 54.037729 | 54.037438 | 0.003283 | 0.018506 | | 417 | -1.115312 | 74.207184 | 11.811774 | 11.810440 | 0.015028 | 0.084671 | | 418 | -1.115312 | -74.207184 | 11.811774 | 11.810440 | 0.015028 | 0.084671 | | 419 | -1.115588 | 338.574851 | 53.886154 | 53.885861 | 0.003295 | 0.018558 | | 420 | -1.115588 | -338.574851 | 53.886154 | 53.885861 | 0.003295 | 0.018558 | | 421 | -1.115590 | 338.573805 | 53.885987 | 53.885695 | 0.003295 | 0.018558 | | 422 | -1.115590 | -338.573805 | 53.885987 | 53.885695 | 0.003295 | 0.018558 | | 423 | -1.115783 | -340.920059 | 54.259403 | 54.259113 | 0.003273 | 0.018430 | | 424 | -1.115783 | 340.920059 | 54.259403 | 54.259113 | 0.003273 | 0.018430 | | 425 | -1.115870 | 
341.161924 | 54.297897 | 54.297607 | 0.003271 | 0.018417 | | 426 | -1.115870 | -341.161924 | 54.297897 | 54.297607 | 0.003271 | 0.018417 | | 427 | -1.115916 | 340.879701 | 54.252980 | 54.252689 | 0.003274 | 0.018432 | | 428 | -1.115916 | -340.879701 | 54.252980 | 54.252689 | 0.003274 | 0.018432 | | 429 | -1.116022 | -341.105628 | 54.288937 | 54.288647 | 0.003272 | 0.018420 | | 430 | -1.116022 | 341.105628 | 54.288937 | 54.288647 | 0.003272 | 0.018420 | | 431 | -1.116122 | 339.695422 | 54.064497 | 54.064206 | 0.003286 | 0.018497 | | 432 | -1.116122 | -339.695422 | 54.064497 | 54.064206 | 0.003286 | 0.018497 | | 433 | -1.116123 | 339.694299 | 54.064319 | 54.064027 | 0.003286 | 0.018497 | | 434 | -1.116123 | -339.694299 | 54.064319 | 54.064027 | 0.003286 | 0.018497 | | 435 | -1.116483 | -339.281941 | 53.998690 | 53.998398 | 0.003291 | 0.018519 | | 436 | -1.116483 | 339.281941 | 53.998690 | 53.998398 | 0.003291 | 0.018519 | | 437 | -1.116592 | -340.579316 | 54.205173 | 54.204882 | 0.003278 | 0.018449 | | 438 | -1.116592 | 340.579316 | 54.205173 | 54.204882 | 0.003278 | 0.018449 | | 439 | -1.116631 | -339.550217 | 54.041388 | 54.041095 | 0.003289 | 0.018504 | | 440 | -1.116631 | 339.550217 | 54.041388 | 54.041095 | 0.003289 | 0.018504 | | 441 | -1.116657 | -340.591235 | 54.207070 | 54.206779 | 0.003279 | 0.018448 | | 442 | -1.116657 | 340.591235 | 54.207070 | 54.206779 | 0.003279 | 0.018448 | | 443 | -1.117075 | 342.131792 | 54.452256 | 54.451966 | 0.003265 | 0.018365 | | 444 | -1.117075 | -342.131792 | 54.452256 | 54.451966 | 0.003265 | 0.018365 | | 445 | -1.117094 | 342.130820 | 54.452101 | 54.451811 | 0.003265 | 0.018365 | | 446 | -1.117094 | -342.130820 | 54.452101 | 54.451811 | 0.003265 | 0.018365 | | 447 | -1.117293 | -341.489178 | 54.349982 | 54.349691 | 0.003272 | 0.018399 | | 448 | -1.117293 | 341.489178 | 54.349982 | 54.349691 | 0.003272 | 0.018399 | | 449 | -1.117322 | 341.487729 | 54.349751 | 54.349460 | 0.003272 | 0.018399 | | 450 | -1.117322 | -341.487729 
| 54.349751 | 54.349460 | 0.003272 | 0.018399 | | 451 | -1.117513 | -342.198690 | 54.462903 | 54.462613 | 0.003266 | 0.018361 | | 452 | -1.117513 | 342.198690 | 54.462903 | 54.462613 | 0.003266 | 0.018361 | | 453 | -1.117532 | -342.200560 | 54.463201 | 54.462911 | 0.003266 | 0.018361 | | 454 | -1.117532 | 342.200560 | 54.463201 | 54.462911 | 0.003266 | 0.018361 | | 455 | -1.117646 | 343.037593 | 54.596418 | 54.596129 | 0.003258 | 0.018316 | | 456 | -1.117646 | -343.037593 | 54.596418 | 54.596129 | 0.003258 | 0.018316 | | 457 | -1.117646 | -343.037592 | 54.596418 | 54.596128 | 0.003258 | 0.018316 | | 458 | -1.117646 | 343.037592 | 54.596418 | 54.596128 | 0.003258 | 0.018316 | | 459 | -1.117777 | -341.853412 | 54.407951 | 54.407660 | 0.003270 | 0.018380 | | 460 | -1.117777 | 341.853412 | 54.407951 | 54.407660 | 0.003270 | 0.018380 | | 461 | -1.117846 | -342.009266 | 54.432756 | 54.432465 | 0.003268 | 0.018371 | | 462 | -1.117846 | 342.009266 | 54.432756 | 54.432465 | 0.003268 | 0.018371 | | 463 | -1.118011 | -342.479554 | 54.507604 | 54.507314 | 0.003264 | 0.018346 | | 464 | -1.118011 | 342.479554 | 54.507604 | 54.507314 | 0.003264 | 0.018346 | | 465 | -1.118029 | -342.507545 | 54.512059 | 54.511769 | 0.003264 | 0.018345 | | 466 | -1.118029 | 342.507545 | 54.512059 | 54.511769 | 0.003264 | 0.018345 | | 467 | -1.118087 | -343.710417 | 54.703501 | 54.703212 | 0.003253 | 0.018280 | | 468 | -1.118087 | 343.710417 | 54.703501 | 54.703212 | 0.003253 | 0.018280 | | 469 | -1.118087 | 343.710417 | 54.703501 | 54.703212 | 0.003253 | 0.018280 | | 470 | -1.118087 | -343.710417 | 54.703501 | 54.703212 | 0.003253 | 0.018280 | | 471 | -1.118088 | -342.821813 | 54.562076 | 54.561786 | 0.003261 | 0.018328 | | 472 | -1.118088 | 342.821813 | 54.562076 | 54.561786 | 0.003261 | 0.018328 | | 473 | -1.118088 | 342.822024 | 54.562110 | 54.561820 | 0.003261 | 0.018328 | | 474 | -1.118088 | -342.822024 | 54.562110 | 54.561820 | 0.003261 | 0.018328 | | 475 | -1.118409 | -344.494693 | 54.828322 
| 54.828033 | 0.003247 | 0.018239 | | 476 | -1.118409 | 344.494693 | 54.828322 | 54.828033 | 0.003247 | 0.018239 | | 477 | -1.118409 | -344.494693 | 54.828322 | 54.828033 | 0.003247 | 0.018239 | | 478 | -1.118409 | 344.494693 | 54.828322 | 54.828033 | 0.003247 | 0.018239 | | 479 | -1.118504 | -343.644764 | 54.693053 | 54.692763 | 0.003255 | 0.018284 | | 480 | -1.118504 | 343.644764 | 54.693053 | 54.692763 | 0.003255 | 0.018284 | | 481 | -1.118539 | -343.740135 | 54.708231 | 54.707942 | 0.003254 | 0.018279 | | 482 | -1.118539 | 343.740135 | 54.708231 | 54.707942 | 0.003254 | 0.018279 | | 483 | -1.118797 | 345.352018 | 54.964769 | 54.964481 | 0.003240 | 0.018194 | | 484 | -1.118797 | -345.352018 | 54.964769 | 54.964481 | 0.003240 | 0.018194 | | 485 | -1.118797 | 345.352018 | 54.964769 | 54.964481 | 0.003240 | 0.018194 | | 486 | -1.118797 | -345.352018 | 54.964769 | 54.964481 | 0.003240 | 0.018194 | | 487 | -1.118942 | -344.942716 | 54.899627 | 54.899338 | 0.003244 | 0.018215 | | 488 | -1.118942 | 344.942716 | 54.899627 | 54.899338 | 0.003244 | 0.018215 | | 489 | -1.118960 | -345.001926 | 54.909051 | 54.908762 | 0.003243 | 0.018212 | | 490 | -1.118960 | 345.001926 | 54.909051 | 54.908762 | 0.003243 | 0.018212 | | 491 | -1.119142 | 346.233999 | 55.105140 | 55.104852 | 0.003232 | 0.018147 | | 492 | -1.119142 | -346.233999 | 55.105140 | 55.104852 | 0.003232 | 0.018147 | | 493 | -1.119142 | 346.233999 | 55.105140 | 55.104852 | 0.003232 | 0.018147 | | 494 | -1.119142 | -346.233999 | 55.105140 | 55.104852 | 0.003232 | 0.018147 | | 495 | -1.119223 | 345.921418 | 55.055392 | 55.055104 | 0.003235 | 0.018164 | | 496 | -1.119223 | -345.921418 | 55.055392 | 55.055104 | 0.003235 | 0.018164 | | 497 | -1.119232 | 345.957711 | 55.061168 | 55.060880 | 0.003235 | 0.018162 | | 498 | -1.119232 | -345.957711 | 55.061168 | 55.060880 | 0.003235 | 0.018162 | | 499 | -1.119414 | -346.692126 | 55.178053 | 55.177766 | 0.003229 | 0.018123 | | 500 | -1.119414 | 346.692126 | 55.178053 | 55.177766 
| 0.003229 | 0.018123 | | 501 | -1.119419 | -346.714128 | 55.181555 | 55.181267 | 0.003229 | 0.018122 | | 502 | -1.119419 | 346.714128 | 55.181555 | 55.181267 | 0.003229 | 0.018122 | | 503 | -1.119436 | -347.160911 | 55.252662 | 55.252375 | 0.003225 | 0.018099 | | 504 | -1.119436 | 347.160911 | 55.252662 | 55.252375 | 0.003225 | 0.018099 | | 505 | -1.119436 | -347.160911 | 55.252662 | 55.252375 | 0.003225 | 0.018099 | | 506 | -1.119436 | 347.160911 | 55.252662 | 55.252375 | 0.003225 | 0.018099 | | 507 | -1.119550 | -347.324058 | 55.278628 | 55.278341 | 0.003223 | 0.018090 | | 508 | -1.119550 | 347.324058 | 55.278628 | 55.278341 | 0.003223 | 0.018090 | | 509 | -1.119554 | -347.339241 | 55.281044 | 55.280757 | 0.003223 | 0.018089 | | 510 | -1.119554 | 347.339241 | 55.281044 | 55.280757 | 0.003223 | 0.018089 | | 511 | -1.119645 | 347.818682 | 55.357349 | 55.357063 | 0.003219 | 0.018065 | | 512 | -1.119645 | -347.818682 | 55.357349 | 55.357063 | 0.003219 | 0.018065 | | 513 | -1.119648 | 347.835778 | 55.360070 | 55.359783 | 0.003219 | 0.018064 | | 514 | -1.119648 | -347.835778 | 55.360070 | 55.359783 | 0.003219 | 0.018064 | | 515 | -1.119700 | 348.136761 | 55.407973 | 55.407686 | 0.003216 | 0.018048 | | 516 | -1.119700 | -348.136761 | 55.407973 | 55.407686 | 0.003216 | 0.018048 | | 517 | -1.119702 | 348.145161 | 55.409310 | 55.409023 | 0.003216 | 0.018048 | | 518 | -1.119702 | -348.145161 | 55.409310 | 55.409023 | 0.003216 | 0.018048 | | 519 | -1.119747 | 348.428886 | 55.454466 | 55.454180 | 0.003214 | 0.018033 | | 520 | -1.119747 | -348.428886 | 55.454466 | 55.454180 | 0.003214 | 0.018033 | | 521 | -1.119747 | 348.429776 | 55.454608 | 55.454321 | 0.003214 | 0.018033 | | 522 | -1.119747 | -348.429776 | 55.454608 | 55.454321 | 0.003214 | 0.018033 | | 523 | -1.119798 | -348.784816 | 55.511114 | 55.510828 | 0.003211 | 0.018015 | | 524 | -1.119798 | 348.784816 | 55.511114 | 55.510828 | 0.003211 | 0.018015 | | 525 | -1.119799 | 348.788549 | 55.511708 | 55.511422 | 0.003211 | 
0.018014 | | 526 | -1.119799 | -348.788549 | 55.511708 | 55.511422 | 0.003211 | 0.018014 | | 527 | -1.119838 | 349.087485 | 55.559285 | 55.558999 | 0.003208 | 0.017999 | | 528 | -1.119838 | -349.087485 | 55.559285 | 55.558999 | 0.003208 | 0.017999 | | 529 | -1.119838 | 349.088940 | 55.559516 | 55.559230 | 0.003208 | 0.017999 | | 530 | -1.119838 | -349.088940 | 55.559516 | 55.559230 | 0.003208 | 0.017999 | | 531 | -1.119864 | 349.305750 | 55.594023 | 55.593737 | 0.003206 | 0.017988 | | 532 | -1.119864 | -349.305750 | 55.594023 | 55.593737 | 0.003206 | 0.017988 | | 533 | -1.119864 | 349.306973 | 55.594217 | 55.593931 | 0.003206 | 0.017988 | | 534 | -1.119864 | -349.306973 | 55.594217 | 55.593931 | 0.003206 | 0.017988 | | 535 | -1.119888 | -349.526188 | 55.629106 | 55.628821 | 0.003204 | 0.017976 | | 536 | -1.119888 | 349.526188 | 55.629106 | 55.628821 | 0.003204 | 0.017976 | | 537 | -1.119888 | -349.527157 | 55.629260 | 55.628975 | 0.003204 | 0.017976 | | 538 | -1.119888 | 349.527157 | 55.629260 | 55.628975 | 0.003204 | 0.017976 | | 539 | -1.119908 | -349.726171 | 55.660934 | 55.660649 | 0.003202 | 0.017966 | | 540 | -1.119908 | 349.726171 | 55.660934 | 55.660649 | 0.003202 | 0.017966 | | 541 | -1.119908 | -349.726960 | 55.661060 | 55.660774 | 0.003202 | 0.017966 | | 542 | -1.119908 | 349.726960 | 55.661060 | 55.660774 | 0.003202 | 0.017966 | | 543 | -1.119924 | 349.905629 | 55.689496 | 55.689211 | 0.003201 | 0.017957 | | 544 | -1.119924 | -349.905629 | 55.689496 | 55.689211 | 0.003201 | 0.017957 | | 545 | -1.119924 | -349.906343 | 55.689609 | 55.689324 | 0.003201 | 0.017957 | | 546 | -1.119924 | 349.906343 | 55.689609 | 55.689324 | 0.003201 | 0.017957 | | 547 | -1.119938 | 350.066095 | 55.715034 | 55.714749 | 0.003199 | 0.017949 | | 548 | -1.119938 | -350.066095 | 55.715034 | 55.714749 | 0.003199 | 0.017949 | | 549 | -1.119938 | 350.067052 | 55.715187 | 55.714902 | 0.003199 | 0.017949 | | 550 | -1.119938 | -350.067052 | 55.715187 | 55.714902 | 0.003199 | 0.017949 | 
| 551 | -1.119946   | -350.175587 | 55.732461 | 55.732176 | 0.003198 | 0.017943 |
| 552 | -1.119946   | 350.175587  | 55.732461 | 55.732176 | 0.003198 | 0.017943 |
|     |     ...     (rows 553-997 omitted)     ...       |          |          |
| 998 | -108.324174 | 69.398551   | 20.474951 | 11.045122 | 0.842020 | 0.090538 |
| 999 | -108.324174 | -69.398551  | 20.474951 | 11.045122 | 0.842020 | 0.090538 |
Generating an instance of StabilityDerivatives
Variable print_info has no assigned value in the settings file.
will default to the value: c_bool(True) Variable folder has no assigned value in the settings file. will default to the value: ./output/ ``` ``` /home/ng213/code/sharpy/sharpy/postproc/asymptoticstability.py:171: UserWarning: Plotting modes is under development warn.warn('Plotting modes is under development') ``` ``` |===|===|===|===|===|===|===| | der | X | Y | Z | L | M | N | |===|===|===|===|===|===|===| | u | -0.000000 | -0.032063 | 0.000000 | 103.702544 | -0.000000 | 0.215455 | | v | 131.843986 | 0.000000 | -982.084813 | -0.000000 | 1279.474176 | -0.000000 | | w | 0.000000 | -187.481605 | -0.000000 |-22078.785295 | 0.000000 | -3334.924961 | | p | -187.422614 | 0.000000 | 1572.771760 | 0.000000 | -2857.778575 | 0.000000 | | q | 0.000000 | -2.610892 | 0.000000 | 1127.543337 | -0.000000 | -38.871906 | | r | 0.000000 | 0.000000 | -0.000000 | 0.000000 | 0.000000 | 0.000000 | | flap1 | -0.000000 | 1.108474 | 0.000000 | 271.384778 | -0.000000 | 23.331888 | |===|===|===|===|===|===|===| | der | C_D | C_Y | C_L | C_l | C_m | C_n | |===|===|===|===|===|===|===| | u | -0.000000 | -0.000006 | 0.000000 | 0.001009 | -0.000000 | -0.000078 | | v | 0.010573 | 0.000000 | -0.193187 | -0.000000 | 0.347457 | -0.000000 | | w | 0.000000 | -0.036606 | -0.000000 | -0.217442 | 0.000000 | -0.015495 | | p | -0.479599 | 0.000000 | 12.034020 | 0.000000 | -30.222290 | 0.000000 | | q | 0.000000 | -0.019853 | 0.000000 | 0.010944 | -0.000000 | -0.001245 | | r | 0.000000 | 0.000000 | -0.000000 | 0.000000 | 0.000000 | 0.000000 | | alpha | 0.000000 | -1.024980 | -0.000000 | -6.088362 | 0.000000 | -0.433847 | | beta | 0.296049 | 0.000000 | -5.409222 | -0.000000 | 9.728798 | -0.000000 | FINISHED - Elapsed time = 29.3233356 seconds FINISHED - CPU process time = 69.2634336 seconds ``` ##### Post-processing[¶](#Post-processing) ###### Nonlinear Equilibrium[¶](#Nonlinear-Equilibrium) The files can be opened with Paraview to see the deformation and aerodynamic loading on the flying wing in trim 
conditions.
###### Asymptotic Stability[¶](#Asymptotic-Stability)
```
[18]:
```
```
eigenvalues_trim = np.loadtxt('./output/horten_u_inf2800_M4N11Msf5/stability/eigenvalues.dat')
```
####### Flight Dynamics modes[¶](#Flight-Dynamics-modes)
The flight dynamics modes can be found close to the origin of the Argand diagram. In particular, the phugoid is the mode closest to the imaginary axis. An exercise is left to the user: compare this phugoid prediction with the nonlinear response!
```
[19]:
```
```
fig = plt.figure()
plt.scatter(eigenvalues_trim[:, 0], eigenvalues_trim[:, 1], marker='x', color='k')
plt.xlim(-0.5, 0.5)
plt.ylim(-0.5, 0.5)
plt.grid()
plt.xlabel(r'Real Part, $Re(\lambda)$ [rad/s]')
plt.ylabel(r'Imaginary Part, $Im(\lambda)$ [rad/s]');
```
####### Structural Modes[¶](#Structural-Modes)
Looking further out on the plot, the structural modes appear. There is a curve given by the Newmark-\(\beta\) integration scheme, and on top of it several modes are damped by the presence of the aerodynamics. Try changing `newmark_damp` in the `LinearAssembler` settings to see how this plot changes!
```
[20]:
```
```
fig = plt.figure()
plt.scatter(eigenvalues_trim[:, 0], eigenvalues_trim[:, 1], marker='x', color='k')
plt.xlim(-5, 0.5)
plt.ylim(-200, 200)
plt.grid()
plt.xlabel(r'Real Part, $Re(\lambda)$ [rad/s]')
plt.ylabel(r'Imaginary Part, $Im(\lambda)$ [rad/s]');
```
###### Stability Derivatives[¶](#Stability-Derivatives)
Stability derivatives are calculated using the steady-state frequency response of the UVLM system. The output is saved in `./output/<case_name>/stability/`. Note that stability derivatives are expressed in the SHARPy frame of reference, which is South-East-Up (not the convention used in the flight dynamics literature).
This body-attached frame of reference has:
* \(x\) positive downstream
* \(y\) positive towards the right wing
* \(z\) positive up
#### Simulation NREL 5MW wind turbine[¶](#Simulation-NREL-5MW-wind-turbine)
```
[1]:
```
```
%config InlineBackend.figure_format = 'svg'
from IPython.display import Image
url = 'https://raw.githubusercontent.com/ImperialCollegeLondon/sharpy/dev_doc/docs/source/content/example_notebooks/images/turbulence_no_legend.png'
Image(url=url, width=800)
```
```
[1]:
```
In this notebook, the blade loads on the NREL-5MW reference wind turbine computed with SHARPy are compared against OpenFAST results. Note that zero-drag airfoils have been used.
OpenFAST: *https://openfast.readthedocs.io*
NREL-5MW: <NAME>.; <NAME>.; <NAME>. and <NAME>.. *Definition of a 5-MW Reference Wind Turbine for Offshore System Development*, Technical Report, NREL 2009
Load the required packages:
```
[2]:
```
```
# Required packages
import numpy as np
import os
import matplotlib.pyplot as plt
# Required SHARPy modules
import sharpy.sharpy_main
import sharpy.utils.algebra as algebra
import sharpy.utils.generate_cases as gc
import cases.templates.template_wt as template_wt
```
These are the results from the OpenFAST simulation used for comparison: the out-of-plane (`of_cNdrR`) and in-plane (`of_cTdrR`) coefficients along the blade, and the thrust (`of_ct`) and power (`of_cp`) rotor coefficients.
```
[3]:
```
```
of_rR = np.array([0.20158356, 0.3127131, 0.40794048, 0.5984148, 0.6936519,
                  0.85238045, 0.899999, 0.95555407, 0.98729974, 1.0])
of_cNdrR = np.array([0.08621394, 0.14687876, 0.19345148, 0.2942731, 0.36003628,
                     0.43748564, 0.44762507, 0.38839236, 0.29782477, 0.0])
of_cTdrR = np.array([0.048268348, 0.051957503, 0.05304592, 0.052862607, 0.056001827,
                     0.0536646, 0.050112925, 0.038993906, 0.023664437, 0.0])
of_ct = 0.69787693
of_cp = 0.48813498
```
##### Create SHARPy case[¶](#Create-SHARPy-case)
Definition of parameters
```
[4]:
```
```
# Mathematical constants
deg2rad = np.pi/180.
# Case
case = 'rotor'
# route = os.path.dirname(os.path.realpath(__file__)) + '/'
route = './'

# Geometry discretization
chord_panels = np.array([8], dtype=int)
revs_in_wake = 5

# Operation
rotation_velocity = 12.1*2*np.pi/60
pitch_deg = 0.  # degrees

# Wind
WSP = 12.
air_density = 1.225

# Simulation
dphi = 4.*deg2rad
revs_to_simulate = 5
```
Computation of associated parameters
```
[5]:
```
```
dt = dphi/rotation_velocity
time_steps = int(revs_to_simulate*2.*np.pi/dphi)
mstar = int(revs_in_wake*2.*np.pi/dphi)
```
Generation of the rotor geometry based on the information in the excel file
```
[6]:
```
```
rotor = template_wt.rotor_from_excel_type02(chord_panels,
                                            rotation_velocity,
                                            pitch_deg,
                                            excel_file_name='source/type02_db_NREL5MW_v01.xlsx',
                                            excel_sheet_parameters='parameters',
                                            excel_sheet_structural_blade='structural_blade',
                                            excel_sheet_discretization_blade='discretization_blade',
                                            excel_sheet_aero_blade='aero_blade',
                                            excel_sheet_airfoil_info='airfoil_info',
                                            excel_sheet_airfoil_coord='airfoil_coord',
                                            m_distribution='uniform',
                                            n_points_camber=100,
                                            tol_remove_points=1e-8,
                                            wsp=WSP,
                                            dt=dt)
```
```
WARNING: The poisson cofficient is assumed equal to 0.3
WARNING: Cross-section area is used as shear area
WARNING: Using perpendicular axis theorem to compute the inertia around xB
WARNING: Replacing node 49 by node 0
WARNING: Replacing node 98 by node 0
```
Define simulation details. The steady simulation is faster than the dynamic simulation. However, the dynamic simulation includes wake self-induction and provides more accurate results.
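The temporal discretisation defined above can be cross-checked against the solver output further down. The sketch below (plain NumPy, independent of SHARPy) reproduces the computations of cell [5]: a 4-degree azimuthal step at 12.1 rpm gives the time step seen in the `t` column of the `DynamicCoupled` log, and five revolutions of wake give the `M=450` wake panels reported by `AerogridLoader`:

```python
import numpy as np

# Same definitions as in the parameter cells above
deg2rad = np.pi/180.
rotation_velocity = 12.1*2*np.pi/60   # 12.1 rpm in rad/s
dphi = 4.*deg2rad                     # rotor advance per time step
revs_to_simulate = 5
revs_in_wake = 5

dt = dphi/rotation_velocity                       # time step matching the azimuthal step
time_steps = int(revs_to_simulate*2.*np.pi/dphi)  # 5 revolutions at 4 deg/step
mstar = int(revs_in_wake*2.*np.pi/dphi)           # streamwise wake panels

print(round(dt, 4))   # 0.0551 s: the 't' increment in the DynamicCoupled log
print(time_steps)     # 450
print(mstar)          # 450: the "Wake 0, M=450" reported by AerogridLoader
```

Note that `int()` truncates, so if `dphi` is changed to a value that does not divide a full revolution exactly, the simulated time will fall slightly short of the requested number of revolutions.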
```
[7]:
```
```
steady_simulation = False
```
```
[8]:
```
```
SimInfo = gc.SimulationInformation()
SimInfo.set_default_values()

if steady_simulation:
    SimInfo.solvers['SHARPy']['flow'] = ['BeamLoader',
                                         'AerogridLoader',
                                         'StaticCoupledRBM',
                                         'BeamPlot',
                                         'AerogridPlot',
                                         'SaveData']
else:
    SimInfo.solvers['SHARPy']['flow'] = ['BeamLoader',
                                         'AerogridLoader',
                                         'StaticCoupledRBM',
                                         'DynamicCoupled']

SimInfo.solvers['SHARPy']['case'] = case
SimInfo.solvers['SHARPy']['route'] = route
SimInfo.solvers['SHARPy']['write_log'] = True
SimInfo.set_variable_all_dicts('dt', dt)
SimInfo.set_variable_all_dicts('rho', air_density)

SimInfo.solvers['SteadyVelocityField']['u_inf'] = WSP
SimInfo.solvers['SteadyVelocityField']['u_inf_direction'] = np.array([0., 0., 1.])

SimInfo.solvers['BeamLoader']['unsteady'] = 'on'

SimInfo.solvers['AerogridLoader']['unsteady'] = 'on'
SimInfo.solvers['AerogridLoader']['mstar'] = mstar
SimInfo.solvers['AerogridLoader']['freestream_dir'] = np.array([0.,0.,0.])

SimInfo.solvers['StaticCoupledRBM']['structural_solver'] = 'RigidDynamicPrescribedStep'
SimInfo.solvers['StaticCoupledRBM']['structural_solver_settings'] = SimInfo.solvers['RigidDynamicPrescribedStep']
SimInfo.solvers['StaticCoupledRBM']['aero_solver'] = 'SHWUvlm'
SimInfo.solvers['StaticCoupledRBM']['aero_solver_settings'] = SimInfo.solvers['SHWUvlm']
SimInfo.solvers['StaticCoupledRBM']['tolerance'] = 1e-8
SimInfo.solvers['StaticCoupledRBM']['n_load_steps'] = 0
SimInfo.solvers['StaticCoupledRBM']['relaxation_factor'] = 0.
SimInfo.solvers['SHWUvlm']['convection_scheme'] = 2
SimInfo.solvers['SHWUvlm']['num_cores'] = 8
SimInfo.solvers['SHWUvlm']['rot_vel'] = rotation_velocity
SimInfo.solvers['SHWUvlm']['rot_axis'] = np.array([0.,0.,1.])
SimInfo.solvers['SHWUvlm']['rot_center'] = np.zeros((3),)
SimInfo.solvers['SHWUvlm']['velocity_field_generator'] = 'SteadyVelocityField'
SimInfo.solvers['SHWUvlm']['velocity_field_input'] = SimInfo.solvers['SteadyVelocityField']

SimInfo.solvers['SaveData']['compress_float'] = True

# Only used for steady_simulation = False
SimInfo.solvers['StepUvlm']['convection_scheme'] = 3
SimInfo.solvers['StepUvlm']['num_cores'] = 8
SimInfo.solvers['StepUvlm']['velocity_field_generator'] = 'SteadyVelocityField'
SimInfo.solvers['StepUvlm']['velocity_field_input'] = SimInfo.solvers['SteadyVelocityField']

SimInfo.solvers['DynamicCoupled']['structural_solver'] = 'RigidDynamicPrescribedStep'
SimInfo.solvers['DynamicCoupled']['structural_solver_settings'] = SimInfo.solvers['RigidDynamicPrescribedStep']
SimInfo.solvers['DynamicCoupled']['aero_solver'] = 'StepUvlm'
SimInfo.solvers['DynamicCoupled']['aero_solver_settings'] = SimInfo.solvers['StepUvlm']
SimInfo.solvers['DynamicCoupled']['postprocessors'] = ['BeamPlot', 'AerogridPlot', 'Cleanup', 'SaveData']
SimInfo.solvers['DynamicCoupled']['postprocessors_settings'] = {'BeamPlot': SimInfo.solvers['BeamPlot'],
                                                               'AerogridPlot': SimInfo.solvers['AerogridPlot'],
                                                               'Cleanup': SimInfo.solvers['Cleanup'],
                                                               'SaveData': SimInfo.solvers['SaveData']}
SimInfo.solvers['DynamicCoupled']['minimum_steps'] = 0
SimInfo.solvers['DynamicCoupled']['include_unsteady_force_contribution'] = True
SimInfo.solvers['DynamicCoupled']['relaxation_factor'] = 0.
SimInfo.solvers['DynamicCoupled']['final_relaxation_factor'] = 0.
SimInfo.solvers['DynamicCoupled']['dynamic_relaxation'] = False
SimInfo.solvers['DynamicCoupled']['relaxation_steps'] = 0

# Define dynamic simulation (used regardless of the value of the "steady_simulation" variable)
SimInfo.define_num_steps(time_steps)
SimInfo.with_forced_vel = True
SimInfo.for_vel = np.zeros((time_steps,6), dtype=float)
SimInfo.for_vel[:,5] = rotation_velocity
SimInfo.for_acc = np.zeros((time_steps,6), dtype=float)
SimInfo.with_dynamic_forces = True
SimInfo.dynamic_forces = np.zeros((time_steps,rotor.StructuralInformation.num_node,6), dtype=float)
```
Generate simulation files
```
[9]:
```
```
gc.clean_test_files(SimInfo.solvers['SHARPy']['route'], SimInfo.solvers['SHARPy']['case'])
rotor.generate_h5_files(SimInfo.solvers['SHARPy']['route'], SimInfo.solvers['SHARPy']['case'])
SimInfo.generate_solver_file()
SimInfo.generate_dyn_file(time_steps)
```
##### Run SHARPy case[¶](#Run-SHARPy-case)
```
[10]:
```
```
sharpy_output = sharpy.sharpy_main.main(['', SimInfo.solvers['SHARPy']['route'] + SimInfo.solvers['SHARPy']['case'] + '.sharpy'])
```
```
---
(SHARPy ASCII-art banner)
---
Aeroelastics Lab, Aeronautics Department.
Copyright (c), Imperial College London. All rights reserved.
License available at https://github.com/imperialcollegelondon/sharpy
Running SHARPy from /home/arturo/code/sharpy/docs/source/content/example_notebooks
SHARPy being run is in /home/arturo/code/sharpy
The branch being run is dev_doc
The version and commit hash are: v0.1-1816-g6185d82-6185d82
The available solvers on this session are:
_BaseStructural
AerogridLoader
BeamLoader
DynamicCoupled
DynamicUVLM
LinDynamicSim
LinearAssembler
Modal
NoAero
NonLinearDynamic
NonLinearDynamicCoupledStep
NonLinearDynamicMultibody
NonLinearDynamicPrescribedStep
NonLinearStatic
NonLinearStaticMultibody
PrescribedUvlm
RigidDynamicPrescribedStep
SHWUvlm
StaticCoupled
StaticCoupledRBM
StaticTrim
StaticUvlm
StepLinearUVLM
StepUvlm
Trim
FrequencyResponse
WriteVariablesTime
BeamPlot
LiftDistribution
AeroForcesCalculator
CreateSnapshot
StabilityDerivatives
Cleanup
AsymptoticStability
PickleData
SaveData
StallCheck
BeamLoads
PlotFlowField
AerogridPlot
PreSharpy
Generating an instance of BeamLoader
Generating an instance of AerogridLoader
The aerodynamic grid contains 3 surfaces
Surface 0, M=8, N=48
Wake 0, M=450, N=48
Surface 1, M=8, N=48
Wake 1, M=450, N=48
Surface 2, M=8, N=48
Wake 2, M=450, N=48
In total: 1152 bound panels
In total: 64800 wake panels
Total number of panels = 65952
Generating an instance of StaticCoupledRBM
Generating an instance of RigidDynamicPrescribedStep
Generating an instance of SHWUvlm
i_step: 0, i_iter: 0
i_step: 0, i_iter: 1
Pos res = 0.000000e+00. Psi res = 0.000000e+00.
Pos_dot res = 0.000000e+00. Psi_dot res = 0.000000e+00.
Resultant forces and moments: (array([1.19344329e-02, 4.42818203e-03, 8.51720587e+05]), array([-2.23034084e-02, 8.00768152e-03, 6.02114389e+06]))
Generating an instance of DynamicCoupled
Generating an instance of RigidDynamicPrescribedStep
Generating an instance of StepUvlm
Generating an instance of BeamPlot
Generating an instance of AerogridPlot
Generating an instance of Cleanup
Generating an instance of SaveData
Variable save_linear has no assigned value in the settings file.
will default to the value: c_bool(False)
Variable save_linear_uvlm has no assigned value in the settings file.
will default to the value: c_bool(False)
Variable skip_attr has no assigned value in the settings file.
will default to the value: ['fortran', 'airfoils', 'airfoil_db', 'settings_types', 'ct_dynamic_forces_list', 'ct_gamma_dot_list', 'ct_gamma_list', 'ct_gamma_star_list', 'ct_normals_list', 'ct_u_ext_list', 'ct_u_ext_star_list', 'ct_zeta_dot_list', 'ct_zeta_list', 'ct_zeta_star_list', 'dynamic_input', ['fortran', 'airfoils', 'airfoil_db', 'settings_types', 'ct_dynamic_forces_list', 'ct_forces_list', 'ct_gamma_dot_list', 'ct_gamma_list', 'ct_gamma_star_list', 'ct_normals_list', 'ct_u_ext_list', 'ct_u_ext_star_list', 'ct_zeta_dot_list', 'ct_zeta_list', 'ct_zeta_star_list', 'dynamic_input']]
Variable format has no assigned value in the settings file.
will default to the value: h5 |===|===|===|===|===|===|===|===| | ts | t | iter | struc ratio | iter time | residual vel | FoR_vel(x) | FoR_vel(z) | |===|===|===|===|===|===|===|===| ``` ``` /home/arturo/code/sharpy/sharpy/solvers/dynamiccoupled.py:449: RuntimeWarning: divide by zero encountered in log10 np.log10(self.res_dqdt), ``` ``` | 1 | 0.0551 | 1 | 0.000689 | 46.721959 | -inf | 0.000000e+00 | 0.000000e+00 | | 2 | 0.1102 | 1 | 0.000686 | 46.839166 | -inf | 0.000000e+00 | 0.000000e+00 | | 3 | 0.1653 | 1 | 0.000687 | 46.816562 | -inf | 0.000000e+00 | 0.000000e+00 | | 4 | 0.2204 | 1 | 0.000666 | 48.386563 | -inf | 0.000000e+00 | 0.000000e+00 | | 5 | 0.2755 | 1 | 0.000683 | 47.135425 | -inf | 0.000000e+00 | 0.000000e+00 | | 6 | 0.3306 | 1 | 0.000707 | 46.560263 | -inf | 0.000000e+00 | 0.000000e+00 | | 7 | 0.3857 | 1 | 0.000697 | 46.562699 | -inf | 0.000000e+00 | 0.000000e+00 | | 8 | 0.4408 | 1 | 0.000692 | 46.757933 | -inf | 0.000000e+00 | 0.000000e+00 | | 9 | 0.4959 | 1 | 0.000691 | 46.622405 | -inf | 0.000000e+00 | 0.000000e+00 | | 10 | 0.5510 | 1 | 0.000700 | 46.480478 | -inf | 0.000000e+00 | 0.000000e+00 | | 11 | 0.6061 | 1 | 0.000703 | 46.450803 | -inf | 0.000000e+00 | 0.000000e+00 | | 12 | 0.6612 | 1 | 0.000708 | 46.466538 | -inf | 0.000000e+00 | 0.000000e+00 | | 13 | 0.7163 | 1 | 0.000576 | 55.899692 | -inf | 0.000000e+00 | 0.000000e+00 | | 14 | 0.7713 | 1 | 0.000642 | 50.701285 | -inf | 0.000000e+00 | 0.000000e+00 | | 15 | 0.8264 | 1 | 0.000671 | 47.593583 | -inf | 0.000000e+00 | 0.000000e+00 | | 16 | 0.8815 | 1 | 0.000699 | 47.287386 | -inf | 0.000000e+00 | 0.000000e+00 | | 17 | 0.9366 | 1 | 0.000691 | 46.559619 | -inf | 0.000000e+00 | 0.000000e+00 | | 18 | 0.9917 | 1 | 0.000681 | 46.953868 | -inf | 0.000000e+00 | 0.000000e+00 | | 19 | 1.0468 | 1 | 0.000693 | 46.526422 | -inf | 0.000000e+00 | 0.000000e+00 | | 20 | 1.1019 | 1 | 0.000692 | 46.634115 | -inf | 0.000000e+00 | 0.000000e+00 | | 21 | 1.1570 | 1 | 0.000691 | 46.424641 | -inf | 0.000000e+00 | 
0.000000e+00 | | 22 | 1.2121 | 1 | 0.000684 | 46.882152 | -inf | 0.000000e+00 | 0.000000e+00 | | 23 | 1.2672 | 1 | 0.000694 | 46.586424 | -inf | 0.000000e+00 | 0.000000e+00 | | 24 | 1.3223 | 1 | 0.000687 | 46.776280 | -inf | 0.000000e+00 | 0.000000e+00 | | 25 | 1.3774 | 1 | 0.000692 | 46.704067 | -inf | 0.000000e+00 | 0.000000e+00 | | 26 | 1.4325 | 1 | 0.000679 | 46.886657 | -inf | 0.000000e+00 | 0.000000e+00 | | 27 | 1.4876 | 1 | 0.000684 | 46.516365 | -inf | 0.000000e+00 | 0.000000e+00 | | 28 | 1.5427 | 1 | 0.000682 | 46.684166 | -inf | 0.000000e+00 | 0.000000e+00 | | 29 | 1.5978 | 1 | 0.000683 | 47.033028 | -inf | 0.000000e+00 | 0.000000e+00 | | 30 | 1.6529 | 1 | 0.000694 | 46.291923 | -inf | 0.000000e+00 | 0.000000e+00 | | 31 | 1.7080 | 1 | 0.000687 | 46.680159 | -inf | 0.000000e+00 | 0.000000e+00 | | 32 | 1.7631 | 1 | 0.000683 | 46.629501 | -inf | 0.000000e+00 | 0.000000e+00 | | 33 | 1.8182 | 1 | 0.000682 | 46.806173 | -inf | 0.000000e+00 | 0.000000e+00 | | 34 | 1.8733 | 1 | 0.000687 | 46.757005 | -inf | 0.000000e+00 | 0.000000e+00 | | 35 | 1.9284 | 1 | 0.000683 | 47.033428 | -inf | 0.000000e+00 | 0.000000e+00 | | 36 | 1.9835 | 1 | 0.000659 | 48.662729 | -inf | 0.000000e+00 | 0.000000e+00 | | 37 | 2.0386 | 1 | 0.000659 | 48.571643 | -inf | 0.000000e+00 | 0.000000e+00 | | 38 | 2.0937 | 1 | 0.000673 | 47.434747 | -inf | 0.000000e+00 | 0.000000e+00 | | 39 | 2.1488 | 1 | 0.000661 | 48.286789 | -inf | 0.000000e+00 | 0.000000e+00 | | 40 | 2.2039 | 1 | 0.000679 | 47.779096 | -inf | 0.000000e+00 | 0.000000e+00 | | 41 | 2.2590 | 1 | 0.000677 | 48.413777 | -inf | 0.000000e+00 | 0.000000e+00 | | 42 | 2.3140 | 1 | 0.000671 | 47.798777 | -inf | 0.000000e+00 | 0.000000e+00 | | 43 | 2.3691 | 1 | 0.000710 | 46.501405 | -inf | 0.000000e+00 | 0.000000e+00 | | 44 | 2.4242 | 1 | 0.000690 | 46.611010 | -inf | 0.000000e+00 | 0.000000e+00 | | 45 | 2.4793 | 1 | 0.000688 | 46.704990 | -inf | 0.000000e+00 | 0.000000e+00 | | 46 | 2.5344 | 1 | 0.000728 | 46.796206 | -inf | 0.000000e+00 | 
0.000000e+00 | | 47 | 2.5895 | 1 | 0.000693 | 46.510013 | -inf | 0.000000e+00 | 0.000000e+00 | | 48 | 2.6446 | 1 | 0.000692 | 46.604489 | -inf | 0.000000e+00 | 0.000000e+00 | | 49 | 2.6997 | 1 | 0.000693 | 46.648462 | -inf | 0.000000e+00 | 0.000000e+00 | | 50 | 2.7548 | 1 | 0.000703 | 46.233052 | -inf | 0.000000e+00 | 0.000000e+00 | | 51 | 2.8099 | 1 | 0.000696 | 46.494563 | -inf | 0.000000e+00 | 0.000000e+00 | | 52 | 2.8650 | 1 | 0.000689 | 46.427002 | -inf | 0.000000e+00 | 0.000000e+00 | | 53 | 2.9201 | 1 | 0.000685 | 46.556616 | -inf | 0.000000e+00 | 0.000000e+00 | | 54 | 2.9752 | 1 | 0.000688 | 46.986618 | -inf | 0.000000e+00 | 0.000000e+00 | | 55 | 3.0303 | 1 | 0.000685 | 47.021588 | -inf | 0.000000e+00 | 0.000000e+00 | | 56 | 3.0854 | 1 | 0.000695 | 46.501523 | -inf | 0.000000e+00 | 0.000000e+00 | | 57 | 3.1405 | 1 | 0.000690 | 46.545832 | -inf | 0.000000e+00 | 0.000000e+00 | | 58 | 3.1956 | 1 | 0.000688 | 46.394942 | -inf | 0.000000e+00 | 0.000000e+00 | | 59 | 3.2507 | 1 | 0.000690 | 46.572368 | -inf | 0.000000e+00 | 0.000000e+00 | | 60 | 3.3058 | 1 | 0.000689 | 46.388416 | -inf | 0.000000e+00 | 0.000000e+00 | | 61 | 3.3609 | 1 | 0.000674 | 47.326130 | -inf | 0.000000e+00 | 0.000000e+00 | | 62 | 3.4160 | 1 | 0.000691 | 46.475252 | -inf | 0.000000e+00 | 0.000000e+00 | | 63 | 3.4711 | 1 | 0.000692 | 46.700463 | -inf | 0.000000e+00 | 0.000000e+00 | | 64 | 3.5262 | 1 | 0.000684 | 46.761324 | -inf | 0.000000e+00 | 0.000000e+00 | | 65 | 3.5813 | 1 | 0.000690 | 46.305145 | -inf | 0.000000e+00 | 0.000000e+00 | | 66 | 3.6364 | 1 | 0.000681 | 47.033500 | -inf | 0.000000e+00 | 0.000000e+00 | | 67 | 3.6915 | 1 | 0.000690 | 46.707418 | -inf | 0.000000e+00 | 0.000000e+00 | | 68 | 3.7466 | 1 | 0.000687 | 46.829002 | -inf | 0.000000e+00 | 0.000000e+00 | | 69 | 3.8017 | 1 | 0.000686 | 46.771566 | -inf | 0.000000e+00 | 0.000000e+00 | | 70 | 3.8567 | 1 | 0.000687 | 46.798868 | -inf | 0.000000e+00 | 0.000000e+00 | | 71 | 3.9118 | 1 | 0.000687 | 46.656061 | -inf | 0.000000e+00 | 
0.000000e+00 | | 72 | 3.9669 | 1 | 0.000689 | 46.650955 | -inf | 0.000000e+00 | 0.000000e+00 | | 73 | 4.0220 | 1 | 0.000689 | 46.676483 | -inf | 0.000000e+00 | 0.000000e+00 | | 74 | 4.0771 | 1 | 0.000701 | 46.733123 | -inf | 0.000000e+00 | 0.000000e+00 | | 75 | 4.1322 | 1 | 0.000693 | 46.613006 | -inf | 0.000000e+00 | 0.000000e+00 | | 76 | 4.1873 | 1 | 0.000691 | 46.569005 | -inf | 0.000000e+00 | 0.000000e+00 | | 77 | 4.2424 | 1 | 0.000691 | 46.714434 | -inf | 0.000000e+00 | 0.000000e+00 | | 78 | 4.2975 | 1 | 0.000691 | 46.564491 | -inf | 0.000000e+00 | 0.000000e+00 | | 79 | 4.3526 | 1 | 0.000691 | 46.293432 | -inf | 0.000000e+00 | 0.000000e+00 | | 80 | 4.4077 | 1 | 0.000691 | 46.444557 | -inf | 0.000000e+00 | 0.000000e+00 | | 81 | 4.4628 | 1 | 0.000685 | 46.583821 | -inf | 0.000000e+00 | 0.000000e+00 | | 82 | 4.5179 | 1 | 0.000692 | 46.464751 | -inf | 0.000000e+00 | 0.000000e+00 | | 83 | 4.5730 | 1 | 0.000691 | 46.614111 | -inf | 0.000000e+00 | 0.000000e+00 | | 84 | 4.6281 | 1 | 0.000682 | 47.093682 | -inf | 0.000000e+00 | 0.000000e+00 | | 85 | 4.6832 | 1 | 0.000668 | 47.848219 | -inf | 0.000000e+00 | 0.000000e+00 | | 86 | 4.7383 | 1 | 0.000690 | 46.453935 | -inf | 0.000000e+00 | 0.000000e+00 | | 87 | 4.7934 | 1 | 0.000671 | 47.508853 | -inf | 0.000000e+00 | 0.000000e+00 | | 88 | 4.8485 | 1 | 0.000704 | 46.453028 | -inf | 0.000000e+00 | 0.000000e+00 | | 89 | 4.9036 | 1 | 0.000686 | 46.783497 | -inf | 0.000000e+00 | 0.000000e+00 | | 90 | 4.9587 | 1 | 0.000695 | 46.473095 | -inf | 0.000000e+00 | 0.000000e+00 | | 91 | 5.0138 | 1 | 0.000699 | 46.384601 | -inf | 0.000000e+00 | 0.000000e+00 | | 92 | 5.0689 | 1 | 0.000671 | 48.773508 | -inf | 0.000000e+00 | 0.000000e+00 | | 93 | 5.1240 | 1 | 0.000682 | 46.843241 | -inf | 0.000000e+00 | 0.000000e+00 | | 94 | 5.1791 | 1 | 0.000691 | 46.690389 | -inf | 0.000000e+00 | 0.000000e+00 | | 95 | 5.2342 | 1 | 0.000685 | 46.811431 | -inf | 0.000000e+00 | 0.000000e+00 | | 96 | 5.2893 | 1 | 0.000683 | 46.832472 | -inf | 0.000000e+00 | 
0.000000e+00 | | 97 | 5.3444 | 1 | 0.000690 | 46.410221 | -inf | 0.000000e+00 | 0.000000e+00 | | 98 | 5.3994 | 1 | 0.000693 | 46.644631 | -inf | 0.000000e+00 | 0.000000e+00 | | 99 | 5.4545 | 1 | 0.000695 | 46.328560 | -inf | 0.000000e+00 | 0.000000e+00 | | 100 | 5.5096 | 1 | 0.000693 | 46.709863 | -inf | 0.000000e+00 | 0.000000e+00 | | 101 | 5.5647 | 1 | 0.000696 | 46.303366 | -inf | 0.000000e+00 | 0.000000e+00 | | 102 | 5.6198 | 1 | 0.000688 | 46.450453 | -inf | 0.000000e+00 | 0.000000e+00 | | 103 | 5.6749 | 1 | 0.000704 | 46.611641 | -inf | 0.000000e+00 | 0.000000e+00 | | 104 | 5.7300 | 1 | 0.000686 | 46.622834 | -inf | 0.000000e+00 | 0.000000e+00 | | 105 | 5.7851 | 1 | 0.000693 | 46.442312 | -inf | 0.000000e+00 | 0.000000e+00 | | 106 | 5.8402 | 1 | 0.000697 | 46.894939 | -inf | 0.000000e+00 | 0.000000e+00 | | 107 | 5.8953 | 1 | 0.000695 | 46.583571 | -inf | 0.000000e+00 | 0.000000e+00 | | 108 | 5.9504 | 1 | 0.000692 | 46.596199 | -inf | 0.000000e+00 | 0.000000e+00 | | 109 | 6.0055 | 1 | 0.000689 | 46.675746 | -inf | 0.000000e+00 | 0.000000e+00 | | 110 | 6.0606 | 1 | 0.000688 | 46.948383 | -inf | 0.000000e+00 | 0.000000e+00 | | 111 | 6.1157 | 1 | 0.000708 | 46.583486 | -inf | 0.000000e+00 | 0.000000e+00 | | 112 | 6.1708 | 1 | 0.000687 | 46.777358 | -inf | 0.000000e+00 | 0.000000e+00 | | 113 | 6.2259 | 1 | 0.000696 | 46.655726 | -inf | 0.000000e+00 | 0.000000e+00 | | 114 | 6.2810 | 1 | 0.000692 | 46.488556 | -inf | 0.000000e+00 | 0.000000e+00 | | 115 | 6.3361 | 1 | 0.000687 | 46.641207 | -inf | 0.000000e+00 | 0.000000e+00 | | 116 | 6.3912 | 1 | 0.000699 | 46.693835 | -inf | 0.000000e+00 | 0.000000e+00 | | 117 | 6.4463 | 1 | 0.000717 | 47.359647 | -inf | 0.000000e+00 | 0.000000e+00 | | 118 | 6.5014 | 1 | 0.000689 | 46.684606 | -inf | 0.000000e+00 | 0.000000e+00 | | 119 | 6.5565 | 1 | 0.000686 | 46.892280 | -inf | 0.000000e+00 | 0.000000e+00 | | 120 | 6.6116 | 1 | 0.000690 | 46.391236 | -inf | 0.000000e+00 | 0.000000e+00 | | 121 | 6.6667 | 1 | 0.000687 | 46.619935 | 
-inf | 0.000000e+00 | 0.000000e+00 | | 122 | 6.7218 | 1 | 0.000685 | 46.847298 | -inf | 0.000000e+00 | 0.000000e+00 | | 123 | 6.7769 | 1 | 0.000698 | 46.383369 | -inf | 0.000000e+00 | 0.000000e+00 | | 124 | 6.8320 | 1 | 0.000690 | 46.623488 | -inf | 0.000000e+00 | 0.000000e+00 | | 125 | 6.8871 | 1 | 0.000689 | 46.456695 | -inf | 0.000000e+00 | 0.000000e+00 | | 126 | 6.9421 | 1 | 0.000748 | 46.666541 | -inf | 0.000000e+00 | 0.000000e+00 | | 127 | 6.9972 | 1 | 0.000695 | 46.393158 | -inf | 0.000000e+00 | 0.000000e+00 | | 128 | 7.0523 | 1 | 0.000632 | 50.929908 | -inf | 0.000000e+00 | 0.000000e+00 | | 129 | 7.1074 | 1 | 0.000685 | 47.093784 | -inf | 0.000000e+00 | 0.000000e+00 | | 130 | 7.1625 | 1 | 0.000688 | 46.682273 | -inf | 0.000000e+00 | 0.000000e+00 | | 131 | 7.2176 | 1 | 0.000690 | 46.667180 | -inf | 0.000000e+00 | 0.000000e+00 | | 132 | 7.2727 | 1 | 0.000689 | 46.673738 | -inf | 0.000000e+00 | 0.000000e+00 | | 133 | 7.3278 | 1 | 0.000690 | 46.813912 | -inf | 0.000000e+00 | 0.000000e+00 | | 134 | 7.3829 | 1 | 0.000698 | 46.399704 | -inf | 0.000000e+00 | 0.000000e+00 | | 135 | 7.4380 | 1 | 0.000687 | 46.776766 | -inf | 0.000000e+00 | 0.000000e+00 | | 136 | 7.4931 | 1 | 0.000687 | 46.979810 | -inf | 0.000000e+00 | 0.000000e+00 | | 137 | 7.5482 | 1 | 0.000693 | 46.578824 | -inf | 0.000000e+00 | 0.000000e+00 | | 138 | 7.6033 | 1 | 0.000689 | 46.882381 | -inf | 0.000000e+00 | 0.000000e+00 | | 139 | 7.6584 | 1 | 0.000676 | 48.142078 | -inf | 0.000000e+00 | 0.000000e+00 | | 140 | 7.7135 | 1 | 0.000693 | 46.784018 | -inf | 0.000000e+00 | 0.000000e+00 | | 141 | 7.7686 | 1 | 0.000688 | 46.642419 | -inf | 0.000000e+00 | 0.000000e+00 | | 142 | 7.8237 | 1 | 0.000689 | 46.662693 | -inf | 0.000000e+00 | 0.000000e+00 | | 143 | 7.8788 | 1 | 0.000689 | 46.802617 | -inf | 0.000000e+00 | 0.000000e+00 | | 144 | 7.9339 | 1 | 0.000690 | 46.776085 | -inf | 0.000000e+00 | 0.000000e+00 | | 145 | 7.9890 | 1 | 0.000701 | 46.875717 | -inf | 0.000000e+00 | 0.000000e+00 | | 146 | 8.0441 | 1 
| 0.000685 | 46.822466 | -inf | 0.000000e+00 | 0.000000e+00 | | 147 | 8.0992 | 1 | 0.000677 | 47.384767 | -inf | 0.000000e+00 | 0.000000e+00 | | 148 | 8.1543 | 1 | 0.000675 | 47.679214 | -inf | 0.000000e+00 | 0.000000e+00 | | 149 | 8.2094 | 1 | 0.000689 | 46.763218 | -inf | 0.000000e+00 | 0.000000e+00 | | 150 | 8.2645 | 1 | 0.000679 | 46.940251 | -inf | 0.000000e+00 | 0.000000e+00 | | 151 | 8.3196 | 1 | 0.000686 | 46.627359 | -inf | 0.000000e+00 | 0.000000e+00 | | 152 | 8.3747 | 1 | 0.000687 | 46.664798 | -inf | 0.000000e+00 | 0.000000e+00 | | 153 | 8.4298 | 1 | 0.000687 | 46.633910 | -inf | 0.000000e+00 | 0.000000e+00 | | 154 | 8.4848 | 1 | 0.000679 | 46.877187 | -inf | 0.000000e+00 | 0.000000e+00 | | 155 | 8.5399 | 1 | 0.000696 | 46.384902 | -inf | 0.000000e+00 | 0.000000e+00 | | 156 | 8.5950 | 1 | 0.000690 | 46.680191 | -inf | 0.000000e+00 | 0.000000e+00 | | 157 | 8.6501 | 1 | 0.000690 | 46.595866 | -inf | 0.000000e+00 | 0.000000e+00 | | 158 | 8.7052 | 1 | 0.000711 | 46.561896 | -inf | 0.000000e+00 | 0.000000e+00 | | 159 | 8.7603 | 1 | 0.000682 | 46.789459 | -inf | 0.000000e+00 | 0.000000e+00 | | 160 | 8.8154 | 1 | 0.000712 | 46.585605 | -inf | 0.000000e+00 | 0.000000e+00 | | 161 | 8.8705 | 1 | 0.000674 | 47.731465 | -inf | 0.000000e+00 | 0.000000e+00 | | 162 | 8.9256 | 1 | 0.000679 | 47.806239 | -inf | 0.000000e+00 | 0.000000e+00 | | 163 | 8.9807 | 1 | 0.000687 | 46.597511 | -inf | 0.000000e+00 | 0.000000e+00 | | 164 | 9.0358 | 1 | 0.000695 | 46.662126 | -inf | 0.000000e+00 | 0.000000e+00 | | 165 | 9.0909 | 1 | 0.000678 | 47.138759 | -inf | 0.000000e+00 | 0.000000e+00 | | 166 | 9.1460 | 1 | 0.000687 | 46.654425 | -inf | 0.000000e+00 | 0.000000e+00 | | 167 | 9.2011 | 1 | 0.000683 | 47.072656 | -inf | 0.000000e+00 | 0.000000e+00 | | 168 | 9.2562 | 1 | 0.000688 | 46.655192 | -inf | 0.000000e+00 | 0.000000e+00 | | 169 | 9.3113 | 1 | 0.000683 | 46.990134 | -inf | 0.000000e+00 | 0.000000e+00 | | 170 | 9.3664 | 1 | 0.000704 | 46.960453 | -inf | 0.000000e+00 | 
0.000000e+00 | | 171 | 9.4215 | 1 | 0.000688 | 46.615369 | -inf | 0.000000e+00 | 0.000000e+00 | | 172 | 9.4766 | 1 | 0.000686 | 46.761498 | -inf | 0.000000e+00 | 0.000000e+00 | | 173 | 9.5317 | 1 | 0.000686 | 46.698208 | -inf | 0.000000e+00 | 0.000000e+00 | | 174 | 9.5868 | 1 | 0.000686 | 46.964062 | -inf | 0.000000e+00 | 0.000000e+00 | | 175 | 9.6419 | 1 | 0.000693 | 46.608047 | -inf | 0.000000e+00 | 0.000000e+00 | | 176 | 9.6970 | 1 | 0.000684 | 46.848123 | -inf | 0.000000e+00 | 0.000000e+00 | | 177 | 9.7521 | 1 | 0.000687 | 46.693659 | -inf | 0.000000e+00 | 0.000000e+00 | | 178 | 9.8072 | 1 | 0.000682 | 46.766093 | -inf | 0.000000e+00 | 0.000000e+00 | | 179 | 9.8623 | 1 | 0.000689 | 46.497713 | -inf | 0.000000e+00 | 0.000000e+00 | | 180 | 9.9174 | 1 | 0.000694 | 46.824867 | -inf | 0.000000e+00 | 0.000000e+00 | | 181 | 9.9725 | 1 | 0.000686 | 46.686000 | -inf | 0.000000e+00 | 0.000000e+00 | | 182 |10.0275 | 1 | 0.000687 | 46.847956 | -inf | 0.000000e+00 | 0.000000e+00 | | 183 |10.0826 | 1 | 0.000696 | 46.639555 | -inf | 0.000000e+00 | 0.000000e+00 | | 184 |10.1377 | 1 | 0.000694 | 46.582859 | -inf | 0.000000e+00 | 0.000000e+00 | | 185 |10.1928 | 1 | 0.000692 | 46.696143 | -inf | 0.000000e+00 | 0.000000e+00 | | 186 |10.2479 | 1 | 0.000688 | 47.038347 | -inf | 0.000000e+00 | 0.000000e+00 | | 187 |10.3030 | 1 | 0.000662 | 48.575645 | -inf | 0.000000e+00 | 0.000000e+00 | | 188 |10.3581 | 1 | 0.000662 | 48.531096 | -inf | 0.000000e+00 | 0.000000e+00 | | 189 |10.4132 | 1 | 0.000674 | 47.867980 | -inf | 0.000000e+00 | 0.000000e+00 | | 190 |10.4683 | 1 | 0.000683 | 47.185120 | -inf | 0.000000e+00 | 0.000000e+00 | | 191 |10.5234 | 1 | 0.000680 | 47.472356 | -inf | 0.000000e+00 | 0.000000e+00 | | 192 |10.5785 | 1 | 0.000671 | 48.044142 | -inf | 0.000000e+00 | 0.000000e+00 | | 193 |10.6336 | 1 | 0.000679 | 47.648927 | -inf | 0.000000e+00 | 0.000000e+00 | | 194 |10.6887 | 1 | 0.000669 | 48.052096 | -inf | 0.000000e+00 | 0.000000e+00 | | 195 |10.7438 | 1 | 0.000672 | 
(output truncated: the solver prints one such convergence line per time step, up to step 450, with the residuals remaining at numerical zero throughout)
...Finished
FINISHED - Elapsed time = 21435.5371006 seconds
FINISHED - CPU process time = 165060.5815336 seconds
```

##### Postprocessing[¶](#Postprocessing)

Read the structural and aerodynamic information of the last time step

```
[11]:
```

```
tstep = sharpy_output.structure.timestep_info[-1]
astep = sharpy_output.aero.timestep_info[-1]
```

Separate the structure into blades

```
[12]:
```

```
# Define beams
ielem = 0
nblades = np.max(sharpy_output.structure.beam_number) + 1
nodes_blade = []
first_node = 0
for iblade in range(nblades):
    nodes_blade.append(np.zeros((sharpy_output.structure.num_node,), dtype=bool))
    while sharpy_output.structure.beam_number[ielem] <= iblade:
        ielem += 1
        if ielem == sharpy_output.structure.num_elem:
            break
    nodes_blade[iblade][first_node:sharpy_output.structure.connectivities[ielem-1,1]+1] = True
    first_node = sharpy_output.structure.connectivities[ielem-1,1]+1
```

Compute the radial position of the nodes and initialise the rest of the variables

```
[13]:
```

```
r = []
c = []
dr = []
forces = []
CN_drR = []
CTan_drR = []
CP_drR = []
nodes_num = []
for iblade in range(nblades):
    forces.append(tstep.steady_applied_forces[nodes_blade[iblade]].copy())
    nodes_num.append(np.arange(0,
                              sharpy_output.structure.num_node, 1)[nodes_blade[iblade]])
    r.append(np.linalg.norm(tstep.pos[nodes_blade[iblade], :], axis=1))
    dr.append(np.zeros(np.sum(nodes_blade[iblade])))
    dr[iblade][0] = 0.5*(r[iblade][1] - r[iblade][0])
    dr[iblade][-1] = 0.5*(r[iblade][-1] - r[iblade][-2])
    for inode in range(1, len(r[iblade]) - 1):
        dr[iblade][inode] = 0.5*(r[iblade][inode+1] - r[iblade][inode-1])
    CN_drR.append(np.zeros(len(r[iblade])))
    c.append(np.zeros(len(r[iblade])))
    CTan_drR.append(np.zeros(len(r[iblade])))
    CP_drR.append(np.zeros(len(r[iblade])))
```

Transform the loads computed by SHARPy into out-of-plane and in-plane components

```
[14]:
```

```
rho = sharpy_output.settings['StaticCoupledRBM']['aero_solver_settings']['rho'].value
uinf = sharpy_output.settings['StaticCoupledRBM']['aero_solver_settings']['velocity_field_input']['u_inf'].value
R = np.max(r[0])
Cp = 0
Ct = 0
global_force_factor = 0.5 * rho * uinf**2 * np.pi * R**2
global_power_factor = global_force_factor*uinf
for iblade in range(nblades):
    for inode in range(len(r[iblade])):
        forces[iblade][inode, 0] *= 0.  # Discard the spanwise component
        node_global_index = nodes_num[iblade][inode]
        ielem = sharpy_output.structure.node_master_elem[node_global_index, 0]
        inode_in_elem = sharpy_output.structure.node_master_elem[node_global_index, 1]
        CAB = algebra.crv2rotation(tstep.psi[ielem, inode_in_elem, :])
        c[iblade][inode] = sharpy_output.aero.aero_dict['chord'][ielem, inode_in_elem]
        forces_AFoR = np.dot(CAB, forces[iblade][inode, 0:3])
        CN_drR[iblade][inode] = forces_AFoR[2]/dr[iblade][inode]*R / global_force_factor
        CTan_drR[iblade][inode] = np.linalg.norm(forces_AFoR[0:2])/dr[iblade][inode]*R / global_force_factor
        CP_drR[iblade][inode] = np.linalg.norm(forces_AFoR[0:2])/dr[iblade][inode]*R * r[iblade][inode]*rotation_velocity / global_power_factor
    Cp += np.sum(CP_drR[iblade]*dr[iblade]/R)
    Ct += np.sum(CN_drR[iblade]*dr[iblade]/R)
```

##### Results[¶](#Results)

Plot of the loads along the blade

```
[15]:
```

```
fig, list_plots = plt.subplots(1, 2, figsize=(12, 3))
list_plots[0].grid()
list_plots[0].set_xlabel("r/R [-]")
list_plots[0].set_ylabel("CN/d(r/R) [-]")
list_plots[0].plot(r[0]/R, CN_drR[0], '-', label='SHARPy')
list_plots[0].plot(of_rR, of_cNdrR, '-', label='OpenFAST')
list_plots[0].legend()
list_plots[1].grid()
list_plots[1].set_xlabel("r/R [-]")
list_plots[1].set_ylabel("CT/d(r/R) [-]")
list_plots[1].plot(r[0]/R, CTan_drR[0], '-', label='SHARPy')
list_plots[1].plot(of_rR, of_cTdrR, '-', label='OpenFAST')
list_plots[1].legend()
plt.show()
```

Print the rotor thrust and power coefficients

```
[16]:
```

```
print(" OpenFAST SHARPy")
print("Cp[-] %.2f %.2f" % (of_cp, Cp))
print("Ct[-] %.2f %.2f" % (of_ct, Ct))
```

```
 OpenFAST SHARPy
Cp[-] 0.49 0.55
Ct[-] 0.70 0.76
```

#### Downloadable files[¶](#downloadable-files)

* [`./example_notebooks/linear_goland_flutter.ipynb`](_downloads/3a4219d135cdddc192b36990f57b9aab/linear_goland_flutter.ipynb)
* [`./example_notebooks/nonlinear_t-tail_HALE.ipynb`](_downloads/bb605592bd9d48091965d2a39e68d94e/nonlinear_t-tail_HALE.ipynb)
*
[`./example_notebooks/linear_horten.ipynb`](_downloads/1f788fbfbff167d84a6073be771741f1/linear_horten.ipynb)
* [`./example_notebooks/wind_turbine.ipynb`](_downloads/9bf1c218857bf056ddcb061c98293dad/wind_turbine.ipynb)

Input data for wind turbine:

* [`./example_notebooks/source/type02_db_NREL5MW_v01.xlsx`](_downloads/99d03d8d40fe5b3b9bb82403945c5bdc/type02_db_NREL5MW_v01.xlsx)

### Contributing to SHARPy[¶](#contributing-to-sharpy)

#### Bug fixes and features[¶](#bug-fixes-and-features)

SHARPy is a collaborative effort, and this means that some coding practices need to be encouraged so that the code is kept tidy and consistent. Any user is welcome to raise issues for bug fixes and feature proposals through Github.

If you are submitting a bug report:

1. Make sure your SHARPy, xbeam and uvlm local copies are up to date and in the same branch.
2. Double check that your python distribution is up to date by comparing it with the `utils/environment_*.yml` file.
3. Try to assemble a minimal working example that can be run quickly and easily.
4. Describe your setup (OS, path, compilers…) and the problem as accurately as possible.
5. Raise an issue with all this information in the Github repo and label it `potential bug`.

Please bear in mind that we do not have the resources to provide support for user modifications of the code through Github. If you have doubts about how to modify certain parts of the code, contact us through email and we will help you as much as we can.

If you are fixing a bug:

1. THANKS!
2. Please create a pull request from your modified fork, and describe in a few lines which bug you are fixing, a minimal example that triggers it and how you are fixing it. We will review it ASAP and hopefully it will be incorporated into the code!

If you have an idea for new functionality but do not know how to implement it:

1. We welcome tips and suggestions from users, as it allows us to broaden the scope of the code. The more people using it, the better!
2. Feel free to file an issue in Github and tag it as `feature proposal`. Please understand that the more complete the description of the potential feature, the more likely it is that some of the developers will give it a go.

If you have developed new functionality and you want to share it with the world:

1. AWESOME! Please follow the same instructions as for the bug fix submission. If you have some peer-reviewed references related to the new code, even better, as it will save us some precious time.

#### Code formatting[¶](#code-formatting)

We try to follow the [PEP8](https://www.python.org/dev/peps/pep-0008/) standard (with spaces, no tabs please!) and the [Google Python Style Guide](http://google.github.io/styleguide/pyguide.html). We do not ask you to obsess over formatting, but please try to keep it tidy and descriptive. A good tip is to run [`pylint`](https://www.pylint.org/) to make sure there are no obvious formatting problems.

#### Documentation[¶](#documentation)

Contributing to SHARPy's documentation benefits everyone. As a developer, writing documentation helps you better understand what you have done and whether your functions etc. make logical sense. As a user, any documentation is better than digging through the code. The more we have documented, the easier the code is to use and the more users we can have.

If you want to contribute by documenting code, you have come to the right place. SHARPy is documented using Sphinx, which extracts the documentation directly from the source code. It is then sorted into directories automatically and a human-readable website is generated. The amount of work you need to do is minimal. That said, the recipe for a successfully documented class, function or module is the following:

1. Your documentation has to be written in ReStructuredText (rst). I know, another language… hence I will leave a few tips:
* Inline code is written between double backticks.
* Inline math is written as `:math:`1+\exp(i\pi) = 0``.
Don’t forget the backticks!
* Math in a single line or in multiple lines is simple:

```
.. math:: 1 + \exp(i\pi) = 0
```

* Lists in ReStructuredText are tricky, I must admit. Therefore, I will link to some [examples](http://docutils.sourceforge.net/docs/user/rst/quickref.html#enumerated-lists). The key resides in not forgetting the spaces, in particular when you go onto a second line!
* The definitive example list can be found [here](http://docutils.sourceforge.net/docs/user/rst/quickref.html).

2. Titles and docstrings, the bare minimum:
* Start docstrings with `r` so that they are interpreted raw:

```
r"""
My docstring
"""
```

* All functions, modules and classes should be given a title that goes in the first line of the docstring.
* If you are writing a whole package with an `__init__.py` file, even if it is empty, give it a human-readable docstring. This will then be imported into the documentation.
* For modules with several functions, the module docstring has to be at the very top of the file, prior to the `import` statements.

3. We use the Google documentation style. See the [description](https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings).

4. Function arguments and returns:
* Function arguments are simple to describe:

```
def func(arg1, arg2):
    """Summary line.

    Extended description of function.

    Args:
        arg1 (int): Description of arg1
        arg2 (str): Description of arg2

    Returns:
        bool: Description of return value
    """
    return True
```

5. Solver settings:
* If your code has a settings dictionary with defaults and types, then make sure that:
+ They are defined as class variables and not instance attributes.
+ You define `settings_types`, `settings_default` and `settings_description` dictionaries.
+ After all your settings, you update the docstring with the automatically generated settings table.
You will need to import the `sharpy.utils.settings` module:

```
settings_types = dict()
settings_default = dict()
settings_description = dict()
# keep adding settings

settings_table = sharpy.utils.settings.SettingsTable()
__doc__ += settings_table.generate(settings_types, settings_default, settings_description)
```

6. See what your docs look like!
* Once you are done, run the following `SHARPy` command:

```
sharpy any_string -d
```

* If you are making minor updates to docstrings (i.e. you are not documenting a previously undocumented function/class/module) you can simply change directory to `sharpy/docs` and run

```
make html
```

* Your documentation will compile and warnings will appear etc. You can check the result by opening

```
docs/build/index.html
```

and navigating to your recently created page.
* Make sure that **before committing** any changes in the documentation you update the entire `docs` directory by running

```
sharpy any_string -d
```

Thank you for reading through this and contributing to make SHARPy a better documented, more user-friendly code!

#### Git branching model[¶](#git-branching-model)

For the development of SHARPy, we try to follow [this](https://nvie.com/posts/a-successful-git-branching-model/) branching model, summarised by the BranchingModel schematic.

*Credit: <NAME> https://nvie.com/posts/a-successful-git-branching-model/*

Following this model, our branches contain the following versions of the code:

* `master`: latest stable release - paired with the appropriate tag.
* `develop`: latest stable development build. Features get merged to develop.
* `rc-**`: release candidate branch. Prior to releasing, tests are performed on this branch.
* `dev_doc`: documentation development branch. All work relating to documentation gets done here.
* `fix_**`: hotfix branch.
* `dev_**`: feature development branch.

If you contribute, please make sure you know which branch to work from. If in doubt, please ask!
### The SHARPy Case files[¶](#the-sharpy-case-files)

SHARPy takes as input a series of `.h5` files that contain the numerical data and a `.sharpy` file that contains the settings for each of the solvers. How these files are generated is at the user's discretion, though templates are provided, and all methods are valid as long as the required variables are provided in the appropriate format.

#### Modular Framework[¶](#modular-framework)

SHARPy is built with a modular framework in mind. The following diagram shows the structure of a nonlinear, time-marching aeroelastic simulation. Each of the blocks corresponds to an individual solver with specific settings. How we choose which solvers to run, in which order and with what settings is done through the solver configuration file, explained in the next section.

#### Solver configuration file[¶](#solver-configuration-file)

The solver configuration file is the main input to SHARPy. It is a [ConfigObj](http://pypi.org/project/configobj/)-formatted file with the `.sharpy` extension. It contains the settings for each of the solvers and the order in which to run them.

A typical way to assemble the solver configuration file is to place all your desired settings in a dictionary and then convert to and write your `ConfigObj`. If a setting is not provided, the default value will be used. The settings that each solver takes, their types and default values are explained in the relevant documentation pages.
```
import configobj

filename = '<case_route>/<case_name>.sharpy'
config = configobj.ConfigObj()
config.filename = filename
config['SHARPy'] = {'case': '<your SHARPy case name>',  # an example setting
                    # Rest of your settings for the PreSHARPy class
                    }
config['BeamLoader'] = {'orientation': [1., 0., 0.],  # an example setting
                        # Rest of settings for the BeamLoader solver
                        }
# Continue as above for the remainder of solvers that you would like to include

# finally, write the config file
config.write()
```

The resulting `.sharpy` file is a plain text file with your specified settings for each of the solvers. Note that, therefore, if one of your settings is a `np.array`, it will get transformed into a string of plain text before being read by SHARPy. However, any setting with `list(float)` specified as its setting type will get converted into a `np.array` once it is read by SHARPy.

#### FEM file[¶](#fem-file)

The `case.fem.h5` file has several components. We go through them one by one:

* `num_node_elem [int]`: number of nodes per element. Always 3 in our case (3 nodes per structural element - quadratic beam elements).
* `num_elem [int]`: number of structural elements.
* `num_node [int]`: number of nodes. For simple structures, it is `num_elem*(num_node_elem - 1) + 1`. For more complicated ones, you need to calculate it properly.
* `coordinates [num_node, 3]`: coordinates of the nodes in the body-attached FoR (A).
* `connectivities [num_elem, num_node_elem]`: beam element connectivities. Every row refers to an element, and the three integers in that row are the indices of the three nodes belonging to that element. Now, the catch: the ordering is not as you'd think. Order them as `[0, 2, 1]`, that is: first one, last one, central one. The following image shows the node indices inside the circles representing the nodes, the element indices in blue and the resulting connectivities matrix next to it. Connectivities are tricky when considering complex configurations.
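To make the `[0, 2, 1]` ordering concrete, here is a small sketch (not one of the SHARPy templates; the two-element beam is made up) that builds the connectivities of a single straight beam of quadratic elements:

```python
import numpy as np

num_node_elem = 3  # quadratic beam elements
num_elem = 2       # hypothetical two-element beam
num_node = num_elem * (num_node_elem - 1) + 1  # 5 nodes in a simple chain

# One row per element: [first node, last node, central node]
connectivities = np.zeros((num_elem, num_node_elem), dtype=int)
for ielem in range(num_elem):
    first = ielem * (num_node_elem - 1)
    connectivities[ielem, :] = [first, first + 2, first + 1]

print(connectivities.tolist())  # [[0, 2, 1], [2, 4, 3]]
```

Note how consecutive elements share their end nodes (node 2 here), which is why the node count is `num_elem*(num_node_elem - 1) + 1`.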
Pay attention at the beginning and you’ll save yourself a lot of trouble. * `stiffness_db [:, 6, 6]`: database of stiffness matrices. > The first dimension has as many elements as there are different stiffness matrices in the model. > * `elem_stiffness [num_elem]` : array of indices (starting at 0). > It links every element (index) to the stiffness matrix index in `stiffness_db`. > For example `elem_stiffness[0] = 0` ; `elem_stiffness[2] = 1` means that element `0` has a stiffness matrix > equal to `stiffness_db[0, :, :]` , and element `2` has a stiffness matrix equal to > `stiffness_db[1, :, :]`. > The shape of a stiffness matrix, \(\mathrm{S}\), is: > \[\begin{split}\mathrm{S} = \begin{bmatrix} > EA & & & & & \\ > & GA_y & & & & \\ > & & GA_z & & & \\ > & & & GJ & & \\ > & & & & EI_y & \\ > & & & & & EI_z \\ > \end{bmatrix}\end{split}\] > with the cross terms added if needed. > `mass_db` and `elem_mass` follow the same scheme as the stiffness, but the mass matrix is given by: > \[\begin{split}\mathrm{M} = \begin{bmatrix} > m\mathbf{I} & -\tilde{\boldsymbol{\xi}}_{cg}m \\ > \tilde{\boldsymbol{\xi}}_{cg}m & \mathbf{J}\\ > \end{bmatrix}\end{split}\] > where \(m\) is the distributed mass per unit length \(kg/m\) , \((\tilde{\bullet})\) is the > skew-symmetric matrix of a vector and \(\boldsymbol{\xi}_{cg}\) is the location of the centre of gravity > with respect to the elastic axis in the MATERIAL (local) FoR. And what is the material FoR? This is an important point, > because all the inputs that move WITH the beam are in the material FoR. For example: follower forces, stiffness, mass, > lumped masses… > The material frame of reference is noted as \(B\). Essentially, the \(x\) component is tangent to the beam in the > increasing node ordering, \(z\) looks up generally and \(y\) is oriented such that the FoR is right handed. > In practice (vertical surfaces, structural twist effects…) it is more complicated than this.
The only > sure thing about \(B\) is that its \(x\) direction is tangent to the beam in the increasing node number direction. > However, with just this, we have an infinite number of potential reference frames, with \(y\) and \(z\) > being normal to \(x\) but rotating around it. The solution is to indicate a `for_delta`, or frame of > reference delta vector (\(\Delta\)). > Now we can unequivocally define the material frame of reference. With \(x_B\) and \(\Delta\) defining a > plane, \(y_B\) is chosen such that the \(z\) component is oriented upwards with respect to the lifting surface. > From this definition comes the only constraint on \(\Delta\): it cannot be parallel to \(x_B\). > * `frame_of_reference_delta [num_elem, num_node_elem, 3]`: rotation vector to FoR \(B\). > Contains the \(\Delta\) vector in the body-attached (\(A\)) frame of reference. > As a rule of thumb: > \[\begin{split}\Delta = > \begin{cases} > [-1, 0, 0], \quad \text{if right wing} \\ > [1, 0, 0], \quad \text{if left wing} \\ > [0, 1, 0], \quad \text{if fuselage} \\ > [-1, 0, 0], \quad \text{if vertical fin} \\ > \end{cases}\end{split}\] > These rules of thumb only work if the nodes increase towards the tip of the surfaces (and the tail in the > case of the fuselage). > * `structural_twist [num_elem, num_node_elem]`: Element twist. > Technically not necessary, as the same effect can be achieved with `FoR_delta`. > * `boundary_conditions [num_node]`: boundary conditions. > An array of integers (`np.zeros((num_node, ), dtype=int)`) that contains all `0` except for: > > > > + One node NEEDS to have a `1`; this is the reference node. Usually, the first node has a `1` and is located > > at `[0, 0, 0]`. This makes things much easier. > > + If the node is a tip of a beam (is not attached to 2 elements, but just 1), it needs to have a `-1`. > > * `beam_number [num_elem]`: beam index. > Another array of integers. Usually you don’t need to modify its value; leave it at 0.
> * `app_forces [num_elem, 6]`: applied forces and moments. > Contains the applied forces `app_forces[:, 0:3]` and moments `app_forces[:, 3:6]` at a > given node. > Important points: the forces are given in the material FoR (check above). That means that in a > symmetrical model, a thrust force oriented upstream would have the shape `[0, T, 0, 0, 0, 0]` in the > right wing, while the left would be `[0, -T, 0, 0, 0, 0]`. Likewise, a torsional moment for twisting the wing > leading edge up would be `[0, 0, 0, M, 0, 0]` for the right, and `[0, 0, 0, -M, 0, 0]` for the left. > But be careful, because an out-of-plane bending moment (wing tip up) has the same sign (think about it). > * `lumped_mass [:]`: lumped masses. > An array with as many masses as needed (in kg this time). Their order is important, as more > information is required to implement them in a model. > * `lumped_mass_nodes [:]`: Lumped mass nodes. > An array of integers. It contains the indices of the nodes related to the masses given > in `lumped_mass`, in order. > * `lumped_mass_inertia [:, 3, 3]`: Lumped mass inertia. > An array of `3x3` inertial tensors. The relationship is set by the ordering as well. > * `lumped_mass_position [:, 3]`: Lumped mass position. > The relative position of the lumped mass with respect to the node > (given in `lumped_mass_nodes` ) coordinates. ATTENTION: the lumped mass is solidly attached to the node, and > thus, its position is given in the material FoR. #### Aerodynamics file[¶](#aerodynamics-file) All the aerodynamic data is contained in `case.aero.h5`. It is important to know that the input for the aero is usually based on elements (and, inside the elements, their nodes). This sometimes causes an overlap in information, as some nodes are shared by two adjacent elements (like in the connectivities graph in the previous section).
The easiest way of dealing with this is to make sure the data is consistent, so that the properties of the last node of the first element are the same as those of the first node of the second element. Item by item: * `airfoils`: Airfoil group. > In the `aero.h5` file, there is a Group called `airfoils`. The airfoils are stored in this group (which acts as a > folder) as a two-column matrix with \(x/c\) and \(y/c\) in each column. They are named `'0', '1'` , > and so on. > * `chords [num_elem, num_node_elem]`: Chord > An array with the chords of every airfoil given on an element/node basis. > * `twist [num_elem, num_node_elem]`: Twist. > The twist angle in radians. It is implemented as a rotation around the local \(x\) axis. > * `sweep [num_elem, num_node_elem]`: Sweep. > The same here, just a rotation around \(z\). > * `airfoil_distribution_input [num_elem, num_node_elem]`: Airfoil distribution. > Contains the indices of the airfoils that you put previously in `airfoils`. > * `surface_distribution_input [num_elem]`: Surface integer array. > It contains the index of the surface the element belongs > to. Surfaces need to be continuous, so please note that if your beam numbering is not continuous, you need to make > a surface per continuous section. > * `surface_m [num_surfaces]`: Chordwise panelling. > An integer array with the number of chordwise panels for every surface. > * `m_distribution [string]`: Discretisation method. > A string with the chordwise panel distribution. In almost all cases, leave it at `uniform`. > * `aero_node_input [num_node]`: Aerodynamic node definition. > A boolean (`True` or `False`) array that indicates whether that node has a lifting > surface attached to it. > * `elastic_axis [num_elem, num_node_elem]`: elastic axis. > Indicates the elastic axis location with respect to the leading edge as a > fraction of the chord of that rib.
Note that the elastic axis is already determined, as the beam is fixed now, so > this setting controls the location of the lifting surface wrt the beam. > * `control_surface [num_elem, num_node_elem]`: Control surface. > An integer array containing `-1` if that section has no control surface associated with it, and `0, 1, 2 ...` > if the section belongs to the control surface `0, 1, 2 ...` respectively. > * `control_surface_type [num_control_surface]`: Control Surface type. > Contains `0` if the control surface deflection is static, and `1` if it > is dynamic. > * `control_surface_chord [num_control_surface]`: Control surface chord. > An INTEGER array with the number of panels belonging to the control > surface. For example, if `M = 4` and you want your control surface to be \(0.25c\), you need to put `1`. > * `control_surface_hinge_coord [num_control_surface]`: Control surface hinge coordinate. > Only necessary for lifting surfaces that are deflected as a > whole, like some horizontal tails in some aircraft. Leave it at `0` if you are not modelling this. > * `airfoil_efficiency [num_elem, num_node_elem, 2, 3]`: Airfoil efficiency. > This is an optional setting that introduces a user-defined efficiency and constant terms to the mapping > between the aerodynamic forces calculated at the lattice grid and the structural nodes. The formatting of the > 4-dimensional array is simple. The first two dimensions correspond to the element index and the local node index. > The third index indicates whether the term is a multiplier of the force (`0`) or a constant term (`1`). The final index refers to, > in the **local, body-attached** `B` frame, the factors and constant terms for: `fy, fz, mx`. > For more information on how these factors are included in the mapping terms > see [`sharpy.aero.utils.mapping.aero2struct_force_mapping()`](index.html#module-sharpy.aero.utils.mapping.aero2struct_force_mapping).
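As an illustration of the fields above, the `case.aero.h5` file can be written with `h5py`. This is a minimal sketch for a single flat-plate surface; the dataset names follow the field names listed above, so check the templates shipped with your SHARPy version for the exact naming it expects:

```python
import h5py
import numpy as np

num_elem, num_node_elem = 2, 3
num_node = num_elem * (num_node_elem - 1) + 1

with h5py.File('case.aero.h5', 'w') as h5file:
    # Airfoil group: each airfoil is a two-column (x/c, y/c) matrix
    airfoils = h5file.create_group('airfoils')
    x_c = np.linspace(0., 1., 20)
    airfoils.create_dataset('0', data=np.column_stack((x_c, np.zeros_like(x_c))))  # flat plate

    h5file.create_dataset('chords', data=np.ones((num_elem, num_node_elem)))
    h5file.create_dataset('twist', data=np.zeros((num_elem, num_node_elem)))
    h5file.create_dataset('sweep', data=np.zeros((num_elem, num_node_elem)))
    h5file.create_dataset('airfoil_distribution_input',
                          data=np.zeros((num_elem, num_node_elem), dtype=int))
    h5file.create_dataset('surface_distribution_input',
                          data=np.zeros((num_elem,), dtype=int))
    h5file.create_dataset('surface_m', data=np.array([4]))  # 4 chordwise panels
    h5file.create_dataset('m_distribution', data='uniform'.encode('ascii'))
    h5file.create_dataset('aero_node_input', data=np.ones((num_node,), dtype=bool))
    h5file.create_dataset('elastic_axis', data=0.25 * np.ones((num_elem, num_node_elem)))
```

The same `h5py` pattern applies to the `case.fem.h5` file described in the previous section.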
### SHARPy Solvers[¶](#sharpy-solvers) The available SHARPy solvers are listed below. Given SHARPy’s modular structure, solvers can be run independently, so the order in which you run them is important. The starting point is the PreSharpy loader. It contains the simulation configuration, the solvers to be run and the order in which they should run. #### Aero Solvers[¶](#aero-solvers) ##### DynamicUVLM[¶](#dynamicuvlm) *class* `sharpy.solvers.dynamicuvlm.``DynamicUVLM`[[source]](_modules/sharpy/solvers/dynamicuvlm.html#DynamicUVLM)[¶](#sharpy.solvers.dynamicuvlm.DynamicUVLM) Dynamic Aerodynamic Time Domain Simulation Provides an aerodynamics-only simulation in time by time stepping the solution. The type of aerodynamic solver is parsed as a setting. To Do: Clean timestep information for memory efficiency Warning Under development. Issues encountered when using the linear UVLM as the aerodynamic solver with integration order = 1. The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `print_info` | `bool` | Write status to screen | `True` | | `structural_solver` | `str` | Structural solver to use in the coupled simulation | `None` | | `structural_solver_settings` | `dict` | Dictionary of settings for the structural solver | `None` | | `aero_solver` | `str` | Aerodynamic solver to use in the coupled simulation | `None` | | `aero_solver_settings` | `dict` | Dictionary of settings for the aerodynamic solver | `None` | | `n_time_steps` | `int` | Number of time steps for the simulation | `None` | | `dt` | `float` | Time step | `None` | | `include_unsteady_force_contribution` | `bool` | If on, added mass contribution is added to the forces. This depends on the time derivative of the bound circulation.
Check `filter_gamma_dot` in the aero solver | `False` | | `postprocessors` | `list(str)` | List of the postprocessors to run at the end of every time step | `[]` | | `postprocessors_settings` | `dict` | Dictionary with the applicable settings for every `postprocessor`. Every `postprocessor` needs its entry, even if empty | `{}` | ##### NoAero[¶](#noaero) *class* `sharpy.solvers.noaero.``NoAero`[[source]](_modules/sharpy/solvers/noaero.html#NoAero)[¶](#sharpy.solvers.noaero.NoAero) Solver to be used with DynamicCoupled when aerodynamics are not of interest The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | | | | | `initialise`(*data*, *custom_settings=None*)[[source]](_modules/sharpy/solvers/noaero.html#NoAero.initialise)[¶](#sharpy.solvers.noaero.NoAero.initialise) To be called just once per simulation. ##### PrescribedUvlm[¶](#prescribeduvlm) *class* `sharpy.solvers.prescribeduvlm.``PrescribedUvlm`[[source]](_modules/sharpy/solvers/prescribeduvlm.html#PrescribedUvlm)[¶](#sharpy.solvers.prescribeduvlm.PrescribedUvlm) This class runs a prescribed rigid body motion simulation of a rigid aerodynamic body.
The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `print_info` | `bool` | Write status to screen | `True` | | `structural_solver` | `str` | Structural solver to use in the coupled simulation | `None` | | `structural_solver_settings` | `dict` | Dictionary of settings for the structural solver | `None` | | `aero_solver` | `str` | Aerodynamic solver to use in the coupled simulation | `None` | | `aero_solver_settings` | `dict` | Dictionary of settings for the aerodynamic solver | `None` | | `n_time_steps` | `int` | Number of time steps for the simulation | `None` | | `dt` | `float` | Time step | `None` | | `postprocessors` | `list(str)` | List of the postprocessors to run at the end of every time step | `[]` | | `postprocessors_settings` | `dict` | Dictionary with the applicable settings for every `postprocessor`. Every `postprocessor` needs its entry, even if empty | `{}` | ##### SHWUvlm[¶](#shwuvlm) *class* `sharpy.solvers.shwuvlm.``SHWUvlm`[[source]](_modules/sharpy/solvers/shwuvlm.html#SHWUvlm)[¶](#sharpy.solvers.shwuvlm.SHWUvlm) Steady vortex method assuming a helicoidal wake shape. The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `print_info` | `bool` | Output run-time information | `True` | | `num_cores` | `int` | Number of cores to use in parallelisation | `0` | | `convection_scheme` | `int` | Convection scheme for the wake (only 2 tested for this solver) | `2` | | `dt` | `float` | Time step used to discretise the wake | `0.1` | | `iterative_solver` | `bool` | | `False` | | `iterative_tol` | `float` | | `0.0001` | | `iterative_precond` | `bool` | | `False` | | `velocity_field_generator` | `str` | Name of the velocity field generator | `SteadyVelocityField` | | `velocity_field_input` | `dict` | Dictionary of inputs needed by the
velocity field generator | `{}` | | `gamma_dot_filtering` | `int` | Parameter used to filter gamma dot (only odd numbers bigger than one allowed) | `0` | | `rho` | `float` | Density | `1.225` | | `rot_vel` | `float` | Rotation velocity in rad/s | `0.0` | | `rot_axis` | `list(float)` | Axis of rotation of the wake | `[1.0, 0.0, 0.0]` | | `rot_center` | `list(float)` | Center of rotation of the wake | `[0.0, 0.0, 0.0]` | ##### StaticUvlm[¶](#staticuvlm) *class* `sharpy.solvers.staticuvlm.``StaticUvlm`[[source]](_modules/sharpy/solvers/staticuvlm.html#StaticUvlm)[¶](#sharpy.solvers.staticuvlm.StaticUvlm) `StaticUvlm` solver class, inherited from `BaseSolver` Aerodynamic solver that runs a UVLM routine to solve the steady or unsteady aerodynamic problem. The aerodynamic problem is posed in the form of an `Aerogrid` object. | Parameters: | * **data** ([*PreSharpy*](index.html#sharpy.presharpy.presharpy.PreSharpy)) – object with problem data * **custom_settings** (*dict*) – custom settings that override the settings in the solver `.txt` file. None by default | `settings`[¶](#sharpy.solvers.staticuvlm.StaticUvlm.settings) Name-value pairs of settings employed by the solver.
See Notes for valid combinations | Type: | dict | `settings_types`[¶](#sharpy.solvers.staticuvlm.StaticUvlm.settings_types) Acceptable data types for entries in `settings` | Type: | dict | `settings_default`[¶](#sharpy.solvers.staticuvlm.StaticUvlm.settings_default) Default values for the available `settings` | Type: | dict | `data`[¶](#sharpy.solvers.staticuvlm.StaticUvlm.data) object containing the information of the problem | Type: | [PreSharpy](index.html#sharpy.presharpy.presharpy.PreSharpy) | `velocity_generator`[¶](#sharpy.solvers.staticuvlm.StaticUvlm.velocity_generator) object containing the flow conditions information | Type: | object | The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `print_info` | `bool` | Print info to screen | `True` | | `horseshoe` | `bool` | Horseshoe wake modelling for steady simulations. | `False` | | `num_cores` | `int` | Number of cores to use in the VLM lib | `0` | | `n_rollup` | `int` | Number of rollup iterations for free wake. 
Use at least `n_rollup > 1.1*m_star` | `1` | | `rollup_dt` | `float` | Controls when the AIC matrix is refreshed during the wake rollup | `0.1` | | `rollup_aic_refresh` | `int` | | `1` | | `rollup_tolerance` | `float` | Convergence criterion for the rollup wake | `0.0001` | | `iterative_solver` | `bool` | Not in use | `False` | | `iterative_tol` | `float` | Not in use | `0.0001` | | `iterative_precond` | `bool` | Not in use | `False` | | `velocity_field_generator` | `str` | Name of the velocity field generator to be used in the simulation | `SteadyVelocityField` | | `velocity_field_input` | `dict` | Dictionary of settings for the velocity field generator | `{}` | | `rho` | `float` | Air density | `1.225` | `next_step`()[[source]](_modules/sharpy/solvers/staticuvlm.html#StaticUvlm.next_step)[¶](#sharpy.solvers.staticuvlm.StaticUvlm.next_step) Updates the aerogrid based on the info of the step, and increases the `self.ts` counter ##### StepLinearUVLM[¶](#steplinearuvlm) *class* `sharpy.solvers.steplinearuvlm.``StepLinearUVLM`[[source]](_modules/sharpy/solvers/steplinearuvlm.html#StepLinearUVLM)[¶](#sharpy.solvers.steplinearuvlm.StepLinearUVLM) Time domain aerodynamic solver that uses a linear UVLM formulation to be used with the `solvers.DynamicCoupled` solver. To use this solver, the `solver_id = StepLinearUVLM` must be given as the name for the `aero_solver` in the case of an aeroelastic solver, where the settings below would be parsed through `aero_solver_settings`. Notes The `integr_order` variable refers to the finite differencing scheme used to calculate the bound circulation derivative with respect to time \(\dot{\mathbf{\Gamma}}\). A first order scheme is used when `integr_order == 1` \[\dot{\mathbf{\Gamma}}^{n+1} = \frac{\mathbf{\Gamma}^{n+1}-\mathbf{\Gamma}^n}{\Delta t}\] If `integr_order == 2` a higher order scheme is used (but it isn’t exactly second order accurate [1]).
\[\dot{\mathbf{\Gamma}}^{n+1} = \frac{3\mathbf{\Gamma}^{n+1}-4\mathbf{\Gamma}^n + \mathbf{\Gamma}^{n-1}} {2\Delta t}\] If `track_body` is `True`, the UVLM is projected onto a frame `U` that is: > * Coincident with `G` at the linearisation timestep. > * Thereafter, it rotates by the same quantity as the FoR `A`. It is similar to stability axes and is recommended any time rigid body dynamics are included. See also `sharpy.linear.assembler.linearuvlm.LinearUVLM` References [1] <NAME>., & <NAME>. State-Space Realizations and Internal Balancing in Potential-Flow Aerodynamics with Arbitrary Kinematics. AIAA Journal, 57(6), 1–14. 2019. <https://doi.org/10.2514/1.J058153> The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `dt` | `float` | Time step | `0.1` | | `integr_order` | `int` | Integration order of the circulation derivative. Either `1` or `2`. | `2` | | `ScalingDict` | `dict` | Dictionary of scaling factors to achieve normalised UVLM realisation. | `{}` | | `remove_predictor` | `bool` | Remove the predictor term from the UVLM equations | `True` | | `use_sparse` | `bool` | Assemble UVLM plant matrix in sparse format | `True` | | `density` | `float` | Air density | `1.225` | | `track_body` | `bool` | UVLM inputs and outputs projected to coincide with lattice at linearisation | `True` | | `track_body_number` | `int` | Frame of reference number to follow. If `-1` track `A` frame.
| `-1` | The settings that `ScalingDict` accepts are the following: | Name | Type | Description | Default | | --- | --- | --- | --- | | `length` | `float` | Reference length to be used for UVLM scaling | `1.0` | | `speed` | `float` | Reference speed to be used for UVLM scaling | `1.0` | | `density` | `float` | Reference density to be used for UVLM scaling | `1.0` | `initialise`(*data*, *custom_settings=None*)[[source]](_modules/sharpy/solvers/steplinearuvlm.html#StepLinearUVLM.initialise)[¶](#sharpy.solvers.steplinearuvlm.StepLinearUVLM.initialise) Initialises the Linear UVLM aerodynamic solver and the chosen velocity generator. Settings are parsed into the standard SHARPy settings format for solvers. It then checks whether there is any previous information about the linearised system (in order for a solution to be restarted without overwriting the linearisation). If a linearised system does not exist, a linear UVLM system is created linearising about the current time step. The reference values for the input and output are transformed into column vectors \(\mathbf{u}\) and \(\mathbf{y}\), respectively. The information pertaining to the linear system is stored in a dictionary `self.data.aero.linear` within the main `data` variable. | Parameters: | * **data** ([*PreSharpy*](index.html#sharpy.presharpy.presharpy.PreSharpy)) – class containing the problem information * **custom_settings** (*dict*) – custom settings dictionary | `pack_input_vector`()[[source]](_modules/sharpy/solvers/steplinearuvlm.html#StepLinearUVLM.pack_input_vector)[¶](#sharpy.solvers.steplinearuvlm.StepLinearUVLM.pack_input_vector) Transform a SHARPy AeroTimestep instance into a column vector containing the input to the linear UVLM system. \[[\zeta,\, \dot{\zeta}, u_{ext}] \longrightarrow \mathbf{u}\] If the `track_body` option is on, the function projects all the input into a frame that: > 1. is equal to the FoR G at time 0 (linearisation point) > 2. 
rotates as the body frame specified in the `track_body_number` | Returns: | Input vector | | Return type: | np.ndarray | *static* `pack_state_vector`(*aero_tstep*, *aero_tstep_m1*, *dt*, *integr_order*)[[source]](_modules/sharpy/solvers/steplinearuvlm.html#StepLinearUVLM.pack_state_vector)[¶](#sharpy.solvers.steplinearuvlm.StepLinearUVLM.pack_state_vector) Transforms the SHARPy AeroTimestep format into a column vector containing the state information. The state vector is of a different form depending on the order of integration chosen. If a second order scheme is chosen, the state includes the bound circulation at the previous timestep, hence the timestep information for the previous timestep shall be parsed. The transformation is of the form: * If `integr_order==1`: > \[\mathbf{x}_n = [\mathbf{\Gamma}^T_n,\, > \mathbf{\Gamma_w}_n^T,\, > \Delta t \,\mathbf{\dot{\Gamma}}_n^T]^T\] > * Else, if `integr_order==2`: > \[\mathbf{x}_n = [\mathbf{\Gamma}_n^T,\, > \mathbf{\Gamma_w}_n^T,\, > \Delta t \,\mathbf{\dot{\Gamma}}_n^T,\, > \mathbf{\Gamma}_{n-1}^T]^T\] For the second order integration scheme, if the previous timestep information is not parsed, a first order stencil is employed to estimate the bound circulation at the previous timestep: > \[\mathbf{\Gamma}^{n-1} = \mathbf{\Gamma}^n - \Delta t \mathbf{\dot{\Gamma}}^n\] | Parameters: | * **aero_tstep** ([*AeroTimeStepInfo*](index.html#sharpy.aero.models.aerogrid.Aerogrid.sharpy.utils.datastructures.AeroTimeStepInfo)) – Aerodynamic timestep information at the current timestep `n`.
* **aero_tstep_m1** ([*AeroTimeStepInfo*](index.html#sharpy.aero.models.aerogrid.Aerogrid.sharpy.utils.datastructures.AeroTimeStepInfo)) – | | Returns: | State vector | | Return type: | np.ndarray | `run`(*aero_tstep*, *structure_tstep*, *convect_wake=False*, *dt=None*, *t=None*, *unsteady_contribution=False*)[[source]](_modules/sharpy/solvers/steplinearuvlm.html#StepLinearUVLM.run)[¶](#sharpy.solvers.steplinearuvlm.StepLinearUVLM.run) Solve the linear aerodynamic UVLM model at the current time step `n`. The step increment is solved as: \[\begin{split}\mathbf{x}^n &= \mathbf{A\,x}^{n-1} + \mathbf{B\,u}^n \\ \mathbf{y}^n &= \mathbf{C\,x}^n + \mathbf{D\,u}^n\end{split}\] A change of state is possible in order to solve the system without the predictor term, in which case the system is solved by: \[\begin{split}\mathbf{h}^n &= \mathbf{A\,h}^{n-1} + \mathbf{B\,u}^{n-1} \\ \mathbf{y}^n &= \mathbf{C\,h}^n + \mathbf{D\,u}^n\end{split}\] Variations are taken with respect to the initial reference state. The state and input vectors for the linear UVLM system are of the form: > If `integr_order==1`: > \[\mathbf{x}_n = [\delta\mathbf{\Gamma}^T_n,\, > \delta\mathbf{\Gamma_w}_n^T,\, > \Delta t \,\delta\mathbf{\dot{\Gamma}}_n^T]^T\] > Else, if `integr_order==2`: > \[\mathbf{x}_n = [\delta\mathbf{\Gamma}_n^T,\, > \delta\mathbf{\Gamma_w}_n^T,\, > \Delta t \,\delta\mathbf{\dot{\Gamma}}_n^T,\, > \delta\mathbf{\Gamma}_{n-1}^T]^T\] > And the input vector: > \[\mathbf{u}_n = [\delta\mathbf{\zeta}_n^T,\, > \delta\dot{\mathbf{\zeta}}_n^T,\,\delta\mathbf{u_{ext}}^T_n]^T\] where the subscript `n` refers to the time step. The linear UVLM system is then solved as detailed in [`sharpy.linear.src.linuvlm.Dynamic.solve_step()`](index.html#sharpy.linear.src.linuvlm.Dynamic.solve_step). The output is a column vector containing the aerodynamic forces at the panel vertices. To Do: option for impulsive start?
| Parameters: | * **aero_tstep** ([*AeroTimeStepInfo*](index.html#sharpy.aero.models.aerogrid.Aerogrid.sharpy.utils.datastructures.AeroTimeStepInfo)) – object containing the aerodynamic data at the current time step * **structure_tstep** (*StructTimeStepInfo*) – object containing the structural data at the current time step * **convect_wake** (*bool*) – for backward compatibility only. The linear UVLM assumes a frozen wake geometry * **dt** (*float*) – time increment * **t** (*float*) – current time * **unsteady_contribution** (*bool*) – (backward compatibility). Unsteady aerodynamic effects are always included | | Returns: | updated `self.data` class with the new forces and circulation terms of the system | | Return type: | [PreSharpy](index.html#sharpy.presharpy.presharpy.PreSharpy) | `unpack_ss_vectors`(*y_n*, *x_n*, *u_n*, *aero_tstep*)[[source]](_modules/sharpy/solvers/steplinearuvlm.html#StepLinearUVLM.unpack_ss_vectors)[¶](#sharpy.solvers.steplinearuvlm.StepLinearUVLM.unpack_ss_vectors) Transform column vectors used in the state space formulation into SHARPy format. The column vectors are transformed into lists with one entry per aerodynamic surface. Each entry contains a matrix with the quantities at each grid vertex. \[\mathbf{y}_n \longrightarrow \mathbf{f}_{aero}\] \[\mathbf{x}_n \longrightarrow \mathbf{\Gamma}_n,\, \mathbf{\Gamma_w}_n,\, \mathbf{\dot{\Gamma}}_n\] If the `track_body` option is on, the output forces are projected from the linearisation frame to the G frame. Note that the linearisation frame is: > 1. equal to the FoR G at time 0 (linearisation point) > 2.
rotates as the body frame specified in the `track_body_number` | Parameters: | * **y_n** (*np.ndarray*) – Column output vector of linear UVLM system * **x_n** (*np.ndarray*) – Column state vector of linear UVLM system * **u_n** (*np.ndarray*) – Column input vector of linear UVLM system * **aero_tstep** ([*AeroTimeStepInfo*](index.html#sharpy.aero.models.aerogrid.Aerogrid.sharpy.utils.datastructures.AeroTimeStepInfo)) – aerodynamic timestep information class instance | | Returns: | Tuple containing: forces (list): Aerodynamic forces in a list with `n_surf` entries. Each entry is a `(6, M+1, N+1)` matrix, where the first 3 entries along the first dimension correspond to the force components in `x`, `y` and `z`, and the latter 3 are zero. gamma (list): Bound circulation list with `n_surf` entries. Circulation is stored in an `(M+1, N+1)` matrix, corresponding to the panel vertices. gamma_dot (list): Bound circulation derivative list with `n_surf` entries. Circulation derivative is stored in an `(M+1, N+1)` matrix, corresponding to the panel vertices. gamma_star (list): Wake (free) circulation list with `n_surf` entries. Wake circulation is stored in an `(M_star+1, N+1)` matrix, corresponding to the panel vertices of the wake. | | Return type: | tuple | ##### StepUvlm[¶](#stepuvlm) *class* `sharpy.solvers.stepuvlm.``StepUvlm`[[source]](_modules/sharpy/solvers/stepuvlm.html#StepUvlm)[¶](#sharpy.solvers.stepuvlm.StepUvlm) StepUVLM is the main solver to use for unsteady aerodynamics. The desired flow field is injected into the simulation by means of a `generator`. For a list of available velocity field generators see the documentation page on generators which can be found under SHARPy Source Code.
Typical generators could be: * [`SteadyVelocityField`](index.html#sharpy.generators.steadyvelocityfield.SteadyVelocityField) * [`GustVelocityField`](index.html#sharpy.generators.gustvelocityfield.GustVelocityField) * [`TurbVelocityField`](index.html#sharpy.generators.turbvelocityfield.TurbVelocityField) amongst others. The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | Options | | --- | --- | --- | --- | --- | | `print_info` | `bool` | Print info to screen | `True` | | | `num_cores` | `int` | Number of cores to use in the VLM lib | `0` | | | `n_time_steps` | `int` | Number of time steps to be run | `100` | | | `convection_scheme` | `int` | `0`: fixed wake, `2`: convected with background flow; `3`: full force-free wake | `3` | `0`, `2`, `3` | | `dt` | `float` | Time step | `0.1` | | | `iterative_solver` | `bool` | Not in use | `False` | | | `iterative_tol` | `float` | Not in use | `0.0001` | | | `iterative_precond` | `bool` | Not in use | `False` | | | `velocity_field_generator` | `str` | Name of the velocity field generator to be used in the simulation | `SteadyVelocityField` | | | `velocity_field_input` | `dict` | Dictionary of settings for the velocity field generator | `{}` | | | `gamma_dot_filtering` | `int` | Filtering parameter for the Welch filter for the Gamma_dot estimation. Used when `unsteady_force_contribution` is `on`. | `0` | | | `rho` | `float` | Air density | `1.225` | | `initialise`(*data*, *custom_settings=None*)[[source]](_modules/sharpy/solvers/stepuvlm.html#StepUvlm.initialise)[¶](#sharpy.solvers.stepuvlm.StepUvlm.initialise) To be called just once per simulation. `run`(*aero_tstep=None*, *structure_tstep=None*, *convect_wake=True*, *dt=None*, *t=None*, *unsteady_contribution=False*)[[source]](_modules/sharpy/solvers/stepuvlm.html#StepUvlm.run)[¶](#sharpy.solvers.stepuvlm.StepUvlm.run) Runs a step of the aerodynamics as implemented in UVLM.
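Tying this back to the solver configuration file, the `StepUvlm` settings are passed as a dictionary under the solver's name. A minimal sketch (the values are illustrative only, not a validated case; the chosen generator receives its own settings through `velocity_field_input`):

```python
# Fragment of the settings dictionary later written out as the .sharpy file
settings = {}
settings['StepUvlm'] = {
    'print_info': True,
    'num_cores': 4,
    'n_time_steps': 100,
    'convection_scheme': 3,  # full force-free wake
    'dt': 0.05,
    'velocity_field_generator': 'GustVelocityField',
    'velocity_field_input': {},  # settings for the chosen generator go here
    'rho': 1.225,
}
```

Any setting omitted here falls back to the default listed in the table above.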
#### Coupled Solvers[¶](#coupled-solvers) ##### DynamicCoupled[¶](#dynamiccoupled) *class* `sharpy.solvers.dynamiccoupled.``DynamicCoupled`[[source]](_modules/sharpy/solvers/dynamiccoupled.html#DynamicCoupled)[¶](#sharpy.solvers.dynamiccoupled.DynamicCoupled) The `DynamicCoupled` solver couples the aerodynamic and structural solvers of choice to march the aeroelastic system’s solution forward in time. Using the `DynamicCoupled` solver requires that an instance of the `StaticCoupled` solver is called in the SHARPy solution `flow` when defining the problem case. The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `print_info` | `bool` | Write status to screen | `True` | | `structural_solver` | `str` | Structural solver to use in the coupled simulation | `None` | | `structural_solver_settings` | `dict` | Dictionary of settings for the structural solver | `None` | | `aero_solver` | `str` | Aerodynamic solver to use in the coupled simulation | `None` | | `aero_solver_settings` | `dict` | Dictionary of settings for the aerodynamic solver | `None` | | `n_time_steps` | `int` | Number of time steps for the simulation | `None` | | `dt` | `float` | Time step | `None` | | `fsi_substeps` | `int` | Max iterations in the FSI loop | `70` | | `fsi_tolerance` | `float` | Convergence threshold for the FSI loop | `1e-05` | | `structural_substeps` | `int` | Number of extra structural time steps per aero time step. 0 is a fully coupled simulation. | `0` | | `relaxation_factor` | `float` | Relaxation parameter in the FSI iteration.
0 is no relaxation and -> 1 is very relaxed | `0.2` | | `final_relaxation_factor` | `float` | Relaxation factor reached in `relaxation_steps` with `dynamic_relaxation` on | `0.0` | | `minimum_steps` | `int` | Number of minimum FSI iterations before convergence | `3` | | `relaxation_steps` | `int` | Length of the relaxation factor ramp between `relaxation_factor` and `final_relaxation_factor` with `dynamic_relaxation` on | `100` | | `dynamic_relaxation` | `bool` | Controls if relaxation factor is modified during the FSI iteration process | `False` | | `postprocessors` | `list(str)` | List of the postprocessors to run at the end of every time step | `[]` | | `postprocessors_settings` | `dict` | Dictionary with the applicable settings for every `postprocessor`. Every `postprocessor` needs its entry, even if empty | `{}` | | `controller_id` | `dict` | Dictionary of id of every controller (key) and its type (value) | `{}` | | `controller_settings` | `dict` | Dictionary with settings (value) of every controller id (key) | `{}` | | `cleanup_previous_solution` | `bool` | Controls if previous `timestep_info` arrays are reset before running the solver | `False` | | `include_unsteady_force_contribution` | `bool` | If on, added mass contribution is added to the forces. This depends on the time derivative of the bound circulation. Check `filter_gamma_dot` in the aero solver | `False` | | `steps_without_unsteady_force` | `int` | Number of initial timesteps that do not include unsteady force contributions. This avoids oscillations due to not perfectly trimmed initial conditions | `0` | | `pseudosteps_ramp_unsteady_force` | `int` | Length of the ramp with which the unsteady force contribution is introduced every time step during the FSI iteration process | `0` | `convergence`(*k*, *tstep*, *previous_tstep*)[[source]](_modules/sharpy/solvers/dynamiccoupled.html#DynamicCoupled.convergence)[¶](#sharpy.solvers.dynamiccoupled.DynamicCoupled.convergence) Check convergence in the FSI loop. 
Convergence is determined as: \[\epsilon_q^k = \frac{|| q^k - q^{k - 1} ||}{q^0}\] \[\epsilon_{\dot{q}}^k = \frac{|| \dot{q}^k - \dot{q}^{k - 1} ||}{\dot{q}^0}\] FSI converged if \(\epsilon_q^k < \mathrm{FSI\ tolerance}\) and \(\epsilon_{\dot{q}}^k < \mathrm{FSI\ tolerance}\) `get_g`()[[source]](_modules/sharpy/solvers/dynamiccoupled.html#DynamicCoupled.get_g)[¶](#sharpy.solvers.dynamiccoupled.DynamicCoupled.get_g) Getter for `g`, the gravity value `get_rho`()[[source]](_modules/sharpy/solvers/dynamiccoupled.html#DynamicCoupled.get_rho)[¶](#sharpy.solvers.dynamiccoupled.DynamicCoupled.get_rho) Getter for `rho`, the density value `initialise`(*data*, *custom_settings=None*)[[source]](_modules/sharpy/solvers/dynamiccoupled.html#DynamicCoupled.initialise)[¶](#sharpy.solvers.dynamiccoupled.DynamicCoupled.initialise) Controls the initialisation process of the solver, including processing the settings and initialising the aero and structural solvers, postprocessors and controllers. *static* `interpolate_timesteps`(*step0*, *step1*, *out_step*, *coeff*)[[source]](_modules/sharpy/solvers/dynamiccoupled.html#DynamicCoupled.interpolate_timesteps)[¶](#sharpy.solvers.dynamiccoupled.DynamicCoupled.interpolate_timesteps) Performs a linear interpolation between step0 and step1 based on coeff in [0, 1]. 0 means out_step == step0 and 1 means out_step == step1. Quantities interpolated: * steady_applied_forces * unsteady_applied_forces * velocity input in Lagrange constraints `process_controller_output`(*controlled_state*)[[source]](_modules/sharpy/solvers/dynamiccoupled.html#DynamicCoupled.process_controller_output)[¶](#sharpy.solvers.dynamiccoupled.DynamicCoupled.process_controller_output) This function modifies the solver properties and parameters as requested by the controller. 
This keeps the main loop much cleaner, while allowing for flexibility. Please, if you add options here, always code for the possibility of that specific option not being present, so the code does not complain to the user. If possible, use the same key for the new setting as for the setting in the solver. For example, if you want to modify the structural_substeps variable in settings, use that key in the info dictionary. As a convention: a value of None returns the value to the initial one specified in settings, while a key not present in the dict is ignored, so if any change was made before, it will stay there. `run`()[[source]](_modules/sharpy/solvers/dynamiccoupled.html#DynamicCoupled.run)[¶](#sharpy.solvers.dynamiccoupled.DynamicCoupled.run) Run the time stepping procedure with controllers and postprocessors included. `set_g`(*new_g*)[[source]](_modules/sharpy/solvers/dynamiccoupled.html#DynamicCoupled.set_g)[¶](#sharpy.solvers.dynamiccoupled.DynamicCoupled.set_g) Setter for `g`, the gravity value `set_rho`(*new_rho*)[[source]](_modules/sharpy/solvers/dynamiccoupled.html#DynamicCoupled.set_rho)[¶](#sharpy.solvers.dynamiccoupled.DynamicCoupled.set_rho) Setter for `rho`, the density value ##### LinDynamicSim[¶](#lindynamicsim) *class* `sharpy.solvers.lindynamicsim.``LinDynamicSim`[[source]](_modules/sharpy/solvers/lindynamicsim.html#LinDynamicSim)[¶](#sharpy.solvers.lindynamicsim.LinDynamicSim) Time-domain solution of Linear Time Invariant Systems The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `folder` | `str` | Output directory | `./output/` | | `write_dat` | `list(str)` | List of vectors to write: `x`, `y`, `u` and/or `t` | `[]` | | `reference_velocity` | `float` | Velocity to scale the structural equations when using a non-dimensional system | `1.0` | | `n_tsteps` | `int` | Number of time steps to run | `10` | | `physical_time` | `float` | Time 
to run | `2.0` | | `dt` | `float` | Time increment for the solution of systems without a specified dt | `0.001` | | `postprocessors` | `list(str)` | | `[]` | | `postprocessors_settings` | `dict` | | `{}` | ##### RigidDynamicPrescribedStep[¶](#rigiddynamicprescribedstep) *class* `sharpy.solvers.rigiddynamicprescribedstep.``RigidDynamicPrescribedStep`[[source]](_modules/sharpy/solvers/rigiddynamicprescribedstep.html#RigidDynamicPrescribedStep)[¶](#sharpy.solvers.rigiddynamicprescribedstep.RigidDynamicPrescribedStep) > The settings that this solver accepts are given by a dictionary, with the following key-value pairs: > | Name | Type | Description | Default | > | --- | --- | --- | --- | > | `dt` | `float` | Time step of simulation | `0.01` | > | `num_steps` | `int` | Number of timesteps to be run | `500` | ##### StaticCoupled[¶](#staticcoupled) *class* `sharpy.solvers.staticcoupled.``StaticCoupled`[[source]](_modules/sharpy/solvers/staticcoupled.html#StaticCoupled)[¶](#sharpy.solvers.staticcoupled.StaticCoupled) This class is the main FSI driver for static simulations. It requires a `structural_solver` and an `aero_solver` to be defined. 
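The FSI drivers iterate until the relative change of the structural state between subiterations drops below the tolerance setting (`tolerance` here, `fsi_tolerance` in `DynamicCoupled`). A minimal sketch of such a relative-change test, using a hypothetical helper function rather than SHARPy's own implementation:

```python
import math

def fsi_converged(q_k, q_km1, q_0, tolerance=1e-5):
    """Relative-change test: ||q^k - q^(k-1)|| / ||q^0|| < tolerance.

    Hypothetical helper for illustration; not the SHARPy implementation.
    """
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    eps = norm([a - b for a, b in zip(q_k, q_km1)]) / norm(q_0)
    return eps < tolerance

# Two successive FSI subiterates that barely differ relative to the initial state:
q0 = [1.0, 2.0, 3.0]
print(fsi_converged([1.0, 2.0, 3.0000001], [1.0, 2.0, 3.0], q0))  # True
print(fsi_converged([1.5, 2.0, 3.0], [1.0, 2.0, 3.0], q0))        # False
```

With relaxation active, the accepted state at each subiteration is a blend of the new and previous iterates, which slows the apparent change and stabilises the loop.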
The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `print_info` | `bool` | Write status to screen | `True` | | `structural_solver` | `str` | Structural solver to use in the coupled simulation | `None` | | `structural_solver_settings` | `dict` | Dictionary of settings for the structural solver | `None` | | `aero_solver` | `str` | Aerodynamic solver to use in the coupled simulation | `None` | | `aero_solver_settings` | `dict` | Dictionary of settings for the aerodynamic solver | `None` | | `max_iter` | `int` | Max iterations in the FSI loop | `100` | | `n_load_steps` | `int` | Length of ramp for forces and gravity during FSI iteration | `0` | | `tolerance` | `float` | Convergence threshold for the FSI loop | `1e-05` | | `relaxation_factor` | `float` | Relaxation parameter in the FSI iteration. 0 is no relaxation and -> 1 is very relaxed | `0.0` | ##### StaticCoupledRBM[¶](#staticcoupledrbm) *class* `sharpy.solvers.staticcoupledrbm.``StaticCoupledRBM`[[source]](_modules/sharpy/solvers/staticcoupledrbm.html#StaticCoupledRBM)[¶](#sharpy.solvers.staticcoupledrbm.StaticCoupledRBM) Steady coupled solver including rigid body motions The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `print_info` | `bool` | Output run-time information | `True` | | `structural_solver` | `str` | Name of the structural solver used in the computation | `None` | | `structural_solver_settings` | `dict` | Dictionary of settings needed by the structural solver | `None` | | `aero_solver` | `str` | Name of the aerodynamic solver used in the computation | `None` | | `aero_solver_settings` | `dict` | Dictionary of settings needed by the aerodynamic solver | `None` | | `max_iter` | `int` | Maximum number of FSI iterations | `100` | | `n_load_steps` | `int` | Number of 
steps to ramp up the application of loads | `1` | | `tolerance` | `float` | FSI tolerance | `1e-05` | | `relaxation_factor` | `float` | Relaxation factor | `0.0` | #### Flight dynamics Solvers[¶](#flight-dynamics-solvers) ##### StaticTrim[¶](#statictrim) *class* `sharpy.solvers.statictrim.``StaticTrim`[[source]](_modules/sharpy/solvers/statictrim.html#StaticTrim)[¶](#sharpy.solvers.statictrim.StaticTrim) The `StaticTrim` solver determines the longitudinal state of trim (equilibrium) for an aeroelastic system in static conditions. It wraps around the desired solver to yield the state of trim of the system, in most cases the [`StaticCoupled`](index.html#sharpy.solvers.staticcoupled.StaticCoupled) solver. It calculates the angle of attack, elevator deflection and thrust required to achieve longitudinal equilibrium. The output angles are shown in degrees. The results from the trimming iteration can be saved to a text file by using the save_info option. The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `print_info` | `bool` | Print info to screen | `True` | | `solver` | `str` | Solver to run in trim routine | | | `solver_settings` | `dict` | Solver settings dictionary | `{}` | | `max_iter` | `int` | Maximum number of iterations of trim routine | `100` | | `fz_tolerance` | `float` | Tolerance in vertical force | `0.01` | | `fx_tolerance` | `float` | Tolerance in horizontal force | `0.01` | | `m_tolerance` | `float` | Tolerance in pitching moment | `0.01` | | `tail_cs_index` | `int` | Index of control surfaces that move to achieve trim | `0` | | `thrust_nodes` | `list(int)` | Nodes at which thrust is applied | `[0]` | | `initial_alpha` | `float` | Initial angle of attack | `0.0` | | `initial_deflection` | `float` | Initial control surface deflection | `0.0` | | `initial_thrust` | `float` | Initial thrust setting | `0.0` | | `initial_angle_eps` | 
`float` | Initial change of control surface deflection | `0.05` | | `initial_thrust_eps` | `float` | Initial thrust setting change | `2.0` | | `relaxation_factor` | `float` | Relaxation factor | `0.2` | | `save_info` | `bool` | Save trim results to text file | `False` | | `folder` | `str` | Output location for trim results | `./output/` | `trim_algorithm`()[[source]](_modules/sharpy/solvers/statictrim.html#StaticTrim.trim_algorithm)[¶](#sharpy.solvers.statictrim.StaticTrim.trim_algorithm) Trim algorithm method The trim condition is found iteratively. | Returns: | array of trim values for angle of attack, control surface deflection and thrust. | | Return type: | np.array | ##### Trim[¶](#trim) *class* `sharpy.solvers.trim.``Trim`[[source]](_modules/sharpy/solvers/trim.html#Trim)[¶](#sharpy.solvers.trim.Trim) Trim routine with support for lateral dynamics. It usually struggles much more than the `StaticTrim` (longitudinal only) solver. We advise starting with `StaticTrim` even if your configuration is not totally symmetric. The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `print_info` | `bool` | Print info to screen | `True` | | `solver` | `str` | Solver to run in trim routine | | | `solver_settings` | `dict` | Solver settings dictionary | `{}` | | `max_iter` | `int` | Maximum number of iterations of trim routine | `100` | | `tolerance` | `float` | Threshold for convergence of trim | `0.0001` | | `initial_alpha` | `float` | Initial angle of attack | `0.0` | | `initial_beta` | `float` | Initial sideslip angle | `0.0` | | `initial_roll` | `float` | Initial roll angle | `0` | | `cs_indices` | `list(int)` | Indices of control surfaces to be trimmed | `[]` | | `initial_cs_deflection` | `list(float)` | Initial deflection of the control surfaces in order. 
| `[]` | | `thrust_nodes` | `list(int)` | Nodes at which thrust is applied | `[0]` | | `initial_thrust` | `list(float)` | Initial thrust setting | `[1.0]` | | `thrust_direction` | `list(float)` | Thrust direction setting | `[0.0, 1.0, 0.0]` | | `special_case` | `dict` | Extra settings for specific cases such as differential thrust control | `{}` | | `refine_solution` | `bool` | If `True` and the optimiser routine allows for it, the optimiser will try to improve the solution with hybrid methods | `False` | #### Linear Solvers[¶](#linear-solvers) ##### LinearAssembler[¶](#linearassembler) *class* `sharpy.solvers.linearassembler.``LinearAssembler`[[source]](_modules/sharpy/solvers/linearassembler.html#LinearAssembler)[¶](#sharpy.solvers.linearassembler.LinearAssembler) Warning Under development - please advise of new features and bugs! Creates a workspace containing the different linear elements of the state-space. The user specifies which elements to build sequentially via the `linear_system` setting. The most common uses will be: > * Aerodynamic: `sharpy.linear.assembler.LinearUVLM` solver > * Structural: `sharpy.linear.assembler.LinearBeam` solver > * Aeroelastic: `sharpy.linear.assembler.LinearAeroelastic` solver The solver allows loading a user-specific state-space assembly by means of the `LinearCustom` block. See `sharpy.linear.assembler.LinearAssembler` for a detailed description of each of the state-space assemblies. Upon assembly of the linear system, the data structure `data.linear` will be created. The `Linear` object contains the state-space as an attribute. This state space will be the one employed by postprocessors. Important: running the linear routines requires information on the tangent mass, stiffness and gyroscopic structural matrices; therefore, the solver `solvers.modal.Modal` must have been run prior to linearisation. 
In addition, if the problem includes rigid body velocities, at least one timestep of `solvers.DynamicCoupled` must have run such that the rigid body velocity is included. Example: The typical `flow` setting used prior to using this solver for an aeroelastic simulation with rigid body dynamics will be similar to: ``` >>> flow = ['BeamLoader', >>> 'AerogridLoader', >>> 'StaticTrim', >>> 'DynamicCoupled', # a single time step will suffice >>> 'Modal', >>> 'LinearAssembler'] ``` The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `linear_system` | `str` | Name of chosen state space assembly type | `None` | | `linear_system_settings` | `dict` | Settings for the desired state space assembler | `{}` | | `linearisation_tstep` | `int` | Chosen linearisation time step number from available time steps | `-1` | ##### Modal[¶](#modal) *class* `sharpy.solvers.modal.``Modal`[[source]](_modules/sharpy/solvers/modal.html#Modal)[¶](#sharpy.solvers.modal.Modal) `Modal` solver class, inherited from `BaseSolver` Extracts the `M`, `K` and `C` matrices from the `Fortran` library for the beam. Depending on the choice of modal projection, these may or may not be transformed to a state-space form to compute the eigenvalues and mode shapes of the structure. 
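When damping is retained, each complex eigenvalue \(\lambda\) of the state-space system encodes the natural frequency \(\omega_n = |\lambda|\), the damped frequency \(\omega_d = \text{Im}(\lambda)\) and the damping ratio \(\zeta = -\text{Re}(\lambda)/\omega_n\). These relations can be sketched on a single-degree-of-freedom oscillator with illustrative values (not a SHARPy computation):

```python
import cmath

# 1-DOF oscillator m*x'' + c*x' + k*x = 0; its state-space eigenvalues are
# the roots of the characteristic polynomial m*s^2 + c*s + k = 0.
m, c, k = 1.0, 0.4, 4.0
lam = (-c + cmath.sqrt(c * c - 4.0 * m * k)) / (2.0 * m)  # upper complex root

omega_n = abs(lam)            # natural frequency:  |lambda|
zeta = -lam.real / omega_n    # damping ratio:     -Re(lambda)/omega_n
omega_d = lam.imag            # damped frequency:   Im(lambda)

print(round(omega_n, 6))      # 2.0  (= sqrt(k/m))
print(round(zeta, 6))         # 0.1  (= c / (2*sqrt(k*m)))
```

The same extraction applies eigenvalue by eigenvalue to the full state-space matrix assembled from `M`, `C` and `K`.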
The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `print_info` | `bool` | Write status to screen | `True` | | `folder` | `str` | Output folder | `./output` | | `rigid_body_modes` | `bool` | Write modes with rigid body mode shapes | `False` | | `use_undamped_modes` | `bool` | Project the modes onto undamped mode shapes | `True` | | `NumLambda` | `int` | Number of modes to retain | `20` | | `keep_linear_matrices` | `bool` | Save M, C and K matrices to output dictionary | `True` | | `write_modes_vtk` | `bool` | Write Paraview files with mode shapes | `True` | | `print_matrices` | `bool` | Write M, C and K matrices to file | `False` | | `write_dat` | `bool` | Write mode shapes, frequencies and damping to file | `True` | | `continuous_eigenvalues` | `bool` | Use continuous time eigenvalues | `False` | | `dt` | `float` | Time step to compute discrete time eigenvalues | `0` | | `delta_curved` | `float` | Threshold for linear expressions in rotation formulas | `0.01` | | `plot_eigenvalues` | `bool` | Plot to screen root locus diagram | `False` | | `max_rotation_deg` | `float` | Scale mode shape to have specified maximum rotation | `15.0` | | `max_displacement` | `float` | Scale mode shape to have specified maximum displacement | `0.15` | | `use_custom_timestep` | `int` | If > -1, it will use that time step geometry for calculating the modes | `-1` | | `rigid_modes_cg` | `bool` | Modify the rigid body modes such that they are defined with respect to the CG | `False` | `free_free_modes`(*phi*, *M*)[[source]](_modules/sharpy/solvers/modal.html#Modal.free_free_modes)[¶](#sharpy.solvers.modal.Modal.free_free_modes) Returns the rigid body modes defined with respect to the centre of gravity The transformation from the modes defined at the FoR A origin, \(\boldsymbol{\Phi}\), to the modes defined using the centre of gravity as a reference is 
\[\boldsymbol{\Phi}_{rr,CG}|_{TRA} = \boldsymbol{\Phi}_{RR}|_{TRA} + \tilde{\mathbf{r}}_{CG} \boldsymbol{\Phi}_{RR}|_{ROT}\] \[\boldsymbol{\Phi}_{rr,CG}|_{ROT} = \boldsymbol{\Phi}_{RR}|_{ROT}\] | Returns: | Transformed eigenvectors | | Return type: | (np.array) | `run`()[[source]](_modules/sharpy/solvers/modal.html#Modal.run)[¶](#sharpy.solvers.modal.Modal.run) Extracts the eigenvalues and eigenvectors of the clamped structure. If `use_undamped_modes == True` then the free vibration modes of the clamped structure are found by solving: > \[\mathbf{M\,\ddot{\eta}} + \mathbf{K\,\eta} = 0\] which reduces to finding the non-trivial solutions of: > \[(-\omega_n^2\,\mathbf{M} + \mathbf{K})\mathbf{\Phi} = 0\] On the other hand, if the damped modes are chosen because the system has damping, the free vibration modes are found by solving the equation of motion of the form: > \[\mathbf{M\,\ddot{\eta}} + \mathbf{C\,\dot{\eta}} + \mathbf{K\,\eta} = 0\] which can be written in state space form, with the state vector \(\mathbf{x} = [\eta^T,\,\dot{\eta}^T]^T\) as > \[\begin{split}\mathbf{\dot{x}} = \begin{bmatrix} 0 & \mathbf{I} \\ -\mathbf{M^{-1}K} & -\mathbf{M^{-1}C} > \end{bmatrix} \mathbf{x}\end{split}\] and therefore the mode shapes and frequencies correspond to the solution of the eigenvalue problem > \[\mathbf{A\,\Phi} = \mathbf{\Lambda\,\Phi}.\] From the eigenvalues, the following system characteristics are provided: > * Natural Frequency: \(\omega_n = |\lambda|\) > * Damped natural frequency: \(\omega_d = \text{Im}(\lambda) = \omega_n \sqrt{1-\zeta^2}\) > * Damping ratio: \(\zeta = -\frac{\text{Re}(\lambda)}{\omega_n}\) In addition to the above, the modal output dictionary includes the following: > * `M`: Tangent mass matrix > * `C`: Tangent damping matrix > * `K`: Tangent stiffness matrix > * `Ccut`: Modal damping matrix \(\mathbf{C}_m = \mathbf{\Phi}^T\mathbf{C}\mathbf{\Phi}\) > * `Kin_damp`: Forces gain matrix (when damped): \(K_{in} = \mathbf{\Phi}_L^T \mathbf{M}^{-1}\) > 
* `eigenvectors`: Right eigenvectors > * `eigenvectors_left`: Left eigenvectors given when the system is damped | Returns: | updated data object with modal analysis as part of the last structural time step. | | Return type: | [PreSharpy](index.html#sharpy.presharpy.presharpy.PreSharpy) | #### Loader Solvers[¶](#loader-solvers) ##### PreSharpy[¶](#presharpy) *class* `sharpy.presharpy.presharpy.``PreSharpy`(*in_settings=None*)[[source]](_modules/sharpy/presharpy/presharpy.html#PreSharpy)[¶](#sharpy.presharpy.presharpy.PreSharpy) The PreSharpy solver is the main loader solver of SHARPy. It takes the admin-like settings for the simulation, including the case name, case route and the list of solvers to run and in which order to run them. This order of solvers is referred to, throughout SHARPy, as the `flow` setting. This solver is mandatory for all simulations and runs at the start, so it is never included in the `flow` setting. The settings for this solver are parsed from the configuration file under the header `SHARPy`. I.e., when you are defining the config file for a simulation, the settings for PreSharpy are included as: ``` import configobj filename = '<case_route>/<case_name>.sharpy' config = configobj.ConfigObj() config.filename = filename config['SHARPy'] = {'case': '<your SHARPy case name>', # an example setting # Rest of your settings for the PreSHARPy class } ``` The following are the settings that the PreSharpy class takes: | Name | Type | Description | Default | | --- | --- | --- | --- | | `flow` | `list(str)` | List of the desired solvers’ `solver_id` to run in sequential order. | `None` | | `case` | `str` | Case name | `default_case_name` | | `route` | `str` | Route to case files | `None` | | `write_screen` | `bool` | Display output on terminal screen. 
| `True` | | `write_log` | `bool` | Write log file | `False` | | `log_folder` | `str` | Log folder destination directory | | ##### AerogridLoader[¶](#aerogridloader) *class* `sharpy.solvers.aerogridloader.``AerogridLoader`[[source]](_modules/sharpy/solvers/aerogridloader.html#AerogridLoader)[¶](#sharpy.solvers.aerogridloader.AerogridLoader) `AerogridLoader` class, inherited from `BaseSolver` Generates aerodynamic grid based on the input data | Parameters: | **data** ([*PreSharpy*](index.html#sharpy.presharpy.presharpy.PreSharpy)) – `ProblemData` class structure | `settings`[¶](#sharpy.solvers.aerogridloader.AerogridLoader.settings) Name-value pair of the settings employed by the aerodynamic solver | Type: | dict | `settings_types`[¶](#sharpy.solvers.aerogridloader.AerogridLoader.settings_types) Acceptable types for the values in `settings` | Type: | dict | `settings_default`[¶](#sharpy.solvers.aerogridloader.AerogridLoader.settings_default) Name-value pair of default values for the aerodynamic settings | Type: | dict | `data`[¶](#sharpy.solvers.aerogridloader.AerogridLoader.data) class structure | Type: | ProblemData | `aero_file_name`[¶](#sharpy.solvers.aerogridloader.AerogridLoader.aero_file_name) name of the `.aero.h5` HDF5 file | Type: | str | `aero`[¶](#sharpy.solvers.aerogridloader.AerogridLoader.aero) empty attribute `aero_data_dict`[¶](#sharpy.solvers.aerogridloader.AerogridLoader.aero_data_dict) key-value pairs of aerodynamic data | Type: | dict | Notes The `control_surface_deflection` setting allows the user to use a time-specific control surface deflection, should the problem include them. This setting takes a list of strings, each for the required control surface generator. The `control_surface_deflection_generator_settings` setting is a list of dictionaries, one for each control surface. The dictionaries specify the settings for the generator `DynamicControlSurface`. If the relevant control surface is simply static, an empty string should be passed. 
See the documentation for `DynamicControlSurface` generators for accepted key-value pairs as settings. The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `unsteady` | `bool` | Unsteady effects | `False` | | `aligned_grid` | `bool` | Align grid | `True` | | `freestream_dir` | `list(float)` | Free stream flow direction | `[1.0, 0.0, 0.0]` | | `mstar` | `int` | Number of chordwise wake panels | `10` | | `control_surface_deflection` | `list(str)` | List of control surface generators for each control surface | `[]` | | `control_surface_deflection_generator_settings` | `list(dict)` | List of dictionaries with the settings for each generator | `[]` | ##### BeamLoader[¶](#beamloader) *class* `sharpy.solvers.beamloader.``BeamLoader`[[source]](_modules/sharpy/solvers/beamloader.html#BeamLoader)[¶](#sharpy.solvers.beamloader.BeamLoader) `BeamLoader` class solver inherited from `BaseSolver` Loads the structural beam solver with the specified user settings. | Parameters: | **data** (*ProblemData*) – class containing the problem information | `settings`[¶](#sharpy.solvers.beamloader.BeamLoader.settings) contains the specific settings for the solver | Type: | dict | `settings_types`[¶](#sharpy.solvers.beamloader.BeamLoader.settings_types) Key value pairs of the accepted types for the settings values | Type: | dict | `settings_default`[¶](#sharpy.solvers.beamloader.BeamLoader.settings_default) Dictionary containing the default solver settings, should none be provided. 
| Type: | dict | `data`[¶](#sharpy.solvers.beamloader.BeamLoader.data) class containing the data for the problem | Type: | ProblemData | `fem_file_name`[¶](#sharpy.solvers.beamloader.BeamLoader.fem_file_name) name of the `.fem.h5` HDF5 file | Type: | str | `dyn_file_name`[¶](#sharpy.solvers.beamloader.BeamLoader.dyn_file_name) name of the `.dyn.h5` HDF5 file | Type: | str | `fem_data_dict`[¶](#sharpy.solvers.beamloader.BeamLoader.fem_data_dict) key-value pairs of FEM data | Type: | dict | `dyn_data_dict`[¶](#sharpy.solvers.beamloader.BeamLoader.dyn_data_dict) key-value pairs of data for dynamic problems | Type: | dict | `structure`[¶](#sharpy.solvers.beamloader.BeamLoader.structure) Empty attribute | Type: | None | Notes For further reference on Quaternions see: <https://en.wikipedia.org/wiki/Quaternion> See also *class* `sharpy.utils.solver_interface.``BaseSolver`[¶](#sharpy.solvers.beamloader.BeamLoader.sharpy.utils.solver_interface.BaseSolver) *class* `sharpy.structure.models.beam.``Beam`[¶](#sharpy.solvers.beamloader.BeamLoader.sharpy.structure.models.beam.Beam) The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `unsteady` | `bool` | If `True` it will be a dynamic problem. The default is usually good for all simulations | `True` | | `orientation` | `list(float)` | Initial attitude of the structure given as the quaternion that parametrises the rotation from G to A frames of reference. | `[1.0, 0, 0, 0]` | #### Structural Solvers[¶](#structural-solvers) ##### NonLinearDynamic[¶](#nonlineardynamic) *class* `sharpy.solvers.nonlineardynamic.``NonLinearDynamic`[[source]](_modules/sharpy/solvers/nonlineardynamic.html#NonLinearDynamic)[¶](#sharpy.solvers.nonlineardynamic.NonLinearDynamic) Structural solver used for the dynamic simulation of free-flying structures. 
This solver provides an interface to the structural library (`xbeam`) and updates the structural parameters for every time step of the simulation. This solver is called as part of a standalone structural simulation. The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `print_info` | `bool` | Print output to screen | `True` | | `max_iterations` | `int` | Sets maximum number of iterations | `100` | | `num_load_steps` | `int` | | `1` | | `delta_curved` | `float` | | `0.01` | | `min_delta` | `float` | Structural solver tolerance | `1e-05` | | `newmark_damp` | `float` | Sets the Newmark damping coefficient | `0.0001` | | `gravity_on` | `bool` | Flag to include gravitational forces | `False` | | `gravity` | `float` | Gravitational acceleration | `9.81` | | `relaxation_factor` | `float` | | `0.3` | | `dt` | `float` | Time step increment | `0.01` | | `num_steps` | `int` | | `500` | | `prescribed_motion` | `bool` | | `None` | | `gravity_dir` | `list(float)` | | `[0, 0, 1]` | ##### NonLinearDynamicCoupledStep[¶](#nonlineardynamiccoupledstep) *class* `sharpy.solvers.nonlineardynamiccoupledstep.``NonLinearDynamicCoupledStep`[[source]](_modules/sharpy/solvers/nonlineardynamiccoupledstep.html#NonLinearDynamicCoupledStep)[¶](#sharpy.solvers.nonlineardynamiccoupledstep.NonLinearDynamicCoupledStep) Structural solver used for the dynamic simulation of free-flying structures. This solver provides an interface to the structural library (`xbeam`) and updates the structural parameters for every k-th step in the FSI iteration. This solver can be called as part of a standalone structural simulation or as the structural solver of a coupled aeroelastic simulation. 
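The `newmark_damp` setting that appears in these structural solvers controls the numerical damping of the Newmark-β time integrator. The scheme can be sketched for a linear 1-DOF system in its standard textbook form; `newmark_step` is a hypothetical helper for illustration, not SHARPy's xbeam implementation:

```python
def newmark_step(x, v, a, f_next, m, c, k, dt, beta=0.25, gamma=0.5):
    """One Newmark-beta step for m*x'' + c*x' + k*x = f (linear 1-DOF)."""
    # effective stiffness and effective force at t + dt
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt * dt)
    f_eff = (f_next
             + m * (x / (beta * dt * dt) + v / (beta * dt)
                    + (1.0 / (2.0 * beta) - 1.0) * a)
             + c * (gamma / (beta * dt) * x + (gamma / beta - 1.0) * v
                    + dt * (gamma / (2.0 * beta) - 1.0) * a))
    x_new = f_eff / k_eff
    a_new = ((x_new - x) / (beta * dt * dt) - v / (beta * dt)
             - (1.0 / (2.0 * beta) - 1.0) * a)
    v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
    return x_new, v_new, a_new

# Free vibration of an undamped oscillator with k/m = 4 (so a0 = -k*x0/m):
x, v, a = 1.0, 0.0, -4.0
for _ in range(1000):
    x, v, a = newmark_step(x, v, a, 0.0, m=1.0, c=0.0, k=4.0, dt=0.01)
print(abs(x) <= 1.0 + 1e-6)   # True: average acceleration is unconditionally stable
```

With γ = 1/2 the scheme adds no numerical damping; `newmark_damp` biases γ slightly above 1/2 to damp spurious high-frequency content.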
The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `print_info` | `bool` | Print output to screen | `True` | | `max_iterations` | `int` | Sets maximum number of iterations | `100` | | `num_load_steps` | `int` | | `1` | | `delta_curved` | `float` | | `0.01` | | `min_delta` | `float` | Structural solver tolerance | `1e-05` | | `newmark_damp` | `float` | Sets the Newmark damping coefficient | `0.0001` | | `gravity_on` | `bool` | Flag to include gravitational forces | `False` | | `gravity` | `float` | Gravitational acceleration | `9.81` | | `relaxation_factor` | `float` | | `0.3` | | `dt` | `float` | Time step increment | `0.01` | | `num_steps` | `int` | | `500` | | `balancing` | `bool` | | `False` | | `initial_velocity_direction` | `list(float)` | Initial velocity direction of the reference node given in the inertial FOR | `[-1.0, 0.0, 0.0]` | | `initial_velocity` | `float` | Initial velocity magnitude of the reference node | `0` | ##### NonLinearDynamicMultibody[¶](#nonlineardynamicmultibody) *class* `sharpy.solvers.nonlineardynamicmultibody.``NonLinearDynamicMultibody`[[source]](_modules/sharpy/solvers/nonlineardynamicmultibody.html#NonLinearDynamicMultibody)[¶](#sharpy.solvers.nonlineardynamicmultibody.NonLinearDynamicMultibody) Nonlinear dynamic multibody Nonlinear dynamic step solver for multibody structures. 
The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `print_info` | `bool` | Print output to screen | `True` | | `max_iterations` | `int` | Sets maximum number of iterations | `100` | | `num_load_steps` | `int` | | `1` | | `delta_curved` | `float` | | `0.01` | | `min_delta` | `float` | Structural solver tolerance | `1e-05` | | `newmark_damp` | `float` | Sets the Newmark damping coefficient | `0.0001` | | `gravity_on` | `bool` | Flag to include gravitational forces | `False` | | `gravity` | `float` | Gravitational acceleration | `9.81` | | `relaxation_factor` | `float` | | `0.3` | | `dt` | `float` | Time step increment | `0.01` | | `num_steps` | `int` | | `500` | ##### NonLinearDynamicPrescribedStep[¶](#nonlineardynamicprescribedstep) *class* `sharpy.solvers.nonlineardynamicprescribedstep.``NonLinearDynamicPrescribedStep`[[source]](_modules/sharpy/solvers/nonlineardynamicprescribedstep.html#NonLinearDynamicPrescribedStep)[¶](#sharpy.solvers.nonlineardynamicprescribedstep.NonLinearDynamicPrescribedStep) Structural solver used for the dynamic simulation of clamped structures or those subject to a prescribed motion. This solver provides an interface to the structural library (`xbeam`) and updates the structural parameters for every k-th step in the FSI iteration. This solver can be called as part of a standalone structural simulation or as the structural solver of a coupled aeroelastic simulation. 
The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `print_info` | `bool` | Print output to screen | `True` | | `max_iterations` | `int` | Sets maximum number of iterations | `100` | | `num_load_steps` | `int` | | `1` | | `delta_curved` | `float` | | `0.01` | | `min_delta` | `float` | Structural solver tolerance | `1e-05` | | `newmark_damp` | `float` | Sets the Newmark damping coefficient | `0.0001` | | `gravity_on` | `bool` | Flag to include gravitational forces | `False` | | `gravity` | `float` | Gravitational acceleration | `9.81` | | `relaxation_factor` | `float` | | `0.3` | | `dt` | `float` | Time step increment | `0.01` | | `num_steps` | `int` | | `500` | | `initial_position` | `list(float)` | | *(default not rendered in docs)* | ##### NonLinearStaticMultibody[¶](#nonlinearstaticmultibody) *class* `sharpy.solvers.nonlinearstaticmultibody.``NonLinearStaticMultibody`[[source]](_modules/sharpy/solvers/nonlinearstaticmultibody.html#NonLinearStaticMultibody)[¶](#sharpy.solvers.nonlinearstaticmultibody.NonLinearStaticMultibody) Nonlinear static multibody Nonlinear static solver for multibody structures.
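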
The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `print_info` | `bool` | Print output to screen | `True` | | `max_iterations` | `int` | Sets maximum number of iterations | `100` | | `num_load_steps` | `int` | | `1` | | `delta_curved` | `float` | | `0.01` | | `min_delta` | `float` | Structural solver tolerance | `1e-05` | | `newmark_damp` | `float` | Sets the Newmark damping coefficient | `0.0001` | | `gravity_on` | `bool` | Flag to include gravitational forces | `False` | | `gravity` | `float` | Gravitational acceleration | `9.81` | | `relaxation_factor` | `float` | | `0.3` | | `dt` | `float` | Time step increment | `0.01` | | `num_steps` | `int` | | `500` | | `initial_position` | `list(float)` | | `<sphinx.ext.autodoc.importer._MockObject object at 0x7fa06fdf3860>` | ##### NonLinearStaticMultibody[¶](#nonlinearstaticmultibody) *class* `sharpy.solvers.nonlinearstaticmultibody.``NonLinearStaticMultibody`[[source]](_modules/sharpy/solvers/nonlinearstaticmultibody.html#NonLinearStaticMultibody)[¶](#sharpy.solvers.nonlinearstaticmultibody.NonLinearStaticMultibody) Nonlinear static multibody Nonlinear static solver for multibody structures. 
The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `print_info` | `bool` | Print output to screen | `True` | | `max_iterations` | `int` | Sets maximum number of iterations | `100` | | `num_load_steps` | `int` | | `1` | | `delta_curved` | `float` | | `0.01` | | `min_delta` | `float` | Structural solver tolerance | `1e-05` | | `newmark_damp` | `float` | Sets the Newmark damping coefficient | `0.0001` | | `gravity_on` | `bool` | Flag to include gravitational forces | `False` | | `gravity` | `float` | Gravitational acceleration | `9.81` | | `relaxation_factor` | `float` | | `0.3` | | `dt` | `float` | Time step increment | `0.01` | | `num_steps` | `int` | | `500` | ### Post-Processing[¶](#post-processing) #### AeroForcesCalculator[¶](#aeroforcescalculator) *class* `sharpy.postproc.aeroforcescalculator.``AeroForcesCalculator`[[source]](_modules/sharpy/postproc/aeroforcescalculator.html#AeroForcesCalculator)[¶](#sharpy.postproc.aeroforcescalculator.AeroForcesCalculator) Calculates the total aerodynamic forces on the frame of reference `A`. 
The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `folder` | `str` | Output folder location | `./output` | | `write_text_file` | `bool` | Write `txt` file with results | `False` | | `text_file_name` | `str` | Text file name | | | `screen_output` | `bool` | Show results on screen | `True` | | `unsteady` | `bool` | Include unsteady contributions | `False` | | `coefficients` | `bool` | Calculate aerodynamic coefficients | `False` | | `q_ref` | `float` | Reference dynamic pressure | `1` | | `S_ref` | `float` | Reference area | `1` | #### AerogridPlot[¶](#aerogridplot) *class* `sharpy.postproc.aerogridplot.``AerogridPlot`[[source]](_modules/sharpy/postproc/aerogridplot.html#AerogridPlot)[¶](#sharpy.postproc.aerogridplot.AerogridPlot) Aerodynamic Grid Plotter The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `folder` | `str` | Output folder | `./output` | | `include_rbm` | `bool` | | `True` | | `include_forward_motion` | `bool` | | `False` | | `include_applied_forces` | `bool` | | `True` | | `include_unsteady_applied_forces` | `bool` | | `False` | | `minus_m_star` | `int` | | `0` | | `name_prefix` | `str` | Prefix to add to file name | | | `u_inf` | `float` | | `0.0` | | `dt` | `float` | | `0.0` | | `include_velocities` | `bool` | | `False` | | `num_cores` | `int` | | `1` | #### AsymptoticStability[¶](#asymptoticstability) *class* `sharpy.postproc.asymptoticstability.``AsymptoticStability`[[source]](_modules/sharpy/postproc/asymptoticstability.html#AsymptoticStability)[¶](#sharpy.postproc.asymptoticstability.AsymptoticStability) Calculates the asymptotic stability properties of the linearised aeroelastic system by computing the corresponding eigenvalues. 
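The stability criterion this postprocessor applies can be illustrated outside SHARPy: a linearised system `dx/dt = A x` is asymptotically stable when every eigenvalue of the state matrix has a negative real part. A minimal NumPy sketch with a made-up 2x2 state matrix (not a SHARPy system):

```python
import numpy as np

# Toy continuous-time state matrix (illustrative, not a SHARPy system).
A = np.array([[-0.5, 2.0],
              [-2.0, -0.5]])

# Eigenvalues of this matrix are -0.5 +/- 2j.
eigenvalues = np.linalg.eigvals(A)

# Asymptotic stability: all eigenvalues strictly in the left half-plane.
is_stable = bool(np.all(eigenvalues.real < 0))
print(is_stable)
```

The solver performs the same check on the (much larger) linearised aeroelastic state matrix, with the additional options for scaling, truncation and iterative eigenvalue computation listed above.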
To use an iterative eigenvalue solver, the setting `iterative_eigvals` should be set to `on`. This will be beneficial when dealing with very large systems. However, the direct method is preferred and more efficient when the system is of a relatively small size (typically around 5000 states). Warning The setting `modes_to_plot` to plot the eigenvectors in Paraview is currently under development. The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `folder` | `str` | Output folder | `./output` | | `print_info` | `bool` | Print information and table of eigenvalues | `False` | | `reference_velocity` | `float` | Reference velocity at which to compute eigenvalues for scaled systems | `1.0` | | `frequency_cutoff` | `float` | Truncate higher frequency modes. If zero, none are truncated | `0` | | `export_eigenvalues` | `bool` | Save eigenvalues and eigenvectors to file. | `False` | | `display_root_locus` | `bool` | Show plot with eigenvalues on Argand diagram | `False` | | `velocity_analysis` | `list(float)` | List containing min, max and number of velocities to analyse the system | `[]` | | `iterative_eigvals` | `bool` | Calculate the first `num_evals` using an iterative solver. | `False` | | `num_evals` | `int` | Number of eigenvalues to retain. | `200` | | `modes_to_plot` | `list(int)` | List of mode numbers to simulate and plot | `[]` | | `postprocessors` | `list(str)` | To be used with `modes_to_plot`. Under development. | `[]` | | `postprocessors_settings` | `dict` | To be used with `modes_to_plot`. Under development. | `{}` | `display_root_locus`()[[source]](_modules/sharpy/postproc/asymptoticstability.html#AsymptoticStability.display_root_locus)[¶](#sharpy.postproc.asymptoticstability.AsymptoticStability.display_root_locus) Displays root locus diagrams. Returns the `fig` and `ax` handles for further editing.
| Returns: | ax: | | Return type: | fig | `export_eigenvalues`(*num_evals*)[[source]](_modules/sharpy/postproc/asymptoticstability.html#AsymptoticStability.export_eigenvalues)[¶](#sharpy.postproc.asymptoticstability.AsymptoticStability.export_eigenvalues) Saves the first `num_evals` eigenvalues and eigenvectors to file. The files are saved in the output directory and include: > * `eigenvectors.dat`: `(num_dof, num_evals)` array of eigenvectors > * `eigenvalues_r.dat`: `(num_evals, 1)` array of the real part of the eigenvalues > * `eigenvalues_i.dat`: `(num_evals, 1)` array of the imaginary part of the eigenvalues. The units of the eigenvalues are `rad/s` References Loading and saving complex arrays: <https://stackoverflow.com/questions/6494102/how-to-save-and-load-an-array-of-complex-numbers-using-numpy-savetxt/6522396> | Parameters: | **num_evals** – Number of eigenvalues to save | `mode_time_domain`(*fact*, *fact_rbm*, *mode_num*, *cycles=2*)[[source]](_modules/sharpy/postproc/asymptoticstability.html#AsymptoticStability.mode_time_domain)[¶](#sharpy.postproc.asymptoticstability.AsymptoticStability.mode_time_domain) Returns a single, scaled mode shape in time domain. | Parameters: | * **fact** – Structural deformation scaling * **fact_rbm** – Rigid body motion scaling * **mode_num** – Number of mode to plot * **cycles** – Number of periods/cycles to plot | | Returns: | Time domain array and scaled eigenvector in time.
| | Return type: | tuple | `plot_modes`()[[source]](_modules/sharpy/postproc/asymptoticstability.html#AsymptoticStability.plot_modes)[¶](#sharpy.postproc.asymptoticstability.AsymptoticStability.plot_modes) Warning Under development Plot the aeroelastic mode shapes for the first `n_modes_to_plot` `print_eigenvalues`()[[source]](_modules/sharpy/postproc/asymptoticstability.html#AsymptoticStability.print_eigenvalues)[¶](#sharpy.postproc.asymptoticstability.AsymptoticStability.print_eigenvalues) Prints the eigenvalues to a table with the corresponding natural frequency, period and damping ratios `run`()[[source]](_modules/sharpy/postproc/asymptoticstability.html#AsymptoticStability.run)[¶](#sharpy.postproc.asymptoticstability.AsymptoticStability.run) Computes the eigenvalues and eigenvectors | Returns: | Eigenvalues sorted and frequency truncated eigenvectors (np.ndarray): Corresponding mode shapes | | Return type: | eigenvalues (np.ndarray) | *static* `sort_eigenvalues`(*eigenvalues*, *eigenvectors*, *frequency_cutoff=0*)[[source]](_modules/sharpy/postproc/asymptoticstability.html#AsymptoticStability.sort_eigenvalues)[¶](#sharpy.postproc.asymptoticstability.AsymptoticStability.sort_eigenvalues) Sort continuous-time eigenvalues by order of magnitude. The conjugate of complex eigenvalues is removed, then if specified, high frequency modes are truncated. Finally, the eigenvalues are sorted by largest to smallest real part. 
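The three steps described for `sort_eigenvalues` can be sketched as follows; this is an illustrative re-implementation for a 1-D array of eigenvalues, not the library routine itself:

```python
import numpy as np

def sort_eigenvalues_sketch(eigenvalues, frequency_cutoff=0.0):
    """Illustrative sketch of the sorting described above (not SHARPy's code)."""
    # Keep one of each complex-conjugate pair (non-negative imaginary part).
    evals = eigenvalues[eigenvalues.imag >= 0]
    # Truncate high-frequency modes if a cutoff is given.
    if frequency_cutoff > 0:
        evals = evals[np.abs(evals.imag) <= frequency_cutoff]
    # Sort from largest to smallest real part.
    return evals[np.argsort(-evals.real)]

evals = np.array([-1 + 5j, -1 - 5j, 0.2 + 1j, 0.2 - 1j, -3 + 0j])
print(sort_eigenvalues_sketch(evals, frequency_cutoff=2.0))
```

With the cutoff at 2 rad/s the high-frequency pair at `-1 +/- 5j` is dropped, and the least stable mode appears first.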
| Parameters: | * **eigenvalues** (*np.ndarray*) – Continuous-time eigenvalues * **eigenvectors** (*np.ndarray*) – Corresponding right eigenvectors * **frequency_cutoff** (*float*) – Cutoff frequency for truncation `[rad/s]` | Returns: #### BeamLoads[¶](#beamloads) *class* `sharpy.postproc.beamloads.``BeamLoads`[[source]](_modules/sharpy/postproc/beamloads.html#BeamLoads)[¶](#sharpy.postproc.beamloads.BeamLoads) Writes to file the total loads acting on the beam elements The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `csv_output` | `bool` | Write `csv` file with results | `False` | | `output_file_name` | `str` | Output file name | `beam_loads` | | `folder` | `str` | Output folder path | `./output` | #### BeamPlot[¶](#beamplot) *class* `sharpy.postproc.beamplot.``BeamPlot`[[source]](_modules/sharpy/postproc/beamplot.html#BeamPlot)[¶](#sharpy.postproc.beamplot.BeamPlot) Plots beam to Paraview format The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `folder` | `str` | Output folder path | `./output` | | `include_rbm` | `bool` | Include frame of reference rigid body motion | `True` | | `include_FoR` | `bool` | Include frame of reference variables | `False` | | `include_applied_forces` | `bool` | Write beam applied forces | `True` | | `include_applied_moments` | `bool` | Write beam applied moments | `True` | | `name_prefix` | `str` | Name prefix for files | | | `output_rbm` | `bool` | Write `csv` file with rigid body motion data | `True` | #### FrequencyResponse[¶](#frequencyresponse) *class* `sharpy.postproc.frequencyresponse.``FrequencyResponse`[[source]](_modules/sharpy/postproc/frequencyresponse.html#FrequencyResponse)[¶](#sharpy.postproc.frequencyresponse.FrequencyResponse) Frequency Response Calculator Computes the frequency 
response of a linear system. If a reduced order model has been created, a comparison is made between the two responses. The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | Options | | --- | --- | --- | --- | --- | | `folder` | `str` | Output folder | `./output` | | | `print_info` | `bool` | Write output to screen | `False` | | | `compute_fom` | `bool` | Compute frequency response of full order model (use caution if large) | `False` | | | `load_fom` | `str` | Folder to locate full order model frequency response data | | | | `frequency_unit` | `str` | Units of frequency, “w” for rad/s, “k” for reduced frequency | `k` | `w`, `k` | | `frequency_bounds` | `list(float)` | Lower and upper frequency bounds in the corresponding unit | `[0.001, 1]` | | | `num_freqs` | `int` | Number of frequencies to evaluate | `50` | | | `quick_plot` | `bool` | Produce array of `.png` plots showing response. Requires matplotlib | `False` | | `run`()[[source]](_modules/sharpy/postproc/frequencyresponse.html#FrequencyResponse.run)[¶](#sharpy.postproc.frequencyresponse.FrequencyResponse.run) Get the frequency response of the linear state-space system. #### PickleData[¶](#pickledata) *class* `sharpy.postproc.pickledata.``PickleData`[[source]](_modules/sharpy/postproc/pickledata.html#PickleData)[¶](#sharpy.postproc.pickledata.PickleData) This postprocessor writes the SHARPy `data` structure in a pickle file, such that classes and methods from SHARPy are retained for restarted solutions or further post-processing.
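The mechanism `PickleData` relies on is standard-library pickling, which preserves full Python objects (state, class, methods) so they can be restored for a restarted run. A generic sketch; the `SimulationData` class is invented for illustration and is not SHARPy's `data` structure:

```python
import os
import pickle
import tempfile

class SimulationData:
    """Stand-in for a simulation data structure (illustrative only)."""
    def __init__(self, case_name, n_steps):
        self.case_name = case_name
        self.n_steps = n_steps

data = SimulationData('wing_case', 500)

# Write the object to disk, then restore it as if restarting a solution.
path = os.path.join(tempfile.mkdtemp(), 'wing_case.pkl')
with open(path, 'wb') as f:
    pickle.dump(data, f)
with open(path, 'rb') as f:
    restored = pickle.load(f)
```

Because unpickling re-instantiates the original classes, the restored object keeps its methods, which is exactly what distinguishes this postprocessor from the plain-data `SaveData` output.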
The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `folder` | `str` | Folder to output pickle file | `./output` | #### SaveData[¶](#savedata) *class* `sharpy.postproc.savedata.``SaveData`[[source]](_modules/sharpy/postproc/savedata.html#SaveData)[¶](#sharpy.postproc.savedata.SaveData) The `SaveData` postprocessor writes the SHARPy variables into `hdf5` files. The linear state space files may be saved to `.mat` if desired instead. It has options to save the following classes: > * `Aerogrid` including `sharpy.sharpy.utils.datastructures.AeroTimeStepInfo` > * `Beam` including `sharpy.sharpy.utils.datastructures.StructTimeStepInfo` > * `sharpy.solvers.linearassembler.Linear` including classes in `sharpy.linear.assembler` Notes This method saves simply the data. If you would like to preserve the SHARPy methods of the relevant classes see also `sharpy.solvers.pickledata.PickleData`. The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | Options | | --- | --- | --- | --- | --- | | `folder` | `str` | Folder to save data | `./output` | | | `save_aero` | `bool` | Save aerodynamic classes. | `True` | | | `save_struct` | `bool` | Save structural classes. | `True` | | | `save_linear` | `bool` | Save linear state space system. | `False` | | | `save_linear_uvlm` | `bool` | Save linear UVLM state space system. Use with caution when dealing with large systems. 
| `False` | | | `skip_attr` | `list(str)` | List of attributes to skip when writing file | `['fortran', 'airfoils', 'airfoil_db', 'settings_types', 'ct_dynamic_forces_list', 'ct_gamma_dot_list', 'ct_gamma_list', 'ct_gamma_star_list', 'ct_normals_list', 'ct_u_ext_list', 'ct_u_ext_star_list', 'ct_zeta_dot_list', 'ct_zeta_list', 'ct_zeta_star_list', 'dynamic_input']` | | | `compress_float` | `bool` | Compress float | `False` | | | `format` | `str` | Save linear state space to hdf5 `.h5` or Matlab `.mat` format. | `h5` | `h5`, `mat` | #### StabilityDerivatives[¶](#stabilityderivatives) *class* `sharpy.postproc.stabilityderivatives.``StabilityDerivatives`[[source]](_modules/sharpy/postproc/stabilityderivatives.html#StabilityDerivatives)[¶](#sharpy.postproc.stabilityderivatives.StabilityDerivatives) Outputs the stability derivatives of a free-flying aircraft Warning Under Development To Do: * Coefficient of stability derivatives * Option to output in NED frame The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `print_info` | `bool` | Display info to screen | `True` | | `folder` | `str` | Output directory | `./output/` | | `u_inf` | `float` | Free stream reference velocity | `1.0` | | `S_ref` | `float` | Reference planform area | `1.0` | | `b_ref` | `float` | Reference span | `1.0` | | `c_ref` | `float` | Reference chord | `1.0` | `uvlm_steady_state_transfer_function`()[[source]](_modules/sharpy/postproc/stabilityderivatives.html#StabilityDerivatives.uvlm_steady_state_transfer_function)[¶](#sharpy.postproc.stabilityderivatives.StabilityDerivatives.uvlm_steady_state_transfer_function) Stability derivatives calculated using the transfer function of the UVLM projected onto the structural degrees of freedom at zero frequency (steady state). 
| Returns: | matrix containing the steady state values of the transfer function between the force output (columns) and the velocity / control surface inputs (rows). | | Return type: | np.array | #### StallCheck[¶](#stallcheck) *class* `sharpy.postproc.stallcheck.``StallCheck`[[source]](_modules/sharpy/postproc/stallcheck.html#StallCheck)[¶](#sharpy.postproc.stallcheck.StallCheck) Outputs the incidence angle of every panel of the surface. The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `print_info` | `bool` | Print info to screen | `True` | | `airfoil_stall_angles` | `dict` | Dictionary of stall angles for each airfoil | `{}` | | `output_degrees` | `bool` | Output incidence angles in degrees vs radians | `False` | #### WriteVariablesTime[¶](#writevariablestime) *class* `sharpy.postproc.writevariablestime.``WriteVariablesTime`[[source]](_modules/sharpy/postproc/writevariablestime.html#WriteVariablesTime)[¶](#sharpy.postproc.writevariablestime.WriteVariablesTime) Write variables with time `WriteVariablesTime` is a class inherited from `BaseSolver` It is a postprocessor that outputs the value of variables with time onto a text file. 
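The kind of file this postprocessor produces can be sketched generically: one row per time step, with values joined by the configured delimiter. The helper below is invented for illustration and is not SHARPy's implementation:

```python
import io

def write_time_history(times, values, delimiter=' '):
    """Illustrative time-history writer (not SHARPy's implementation).

    One row per time step: the time followed by the variable values,
    joined by the chosen delimiter.
    """
    buf = io.StringIO()
    for t, row in zip(times, values):
        buf.write(delimiter.join(str(x) for x in (t, *row)) + '\n')
    return buf.getvalue()

text = write_time_history([0.0, 0.01], [(1.0, 2.0), (1.5, 2.5)], delimiter=', ')
```

The `delimiter` setting in the table below plays the same role as the `delimiter` argument here.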
`settings_types`[¶](#sharpy.postproc.writevariablestime.WriteVariablesTime.settings_types) Acceptable data types of the input data | Type: | dict | `settings_default`[¶](#sharpy.postproc.writevariablestime.WriteVariablesTime.settings_default) Default values for input data should the user not provide them | Type: | dict | `See the list of arguments` `dir`[¶](#sharpy.postproc.writevariablestime.WriteVariablesTime.dir) Directory to output the information | Type: | str | The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `folder` | `str` | Output folder directory | `./output/` | | `delimiter` | `str` | Delimiter to be used in the output file | `` `` | | `FoR_variables` | `list(str)` | Variables of `StructTimeStepInfo` associated with the frame of reference to be written | `['']` | | `FoR_number` | `list(int)` | Number of the A frame of reference to output (for multibody configurations) | *(default not rendered in docs)* | | `structure_variables` | `list(str)` | Variables of `StructTimeStepInfo` associated with the frame of reference to be written | `['']` | | `structure_nodes` | `list(int)` | Number of the nodes to be written | *(default not rendered in docs)* | | `aero_panels_variables` | `list(str)` | Variables of `AeroTimeStepInfo` associated with panels to be written | `['']` | | `aero_panels_isurf` | `list(int)` | Number of the panels’ surface to be output | *(default not rendered in docs)* | | `aero_panels_im` | `list(int)` | Chordwise index of the panels to be output | *(default not rendered in docs)* | | `aero_panels_in` | `list(int)` | Spanwise index of the panels to be output | *(default not rendered in docs)* | | `aero_nodes_variables` | `list(str)` | Variables of
`AeroTimeStepInfo` associated with nodes to be written | `['']` | | `aero_nodes_isurf` | `list(int)` | Number of the nodes’ surface to be output | *(default not rendered in docs)* | | `aero_nodes_im` | `list(int)` | Chordwise index of the nodes to be output | *(default not rendered in docs)* | | `aero_nodes_in` | `list(int)` | Spanwise index of the nodes to be output | *(default not rendered in docs)* | | `cleanup_old_solution` | `bool` | Remove the existing files | `False` | ### SHARPy Source Code[¶](#sharpy-source-code) The core SHARPy documentation is found herein. Note The docs are still a work in progress and, therefore, most functions/classes with which there is not much user interaction are not fully documented. We would appreciate any help through your contributions to our growing documentation! If you feel that a function/class is not well documented and, hence, you cannot use it, feel free to raise an issue so that we can improve it. #### Aerodynamic Packages[¶](#aerodynamic-packages) ##### Models[¶](#models) ###### Aerogrid[¶](#aerogrid) Aerogrid contains all the necessary routines to generate an aerodynamic grid based on the input dictionaries. ####### Aerogrid[¶](#aerogrid) *class* `sharpy.aero.models.aerogrid.``Aerogrid`[[source]](_modules/sharpy/aero/models/aerogrid.html#Aerogrid)[¶](#sharpy.aero.models.aerogrid.Aerogrid) `Aerogrid` is the main object containing information of the grid of panels. It is created by the solver [`sharpy.solvers.aerogridloader.AerogridLoader`](index.html#sharpy.solvers.aerogridloader.AerogridLoader) *static* `compute_gamma_dot`(*dt*, *tstep*, *previous_tsteps*)[[source]](_modules/sharpy/aero/models/aerogrid.html#Aerogrid.compute_gamma_dot)[¶](#sharpy.aero.models.aerogrid.Aerogrid.compute_gamma_dot) Computes the temporal derivative of circulation (gamma) using finite differences.
It will use a first order approximation for the first evaluation (when `len(previous_tsteps) == 1`), and then second order ones. \[\left.\frac{d\Gamma}{dt}\right|^n \approx \lim_{\Delta t \rightarrow 0}\frac{\Gamma^n-\Gamma^{n-1}}{\Delta t}\] For the second time step and onwards, the following second order approximation is used: \[\left.\frac{d\Gamma}{dt}\right|^n \approx \lim_{\Delta t \rightarrow 0}\frac{3\Gamma^n -4\Gamma^{n-1}+\Gamma^{n-2}}{2\Delta t}\] | Parameters: | * **dt** (*float*) – delta time for the finite differences * **tstep** ([*AeroTimeStepInfo*](index.html#sharpy.aero.models.aerogrid.Aerogrid.sharpy.utils.datastructures.AeroTimeStepInfo)) – tstep at time n (current) * **previous_tsteps** (*list**(*[*AeroTimeStepInfo*](index.html#sharpy.aero.models.aerogrid.Aerogrid.sharpy.utils.datastructures.AeroTimeStepInfo)*)*) – previous tstep structure in order: `[n-N,..., n-2, n-1]` | | Returns: | first derivative of circulation with respect to time | | Return type: | float | See also *class* `sharpy.utils.datastructures.``AeroTimeStepInfo`[¶](#sharpy.aero.models.aerogrid.Aerogrid.sharpy.utils.datastructures.AeroTimeStepInfo) ####### generate_strip[¶](#module-sharpy.aero.models.aerogrid.generate_strip) Returns a strip of panels in the `A` frame of reference; it then has to be rotated to simulate angles of attack, etc. ##### Utilities[¶](#utilities) ###### Force Mapping Utilities[¶](#force-mapping-utilities) ####### aero2struct_force_mapping[¶](#module-sharpy.aero.utils.mapping.aero2struct_force_mapping) Maps the aerodynamic forces at the lattice to the structural nodes. The aerodynamic forces from the UVLM are always in the inertial `G` frame of reference and have to be transformed to the body or local `B` frame of reference in which the structural forces are defined.
Since the structural nodes and aerodynamic panels are coincident in a spanwise direction, the aerodynamic forces that correspond to a structural node are the summation of the `M+1` forces defined at the lattice at that spanwise location. \[\begin{split}\mathbf{f}_{struct}^B &= \sum\limits_{i=0}^{m+1}C^{BG}\mathbf{f}_{i,aero}^G \\ \mathbf{m}_{struct}^B &= \sum\limits_{i=0}^{m+1}C^{BG}(\mathbf{m}_{i,aero}^G + \tilde{\boldsymbol{\zeta}}^G\mathbf{f}_{i, aero}^G)\end{split}\] where \(\tilde{\boldsymbol{\zeta}}^G\) is the skew-symmetric matrix of the vector between the lattice grid vertex and the structural node. It is possible to introduce efficiency and constant terms in the mapping of forces that are user-defined. For more info see [`efficiency_local_aero2struct_forces()`](index.html#module-sharpy.aero.utils.mapping.efficiency_local_aero2struct_forces). The efficiency and constant terms are introduced by means of the array `airfoil_efficiency` in the `aero.h5` input file. If this variable has been defined, the function used to map the forces will be [`efficiency_local_aero2struct_forces()`](index.html#module-sharpy.aero.utils.mapping.efficiency_local_aero2struct_forces). Else, the standard formulation [`local_aero2struct_forces()`](index.html#module-sharpy.aero.utils.mapping.local_aero2struct_forces) will be used. 
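The standard (efficiency-free) mapping can be sketched for a single vertex. The helper names, the identity rotation and the unit offset are illustrative; this is not SHARPy's `local_aero2struct_forces`:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def local_aero2struct_sketch(f_g, m_g, chi_g, cbg):
    """Illustrative single-vertex force/moment mapping (not SHARPy's code)."""
    f_b = cbg @ f_g                           # rotate force G -> B
    m_b = cbg @ (m_g + skew(chi_g) @ f_g)     # moment plus transfer term
    return f_b, m_b

# Identity rotation and a unit vertex-to-node offset along x, for illustration.
f_g = np.array([0.0, 0.0, 1.0])     # force at the grid vertex
m_g = np.zeros(3)                   # moment at the grid vertex
chi_g = np.array([1.0, 0.0, 0.0])   # vertex-to-node offset
cbg = np.eye(3)

f_b, m_b = local_aero2struct_sketch(f_g, m_g, chi_g, cbg)
```

A vertical force offset along x produces, as expected, a moment about the y axis at the structural node.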
| Parameters: | * **aero_forces** (*list*) – Aerodynamic forces from the UVLM in inertial frame of reference * **struct2aero_mapping** (*dict*) – Structural to aerodynamic node mapping * **zeta** (*list*) – Aerodynamic grid coordinates * **pos_def** (*np.ndarray*) – Vector of structural node displacements * **psi_def** (*np.ndarray*) – Vector of structural node rotations (CRVs) * **master** – Unused * **conn** (*np.ndarray*) – Connectivities matrix * **cag** (*np.ndarray*) – Transformation matrix between inertial and body-attached reference `A` * **aero_dict** (*dict*) – Dictionary containing the grid’s information | | Returns: | structural forces in an `n_node x 6` vector | | Return type: | np.ndarray | ####### efficiency_local_aero2struct_forces[¶](#module-sharpy.aero.utils.mapping.efficiency_local_aero2struct_forces) Maps the local aerodynamic forces at a given vertex to its corresponding structural node, introducing user-defined efficiency and constant value factors. \[\begin{split}\mathbf{f}_{struct}^B &= \varepsilon^f_0 C^{BG}\mathbf{f}_{i,aero}^G + \varepsilon^f_1\\ \mathbf{m}_{struct}^B &= \varepsilon^m_0 (C^{BG}(\mathbf{m}_{i,aero}^G + \tilde{\boldsymbol{\zeta}}^G\mathbf{f}_{i, aero}^G)) + \varepsilon^m_1\end{split}\] | Parameters: | * **local_aero_forces** (*np.ndarray*) – aerodynamic forces and moments at a grid vertex * **chi_g** (*np.ndarray*) – vector between grid vertex and structural node in inertial frame * **cbg** (*np.ndarray*) – transformation matrix between inertial and body frames of reference * **force_efficiency** (*np.ndarray*) – force efficiency matrix for all structural elements. Its size is `n_elem x n_node_elem x 2 x 3` * **moment_efficiency** (*np.ndarray*) – moment efficiency matrix for all structural elements. Its size is `n_elem x n_node_elem x 2 x 3` * **i_elem** (*int*) – element index * **i_local_node** (*int*) – local node index within element | | Returns: | corresponding aerodynamic force at the structural node from the force and moment at a grid vertex | | Return type: | np.ndarray | ####### local_aero2struct_forces[¶](#module-sharpy.aero.utils.mapping.local_aero2struct_forces) Maps the local aerodynamic forces at a given vertex to its corresponding structural node. \[\begin{split}\mathbf{f}_{struct}^B &= C^{BG}\mathbf{f}_{i,aero}^G\\ \mathbf{m}_{struct}^B &= C^{BG}(\mathbf{m}_{i,aero}^G + \tilde{\boldsymbol{\zeta}}^G\mathbf{f}_{i, aero}^G)\end{split}\] | Parameters: | * **local_aero_forces** (*np.ndarray*) – aerodynamic forces and moments at a grid vertex * **chi_g** (*np.ndarray*) – vector between grid vertex and structural node in inertial frame * **cbg** (*np.ndarray*) – transformation matrix between inertial and body frames of reference * **force_efficiency** (*np.ndarray*) – Unused. See [`efficiency_local_aero2struct_forces()`](index.html#module-sharpy.aero.utils.mapping.efficiency_local_aero2struct_forces). * **moment_efficiency** (*np.ndarray*) – Unused. See [`efficiency_local_aero2struct_forces()`](index.html#module-sharpy.aero.utils.mapping.efficiency_local_aero2struct_forces). * **i_elem** (*int*) | | Returns: | corresponding aerodynamic force at the structural node from the force and moment at a grid vertex | | Return type: | np.ndarray | #### Controllers[¶](#controllers) ##### ControlSurfacePidController[¶](#controlsurfacepidcontroller) *class* `sharpy.controllers.controlsurfacepidcontroller.``ControlSurfacePidController`[[source]](_modules/sharpy/controllers/controlsurfacepidcontroller.html#ControlSurfacePidController)[¶](#sharpy.controllers.controlsurfacepidcontroller.ControlSurfacePidController) The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `time_history_input_file` | `str` | Route and file name of the time history of desired state | `None` | | `P` | `float` | Proportional gain of the controller | `None` | | `I` | `float` | Integral gain of the controller | `0.0` | | `D` | `float` | Differential gain of the controller | `0.0` | | `input_type` | `str` | Quantity used to define the reference state. Supported: pitch | `None` | | `dt` | `float` | Time step of the simulation | `None` | | `controlled_surfaces` | `list(int)` | Control surface indices to be actuated by this controller | `None` | | `controlled_surfaces_coeff` | `list(float)` | Control surface deflection coefficients. For example, for antisymmetric deflections => [1, -1]. | `[1.0]` | | `write_controller_log` | `bool` | Write a time history of input, required input, and control | `True` | | `controller_log_route` | `str` | Directory where the log will be stored | `./output/` | `control`(*data*, *controlled_state*)[[source]](_modules/sharpy/controllers/controlsurfacepidcontroller.html#ControlSurfacePidController.control)[¶](#sharpy.controllers.controlsurfacepidcontroller.ControlSurfacePidController.control) Main routine of the controller.
Input is data (the self.data in the solver), and current_state which is a dictionary with [‘structural’, ‘aero’] time steps for the current iteration. | Parameters: | * **data** – problem data containing all the information. * **controlled_state** – dict with two vars: structural and aero containing the timestep_info that will be returned with the control variables. | | Returns: | A dict with structural and aero time steps and control input included. | ##### TakeOffTrajectoryController[¶](#takeofftrajectorycontroller) *class* `sharpy.controllers.takeofftrajectorycontroller.``TakeOffTrajectoryController`[[source]](_modules/sharpy/controllers/takeofftrajectorycontroller.html#TakeOffTrajectoryController)[¶](#sharpy.controllers.takeofftrajectorycontroller.TakeOffTrajectoryController) The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `trajectory_input_file` | `str` | Route and file name of the trajectory file given as a csv with columns: time, x, y, z | `None` | | `dt` | `float` | Time step of the simulation | `None` | | `trajectory_method` | `str` | Trajectory controller method. For now, “lagrange” is the supported option | `lagrange` | | `controlled_constraint` | `str` | Name of the controlled constraint in the multibody context. Usually, it is something like constraint_00. | `None` | | `controller_log_route` | `str` | Directory where the log will be stored | `./output/` | | `write_controller_log` | `bool` | Controls if the log from the controller is written or not.
| `True` | | `free_trajectory_structural_solver` | `str` | If different from an empty string, the structural solver will be changed after the end of the trajectory has been reached | | | `free_trajectory_structural_substeps` | `int` | Controls the structural solver structural substeps once the end of the trajectory has been reached | `0` | | `initial_ramp_length_structural_substeps` | `int` | Controls the number of timesteps that are used to increase the structural substeps from 0 | `10` | `control`(*data*, *controlled_state*)[[source]](_modules/sharpy/controllers/takeofftrajectorycontroller.html#TakeOffTrajectoryController.control)[¶](#sharpy.controllers.takeofftrajectorycontroller.TakeOffTrajectoryController.control) Main routine of the controller. Input is data (the self.data in the solver), and current_state which is a dictionary with [‘structural’, ‘aero’] time steps for the current iteration. | Parameters: | * **data** – problem data containing all the information. * **controlled_state** – dict with two vars: structural and aero containing the timestep_info that will be returned with the control variables. | | Returns: | A dict with structural and aero time steps and control input included. | `process_trajectory`(*dxdt=True*)[[source]](_modules/sharpy/controllers/takeofftrajectorycontroller.html#TakeOffTrajectoryController.process_trajectory)[¶](#sharpy.controllers.takeofftrajectorycontroller.TakeOffTrajectoryController.process_trajectory) See <https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.interpolate.UnivariateSpline.html>
#### Generators[¶](#generators) Velocity field generators prescribe the flow conditions for your problem. For instance, you can have an aircraft at a prescribed fixed location in a velocity field flowing towards the aircraft. Alternatively, you can have a free-moving aircraft in a static velocity field. Dynamic Control Surface generators enable the user to prescribe a certain control surface deflection in time.
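As an illustration, selecting a velocity field generator in an aerodynamic solver's settings follows the pattern below. Only the two generator-related keys follow the convention described in this section; the remaining settings and values are hypothetical placeholders, not a complete SHARPy case:

```python
# Hypothetical aerodynamic solver settings. The two generator-related keys
# ('velocity_field_generator' and 'velocity_field_input') follow the pattern
# documented in this section; the values shown are illustrative only.
aero_solver_settings = {
    'velocity_field_generator': 'SteadyVelocityField',  # the generator_id of the chosen generator
    'velocity_field_input': {                           # settings parsed to that generator
        'u_inf': 10.0,                                  # free stream velocity magnitude
        'u_inf_direction': [1.0, 0.0, 0.0],             # x, y, z relative components
    },
}
```

The same two-key pattern is reused later for gusts, where `velocity_field_input` additionally carries `gust_shape` and `gust_parameters`.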
##### BumpVelocityField[¶](#bumpvelocityfield) *class* `sharpy.generators.bumpvelocityfield.``BumpVelocityField`[[source]](_modules/sharpy/generators/bumpvelocityfield.html#BumpVelocityField)[¶](#sharpy.generators.bumpvelocityfield.BumpVelocityField) Bump Velocity Field Generator `BumpVelocityField` is a class inherited from `BaseGenerator`. The `BumpVelocityField` class generates a bump-shaped gust profile velocity field, and the profile has the characteristics specified by the user. To call this generator, the `generator_id = BumpVelocityField` shall be used. This is parsed as the value for the `velocity_field_generator` key in the desired aerodynamic solver’s settings. The resultant velocity, $w_g$, is calculated as follows: \[w_g = \frac{w_0}{4}\left( 1 + \cos\left(\frac{x - x_0}{H_x}\right) \right)\left( 1 + \cos\left(\frac{y - y_0}{H_y}\right) \right)\] Notes For now, only simulations where the inertial FoR is fixed are supported. The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `gust_intensity` | `float` | Intensity of the gust | `None` | | `x0` | `float` | x location of the centre of the bump | `0.0` | | `y0` | `float` | y location of the centre of the bump | `0.0` | | `hx` | `float` | Gust gradient in the x direction | `1.0` | | `hy` | `float` | Gust gradient in the y direction | `1.0` | | `relative_motion` | `bool` | When true the gust will move at the prescribed velocity | `False` | | `u_inf` | `float` | Free stream velocity | `None` | | `u_inf_direction` | `list(float)` | Free stream velocity direction | `[1.0, 0.0, 0.0]` | ##### DynamicControlSurface[¶](#dynamiccontrolsurface) *class* `sharpy.generators.dynamiccontrolsurface.``DynamicControlSurface`[[source]](_modules/sharpy/generators/dynamiccontrolsurface.html#DynamicControlSurface)[¶](#sharpy.generators.dynamiccontrolsurface.DynamicControlSurface)
Dynamic Control Surface deflection Generator `DynamicControlSurface` class inherited from `BaseGenerator`. The object generates a deflection in radians based on the time series given as a vector in the input data. To call this generator, the `generator_id = DynamicControlSurface` shall be used. This is parsed as the value for the `control_surface_deflection_generator` key in the aerogridloader solver’s settings. | Parameters: | **in_dict** (*dict*) – Input data in the form of dictionary. See acceptable entries below. | Name | Type | Description | Default | | --- | --- | --- | --- | | `dt` | `float` | Timestep for the simulation | `None` | | `deflection_file` | `str` | Relative path to the file with the deflection information. | `None` | | `settings_types`[¶](#sharpy.generators.dynamiccontrolsurface.DynamicControlSurface.settings_types) Acceptable data types of the input data | Type: | dict | `settings_default`[¶](#sharpy.generators.dynamiccontrolsurface.DynamicControlSurface.settings_default) Default values for input data should the user not provide them | Type: | dict | `deflection`[¶](#sharpy.generators.dynamiccontrolsurface.DynamicControlSurface.deflection) Array of deflection of the control surface | Type: | np.array | `deflection_dot`[¶](#sharpy.generators.dynamiccontrolsurface.DynamicControlSurface.deflection_dot) Array of the time derivative of the control surface deflection. Calculated using 1st order finite differences.
| Type: | np.array | See also *class* `sharpy.utils.generator_interface.``BaseGenerator`[¶](#sharpy.generators.dynamiccontrolsurface.DynamicControlSurface.sharpy.utils.generator_interface.BaseGenerator) ##### GridBox[¶](#gridbox) *class* `sharpy.generators.gridbox.``GridBox`[[source]](_modules/sharpy/generators/gridbox.html#GridBox)[¶](#sharpy.generators.gridbox.GridBox) Generates a grid within a box, to be used to generate the flow field during postprocessing. The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `coords_0` | `list(float)` | First bounding box corner | `[0.0, 0.0, 0.0]` | | `coords_1` | `list(float)` | Second bounding box corner | `[10.0, 0.0, 10.0]` | | `spacing` | `list(float)` | Spacing parameters of the bbox | `[1.0, 1.0, 1.0]` | | `moving` | `bool` | If `True`, the box moves with the body frame of reference. It does not rotate with it, though | `False` | ##### Gust Velocity Field Generators[¶](#gust-velocity-field-generators) These generators are used to create a gust velocity field. [`GustVelocityField`](index.html#sharpy.generators.gustvelocityfield.GustVelocityField) is the main class that should be parsed as the `velocity_field_input` to the desired aerodynamic solver. The remaining classes are the specific gust profiles and parsed as `gust_shape`.
Examples: The typical input to the aerodynamic solver settings would therefore read similar to: ``` >>> aero_settings = {'<some_aero_settings>': '<some_aero_settings>', >>> 'velocity_field_generator': 'GustVelocityField', >>> 'velocity_field_input': {'u_inf': 1, >>> 'gust_shape': '<desired_gust>', >>> 'gust_parameters': '<gust_settings>'}} ``` ###### DARPA[¶](#darpa) *class* `sharpy.generators.gustvelocityfield.``DARPA`[[source]](_modules/sharpy/generators/gustvelocityfield.html#DARPA)[¶](#sharpy.generators.gustvelocityfield.DARPA) Discrete, non-uniform span model \[U_z = \frac{u_{de}}{2}\left[1-\cos\left(\frac{2\pi x}{S}\right)\right]\cos\left(\frac{\pi y}{b}\right)\] This gust can be used by selecting the setting `gust_shape = 'DARPA'` in [`GustVelocityField`](index.html#sharpy.generators.gustvelocityfield.GustVelocityField). The `GustVelocityField` generator takes the following settings as a dictionary assigned to `gust_parameters`. | Name | Type | Description | Default | | --- | --- | --- | --- | | `gust_length` | `float` | Length of gust | `0.0` | | `gust_intensity` | `float` | Intensity of the gust | `0.0` | | `span` | `float` | Wing span | `0.0` | ###### GustVelocityField[¶](#gustvelocityfield) *class* `sharpy.generators.gustvelocityfield.``GustVelocityField`[[source]](_modules/sharpy/generators/gustvelocityfield.html#GustVelocityField)[¶](#sharpy.generators.gustvelocityfield.GustVelocityField) Gust Velocity Field Generator `GustVelocityField` is a class inherited from `BaseGenerator`. The `GustVelocityField` class generates a gust profile velocity field, and the profile has the characteristics specified by the user. To call this generator, the `generator_id = GustVelocityField` shall be used. This is parsed as the value for the `velocity_field_generator` key in the desired aerodynamic solver’s settings. Notation: \(u_{de}\) is the gust intensity, \(S\) is the gust length and \(b\) is the wing span.
\(x\) and \(y\) refer to the chordwise and spanwise distance penetrated into the gust, respectively. Several gust profiles are available. Your chosen gust profile should be parsed to `gust_shape` and the corresponding settings as a dictionary to `gust_parameters`. This generator takes the following settings: | Name | Type | Description | Default | | --- | --- | --- | --- | | `u_inf` | `float` | Free stream velocity | `None` | | `u_inf_direction` | `list(float)` | Free stream velocity relative component | `[1.0, 0.0, 0.0]` | | `offset` | `float` | Spatial offset of the gust with respect to origin | `0.0` | | `relative_motion` | `bool` | If true, the gust is convected with u_inf | `False` | | `gust_shape` | `str` | Gust profile shape | `None` | | `gust_parameters` | `dict` | Dictionary of parameters specific to the selected gust_shape | `{}` | ###### continuous_sin[¶](#continuous-sin) *class* `sharpy.generators.gustvelocityfield.``continuous_sin`[[source]](_modules/sharpy/generators/gustvelocityfield.html#continuous_sin)[¶](#sharpy.generators.gustvelocityfield.continuous_sin) Continuous sinusoidal gust model \[U_z = \frac{u_{de}}{2}\sin\left(\frac{2\pi x}{S}\right)\] This gust can be used by selecting the setting `gust_shape = 'continuous_sin'` in [`GustVelocityField`](index.html#sharpy.generators.gustvelocityfield.GustVelocityField). The `GustVelocityField` generator takes the following settings as a dictionary assigned to `gust_parameters`.
| Name | Type | Description | Default | | --- | --- | --- | --- | | `gust_length` | `float` | Length of gust | `0.0` | | `gust_intensity` | `float` | Intensity of the gust | `0.0` | ###### lateral_one_minus_cos[¶](#lateral-one-minus-cos) *class* `sharpy.generators.gustvelocityfield.``lateral_one_minus_cos`[[source]](_modules/sharpy/generators/gustvelocityfield.html#lateral_one_minus_cos)[¶](#sharpy.generators.gustvelocityfield.lateral_one_minus_cos) This gust can be used by selecting the setting `gust_shape = 'lateral 1-cos'` in [`GustVelocityField`](index.html#sharpy.generators.gustvelocityfield.GustVelocityField). The `GustVelocityField` generator takes the following settings as a dictionary assigned to `gust_parameters`. | Name | Type | Description | Default | | --- | --- | --- | --- | | `gust_length` | `float` | Length of gust | `0.0` | | `gust_intensity` | `float` | Intensity of the gust | `0.0` | ###### one_minus_cos[¶](#one-minus-cos) *class* `sharpy.generators.gustvelocityfield.``one_minus_cos`[[source]](_modules/sharpy/generators/gustvelocityfield.html#one_minus_cos)[¶](#sharpy.generators.gustvelocityfield.one_minus_cos) One minus cos gust (single bump) > \[U_z = \frac{u_{de}}{2}\left[1-\cos\left(\frac{2\pi x}{S}\right)\right]\] This gust can be used by selecting the setting `gust_shape = '1-cos'` in [`GustVelocityField`](index.html#sharpy.generators.gustvelocityfield.GustVelocityField). The `GustVelocityField` generator takes the following settings as a dictionary assigned to `gust_parameters`. | Name | Type | Description | Default | | --- | --- | --- | --- | | `gust_length` | `float` | Length of gust, \(S\). | `0.0` | | `gust_intensity` | `float` | Intensity of the gust \(u_{de}\).
| `0.0` | ###### span_sine[¶](#span-sine) *class* `sharpy.generators.gustvelocityfield.``span_sine`[[source]](_modules/sharpy/generators/gustvelocityfield.html#span_sine)[¶](#sharpy.generators.gustvelocityfield.span_sine) This gust can be used by selecting the setting `gust_shape = 'span sine'` in [`GustVelocityField`](index.html#sharpy.generators.gustvelocityfield.GustVelocityField). The `GustVelocityField` generator takes the following settings as a dictionary assigned to `gust_parameters`. | Name | Type | Description | Default | | --- | --- | --- | --- | | `gust_intensity` | `float` | Intensity of the gust | `0.0` | | `span` | `float` | Wing span | `0.0` | | `periods_per_span` | `int` | Number of times that the sine is repeated in the span of the wing | `1` | | `perturbation_dir` | `list(float)` | Direction in which the perturbation will be applied in A FoR | `[0.0, 0.0, 1.0]` | | `span_dir` | `list(float)` | Direction of the span of the wing | `[0.0, 1.0, 0.0]` | | `span_with_gust` | `float` | Extension of the span to which the gust will be applied | `0.0` | ###### time_varying[¶](#time-varying) *class* `sharpy.generators.gustvelocityfield.``time_varying`[[source]](_modules/sharpy/generators/gustvelocityfield.html#time_varying)[¶](#sharpy.generators.gustvelocityfield.time_varying) The inflow velocity changes with time but it is uniform in space. It is read from a 4 column file: \[t\,[\mathrm{s}] \quad \Delta U_x \quad \Delta U_y \quad \Delta U_z\] This gust can be used by selecting the setting `gust_shape = 'time varying'` in [`GustVelocityField`](index.html#sharpy.generators.gustvelocityfield.GustVelocityField). The `GustVelocityField` generator takes the following settings as a dictionary assigned to `gust_parameters`.
| Name | Type | Description | Default | | --- | --- | --- | --- | | `file` | `str` | File with the information | | ###### time_varying_global[¶](#time-varying-global) *class* `sharpy.generators.gustvelocityfield.``time_varying_global`[[source]](_modules/sharpy/generators/gustvelocityfield.html#time_varying_global)[¶](#sharpy.generators.gustvelocityfield.time_varying_global) Similar to `time_varying`, but the velocity changes instantaneously in the whole flow field. It is not fed into the solid. This gust can be used by selecting the setting `gust_shape = 'time varying global'` in [`GustVelocityField`](index.html#sharpy.generators.gustvelocityfield.GustVelocityField). The `GustVelocityField` generator takes the following settings as a dictionary assigned to `gust_parameters`. | Name | Type | Description | Default | | --- | --- | --- | --- | | `file` | `str` | File with the information (only for time varying) | | ##### ShearVelocityField[¶](#shearvelocityfield) *class* `sharpy.generators.shearvelocityfield.``ShearVelocityField`[[source]](_modules/sharpy/generators/shearvelocityfield.html#ShearVelocityField)[¶](#sharpy.generators.shearvelocityfield.ShearVelocityField) Shear Velocity Field Generator `ShearVelocityField` class inherited from `BaseGenerator`. The object creates a steady velocity field with shear \[\hat{u} = \hat{u}_\infty \left( \frac{h - h_\mathrm{corr}}{h_\mathrm{ref}} \right)^{\mathrm{shear\_exp}}\] \[h = \zeta \cdot \mathrm{shear\_direction}\] The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `u_inf` | `float` | Free stream velocity magnitude | `None` | | `u_inf_direction` | `list(float)` | `x`, `y` and `z` relative components of the free stream velocity | `[1.0, 0.0, 0.0]` | | `shear_direction` | `list(float)` | `x`, `y` and `z` relative components of the direction
along which shear applies | `[0.0, 0.0, 1.0]` | | `shear_exp` | `float` | Exponent of the shear law | `0.0` | | `h_ref` | `float` | Reference height at which `u_inf` is defined | `1.0` | | `h_corr` | `float` | Height to correct shear law | `0.0` | ##### SteadyVelocityField[¶](#steadyvelocityfield) *class* `sharpy.generators.steadyvelocityfield.``SteadyVelocityField`[[source]](_modules/sharpy/generators/steadyvelocityfield.html#SteadyVelocityField)[¶](#sharpy.generators.steadyvelocityfield.SteadyVelocityField) Steady Velocity Field Generator `SteadyVelocityField` class inherited from `BaseGenerator`. The object creates a steady velocity field with the velocity and flow direction specified by the user. To call this generator, the `generator_id = SteadyVelocityField` shall be used. This is parsed as the value for the `velocity_field_generator` key in the desired aerodynamic solver’s settings. | Parameters: | **in_dict** (*dict*) – Input data in the form of dictionary.
See acceptable entries below: | Name | Type | Description | Default | | --- | --- | --- | --- | | `u_inf` | `float` | Free stream velocity magnitude | `0` | | `u_inf_direction` | `list(float)` | `x`, `y` and `z` relative components of the free stream velocity | `[1.0, 0.0, 0.0]` | `settings_types`[¶](#sharpy.generators.steadyvelocityfield.SteadyVelocityField.settings_types) Acceptable data types of the input data | Type: | dict | `settings_default`[¶](#sharpy.generators.steadyvelocityfield.SteadyVelocityField.settings_default) Default values for input data should the user not provide them | Type: | dict | `u_inf`[¶](#sharpy.generators.steadyvelocityfield.SteadyVelocityField.u_inf) Free stream velocity selection | Type: | float | `u_inf_direction`[¶](#sharpy.generators.steadyvelocityfield.SteadyVelocityField.u_inf_direction) `x`, `y` and `z` relative contributions to the free stream velocity | Type: | list(float) | See also *class* `sharpy.utils.generator_interface.``BaseGenerator`[¶](#sharpy.generators.steadyvelocityfield.SteadyVelocityField.sharpy.utils.generator_interface.BaseGenerator) ##### TrajectoryGenerator[¶](#trajectorygenerator) *class* `sharpy.generators.trajectorygenerator.``TrajectoryGenerator`[[source]](_modules/sharpy/generators/trajectorygenerator.html#TrajectoryGenerator)[¶](#sharpy.generators.trajectorygenerator.TrajectoryGenerator) `TrajectoryGenerator` is used to generate nodal positions or velocities for trajectory constraints such as the ones included in the multibody solver. It is usually called from a `Controller` module. The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `angle_end` | `float` | Trajectory angle wrt horizontal at release | `0.0` | | `veloc_end` | `float` | Velocity at release | `None` | | `shape` | `str` | Shape of the `z` vs `x` function.
`quadratic` or `linear` are supported | `quadratic` | | `acceleration` | `str` | Acceleration law, possible values are `linear` or `constant` | `linear` | | `dt` | `float` | Time step of the simulation | `None` | | `coords_end` | `list(float)` | Coordinates of the final ramp point | `None` | | `plot` | `bool` | Plot the ramp shape. Requires matplotlib installed | `False` | | `print_info` | `bool` | Print information on runtime | `False` | | `time_offset` | `float` | Time interval before the start of the ramp acceleration | `0.0` | | `offset` | `list(float)` | Coordinates of the starting point of the simulation | `[0.0, 0.0, 0.0]` | | `return_velocity` | `bool` | If `True`, nodal velocities are given, if `False`, coordinates are the output | `False` | ##### TurbVelocityField[¶](#turbvelocityfield) *class* `sharpy.generators.turbvelocityfield.``TurbVelocityField`[[source]](_modules/sharpy/generators/turbvelocityfield.html#TurbVelocityField)[¶](#sharpy.generators.turbvelocityfield.TurbVelocityField) Turbulent Velocity Field Generator `TurbVelocityField` is a class inherited from `BaseGenerator`. The `TurbVelocityField` class generates a velocity field based on the input from an [XDMF](<http://www.xdmf.org>) file. It supports time-dependent fields as well as frozen turbulence. To call this generator, the `generator_id = TurbVelocityField` shall be used. This is parsed as the value for the `velocity_field_generator` key in the desired aerodynamic solver’s settings. Supported files: * field_id.xdmf: Steady or Unsteady XDMF file This generator also performs time interpolation between two different time steps. For now, only linear interpolation is possible. Space interpolation is done through scipy.interpolate trilinear interpolation. However, turbulent fields are read directly from the binary file and not copied into memory. This is performed using np.memmap.
The overhead of this procedure is ~18% for the interpolation stage; however, initially reading the binary velocity field (which will be much more common with time-domain simulations) is faster by a factor of 1e4. Also, memory savings are quite substantial: from 6 GB for a typical field to a handful of megabytes for the whole program. | Parameters: | **in_dict** (*dict*) – Input data in the form of dictionary. See acceptable entries below: | Attributes: See also *class* `sharpy.utils.generator_interface.``BaseGenerator`[¶](#sharpy.generators.turbvelocityfield.TurbVelocityField.sharpy.utils.generator_interface.BaseGenerator) The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `print_info` | `bool` | Output solver-specific information in runtime. | `True` | | `turbulent_field` | `str` | XDMF file path of the velocity field | `None` | | `offset` | `list(float)` | Spatial offset in the 3 dimensions | `[0.0, 0.0, 0.0]` | | `centre_y` | `bool` | Flag for changing the domain to [`-y_max/2`, `y_max/2`] | `True` | | `periodicity` | `str` | Axes in which periodicity is enforced | `xy` | | `frozen` | `bool` | If `True`, the turbulent field will not be updated in time | `True` | | `store_field` | `bool` | If `True`, the xdmf snapshots are stored in memory. Only two at a time for the linear interpolation | `False` | `read_btl`(*in_file*)[[source]](_modules/sharpy/generators/turbvelocityfield.html#TurbVelocityField.read_btl)[¶](#sharpy.generators.turbvelocityfield.TurbVelocityField.read_btl) Legacy function, no longer using the custom format based on HDF5.
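The memory-mapped read plus trilinear interpolation strategy described above can be sketched as follows. The grid size, dtype and the file name `ux000.dat` are assumptions for illustration only; in the generator the binary files are referenced from the XDMF header:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Write a small synthetic velocity component to disk so the sketch is
# self-contained (the real field comes from the XDMF-referenced binaries).
nx, ny, nz = 4, 3, 2
field = np.arange(nx * ny * nz, dtype=np.float64).reshape(nx, ny, nz)
field.tofile('ux000.dat')

# np.memmap exposes the binary field lazily instead of copying it into memory
ux = np.memmap('ux000.dat', dtype=np.float64, mode='r', shape=(nx, ny, nz))

# One RegularGridInterpolator per velocity component (only u_x shown here);
# the default method is linear, i.e. trilinear on a 3D grid.
x, y, z = np.arange(nx), np.arange(ny), np.arange(nz)
interp_ux = RegularGridInterpolator((x, y, z), ux)

sample = float(interp_ux([[1.5, 1.0, 0.5]])[0])  # trilinear interpolation
```

Because the interpolator only touches the pages it needs, the field never has to be resident in memory, which is where the savings quoted above come from.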
`read_grid`(*i_grid*, *i_cache=0*)[[source]](_modules/sharpy/generators/turbvelocityfield.html#TurbVelocityField.read_grid)[¶](#sharpy.generators.turbvelocityfield.TurbVelocityField.read_grid) This function returns an interpolator list of size 3 made of scipy.interpolate.RegularGridInterpolator objects. `read_xdmf`(*in_file*)[[source]](_modules/sharpy/generators/turbvelocityfield.html#TurbVelocityField.read_xdmf)[¶](#sharpy.generators.turbvelocityfield.TurbVelocityField.read_xdmf) Reads the xml file <case_name>.xdmf. Writes the self.grid_data data structure with all the information necessary. Note: this function does not load any turbulence data (such as ux000, …), it only reads the header information contained in the xdmf file. ##### TurbVelocityFieldBts[¶](#turbvelocityfieldbts) *class* `sharpy.generators.turbvelocityfieldbts.``TurbVelocityFieldBts`[[source]](_modules/sharpy/generators/turbvelocityfieldbts.html#TurbVelocityFieldBts)[¶](#sharpy.generators.turbvelocityfieldbts.TurbVelocityFieldBts) Turbulent Velocity Field Generator from TurbSim bts files `TurbVelocityFieldBts` is a class inherited from `BaseGenerator`. The `TurbVelocityFieldBts` class generates a velocity field based on the input from a bts file generated by TurbSim: <https://nwtc.nrel.gov/TurbSim>. To call this generator, the `generator_id = TurbVelocityFieldBts` shall be used. This is parsed as the value for the `velocity_field_generator` key in the desired aerodynamic solver’s settings.
The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `print_info` | `bool` | Output solver-specific information in runtime | `True` | | `turbulent_field` | `str` | BTS file path of the velocity file | `None` | | `new_orientation` | `str` | New order of the axes | `xyz` | | `u_fed` | `list(float)` | Velocity at which the turbulence field is fed into the solid | `[0.0, 0.0, 0.0]` | | `u_out` | `list(float)` | Velocity to set for points outside the interpolating box | `[0.0, 0.0, 0.0]` | | `case_with_tower` | `bool` | Whether the SHARPy case includes the tower in the simulation | `False` | #### Linear SHARPy[¶](#linear-sharpy) The code included herein enables the assembly of linearised state-space systems based on the previous solution of a nonlinear problem that will be used as linearisation reference. The code is structured in the following way: > * Assembler: different state-spaces to assemble, from only structural/aerodynamic to fully coupled aeroelastic > * Src: source code required for the linearisation and utilities for the manipulation of state-space elements References: > <NAME>., <NAME>. State-Space Realizations and Internal Balancing in Potential-Flow Aerodynamics > with Arbitrary Kinematics. AIAA Journal, Vol.
57, No. 6, June 2019 ##### Assembler[¶](#assembler) ###### Control surface deflector for linear systems[¶](#control-surface-deflector-for-linear-systems) ####### LinControlSurfaceDeflector[¶](#lincontrolsurfacedeflector) *class* `sharpy.linear.assembler.lincontrolsurfacedeflector.``LinControlSurfaceDeflector`[[source]](_modules/sharpy/linear/assembler/lincontrolsurfacedeflector.html#LinControlSurfaceDeflector)[¶](#sharpy.linear.assembler.lincontrolsurfacedeflector.LinControlSurfaceDeflector) Subsystem that deflects control surfaces for use with linear state space systems. The current version supports only deflections. Future work will include standalone state-space systems to model physical actuators. `assemble`()[[source]](_modules/sharpy/linear/assembler/lincontrolsurfacedeflector.html#LinControlSurfaceDeflector.assemble)[¶](#sharpy.linear.assembler.lincontrolsurfacedeflector.LinControlSurfaceDeflector.assemble) Warning Under-development Will assemble the state-space for an actuator model Returns: `generate`(*linuvlm=None*, *tsaero0=None*, *tsstruct0=None*, *aero=None*, *structure=None*)[[source]](_modules/sharpy/linear/assembler/lincontrolsurfacedeflector.html#LinControlSurfaceDeflector.generate)[¶](#sharpy.linear.assembler.lincontrolsurfacedeflector.LinControlSurfaceDeflector.generate) Generates a matrix mapping a linear control surface deflection onto the aerodynamic grid. The parsing of arguments is temporary since this state space element will include a full actuator model. The parsing of arguments is optional if the class has been previously initialised. | Parameters: | * **linuvlm** – * **tsaero0** – * **tsstruct0** – * **aero** – * **structure** – | Returns: ####### der_R_arbitrary_axis_times_v[¶](#module-sharpy.linear.assembler.lincontrolsurfacedeflector.der_R_arbitrary_axis_times_v) Linearised rotation vector of the vector `v` by angle `theta` about an arbitrary axis `u`.
The rotation of a vector \(\mathbf{v}\) about the axis \(\mathbf{u}\) by an angle \(\boldsymbol{\theta}\) can be expressed as \[\mathbf{w} = \mathbf{R}(\mathbf{u}, \theta) \mathbf{v},\] where \(\mathbf{R}\) is a \(\mathbb{R}^{3\times 3}\) matrix. This expression can be linearised for it to be included in the linear solver as \[\delta\mathbf{w} = \frac{\partial}{\partial\theta}\left(\mathbf{R}(\mathbf{u}, \theta_0)\right)\delta\theta\] The matrix \(\mathbf{R}\) is \[\begin{split}\mathbf{R} = \begin{bmatrix}\cos \theta +u_{x}^{2}\left(1-\cos \theta \right) & u_{x}u_{y}\left(1-\cos \theta \right)-u_{z}\sin \theta & u_{x}u_{z}\left(1-\cos \theta \right)+u_{y}\sin \theta \\ u_{y}u_{x}\left(1-\cos \theta \right)+u_{z}\sin \theta & \cos \theta +u_{y}^{2}\left(1-\cos \theta \right)& u_{y}u_{z}\left(1-\cos \theta \right)-u_{x}\sin \theta \\ u_{z}u_{x}\left(1-\cos \theta \right)-u_{y}\sin \theta & u_{z}u_{y}\left(1-\cos \theta \right)+u_{x}\sin \theta & \cos \theta +u_{z}^{2}\left(1-\cos \theta \right)\end{bmatrix},\end{split}\] and its linearised expression becomes \[\begin{split}\frac{\partial}{\partial\theta}\left(\mathbf{R}(\mathbf{u}, \theta_0)\right) = \begin{bmatrix} -\sin \theta +u_{x}^{2}\sin \theta \mathbf{v}_1 + u_{x}u_{y}\sin \theta-u_{z} \cos \theta \mathbf{v}_2 + u_{x}u_{z}\sin \theta +u_{y}\cos \theta \mathbf{v}_3 \\ u_{y}u_{x}\sin \theta+u_{z}\cos \theta\mathbf{v}_1 -\sin \theta +u_{y}^{2}\sin \theta\mathbf{v}_2 + u_{y}u_{z}\sin \theta-u_{x}\cos \theta\mathbf{v}_3 \\ u_{z}u_{x}\sin \theta-u_{y}\cos \theta\mathbf{v}_1 + u_{z}u_{y}\sin \theta+u_{x}\cos \theta\mathbf{v}_2 -\sin \theta +u_{z}^{2}\sin\theta\mathbf{v}_3\end{bmatrix}_{\theta=\theta_0}\end{split}\] and is of dimension \(\mathbb{R}^{3\times 1}\). 
| param u: | Arbitrary rotation axis | | type u: | numpy.ndarray | | param theta: | Rotation angle (radians) | | type theta: | float | | param v: | Vector to rotate | | type v: | numpy.ndarray | | returns: | Linearised rotation vector of dimensions \(\mathbb{R}^{3\times 1}\). | | rtype: | numpy.ndarray | ###### LinearAeroelastic[¶](#linearaeroelastic) *class* `sharpy.linear.assembler.linearaeroelastic.``LinearAeroelastic`[[source]](_modules/sharpy/linear/assembler/linearaeroelastic.html#LinearAeroelastic)[¶](#sharpy.linear.assembler.linearaeroelastic.LinearAeroelastic) Assemble a linearised aeroelastic system The aeroelastic system can be seen as the coupling between a linearised aerodynamic system (System 1) and a linearised beam system (System 2). The coupled system retains inputs and outputs from both systems such that \[\mathbf{u} = [\mathbf{u}_1;\, \mathbf{u}_2]\] and the outputs are also ordered in a similar fashion \[\mathbf{y} = [\mathbf{y}_1;\, \mathbf{y}_2]\] Reference the individual systems for the particular ordering of the respective input and output variables. The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `aero_settings` | `dict` | Linear UVLM settings | `None` | | `beam_settings` | `dict` | Linear Beam settings | `None` | | `uvlm_filename` | `str` | Path to .data.h5 file containing UVLM/ROM state space to load | | | `track_body` | `bool` | UVLM inputs and outputs projected to coincide with lattice at linearisation | `True` | | `use_euler` | `bool` | Parametrise orientations in terms of Euler angles | `False` | `assemble`()[[source]](_modules/sharpy/linear/assembler/linearaeroelastic.html#LinearAeroelastic.assemble)[¶](#sharpy.linear.assembler.linearaeroelastic.LinearAeroelastic.assemble) Assembly of the linearised aeroelastic system. The UVLM state-space system has already been assembled. 
Prior to assembling the beam’s first order state-space, the damping and stiffness matrices have to be modified to include the damping and stiffening terms that arise from the linearisation of the aerodynamic forces with respect to the A frame of reference. See `sharpy.linear.src.lin_aeroela.get_gebm2uvlm_gains()` for details on the linearisation. Then the beam is assembled as per the given settings in normalised time if the aerodynamic system has been scaled. The discrete time systems of the UVLM and the beam must have the same time step. The UVLM inputs and outputs are then projected onto the structural degrees of freedom (obviously with the exception of external gusts and control surfaces). Hence, the gains \(\mathbf{K}_{sa}\) and \(\mathbf{K}_{as}\) are added to the output and input of the UVLM system, respectively. These gains perform the following relation: \[\begin{split}\begin{bmatrix}\zeta \\ \zeta' \\ u_g \\ \delta \end{bmatrix} = \mathbf{K}_{as} \begin{bmatrix} \eta \\ \eta' \\ u_g \\ \delta \end{bmatrix}\end{split}\] \[\mathbf{N}_{nodes} = \mathbf{K}_{sa} \mathbf{f}_{vertices}\] If the beam is expressed in modal form, the UVLM is further projected onto the beam’s modes to have the following input/output structure: Returns: `get_gebm2uvlm_gains`(*data*)[[source]](_modules/sharpy/linear/assembler/linearaeroelastic.html#LinearAeroelastic.get_gebm2uvlm_gains)[¶](#sharpy.linear.assembler.linearaeroelastic.LinearAeroelastic.get_gebm2uvlm_gains) Provides: > * the gain matrices required to connect the linearised GEBM and UVLM > inputs/outputs > * the stiffening and damping factors to be added to the linearised > GEBM equations in order to account for non-zero aerodynamic loads at > the linearisation point. The function produces the gain matrices: > * `Kdisp`: gains from GEBM to UVLM grid displacements > * `Kvel_disp`: influence of GEBM dofs displacements to UVLM grid > velocities.
> * `Kvel_vel`: influence of GEBM dofs velocities to UVLM grid velocities. > * `Kforces` (UVLM->GEBM): dimensions are the transpose of those of the `Kdisp` and `Kvel*` matrices. Hence, when allocating this term, `ii` and `jj` indices will unintuitively refer to columns and rows, respectively. And the stiffening/damping terms accounting for non-zero aerodynamic forces at the linearisation point: > * `Kss`: stiffness factor (flexible dof -> flexible dof) accounting for non-zero forces at the linearisation point. > * `Csr`: damping factor (rigid dof -> flexible dof) > * `Crs`: damping factor (flexible dof -> rigid dof) > * `Crr`: damping factor (rigid dof -> rigid dof) Stiffening and damping related terms due to the non-zero aerodynamic forces at the linearisation point: \[\mathbf{F}_{A,n} = C^{AG}(\mathbf{\chi})\sum_j \mathbf{f}_{G,j} \rightarrow \delta\mathbf{F}_{A,n} = C^{AG}_0 \sum_j \delta\mathbf{f}_{G,j} + \frac{\partial}{\partial\chi}(C^{AG}\sum_j \mathbf{f}_{G,j}^0)\delta\chi\] The term multiplied by the variation in the quaternion, \(\delta\chi\), couples the forces with the rigid body equations and becomes part of \(\mathbf{C}_{sr}\). Similarly, the linearisation of the moments results in expressions that contribute to the stiffness and damping matrices. \[\mathbf{M}_{B,n} = \sum_j \tilde{X}_B C^{BA}(\Psi)C^{AG}(\chi)\mathbf{f}_{G,j}\] \[\delta\mathbf{M}_{B,n} = \sum_j \tilde{X}_B\left(C_0^{BG}\delta\mathbf{f}_{G,j} + \frac{\partial}{\partial\Psi}(C^{BA}\mathbf{f}^0_{A,j})\delta\Psi + \frac{\partial}{\partial\chi}(C^{BA}_0 C^{AG} \mathbf{f}_{G,j}^0)\delta\chi\right)\] The linearised equations of motion for the geometrically exact beam model take the input term \(\delta \mathbf{Q}_n = \{\delta\mathbf{F}_{A,n},\, T_0^T\delta\mathbf{M}_{B,n}\}\), which means that the moments should be provided as \(T^T(\Psi)\mathbf{M}_B\) instead of \(\mathbf{M}_A = C^{AB}\mathbf{M}_B\), where \(T(\Psi)\) is the tangential operator.
\[\delta(T^T\mathbf{M}_B) = T^T_0\delta\mathbf{M}_B + \frac{\partial}{\partial\Psi}(T^T\mathbf{M}_B^0)\delta\Psi\] is the linearised expression for the moments, where the first term corresponds to the input terms to the beam equations and the second arises due to the non-zero aerodynamic moment at the linearisation point and must be subtracted (since it comes from the forces) to form part of \(\mathbf{K}_{ss}\). In addition, the \(\delta\mathbf{M}_B\) term depends on both \(\delta\Psi\) and \(\delta\chi\), therefore those terms also contribute to \(\mathbf{K}_{ss}\) and \(\mathbf{C}_{sr}\), respectively. The contribution from the total forces and moments will be accounted for in \(\mathbf{C}_{rr}\) and \(\mathbf{C}_{rs}\). \[\delta\mathbf{F}_{tot,A} = \sum_n\left(C^{AG}_0 \sum_j \delta\mathbf{f}_{G,j} + \frac{\partial}{\partial\chi}(C^{AG}\sum_j \mathbf{f}_{G,j}^0)\delta\chi\right)\] Therefore, after running this method, the beam matrices will be updated as:

```
>>> K_beam[:flex_dof, :flex_dof] += Kss
>>> C_beam[:flex_dof, -rigid_dof:] += Csr
>>> C_beam[-rigid_dof:, :flex_dof] += Crs
>>> C_beam[-rigid_dof:, -rigid_dof:] += Crr
```

Track body option The `track_body` setting restricts the UVLM grid to linear translation motions and therefore should be used to ensure that the forces are computed using the reference linearisation frame. The UVLM and beam are linearised about a reference equilibrium condition. The UVLM is defined in the inertial reference frame while the beam employs the body attached frame, therefore a projection from one frame onto the other is required during the coupling process. However, the inputs to the UVLM (i.e. the lattice grid coordinates) are obtained from the beam deformation, which is expressed in the A frame, and therefore the grid coordinates need to be projected onto the inertial frame `G`.
As the beam rotates, the projection onto the `G` frame of the lattice grid coordinates will result in a grid that is not coincident with that at the linearisation reference and therefore the grid coordinates must be projected onto the original frame, which will be referred to as `U`. The transformation between the inertial frame `G` and the `U` frame is a function of the rotation of the `A` frame and the original position: \[C^{UG}(\chi) = C^{GA}(\chi_0)C^{AG}(\chi)\] Therefore, the grid coordinates obtained in `A` frame and projected onto the `G` frame can be transformed to the `U` frame using \[\zeta_U = C^{UG}(\chi) \zeta_G\] which allows the grid lattice coordinates to be projected onto the original linearisation frame. In a similar fashion, the output lattice vertex forces of the UVLM are defined in the original linearisation frame `U` and need to be transformed onto the inertial frame `G` prior to projecting them onto the `A` frame to use them as the input forces to the beam system. 
\[\boldsymbol{f}_G = C^{GU}(\chi)\boldsymbol{f}_U\] The linearisation of the above relations leads to the following expressions that have to be added to the coupling matrices: > * `Kdisp_vel` terms: > \[\delta\boldsymbol{\zeta}_U= C^{GA}_0 \frac{\partial}{\partial \boldsymbol{\chi}} \left(C^{AG}\boldsymbol{\zeta}_{G,0}\right)\delta\boldsymbol{\chi} + \delta\boldsymbol{\zeta}_G\] > * `Kvel_vel` terms: > \[\delta\dot{\boldsymbol{\zeta}}_U= C^{GA}_0 \frac{\partial}{\partial \boldsymbol{\chi}} \left(C^{AG}\dot{\boldsymbol{\zeta}}_{G,0}\right)\delta\boldsymbol{\chi} + \delta\dot{\boldsymbol{\zeta}}_G\] The transformation of the forces and moments introduces terms that are functions of the orientation and are included as stiffening and damping terms in the beam’s matrices: > * `Csr` damping terms relating to translation forces: > \[C_{sr}^{tra} -= \frac{\partial}{\partial\boldsymbol{\chi}} \left(C^{GA} C^{AG}_0 \boldsymbol{f}_{G,0}\right)\delta\boldsymbol{\chi}\] > * `Csr` damping terms related to moments: > \[C_{sr}^{rot} -= T^\top\widetilde{\mathbf{X}}_B C^{BG} \frac{\partial}{\partial\boldsymbol{\chi}} \left(C^{GA} C^{AG}_0 \boldsymbol{f}_{G,0}\right)\delta\boldsymbol{\chi}\] When `track_body` is enabled, the UVLM grid is no longer coincident with the inertial reference frame throughout the simulation but rather is able to rotate as the `A` frame rotates. This simulates a free-flying vehicle, for which the orientation does not affect the aerodynamics. The UVLM defined in this frame of reference, named `U`, satisfies the following convention: > * The `U` frame is coincident with the `G` frame at the time of linearisation. > * The `U` frame rotates as the `A` frame rotates. Transformations related to the `U` frame of reference: > * The angle between the `U` frame and the `A` frame is always constant and equal to \(\boldsymbol{\Theta}_0\).
> * The angle between the `A` frame and the `G` frame is \(\boldsymbol{\Theta}=\boldsymbol{\Theta}_0 + \delta\boldsymbol{\Theta}\) > * The projection of a vector expressed in the `G` frame onto the `U` frame is expressed by: > \[\boldsymbol{v}^U = C^{GA}_0 C^{AG} \boldsymbol{v}^G\] > * The reverse, a projection of a vector expressed in the `U` frame onto the `G` frame, is expressed by > \[\boldsymbol{v}^G = C^{GA} C^{AG}_0 \boldsymbol{v}^U\] The effect this has on the aeroelastic coupling between the UVLM and the structural dynamics is that the orientation and change of orientation of the vehicle have no effect on the aerodynamics. The aerodynamics are solely affected by the contribution of the six rigid-body velocities (as well as the flexible dof velocities). `update`(*u_infty*)[[source]](_modules/sharpy/linear/assembler/linearaeroelastic.html#LinearAeroelastic.update)[¶](#sharpy.linear.assembler.linearaeroelastic.LinearAeroelastic.update) Updates the aeroelastic scaled system with the new reference velocity. Only the beam equations need updating since the only dependency on the forward flight velocity resides there. | Parameters: | **u_infty** (*float*) – New reference velocity | | Returns: | Updated aeroelastic state-space system | | Return type: | [sharpy.linear.src.libss.ss](index.html#sharpy.linear.src.libss.ss) | ###### Linear State Beam Element Class[¶](#linear-state-beam-element-class) ####### LinearBeam[¶](#linearbeam) *class* `sharpy.linear.assembler.linearbeam.``LinearBeam`[[source]](_modules/sharpy/linear/assembler/linearbeam.html#LinearBeam)[¶](#sharpy.linear.assembler.linearbeam.LinearBeam) State space member Define class for linear state-space realisation of GEBM flexible-body equations from the SHARPy `timestep_info` class and with the nonlinear structural information. State-space models can be defined in continuous or discrete time (`dt` required).
Modal projection, either on the damped or undamped modal shapes, is also available. Notes on the settings: > 1. `modal_projection={True,False}`: determines whether to project the states onto modal coordinates. Projection over damped or undamped modal shapes can be obtained by selecting: > * `proj_modes = {'damped','undamped'}` > while > * `inout_coords={'modes','nodal'}` > determines whether the modal state-space inputs/outputs are modal coordinates or nodal degrees of freedom. If `modes` is selected, the `Kin` and `Kout` gain matrices are generated to transform nodal to modal dofs. > 2. `dlti={True,False}`: if true, generates a discrete-time system. The continuous to discrete transformation method is determined by:
```
discr_method={ 'newmark',  # Newmark-beta
               'zoh',      # Zero-order hold
               'bilinear'} # Bilinear (Tustin) transformation
```
> DLTIs can be obtained directly using the Newmark-\(\beta\) method (`discr_method='newmark'`, `newmark_damp=xx` with `xx<<1.0`) for full-state descriptions (`modal_projection=False`) and for modal projection over the undamped structural modes (`modal_projection=True` and `proj_modes='undamped'`). The zero-order hold and bilinear methods, instead, work for all descriptions, but require the continuous state-space equations.
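For the `zoh` and `bilinear` options, the continuous-to-discrete conversion can be illustrated with `scipy.signal.cont2discrete` on a hypothetical single-mode oscillator (illustrative values only; this is a sketch of the transformation, not the SHARPy assembler):

```python
import numpy as np
from scipy.signal import cont2discrete

# Single undamped structural mode in continuous time:
# x = [q, dq/dt],  dx/dt = A x + B u,  y = q
wn = 2.0 * np.pi  # 1 Hz natural frequency (illustrative)
A = np.array([[0.0, 1.0], [-wn ** 2, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

dt = 0.001  # matches the default 'dt' setting above
Ad_zoh, Bd_zoh, *_ = cont2discrete((A, B, C, D), dt, method='zoh')
Ad_bln, Bd_bln, *_ = cont2discrete((A, B, C, D), dt, method='bilinear')

# Both methods map the undamped (purely imaginary) poles onto the unit
# circle, so the discrete-time system remains undamped.
mags = np.abs(np.linalg.eigvals(Ad_zoh))
```

The Newmark-\(\beta\) route is specific to second-order structural systems and is assembled directly in discrete time, which is why it does not require the continuous equations.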
The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | Options | | --- | --- | --- | --- | --- | | `modal_projection` | `bool` | Use modal projection | `True` | | | `inout_coords` | `str` | Beam state space input/output coordinates | `nodes` | `nodes`, `modes` | | `num_modes` | `int` | Number of modes to retain | `10` | | | `discrete_time` | `bool` | Assemble beam in discrete time | `True` | | | `dt` | `float` | Discrete time system integration time step | `0.001` | | | `proj_modes` | `str` | Use `undamped` or `damped` modes | `undamped` | `damped`, `undamped` | | `discr_method` | `str` | Discrete time assembly system method | `newmark` | `newmark`, `zoh`, `bilinear` | | `newmark_damp` | `float` | Newmark damping value. For systems assembled using `newmark` | `0.0001` | | | `use_euler` | `bool` | Use Euler angles for rigid body parametrisation | `False` | | | `print_info` | `bool` | Display information on screen | `True` | | | `gravity` | `bool` | Linearise gravitational forces | `False` | | | `remove_dofs` | `list(str)` | Remove desired degrees of freedom | `[]` | `eta`, `V`, `W`, `orient` | | `remove_sym_modes` | `bool` | Remove symmetric modes if wing is clamped | `False` | | `assemble`(*t_ref=None*)[[source]](_modules/sharpy/linear/assembler/linearbeam.html#LinearBeam.assemble)[¶](#sharpy.linear.assembler.linearbeam.LinearBeam.assemble) Assemble the beam state-space system. | Parameters: | **t_ref** (*float*) – Scaling factor to non-dimensionalise the beam’s time step. | Returns: `remove_symmetric_modes`()[[source]](_modules/sharpy/linear/assembler/linearbeam.html#LinearBeam.remove_symmetric_modes)[¶](#sharpy.linear.assembler.linearbeam.LinearBeam.remove_symmetric_modes) Removes symmetric modes when the wing is clamped at the midpoint. It will force the wing tip displacements in `z` to be positive for all modes.
Updates the mode shapes matrix, the natural frequencies and the number of modes. `unpack_ss_vector`(*x_n*, *u_n*, *struct_tstep*)[[source]](_modules/sharpy/linear/assembler/linearbeam.html#LinearBeam.unpack_ss_vector)[¶](#sharpy.linear.assembler.linearbeam.LinearBeam.unpack_ss_vector) Warning Under development. Missing: * Accelerations * Double check the Cartesian rotation vector * Tangential operator for the moments Takes the state \(x = [\eta, \dot{\eta}]\) and input vector \(u = N\) of a linearised beam and returns a SHARPy timestep instance, including the reference values. | Parameters: | * **x_n** (*np.ndarray*) – Structural beam state vector in nodal space * **u_n** (*np.ndarray*) – Beam input vector (nodal forces) * **struct_tstep** (*utils.datastructures.StructTimeStepInfo*) – Reference timestep used for linearisation | | Returns: | new timestep with linearised values added to the reference value | | Return type: | utils.datastructures.StructTimeStepInfo | ###### LinearGustGenerator[¶](#lineargustgenerator) *class* `sharpy.linear.assembler.lineargustassembler.``LinearGustGenerator`[[source]](_modules/sharpy/linear/assembler/lineargustassembler.html#LinearGustGenerator)[¶](#sharpy.linear.assembler.lineargustassembler.LinearGustGenerator) Reduces the entire gust field input to a user-defined set of more comprehensive inputs ###### Linear UVLM State Space System[¶](#linear-uvlm-state-space-system) ####### LinearUVLM[¶](#linearuvlm) *class* `sharpy.linear.assembler.linearuvlm.``LinearUVLM`[[source]](_modules/sharpy/linear/assembler/linearuvlm.html#LinearUVLM)[¶](#sharpy.linear.assembler.linearuvlm.LinearUVLM) Linear UVLM System Assembler Produces a state-space model of the form > \[\begin{split}\mathbf{x}_{n+1} &= \mathbf{A}\,\mathbf{x}_n + \mathbf{B} \mathbf{u}_{n+1} \\ > \mathbf{y}_n &= \mathbf{C}\,\mathbf{x}_n + \mathbf{D} \mathbf{u}_n\end{split}\] where the state, inputs and outputs are: > \[\mathbf{x}_n = \{ \delta
\mathbf{\Gamma}_n,\, \delta \mathbf{\Gamma}_{w_n},\, \Delta t\,\delta\mathbf{\Gamma}'_n,\, \delta\mathbf{\Gamma}_{n-1} \}\] > \[\mathbf{u}_n = \{ \delta\mathbf{\zeta}_n,\, \delta\mathbf{\zeta}'_n,\, \delta\mathbf{u}_{ext,n} \}\] > \[\mathbf{y} = \{\delta\mathbf{f}\}\] with \(\mathbf{\Gamma}\in\mathbb{R}^{MN}\) being the vector of vortex circulations, \(\mathbf{\zeta}\in\mathbb{R}^{3(M+1)(N+1)}\) the vector of vortex lattice coordinates and \(\mathbf{f}\in\mathbb{R}^{3(M+1)(N+1)}\) the vector of aerodynamic forces and moments. Note that \((\bullet)'\) denotes a derivative with respect to time. Note that the input is atypically defined at time `n+1`. If the setting `remove_predictor = True` the predictor term `u_{n+1}` is eliminated through the change of state [1]: > \[\mathbf{h}_n = \mathbf{x}_n - \mathbf{B}\,\mathbf{u}_n\] such that: > \[\begin{split}\mathbf{h}_{n+1} &= \mathbf{A}\,\mathbf{h}_n + \mathbf{A\,B}\,\mathbf{u}_n \\ > \mathbf{y}_n &= \mathbf{C\,h}_n + (\mathbf{C\,B}+\mathbf{D})\,\mathbf{u}_n\end{split}\] which only modifies the equivalent \(\mathbf{B}\) and \(\mathbf{D}\) matrices. The `integr_order` setting refers to the finite differencing scheme used to calculate the bound circulation derivative with respect to time \(\dot{\mathbf{\Gamma}}\). A first order scheme is used when `integr_order == 1` \[\dot{\mathbf{\Gamma}}^{n+1} = \frac{\mathbf{\Gamma}^{n+1}-\mathbf{\Gamma}^n}{\Delta t}\] If `integr_order == 2` a higher order scheme is used (but it isn’t exactly second order accurate [1]). \[\dot{\mathbf{\Gamma}}^{n+1} = \frac{3\mathbf{\Gamma}^{n+1}-4\mathbf{\Gamma}^n + \mathbf{\Gamma}^{n-1}} {2\Delta t}\] References [1] Franklin, GF and Powell, JD. Digital Control of Dynamic Systems, Addison-Wesley Publishing Company, 1980 [2] <NAME>., & <NAME>. State-Space Realizations and Internal Balancing in Potential-Flow Aerodynamics with Arbitrary Kinematics. AIAA Journal, 57(6), 1–14. 2019.
<https://doi.org/10.2514/1.J058153> The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | Options | | --- | --- | --- | --- | --- | | `dt` | `float` | Time step | `0.1` | | | `integr_order` | `int` | Integration order of the circulation derivative. | `2` | `1`, `2` | | `ScalingDict` | `dict` | Dictionary of scaling factors to achieve normalised UVLM realisation. | `{}` | | | `remove_predictor` | `bool` | Remove the predictor term from the UVLM equations | `True` | | | `use_sparse` | `bool` | Assemble UVLM plant matrix in sparse format | `True` | | | `density` | `float` | Air density | `1.225` | | | `remove_inputs` | `list(str)` | List of inputs to remove. `u_gust` to remove external velocity input. | `[]` | `u_gust` | | `gust_assembler` | `str` | Selected linear gust assembler. | | `leading_edge` | | `rom_method` | `list(str)` | List of model reduction methods to reduce UVLM. | `[]` | | | `rom_method_settings` | `dict` | Dictionary with settings for the desired ROM methods, where the name of the ROM method is the key to the dictionary | `{}` | | The `ScalingDict` dictionary accepts the following key-value pairs: | Name | Type | Description | Default | Options | | --- | --- | --- | --- | --- | | `length` | `float` | Reference length to be used for UVLM scaling | `1.0` | | | `speed` | `float` | Reference speed to be used for UVLM scaling | `1.0` | | | `density` | `float` | Reference density to be used for UVLM scaling | `1.0` | | `assemble`(*track_body=False*)[[source]](_modules/sharpy/linear/assembler/linearuvlm.html#LinearUVLM.assemble)[¶](#sharpy.linear.assembler.linearuvlm.LinearUVLM.assemble) Assembles the linearised UVLM system, removes the desired inputs and adds linearised control surfaces (if present).
With all possible inputs present, these are ordered as \[\mathbf{u} = [\boldsymbol{\zeta},\,\dot{\boldsymbol{\zeta}},\,\mathbf{w},\,\delta]\] Control surface inputs are ordered last as: \[[\delta_1, \delta_2, \dots, \dot{\delta}_1, \dot{\delta}_2]\] `remove_inputs`(*remove_list*)[[source]](_modules/sharpy/linear/assembler/linearuvlm.html#LinearUVLM.remove_inputs)[¶](#sharpy.linear.assembler.linearuvlm.LinearUVLM.remove_inputs) Remove certain inputs from the input vector To do: * Support for block UVLM | Parameters: | **remove_list** (*list*) – Inputs to remove | `unpack_input_vector`(*u_n*)[[source]](_modules/sharpy/linear/assembler/linearuvlm.html#LinearUVLM.unpack_input_vector)[¶](#sharpy.linear.assembler.linearuvlm.LinearUVLM.unpack_input_vector) Unpacks the input vector into the corresponding grid coordinates, velocities and external velocities. | Parameters: | **u_n** (*np.ndarray*) – UVLM input vector. May contain control surface deflections and external velocities. | | Returns: | Tuple containing `zeta`, `zeta_dot` and `u_ext`, accounting for the effect of control surfaces. | | Return type: | tuple | `unpack_ss_vector`(*data*, *x_n*, *aero_tstep*, *track_body=False*)[[source]](_modules/sharpy/linear/assembler/linearuvlm.html#LinearUVLM.unpack_ss_vector)[¶](#sharpy.linear.assembler.linearuvlm.LinearUVLM.unpack_ss_vector) Transform column vectors used in the state space formulation into SHARPy format The column vectors are transformed into lists with one entry per aerodynamic surface. Each entry contains a matrix with the quantities at each grid vertex. \[\mathbf{y}_n \longrightarrow \mathbf{f}_{aero}\] \[\mathbf{x}_n \longrightarrow \mathbf{\Gamma}_n,\, \mathbf{\Gamma_w}_n,\, \mathbf{\dot{\Gamma}}_n\] If the `track_body` option is on, the output forces are projected from the linearisation frame to the G frame. Note that the linearisation frame is: > 1. equal to the FoR G at time 0 (linearisation point) > 2.
rotates as the body frame specified in the `track_body_number` setting. | Parameters: | * **y_n** (*np.ndarray*) – Column output vector of linear UVLM system * **x_n** (*np.ndarray*) – Column state vector of linear UVLM system * **u_n** (*np.ndarray*) – Column input vector of linear UVLM system * **aero_tstep** ([*AeroTimeStepInfo*](index.html#sharpy.aero.models.aerogrid.Aerogrid.sharpy.utils.datastructures.AeroTimeStepInfo)) – aerodynamic timestep information class instance | | Returns: | Tuple containing: forces (list): Aerodynamic forces in a list with `n_surf` entries. Each entry is a `(6, M+1, N+1)` matrix, where the first 3 indices correspond to the components in `x`, `y` and `z`. The latter 3 are zero. gamma (list): Bound circulation list with `n_surf` entries. Circulation is stored in an `(M+1, N+1)` matrix, corresponding to the panel vertices. gamma_dot (list): Bound circulation derivative list with `n_surf` entries. Circulation derivative is stored in an `(M+1, N+1)` matrix, corresponding to the panel vertices. gamma_star (list): Wake (free) circulation list with `n_surf` entries. Wake circulation is stored in an `(M_star+1, N+1)` matrix, corresponding to the panel vertices of the wake. | | Return type: | tuple | ##### Linearised System Source Code[¶](#linearised-system-source-code) ###### Assembly of linearised UVLM system[¶](#assembly-of-linearised-uvlm-system) <NAME>, 25 May 2018 Includes: * Boundary conditions methods: + AICs: allocate aero influence coefficient matrices of multi-surface configurations + `nc_dqcdzeta_Sin_to_Sout`: derivative matrix of `nc*dQ/dzeta` where Q is the induced velocity at the bound collocation points of one surface to another.
+ `nc_dqcdzeta_coll`: assembles `nc_dqcdzeta_coll_Sin_to_Sout` matrices in multi-surface configurations + `uc_dncdzeta`: assemble derivative matrix dnc/dzeta*Uc at bound collocation points ####### AICs[¶](#module-sharpy.linear.src.assembly.AICs) Given a list of bound (Surfs) and wake (Surfs_star) instances of surface.AeroGridSurface, returns the list of AIC matrices in the format: > * AIC_list[ii][jj] contains the AIC from the bound surface Surfs[jj] to Surfs[ii]. > * AIC_star_list[ii][jj] contains the AIC from the wake surface Surfs[jj] to Surfs[ii]. ####### dfqsdgamma_vrel0[¶](#module-sharpy.linear.src.assembly.dfqsdgamma_vrel0) Assemble derivative of quasi-steady force w.r.t. gamma with fixed relative velocity - the changes in induced velocities due to gamma are not accounted for. The routine exploits the get_joukovski_qs method inside the AeroGridSurface class ####### dfqsduinput[¶](#module-sharpy.linear.src.assembly.dfqsduinput) Assemble derivative of quasi-steady force w.r.t. external input velocity. ####### dfqsdvind_gamma[¶](#module-sharpy.linear.src.assembly.dfqsdvind_gamma) Assemble derivative of quasi-steady force w.r.t. induced velocity changes due to gamma. Note: the routine is memory consuming but avoids unnecessary computations. ####### dfqsdvind_zeta[¶](#module-sharpy.linear.src.assembly.dfqsdvind_zeta) Assemble derivative of quasi-steady force w.r.t. induced velocity changes due to zeta. ####### dfqsdzeta_omega[¶](#module-sharpy.linear.src.assembly.dfqsdzeta_omega) Assemble derivative of quasi-steady force w.r.t. zeta. The contribution implemented is related to the omega x zeta term. call: Der_list = dfqsdzeta_omega(Surfs,Surfs_star) ####### dfqsdzeta_vrel0[¶](#module-sharpy.linear.src.assembly.dfqsdzeta_vrel0) Assemble derivative of quasi-steady force w.r.t. zeta with fixed relative velocity - the changes in induced velocities due to zeta over the surface inducing the velocity are not accounted for.
The routine exploits the available relative velocities at the mid-segment points ####### dfunstdgamma_dot[¶](#module-sharpy.linear.src.assembly.dfunstdgamma_dot) Computes derivative of unsteady aerodynamic force with respect to changes in circulation. Note: the function also checks that the first derivative of the circulation at the linearisation point is null. If not, a further contribution to the added mass, depending on the changes in panel area and normal, arises and needs to be implemented. ####### dvinddzeta[¶](#module-sharpy.linear.src.assembly.dvinddzeta) Produces derivatives of induced velocity by Surf_in w.r.t. the zetac point. Derivatives are divided into those associated to the movement of zetac, and to the movement of the Surf_in vertices (DerVert). If Surf_in is bound (IsBound==True), the circulation over the TE due to the wake is not included in the input. If Surf_in is a wake (IsBound==False), derivatives w.r.t. collocation points are computed and the TE contribution is included in DerVert. In this case, the chordwise paneling Min_bound of the associated input is required so as to calculate Kzeta and correctly allocate the derivative matrix. The output derivatives are: - Dercoll: 3 x 3 matrix - Dervert: 3 x 3*Kzeta (if Surf_in is a wake, Kzeta is that of the bound) Warning: zetac must be contiguously stored! ####### dvinddzeta_cpp[¶](#module-sharpy.linear.src.assembly.dvinddzeta_cpp) Used by autodoc_mock_imports. ####### eval_panel_cpp[¶](#module-sharpy.linear.src.assembly.eval_panel_cpp) Used by autodoc_mock_imports. ####### nc_domegazetadzeta[¶](#module-sharpy.linear.src.assembly.nc_domegazetadzeta) Produces a list of derivative matrices d(omega x zeta)/dzeta, where omega is the rotation speed of the A FoR, ASSUMING constant panel norm. Each list is such that: - the ii-th element is associated to the ii-th bound surface collocation point, and will contain a sub-list such that: > * the j-th element of the sub-list is the dAIC_dzeta matrices w.r.t.
the zeta d.o.f. of the j-th bound surface. Hence, DAIC*[ii][jj] will have size K_ii x Kzeta_jj call: ncDOmegaZetavert = nc_domegazetadzeta(Surfs,Surfs_star) ####### nc_dqcdzeta[¶](#module-sharpy.linear.src.assembly.nc_dqcdzeta) Produces a list of derivative matrices \[\frac{\partial(\mathcal{A}\boldsymbol{\Gamma}_0)}{\partial\boldsymbol{\zeta}}\] where \(\mathcal{A}\) is the aerodynamic influence coefficient matrix at the bound surface collocation points, assuming constant panel norm. Each list is such that: > * the `ii`-th element is associated to the `ii`-th bound surface collocation point, and will contain a sub-list such that: > + the `j`-th element of the sub-list is the `dAIC_dzeta` matrices w.r.t. the `zeta` d.o.f. of the `j`-th bound surface. Hence, `DAIC*[ii][jj]` will have size `K_ii x Kzeta_jj` If `Merge` is `True`, the derivatives due to collocation point movement are added to `Dvert` to minimise storage space. To do: > * Dcoll is highly sparse, exploit? ####### nc_dqcdzeta_Sin_to_Sout[¶](#module-sharpy.linear.src.assembly.nc_dqcdzeta_Sin_to_Sout) Computes derivative matrix of nc*dQ/dzeta where Q is the induced velocity induced by bound surface Surf_in onto bound surface Surf_out. The panel normals of Surf_out are constant. The input/output are: - Der_coll of size (Kout,3*Kzeta_out): derivative due to the movement of collocation points on Surf_out. - Der_vert of size: > * (Kout,3*Kzeta_in) if Surf_in_bound is True > * (Kout,3*Kzeta_bound_in) if Surf_in_bound is False; Kzeta_bound_in is the number of vertices in the bound surface of which Surf_in is the wake. Note that: - if Surf_in_bound is False, only the TE movement contributes to Der_vert. - if Surf_in_bound is False, the allocation of Der_coll could be sped up by scanning only the wake segments along the chordwise direction, as on the others the net circulation is null.
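The nested `[ii][jj]` block layout described above can be collapsed into a single monolithic derivative matrix with `numpy.block`; a minimal sketch with made-up surface sizes (the shapes and the all-ones blocks are purely illustrative, not SHARPy data):

```python
import numpy as np

# Two bound surfaces with K collocation points and Kzeta vertices each
K = [4, 6]        # collocation points per surface (illustrative)
Kzeta = [9, 12]   # vertices per surface (illustrative)

# DAIC[ii][jj] maps the zeta d.o.f. of surface jj to the collocation
# points of surface ii, hence each block has shape (K[ii], 3*Kzeta[jj])
DAIC = [[np.ones((K[ii], 3 * Kzeta[jj])) for jj in range(2)]
        for ii in range(2)]

# Stack the per-surface blocks into the global derivative matrix
Dglobal = np.block(DAIC)
print(Dglobal.shape)   # (10, 63)
```

Row `ii` of the global matrix collects the K_ii collocation-point rows, and the column blocks follow the surface ordering, consistent with the `K_ii x Kzeta_jj` sizing quoted above.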
####### test_wake_prop_term[¶](#module-sharpy.linear.src.assembly.test_wake_prop_term) Test allocation of a single term of the wake propagation matrix ####### uc_dncdzeta[¶](#module-sharpy.linear.src.assembly.uc_dncdzeta) Build derivative of \[\boldsymbol{u}_c\frac{\partial\boldsymbol{n}_c}{\partial\boldsymbol{\zeta}}\] where \(\boldsymbol{u}_c\) is the total velocity at the collocation points. | param Surf: | the input can also be a list of `surface.AerogridSurface` | | type Surf: | surface.AerogridSurface | References * `linear.develop_sym.linsum_Wnc` * `lib_ucdncdzeta` ####### wake_prop[¶](#module-sharpy.linear.src.assembly.wake_prop) Assembly of wake propagation matrices, in sparse or dense matrix format. Note: wake propagation matrices are very sparse. Nonetheless, allocation in dense format (from numpy.zeros) or sparse format does not make an important difference in terms of CPU time and memory used, as numpy.zeros does not allocate memory until it is accessed. ###### Mapping methods for bound surface panels[¶](#mapping-methods-for-bound-surface-panels) S. Maraniello, 19 May 2018 ####### AeroGridMap[¶](#aerogridmap) *class* `sharpy.linear.src.gridmapping.``AeroGridMap`(*M: number of chord-wise*, *N: number of span-wise*)[[source]](_modules/sharpy/linear/src/gridmapping.html#AeroGridMap)[¶](#sharpy.linear.src.gridmapping.AeroGridMap) Produces mapping between panels, segments and vertices of a surface. Grid elements are identified through the indices (m,n), where: > * m: chordwise index > * n: spanwise index The same indexing is applied to panels, vertices and segments. Elements: - panels=(M,N) - vertices=(M+1,N+1) - segments: these are divided in segments developing along the chordwise and spanwise directions. > * chordwise: (M,N+1) > * spanwise: (M+1,N) Mapping structures: - Mpv: for each panel (mp,np) returns the chord/span-wise indices of its vertices, (mv,nv). This has size (M,N,4,2) - Mps: maps each panel (mp,np) to the ii-th segment.
This has size (M,N,4,2) Note: - mapping matrices are stored as np.int16 or np.int32 arrays `from_panel_to_segments`(*m: chordwise index*, *n: spanwise index*)[[source]](_modules/sharpy/linear/src/gridmapping.html#AeroGridMap.from_panel_to_segments)[¶](#sharpy.linear.src.gridmapping.AeroGridMap.from_panel_to_segments) For each panel (m,n) it provides the ms,ns indices of each segment. `from_panel_to_vertices`(*m: chordwise index*, *n: spanwise index*)[[source]](_modules/sharpy/linear/src/gridmapping.html#AeroGridMap.from_panel_to_vertices)[¶](#sharpy.linear.src.gridmapping.AeroGridMap.from_panel_to_vertices) From the panel of indices (m,n) to the indices of its vertices `from_vertex_to_panel`(*m: chordwise index*, *n: spanwise index*)[[source]](_modules/sharpy/linear/src/gridmapping.html#AeroGridMap.from_vertex_to_panel)[¶](#sharpy.linear.src.gridmapping.AeroGridMap.from_vertex_to_panel) Returns the panels for which the vertex is locally numbered as 0,1,2,3. Returns a (4,2) array such that its elements are: > [vv_local,(m,n) of panel] where vv_local is the local vertex number. Important: indices -1 are possible if the vertex does not have local index 0,1,2 or 3 with respect to any panel. `map_panels_to_segments`()[[source]](_modules/sharpy/linear/src/gridmapping.html#AeroGridMap.map_panels_to_segments)[¶](#sharpy.linear.src.gridmapping.AeroGridMap.map_panels_to_segments) Mapping from panels to segments. self.Mps is a (M,N,4,2) array such that: > [m, n, local_segment_number, chordwise/spanwise index of segment] `map_panels_to_vertices`()[[source]](_modules/sharpy/linear/src/gridmapping.html#AeroGridMap.map_panels_to_vertices)[¶](#sharpy.linear.src.gridmapping.AeroGridMap.map_panels_to_vertices) Mapping from panels to vertices.
self.Mpv is a (M,N,4,2) array such that its elements are: > [m, n, local_vertex_number, spanwise/chordwise indices of vertex] `map_panels_to_vertices_1D_scalar`()[[source]](_modules/sharpy/linear/src/gridmapping.html#AeroGridMap.map_panels_to_vertices_1D_scalar)[¶](#sharpy.linear.src.gridmapping.AeroGridMap.map_panels_to_vertices_1D_scalar) Mapping: - FROM: the index of a scalar quantity defined at the panel collocation points and stored in a 1D array. - TO: the index of a scalar quantity defined at the vertices and stored in a 1D array. The Mpv1d_scalar has size (K,4) where: [1d index of panel, index of vertex 0,1,2 or 3] `map_vertices_to_panels`()[[source]](_modules/sharpy/linear/src/gridmapping.html#AeroGridMap.map_vertices_to_panels)[¶](#sharpy.linear.src.gridmapping.AeroGridMap.map_vertices_to_panels) Maps from vertices to panels. Produces a (M+1,N+1,4,2) array, associating vertices to panels. Its elements are: > [m vertex, > n vertex, > vertex local index, > chordwise/spanwise panel indices] `map_vertices_to_panels_1D_scalar`()[[source]](_modules/sharpy/linear/src/gridmapping.html#AeroGridMap.map_vertices_to_panels_1D_scalar)[¶](#sharpy.linear.src.gridmapping.AeroGridMap.map_vertices_to_panels_1D_scalar) Mapping: - FROM: the index of a scalar quantity defined at the vertices and stored in a 1D array. - TO: the index of a scalar quantity defined at the panels and stored in a 1D array. The Mpv1d_scalar has size (Kzeta,4) where: [1d index of vertex, index of vertex 0,1,2 or 3 w.r.t.
panel] ###### Defines interpolation methods (geometrically-exact) and matrices (linearisation)[¶](#defines-interpolation-methods-geometrically-exact-and-matrices-linearisation) Defines interpolation methods (geometrically-exact) and matrices (linearisation) <NAME>, 20 May 2018 ####### get_Wnv_vector[¶](#module-sharpy.linear.src.interp.get_Wnv_vector) Provides the projection matrix from nodal velocities to normal velocity at collocation points ####### get_Wvc_scalar[¶](#module-sharpy.linear.src.interp.get_Wvc_scalar) Produces the scalar interpolation matrix Wvc for state-space realisation. Important: this will not work for coordinates extrapolation, as it would require information about the panel size. It works for the extrapolation of forces and other scalar quantities. It assumes the quantity at the collocation point is determined proportionally to the weight associated to each vertex and obtained through get_panel_wcv. ####### get_panel_wcv[¶](#module-sharpy.linear.src.interp.get_panel_wcv) Produces a compact array with weights for bilinear interpolation, where aN,aM in [0,1] are distances in the chordwise and spanwise directions such that: > * (aM,aN)=(0,0) –> quantity at vertex 0 > * (aM,aN)=(1,0) –> quantity at vertex 1 > * (aM,aN)=(1,1) –> quantity at vertex 2 > * (aM,aN)=(0,1) –> quantity at vertex 3 ###### Induced Velocity Derivatives[¶](#induced-velocity-derivatives) Calculate derivatives of induced velocity. Methods: * eval_seg_exp and eval_seg_exp_loop: provide derivatives in the format [Q_{x,y,z},ZetaPoint_{x,y,z}] and use the fully-expanded analytical formula. * eval_panel_exp: iterates through the whole panel * eval_seg_comp and eval_seg_comp_loop: provide derivatives in the format [Q_{x,y,z},ZetaPoint_{x,y,z}] and use the compact analytical formula. ####### Dvcross_by_skew3d[¶](#module-sharpy.linear.src.lib_dbiot.Dvcross_by_skew3d) Used by autodoc_mock_imports. ####### eval_panel_comp[¶](#module-sharpy.linear.src.lib_dbiot.eval_panel_comp) Used by autodoc_mock_imports.
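The eval_seg_* routines above differentiate the induced velocity of a straight vortex segment with respect to the evaluation point coordinates. As an illustrative sketch only (the helper names below are made up, and this is not the optimised lib_dbiot implementation), the Biot-Savart velocity being differentiated and a finite-difference version of the [Q_{x,y,z}, ZetaPoint_{x,y,z}] derivative can be written as:

```python
import numpy as np

def biot_segment(zeta_p, zeta_a, zeta_b, gamma=1.0):
    """Induced velocity at zeta_p due to a straight vortex segment from
    zeta_a to zeta_b with circulation gamma (standard Biot-Savart law)."""
    ra = zeta_p - zeta_a
    rb = zeta_p - zeta_b
    cross = np.cross(ra, rb)
    rab = zeta_b - zeta_a
    return (gamma / (4.0 * np.pi) * cross / np.dot(cross, cross)
            * np.dot(rab, ra / np.linalg.norm(ra) - rb / np.linalg.norm(rb)))

def der_wrt_point(zeta_p, zeta_a, zeta_b, step=1e-6):
    """Central finite-difference derivative d u_ind / d zeta_p, returned
    as a 3x3 array in [velocity component, point coordinate] format."""
    der = np.zeros((3, 3))
    for jj in range(3):
        dz = np.zeros(3)
        dz[jj] = step
        der[:, jj] = (biot_segment(zeta_p + dz, zeta_a, zeta_b)
                      - biot_segment(zeta_p - dz, zeta_a, zeta_b)) / (2.0 * step)
    return der
```

In the limit of a very long segment this recovers the 2D vortex result u = gamma/(2*pi*h) at distance h, which gives a cheap sanity check of both the velocity and its derivative.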
####### eval_panel_cpp[¶](#module-sharpy.linear.src.lib_dbiot.eval_panel_cpp) Used by autodoc_mock_imports. ####### eval_panel_exp[¶](#module-sharpy.linear.src.lib_dbiot.eval_panel_exp) Used by autodoc_mock_imports. ####### eval_panel_fast[¶](#module-sharpy.linear.src.lib_dbiot.eval_panel_fast) Used by autodoc_mock_imports. ####### eval_panel_fast_coll[¶](#module-sharpy.linear.src.lib_dbiot.eval_panel_fast_coll) Used by autodoc_mock_imports. ####### eval_seg_comp_loop[¶](#module-sharpy.linear.src.lib_dbiot.eval_seg_comp_loop) Used by autodoc_mock_imports. ####### eval_seg_exp[¶](#module-sharpy.linear.src.lib_dbiot.eval_seg_exp) Used by autodoc_mock_imports. ####### eval_seg_exp_loop[¶](#module-sharpy.linear.src.lib_dbiot.eval_seg_exp_loop) Used by autodoc_mock_imports. ###### Induced Velocity Derivatives with respect to Panel Normal[¶](#induced-velocity-derivatives-with-respect-to-panel-normal) Calculate the derivative of > \[\boldsymbol{u}_c\frac{\partial\boldsymbol{n}_c}{\partial\boldsymbol{\zeta}}\] with respect to local panel coordinates. ####### eval[¶](#module-sharpy.linear.src.lib_ucdncdzeta.eval) Returns a 4 x 3 array, containing the derivative of Wnc*Uc w.r.t. the panel vertices' coordinates. ###### Fitting Tools Library[¶](#fitting-tools-library) @author: <NAME> @date: 15 Jan 2018 ####### fitfrd[¶](#module-sharpy.linear.src.libfit.fitfrd) Wrapper for the fitfrd (mag=0) and fitfrdmag (mag=1) functions in continuous and discrete time (if ds is given in input). Input: > kv,yv: frequency array and frequency response > N: order for rational function approximation > mag=1,0: flag for determining the method to use > dt (optional): sampling time for DLTI systems ####### get_rfa_res[¶](#module-sharpy.linear.src.libfit.get_rfa_res) Returns the magnitude of the residual Yfit-Yv of an RFA approximation at each point kv. The coefficients of the approximations are: - cnum=xv[:Nnum] - cden=xv[Nnum:] where cnum and cden are as per the ‘rfa’ function.
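The rational-function approximation whose residual get_rfa_res measures can be evaluated with a short numpy sketch. This is a hypothetical helper, not the libfit implementation; it only assumes the conventions stated in these docstrings: coefficients ordered highest power first (as numpy.polyval expects) and z = exp(1j*kv*ds) in discrete time.

```python
import numpy as np

def rfa_eval(cnum, cden, kv, ds=None):
    """Evaluate the rational function cnum(z)/cden(z) over frequencies kv.

    Coefficients are ordered highest power first, as in numpy.polyval.
    """
    kv = np.asarray(kv)
    if ds is not None:
        zv = np.exp(1j * kv * ds)   # discrete time: z on the unit circle
    else:
        zv = 1.0 * kv               # continuous time, per the rfa docstring
    return np.polyval(cnum, zv) / np.polyval(cden, zv)

def rfa_res(cnum, cden, kv, Yv, ds=None):
    """Magnitude of the residual Yfit - Yv at each point of kv."""
    return np.abs(rfa_eval(cnum, cden, kv, ds) - Yv)
```

For instance, with cnum=[1.0] and cden=[1.0, 0.5] this evaluates G(z) = 1/(z + 0.5), and the residual against its own samples is identically zero.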
####### get_rfa_res_norm[¶](#module-sharpy.linear.src.libfit.get_rfa_res_norm) Define the residual scalar norm of a Pade approximation with coefficients cnum=xv[:Nnum] and cden=xv[Nnum:] (see the get_rfa_res and rfa functions) and time-step ds (if discrete time). ####### poly_fit[¶](#module-sharpy.linear.src.libfit.poly_fit) Find the best second-order fitting polynomial from the frequency response Yv over the frequency range kv for both continuous (ds=None) and discrete (ds>0) LTI systems. Input: - kv: frequency points - Yv: frequency response - dyv,ddyv: frequency responses of the first and second order derivatives - method=’leastsq’,’dev’: algorithm for minimisation - Bup (only ‘dev’ method): bounds for the bv coefficients as per scipy.optimize.differential_evolution. This is a length 3 array. Important: - this function attributes equal weight to each data point! ####### rfa[¶](#module-sharpy.linear.src.libfit.rfa) Evaluates, over the frequency range kv, the rational function approximation: [cnum[-1] + cnum[-2] z + … + cnum[0] z**Nnum ]/… > [cden[-1] + cden[-2] z + … + cden[0] z**Nden] where the numerator and denominator polynomial orders, Nnum and Nden, are the lengths of the cnum and cden arrays and: > * z=exp(1.j*kv*ds), with ds the sampling time, if ds is given (discrete-time > system) > * z=1.*kv, if ds is None (continuous time system) ####### rfa_fit_dev[¶](#module-sharpy.linear.src.libfit.rfa_fit_dev) Find the best fitting RFA approximation from the frequency response Yv over the frequency range kv for both continuous (ds=None) and discrete (ds>0) LTI systems. The RFA approximation is found through a 2-stage strategy: a. an evolutionary algorithm is run to determine the optimal fitting coefficients b. the search is refined through a least squares algorithm. The search is stopped as soon as: 1. the maximum absolute error in the frequency response of the RFA falls below `TolAbs` 2. the maximum number of iterations is reached.
Input: - kv: frequency range for approximation - Yv: frequency response vector over kv - TolAbs: maximum admissible absolute difference in frequency response between the RFA and the original system. - Nnum,Nden: number of coefficients for the Pade approximation. - ds: sampling time for DLTI systems - NtrialMax: maximum number of repetitions of the global and least squares optimisations - Cfbouds: maximum absolute values of the coefficients (only for the evolutionary algorithm) - OutFull: if False, only outputs the optimal coefficients of the RFA. Otherwise, > outputs the cost and RFA coefficients of each trial. Output: - cnopt: optimal coefficients (numerator) - cdopt: optimal coefficients (denominator) Important: - this function has the same objective as fitfrd in the matwrapper module. While generally slower, the global optimisation approach allows one to verify the results from fitfrd. ####### rfa_mimo[¶](#module-sharpy.linear.src.libfit.rfa_mimo) Given the frequency response of a MIMO DLTI system, this function returns the A,B,C,D matrices associated to the rational function approximation of the original system. Input: - Yfull: frequency response (as per libss.freqresp) of the full size system over the frequencies kv. - kv: array of frequencies over which the RFA approximation is evaluated. - tolAbs: absolute tolerance for the rfa fitting - Nnum: number of numerator coefficients for the RFA - Nden: number of denominator coefficients for the RFA - NtrialMax: maximum number of attempts - method=[‘intependent’]: method used to produce the system. > * intependent: each input-output combination is treated separately.
The resulting system is a collection of independent SISO DLTIs ####### rfader[¶](#module-sharpy.linear.src.libfit.rfader) Evaluates, over the frequency range kv, the derivative of order m of the rational function approximation: [cnum[-1] + cnum[-2] z + … + cnum[0] z**Nnum ]/… > [cden[-1] + cden[-2] z + … + cden[0] z**Nden] where the numerator and denominator polynomial orders, Nnum and Nden, are the lengths of the cnum and cden arrays and: > * z=exp(1.j*kv*ds), with ds the sampling time, if ds is given (discrete-time > system) > * z=1.*kv, if ds is None (continuous time system) ###### Collect tools to manipulate sparse and/or mixed dense/sparse matrices.[¶](#collect-tools-to-manipulate-sparse-and-or-mixed-dense-sparse-matrices) Collect tools to manipulate sparse and/or mixed dense/sparse matrices. author: <NAME> date: Dec 2018 Comment: manipulating large linear systems may require using both dense and sparse matrices. While numpy/scipy automatically handle most operations between mixed dense/sparse arrays, some (e.g. the dot product) require more attention. This library collects methods to handle these situations. Classes: scipy.sparse matrices are wrapped so as to ensure compatibility with numpy arrays upon conversion to dense. - csc_matrix: this is a wrapper of scipy.csc_matrix. - SupportedTypes: types supported for operations - WarningTypes: due to some bugs in scipy (v.1.1.0), sum (+) operations between np.ndarray and scipy.sparse matrices can result in numpy.matrixlib.defmatrix.matrix types. This list contains such undesired types, which can result from dense/sparse operations, and is used to (a) raise a warning if required and (b) convert these types into numpy.ndarrays. Methods: - dot: handles matrix dot products across different types. - solve: solves linear systems Ax=b with A and b dense, sparse or mixed. - dense: convert matrix to numpy array Warning: - only the sparse types in SupportedTypes are supported! To Do: - move these methods into an algebra module?
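The mixed dense/sparse product rule described here (result sparse only when both operands are sparse, unless an output type is forced) can be sketched with plain numpy/scipy. This is illustrative code following the stated behaviour, not the libsparse implementation:

```python
import numpy as np
import scipy.sparse as sp

def dot(A, B, type_out=None):
    """Compute C = A*B for dense/sparse/mixed operands.

    C is sparse only if both A and B are sparse, unless type_out forces
    a specific format (np.ndarray or sp.csc_matrix).
    """
    both_sparse = sp.issparse(A) and sp.issparse(B)
    if sp.issparse(A):
        C = A.dot(B)            # sparse @ (dense or sparse)
    elif sp.issparse(B):
        C = (B.T.dot(A.T)).T    # dense @ sparse, via the transposed product
    else:
        C = A.dot(B)            # plain numpy product
    if type_out is None:
        type_out = sp.csc_matrix if both_sparse else np.ndarray
    if type_out is np.ndarray:
        # np.asarray also normalises any stray numpy.matrix result
        return C.toarray() if sp.issparse(C) else np.asarray(C)
    return sp.csc_matrix(C)
```

Routing the dense @ sparse case through the transposed sparse product keeps the dispatch explicit and avoids relying on numpy's operator fallback for sparse right operands.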
####### block_dot[¶](#module-sharpy.linear.src.libsparse.block_dot) Dot product between block matrices. Inputs: A, B: nested lists of dense/sparse matrices of compatible shape for the block matrix product. Empty blocks can be defined with None. (see numpy.block) ####### block_sum[¶](#module-sharpy.linear.src.libsparse.block_sum) Sum of block matrices. Inputs: A, B: nested lists of dense/sparse matrices of compatible shape for the block matrix sum. Empty blocks can be defined with None. (see numpy.block) ####### csc_matrix[¶](#csc-matrix) *class* `sharpy.linear.src.libsparse.``csc_matrix`(*arg1*, *shape=None*, *dtype=None*, *copy=False*)[[source]](_modules/sharpy/linear/src/libsparse.html#csc_matrix)[¶](#sharpy.linear.src.libsparse.csc_matrix) Wrapper of scipy.csc_matrix that ensures best compatibility with numpy.ndarray. The following methods have been overwritten to ensure that numpy.ndarray are returned instead of numpy.matrixlib.defmatrix.matrix. > * todense > * _add_dense Warning: this format is memory-inefficient for allocating new sparse matrices. Consider using: - scipy.sparse.lil_matrix, which supports slicing, or - scipy.sparse.coo_matrix, though slicing is not supported :( `todense`()[[source]](_modules/sharpy/linear/src/libsparse.html#csc_matrix.todense)[¶](#sharpy.linear.src.libsparse.csc_matrix.todense) As per scipy.spmatrix.todense but returns a numpy.ndarray. ####### dense[¶](#module-sharpy.linear.src.libsparse.dense) If required, converts a sparse array to dense. ####### dot[¶](#module-sharpy.linear.src.libsparse.dot) Method to compute C = A*B, where * is the matrix product, with dense/sparse/mixed matrices. The format (sparse or dense) of C is specified through ‘type_out’. If type_out==None, the output format is sparse if both A and B are sparse, dense otherwise.
The following formats are supported: - numpy.ndarray - scipy.csc_matrix ####### eye_as[¶](#module-sharpy.linear.src.libsparse.eye_as) Produces an identity matrix as per M, in shape and type ####### solve[¶](#module-sharpy.linear.src.libsparse.solve) Wrapper of numpy.linalg.solve and scipy.sparse.linalg.spsolve for the solution of the linear system A x = b. - if A is a dense numpy array, np.linalg.solve is called for the solution. Note that if B is sparse, this requires conversion to dense. In this case, solution through LU factorisation of A should be considered to exploit the sparsity of B. - if A is sparse, scipy.sparse.linalg.spsolve is used. ####### zeros_as[¶](#module-sharpy.linear.src.libsparse.zeros_as) Produces a zero matrix as per M, in shape and type ###### Linear Time Invariant systems[¶](#linear-time-invariant-systems) Linear Time Invariant systems author: <NAME> date: 15 Sep 2017 (still basement…) Library of methods to build/manipulate state-space models. The module supports the sparse array types defined in libsparse. The module includes: Classes: - ss: provides a class to build DLTI/LTI systems with full and/or sparse > matrices and wraps many of the methods in this library. Methods include: > - freqresp: wraps the freqresp function > - addGain: adds gains in input/output. This is not a wrapper of addGain, as > the system matrices are overwritten Methods for state-space manipulation: - couple: feedback coupling. Does not support sparsity - freqresp: calculate frequency response. Supports sparsity. - series: series connection between systems - parallel: parallel connection between systems - SSconv: convert state-space model with predictions and delays - addGain: add gains to state-space model. - join2: merge two state-space models into one. - join: merge a list of state-space models into one.
- sum: sum state-space models and/or gains - scale_SS: scale state-space model - simulate: simulates discrete time solution - Hnorm_from_freq_resp: compute the H norm of a frequency response - adjust_phase: remove discontinuities from a frequency response Special Models: - SSderivative: produces the DLTI of a numerical derivative scheme - SSintegr: produces the DLTI of an integration scheme - build_SS_poly: build state-space model with polynomial terms. Filtering: - butter Utilities: - get_freq_from_eigs: calculate the frequency corresponding to eigenvalues Comments: - the module supports sparse matrices hence relies on libsparse. to do: * remove unnecessary coupling routines * couple function can handle sparse matrices but only outputs dense matrices + verify if typical coupled systems are sparse + update routine + add method to automatically determine whether to use sparse or dense? ####### Hnorm_from_freq_resp[¶](#module-sharpy.linear.src.libss.Hnorm_from_freq_resp) Given a frequency response over a domain kv, this function computes the H norms through numerical integration. Note that if kv[-1]<np.pi/dt, the method assumes gv=0 for each frequency kv[-1]<k<np.pi/dt. Warning: only use for SISO systems! For MIMO the definitions are different ####### SSconv[¶](#module-sharpy.linear.src.libss.SSconv) Convert a DLTI system with prediction and delay of the form: > \[\begin{split}\mathbf{x}_{n+1} &= \mathbf{A\,x}_n + \mathbf{B_0\,u}_n + \mathbf{B_1\,u}_{n+1} + \mathbf{B_{m1}\,u}_{n-1} \\ > \mathbf{y}_n &= \mathbf{C\,x}_n + \mathbf{D\,u}_n\end{split}\] into the state-space form: > \[\begin{split}\mathbf{h}_{n+1} &= \mathbf{A_h\,h}_n + \mathbf{B_h\,u}_n \\ > \mathbf{y}_n &= \mathbf{C_h\,h}_n + \mathbf{D_h\,u}_n\end{split}\] If \(\mathbf{B_{m1}}\) is `None`, the original state is retrieved through > \[\mathbf{x}_n = \mathbf{h}_n + \mathbf{B_1\,u}_n\] and only the \(\mathbf{B}\) and \(\mathbf{D}\) matrices are modified.
If \(\mathbf{B_{m1}}\) is not `None`, the SS is augmented with the new state > \[\mathbf{g}_{n} = \mathbf{u}_{n-1}\] or, equivalently, with the equation > \[\mathbf{g}_{n+1} = \mathbf{u}_n\] leading to the new form > \[\begin{split}\mathbf{H}_{n+1} &= \mathbf{A_A\,H}_{n} + \mathbf{B_B\,u}_n \\ > \mathbf{y}_n &= \mathbf{C_C\,H}_{n} + \mathbf{D_D\,u}_n\end{split}\] where \(\mathbf{H} = (\mathbf{x},\,\mathbf{g})\). | param A: | dynamics matrix | | type A: | np.ndarray | | param B0: | input matrix for input at current time step `n`. Set to None if this is zero. | | type B0: | np.ndarray | | param B1: | input matrix for input at time step `n+1` (predictor term) | | type B1: | np.ndarray | | param C: | output matrix | | type C: | np.ndarray | | param D: | direct matrix | | type D: | np.ndarray | | param Bm1: | input matrix for input at time step `n-1` (delay term) | | type Bm1: | np.ndarray | | returns: | tuple packed with the state-space matrices \(\mathbf{A},\,\mathbf{B},\,\mathbf{C}\) and \(\mathbf{D}\). | | rtype: | tuple | References Franklin, GF and Powell, JD. Digital Control of Dynamic Systems, Addison-Wesley Publishing Company, 1980 Warning functions untested for delays (Bm1 != 0) ####### SSderivative[¶](#module-sharpy.linear.src.libss.SSderivative) Given a time-step ds and a single input time history u, this SS model returns the output y=[u,du/ds], where du/ds is computed with second-order accuracy. ####### SSintegr[¶](#module-sharpy.linear.src.libss.SSintegr) Builds a state-space model of an integrator. * method: numerical scheme. Available options are: + 1tay: 1st order Taylor (fwd) I[ii+1,:]=I[ii,:] + ds*F[ii,:] + trap: I[ii+1,:]=I[ii,:] + 0.5*ds*(F[ii,:]+F[ii+1,:]) Note: other options can be constructed if information on the derivative of F is available. ####### addGain[¶](#module-sharpy.linear.src.libss.addGain) Convert the input u or output y of a SS DLTI system through the gain matrix K.
We have the following transformations: * where=’in’: the input dof of the state-space are changed: > u_new -> u=Kmat*u_new -> SS -> y => u_new -> SSnew -> y * where=’out’: the output dof of the state-space are changed: > u -> SS -> y -> y_new=Kmat*y => u -> SSnew -> y_new * where=’parallel’: the input dofs are augmented, but not the output: > {u_1 -> SS -> y_1 > {u_2 -> y_2=Kmat*u_2 => u_new=(u_1,u_2) -> SSnew -> y=y_1+y_2 Warning: function not tested for Kmat stored in sparse format ####### adjust_phase[¶](#module-sharpy.linear.src.libss.adjust_phase) Modify the phase y of a frequency response to remove discontinuities. ####### build_SS_poly[¶](#module-sharpy.linear.src.libss.build_SS_poly) Builds a discrete-time state-space representation of a polynomial system whose frequency response has the form: > Ypoly[oo,ii](k) = -A2[oo,ii] D2(k) - A1[oo,ii] D1(k) - A0[oo,ii] where D1,D2 are discrete-time models of the first and second derivatives, ds is the time-step and the coefficient matrices are such that: > A{nn}=Acf[oo,ii,nn] ####### butter[¶](#module-sharpy.linear.src.libss.butter) Build a MIMO Butterworth filter of order ord and cut-off frequency over Nyquist frequency ratio Wn. The filter will have N inputs, N outputs and N*ord states. Note: the state-space form of the digital filter does not depend on the sampling time, but only on the Wn ratio. As a result, this function only returns the A,B,C,D matrices of the filter state-space form. ####### compare_ss[¶](#module-sharpy.linear.src.libss.compare_ss) Assert that the matrices of two state-space models are identical ####### couple[¶](#module-sharpy.linear.src.libss.couple) Couples two DLTI systems ss01 and ss02 through the gains K12 and K21, where K12 transforms the output of ss02 into an input of ss01.
Other inputs: - out_sparse: if True, the output system is stored as sparse (not recommended) ####### freqresp[¶](#module-sharpy.linear.src.libss.freqresp) In-house frequency response function supporting dense/sparse types Inputs: - SS: instance of the ss class, or scipy.signal.StateSpace* - wv: frequency range - dlti: True if a discrete-time system is considered. Outputs: - Yfreq[outputs,inputs,len(wv)]: frequency response over wv Warnings: - This function may not be very efficient for dense matrices (as A is not reduced to upper Hessenberg form), but can exploit sparsity in the state-space matrices. ####### get_freq_from_eigs[¶](#module-sharpy.linear.src.libss.get_freq_from_eigs) Compute the natural frequencies corresponding to the eigenvalues, eigs, of a continuous or discrete-time (dlti=True) system. Note: if dlti=True, the frequency is normalised by (1./dt), where dt is the DLTI time-step - i.e. the frequency in Hertz is obtained by multiplying fn by (1./dt). ####### join[¶](#module-sharpy.linear.src.libss.join) Given a list of state-space models belonging to the ss class, creates a joined system whose output is the sum of the state-space outputs. If wv is not None, this is a list of weights, such that the output is: > y = sum( wv[ii] y_ii ) Ref: equation (4.22) of <NAME>., <NAME>. & <NAME>., 2015. A Survey of Projection-Based Model Reduction Methods for Parametric Dynamical Systems. SIAM Review, 57(4), pp.483–531. Warning: - system matrices must be numpy arrays - the function does not perform any check! ####### join2[¶](#module-sharpy.linear.src.libss.join2) Join two state-spaces or gain matrices such that, given: > \[\begin{split}\mathbf{u}_1 \longrightarrow &\mathbf{SS}_1 \longrightarrow \mathbf{y}_1 \\ > \mathbf{u}_2 \longrightarrow &\mathbf{SS}_2 \longrightarrow \mathbf{y}_2\end{split}\] we obtain: > \[\mathbf{u} \longrightarrow \mathbf{SS}_{TOT} \longrightarrow \mathbf{y}\] with \(\mathbf{u}=(\mathbf{u}_1,\mathbf{u}_2)^T\) and \(\mathbf{y}=(\mathbf{y}_1,\mathbf{y}_2)^T\).
The output \(\mathbf{SS}_{TOT}\) is either a gain matrix or a state-space system according to the input \(\mathbf{SS}_1\) and \(\mathbf{SS}_2\) | param SS1: | State space 1 or gain 1 | | type SS1: | scsig.StateSpace or np.ndarray | | param SS2: | State space 2 or gain 2 | | type SS2: | scsig.StateSpace or np.ndarray | | returns: | combined state space or gain matrix | | rtype: | scsig.StateSpace or np.ndarray | ####### parallel[¶](#module-sharpy.linear.src.libss.parallel) Returns the sum (or parallel connection) of two systems. Given two state-space models with the same output, but different input: > u1 –> SS01 –> y > u2 –> SS02 –> y ####### project[¶](#module-sharpy.linear.src.libss.project) Given 2 transformation matrices, (WT,V) of shapes (Nk,self.states) and (self.states,Nk) respectively, this routine returns a projection of the state space ss_here according to: > Anew = WT A V > Bnew = WT B > Cnew = C V > Dnew = D The projected model has the same number of inputs/outputs as the original one, but Nk states. ####### random_ss[¶](#module-sharpy.linear.src.libss.random_ss) Define a random system from the number of states (Nx), inputs (Nu) and outputs (Ny). ####### scale_SS[¶](#module-sharpy.linear.src.libss.scale_SS) Given a state-space system, scales the equations such that the original input and output, \(u\) and \(y\), are substituted by \(u_{AD}=\frac{u}{u_{ref}}\) and \(y_{AD}=\frac{y}{y_{ref}}\).
If the original system has form: > \[\begin{split}\mathbf{x}^{n+1} &= \mathbf{A\,x}^n + \mathbf{B\,u}^n \\ > \mathbf{y}^{n} &= \mathbf{C\,x}^{n} + \mathbf{D\,u}^n\end{split}\] the transformation is such that: > \[\begin{split}\mathbf{x}^{n+1} &= \mathbf{A\,x}^n + \mathbf{B}\,\frac{u_{ref}}{x_{ref}}\mathbf{u_{AD}}^n \\ > \mathbf{y_{AD}}^{n+1} &= \frac{1}{y_{ref}}(\mathbf{C}\,x_{ref}\,\mathbf{x}^{n+1} + \mathbf{D}\,u_{ref}\,\mathbf{u_{AD}}^n)\end{split}\] By default, the state-space model is manipulated by reference (`byref=True`) | param SSin: | original state-space formulation | | type SSin: | scsig.dlti | | param input_scal: | | | input scaling factor \(u_{ref}\). It can be a float or an array, in which case each element of the input vector will be scaled by a different factor. | | type input_scal: | | | float or np.ndarray | | param output_scal: | | | output scaling factor \(y_{ref}\). It can be a float or an array, in which case each element of the output vector will be scaled by a different factor. | | type output_scal: | | | float or np.ndarray | | param state_scal: | | | state scaling factor \(x_{ref}\). It can be a float or an array, in which case each element of the state vector will be scaled by a different factor. | | type state_scal: | | | float or np.ndarray | | param byref: | state space manipulation order | | type byref: | bool | | returns: | scaled state space formulation | | rtype: | scsig.dlti | ####### series[¶](#module-sharpy.linear.src.libss.series) Connects two state-space blocks in series. If these are instances of DLTI state-space systems, they need to have the same type and time-step. If the input systems are sparse, they are converted to dense. The connection is such that: \[u \rightarrow \mathsf{SS01} \rightarrow \mathsf{SS02} \rightarrow y \Longrightarrow u \rightarrow \mathsf{SStot} \rightarrow y\] | param SS01: | State Space 1 instance. Can be DLTI/CLTI, dense or sparse.
| | type SS01: | libss.ss | | param SS02: | State Space 2 instance. Can be DLTI/CLTI, dense or sparse. | | type SS02: | libss.ss | Returns libss.ss: Combined state space system in series in dense format. ####### simulate[¶](#module-sharpy.linear.src.libss.simulate) Routine to simulate the response to a generic input. @warning: this routine is for testing and may lack robustness. Use > scipy.signal instead. ####### ss[¶](#ss) *class* `sharpy.linear.src.libss.``ss`(*A*, *B*, *C*, *D*, *dt=None*)[[source]](_modules/sharpy/linear/src/libss.html#ss)[¶](#sharpy.linear.src.libss.ss) Wraps state-space model allocation into a single class and supports both full and sparse matrices. The class emulates > scipy.signal.ltisys.StateSpaceContinuous > scipy.signal.ltisys.StateSpaceDiscrete but supports sparse matrices and other functionalities. Methods: - get_mats: return matrices as tuple - check_types: check matrices types are supported - freqresp: calculate frequency response over range. - addGain: project inputs/outputs - scale: allows scaling a system `addGain`(*K*, *where*)[[source]](_modules/sharpy/linear/src/libss.html#ss.addGain)[¶](#sharpy.linear.src.libss.ss.addGain) Projects the input u or output y of the state-space system through the gain matrix K. The input ‘where’ determines whether inputs or outputs are projected as: > * where=’in’: inputs are projected such that: > u_new -> u=K*u_new -> SS -> y => u_new -> SSnew -> y > * where=’out’: outputs are projected such that: > u -> SS -> y -> y_new=K*y => u -> SSnew -> y_new Warning: this is not a wrapper of the addGain method in this module, as the state-space matrices are directly overwritten. `freqresp`(*wv*)[[source]](_modules/sharpy/linear/src/libss.html#ss.freqresp)[¶](#sharpy.linear.src.libss.ss.freqresp) Calculate the frequency response over the frequencies wv Note: this wraps the frequency response function. `inputs`[¶](#sharpy.linear.src.libss.ss.inputs) Number of inputs \(m\) to the system.
`max_eig`()[[source]](_modules/sharpy/linear/src/libss.html#ss.max_eig)[¶](#sharpy.linear.src.libss.ss.max_eig) Returns the most unstable eigenvalue `outputs`[¶](#sharpy.linear.src.libss.ss.outputs) Number of outputs \(p\) of the system. `project`(*WT*, *V*)[[source]](_modules/sharpy/linear/src/libss.html#ss.project)[¶](#sharpy.linear.src.libss.ss.project) Given 2 transformation matrices, (WT,V) of shapes (Nk,self.states) and (self.states,Nk) respectively, this routine projects the state space model states according to: > Anew = WT A V > Bnew = WT B > Cnew = C V > Dnew = D The projected model has the same number of inputs/outputs as the original one, but Nk states. `scale`(*input_scal=1.0*, *output_scal=1.0*, *state_scal=1.0*)[[source]](_modules/sharpy/linear/src/libss.html#ss.scale)[¶](#sharpy.linear.src.libss.ss.scale) Given a state-space system, scales the equations such that the original state, input and output, (x, u and y), are substituted by > xad=x/state_scal > uad=u/input_scal > yad=y/output_scal The entries input_scal/output_scal/state_scal can be: * floats: in this case all inputs/outputs are scaled by the same value * lists/arrays of length Nin/Nout: in this case each dof will be scaled by a different factor If the original system has form: > xnew=A*x+B*u > y=C*x+D*u the transformation is such that: > xnew=A*x+(B*uref/xref)*uad > yad=1/yref( C*xref*x+D*uref*uad ) `states`[¶](#sharpy.linear.src.libss.ss.states) Number of states \(n\) of the system. `truncate`(*N*)[[source]](_modules/sharpy/linear/src/libss.html#ss.truncate)[¶](#sharpy.linear.src.libss.ss.truncate) Retains only the first N states. ####### ss_block[¶](#ss-block) *class* `sharpy.linear.src.libss.``ss_block`(*A*, *B*, *C*, *D*, *S_states*, *S_inputs*, *S_outputs*, *dt=None*)[[source]](_modules/sharpy/linear/src/libss.html#ss_block)[¶](#sharpy.linear.src.libss.ss_block) State-space model in block form. This class has the same purpose as “ss”, but the A, B, C, D are allocated in the form of nested lists.
The format is similar to the one used in numpy.block but: > 1. Block matrices can contain both dense and sparse matrices > 2. Empty blocks are defined through the None type Methods: - remove_block: drop one of the blocks from the s-s model - addGain: project inputs/outputs - project: project state `addGain`(*K*, *where*)[[source]](_modules/sharpy/linear/src/libss.html#ss_block.addGain)[¶](#sharpy.linear.src.libss.ss_block.addGain) Projects the input u or output y of the state-space system through the gain block matrix K. The input ‘where’ determines whether inputs or outputs are projected as: > * where=’in’: inputs are projected such that: > u_new -> u=K*u_new -> SS -> y => u_new -> SSnew -> y > * where=’out’: outputs are projected such that: > u -> SS -> y -> y_new=K*y => u -> SSnew -> y_new Input: K must be a list of lists of matrices. The size of K must be compatible with either B or C for the block matrix product. `get_sizes`(*M*)[[source]](_modules/sharpy/linear/src/libss.html#ss_block.get_sizes)[¶](#sharpy.linear.src.libss.ss_block.get_sizes) Get the size of each block in M. `project`(*WT*, *V*, *by_arrays=True*, *overwrite=False*)[[source]](_modules/sharpy/linear/src/libss.html#ss_block.project)[¶](#sharpy.linear.src.libss.ss_block.project) Given 2 transformation matrices, (W,V) of shape (Nk,self.states), this routine projects the state space model states according to: > Anew = W^T A V > Bnew = W^T B > Cnew = C V > Dnew = D The projected model has the same number of inputs/outputs as the original one, but Nk states. Inputs: - WT = W^T - V = V - by_arrays: if True, W, V are either numpy.array or sparse matrices. If > False, they are block matrices. * overwrite: if True, overwrites the A, B, C matrices `remove_block`(*where*, *index*)[[source]](_modules/sharpy/linear/src/libss.html#ss_block.remove_block)[¶](#sharpy.linear.src.libss.ss_block.remove_block) Remove a block from either inputs or outputs.
Inputs: - where = {‘in’, ‘out’}: determines whether to remove inputs or outputs - index: index of the block to remove ####### ss_to_scipy[¶](#module-sharpy.linear.src.libss.ss_to_scipy) Converts to a scipy.signal linear time invariant system | param ss: | SHARPy state space object | | type ss: | libss.ss | | returns: | scipy.signal.dlti | ####### sum_ss[¶](#module-sharpy.linear.src.libss.sum_ss) Given 2 systems or gain matrices (or a combination of the two) having the same number of inputs/outputs, the function returns a gain or state space model summing the two. Namely, given: > u -> SS1 -> y1 > u -> SS2 -> y2 we obtain: u -> SStot -> y1+y2 if negative=False ###### Linear aeroelastic model based on coupled GEBM + UVLM[¶](#linear-aeroelastic-model-based-on-coupled-gebm-uvlm) Linear aeroelastic model based on coupled GEBM + UVLM S. Maraniello, Jul 2018 ####### LinAeroEla[¶](#linaeroela) *class* `sharpy.linear.src.lin_aeroelastic.``LinAeroEla`(*data*, *custom_settings_linear=None*, *uvlm_block=False*)[[source]](_modules/sharpy/linear/src/lin_aeroelastic.html#LinAeroEla)[¶](#sharpy.linear.src.lin_aeroelastic.LinAeroEla) Future work: * settings are converted from string to type in the __init__ method. * implement all settings of LinUVLM (e.g. support for sparse matrices) When integrating in SHARPy: * define: + self.setting_types + self.setting_default * use settings.to_custom_types(self.in_dict, self.settings_types, self.settings_default) for conversion to type.
| Parameters: | * **data** (*sharpy.presharpy.PreSharpy*) – main SHARPy data class * **settings_linear** (*dict*) – optional settings file if they are not included in the `data` structure | `settings`[¶](#sharpy.linear.src.lin_aeroelastic.LinAeroEla.settings) solver settings for the linearised aeroelastic solution | Type: | dict | `lingebm`[¶](#sharpy.linear.src.lin_aeroelastic.LinAeroEla.lingebm) linearised geometrically exact beam model | Type: | [lingebm.FlexDynamic](index.html#sharpy.linear.src.lingebm.FlexDynamic) | `num_dof_str`[¶](#sharpy.linear.src.lin_aeroelastic.LinAeroEla.num_dof_str) number of structural degrees of freedom | Type: | int | `num_dof_rig`[¶](#sharpy.linear.src.lin_aeroelastic.LinAeroEla.num_dof_rig) number of rigid degrees of freedom | Type: | int | `num_dof_flex`[¶](#sharpy.linear.src.lin_aeroelastic.LinAeroEla.num_dof_flex) number of flexible degrees of freedom (`num_dof_flex+num_dof_rigid=num_dof_str`) | Type: | int | `linuvl`[¶](#sharpy.linear.src.lin_aeroelastic.LinAeroEla.linuvl) linearised UVLM class | Type: | [linuvlm.Dynamic](index.html#sharpy.linear.src.linuvlm.Dynamic) | `tsaero`[¶](#sharpy.linear.src.lin_aeroelastic.LinAeroEla.tsaero) aerodynamic state timestep info | Type: | [sharpy.utils.datastructures.AeroTimeStepInfo](index.html#sharpy.aero.models.aerogrid.Aerogrid.sharpy.utils.datastructures.AeroTimeStepInfo) | `tsstr`[¶](#sharpy.linear.src.lin_aeroelastic.LinAeroEla.tsstr) structural state timestep info | Type: | sharpy.utils.datastructures.StructTimeStepInfo | `dt`[¶](#sharpy.linear.src.lin_aeroelastic.LinAeroEla.dt) time increment | Type: | float | `q`[¶](#sharpy.linear.src.lin_aeroelastic.LinAeroEla.q) corresponding vector of displacements of dimensions `[1, num_dof_str]` | Type: | np.array | `dq`[¶](#sharpy.linear.src.lin_aeroelastic.LinAeroEla.dq) time derivative (\(\dot{\mathbf{q}}\)) of the corresponding vector of displacements with dimensions `[1, num_dof_str]` | Type: | np.array | 
`SS`[¶](#sharpy.linear.src.lin_aeroelastic.LinAeroEla.SS) state space formulation (discrete or continuous time), as selected by the user | Type: | scipy.signal | `assemble_ss`(*beam_num_modes=None*)[[source]](_modules/sharpy/linear/src/lin_aeroelastic.html#LinAeroEla.assemble_ss)[¶](#sharpy.linear.src.lin_aeroelastic.LinAeroEla.assemble_ss) Assemble State Space formulation `get_gebm2uvlm_gains`()[[source]](_modules/sharpy/linear/src/lin_aeroelastic.html#LinAeroEla.get_gebm2uvlm_gains)[¶](#sharpy.linear.src.lin_aeroelastic.LinAeroEla.get_gebm2uvlm_gains) Provides: * the gain matrices required to connect the linearised GEBM and UVLM > inputs/outputs * the stiffening and damping factors to be added to the linearised GEBM equations in order to account for non-zero aerodynamic loads at the linearisation point. The function produces the gain matrices: > * `Kdisp`: gains from GEBM to UVLM grid displacements > * `Kvel_disp`: influence of GEBM dof displacements on UVLM grid > velocities. > * `Kvel_vel`: influence of GEBM dof velocities on UVLM grid > velocities. > * `Kforces` (UVLM->GEBM): dimensions are the transpose of those of the > Kdisp and Kvel* matrices. Hence, when allocating this term, `ii` > and `jj` indices will unintuitively refer to columns and rows, > respectively. And the stiffening/damping terms accounting for non-zero aerodynamic forces at the linearisation point: > * `Kss`: stiffness factor (flexible dof -> flexible dof) accounting > for non-zero forces at the linearisation point.
> - `Csr`: damping factor (rigid dof -> flexible dof) > - `Crs`: damping factor (flexible dof -> rigid dof) > - `Crr`: damping factor (rigid dof -> rigid dof) Stiffening and damping related terms due to the non-zero aerodynamic forces at the linearisation point: \[\mathbf{F}_{A,n} = C^{AG}(\mathbf{\chi})\sum_j \mathbf{f}_{G,j} \rightarrow \delta\mathbf{F}_{A,n} = C^{AG}_0 \sum_j \delta\mathbf{f}_{G,j} + \frac{\partial}{\partial\chi}(C^{AG}\sum_j \mathbf{f}_{G,j}^0)\delta\chi\] The term multiplied by the variation in the quaternion, \(\delta\chi\), couples the forces with the rigid body equations and becomes part of \(\mathbf{C}_{sr}\). Similarly, the linearisation of the moments results in expressions that contribute to the stiffness and damping matrices. \[\mathbf{M}_{B,n} = \sum_j \tilde{X}_B C^{BA}(\Psi)C^{AG}(\chi)\mathbf{f}_{G,j}\] \[\delta\mathbf{M}_{B,n} = \sum_j \tilde{X}_B\left(C_0^{BG}\delta\mathbf{f}_{G,j} + \frac{\partial}{\partial\Psi}(C^{BA}\delta\mathbf{f}^0_{A,j})\delta\Psi + \frac{\partial}{\partial\chi}(C^{BA}_0 C^{AG} \mathbf{f}_{G,j})\delta\chi\right)\] The linearised equations of motion for the geometrically exact beam model take the input term \(\delta \mathbf{Q}_n = \{\delta\mathbf{F}_{A,n},\, T_0^T\delta\mathbf{M}_{B,n}\}\), which means that the moments should be provided as \(T^T(\Psi)\mathbf{M}_B\) instead of \(\mathbf{M}_A = C^{AB}\mathbf{M}_B\), where \(T(\Psi)\) is the tangential operator. \[\delta(T^T\mathbf{M}_B) = T^T_0\delta\mathbf{M}_B + \frac{\partial}{\partial\Psi}(T^T\delta\mathbf{M}_B^0)\delta\Psi\] is the linearised expression for the moments, where the first term would correspond to the input terms to the beam equations and the second arises due to the non-zero aerodynamic moment at the linearisation point and must be subtracted (since it comes from the forces) to form part of \(\mathbf{K}_{ss}\).
In addition, the \(\delta\mathbf{M}_B\) term depends on both \(\delta\Psi\) and \(\delta\chi\), therefore those terms would also contribute to \(\mathbf{K}_{ss}\) and \(\mathbf{C}_{sr}\), respectively. The contribution from the total forces and moments will be accounted for in \(\mathbf{C}_{rr}\) and \(\mathbf{C}_{rs}\). \[\delta\mathbf{F}_{tot,A} = \sum_n\left(C^{GA}_0 \sum_j \delta\mathbf{f}_{G,j} + \frac{\partial}{\partial\chi}(C^{AG}\sum_j \mathbf{f}_{G,j}^0)\delta\chi\right)\] Therefore, after running this method, the beam matrices should be updated as: ``` >>> K_beam[:flex_dof, :flex_dof] += Kss >>> C_beam[:flex_dof, -rigid_dof:] += Csr >>> C_beam[-rigid_dof:, :flex_dof] += Crs >>> C_beam[-rigid_dof:, -rigid_dof:] += Crr ``` Track body option The `track_body` setting restricts the UVLM grid to linear translation motions and therefore should be used to ensure that the forces are computed using the reference linearisation frame. The UVLM and beam are linearised about a reference equilibrium condition. The UVLM is defined in the inertial reference frame while the beam employs the body attached frame and therefore a projection from one frame onto another is required during the coupling process. However, the inputs to the UVLM (i.e. the lattice grid coordinates) are obtained from the beam deformation which is expressed in A frame and therefore the grid coordinates need to be projected onto the inertial frame `G`. As the beam rotates, the projection onto the `G` frame of the lattice grid coordinates will result in a grid that is not coincident with that at the linearisation reference and therefore the grid coordinates must be projected onto the original frame, which will be referred to as `U`. 
The transformation between the inertial frame `G` and the `U` frame is a function of the rotation of the `A` frame and the original position: \[C^{UG}(\chi) = C^{GA}(\chi_0)C^{AG}(\chi)\] Therefore, the grid coordinates obtained in `A` frame and projected onto the `G` frame can be transformed to the `U` frame using \[\zeta_U = C^{UG}(\chi) \zeta_G\] which allows the grid lattice coordinates to be projected onto the original linearisation frame. In a similar fashion, the output lattice vertex forces of the UVLM are defined in the original linearisation frame `U` and need to be transformed onto the inertial frame `G` prior to projecting them onto the `A` frame to use them as the input forces to the beam system. \[\boldsymbol{f}_G = C^{GU}(\chi)\boldsymbol{f}_U\] The linearisation of the above relations lead to the following expressions that have to be added to the coupling matrices: > * `Kdisp_vel` terms: > > > > \[\delta\boldsymbol{\zeta}_U= C^{GA}_0 \frac{\partial}{\partial \boldsymbol{\chi}} > > \left(C^{AG}\boldsymbol{\zeta}_{G,0}\right)\delta\boldsymbol{\chi} + \delta\boldsymbol{\zeta}_G\] > > > * `Kvel_vel` terms: > > > > \[\delta\dot{\boldsymbol{\zeta}}_U= C^{GA}_0 \frac{\partial}{\partial \boldsymbol{\chi}} > > \left(C^{AG}\dot{\boldsymbol{\zeta}}_{G,0}\right)\delta\boldsymbol{\chi} > > + \delta\dot{\boldsymbol{\zeta}}_G\] > > The transformation of the forces and moments introduces terms that are functions of the orientation and are included as stiffening and damping terms in the beam’s matrices: > * `Csr` damping terms relating to translation forces: > > > > \[C_{sr}^{tra} -= \frac{\partial}{\partial\boldsymbol{\chi}} > > \left(C^{GA} C^{AG}_0 \boldsymbol{f}_{G,0}\right)\delta\boldsymbol{\chi}\] > > > * `Csr` damping terms related to moments: > > > > \[C_{sr}^{rot} -= T^\top\widetilde{\mathbf{X}}_B C^{BG} > > \frac{\partial}{\partial\boldsymbol{\chi}} > > \left(C^{GA} C^{AG}_0 \boldsymbol{f}_{G,0}\right)\delta\boldsymbol{\chi}\] > > The `track_body` setting. 
When `track_body` is enabled, the UVLM grid is no longer coincident with the inertial reference frame throughout the simulation but rather it is able to rotate as the `A` frame rotates. This is used to simulate a free-flying vehicle for which the orientation does not affect the aerodynamics. The UVLM defined in this frame of reference, named `U`, satisfies the following convention: > * The `U` frame is coincident with the `G` frame at the time of linearisation. > * The `U` frame rotates as the `A` frame rotates. Transformations related to the `U` frame of reference: > * The angle between the `U` frame and the `A` frame is always constant and equal > to \(\boldsymbol{\Theta}_0\). > * The angle between the `A` frame and the `G` frame is \(\boldsymbol{\Theta}=\boldsymbol{\Theta}_0 > + \delta\boldsymbol{\Theta}\) > * The projection of a vector expressed in the `G` frame onto the `U` frame is expressed by: > > > > \[\boldsymbol{v}^U = C^{GA}_0 C^{AG} \boldsymbol{v}^G\] > > > * The reverse, a projection of a vector expressed in the `U` frame onto the `G` frame, is expressed by > > > > \[\boldsymbol{v}^G = C^{GA} C^{AG}_0 \boldsymbol{v}^U\] > > The effect this has on the aeroelastic coupling between the UVLM and the structural dynamics is that the orientation and change of orientation of the vehicle have no effect on the aerodynamics. The aerodynamics are solely affected by the contribution of the 6 rigid-body velocities (as well as the flexible DOF velocities).
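As a numerical sanity check, the two frame projections above can be sketched with plain rotation matrices. The quaternion-to-rotation helper is replaced here by `scipy.spatial.transform.Rotation`, and the sample attitudes are made up for illustration; this is not SHARPy's API:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical A-frame attitudes: at the linearisation point (chi_0)
# and at the current time (chi), expressed as rotation matrices C^{GA}.
C_GA0 = Rotation.from_euler('z', 5.0, degrees=True).as_matrix()   # C^{GA}(chi_0)
C_GA = Rotation.from_euler('z', 12.0, degrees=True).as_matrix()   # C^{GA}(chi)

# C^{UG}(chi) = C^{GA}(chi_0) C^{AG}(chi): projects G-frame vectors onto
# the original linearisation frame U. Note C^{AG} = (C^{GA})^T.
C_UG = C_GA0 @ C_GA.T

v_G = np.array([1.0, 0.0, 0.0])
v_U = C_UG @ v_G                  # zeta_U = C^{UG} zeta_G

# Reverse projection back to G: v_G = C^{GU} v_U with C^{GU} = (C^{UG})^T
v_G_back = C_UG.T @ v_U
assert np.allclose(v_G_back, v_G)

# When chi = chi_0 the U and G frames coincide, so C^{UG} reduces to identity.
assert np.allclose(C_GA0 @ C_GA0.T, np.eye(3))
```

The round trip confirms that `C_UG` is orthonormal, so forces computed in `U` can be returned to `G` by a simple transpose.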
`reshape_struct_input`()[[source]](_modules/sharpy/linear/src/lin_aeroelastic.html#LinAeroEla.reshape_struct_input)[¶](#sharpy.linear.src.lin_aeroelastic.LinAeroEla.reshape_struct_input) Reshape structural input in a column vector ###### Utilities functions for linear analysis[¶](#utilities-functions-for-linear-analysis) Utilities functions for linear analysis ####### Info[¶](#info) *class* `sharpy.linear.src.lin_utils.``Info`(*zeta*, *zeta_dot*, *u_ext*, *ftot*, *mtot*, *q*, *qdot*, *SSaero=None*, *SSbeam=None*, *Kas=None*, *Kftot=None*, *Kmtot=None*, *Kmtot_disp=None*, *Asteady_inv=None*)[[source]](_modules/sharpy/linear/src/lin_utils.html#Info)[¶](#sharpy.linear.src.lin_utils.Info) Summarise info about a data point ####### comp_tot_force[¶](#module-sharpy.linear.src.lin_utils.comp_tot_force) Compute total force with exact displacements ####### extract_from_data[¶](#module-sharpy.linear.src.lin_utils.extract_from_data) Extract relevant info from data structure. If assemble is True, it will also generate a linear UVLM and the displacements/velocities gain matrices ####### solve_linear[¶](#module-sharpy.linear.src.lin_utils.solve_linear) Given 2 Info() classes associated to a reference linearisation point Ref and a perturbed state Pert, the method produces in output the prediction at the Pert state of a linearised model. The solution is carried out using both the aero and beam input ###### Linear beam model class[¶](#linear-beam-model-class) Linear beam model class <NAME>, Aug 2018 <NAME> ####### FlexDynamic[¶](#flexdynamic) *class* `sharpy.linear.src.lingebm.``FlexDynamic`(*tsinfo*, *structure=None*, *custom_settings={}*)[[source]](_modules/sharpy/linear/src/lingebm.html#FlexDynamic)[¶](#sharpy.linear.src.lingebm.FlexDynamic) Define class for linear state-space realisation of GEBM flexible-body equations from the SHARPy `timestep_info` class and with the nonlinear structural information.
The linearised beam takes the following arguments: | Parameters: | * **tsinfo** (*sharpy.utils.datastructures.StructTimeStepInfo*) – Structural timestep containing the modal information * **structure** (*sharpy.solvers.beamloader.Beam*) – Beam class with the structural information * **custom_settings** (*dict*) – settings for the linearised beam | State-space models can be defined in continuous or discrete time (dt required for the latter). Modal projection, either on the damped or undamped modal shapes, is also available. The rad/s frequency array `wv` can optionally be passed for frequency response analysis. To produce the state-space equations: 1. Set the settings: 1. `modal_projection={True,False}`: determines whether to project the states onto modal coordinates. Projection over damped or undamped modal shapes can be obtained selecting: > * `proj_modes={'damped','undamped'}` while > > > > * `inout_coords={'modes','nodal'}` > > determines whether the modal state-space inputs/outputs are modal > coords or nodal degrees-of-freedom. If `modes` is selected, the > `Kin` and `Kout` gain matrices are generated to transform nodal to modal > dofs > 2. `dlti={True,False}`: if true, generates discrete-time system. The continuous to discrete transformation method is determined by: ``` discr_method={ 'newmark', # Newmark-beta 'zoh', # Zero-order hold 'bilinear'} # Bilinear (Tustin) transformation ``` DLTIs can be obtained directly using the Newmark-\(\beta\) method > `discr_method='newmark'` > `newmark_damp=xx` with `xx<<1.0` for full-states descriptions (`modal_projection=False`) and modal projection over the undamped structural modes (`modal_projection=True` and `proj_modes='undamped'`). The zero-order hold and bilinear methods, instead, work for all descriptions, but require the continuous state-space equations. 2. Generate an instance of the beam 3. Run `self.assemble()`.
The method accepts an additional parameter, `Nmodes`, which allows using a lower number of modes than specified in `self.Nmodes` Examples ``` >>> beam_settings = {'modal_projection': True, >>> 'inout_coords': 'modes', >>> 'discrete_time': False, >>> 'proj_modes': 'undamped', >>> 'use_euler': True} >>> >>> beam = lingebm.FlexDynamic(tsstruct0, structure=data.structure, custom_settings=beam_settings) >>> >>> beam.assemble() ``` Notes * Modal projection will automatically select between damped/undamped mode shapes, based on the data available from tsinfo. * If the full system matrices are available, use the modal_sol methods to override mode shapes and eigenvectors `assemble`(*Nmodes=None*)[[source]](_modules/sharpy/linear/src/lingebm.html#FlexDynamic.assemble)[¶](#sharpy.linear.src.lingebm.FlexDynamic.assemble) Assemble state-space model Several assembly options are available: 1. Discrete-time, Newmark-\(\beta\): * Modal projection onto undamped modes. It uses the modal projection such that the generalised coordinates \(\eta\) are transformed into modal space by > > > > \[\mathbf{\eta} = \mathbf{\Phi\,q}\] > > where \(\mathbf{\Phi}\) are the first `Nmodes` right eigenvectors. > Therefore, the equation of motion can be re-written such that the modes normalise the mass matrix to > become the identity matrix. > > > > \[\mathbf{I_{Nmodes}}\mathbf{\ddot{q}} + \mathbf{\Lambda_{Nmodes}\,q} = 0\] > > The system is then assembled in Newmark-\(\beta\) form as detailed in [`newmark_ss()`](index.html#module-sharpy.linear.src.lingebm.newmark_ss) > * Full size system assembly. No modifications are made to the mass, damping or stiffness matrices and the system is directly assembled by [`newmark_ss()`](index.html#module-sharpy.linear.src.lingebm.newmark_ss). 2.
Continuous time state-space | Parameters: | **Nmodes** (*int*) – number of modes to retain | `cont2disc`(*dt=None*)[[source]](_modules/sharpy/linear/src/lingebm.html#FlexDynamic.cont2disc)[¶](#sharpy.linear.src.lingebm.FlexDynamic.cont2disc) Convert continuous-time SS model into a discrete-time one `converge_modal`(*wv=None*, *tol=None*, *Yref=None*, *Print=False*)[[source]](_modules/sharpy/linear/src/lingebm.html#FlexDynamic.converge_modal)[¶](#sharpy.linear.src.lingebm.FlexDynamic.converge_modal) Determine number of modes required to achieve a certain convergence of the modal solution in a prescribed frequency range `wv`. The H-infinity norm of the error w.r.t. `Yref` is used for assessing convergence. Warning: if a reference frequency response, Yref, is not provided, the full-state continuous-time frequency response is used as reference. This requires the full-states matrices `Mstr`, `Cstr`, `Kstr` to be available. `euler_propagation_equations`(*tsstr*)[[source]](_modules/sharpy/linear/src/lingebm.html#FlexDynamic.euler_propagation_equations)[¶](#sharpy.linear.src.lingebm.FlexDynamic.euler_propagation_equations) Introduce the linearised Euler propagation equations that relate the body-fixed angular velocities to the Earth-fixed Euler angles. This method will remove the quaternion propagation equations created by SHARPy; the resulting system will have 9 rigid degrees of freedom. | Parameters: | **tsstr** – | Returns: `freqresp`(*wv=None*, *bode=True*)[[source]](_modules/sharpy/linear/src/lingebm.html#FlexDynamic.freqresp)[¶](#sharpy.linear.src.lingebm.FlexDynamic.freqresp) Computes the frequency response of the current state-space model.
If `self.modal=True`, the inputs/outputs are determined according to `self.inout_coords` `linearise_gravity_forces`(*tsstr=None*)[[source]](_modules/sharpy/linear/src/lingebm.html#FlexDynamic.linearise_gravity_forces)[¶](#sharpy.linear.src.lingebm.FlexDynamic.linearise_gravity_forces) Linearises gravity forces and includes the resulting terms in the C and K matrices. The method takes the linearisation condition (optional argument), linearises and updates: > * Stiffness matrix > * Damping matrix > * Modal damping matrix The method works for both the quaternion and Euler angle orientation parametrisations. | Parameters: | **tsstr** (*sharpy.utils.datastructures.StructTimeStepInfo*) – Structural timestep at the linearisation point | Notes The gravity forces are linearised to express them in terms of the beam formulation input variables: > * Nodal forces: \(\delta \mathbf{f}_A\) > * Nodal moments: \(\delta(T^T \mathbf{m}_B)\) > * Total forces (rigid body equations): \(\delta \mathbf{F}_A\) > * Total moments (rigid body equations): \(\delta \mathbf{M}_A\) Gravity forces are naturally expressed in `G` (inertial) frame \[\mathbf{f}_{G,0} = \mathbf{M\,g}\] where \(\mathbf{M}\) is the tangent mass matrix obtained at the linearisation reference. To obtain the gravity forces expressed in A frame we make use of the projection matrix \[\mathbf{f}_A = C^{AG}(\boldsymbol{\chi}) \mathbf{f}_{G,0}\] which projects a vector in the inertial frame `G` onto the body attached frame `A`. The projection of a vector can then be linearised as \[\delta \mathbf{f}_A = C^{AG} \delta \mathbf{f}_{G,0} + \frac{\partial}{\partial \boldsymbol{\chi}}(C^{AG} \mathbf{f}_{G,0}) \delta\boldsymbol{\chi}.\] * Nodal forces: > The linearisation of the gravity forces acting at each node is simply > \[\delta \mathbf{f}_A = > + \frac{\partial}{\partial \boldsymbol{\chi}}(C^{AG} \mathbf{f}_{G,0}) \delta\boldsymbol{\chi}\] > where it is assumed that \(\delta\mathbf{f}_G = 0\).
> * Nodal moments: > The gravity moments can be expressed in the local node frame of reference `B` by > \[\mathbf{m}_B = \tilde{X}_{B,CG}C^{BA}(\Psi)C^{AG}(\boldsymbol{\chi})\mathbf{f}_{G,0}\] > The linearisation is given by: > \[\delta \mathbf{m}_B = \tilde{X}_{B,CG} > \left(\frac{\partial}{\partial\Psi}(C^{BA}\mathbf{f}_{A,0})\delta\Psi + > C^{BA}\frac{\partial}{\partial\boldsymbol{\chi}}(C^{AG}\mathbf{f}_{G,0})\delta\boldsymbol{\chi}\right)\] > However, recall that the input moments are defined in tangential space > \(\delta(T^\top\mathbf{m}_B)\) whose linearised expression is > \[\delta(T^T(\Psi) \mathbf{m}_B) = T_0^T \delta \mathbf{m}_B + > \frac{\partial}{\partial \Psi}(T^T \mathbf{m}_{B,0})\delta\Psi\] > where the \(\delta \mathbf{m}_B\) term has been defined above. > * Total forces: > The total forces include the contribution from all flexible degrees of freedom as well as the gravity > forces arising from the mass at the clamped node > \[\mathbf{F}_A = \sum_n \mathbf{f}_A + \mathbf{f}_{A,clamped}\] > which becomes > \[\delta \mathbf{F}_A = \sum_n \delta \mathbf{f}_A + > \frac{\partial}{\partial\boldsymbol{\chi}}\left(C^{AG}\mathbf{f}_{G,clamped}\right) > \delta\boldsymbol{\chi}.\] > * Total moments: > The total moments, as opposed to the nodal moments, are expressed in A frame and again require the > addition of the moments from the flexible structural nodes as well as the ones from the clamped node > itself. > \[\mathbf{M}_A = \sum_n \tilde{X}_{A,n}^{CG} C^{AG} \mathbf{f}_{n,G} > + \tilde{X}_{A,clamped}C^{AG}\mathbf{f}_{G, clamped}\] > where \(X_{A,n}^{CG} = R_{A,n} + C^{AB}(\Psi)X_{B,n}^{CG}\). 
Its linearised form is > \[\delta X_{A,n}^{CG} = \delta R_{A,n} > + \frac{\partial}{\partial \Psi}(C^{AB} X_{B,CG})\delta\Psi\] > Therefore, the overall linearisation of the total moment is defined as > \[\delta \mathbf{M}_A = > \tilde{X}_{A,total}^{CG} \frac{\partial}{\partial \boldsymbol{\chi}}(C^{AG}\mathbf{F}_{G, total}) > \delta \boldsymbol{\chi} > -\sum_n \tilde{C}^{AG}\mathbf{f}_{G,0} \delta X_{A,n}^{CG}\] > where \(X_{A, total}\) is the centre of gravity of the entire system expressed in `A` frame and > \(\mathbf{F}_{G, total}\) are the gravity forces of the overall system in `G` frame, including the > contributions from the clamped node. The linearisation introduces damping and stiffening terms since the \(\delta\boldsymbol{\chi}\) and \(\delta\boldsymbol{\Psi}\) terms are found in the damping and stiffness matrices respectively. Therefore, the beam matrices need updating to account for these terms: > * Terms from the linearisation of the nodal moments will be assembled in the rows corresponding to > moment equations and columns corresponding to the cartesian rotation vector > > > > \[K_{ss}^{m,\Psi} \leftarrow -T_0^T \tilde{X}_{B,CG} > > \frac{\partial}{\partial\Psi}(C^{BA}\mathbf{f}_{A,0}) > > -\frac{\partial}{\partial \Psi}(T^T \mathbf{m}_{B,0})\] > > > * Terms from the linearisation of the translation forces with respect to the orientation are assembled > in the damping matrix, the rows corresponding to translational forces and columns to orientation > degrees of freedom > > > > \[C_{sr}^{f,\boldsymbol{\chi}} \leftarrow - > > \frac{\partial}{\partial \boldsymbol{\chi}}(C^{AG} \mathbf{f}_{G,0})\] > > > * Terms from the linearisation of the moments with respect to the orientation are assembled in the > damping matrix, with the rows corresponding to the moments and the columns to the orientation degrees > of freedom > > > > \[C_{sr}^{m,\boldsymbol{\chi}} \leftarrow - > >
T_0^T\tilde{X}_{B,CG}C^{BA}\frac{\partial}{\partial\boldsymbol{\chi}}(C^{AG}\mathbf{f}_{G,0})\] > > > * Terms from the linearisation of the total forces with respect to the orientation correspond to the > rigid body equations in the damping matrix, the rows to the translational forces and columns to the > orientation > > > > \[C_{rr}^{F,\boldsymbol{\chi}} \leftarrow > > - \sum_n \frac{\partial}{\partial \boldsymbol{\chi}}(C^{AG} \mathbf{f}_{G,0})\] > > > * Terms from the linearisation of the total moments with respect to the orientation correspond to the > rigid body equations in the damping matrix, the rows to the moments and the columns to the orientation > > > > \[C_{rr}^{M,\boldsymbol{\chi}} \leftarrow > > - \sum_n\tilde{X}_{A,n}^{CG} \frac{\partial}{\partial \boldsymbol{\chi}}(C^{AG}\mathbf{f}_{G,0})\] > > > * Terms from the linearisation of the total moments with respect to the nodal position \(R_A\) are > included in the stiffness matrix, the rows corresponding to the moments in the rigid body > equations and the columns to the nodal position > > > > \[K_{rs}^{M,R} \leftarrow + \sum_n \tilde{\mathbf{f}_{A,0}}\] > > > * Terms from the linearisation of the total moments with respect to the cartesian rotation vector are > included in the stiffness matrix, the rows corresponding to the moments in the rigid body equations > and the columns to the cartesian rotation vector > > > > \[K_{rs}^{M, \Psi} \leftarrow > > + \sum_n \tilde{\mathbf{f}_{A,0}}\frac{\partial}{\partial \Psi}(C^{AB} X_{B,CG})\] > > `reshape_struct_input`()[[source]](_modules/sharpy/linear/src/lingebm.html#FlexDynamic.reshape_struct_input)[¶](#sharpy.linear.src.lingebm.FlexDynamic.reshape_struct_input) Reshape structural input in a column vector `scale_system_normalised_time`(*time_ref*)[[source]](_modules/sharpy/linear/src/lingebm.html#FlexDynamic.scale_system_normalised_time)[¶](#sharpy.linear.src.lingebm.FlexDynamic.scale_system_normalised_time) Scale the system with a normalised time step. 
The resulting time step is \(\Delta t = \Delta \bar{t}/t_{ref}\), where the overbar denotes dimensional time. The structural equations of motion are rescaled as: \[\mathbf{M}\ddot{\boldsymbol{\eta}} + \mathbf{C} t_{ref} \dot{\boldsymbol{\eta}} + \mathbf{K} t_{ref}^2 \boldsymbol{\eta} = t_{ref}^2 \mathbf{N}\] For aeroelastic applications, the reference time is usually defined using the semi-chord, \(b\), and the free stream velocity, \(U_\infty\). \[t_{ref,ae} = \frac{b}{U_\infty}\] | Parameters: | **time_ref** (*float*) – Normalisation factor such that \(t/\bar{t}\) is non-dimensional. | `tune_newmark_damp`(*amplification_factor=0.999*)[[source]](_modules/sharpy/linear/src/lingebm.html#FlexDynamic.tune_newmark_damp)[¶](#sharpy.linear.src.lingebm.FlexDynamic.tune_newmark_damp) Tune artificial damping to achieve a percent reduction of the lowest-frequency (least damped) mode `update_modal`()[[source]](_modules/sharpy/linear/src/lingebm.html#FlexDynamic.update_modal)[¶](#sharpy.linear.src.lingebm.FlexDynamic.update_modal) Re-projects the full-states continuous-time structural dynamics equations \[\mathbf{M}\,\mathbf{\ddot{x}} +\mathbf{C}\,\mathbf{\dot{x}} + \mathbf{K\,x} = \mathbf{F}\] onto modal space. The modes used to project are controlled through the `self.proj_modes={damped or undamped}` attribute. Warning This method overrides SHARPy `timestep_info` results and requires `Mstr`, `Cstr`, `Kstr` to be available.
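The time rescaling performed by `scale_system_normalised_time` above can be sketched on placeholder matrices. The 2-dof mass, damping and stiffness matrices below are made up for illustration and are not SHARPy data:

```python
import numpy as np

# Aeroelastic reference time: semi-chord over free stream speed, t_ref = b/U_inf
b, u_inf = 0.5, 10.0
t_ref = b / u_inf                      # 0.05 s

# Placeholder 2-dof system matrices (illustrative only)
M = np.eye(2)
C = 0.1 * np.eye(2)
K = np.diag([4.0, 9.0])

# Rescale as M eta'' + (C t_ref) eta' + (K t_ref^2) eta = t_ref^2 N
C_scaled = C * t_ref
K_scaled = K * t_ref**2

# Natural frequencies in scaled time become the non-dimensional omega * t_ref
omega = np.sqrt(np.diag(K))            # dimensional frequencies, rad/s
omega_scaled = np.sqrt(np.diag(K_scaled))
assert np.allclose(omega_scaled, omega * t_ref)
```

The check on the natural frequencies confirms that the rescaling simply expresses the dynamics in multiples of the reference time.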
`update_truncated_modes`(*nmodes*)[[source]](_modules/sharpy/linear/src/lingebm.html#FlexDynamic.update_truncated_modes)[¶](#sharpy.linear.src.lingebm.FlexDynamic.update_truncated_modes) Updates the system to the specified number of modes | Parameters: | **nmodes** – | Returns: ####### newmark_ss[¶](#module-sharpy.linear.src.lingebm.newmark_ss) Produces a discrete-time state-space model of the structural equations \[\begin{split}\mathbf{\ddot{x}} &= \mathbf{M}^{-1}( -\mathbf{C}\,\mathbf{\dot{x}}-\mathbf{K}\,\mathbf{x}+\mathbf{F} ) \\ \mathbf{y} &= \mathbf{x}\end{split}\] based on the Newmark-\(\beta\) integration scheme. The output state-space model has form: \[\begin{split}\mathbf{X}_{n+1} &= \mathbf{A}\,\mathbf{X}_n + \mathbf{B}\,\mathbf{F}_n \\ \mathbf{Y} &= \mathbf{C}\,\mathbf{X} + \mathbf{D}\,\mathbf{F}\end{split}\] with \(\mathbf{X} = [\mathbf{x}, \mathbf{\dot{x}}]^T\) Note that as the state-space representation only requires the input force \(\mathbf{F}\) to be evaluated at time-step \(n\), the \(\mathbf{C}\) and \(\mathbf{D}\) matrices are, in general, fully populated. The Newmark-\(\beta\) integration scheme is carried out following the modifications presented by Geradin [1] that render it unconditionally stable. The displacement and velocities are estimated as: \[\begin{split}x_{n+1} &= x_n + \Delta t \dot{x}_n + \left(\frac{1}{2}-\theta_2\right)\Delta t^2 \ddot{x}_n + \theta_2\Delta t \ddot{x}_{n+1} \\ \dot{x}_{n+1} &= \dot{x}_n + (1-\theta_1)\Delta t \ddot{x}_n + \theta_1\Delta t \ddot{x}_{n+1}\end{split}\] The stencil is unconditionally stable if the tuning parameters \(\theta_1\) and \(\theta_2\) are chosen as: \[\begin{split}\theta_1 &= \frac{1}{2} + \alpha \\ \theta_2 &= \frac{1}{4} \left(\theta_1 + \frac{1}{2}\right)^2 \\ \theta_2 &= \frac{5}{80} + \frac{1}{4} (\theta_1 + \theta_1^2) \text{TBC SOURCE}\end{split}\] where \(\alpha>0\) accounts for small positive algorithmic damping.
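The role of the tuning parameters can be illustrated on a single undamped oscillator: with \(\theta_1 = 1/2 + \alpha\) and \(\theta_2 = (\theta_1 + 1/2)^2/4\), the one-step Newmark map stays within the unit circle for any time step. This is a hand-rolled sketch of the scheme, not a call to SHARPy's `newmark_ss`:

```python
import numpy as np

def newmark_map(omega, dt, alpha=0.05):
    """One-step map [x, xdot]_n -> [x, xdot]_{n+1} for xddot = -omega^2 x."""
    th1 = 0.5 + alpha                 # theta_1 (a.k.a. gamma)
    th2 = 0.25 * (th1 + 0.5) ** 2     # theta_2 (a.k.a. beta)
    w2 = omega ** 2
    # Implicit side (multiplies the n+1 state) after substituting
    # xddot = -omega^2 x into the Newmark displacement/velocity updates
    A1 = np.array([[1.0 + th2 * dt**2 * w2, 0.0],
                   [th1 * dt * w2,          1.0]])
    # Explicit side (multiplies the n state)
    A0 = np.array([[1.0 - (0.5 - th2) * dt**2 * w2, dt],
                   [-(1.0 - th1) * dt * w2,         1.0]])
    return np.linalg.solve(A1, A0)

# Spectral radius stays <= 1 even for a step much larger than the period
A = newmark_map(omega=2.0, dt=10.0, alpha=0.05)
rho = max(abs(np.linalg.eigvals(A)))
assert rho <= 1.0 + 1e-12
```

Setting `alpha=0` recovers the energy-conserving trapezoidal rule (spectral radius exactly one); any `alpha > 0` introduces the small algorithmic damping mentioned above.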
The following steps describe how to apply the Newmark-beta scheme to a state-space formulation. The original idea is based on [1]. The equation of motion of a second-order system reads: \[M\mathbf{\ddot q} + C\mathbf{\dot q} + K\mathbf{q} = F\] Applying that equation to the time steps \(n\) and \(n+1\), rearranging terms and multiplying by \(M^{-1}\): \[\begin{split}\mathbf{\ddot q}_{n} = - M^{-1}C\mathbf{\dot q}_{n} - M^{-1}K\mathbf{q}_{n} + M^{-1}F_{n} \\ \mathbf{\ddot q}_{n+1} = - M^{-1}C\mathbf{\dot q}_{n+1} - M^{-1}K\mathbf{q}_{n+1} + M^{-1}F_{n+1}\end{split}\] The relations of the Newmark-beta scheme are: \[\begin{split}\mathbf{q}_{n+1} &= \mathbf{q}_n + \mathbf{\dot q}_n\Delta t + (\frac{1}{2}-\beta)\mathbf{\ddot q}_n \Delta t^2 + \beta \mathbf{\ddot q}_{n+1} \Delta t^2 + O(\Delta t^3) \\ \mathbf{\dot q}_{n+1} &= \mathbf{\dot q}_n + (1-\gamma)\mathbf{\ddot q}_n \Delta t + \gamma \mathbf{\ddot q}_{n+1} \Delta t + O(\Delta t^3)\end{split}\] Substituting the former relations into the latter ones, rearranging terms, and writing it in state-space form: \[\begin{split}\begin{bmatrix} I + \Delta t^2\beta M^{-1}K & \Delta t^2\beta M^{-1}C \\ \gamma \Delta t M^{-1}K & I + \gamma \Delta t M^{-1}C \end{bmatrix} \begin{Bmatrix} \mathbf{q}_{n+1} \\ \mathbf{\dot q}_{n+1} \end{Bmatrix} = \begin{bmatrix} I - \Delta t^2(1/2-\beta)M^{-1}K & \Delta t\,I - \Delta t^2(1/2-\beta)M^{-1}C \\ -(1-\gamma)\Delta t M^{-1}K & I - (1-\gamma)\Delta t M^{-1}C \end{bmatrix} \begin{Bmatrix} \mathbf{q}_{n} \\ \mathbf{\dot q}_{n} \end{Bmatrix} + \begin{Bmatrix} \Delta t^2(1/2-\beta) \\ (1-\gamma)\Delta t \end{Bmatrix} M^{-1}F_n+ \begin{Bmatrix} \Delta t^2\beta \\ \gamma \Delta t \end{Bmatrix}M^{-1}F_{n+1}\end{split}\] To understand SHARPy code, it is convenient to apply the following change of notation: \[\begin{split}\textrm{th1} = \gamma \\ \textrm{th2} = \beta \\ \textrm{a0} = \Delta t^2 (1/2 -\beta) \\ \textrm{b0} = \Delta t (1 -\gamma) \\ \textrm{a1} = \Delta t^2 \beta \\ \textrm{b1} = \Delta t \gamma \\\end{split}\] Finally: \[\begin{split}A_{ss1} \begin{Bmatrix} \mathbf{q}_{n+1} \\ \mathbf{\dot q}_{n+1} \end{Bmatrix} = A_{ss0} \begin{Bmatrix} \mathbf{q}_{n} \\ \mathbf{\dot q}_{n} \end{Bmatrix} + \begin{Bmatrix} \Delta t^2(1/2-\beta) \\ (1-\gamma)\Delta t \end{Bmatrix} M^{-1}F_n+ \begin{Bmatrix} \Delta t^2\beta \\ \gamma \Delta t \end{Bmatrix}M^{-1}F_{n+1}\end{split}\] To finally isolate the vector at \(n+1\), instead of inverting the \(A_{ss1}\) matrix, several systems are solved. Moreover, the output equation is simply \(y=x\). | param Minv: | Inverse mass matrix \(\mathbf{M^{-1}}\) | | type Minv: | np.array | | param C: | Damping matrix \(\mathbf{C}\) | | type C: | np.array | | param K: | Stiffness matrix \(\mathbf{K}\) | | type K: | np.array | | param dt: | Timestep increment | | type dt: | float | | param num_damp: | Numerical damping. Default `1e-4` | | type num_damp: | float | | returns: | the A, B, C, D matrices of the state space packed in a tuple with the predictor and delay term removed. | | rtype: | tuple | References [1] - <NAME>., <NAME>. - Mechanical Vibrations: Theory and application to structural dynamics ####### sort_eigvals[¶](#module-sharpy.linear.src.lingebm.sort_eigvals) sorts by magnitude (frequency) and by imaginary part if complex conjugate ###### Linearise UVLM solver[¶](#linearise-uvlm-solver) Linearise UVLM solver <NAME>, 7 Jun 2018 ####### Dynamic[¶](#dynamic) *class* `sharpy.linear.src.linuvlm.``Dynamic`(*tsdata*, *dt=None*, *dynamic_settings=None*, *integr_order=2*, *RemovePredictor=True*, *ScalingDict=None*, *UseSparse=True*, *for_vel=<sphinx.ext.autodoc.importer._MockObject object>*)[[source]](_modules/sharpy/linear/src/linuvlm.html#Dynamic)[¶](#sharpy.linear.src.linuvlm.Dynamic) Class for dynamic linearised UVLM solution. Only linearisation around a steady state is supported. The class is built upon Static, and inherits all the methods contained there.
Input: * tsdata: aero timestep data from SHARPy solution * dt: time-step * integr_order=2: integration order for UVLM unsteady aerodynamic force * RemovePredictor=True: if true, the state-space model is modified so as to accept as input perturbations, u, evaluated at time-step n rather than n+1. * ScalingDict=None: dictionary containing fundamental reference units: > ``` > {'length': reference_length, > 'speed': reference_speed, > 'density': reference_density} > ``` used to derive scaling quantities for the state-space model variables. The scaling factors are stored in `self.ScalingFact`. Note that while time, circulation and angular speeds are scaled accordingly, FORCES ARE NOT. These scale by \(q_\infty b^2\), where \(b\) is the reference length and \(q_\infty\) is the dynamic pressure. * UseSparse=True: builds the A and B matrices in sparse form. C and D are dense anyway so the sparse format cannot be applied to them. `- nondimss` normalises a dimensional state-space model based on the scaling factors in self.ScalingFact. `- dimss` inverse of nondimss. `- assemble_ss` builds state-space model. See function for more details. `- assemble_ss_profiling` generate profiling report of the assembly and saves it into self.prof_out. To read the report: > ``` > import pstats > p = pstats.Stats(self.prof_out) > ``` `- solve_steady` solves for the steady state. Several methods available.
`- solve_step` solves one time-step `- freqresp` ad-hoc method for fast frequency response (only implemented for `remove_predictor=False`) `Nx`[¶](#sharpy.linear.src.linuvlm.Dynamic.Nx) Number of states | Type: | int | `Nu`[¶](#sharpy.linear.src.linuvlm.Dynamic.Nu) Number of inputs | Type: | int | `Ny`[¶](#sharpy.linear.src.linuvlm.Dynamic.Ny) Number of outputs | Type: | int | `K`[¶](#sharpy.linear.src.linuvlm.Dynamic.K) Number of panels \(K = MN\) | Type: | int | `K_star`[¶](#sharpy.linear.src.linuvlm.Dynamic.K_star) Number of wake panels \(K^*=M^*N\) | Type: | int | `Kzeta`[¶](#sharpy.linear.src.linuvlm.Dynamic.Kzeta) Number of panel vertices \(K_\zeta=(M+1)(N+1)\) | Type: | int | `Kzeta_star`[¶](#sharpy.linear.src.linuvlm.Dynamic.Kzeta_star) Number of wake panel vertices \(K_{\zeta,w} = (M^*+1)(N+1)\) | Type: | int | To do: Upgrade to linearise around unsteady snapshot (adjoint) `Nu` Number of inputs \(m\) to the system. `Nx` Number of states \(n\) of the system. `Ny` Number of outputs \(p\) of the system. `assemble_ss`()[[source]](_modules/sharpy/linear/src/linuvlm.html#Dynamic.assemble_ss)[¶](#sharpy.linear.src.linuvlm.Dynamic.assemble_ss) Produces a state-space model of the form > \[\begin{split}\mathbf{x}_{n+1} &= \mathbf{A}\,\mathbf{x}_n + \mathbf{B} \mathbf{u}_{n+1} \\ > \mathbf{y}_n &= \mathbf{C}\,\mathbf{x}_n + \mathbf{D} \mathbf{u}_n\end{split}\] where the state, inputs and outputs are: > \[\mathbf{x}_n = \{ \delta \mathbf{\Gamma}_n,\, \delta \mathbf{\Gamma_{w_n}},\, > \Delta t\,\delta\mathbf{\Gamma}'_n,\, \delta\mathbf{\Gamma}_{n-1} \}\] > \[\mathbf{u}_n = \{ \delta\mathbf{\zeta}_n,\, \delta\mathbf{\zeta}'_n,\, > \delta\mathbf{u}_{ext,n} \}\] > \[\mathbf{y} = \{\delta\mathbf{f}\}\] with \(\mathbf{\Gamma}\in\mathbb{R}^{MN}\) being the vector of vortex circulations, \(\mathbf{\zeta}\in\mathbb{R}^{3(M+1)(N+1)}\) the vector of vortex lattice coordinates and \(\mathbf{f}\in\mathbb{R}^{3(M+1)(N+1)}\) the vector of aerodynamic forces and moments.
Note that \((\bullet)'\) denotes a derivative with respect to time. Note that the input is atypically defined at time `n+1`, therefore by default `self.remove_predictor = True` and the predictor term `u_{n+1}` is eliminated through the change of state[1]: > \[\begin{split}\mathbf{h}_n &= \mathbf{x}_n - \mathbf{B}\,\mathbf{u}_n \\\end{split}\] such that: > \[\begin{split}\mathbf{h}_{n+1} &= \mathbf{A}\,\mathbf{h}_n + \mathbf{A\,B}\,\mathbf{u}_n \\ > \mathbf{y}_n &= \mathbf{C\,h}_n + (\mathbf{C\,B}+\mathbf{D})\,\mathbf{u}_n\end{split}\] which only modifies the equivalent \(\mathbf{B}\) and \(\mathbf{D}\) matrices. References [1] Franklin, GF and Powell, JD. Digital Control of Dynamic Systems, Addison-Wesley Publishing Company, 1980 To do: - remove all calls to scipy.linalg.block_diag `assemble_ss_profiling`()[[source]](_modules/sharpy/linear/src/linuvlm.html#Dynamic.assemble_ss_profiling)[¶](#sharpy.linear.src.linuvlm.Dynamic.assemble_ss_profiling) Generate profiling report for assembly and save it in self.prof_out. To read the report: import pstats p=pstats.Stats(self.prof_out) `balfreq`(*DictBalFreq*)[[source]](_modules/sharpy/linear/src/linuvlm.html#Dynamic.balfreq)[¶](#sharpy.linear.src.linuvlm.Dynamic.balfreq) Low-rank method for frequency-limited balancing. The observability and controllability Gramians over the frequencies kv are solved in factorised form. Balanced modes are then obtained with a square-root method. Details: Observability and controllability Gramians are solved in factorised form through explicit integration. The number of integration points determines both the accuracy and the maximum size of the balanced model. Stability over all (Nb) balanced states is achieved if: > 1. one of the Gramians is integrated through the full Nyquist range > 2. enough integration points are used.
Note, however, that even when stability is not achieved over the full balanced states, stability of the balanced truncated model with Ns<=Nb states is normally observed even when a low number of integration points is used. Two integration methods (trapezoidal rule on uniform grid and Gauss-Legendre quadrature) are provided. Input: * DictBalFreq: dictionary specifying the integration method with keys: > + `frequency`: defines the limit frequencies for balancing. The balanced > model will be accurate in the range [0,F], where F is the value of > this key. Note that F units must be consistent with the units specified > in the self.ScalingFacts dictionary. > + `method_low`: [‘gauss’,’trapz’] specifies whether to use Gauss > quadrature or the trapezoidal rule in the low-frequency range [0,F] > + `options_low`: options to use for integration in the low-frequencies. > These depend on the integration scheme (see below). > + `method_high`: method to use for integration in the range [F,F_N], > where F_N is the Nyquist frequency. See ‘method_low’. > + `options_high`: options to use for integration in the high-frequencies. > + `check_stability`: if True, the balanced model is truncated to > eliminate unstable modes - if any is found. Note that a very accurate > balanced model can still be obtained, even if high order modes are > unstable. Note that this option is overridden if “” > + `get_frequency_response`: if True, the function also returns the > frequency response evaluated at the low-frequency range integration > points. If True, this option also allows the balanced > model to be tuned automatically. Future options: > * `truncation_tolerance`: if `get_frequency_response` is True, allows > the balanced model to be truncated so as to achieve a prescribed > tolerance in the low-frequency range. > * `Ncpu`: for parallel run The following integration schemes are available: > * `trapz`: performs integration over equally spaced points using the > trapezoidal rule.
It accepts options dictionaries with keys: > > > > + `points`: number of integration points to use (including > > domain boundary) > > > * `gauss` performs Gauss-Lobatto quadrature. The domain can be > partitioned into Npart sub-domains in which a Gauss-Lobatto quadrature > of order Ord can be applied. A total number of Npart*Ord points is > required. It accepts options dictionaries of the form: > > > > + `partitions`: number of partitions > > + `order`: quadrature order. > > Example: The following dictionary > ``` > DictBalFreq={ 'frequency': 1.2, > 'method_low': 'trapz', > 'options_low': {'points': 12}, > 'method_high': 'gauss', > 'options_high': {'partitions': 2, 'order': 8}, > 'check_stability': True } > ``` balances the state-space model self.SS in the frequency range [0, 1.2] using > 1. 12 equally-spaced points integration of the Gramians in the low-frequency range [0,1.2] and > 2. two Gauss-Lobatto 8th-order quadratures of the controllability > Gramian in the high-frequency range. A total number of 28 integration points will be required, which will result in a balanced model with number of states `min{2*28* number_inputs, 2*28* number_outputs}` The model is finally truncated so as to retain only the first Ns stable modes. `balfreq_profiling`()[[source]](_modules/sharpy/linear/src/linuvlm.html#Dynamic.balfreq_profiling)[¶](#sharpy.linear.src.linuvlm.Dynamic.balfreq_profiling) Generate profiling report for the balfreq function and save it into `self.prof_out.` The function also returns a `pstats.Stats` object. To read the report: ``` import pstats p=pstats.Stats(self.prof_out).sort_stats('cumtime') p.print_stats(20) ``` `freqresp`(*kv*)[[source]](_modules/sharpy/linear/src/linuvlm.html#Dynamic.freqresp)[¶](#sharpy.linear.src.linuvlm.Dynamic.freqresp) Ad-hoc method for fast UVLM frequency response over the frequencies kv.
The method only requires inversion of a K x K matrix at each frequency, as the equations for the propagation of wake circulation are solved exactly. The algorithm implemented here can also be used upon projection of the state-space model. Note: This method is very similar to the “minsize” solution option in solve_steady. `get_Cw_cpx`(*zval*)[[source]](_modules/sharpy/linear/src/linuvlm.html#Dynamic.get_Cw_cpx)[¶](#sharpy.linear.src.linuvlm.Dynamic.get_Cw_cpx) Produces a sparse matrix > \[\bar{\mathbf{C}}(z)\] where > \[z = e^{k \Delta t}\] such that the wake circulation frequency response at \(z\) is > \[\bar{\boldsymbol{\Gamma}}_w = \bar{\mathbf{C}}(z) \bar{\mathbf{\Gamma}}\] `nondimss`()[[source]](_modules/sharpy/linear/src/linuvlm.html#Dynamic.nondimss)[¶](#sharpy.linear.src.linuvlm.Dynamic.nondimss) Scale the state-space model based on self.ScalingFacts `solve_steady`(*usta*, *method='direct'*)[[source]](_modules/sharpy/linear/src/linuvlm.html#Dynamic.solve_steady)[¶](#sharpy.linear.src.linuvlm.Dynamic.solve_steady) Steady state solution from the state-space model. Warning: these methods are less efficient than the solver in the Static class, Static.solve, and should be used only for verification purposes. The “minsize” method, however, guarantees the inversion of a K x K matrix only, similarly to what is done in Static.solve. `solve_step`(*x_n*, *u_n*, *u_n1=None*, *transform_state=False*)[[source]](_modules/sharpy/linear/src/linuvlm.html#Dynamic.solve_step)[¶](#sharpy.linear.src.linuvlm.Dynamic.solve_step) Solve step.
If the predictor term has not been removed (`remove_predictor = False`) then the system is solved as: > \[\begin{split}\mathbf{x}^{n+1} &= \mathbf{A\,x}^n + \mathbf{B\,u}^n \\ > \mathbf{y}^{n+1} &= \mathbf{C\,x}^{n+1} + \mathbf{D\,u}^n\end{split}\] Else, if `remove_predictor = True`, the state is modified as > \[\mathbf{h}^n = \mathbf{x}^n - \mathbf{B\,u}^n\] And the system solved by: > \[\begin{split}\mathbf{h}^{n+1} &= \mathbf{A\,h}^n + \mathbf{B_{mod}\,u}^{n} \\ > \mathbf{y}^{n+1} &= \mathbf{C\,h}^{n+1} + \mathbf{D_{mod}\,u}^{n+1}\end{split}\] Finally, the original state is recovered using the reverse transformation: > \[\mathbf{x}^{n+1} = \mathbf{h}^{n+1} + \mathbf{B\,u}^{n+1}\] where the modifications to the \(\mathbf{B}_{mod}\) and \(\mathbf{D}_{mod}\) are detailed in [`Dynamic.assemble_ss()`](#sharpy.linear.src.linuvlm.Dynamic.assemble_ss). Notes Although the original equations include the term \(\mathbf{u}_{n+1}\), it is a reasonable approximation to take \(\mathbf{u}_{n+1}\approx\mathbf{u}_n\) given a sufficiently small time step, hence if the input at time `n+1` is not passed, it is estimated from \(u^n\). | Parameters: | * **x_n** (*np.array*) – State vector at the current time step \(\mathbf{x}^n\) * **u_n** (*np.array*) – Input vector at time step \(\mathbf{u}^n\) * **u_n1** (*np.array*) – Input vector at time step \(\mathbf{u}^{n+1}\) * **transform_state** (*bool*) – When the predictor term is removed, if true it will transform the state vector. If false it will be assumed that the state vector that is passed is already transformed i.e. it is \(\mathbf{h}\). | | Returns: | Updated state and output vector packed in a tuple \((\mathbf{x}^{n+1},\,\mathbf{y}^{n+1})\) | | Return type: | Tuple | Notes To speed up the solution and use minimal memory: * solve for the bound vorticity * propagate the wake * compute the output separately.
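The predictor-removal transformation used by solve_step can be checked numerically on a toy system. The sketch below is not SHARPy code; it uses random matrices and illustrative names (`B_mod`, `D_mod`) to verify that the change of state \(\mathbf{h}_n = \mathbf{x}_n - \mathbf{B}\,\mathbf{u}_n\) reproduces the original update \(\mathbf{x}_{n+1} = \mathbf{A}\,\mathbf{x}_n + \mathbf{B}\,\mathbf{u}_{n+1}\) exactly:

```python
import numpy as np

# Toy discrete state-space with the input defined at n+1, as in the UVLM model:
#   x_{n+1} = A x_n + B u_{n+1},   y_n = C x_n + D u_n
rng = np.random.default_rng(0)
nx, nu, ny = 4, 2, 3
A = 0.5 * rng.standard_normal((nx, nx))
B = rng.standard_normal((nx, nu))
C = rng.standard_normal((ny, nx))
D = rng.standard_normal((ny, nu))

# Change of state h_n = x_n - B u_n removes the predictor term:
#   h_{n+1} = A h_n + (A B) u_n,   y_n = C h_n + (C B + D) u_n
B_mod = A @ B
D_mod = C @ B + D

x_n = rng.standard_normal(nx)
u_n = rng.standard_normal(nu)
u_n1 = rng.standard_normal(nu)

# Original system, one step
x_n1 = A @ x_n + B @ u_n1
y_n = C @ x_n + D @ u_n

# Transformed system, same step, then recover x_{n+1} = h_{n+1} + B u_{n+1}
h_n = x_n - B @ u_n
h_n1 = A @ h_n + B_mod @ u_n
x_n1_rec = h_n1 + B @ u_n1
y_n_rec = C @ h_n + D_mod @ u_n

assert np.allclose(x_n1, x_n1_rec)
assert np.allclose(y_n, y_n_rec)
```

The transformed system never needs the future input to advance the state, which is what allows the predictor term to be dropped from the state equation.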
`unpack_state`(*xvec*)[[source]](_modules/sharpy/linear/src/linuvlm.html#Dynamic.unpack_state)[¶](#sharpy.linear.src.linuvlm.Dynamic.unpack_state) Unpacks the state vector into physical constituents for full order models. The state vector \(\mathbf{x}\) of the form > \[\mathbf{x}_n = \{ \delta \mathbf{\Gamma}_n,\, \delta \mathbf{\Gamma_{w_n}},\, > \Delta t\,\delta\mathbf{\Gamma}'_n,\, \delta\mathbf{\Gamma}_{n-1} \}\] is unpacked into: > \[{\delta \mathbf{\Gamma}_n,\, \delta \mathbf{\Gamma_{w_n}},\, > \,\delta\mathbf{\Gamma}'_n}\] | Parameters: | **xvec** (*np.ndarray*) – State vector | | Returns: | Column vectors for bound circulation, wake circulation and circulation derivative packed in a tuple. | | Return type: | tuple | ####### DynamicBlock[¶](#dynamicblock) *class* `sharpy.linear.src.linuvlm.``DynamicBlock`(*tsdata*, *dt=None*, *dynamic_settings=None*, *integr_order=2*, *RemovePredictor=True*, *ScalingDict=None*, *UseSparse=True*, *for_vel=<sphinx.ext.autodoc.importer._MockObject object>*)[[source]](_modules/sharpy/linear/src/linuvlm.html#DynamicBlock)[¶](#sharpy.linear.src.linuvlm.DynamicBlock) Class for dynamic linearised UVLM solution. Only linearisation around a steady state is supported. The class is a low-memory implementation of Dynamic, and inherits most of the methods contained there. State-space models are allocated in list-block form (as per numpy.block) to minimise memory usage. This class provides lower memory / computational time assembly, frequency response and frequency-limited balancing. Input: > * tsdata: aero timestep data from SHARPy solution > * dt: time-step > * integr_order=2: integration order for UVLM unsteady aerodynamic force > * RemovePredictor=True: if true, the state-space model is modified so as > to accept as input perturbations, u, evaluated at time-step n rather than > n+1.
> * ScalingDict=None: dictionary containing fundamental reference units > ``` > {'length': reference_length, > 'speed': reference_speed, > 'density': reference_density} > ``` > used to derive scaling quantities for the state-space model variables. > The scaling factors are stored in `self.ScalingFact`. > Note that while other quantities (time, circulation, angular speeds) are scaled > accordingly, FORCES ARE NOT. These scale by qinf*b**2, where b is the > reference length and qinf is the dynamic pressure. > * UseSparse=False: builds the A and B matrices in sparse form. C and D > are dense, hence the sparse format is not used. `- nondimss` normalises a dimensional state-space model based on the scaling factors in self.ScalingFact. `- dimss` inverse of nondimss. `- assemble_ss` builds the state-space model. See function for more details. `- assemble_ss_profiling` generates a profiling report of the assembly and saves it into self.prof_out. To read the report: ``` import pstats p=pstats.Stats(self.prof_out) ``` `- freqresp` ad-hoc method for fast frequency response (only implemented for remove_predictor=False) To do: upgrade to linearise around unsteady snapshot (adjoint) `assemble_ss`()[[source]](_modules/sharpy/linear/src/linuvlm.html#DynamicBlock.assemble_ss)[¶](#sharpy.linear.src.linuvlm.DynamicBlock.assemble_ss) Produces the block-form of the state-space model > \[\begin{split}\mathbf{x}_{n+1} &= \mathbf{A}\,\mathbf{x}_n + \mathbf{B} \mathbf{u}_{n+1} \\ > \mathbf{y}_n &= \mathbf{C}\,\mathbf{x}_n + \mathbf{D} \mathbf{u}_n\end{split}\] where the state, inputs and outputs are: > \[\mathbf{x}_n = \{ \delta \mathbf{\Gamma}_n,\, \delta \mathbf{\Gamma_{w_n}},\, > \Delta t\,\delta\mathbf{\Gamma}'_n,\, \delta\mathbf{\Gamma}_{n-1} \}\] > \[\mathbf{u}_n = \{ \delta\mathbf{\zeta}_n,\, \delta\mathbf{\zeta}'_n,\, > \delta\mathbf{u}_{ext,n} \}\] > \[\mathbf{y} = \{\delta\mathbf{f}\}\] with \(\mathbf{\Gamma}\) being the vector of vortex circulations, \(\mathbf{\zeta}\) the vector of vortex lattice
coordinates and \(\mathbf{f}\) the vector of aerodynamic forces and moments. Note that \((\bullet)'\) denotes a derivative with respect to time. Note that the input is atypically defined at time `n+1`, therefore by default `self.remove_predictor = True` and the predictor term `u_{n+1}` is eliminated through the change of state[1]: > \[\begin{split}\mathbf{h}_n &= \mathbf{x}_n - \mathbf{B}\,\mathbf{u}_n \\\end{split}\] such that: > \[\begin{split}\mathbf{h}_{n+1} &= \mathbf{A}\,\mathbf{h}_n + \mathbf{A\,B}\,\mathbf{u}_n \\ > \mathbf{y}_n &= \mathbf{C\,h}_n + (\mathbf{C\,B}+\mathbf{D})\,\mathbf{u}_n\end{split}\] which only modifies the equivalent \(\mathbf{B}\) and \(\mathbf{D}\) matrices. References [1] Franklin, GF and Powell, JD. Digital Control of Dynamic Systems, Addison-Wesley Publishing Company, 1980 To do: - remove all calls to scipy.linalg.block_diag `balfreq`(*DictBalFreq*)[[source]](_modules/sharpy/linear/src/linuvlm.html#DynamicBlock.balfreq)[¶](#sharpy.linear.src.linuvlm.DynamicBlock.balfreq) Low-rank method for frequency-limited balancing. The observability and controllability Gramians over the frequencies kv are solved in factorised form. Balanced modes are then obtained with a square-root method. Details: Observability and controllability Gramians are solved in factorised form through explicit integration. The number of integration points determines both the accuracy and the maximum size of the balanced model. Stability over all (Nb) balanced states is achieved if: 1. one of the Gramians is integrated through the full Nyquist range 2. enough integration points are used. Note, however, that even when stability is not achieved over the full balanced states, stability of the balanced truncated model with Ns<=Nb states is normally observed even when a low number of integration points is used. Two integration methods (trapezoidal rule on uniform grid and Gauss-Legendre quadrature) are provided.
Input: * DictBalFreq: dictionary specifying the integration method with keys: > + ‘frequency’: defines the limit frequencies for balancing. The balanced > model will be accurate in the range [0,F], where F is the value of > this key. Note that F units must be consistent with the units specified > in the self.ScalingFacts dictionary. > + ‘method_low’: [‘gauss’,’trapz’] specifies whether to use Gauss > quadrature or the trapezoidal rule in the low-frequency range [0,F] > + ‘options_low’: options to use for integration in the low-frequencies. > These depend on the integration scheme (see below). > + ‘method_high’: method to use for integration in the range [F,F_N], > where F_N is the Nyquist frequency. See ‘method_low’. > + ‘options_high’: options to use for integration in the high-frequencies. > + ‘check_stability’: if True, the balanced model is truncated to > eliminate unstable modes - if any is found. Note that a very accurate > balanced model can still be obtained, even if high order modes are > unstable. Note that this option is overridden if “” > + ‘get_frequency_response’: if True, the function also returns the > frequency response evaluated at the low-frequency range integration > points. If True, this option also allows the balanced > model to be tuned automatically. Future options: > * ‘truncation_tolerance’: if ‘get_frequency_response’ is True, allows > the balanced model to be truncated so as to achieve a prescribed > tolerance in the low-frequency range. > * Ncpu: for parallel run The following integration schemes are available: * ‘trapz’: performs integration over equally spaced points using the trapezoidal rule. It accepts options dictionaries with keys: * ‘points’: number of integration points to use (including domain boundary) * ‘gauss’ performs Gauss-Lobatto quadrature. The domain can be partitioned into Npart sub-domains in which a Gauss-Lobatto quadrature of order Ord can be applied. A total number of Npart*Ord points is required.
It accepts options dictionaries of the form: > * ‘partitions’: number of partitions > * ‘order’: quadrature order. Example: The following dictionary > DictBalFreq={ 'frequency': 1.2, > 'method_low': 'trapz', > 'options_low': {'points': 12}, > 'method_high': 'gauss', > 'options_high': {'partitions': 2, 'order': 8}, > 'check_stability': True } balances the state-space model self.SS in the frequency range [0, 1.2] using > 1. 12 equally-spaced points integration of the Gramians in the low-frequency range [0,1.2] and > 2. two Gauss-Lobatto 8th-order quadratures of the controllability Gramian in the high-frequency range. A total number of 28 integration points will be required, which will result in a balanced model with number of states > min{ 2*28* number_inputs, 2*28* number_outputs } The model is finally truncated so as to retain only the first Ns stable modes. `freqresp`(*kv*)[[source]](_modules/sharpy/linear/src/linuvlm.html#DynamicBlock.freqresp)[¶](#sharpy.linear.src.linuvlm.DynamicBlock.freqresp) Ad-hoc method for fast UVLM frequency response over the frequencies kv. The method only requires inversion of a K x K matrix at each frequency, as the equations for the propagation of wake circulation are solved exactly. The algorithm implemented here can also be used upon projection of the state-space model. Note: This method is very similar to the “minsize” solution option in solve_steady. `nondimss`()[[source]](_modules/sharpy/linear/src/linuvlm.html#DynamicBlock.nondimss)[¶](#sharpy.linear.src.linuvlm.DynamicBlock.nondimss) Scale the state-space model based on self.ScalingFacts. `solve_step`(*x_n*, *u_n*, *u_n1=None*, *transform_state=False*)[[source]](_modules/sharpy/linear/src/linuvlm.html#DynamicBlock.solve_step)[¶](#sharpy.linear.src.linuvlm.DynamicBlock.solve_step) Solve step.
If the predictor term has not been removed (`remove_predictor = False`) then the system is solved as: > \[\begin{split}\mathbf{x}^{n+1} &= \mathbf{A\,x}^n + \mathbf{B\,u}^n \\ > \mathbf{y}^{n+1} &= \mathbf{C\,x}^{n+1} + \mathbf{D\,u}^n\end{split}\] Else, if `remove_predictor = True`, the state is modified as > \[\mathbf{h}^n = \mathbf{x}^n - \mathbf{B\,u}^n\] And the system solved by: > \[\begin{split}\mathbf{h}^{n+1} &= \mathbf{A\,h}^n + \mathbf{B_{mod}\,u}^{n} \\ > \mathbf{y}^{n+1} &= \mathbf{C\,h}^{n+1} + \mathbf{D_{mod}\,u}^{n+1}\end{split}\] Finally, the original state is recovered using the reverse transformation: > \[\mathbf{x}^{n+1} = \mathbf{h}^{n+1} + \mathbf{B\,u}^{n+1}\] where the modifications to the \(\mathbf{B}_{mod}\) and \(\mathbf{D}_{mod}\) are detailed in [`Dynamic.assemble_ss()`](index.html#sharpy.linear.src.linuvlm.Dynamic.assemble_ss). Notes Although the original equations include the term \(\mathbf{u}_{n+1}\), it is a reasonable approximation to take \(\mathbf{u}_{n+1}\approx\mathbf{u}_n\) given a sufficiently small time step, hence if the input at time `n+1` is not passed, it is estimated from \(u^n\). | Parameters: | * **x_n** (*np.array*) – State vector at the current time step \(\mathbf{x}^n\) * **u_n** (*np.array*) – Input vector at time step \(\mathbf{u}^n\) * **u_n1** (*np.array*) – Input vector at time step \(\mathbf{u}^{n+1}\) * **transform_state** (*bool*) – When the predictor term is removed, if true it will transform the state vector. If false it will be assumed that the state vector that is passed is already transformed i.e. it is \(\mathbf{h}\). | | Returns: | Updated state and output vector packed in a tuple \((\mathbf{x}^{n+1},\,\mathbf{y}^{n+1})\) | | Return type: | Tuple | Notes Because in DynamicBlock the predictor is never removed when building `self.SS`, the implementation changes with respect to Dynamic. However, the formulas are consistent.
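The wake frequency response used by get_Cw_cpx can be illustrated with a simple model. The sketch below is not the SHARPy implementation: it assumes a pure frozen-convection wake for a single spanwise strip, where each wake row inherits the circulation of the row upstream at every step, so that in the frequency domain row \(j\) responds to the trailing-edge circulation with a delay \(z^{-(j+1)}\), \(z = e^{ik\Delta t}\). The helper name `Cw_column` is illustrative:

```python
import numpy as np

# Frozen-wake convection for one spanwise strip with M* wake rows:
#   Gw[0]^{n+1} = G_TE^n,   Gw[j]^{n+1} = Gw[j-1]^n
# In the frequency domain (z = exp(1j*k*dt)) each row lags one extra step,
# so the wake response to the trailing-edge circulation is Gw_j = z^{-(j+1)} G_TE.
def Cw_column(z, M_star):
    """Dense analogue of one column of C̄(z), mapping G_TE to the M* wake rows."""
    return np.array([z ** (-(j + 1)) for j in range(M_star)])

# Cross-check against explicit time stepping with a harmonic input
dt, k, M_star = 0.1, 2.0, 5
z = np.exp(1j * k * dt)
n_steps = 200
Gw = np.zeros(M_star, dtype=complex)
for n in range(n_steps):
    G_TE = np.exp(1j * k * dt * n)          # harmonic trailing-edge circulation
    Gw = np.concatenate(([G_TE], Gw[:-1]))  # convect one row downstream
# Once the initial zeros are flushed, row j equals the input delayed by (j+1) steps
G_TE_now = np.exp(1j * k * dt * n_steps)
assert np.allclose(Gw, Cw_column(z, M_star) * G_TE_now)
```

This delay structure is why the frequency-response methods above only need to invert a K x K matrix: the wake circulation can be expressed exactly in terms of the bound circulation at each frequency.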
####### Frequency[¶](#frequency) *class* `sharpy.linear.src.linuvlm.``Frequency`(*tsdata*, *dt*, *integr_order=2*, *RemovePredictor=True*, *ScalingDict=None*, *UseSparse=True*)[[source]](_modules/sharpy/linear/src/linuvlm.html#Frequency)[¶](#sharpy.linear.src.linuvlm.Frequency) Class for frequency description of the linearised UVLM solution. Only linearisation around a steady state is supported. The class is built upon Static, and inherits all the methods contained there. The class supports most of the features of Dynamic but has lower memory requirements, and should be preferred for: > 1. producing memory and computationally cheap frequency responses > 2. building reduced order models using RFA/polynomial fitting Usage: Upon initialisation, the assemble method produces all the matrices required for the frequency description of the UVLM (see assemble for details). A state-space model is not allocated but: > * Time stepping is also possible (but not implemented yet) as all the > fundamental terms describing the UVLM equations are still produced > (except the propagation of wake circulation) > * ad-hoc methods for scaling, unscaling and frequency response are > provided. Input: > * tsdata: aero timestep data from SHARPy solution > * dt: time-step > * integr_order=0,1,2: integration order for UVLM unsteady aerodynamic > force. If 0, the derivative is computed exactly. > * RemovePredictor=True: This flag is only used for the frequency response > calculation. The frequency description, in fact, naturally arises > without the predictor, but lags can be included during the frequency > response calculation. See the Dynamic documentation for more details. > * ScalingDict=None: dictionary containing fundamental reference units > > > > ``` > > {'length': reference_length, > > 'speed': reference_speed, > > 'density': reference_density} > > > ``` > > > > used to derive scaling quantities for the state-space model variables.
> The scaling factors are stored in `self.ScalingFact`. > Note that while other quantities (time, circulation, angular speeds) are scaled > accordingly, FORCES ARE NOT. These scale by qinf*b**2, where b is the > reference length and qinf is the dynamic pressure. > * UseSparse=False: builds the A and B matrices in sparse form. C and D > are dense, hence the sparse format is not used. `- nondimss` normalises matrices produced by the assemble method based on the scaling factors in self.ScalingFact. `- dimss` inverse of nondimss. `- assemble` builds matrices for the UVLM minimal size description. `- assemble_profiling` generates a profiling report of the assembly and saves it into self.prof_out. To read the report: > ``` > import pstats > p=pstats.Stats(self.prof_out) > ``` `- freqresp` fast algorithm for frequency response. Methods to implement: > * solve_steady: runs freqresp at 0 frequency. > * solve_step: solves one time-step `assemble`()[[source]](_modules/sharpy/linear/src/linuvlm.html#Frequency.assemble)[¶](#sharpy.linear.src.linuvlm.Frequency.assemble) Assembles matrices for the minimal size frequency description of the UVLM. The state equation is represented in the form: > \[\mathbf{A_0} \mathbf{\Gamma} + > \mathbf{A_{w_0}} \mathbf{\Gamma_w} = > \mathbf{B_0} \mathbf{u}\] While the output equation is as per the Dynamic class, namely: > \[\mathbf{y} = > \mathbf{C} \mathbf{x} + \mathbf{D} \mathbf{u}\] where > \[\mathbf{x} = > [\mathbf{\Gamma}; \mathbf{\Gamma_w}; \Delta\mathbf{\Gamma}]\] The propagation of wake circulation matrices are not produced as these are not required for frequency response analysis. `assemble_profiling`()[[source]](_modules/sharpy/linear/src/linuvlm.html#Frequency.assemble_profiling)[¶](#sharpy.linear.src.linuvlm.Frequency.assemble_profiling) Generate profiling report for assembly and save it in self.prof_out.
To read the report: import pstats p=pstats.Stats(self.prof_out) `freqresp`(*kv*)[[source]](_modules/sharpy/linear/src/linuvlm.html#Frequency.freqresp)[¶](#sharpy.linear.src.linuvlm.Frequency.freqresp) Ad-hoc method for fast UVLM frequency response over the frequencies kv. The method only requires inversion of a K x K matrix at each frequency, as the equations for the propagation of wake circulation are solved exactly. `get_Cw_cpx`(*zval*)[[source]](_modules/sharpy/linear/src/linuvlm.html#Frequency.get_Cw_cpx)[¶](#sharpy.linear.src.linuvlm.Frequency.get_Cw_cpx) Produces a sparse matrix > \[\bar{\mathbf{C}}(z)\] where > \[z = e^{k \Delta t}\] such that the wake circulation frequency response at \(z\) is > \[\bar{\boldsymbol{\Gamma}}_w = \bar{\mathbf{C}}(z) \bar{\boldsymbol{\Gamma}}\] `nondimss`()[[source]](_modules/sharpy/linear/src/linuvlm.html#Frequency.nondimss)[¶](#sharpy.linear.src.linuvlm.Frequency.nondimss) Scale the state-space model based on self.ScalingFacts ####### Static[¶](#static) *class* `sharpy.linear.src.linuvlm.``Static`(*tsdata*, *for_vel=<sphinx.ext.autodoc.importer._MockObject object>*)[[source]](_modules/sharpy/linear/src/linuvlm.html#Static)[¶](#sharpy.linear.src.linuvlm.Static) Static linear solver `assemble`()[[source]](_modules/sharpy/linear/src/linuvlm.html#Static.assemble)[¶](#sharpy.linear.src.linuvlm.Static.assemble) Assemble global matrices `assemble_profiling`()[[source]](_modules/sharpy/linear/src/linuvlm.html#Static.assemble_profiling)[¶](#sharpy.linear.src.linuvlm.Static.assemble_profiling) Generate profiling report for assembly and save it in self.prof_out.
To read the report: import pstats p=pstats.Stats(self.prof_out) `get_rigid_motion_gains`(*zeta_rotation=<sphinx.ext.autodoc.importer._MockObject object>*)[[source]](_modules/sharpy/linear/src/linuvlm.html#Static.get_rigid_motion_gains)[¶](#sharpy.linear.src.linuvlm.Static.get_rigid_motion_gains) Gains to reproduce rigid-body motion such that grid displacements and velocities are given by: > * `dzeta     = Ktra*u_tra         + Krot*u_rot` > * `dzeta_dot = Ktra_vel*u_tra_dot + Krot*u_rot_dot` Rotations are assumed to happen independently with respect to the zeta_rotation point and about the x, y and z axes of the inertial frame. `get_sect_forces_gain`()[[source]](_modules/sharpy/linear/src/linuvlm.html#Static.get_sect_forces_gain)[¶](#sharpy.linear.src.linuvlm.Static.get_sect_forces_gain) Gains to compute sectional forces. Moments are computed w.r.t. the mid-vertex (chord-wise index M/2) of each section. `get_total_forces_gain`(*zeta_pole=<sphinx.ext.autodoc.importer._MockObject object>*)[[source]](_modules/sharpy/linear/src/linuvlm.html#Static.get_total_forces_gain)[¶](#sharpy.linear.src.linuvlm.Static.get_total_forces_gain) Calculates gain matrices to calculate the total force (Kftot) and moment (Kmtot, Kmtot_disp) about the pole zeta_pole.
Being \(f\) and \(\zeta\) the force and position at the vertex (m,n) of the lattice, these are produced as: > * `ftot=sum(f) -> dftot += df` > * `mtot=sum((zeta-zeta_pole) x f) ->       dmtot +=  cross(zeta0-zeta_pole) df - cross(f0) dzeta` `reshape`()[[source]](_modules/sharpy/linear/src/linuvlm.html#Static.reshape)[¶](#sharpy.linear.src.linuvlm.Static.reshape) Reshapes state/output according to SHARPy format `solve`()[[source]](_modules/sharpy/linear/src/linuvlm.html#Static.solve)[¶](#sharpy.linear.src.linuvlm.Static.solve) Solve for bound \(\Gamma\) using the equation: \[\mathcal{A}(\Gamma^n) = u^n\] # … at constant rotation speed `self.Dfqsdzeta+=scalg.block_diag(*ass.dfqsdzeta_omega(MS.Surfs,MS.Surfs_star))` `total_forces`(*zeta_pole=<sphinx.ext.autodoc.importer._MockObject object>*)[[source]](_modules/sharpy/linear/src/linuvlm.html#Static.total_forces)[¶](#sharpy.linear.src.linuvlm.Static.total_forces) Calculates total force (Ftot) and moment (Mtot) about the pole zeta_pole. ###### Generation of multiple aerodynamic surfaces[¶](#generation-of-multiple-aerodynamic-surfaces) S. Maraniello, 25 May 2018 ####### MultiAeroGridSurfaces[¶](#multiaerogridsurfaces) *class* `sharpy.linear.src.multisurfaces.``MultiAeroGridSurfaces`(*tsdata*, *for_vel=<sphinx.ext.autodoc.importer._MockObject object>*)[[source]](_modules/sharpy/linear/src/multisurfaces.html#MultiAeroGridSurfaces)[¶](#sharpy.linear.src.multisurfaces.MultiAeroGridSurfaces) Creates and assembles multiple aerodynamic surfaces from data `get_ind_velocities_at_collocation_points`()[[source]](_modules/sharpy/linear/src/multisurfaces.html#MultiAeroGridSurfaces.get_ind_velocities_at_collocation_points)[¶](#sharpy.linear.src.multisurfaces.MultiAeroGridSurfaces.get_ind_velocities_at_collocation_points) Computes induced velocities at collocation points.
`get_ind_velocities_at_segments`(*overwrite=False*)[[source]](_modules/sharpy/linear/src/multisurfaces.html#MultiAeroGridSurfaces.get_ind_velocities_at_segments)[¶](#sharpy.linear.src.multisurfaces.MultiAeroGridSurfaces.get_ind_velocities_at_segments) Computes induced velocities at mid-segment points. `get_joukovski_qs`(*overwrite=False*)[[source]](_modules/sharpy/linear/src/multisurfaces.html#MultiAeroGridSurfaces.get_joukovski_qs)[¶](#sharpy.linear.src.multisurfaces.MultiAeroGridSurfaces.get_joukovski_qs) Returns quasi-steady forces over Warning: forces are stored in a NON-redundant format: (3,4,M,N) where the element (:,ss,mm,nn) is the contribution to the force over the ss-th segment due to the circulation of panel (mm,nn). `get_normal_ind_velocities_at_collocation_points`()[[source]](_modules/sharpy/linear/src/multisurfaces.html#MultiAeroGridSurfaces.get_normal_ind_velocities_at_collocation_points)[¶](#sharpy.linear.src.multisurfaces.MultiAeroGridSurfaces.get_normal_ind_velocities_at_collocation_points) Computes normal induced velocities at collocation points. Note: for the state equation both projected and non-projected induced velocities are required at the collocation points. Hence, this method first tries to use the u_ind_coll attribute in each surface. `verify_aic_coll`()[[source]](_modules/sharpy/linear/src/multisurfaces.html#MultiAeroGridSurfaces.verify_aic_coll)[¶](#sharpy.linear.src.multisurfaces.MultiAeroGridSurfaces.verify_aic_coll) Verify aic at collocation points using the non-penetration condition `verify_joukovski_qs`()[[source]](_modules/sharpy/linear/src/multisurfaces.html#MultiAeroGridSurfaces.verify_joukovski_qs)[¶](#sharpy.linear.src.multisurfaces.MultiAeroGridSurfaces.verify_joukovski_qs) Verify that the quasi-steady contribution for forces matches against SHARPy.
`verify_non_penetration`()[[source]](_modules/sharpy/linear/src/multisurfaces.html#MultiAeroGridSurfaces.verify_non_penetration)[¶](#sharpy.linear.src.multisurfaces.MultiAeroGridSurfaces.verify_non_penetration)

Verify that the state variables fulfil the non-penetration condition at the bound surfaces.

###### Geometrical methods for bound surfaces[¶](#geometrical-methods-for-bound-surfaces)

S. Maraniello, 20 May 2018

####### AeroGridGeo[¶](#aerogridgeo)

*class* `sharpy.linear.src.surface.``AeroGridGeo`(*Map: gridmapping.AeroGridMap instance*, *zeta: Array of vertex coordinates at each surface*, *aM: chord-wise position of collocation point in panel = 0.5*, *aN: span-wise position of collocation point in panel = 0.5*)[[source]](_modules/sharpy/linear/src/surface.html#AeroGridGeo)[¶](#sharpy.linear.src.surface.AeroGridGeo)

Allows retrieving geometrical information of a surface. Requires a gridmapping.AeroGridMap mapping structure and the surface vertex coordinates as input.

Indices convention: each panel is characterised through the following indices:
- m,n: chord/span-wise indices

Methods:
- get_*: retrieve information of a panel (e.g. normal, surface area)
- generate_*: apply a get_* method to each panel and store the info into an array.

Interpolation matrices, W:
- these are labelled as ‘Wba’, where ‘a’ defines the initial format and ‘b’ the final one.
Hence, given the array vb, it holds that va=Wab*vb.

`get_panel_collocation`(*zetav_here*)[[source]](_modules/sharpy/linear/src/surface.html#AeroGridGeo.get_panel_collocation)[¶](#sharpy.linear.src.surface.AeroGridGeo.get_panel_collocation)

Using bilinear interpolation, retrieves the panel collocation point, where aN,aM in [0,1] are distances in the chordwise and spanwise directions such that:

> * (aM,aN)=(0,0) –> quantity at vertex 0
> * (aM,aN)=(1,0) –> quantity at vertex 1
> * (aM,aN)=(1,1) –> quantity at vertex 2
> * (aM,aN)=(0,1) –> quantity at vertex 3

`get_panel_vertices_coords`(*m*, *n*)[[source]](_modules/sharpy/linear/src/surface.html#AeroGridGeo.get_panel_vertices_coords)[¶](#sharpy.linear.src.surface.AeroGridGeo.get_panel_vertices_coords)

Retrieves the coordinates of the vertices of panel (m,n).

`get_panel_wcv`()[[source]](_modules/sharpy/linear/src/surface.html#AeroGridGeo.get_panel_wcv)[¶](#sharpy.linear.src.surface.AeroGridGeo.get_panel_wcv)

Produces a compact array with the weights for bilinear interpolation, where aN,aM in [0,1] are distances in the chordwise and spanwise directions such that:

> * (aM,aN)=(0,0) –> quantity at vertex 0
> * (aM,aN)=(1,0) –> quantity at vertex 1
> * (aM,aN)=(1,1) –> quantity at vertex 2
> * (aM,aN)=(0,1) –> quantity at vertex 3

`interp_vertex_to_coll`(*q_vert*)[[source]](_modules/sharpy/linear/src/surface.html#AeroGridGeo.interp_vertex_to_coll)[¶](#sharpy.linear.src.surface.AeroGridGeo.interp_vertex_to_coll)

Projects a quantity q_vert (scalar or vector) defined at the vertices to the collocation points.

`project_coll_to_normal`(*q_coll*)[[source]](_modules/sharpy/linear/src/surface.html#AeroGridGeo.project_coll_to_normal)[¶](#sharpy.linear.src.surface.AeroGridGeo.project_coll_to_normal)

Projects a vector quantity q_coll defined at the collocation points onto the panel normal.
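The bilinear weights produced by `get_panel_wcv` and their use in `interp_vertex_to_coll` can be sketched as follows (a stand-alone illustration of the vertex ordering above, not the SHARPy code itself):

```python
import numpy as np

def panel_wcv(aM=0.5, aN=0.5):
    """Bilinear interpolation weights for panel vertices 0..3."""
    return np.array([(1 - aM) * (1 - aN),  # vertex 0: (aM,aN)=(0,0)
                     aM * (1 - aN),        # vertex 1: (aM,aN)=(1,0)
                     aM * aN,              # vertex 2: (aM,aN)=(1,1)
                     (1 - aM) * aN])       # vertex 3: (aM,aN)=(0,1)

def interp_vertex_to_coll(q_vert, aM=0.5, aN=0.5):
    """q_vert: (..., 4) quantity at the panel vertices -> collocation value."""
    return q_vert @ panel_wcv(aM, aN)

# with aM = aN = 0.5 the collocation value is the vertex average
qc = interp_vertex_to_coll(np.array([1.0, 2.0, 3.0, 4.0]))  # 2.5
```

The weights always sum to one, so a constant vertex field is interpolated exactly.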
####### AeroGridSurface[¶](#aerogridsurface)

*class* `sharpy.linear.src.surface.``AeroGridSurface`(*Map*, *zeta*, *gamma*, *u_ext=None*, *zeta_dot=None*, *gamma_dot=None*, *rho=1.0*, *aM=0.5*, *aN=0.5*, *for_vel=<sphinx.ext.autodoc.importer._MockObject object>*)[[source]](_modules/sharpy/linear/src/surface.html#AeroGridSurface)[¶](#sharpy.linear.src.surface.AeroGridSurface)

Contains geometric and aerodynamic information about a bound/wake surface.

Compulsory inputs are those that apply to both bound and wake surfaces:

* `zeta`: defines geometry
* `gamma`: circulation

With respect to [`AeroGridGeo`](index.html#sharpy.linear.src.surface.AeroGridGeo), the class contains methods to:

* project the prescribed input velocities at the nodes (`u_ext`, `zeta_dot`) onto the collocation points.
* compute the induced velocity over ANOTHER surface.
* compute the AIC induced over ANOTHER surface.

| Parameters: | * **Map** ([*gridmapping.AeroGridMap*](index.html#sharpy.linear.src.gridmapping.AeroGridMap)) – Map of grid. * **zeta** (*list**(**np.ndarray**)*) – Grid vertices coordinates in inertial (G) frame. * **zeta_dot** (*list**(**np.ndarray**)*) – Grid vertices velocities in inertial (G) frame. Default is `None`. * **u_ext** (*list**(**np.ndarray**)*) – Grid external velocities in inertial (G) frame. Default is `None`. * **gamma_dot** (*list**(**np.ndarray**)*) – Panel circulation derivative. Default is `None`. * **rho** (*float*) – Air density. Default is `1.0` * **aM** (*float*) – Chordwise position in panel of collocation point. Default is `0.5` * **aN** (*float*) – Spanwise position in panel of collocation point. Default is `0.5` * **for_vel** (*np.ndarray*) – Frame of reference velocity (including rotational velocity) in the inertial frame. 
| To add:

* project prescribed input velocity at nodes (u_ext, zeta_dot) over mid-point segments

`get_aic3`(*zeta_target*)[[source]](_modules/sharpy/linear/src/surface.html#AeroGridSurface.get_aic3)[¶](#sharpy.linear.src.surface.AeroGridSurface.get_aic3)

Produces the influence coefficient matrix to calculate the induced velocity at a target point. The aic3 matrix has shape (3,K).

`get_aic_over_surface`(*Surf_target*, *target='collocation'*, *Project=True*)[[source]](_modules/sharpy/linear/src/surface.html#AeroGridSurface.get_aic_over_surface)[¶](#sharpy.linear.src.surface.AeroGridSurface.get_aic_over_surface)

Produces influence coefficient matrices such that the velocity induced over Surf_target is given by the product:

if target==’collocation’:

if Project: u_ind_coll_norm.reshape(-1)=AIC*self.gamma.reshape(-1,order=’C’)

else: u_ind_coll_norm[ii,:,:].reshape(-1)= AIC[ii,:,:]*self.gamma.reshape(-1,order=’C’) where ii=0,1,2

if target==’segments’:

* AIC has shape (3,self.maps.K,4,Mout,Nout), such that AIC[:,:,ss,mm,nn] is the influence coefficient matrix associated to the induced velocity at segment ss of panel (mm,nn)

`get_induced_velocity`(*zeta_target*)[[source]](_modules/sharpy/linear/src/surface.html#AeroGridSurface.get_induced_velocity)[¶](#sharpy.linear.src.surface.AeroGridSurface.get_induced_velocity)

Computes the induced velocity at the point zeta_target.

`get_induced_velocity_over_surface`(*Surf_target*, *target='collocation'*, *Project=False*)[[source]](_modules/sharpy/linear/src/surface.html#AeroGridSurface.get_induced_velocity_over_surface)[¶](#sharpy.linear.src.surface.AeroGridSurface.get_induced_velocity_over_surface)

Computes the induced velocity over an instance of AeroGridSurface, where target specifies the target grid (collocation or segments). If Project is True, the velocities are projected onto the panel normal (only available at collocation points). Note: for the state equation, both projected and non-projected velocities at the collocation points are required.
Hence, it is suggested to use this method with Project=False, and project afterwards.

Warning: induced velocities at grid segments are stored in a redundant format:

> (3,4,M,N)

where the element (:,ss,mm,nn) is the induced velocity over the ss-th segment of panel (mm,nn). A fast looping is implemented to re-use previously computed velocities.

`get_input_velocities_at_collocation_points`()[[source]](_modules/sharpy/linear/src/surface.html#AeroGridSurface.get_input_velocities_at_collocation_points)[¶](#sharpy.linear.src.surface.AeroGridSurface.get_input_velocities_at_collocation_points)

> Returns velocities at the collocation points from the nodal values `u_ext` and `zeta_dot` of shape `(3, M+1, N+1)`.
> Notes:
> \[\boldsymbol{u}_c = \mathcal{W}_{cv}(\boldsymbol{u}_0 - \dot{\boldsymbol{\zeta}}_0)\]
> is the input velocity at the collocation point, where \(\mathcal{W}_{cv}\) projects the velocity from the grid points onto the collocation point. This quantity depends on the coordinates `zeta` when the body is rotating.

`get_input_velocities_at_segments`()[[source]](_modules/sharpy/linear/src/surface.html#AeroGridSurface.get_input_velocities_at_segments)[¶](#sharpy.linear.src.surface.AeroGridSurface.get_input_velocities_at_segments)

Returns velocities at mid-segment points from the nodal values u_ext and zeta_dot of shape (3,M+1,N+1).

Warning: input velocities at grid segments are stored in a redundant format:

> (3,4,M,N)

where the element (:,ss,mm,nn) is the input velocity over the ss-th segment of panel (mm,nn). A fast looping is implemented to re-use previously computed velocities.

2018/08/24: Include effects due to rotation (omega x zeta).
Now it depends on the coordinates zeta.

`get_joukovski_qs`(*gammaw_TE=None*, *recompute_velocities=True*)[[source]](_modules/sharpy/linear/src/surface.html#AeroGridSurface.get_joukovski_qs)[¶](#sharpy.linear.src.surface.AeroGridSurface.get_joukovski_qs)

Returns quasi-steady forces evaluated at the mid-segment points over the surface.

Important: the circulation at the first row of wake panels is required! Hence, the trailing-edge wake circulation (`gammaw_TE`) must be provided.

Warning: forces are stored in a NON-redundant format: (3,4,M,N) where the element (:,ss,mm,nn) is the contribution to the force over the ss-th segment due to the circulation of panel (mm,nn).

`get_joukovski_unsteady`()[[source]](_modules/sharpy/linear/src/surface.html#AeroGridSurface.get_joukovski_unsteady)[¶](#sharpy.linear.src.surface.AeroGridSurface.get_joukovski_unsteady)

Returns added-mass effects over the lattice grid.

`get_normal_input_velocities_at_collocation_points`()[[source]](_modules/sharpy/linear/src/surface.html#AeroGridSurface.get_normal_input_velocities_at_collocation_points)[¶](#sharpy.linear.src.surface.AeroGridSurface.get_normal_input_velocities_at_collocation_points)

From the nodal input velocities to the normal velocities at the collocation points.

####### get_aic3_cpp[¶](#module-sharpy.linear.src.surface.get_aic3_cpp)

Used by autodoc_mock_imports.
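The quasi-steady Joukowski force on a single vortex segment follows \(f = \rho\,\Gamma\,(u \times l)\), with \(l\) the segment vector and \(u\) the velocity at its mid-point. A minimal sketch (function name and signature are illustrative, not SHARPy's API):

```python
import numpy as np

def joukovski_qs_segment(gamma, u_mid, zeta_a, zeta_b, rho=1.0):
    """Quasi-steady force on the vortex segment from zeta_a to zeta_b:
    f = rho * gamma * (u x l), with l = zeta_b - zeta_a."""
    l = zeta_b - zeta_a
    return rho * gamma * np.cross(u_mid, l)

# unit-circulation segment along y in a unit x-wise flow -> unit lift along z
f = joukovski_qs_segment(gamma=1.0,
                         u_mid=np.array([1.0, 0.0, 0.0]),
                         zeta_a=np.array([0.0, 0.0, 0.0]),
                         zeta_b=np.array([0.0, 1.0, 0.0]))
```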
##### Utils[¶](#utils) ###### State-space modules loading utilities[¶](#state-space-modules-loading-utilities) ####### sys_list_from_path[¶](#module-sharpy.linear.utils.ss_interface.sys_list_from_path) Returns the files containing linear system state space elements | param cwd: | Current working directory | | type cwd: | str | Returns: ###### Linear State Space Element Class[¶](#linear-state-space-element-class) Linear State Space Element Class ####### Element[¶](#element) *class* `sharpy.linear.utils.sselements.``Element`[[source]](_modules/sharpy/linear/utils/sselements.html#Element)[¶](#sharpy.linear.utils.sselements.Element) State space member #### Model Order Reduction[¶](#model-order-reduction) ##### Balancing Methods[¶](#balancing-methods) The following classes are available to reduce a linear system employing balancing methods. The main class is [`Balanced`](index.html#sharpy.rom.balanced.Balanced) and the other available classes: * [`Direct`](index.html#sharpy.rom.balanced.Direct) * [`Iterative`](index.html#sharpy.rom.balanced.Iterative) * [`FrequencyLimited`](index.html#sharpy.rom.balanced.FrequencyLimited) correspond to the reduction algorithm. ###### Balanced[¶](#balanced) *class* `sharpy.rom.balanced.``Balanced`[[source]](_modules/sharpy/rom/balanced.html#Balanced)[¶](#sharpy.rom.balanced.Balanced) Balancing ROM methods Main class to load a balancing ROM. See below for the appropriate settings to be parsed in the `algorithm_settings` based on your selection. 
Supported algorithms:

* Direct balancing [`Direct`](index.html#sharpy.rom.balanced.Direct)
* Iterative balancing [`Iterative`](index.html#sharpy.rom.balanced.Iterative)
* Frequency limited balancing [`FrequencyLimited`](index.html#sharpy.rom.balanced.FrequencyLimited)

The settings that this solver accepts are given by a dictionary, with the following key-value pairs:

| Name | Type | Description | Default | Options |
| --- | --- | --- | --- | --- |
| `print_info` | `bool` | Write output to screen | `True` | |
| `algorithm` | `str` | Balanced realisation method | | `Direct`, `Iterative`, `FrequencyLimited` |
| `algorithm_settings` | `dict` | Settings for the desired algorithm | `{}` | |

###### Direct[¶](#direct)

*class* `sharpy.rom.balanced.``Direct`[[source]](_modules/sharpy/rom/balanced.html#Direct)[¶](#sharpy.rom.balanced.Direct)

Find a balanced realisation of continuous-time (`DLTI = False`) and discrete-time (`DLTI = True`) LTI systems using scipy libraries.

The function achieves a balanced realisation of the state-space system by first solving the Lyapunov equations. They are solved using the Bartels-Stewart algorithm for the Sylvester equation, which is based on a Schur decomposition of the A matrix.

\[\begin{split}\mathbf{A\,W_c + W_c\,A^T + B\,B^T} &= 0 \\ \mathbf{A^T\,W_o + W_o\,A + C^T\,C} &= 0\end{split}\]

to obtain the reachability and observability gramians, which are positive definite matrices.
Then, the gramians are decomposed into their Cholesky factors such that:

\[\begin{split}\mathbf{W_c} &= \mathbf{Q_c\,Q_c^T} \\ \mathbf{W_o} &= \mathbf{Q_o\,Q_o^T}\end{split}\]

A singular value decomposition (SVD) of the product of the Cholesky factors is performed

\[(\mathbf{Q_o^T\,Q_c}) = \mathbf{U\,\Sigma\,V^*}\]

The singular values are then used to build the transformation matrix \(\mathbf{T}\)

\[\begin{split}\mathbf{T} &= \mathbf{Q_c\,V\,\Sigma}^{-1/2} \\ \mathbf{T}^{-1} &= \mathbf{\Sigma}^{-1/2}\,\mathbf{U^T\,Q_o^T}\end{split}\]

The balanced system is therefore of the form:

\[\begin{split}\mathbf{A_b} &= \mathbf{T\,A\,T^{-1}} \\ \mathbf{B_b} &= \mathbf{T\,B} \\ \mathbf{C_b} &= \mathbf{C\,T^{-1}} \\ \mathbf{D_b} &= \mathbf{D}\end{split}\]

Warning

This function may be less computationally efficient than the `balreal` Matlab implementation and does not offer the option to bound the realisation in frequency and time.

Notes

Lyapunov equations are solved using the Bartels-Stewart algorithm for the Sylvester equation, which is based on a Schur decomposition of the A matrix.

| Parameters: | * **A** (*np.ndarray*) – Plant Matrix * **B** (*np.ndarray*) – Input Matrix * **C** (*np.ndarray*) – Output Matrix * **DLTI** (*bool*) – Discrete time state-space flag * **Schur** (*bool*) – Use Schur decomposition to solve the Lyapunov equations |
| Returns: | Tuple of the form `(S, T, Tinv)` containing: * Singular values in diagonal matrix (`S`) * Transformation matrix (`T`). * Inverse transformation matrix (`Tinv`). |
| Return type: | tuple of np.ndarrays |

References

Antoulas, A.C. Approximation of Large Scale Dynamical Systems. Chapter 7. Advances in Design and Control. SIAM. 2005.
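The square-root procedure above can be sketched with scipy (an illustrative stand-alone sketch, not `sharpy.rom.balanced.Direct` itself; note that this sketch applies the transformation as \(x = \mathbf{T}x_b\), i.e. it forms the balanced matrices as `Tinv @ A @ T` etc., which makes the two transformed Gramians equal and diagonal):

```python
import numpy as np
import scipy.linalg as scalg

def balreal_direct(A, B, C):
    """Square-root balanced realisation of a stable continuous-time system.

    Returns (S, T, Tinv), with S the Hankel singular values.
    """
    # Gramians from the two Lyapunov equations
    Wc = scalg.solve_continuous_lyapunov(A, -B @ B.T)
    Wo = scalg.solve_continuous_lyapunov(A.T, -C.T @ C)
    # Cholesky factors Wc = Qc Qc^T, Wo = Qo Qo^T
    Qc = scalg.cholesky(Wc, lower=True)
    Qo = scalg.cholesky(Wo, lower=True)
    # SVD of the factor product yields the Hankel singular values
    U, S, Vh = scalg.svd(Qo.T @ Qc)
    Sroot = np.diag(S ** -0.5)
    T = Qc @ Vh.T @ Sroot
    Tinv = Sroot @ U.T @ Qo.T
    return S, T, Tinv

# small stable system (illustrative values)
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
S, T, Tinv = balreal_direct(A, B, C)
Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T  # balanced realisation
```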
The settings that this solver accepts are given by a dictionary, with the following key-value pairs:

| Name | Type | Description | Default | Options |
| --- | --- | --- | --- | --- |
| `tune` | `bool` | Tune ROM to specified tolerance | `True` | |
| `use_schur` | `bool` | Use Schur decomposition during build | `False` | |
| `rom_tolerance` | `float` | Absolute accuracy with respect to full order frequency response | `0.01` | |
| `rom_tune_freq_range` | `list(float)` | Beginning and end of frequency range where to tune ROM | `[0, 1]` | |
| `convergence` | `str` | ROM tuning convergence. If `min`, attempts to find the minimal number of states. If `all`, it starts from a larger size ROM until convergence to the specified tolerance is found. | `min` | |
| `reduction_method` | `str` | Desired reduction method | `realisation` | `realisation`, `truncation` |

###### FrequencyLimited[¶](#frequencylimited)

*class* `sharpy.rom.balanced.``FrequencyLimited`[[source]](_modules/sharpy/rom/balanced.html#FrequencyLimited)[¶](#sharpy.rom.balanced.FrequencyLimited)

Method for frequency limited balancing.

The observability and controllability Gramians over the frequencies kv are solved in factorised form. Balanced modes are then obtained with a square-root method.

Details:

> * Observability and controllability Gramians are solved in factorised form through explicit integration. The number of integration points determines both the accuracy and the maximum size of the balanced model.
> * Stability over all (Nb) balanced states is achieved if:
> > 1. one of the Gramians is integrated through the full Nyquist range
> > 2. enough integration points are used.

Input:

* DictBalFreq: dictionary specifying the integration method with keys:

> + `frequency`: defines limit frequencies for balancing. The balanced model will be accurate in the range `[0,F]`, where `F` is the value of this key. Note that `F` units must be consistent with the units specified in the `self.ScalingFacts` dictionary.
> + `method_low`: `['gauss','trapz']` specifies whether to use Gauss quadrature or the trapezoidal rule in the low-frequency range `[0,F]`.
> + `options_low`: options to use for integration in the low-frequencies. These depend on the integration scheme (see below).
> + `method_high`: method to use for integration in the range [F,F_N], where F_N is the Nyquist frequency. See `method_low`.
> + `options_high`: options to use for integration in the high-frequencies.
> + `check_stability`: if True, the balanced model is truncated to eliminate unstable modes - if any is found. Note that a very accurate balanced model can still be obtained, even if high order modes are unstable. Note that this option is overridden if “”
> + `get_frequency_response`: if True, the function also returns the frequency response evaluated at the low-frequency range integration points. If True, this option also allows the balanced model to be tuned automatically.

Future options:

* Ncpu: for parallel run

The following integration schemes are available:

* `trapz`: performs integration over equally spaced points using the trapezoidal rule. It accepts options dictionaries with keys:

> + `points`: number of integration points to use (including domain boundary)

* `gauss`: performs Gauss-Lobatto quadrature. The domain can be partitioned in Npart sub-domains in which a Gauss-Lobatto quadrature of order Ord can be applied. A total number of Npart*Ord points is required. It accepts options dictionaries of the form:

> + `partitions`: number of partitions
> + `order`: quadrature order.

Examples

The following dictionary

```
>>> DictBalFreq={'frequency': 1.2,
>>>              'method_low': 'trapz',
>>>              'options_low': {'points': 12},
>>>              'method_high': 'gauss',
>>>              'options_high': {'partitions': 2, 'order': 8},
>>>              'check_stability': True }
```

balances the state-space model in the frequency range [0, 1.2] using:

> 1. 12 equally-spaced points integration of the Gramians in the low-frequency range [0,1.2] and
> 2. 
two Gauss-Lobatto quadratures of order 8 of the controllability Gramian in the high-frequency range.

A total number of 28 integration points will be required, which will result in a balanced model with a number of states

```
>>> min{ 2*28* number_inputs, 2*28* number_outputs }
```

The model is finally truncated so as to retain only the first Ns stable modes.

The settings that this solver accepts are given by a dictionary, with the following key-value pairs:

| Name | Type | Description | Default | Options |
| --- | --- | --- | --- | --- |
| `frequency` | `float` | defines limit frequencies for balancing. The balanced model will be accurate in the range `[0,F]`, where `F` is the value of this key. Note that `F` units must be consistent with the units specified in the `self.ScalingFacts` dictionary. | `1.0` | |
| `method_low` | `str` | Specifies whether to use Gauss quadrature or the trapezoidal rule in the low-frequency range `[0,F]` | `trapz` | `gauss`, `trapz` |
| `options_low` | `dict` | Settings for the low frequency integration. See Notes. | `{}` | |
| `method_high` | `str` | Specifies whether to use Gauss quadrature or the trapezoidal rule in the high-frequency range `[F,FN]` | `trapz` | `gauss`, `trapz` |
| `options_high` | `dict` | Settings for the high frequency integration. See Notes. | `{}` | |
| `check_stability` | `bool` | if True, the balanced model is truncated to eliminate unstable modes - if any is found. Note that a very accurate balanced model can still be obtained, even if high order modes are unstable. | `True` | |
| `get_frequency_response` | `bool` | if True, the function also returns the frequency response evaluated at the low-frequency range integration points. If True, this option also allows the balanced model to be tuned automatically. 
| `False` | |

The parameters of integration take the following options:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `points` | `int` | Trapezoidal points of integration | `12` |
| `partitions` | `int` | Number of Gauss-Lobatto quadratures | `2` |
| `order` | `int` | Order of Gauss-Lobatto quadratures | `2` |

###### Iterative[¶](#iterative)

*class* `sharpy.rom.balanced.``Iterative`[[source]](_modules/sharpy/rom/balanced.html#Iterative)[¶](#sharpy.rom.balanced.Iterative)

Find a balanced realisation of a DLTI system.

Notes

Lyapunov equations are solved using the iterative squared Smith algorithm, in its low or full rank version. These implementations are as per the low_rank_smith and smith_iter functions respectively but, for computational efficiency, the iterations are rewritten here so as to solve for the observability and controllability Gramians simultaneously.

* Exploiting sparsity:

> This algorithm is not ideal for exploiting sparsity. However, the following strategies are implemented:
> > + if the A matrix is provided in sparse format, the powers of A will be calculated exploiting sparsity UNTIL the number of non-zero elements is below 15% of the size of A. Upon this threshold, the cost of the matrix multiplication rises dramatically, and A is hence converted to a dense numpy array.
> The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | | --- | --- | --- | --- | | `lowrank` | `bool` | Use low rank methods | `True` | | `smith_tol` | `float` | Smith tolerance | `1e-10` | | `tolSVD` | `float` | SVD threshold | `1e-06` | ##### Krylov-subspaces model order reduction techniques[¶](#krylov-subspaces-model-order-reduction-techniques) ###### Krylov[¶](#krylov) *class* `sharpy.rom.krylov.``Krylov`[[source]](_modules/sharpy/rom/krylov.html#Krylov)[¶](#sharpy.rom.krylov.Krylov) Model Order Reduction Methods for Single Input Single Output (SISO) and MIMO Linear Time-Invariant (LTI) Systems using moment matching (Krylov Methods). Examples General calling sequences for different systems SISO single point interpolation: ``` >>> algorithm = 'one_sided_arnoldi' >>> interpolation_point = np.array([0.0]) >>> krylov_r = 4 >>> >>> rom = Krylov() >>> rom.initialise(sharpy_data, FullOrderModelSS) >>> rom.run(algorithm, krylov_r, interpolation_point) ``` 2 by 2 MIMO with tangential, multipoint interpolation: ``` >>> algorithm = 'dual_rational_arnoldi' >>> interpolation_point = np.array([0.0, 1.0j]) >>> krylov_r = 4 >>> right_vector = np.block([[1, 0], [0, 1]]) >>> left_vector = right_vector >>> >>> rom = Krylov() >>> rom.initialise(sharpy_data, FullOrderModelSS) >>> rom.run(algorithm, krylov_r, interpolation_point, right_vector, left_vector) ``` 2 by 2 MIMO multipoint interpolation: ``` >>> algorithm = 'mimo_rational_arnoldi' >>> interpolation_point = np.array([0.0]) >>> krylov_r = 4 >>> >>> rom = Krylov() >>> rom.initialise(sharpy_data, FullOrderModelSS) >>> rom.run(algorithm, krylov_r, interpolation_point) ``` The settings that this solver accepts are given by a dictionary, with the following key-value pairs: | Name | Type | Description | Default | Options | | --- | --- | --- | --- | --- | | `print_info` | `bool` | Write ROM information to screen and log | `True` | | | 
`frequency` | `list(complex)` | Interpolation points in the continuous time complex plane [rad/s] | `[0]` | |
| `algorithm` | `str` | Krylov reduction method algorithm | | |
| `r` | `int` | Moments to match at the interpolation points | `1` | |
| `single_side` | `str` | Construct the rom using a single side. Leave blank (or empty string) for both. | | `controllability`, `observability` |
| `tangent_input_file` | `str` | Filepath to .h5 file containing tangent interpolation vectors | | |
| `restart_arnoldi` | `bool` | Restart Arnoldi iteration with r-=1 if ROM is unstable | `False` |

`check_stability`(*restart_arnoldi=False*)[[source]](_modules/sharpy/rom/krylov.html#Krylov.check_stability)[¶](#sharpy.rom.krylov.Krylov.check_stability)

Checks the stability of the ROM by computing its eigenvalues. If the resulting system is unstable, the Arnoldi procedure can be restarted to eliminate the eigenvalues outside the stability boundary. However, if this is the case, the ROM no longer matches the moments of the original system at the specified frequencies, since the approximation is now done with respect to a system of the form:

> \[\begin{split}\Sigma = \left(\begin{array}{c|c} \mathbf{A} & \mathbf{\bar{B}} \\ \hline \mathbf{C} & \ \end{array}\right)\end{split}\]

where \(\mathbf{\bar{B}} = (\mu \mathbf{I}_n - \mathbf{A})\mathbf{B}\)

| Parameters: | **restart_arnoldi** (*bool*) – Restart the relevant Arnoldi algorithm with the unstable eigenvalues removed. |

`dual_rational_arnoldi`(*frequency*, *r*)[[source]](_modules/sharpy/rom/krylov.html#Krylov.dual_rational_arnoldi)[¶](#sharpy.rom.krylov.Krylov.dual_rational_arnoldi)

Dual Rational Arnoldi interpolation for SISO systems [1] and MIMO systems through tangential interpolation [2].
Effectively the same as the two_sided_arnoldi and the resulting V matrices for each interpolation point are concatenated \[\begin{split}\bigcup\limits_{k = 1}^K\mathcal{K}_{b_k}((\sigma_i\mathbf{I}_n - \mathbf{A})^{-1}, (\sigma_i\mathbf{I}_n - \mathbf{A})^{-1}\mathbf{b})\subseteq\mathcal{V}&=\text{range}(\mathbf{V}) \\ \bigcup\limits_{k = 1}^K\mathcal{K}_{c_k}((\sigma_i\mathbf{I}_n - \mathbf{A})^{-T}, (\sigma_i\mathbf{I}_n - \mathbf{A})^{-T}\mathbf{c}^T)\subseteq\mathcal{Z}&=\text{range}(\mathbf{Z})\end{split}\] For MIMO systems, tangential interpolation is used through the right and left tangential direction vectors \(\mathbf{r}_i\) and \(\mathbf{l}_i\). \[\begin{split}\bigcup\limits_{k = 1}^K\mathcal{K}_{b_k}((\sigma_i\mathbf{I}_n - \mathbf{A})^{-1}, (\sigma_i\mathbf{I}_n - \mathbf{A})^{-1}\mathbf{Br}_i)\subseteq\mathcal{V}&=\text{range}(\mathbf{V}) \\ \bigcup\limits_{k = 1}^K\mathcal{K}_{c_k}((\sigma_i\mathbf{I}_n - \mathbf{A})^{-T}, (\sigma_i\mathbf{I}_n - \mathbf{A})^{-T}\mathbf{C}^T\mathbf{l}_i)\subseteq\mathcal{Z}&=\text{range}(\mathbf{Z})\end{split}\] | Parameters: | * **frequency** (*np.ndarray*) – Array containing the interpolation points \(\sigma = \{\sigma_1, \dots, \sigma_K\}\in\mathbb{C}\) * **r** (*int*) – Krylov space order \(b_k\) and \(c_k\). At the moment, different orders for the controllability and observability constructions are not supported. * **right_tangent** (*np.ndarray*) – Matrix containing the right tangential direction interpolation vector for each interpolation point in column form, i.e. \(\mathbf{r}\in\mathbb{R}^{m \times K}\). * **left_tangent** (*np.ndarray*) – Matrix containing the left tangential direction interpolation vector for each interpolation point in column form, i.e. \(\mathbf{l}\in\mathbb{R}^{p \times K}\). | | Returns: | The reduced order model matrices: \(\mathbf{A}_r\), \(\mathbf{B}_r\) and \(\mathbf{C}_r\). 
| | Return type: | tuple |

References

[1] Grimme

[2] Gallivan

`mimo_rational_arnoldi`(*frequency*, *r*)[[source]](_modules/sharpy/rom/krylov.html#Krylov.mimo_rational_arnoldi)[¶](#sharpy.rom.krylov.Krylov.mimo_rational_arnoldi)

Construct full rank orthonormal projection bases \(\mathbf{V}\) and \(\mathbf{W}\).

The main issue that one normally encounters with MIMO systems is that the minimality assumption of the system does not guarantee the resulting Krylov space to be full rank, unlike in the SISO case. Therefore, the construction is performed vector by vector, where linearly dependent vectors are eliminated or deflated from the Krylov subspace.

If the number of inputs differs from the number of outputs, both Krylov spaces will be built to the same size; therefore one Krylov space may be of higher order than the other one.

Following the method for vector-wise construction in Gugercin [1].

| Parameters: | * **frequency** (*np.ndarray*) – Array containing interpolation frequencies * **r** (*int*) – Krylov space order |
| Returns: | Tuple of reduced system matrices `A`, `B` and `C`. |
| Return type: | tuple |

References

[1] <NAME>. Projection Methods for Model Reduction of Large-Scale Dynamical Systems PhD Thesis. Rice University 2003.

`one_sided_arnoldi`(*frequency*, *r*)[[source]](_modules/sharpy/rom/krylov.html#Krylov.one_sided_arnoldi)[¶](#sharpy.rom.krylov.Krylov.one_sided_arnoldi)

One-sided Arnoldi method expansion about a single interpolation point, \(\sigma\). The projection matrix \(\mathbf{V}\) is constructed using an order \(r\) Krylov space.
The space for a single finite interpolation point, known as a Pade approximation, is described by:

> \[\text{range}(\textbf{V}) = \mathcal{K}_r((\sigma\mathbf{I}_n - \mathbf{A})^{-1}, (\sigma\mathbf{I}_n - \mathbf{A})^{-1}\mathbf{b})\]

In the case of an interpolation about infinity, the problem is known as partial realisation and the Krylov space is

> \[\text{range}(\textbf{V}) = \mathcal{K}_r(\mathbf{A}, \mathbf{b})\]

The resulting orthogonal projection leads to the following reduced order system:

> \[\begin{split}\hat{\Sigma} : \left(\begin{array}{c|c} \hat{A} & \hat{B} \\ \hline \hat{C} & {D}\end{array}\right) \text{with } \begin{cases}\hat{A}=V^TAV\in\mathbb{R}^{k\times k},\,\\ \hat{B}=V^TB\in\mathbb{R}^{k\times m},\,\\ \hat{C}=CV\in\mathbb{R}^{p\times k},\,\\ \hat{D}=D\in\mathbb{R}^{p\times m}\end{cases}\end{split}\]

| Parameters: | * **frequency** (*complex*) – Interpolation point \(\sigma \in \mathbb{C}\) * **r** (*int*) – Number of moments to match. Equivalent to the Krylov space order and the order of the ROM. |
| Returns: | The reduced order model matrices: \(\mathbf{A}_r\), \(\mathbf{B}_r\) and \(\mathbf{C}_r\) |
| Return type: | tuple |

`real_rational_arnoldi`(*frequency*, *r*)[[source]](_modules/sharpy/rom/krylov.html#Krylov.real_rational_arnoldi)[¶](#sharpy.rom.krylov.Krylov.real_rational_arnoldi)

When employing complex frequencies, the projection matrix can be normalised to be real, following Algorithm 1b in Lee (2006).

`restart`()[[source]](_modules/sharpy/rom/krylov.html#Krylov.restart)[¶](#sharpy.rom.krylov.Krylov.restart)

Implicitly Restarted Krylov Algorithm

`run`(*ss*)[[source]](_modules/sharpy/rom/krylov.html#Krylov.run)[¶](#sharpy.rom.krylov.Krylov.run)

Performs Model Order Reduction employing Krylov space projection methods.
Supported methods include: | Algorithm | Interpolation Points | Systems | | --- | --- | --- | | `one_sided_arnoldi` | 1 | SISO Systems | | `two_sided_arnoldi` | 1 | SISO Systems | | `dual_rational_arnoldi` | K | SISO systems and Tangential interpolation for MIMO systems | | `mimo_rational_arnoldi` | K | MIMO systems. Uses vector-wise construction (more robust) | | `mimo_block_arnoldi` | K | MIMO systems. Uses block Arnoldi methods (more efficient) | | Parameters: | **ss** ([*sharpy.linear.src.libss.ss*](index.html#sharpy.linear.src.libss.ss)) – State space to reduce | | Returns: | Reduced state space system | | Return type: | ([libss.ss](index.html#sharpy.linear.src.libss.ss)) | `stable_realisation`(**args*, ***kwargs*)[[source]](_modules/sharpy/rom/krylov.html#Krylov.stable_realisation)[¶](#sharpy.rom.krylov.Krylov.stable_realisation) Remove unstable poles left after reduction Using a Schur decomposition of the reduced plant matrix \(\mathbf{A}_m\in\mathbb{C}^{m\times m}\), the method removes the unstable eigenvalues that could have appeared after the moment-matching reduction. The oblique projection matrices \(\mathbf{T}_L\in\mathbb{C}^{m \times p}\) and \(\mathbf{T}_R\in\mathbb{C}^{m \times p}\) result in a stable realisation \[\mathbf{A}_s = \mathbf{T}_L^\top\mathbf{AT}_R \in \mathbb{C}^{p\times p}.\] | Parameters: | **A** (*np.ndarray*) – plant matrix (if not provided `self.ssrom.A` will be used). | | Returns: | Left and right projection matrices \(\mathbf{T}_L\in\mathbb{C}^{m \times p}\) and \(\mathbf{T}_R\in\mathbb{C}^{m \times p}\) | | Return type: | tuple | References Jaimoukha, <NAME>., <NAME>.. Implicitly Restarted Krylov Subspace Methods for Stable Partial Realizations. SIAM Journal of Matrix Analysis and Applications, 1997. 
See also The method employs [`sharpy.rom.utils.krylovutils.schur_ordered()`](index.html#module-sharpy.rom.utils.krylovutils.schur_ordered) and [`sharpy.rom.utils.krylovutils.remove_a12()`](index.html#module-sharpy.rom.utils.krylovutils.remove_a12). `two_sided_arnoldi`(*frequency*, *r*)[[source]](_modules/sharpy/rom/krylov.html#Krylov.two_sided_arnoldi)[¶](#sharpy.rom.krylov.Krylov.two_sided_arnoldi) Two-sided projection with a single interpolation point following the Arnoldi procedure. Very similar to the one-sided method available, but it adds the projection \(\mathbf{W}\) built using the Krylov space for the \(\mathbf{c}\) vector: > \[\mathcal{K}_r((\sigma\mathbf{I}_n - \mathbf{A})^{-T}, > (\sigma\mathbf{I}_n - \mathbf{A})^{-T}\mathbf{c}^T)\subseteq\mathcal{W}=\text{range}(\mathbf{W})\] The oblique projection \(\mathbf{VW}^T\) matches twice as many moments as the single sided projection. The resulting system takes the form: > \[\begin{split}\hat{\Sigma} : \left(\begin{array}{c|c} \hat{A} & \hat{B} \\ > \hline \hat{C} & {D}\end{array}\right) > \text{with } \begin{cases}\hat{A}=W^TAV\in\mathbb{R}^{k\times k},\,\\ > \hat{B}=W^TB\in\mathbb{R}^{k\times m},\,\\ > \hat{C}=CV\in\mathbb{R}^{p\times k},\,\\ > \hat{D}=D\in\mathbb{R}^{p\times m}\end{cases}\end{split}\] | Parameters: | * **frequency** (*complex*) – Interpolation point \(\sigma \in \mathbb{C}\) * **r** (*int*) – Number of moments to match on each side. The resulting ROM will be of order \(2r\). | | Returns: | The reduced order model matrices: \(\mathbf{A}_r\), \(\mathbf{B}_r\) and \(\mathbf{C}_r\). 
| | Return type: | tuple | ##### Utils[¶](#utils) ###### Krylov Model Reduction Methods Utilities[¶](#krylov-model-reduction-methods-utilities) ####### check_eye[¶](#module-sharpy.rom.utils.krylovutils.check_eye) Simple utility to verify matrix inverses. Asserts that \[\mathbf{T}^{-1}\mathbf{T} = \mathbf{I}\] | param T: | Matrix to test | | type T: | np.ndarray | | param Tinv: | Supposed matrix inverse | | type Tinv: | np.ndarray | | param msg: | Output error message if inverse check not satisfied | | type msg: | str | | param eps: | Error threshold (\(10^\varepsilon\)) | | type eps: | float | | raises: | `AssertionError` – if matrix inverse check is not satisfied | ####### construct_krylov[¶](#module-sharpy.rom.utils.krylovutils.construct_krylov) Constructs a Krylov subspace in an iterative manner following the methods of Gugercin [1]. The construction of the Krylov space is focused on the Pade and partial realisation cases for the purposes of model reduction. That is, the partial realisation form of the Krylov space is used if `approx_type = 'partial_realisation'` > \[\text{range}(\textbf{V}) = \mathcal{K}_r(\mathbf{A}, \mathbf{b})\] Else, it is replaced by the Pade approximation form: > \[\text{range}(\textbf{V}) = \mathcal{K}_r((\sigma\mathbf{I}_n - \mathbf{A})^{-1}, > (\sigma\mathbf{I}_n - \mathbf{A})^{-1}\mathbf{b})\] Note that no inverses are actually computed; rather, a single LU decomposition is performed at the beginning of the algorithm and forward and backward substitution is used thereafter to calculate the required vectors. The algorithm also builds the Krylov space for the \(\mathbf{C}^T\) matrix. It should simply replace `B`, and `side` should be `side = 'c'`.
Examples Partial Realisation: ``` >>> V = construct_krylov(r, A, B, 'partial_realisation', 'b') >>> W = construct_krylov(r, A, C.T, 'partial_realisation', 'c') ``` Pade Approximation: ``` >>> V = construct_krylov(r, (sigma * np.eye(nx) - A), B, 'Pade', 'b') >>> W = construct_krylov(r, (sigma * np.eye(nx) - A), C.T, 'Pade', 'c') ``` References [1]. <NAME>. - Projection Methods for Model Reduction of Large-Scale Dynamical Systems. PhD Thesis. Rice University. 2003. | param r: | Krylov space order | | type r: | int | | param lu_A: | For Pade approximations it should be the LU decomposition of \((\sigma I - \mathbf{A})\) in tuple form, as output from the `scipy.linalg.lu_factor()`. For partial realisations it is simply \(\mathbf{A}\). | | type lu_A: | np.ndarray | | param B: | If doing the B side it should be \(\mathbf{B}\), else \(\mathbf{C}^T\). | | type B: | np.ndarray | | param approx_type: | | | Type of approximation: `partial_realisation` or `Pade`. | | type approx_type: | | | str | | param side: | Side of the projection `b` or `c`. | | returns: | Projection matrix | | rtype: | np.ndarray | ####### evec[¶](#module-sharpy.rom.utils.krylovutils.evec) j-th unit vector (in row format) | param j: | Unit vector dimension | | returns: | j-th unit vector | | rtype: | np.ndarray | Examples ``` >>> evec(2) np.array([0, 1]) >>> evec(3) np.array([0, 0, 1]) ``` ####### lu_factor[¶](#module-sharpy.rom.utils.krylovutils.lu_factor) LU Factorisation wrapper of: \[LU = (\sigma \mathbf{I} - \mathbf{A})\] In the case of `A` being a sparse matrix, the sparse methods in scipy are employed | param sigma: | Expansion frequency | | type sigma: | float | | param A: | Dynamics matrix | | type A: | csc_matrix or np.ndarray | | returns: | tuple (dense) or SuperLU (sparse) objects containing the LU factorisation | | rtype: | tuple or SuperLU | ####### lu_solve[¶](#module-sharpy.rom.utils.krylovutils.lu_solve) LU solve wrapper. 
Computes the solution to \[\mathbf{Ax} = \mathbf{b}\] or \[\mathbf{A}^T\mathbf{x} = \mathbf{b}\] if `trans=1`. It uses the `SuperLU.solve()` method if the input is a `SuperLU` or else will revert to the dense methods in scipy. | param lu_A: | object or tuple containing the information of the LU factorisation | | type lu_A: | SuperLU or tuple | | param b: | Right hand side vector to solve | | type b: | np.ndarray | | param trans: | `0` or `1` for either solution option. | | type trans: | int | | returns: | Solution to the system. | | rtype: | np.ndarray | ####### mgs_ortho[¶](#module-sharpy.rom.utils.krylovutils.mgs_ortho) Modified Gram-Schmidt Orthogonalisation Orthogonalises input matrix \(\mathbf{X}\) column by column. | param X: | Input matrix of dimensions \(n\) by \(m\). | | type X: | np.ndarray | | returns: | Orthogonalised matrix of dimensions \(n\) by \(m\). | | rtype: | np.ndarray | Notes This method is faster than scipy’s `scipy.linalg.qr()` method that returns an orthogonal matrix as part of the QR decomposition, albeit at a higher number of function calls. 
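The modified Gram-Schmidt procedure described above can be sketched in plain numpy as follows (an illustration, not the library's exact routine):

```python
import numpy as np

def mgs_ortho(X):
    """Orthonormalise the columns of X with modified Gram-Schmidt."""
    Q = np.array(X, dtype=float, copy=True)
    n, m = Q.shape
    for i in range(m):
        Q[:, i] /= np.linalg.norm(Q[:, i])
        # Remove the component along column i from the remaining columns
        for j in range(i + 1, m):
            Q[:, j] -= (Q[:, i] @ Q[:, j]) * Q[:, i]
    return Q

Q = mgs_ortho(np.array([[1.0, 1.0], [0.0, 1.0], [1.0, 0.0]]))
```

The modified variant subtracts each projection from the already partially orthogonalised columns, which is numerically more stable than classical Gram-Schmidt.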
####### remove_a12[¶](#module-sharpy.rom.utils.krylovutils.remove_a12) Basis change to remove the (1, 2) block of the block-ordered real Schur matrix \(\mathbf{A}\) Being \(\mathbf{A}_s\in\mathbb{R}^{m\times m}\) a matrix of the form \[\begin{split}\mathbf{A}_s = \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix}\end{split}\] the (1,2) block is removed by solving the Sylvester equation \[\mathbf{A}_{11}\mathbf{X} - \mathbf{X}\mathbf{A}_{22} + \mathbf{A}_{12} = 0\] used to build the change of basis \[\begin{split}\mathbf{T} = \begin{bmatrix} \mathbf{I}_{s,s} & -\mathbf{X}_{s,u} \\ \mathbf{0}_{u, s} & \mathbf{I}_{u,u} \end{bmatrix}\end{split}\] where \(s\) and \(u\) are the respective number of stable and unstable eigenvalues, such that \[\begin{split}\mathbf{TA}_s\mathbf{T}^\top = \begin{bmatrix} A_{11} & \mathbf{0} \\ 0 & A_{22} \end{bmatrix}.\end{split}\] | param As: | Block-ordered real Schur matrix (can be built using [`sharpy.rom.utils.krylovutils.schur_ordered()`](index.html#module-sharpy.rom.utils.krylovutils.schur_ordered)). | | type As: | np.ndarray | | param n_stable: | Number of stable eigenvalues in `As`. | | type n_stable: | int | | returns: | Basis transformation \(\mathbf{T}\in\mathbb{R}^{m\times m}\). | | rtype: | np.ndarray | References <NAME>., <NAME>.. Implicitly Restarted Krylov Subspace Methods for Stable Partial Realizations SIAM Journal of Matrix Analysis and Applications, 1997. ####### schur_ordered[¶](#module-sharpy.rom.utils.krylovutils.schur_ordered) Returns block ordered complex Schur form of matrix \(\mathbf{A}\) \[\begin{split}\mathbf{TAT}^H = \mathbf{A}_s = \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix}\end{split}\] where \(A_{11}\in\mathbb{C}^{s\times s}\) contains the \(s\) stable eigenvalues of \(\mathbf{A}\in\mathbb{R}^{m\times m}\). | param A: | Matrix to decompose. | | type A: | np.ndarray | | param ct: | Continuous time system. 
| | type ct: | bool | | returns: | Tuple containing the Schur decomposition of \(\mathbf{A}\), \(\mathbf{A}_s\); the transformation \(\mathbf{T}\in\mathbb{C}^{m\times m}\); and the number of stable eigenvalues of \(\mathbf{A}\). | | rtype: | tuple | Notes This function is a wrapper of `scipy.linalg.schur` imposing the settings required for this application. ###### General ROM utilities[¶](#general-rom-utilities) S. Maraniello, 14 Feb 2018 ####### balfreq[¶](#module-sharpy.rom.utils.librom.balfreq) Method for frequency limited balancing. The observability and controllability Gramians over the frequencies kv are solved in factorised form. Balanced modes are then obtained with a square-root method. Details: > * Observability and controllability Gramians are solved in factorised form > through explicit integration. The number of integration points determines > both the accuracy and the maximum size of the balanced model. > * Stability over all (Nb) balanced states is achieved if: > > > > 1. one of the Gramians is integrated through the full Nyquist range > > 2. enough integration points are used. > > Input: * DictBalFreq: dictionary specifying the integration method with keys: > + `frequency`: defines limit frequencies for balancing. The balanced > model will be accurate in the range `[0,F]`, where `F` is the value of > this key. Note that `F` units must be consistent with the units specified > in the `self.ScalingFacts` dictionary. > + `method_low`: `['gauss','trapz']` specifies whether to use Gauss > quadrature or the trapezoidal rule in the low-frequency range `[0,F]`. > + `options_low`: options to use for integration in the low frequencies. > These depend on the integration scheme (see below). > + `method_high`: method to use for integration in the range [F,F_N], > where F_N is the Nyquist frequency. See `method_low`. > + `options_high`: options to use for integration in the high frequencies.
> + `check_stability`: if True, the balanced model is truncated to > eliminate unstable modes - if any are found. Note that very accurate > balanced models can still be obtained, even if high order modes are > unstable. Note that this option is overridden if “” > + `get_frequency_response`: if True, the function also returns the > frequency response evaluated at the low-frequency range integration > points. If True, this option also allows the balanced model to be tuned > automatically. Future options: * Ncpu: for parallel run The following integration schemes are available: * `trapz`: performs integration over equally spaced points using the trapezoidal rule. It accepts options dictionaries with keys: > + `points`: number of integration points to use (including > domain boundary) > * `gauss` performs Gauss-Lobatto quadrature. The domain can be partitioned into Npart sub-domains in which a Gauss-Lobatto quadrature of order Ord can be applied. A total number of Npart*Ord points is required. It accepts options dictionaries of the form: > + `partitions`: number of partitions > + `order`: quadrature order. Examples The following dictionary ``` >>> DictBalFreq={'frequency': 1.2, >>> 'method_low': 'trapz', >>> 'options_low': {'points': 12}, >>> 'method_high': 'gauss', >>> 'options_high': {'partitions': 2, 'order': 8}, >>> 'check_stability': True } ``` balances the state-space model in the frequency range [0, 1.2] using: > 1. 12 equally-spaced points integration of the Gramians in > the low-frequency range [0,1.2] and > 2. two Gauss-Lobatto quadratures of order 8 of the controllability > Gramian in the high-frequency range. A total number of 28 integration points will be required, which will result in a balanced model with number of states ``` >>> min{ 2*28* number_inputs, 2*28* number_outputs } ``` The model is finally truncated so as to retain only the first Ns stable modes.
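The factorised Gramian integration underlying this method can be sketched as follows. This is a simplified illustration for a continuous-time controllability Gramian over a low-frequency trapezoidal grid, with made-up names and scaling factors omitted:

```python
import numpy as np

def gramian_factor(A, B, kv, wv):
    """Accumulate a low-rank factor Q such that Wc ~ Q @ Q.conj().T,
    where Wc is the frequency-limited controllability Gramian
    integrated over the points kv with quadrature weights wv."""
    n, m = B.shape
    cols = []
    for k, w in zip(kv, wv):
        # frequency response factor (1j*k*I - A)^{-1} B, scaled by the weight
        cols.append(np.sqrt(w) * np.linalg.solve(1j * k * np.eye(n) - A, B))
    return np.hstack(cols)

A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
kv = np.linspace(0.0, 1.2, 12)            # 12 trapezoidal points in [0, F]
wv = np.full(12, 1.2 / 11)
wv[[0, -1]] *= 0.5                        # boundary points carry half weight
Q = gramian_factor(A, B, kv, wv)
Wc = Q @ Q.conj().T
```

The number of columns of `Q` (integration points times inputs) bounds the size of the balanced model, which is why the number of integration points limits the maximum ROM order.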
####### balreal_direct_py[¶](#module-sharpy.rom.utils.librom.balreal_direct_py) Find a balanced realisation of continuous (`DLTI = False`) and discrete (`DLTI = True`) time LTI systems using scipy libraries. The function achieves a balanced realisation of the state-space system by first solving the Lyapunov equations \[\begin{split}\mathbf{A\,W_c + W_c\,A^T + B\,B^T} &= 0 \\ \mathbf{A^T\,W_o + W_o\,A + C^T\,C} &= 0\end{split}\] to obtain the reachability and observability Gramians, which are positive definite matrices. Then, the Gramians are decomposed into their Cholesky factors such that: \[\begin{split}\mathbf{W_c} &= \mathbf{Q_c\,Q_c^T} \\ \mathbf{W_o} &= \mathbf{Q_o\,Q_o^T}\end{split}\] A singular value decomposition (SVD) of the product of the Cholesky factors is performed \[(\mathbf{Q_o^T\,Q_c}) = \mathbf{U\,\Sigma\,V^*}\] The singular values are then used to build the transformation matrix \(\mathbf{T}\) \[\begin{split}\mathbf{T} &= \mathbf{Q_c\,V\,\Sigma}^{-1/2} \\ \mathbf{T}^{-1} &= \mathbf{\Sigma}^{-1/2}\,\mathbf{U^T\,Q_o^T}\end{split}\] The balanced system is therefore of the form: \[\begin{split}\mathbf{A_b} &= \mathbf{T\,A\,T^{-1}} \\ \mathbf{B_b} &= \mathbf{T\,B} \\ \mathbf{C_b} &= \mathbf{C\,T^{-1}} \\ \mathbf{D_b} &= \mathbf{D}\end{split}\] Warning This function may be less computationally efficient than the `balreal` Matlab implementation and does not offer the option to bound the realisation in frequency and time. Notes The Lyapunov equations are solved using the Bartels-Stewart algorithm for the Sylvester equation, which is based on a Schur decomposition of the A matrix.
| param A: | Plant Matrix | | type A: | np.ndarray | | param B: | Input Matrix | | type B: | np.ndarray | | param C: | Output Matrix | | type C: | np.ndarray | | param DLTI: | Discrete time state-space flag | | type DLTI: | bool | | param Schur: | Use Schur decomposition to solve the Lyapunov equations | | type Schur: | bool | | returns: | Tuple of the form `(S, T, Tinv)` containing: * Singular values in diagonal matrix (`S`) * Transformation matrix (`T`). * Inverse transformation matrix (`Tinv`). | | rtype: | tuple of np.ndarrays | References Antoulas, A.C.. Approximation of Large Scale Dynamical Systems. Chapter 7. Advances in Design and Control. SIAM. 2005. ####### balreal_iter[¶](#module-sharpy.rom.utils.librom.balreal_iter) Find a balanced realisation of a DLTI system. Notes The Lyapunov equations are solved using the iterative squared Smith algorithm, in its low or full rank version. These implementations are as per the low_rank_smith and smith_iter functions respectively but, for computational efficiency, the iterations are rewritten here so as to solve for the observability and controllability Gramians simultaneously. * Exploiting sparsity: > This algorithm is not ideal to exploit sparsity. However, the following > strategies are implemented: > > > > + if the A matrix is provided in sparse format, the powers of A will be > > calculated exploiting sparsity UNTIL the number of non-zero elements > > is below 15% of the size of A. Beyond this threshold, the cost of the matrix > > multiplication rises dramatically, and A is hence converted to a dense > > numpy array. > ####### balreal_iter_old[¶](#module-sharpy.rom.utils.librom.balreal_iter_old) Find a balanced realisation of a DLTI system. Notes: The Lyapunov equations are solved using the iterative squared Smith algorithm, in its low or full rank version.
These implementations are as per the low_rank_smith and smith_iter functions respectively but, for computational efficiency, the iterations are rewritten here so as to solve for the observability and controllability Gramians simultaneously. ####### check_stability[¶](#module-sharpy.rom.utils.librom.check_stability) Checks the stability of the system. | param A: | System plant matrix | | type A: | np.ndarray | | param dt: | Discrete time system | | type dt: | bool | | returns: | True if the system is stable | | rtype: | bool | ####### eigen_dec[¶](#module-sharpy.rom.utils.librom.eigen_dec) Eigen decomposition of a state-space model (either discrete or continuous time) defined by the A,B,C matrices. Eigen-states are organised in decreasing damping order or increasing frequency order such that the truncation > `A[:N,:N], B[:N,:], C[:,:N]` will retain the N least damped (or lowest frequency) modes. If the eigenvalues of A, eigs, are complex, the state-space is automatically converted into real form by separating the real and imaginary parts. This procedure retains the minimal number of states, as only 2 equations are added for each pair of complex conjugate eigenvalues. Extra care is however required when truncating the system, so as to ensure that the chosen value of N does not retain the real part of a complex pair while discarding its imaginary part. For this reason, the function also returns an optional output, `Nlist`, such that, for each N in Nlist, the truncation > A[:N,:N], B[:N,:], C[:,:N] does guarantee that both the real and imaginary parts of a complex conjugate pair are included in the truncated model. Note that if `order_by == None`, the eigs and UR must be given as input and must be such that complex pairs are stored consecutively. | param A: | state-space matrix | | param B: | state-space matrix | | param C: | matrices of state-space model | | param dlti: | specifies whether discrete (True) or continuous-time.
This information is only required to order the eigenvalues in decreasing damping order | | param N: | number of states to retain. If None, all states are retained | | param eigs,Ur: | eigenvalues and right eigenvectors of the A matrix as given by: eigs,Ur=scipy.linalg.eig(A,b=None,left=False,right=True) | | param Urinv: | inverse of Ur | | param order_by: | one of `{'damp','freq','stab'}`: order according to increasing damping (damp), decreasing frequency (freq) or decreasing damping (stab). If None, the same order as eigs/UR is followed. | | param tol: | absolute tolerance used to identify a complex conjugate pair of eigenvalues | | param complex: | if true, the system is left in complex form | Returns: (Aproj,Bproj,Cproj): state-space matrices projected over the first N (or N+1 > if N removes the imaginary part equations of a complex conjugate pair of > eigenvalues) related to the least damped modes Nlist: list of acceptable truncation values ####### get_gauss_weights[¶](#module-sharpy.rom.utils.librom.get_gauss_weights) Returns a Gauss-Legendre frequency grid (kv of length Npart*order) and weights (wv) for Gramians integration. The integration grid is divided into Npart partitions, and in each of them integration is performed using a Gauss-Legendre quadrature of order order. Note: integration points are never located at k0 or kend, hence there is no need for special treatment as in (e.g.) a uniform grid case (see get_unif_weights) ####### get_trapz_weights[¶](#module-sharpy.rom.utils.librom.get_trapz_weights) Returns a uniform frequency grid (kv of length Nk) and weights (wv) for Gramians integration using the trapezoidal rule. If knyq is True, it is assumed that kend is also the Nyquist frequency.
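A sketch of such a uniform grid and its trapezoidal weights (illustrative, not the library's exact function signature):

```python
import numpy as np

def get_trapz_weights(k0, kend, Nk):
    """Uniform frequency grid kv and trapezoidal-rule weights wv so that
    an integral over [k0, kend] is approximated by wv @ f(kv)."""
    kv = np.linspace(k0, kend, Nk)
    dk = (kend - k0) / (Nk - 1)
    wv = np.full(Nk, dk)
    wv[0] *= 0.5
    wv[-1] *= 0.5      # boundary points carry half weight
    return kv, wv

kv, wv = get_trapz_weights(0.0, 2.0, 21)
```

The weights sum to the interval length, and the rule is exact for linear integrands.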
####### low_rank_smith[¶](#module-sharpy.rom.utils.librom.low_rank_smith) Low-rank Smith algorithm for the Stein equation A.T X A - X = -Q Q.T The algorithm can only be used if T is symmetric positive-definite, but this is not checked in this routine for computational performance. The solution X is provided in its factorised form: > X=Z Z.T As in the most general case, a solution X exists only if the eigenvalues of S are strictly smaller than one, and the algorithm will not converge otherwise. The algorithm cannot exploit sparsity; hence, while convergence can be improved for very large matrices, it cannot be employed if the matrices are too large to be stored in memory. Parameters: - tol: tolerance for stopping convergence of the Smith algorithm - Square: if true the squared-Smith algorithm is used - tolSVD: tolerance for reducing the Z matrix based on singular values - kmax: if given, the Z matrix is forced to have size kmax - tolAbs: if True, the tolerance - fullOut: not implemented - Convergence: ‘Zk’,’res’. > * If ‘Zk’ the iteration is stopped when the inf norm of the incremental > matrix goes below tol. > * If ‘res’ the residual of the Lyapunov equation is computed. This > strategy may fail to converge if kmax is too low or tolSVD too large! Ref. <NAME>, <NAME> and <NAME>, “On the squared Smith method for large-scale Stein equations”, 2014. ####### modred[¶](#module-sharpy.rom.utils.librom.modred) Produces a reduced order model with N states from a balanced or modal system SSb. Both “truncation” and “residualisation” methods are employed. Note: - this method is designed for small size systems, i.e. a deep copy of SSb is produced by default. ####### res_discrete_lyap[¶](#module-sharpy.rom.utils.librom.res_discrete_lyap) Provides the residual of the discrete Lyapunov equation: A.T X A - X = -Q Q.T If the Factorised option is true, X=Z*Z.T otherwise X=Z is chosen.
Reminder: contr: A W A.T - W = - B B.T obser: A.T W A - W = - C.T C ####### smith_iter[¶](#module-sharpy.rom.utils.librom.smith_iter) Solves the Stein equation S.T X S - X = -T by means of the Smith or squared-Smith algorithm. Note that a solution X exists only if the eigenvalues of S are strictly smaller than one, and the algorithm will not converge otherwise. The algorithm cannot exploit sparsity; hence, while convergence can be improved for very large matrices, it cannot be employed if the matrices are too large to be stored in memory. Ref. Penzl, “A cyclic low-rank Smith method for large sparse Lyapunov equations”, 2000. ####### tune_rom[¶](#module-sharpy.rom.utils.librom.tune_rom) Starting from a balanced DLTI, this function determines the number of states N required in a ROM (obtained either through ‘residualisation’ or ‘truncation’ as specified in method - see also librom.modred) to match the frequency response of SSb over the frequency array, kv, with absolute accuracy tol. gv contains the balanced system Hankel singular values, and is used to determine the upper bound for the ROM order N. Unless kv covers the full Nyquist frequency range, the ROM accuracy is not guaranteed to increase monotonically with the number of states. To account for this, two criteria can be used to determine the ROM convergence: > * convergence=’all’: in this case, the number of ROM states N is chosen > such that any ROM of order greater than N produces an error smaller than > tol. To guarantee this the ROM frequency response is computed for all > N<=Nb, where Nb is the number of balanced states. This method is > numerically inefficient. > * convergence=’min’: attempts to find the minimal number of states to > achieve the accuracy tol. Note: - the input state-space model, SSb, must be balanced. - the routine is not implemented for numerical efficiency and assumes that SSb is small.
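The squared-Smith iteration used by the routines above can be sketched as below. This is an illustration under the assumption that the spectral radius of S is below one, without the low-rank factorisation or sparsity handling of the library versions:

```python
import numpy as np

def smith_iter(S, T, tol=1e-12, maxiter=100):
    """Solve S.T @ X @ S - X = -T by the squared-Smith iteration:
    X accumulates sum_k (S.T)^k T S^k while S is squared each step."""
    X, P = T.copy(), S.copy()
    for _ in range(maxiter):
        dX = P.T @ X @ P
        X = X + dX
        P = P @ P          # squaring doubles the number of series terms kept
        if np.linalg.norm(dX, np.inf) < tol:
            break
    return X

S = np.array([[0.5, 0.1], [0.0, 0.4]])
T = np.eye(2)
X = smith_iter(S, T)
```

Each squaring doubles the number of terms of the series solution retained, which is why the method converges in very few iterations for strictly stable S.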
###### Methods for the interpolation of DLTI ROMs[¶](#methods-for-the-interpolation-of-dlti-roms) This is a library for state-space model interpolation. These routines are intended for small size state-space models (ROMs), hence some methods may not be optimised to exploit sparsity structures. For generality purposes, all methods require interpolatory weights as input. The module includes the methods: > * [`transfer_function()`](index.html#module-sharpy.rom.utils.librom_interp.transfer_function): returns an interpolatory state-space model based on the > transfer function method [1]. This method is general and is, effectively, a > wrapper of the [`sharpy.linear.src.libss.join()`](index.html#module-sharpy.linear.src.libss.join) method. > * `BT_transfer_function()`: evolution of transfer function methods. The growth of > the interpolated system size is avoided through balancing. References: > [1] <NAME>., <NAME>. & <NAME>., 2015. A Survey of Projection-Based > Model Reduction Methods for Parametric Dynamical Systems. SIAM Review, 57(4), > pp.483–531. Author: <NAME> Date: Mar-Apr 2019 ####### FLB_transfer_function[¶](#module-sharpy.rom.utils.librom_interp.FLB_transfer_function) Returns an interpolatory state-space model based on the transfer function method [1]. This method is applicable to frequency limited balanced state-space models only. Features: > * stability preserved > * the interpolated state-space model has the same size as the tabulated ones > * all state-space models need to have the same size and the same number of > Hankel singular values. > * suitable for any ROM | param SS_list: | List of state-space models instances of [`sharpy.linear.src.libss.ss`](index.html#sharpy.linear.src.libss.ss) class. | | type SS_list: | list | | param wv: | list of interpolatory weights. | | type wv: | list | | param U_list: | small size, thin SVD factors of Gramians square roots of each state space model (\(\mathbf{U}\)).
| | type U_list: | list | | param VT_list: | small size, thin SVD factors of Gramians square roots of each state space model (\(\mathbf{V}^\top\)). | | type VT_list: | list | | param hsv_list: | small size, thin SVD factors of Gramians square roots of each state space model. If `None`, it is assumed that `U_list = [ U_i sqrt(hsv_i) ]` `VT_list = [ sqrt(hsv_i) V_i.T ]` where `U_i` and `V_i.T` are square matrices and hsv is an array. | | type hsv_list: | list | | param M_list: | for fast on-line evaluation. Small size product of Gramians factors of each state-space model. Each element of this list is equal to: `M_i = U_i hsv_i V_i.T` | | type M_list: | list | Notes Message for future generations: > * the implementation is divided into an offline and online part. References: <NAME>. and <NAME>., Frequency-limited balanced truncation for parametric reduced-order modelling of the UVLM. Only in the best theaters. See also Frequency-Limited Balanced ROMs may be obtained from SHARPy using [`sharpy.rom.balanced.FrequencyLimited`](index.html#sharpy.rom.balanced.FrequencyLimited). ####### InterpROM[¶](#interprom) *class* `sharpy.rom.utils.librom_interp.``InterpROM`(*SS*, *VV=None*, *WWT=None*, *Vref=None*, *WTref=None*, *method_proj=None*)[[source]](_modules/sharpy/rom/utils/librom_interp.html#InterpROM)[¶](#sharpy.rom.utils.librom_interp.InterpROM) State-space 1D interpolation class. This class allows interpolating from a list of state-space models, SS. State-space models are required to have the same number of inputs and outputs and need to have the same number of states. For state-space interpolation, state-space models also need to be defined over the same set of generalised coordinates. If this is not the case, the projection matrices W and V used to produce the ROMs, ie \[\mathbf{A}_{proj} = \mathbf{W}^\top \mathbf{A V}\] where A is the full-states matrix, also need to be provided. 
This will allow projecting the state-space models onto a common set of generalised coordinates before interpolating. For development purposes, the method currently creates a hard copy of the projected matrices into the self.AA, self.BB, self.CC lists Inputs: * SS: list of state-space models (instances of libss.ss class) * VV: list of V matrices used to produce SS. If None, it is assumed that ROMs are defined over the same basis * WWT: list of W^T matrices used to derive the ROMs. * Vref, WTref: reference subspaces for projection. Some methods neglect this input (e.g. panzer) * method_proj: method for projection of state-space models over common coordinates. Available options are: > + leastsq: find left/right projectors using least squares approx. Suitable > for all bases. > + strongMAC: strong Modal Assurance Criterion [4] enforcement for general > bases. See Ref. [3], Eq. (7) > + strongMAC_BT: strong Modal Assurance Criterion [4] enforcement for > bases obtained by Balanced Truncation. Equivalent to strongMAC > + maraniello_BT: this is equivalent to strongMAC and strongMAC_BT but > avoids inversions. However, performance is the same as other strongMAC > approaches - it works only when the bases map the same subspaces > + weakMAC_right_orth: weak MAC enforcement [1,3] for state-space models > with right orthonormal bases, i.e. V.T V = I. This is like Ref. [1], but > implemented only on one side. > + weakMAC: implementation of weak MAC enforcement for a general system. > The method orthonormalises the right basis (V) and then solves the > orthogonal Procrustes problem. > + for orthonormal bases (V.T V = I): !!! These methods are not tested !!! > > > > - panzer: produces a new reference point based on svd [2] > > - amsallem: project over Vref,WTref [1] > > References: [1] <NAME> and <NAME>, An online method for interpolating linear parametric reduced-order models, SIAM J. Sci. Comput., 33 (2011), pp. 2169–2198.
[2] Panzer, <NAME>, <NAME>, and <NAME>, Parametric model order reduction by matrix interpolation, at–Automatisierungstechnik, 58 (2010), pp. 475–484. [3] <NAME>., <NAME>. & <NAME>., 2004. Riemannian Geometry of Grassmann Manifolds with a View on Algorithmic Computation. Acta Applicandae Mathematicae, 80(2), pp.199–220. [4] <NAME>., <NAME>. & <NAME>., 2013. On parametric model order reduction by matrix interpolation. 2013 European Control Conference (ECC), pp.3433–3438. `project`()[[source]](_modules/sharpy/rom/utils/librom_interp.html#InterpROM.project)[¶](#sharpy.rom.utils.librom_interp.InterpROM.project) Project the state-space models onto the generalised coordinates of state-space model IImap ####### transfer_function[¶](#module-sharpy.rom.utils.librom_interp.transfer_function) Returns an interpolatory state-space model based on the transfer function method [1]. This method is general and is, effectively, a wrapper of the [`sharpy.linear.src.libss.join()`](index.html#module-sharpy.linear.src.libss.join) method. Features: > * stability preserved > * system size increases with interpolatory order, but can be optimised for > fast on-line evaluation | param SS_list: | List of state-space models instances of [`sharpy.linear.src.libss.ss`](index.html#sharpy.linear.src.libss.ss) class. | | type SS_list: | list | | param wv: | list of interpolatory weights. | | type wv: | list | Notes For fast online evaluation, this routine can be optimised to return a class that handles each state-space model independently. See ref. [1] for more details. References [1] <NAME>., <NAME>. & <NAME>., 2015. A Survey of Projection-Based Model Reduction Methods for Parametric Dynamical Systems. SIAM Review, 57(4), pp.483–531. 
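The transfer-function method amounts to a weighted parallel connection of the tabulated models, so that the interpolated response is the weighted sum of the individual responses. A standalone sketch in plain numpy (not the `libss.join` wrapper itself; model tuples and names are made up for the example):

```python
import numpy as np
from scipy.linalg import block_diag

def transfer_function_interp(systems, wv):
    """Join state-space models (A, B, C, D) with weights wv so that
    H(s) = sum_i wv[i] * H_i(s). The joined model size is the sum of
    the individual model sizes."""
    A = block_diag(*[sys[0] for sys in systems])
    B = np.vstack([sys[1] for sys in systems])
    C = np.hstack([w * sys[2] for w, sys in zip(wv, systems)])
    D = sum(w * sys[3] for w, sys in zip(wv, systems))
    return A, B, C, D

def freqresp(A, B, C, D, s):
    # Evaluate H(s) = C (sI - A)^{-1} B + D
    return C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B) + D

sys1 = (np.array([[-1.0]]), np.array([[1.0]]), np.array([[2.0]]), np.array([[0.0]]))
sys2 = (np.array([[-3.0]]), np.array([[1.0]]), np.array([[1.0]]), np.array([[0.5]]))
A, B, C, D = transfer_function_interp([sys1, sys2], [0.3, 0.7])
```

This illustrates the trade-off noted above: the construction is exact and stability-preserving (the joined A is block diagonal), but the system size grows with the number of interpolated models.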
#### Structural Packages[¶](#structural-packages) ##### Models[¶](#models) ###### Element[¶](#element) *class* `sharpy.structure.models.beamstructures.``Element`(*ielem*, *n_nodes*, *global_connectivities*, *coordinates*, *frame_of_reference_delta*, *structural_twist*, *num_mem*, *stiff_index*, *mass_index*)[[source]](_modules/sharpy/structure/models/beamstructures.html#Element)[¶](#sharpy.structure.models.beamstructures.Element) This class stores all the required data for the definition of a linear or quadratic beam element. `get_triad`()[[source]](_modules/sharpy/structure/models/beamstructures.html#Element.get_triad)[¶](#sharpy.structure.models.beamstructures.Element.get_triad) Generates two unit vectors in the body FoR that define the local FoR for a beam element. These vectors are calculated using frame_of_reference_delta :return: ##### Utils[¶](#utils) ###### LagrangeConstraints library[¶](#lagrangeconstraints-library) LagrangeConstraints library Library used to create the matrices associated with boundary conditions through the method of Lagrange Multipliers Args: Returns: Examples: To use this library: import sharpy.structure.utils.lagrangeconstraints as lagrangeconstraints Notes: ####### define_FoR_dof[¶](#module-sharpy.structure.utils.lagrangeconstraints.define_FoR_dof) define_FoR_dof Define the position of the first degree of freedom associated with a certain frame of reference | param MB_beam: | list of ‘Beam’ | | type MB_beam: | list | | param node_body: | | | body to which the node belongs | | type node_body: | int | | param num_node: | number of the node within the body | | type num_node: | int | | returns: | first degree of freedom associated with the node | | rtype: | node_dof(int) | Examples: Notes: ####### define_node_dof[¶](#module-sharpy.structure.utils.lagrangeconstraints.define_node_dof) define_node_dof Define the position of the first degree of freedom associated with a certain node | param MB_beam: | list of ‘Beam’ | | type MB_beam: | list | | param
node_body: | | | body to which the node belongs | | type node_body: | int | | param num_node: | number of the node within the body | | type num_node: | int | | returns: | first degree of freedom associated with the node | | rtype: | node_dof(int) | Examples: Notes: ####### define_num_LM_eq[¶](#module-sharpy.structure.utils.lagrangeconstraints.define_num_LM_eq) define_num_LM_eq Define the number of equations needed to define the boundary conditions | param lc_list(): | | --- | | | list of all the defined constraints | | returns: | number of new equations needed to define the boundary conditions | | rtype: | num_LM_eq(int) | Examples num_LM_eq = lagrangeconstraints.define_num_LM_eq(lc_list) Notes: ####### generate_lagrange_matrix[¶](#module-sharpy.structure.utils.lagrangeconstraints.generate_lagrange_matrix) generate_lagrange_matrix Generates the matrices associated with the Lagrange multipliers boundary conditions | param lc_list(): | | --- | | | list of all the defined constraints | | param MBdict: | dictionary with the MultiBody and LagrangeMultipliers information | | type MBdict: | MBdict | | param MB_beam: | list of ‘beams’ of each of the bodies that form the system | | type MB_beam: | list | | param MB_tstep: | list of ‘StructTimeStepInfo’ of each of the bodies that form the system | | type MB_tstep: | list | | param num_LM_eq: | | | number of new equations needed to define the boundary conditions | | type num_LM_eq: | int | | param sys_size: | total number of degrees of freedom of the multibody system | | type sys_size: | int | | param dt: | time step | | type dt: | float | | param Lambda: | list of Lagrange multiplier values | | type Lambda: | numpy array | | param Lambda_dot: | | | list of the first derivative of the Lagrange multiplier values | | type Lambda_dot: | | | numpy array | | param dynamic_or_static: | | | string defining if the computation is dynamic or static | | type dynamic_or_static: | | | str | | returns: | Damping
matrix associated with the Lagrange Multipliers equations LM_K (numpy array): Stiffness matrix associated with the Lagrange Multipliers equations LM_Q (numpy array): Vector of independent terms associated with the Lagrange Multipliers equations | | rtype: | LM_C (numpy array) | Examples: Notes: ###### get_mode_zeta[¶](#module-sharpy.structure.utils.modalutils.get_mode_zeta) Retrieves the UVLM grid nodal displacements associated with the eigenvector `eigvect` ###### scale_mode[¶](#module-sharpy.structure.utils.modalutils.scale_mode) Scales the eigenvector such that: 1. the maximum change in component of the beam cartesian rotation vector is equal to rot_max_deg degrees. 2. the maximum translational displacement does not exceed perc_max of the maximum nodal position. Warning If the eigenvector is in state-space form, only the first half of the eigenvector is scanned for determining the scaling. ###### write_modes_vtk[¶](#module-sharpy.structure.utils.modalutils.write_modes_vtk) Writes a vtk file for each of the first `NumLambda` eigenvectors. When these are associated with the state-space form of the structural equations, only the displacement field is saved. ###### write_zeta_vtk[¶](#module-sharpy.structure.utils.modalutils.write_zeta_vtk) Given a list of arrays representing the coordinates of a set of n_surf UVLM lattices and organised as: > zeta[n_surf][3,M+1,N+1] this function writes a vtk for each of the n_surf surfaces. Input: * zeta: lattice coordinates to plot * zeta_ref: reference lattice used to compute the magnitude of displacements * filename_root: initial part of filename (full path) without file extension (.vtk) ###### Xbopts[¶](#xbopts) `sharpy.structure.utils.xbeamlib.``Xbopts`[¶](#sharpy.structure.utils.xbeamlib.Xbopts) ###### cbeam3_asbly_dynamic[¶](#module-sharpy.structure.utils.xbeamlib.cbeam3_asbly_dynamic) Used by autodoc_mock_imports. ###### cbeam3_asbly_static[¶](#module-sharpy.structure.utils.xbeamlib.cbeam3_asbly_static) Used by autodoc_mock_imports.
###### cbeam3_correct_gravity_forces[¶](#module-sharpy.structure.utils.xbeamlib.cbeam3_correct_gravity_forces) Used by autodoc_mock_imports. ###### cbeam3_loads[¶](#module-sharpy.structure.utils.xbeamlib.cbeam3_loads) Used by autodoc_mock_imports. ###### cbeam3_solv_modal[¶](#module-sharpy.structure.utils.xbeamlib.cbeam3_solv_modal) Used by autodoc_mock_imports. ###### cbeam3_solv_nlnstatic[¶](#module-sharpy.structure.utils.xbeamlib.cbeam3_solv_nlnstatic) Used by autodoc_mock_imports. ###### xbeam3_asbly_dynamic[¶](#module-sharpy.structure.utils.xbeamlib.xbeam3_asbly_dynamic) Used by autodoc_mock_imports. #### Utilities[¶](#utilities) ##### Algebra package[¶](#algebra-package) Algebra package Extensive library with geometrical and algebraic operations Note: Tests can be found in `tests/utils/algebra_test` ###### cross3[¶](#module-sharpy.utils.algebra.cross3) Computes the cross product of two vectors (v and w) with size 3 ###### crv2quat[¶](#module-sharpy.utils.algebra.crv2quat) Converts a Cartesian rotation vector, > \[\vec{\psi} = \psi\,\mathbf{\hat{n}}\] into a “minimal rotation” quaternion, i.e. being the quaternion, \(\vec{\chi}\), defined as: > \[\vec{\chi}= > \left[\cos\left(\frac{\psi}{2}\right),\, > \sin\left(\frac{\psi}{2}\right)\mathbf{\hat{n}}\right]\] the rotation axis, \(\mathbf{\hat{n}}\) is such that the rotation angle, \(\psi\), is in \([-\pi,\,\pi]\) or, equivalently, \(\chi_0\ge0\). | param psi: | Cartesian Rotation Vector, CRV: \(\vec{\psi} = \psi\,\mathbf{\hat{n}}\). | | type psi: | np.array | | returns: | equivalent quaternion \(\vec{\chi}\) | | rtype: | np.array | ###### crv2rotation[¶](#module-sharpy.utils.algebra.crv2rotation) Given a Cartesian rotation vector, \(\boldsymbol{\Psi}\), the function produces the rotation matrix required to rotate a vector according to \(\boldsymbol{\Psi}\). 
The rotation matrix is given by \[\mathbf{R} = \mathbf{I} + \frac{\sin||\boldsymbol{\Psi}||}{||\boldsymbol{\Psi}||} \tilde{\boldsymbol{\Psi}} + \frac{1-\cos{||\boldsymbol{\Psi}||}}{||\boldsymbol{\Psi}||^2}\tilde{\boldsymbol{\Psi}} \tilde{\boldsymbol{\Psi}}\] To avoid the singularity when \(||\boldsymbol{\Psi}||=0\), the series expansion is used \[\mathbf{R} = \mathbf{I} + \tilde{\boldsymbol{\Psi}} + \frac{1}{2!}\tilde{\boldsymbol{\Psi}}^2.\] | param psi: | Cartesian rotation vector \(\boldsymbol{\Psi}\). | | type psi: | np.array | | returns: | equivalent rotation matrix | | rtype: | np.array | References Geradin and Cardona, Flexible Multibody Dynamics: A finite element approach. Chapter 4 ###### crv2tan[¶](#module-sharpy.utils.algebra.crv2tan) Returns the tangential operator, \(\mathbf{T}(\boldsymbol{\Psi})\), that is a function of the Cartesian Rotation Vector, \(\boldsymbol{\Psi}\). \[\boldsymbol{T}(\boldsymbol{\Psi}) = \mathbf{I} + \left(\frac{\cos ||\boldsymbol{\Psi}|| - 1}{||\boldsymbol{\Psi}||^2}\right)\tilde{\boldsymbol{\Psi}} + \left(1 - \frac{\sin||\boldsymbol{\Psi}||}{||\boldsymbol{\Psi}||}\right) \frac{\tilde{\boldsymbol{\Psi}}\tilde{\boldsymbol{\Psi}}}{||\boldsymbol{\Psi}||^2}\] When the norm of the CRV approaches 0, the series expansion expression is used in-lieu of the above expression \[\boldsymbol{T}(\boldsymbol{\Psi}) = \mathbf{I} -\frac{1}{2!}\tilde{\boldsymbol{\Psi}} + \frac{1}{3!}\tilde{\boldsymbol{\Psi}}^2\] | param psi: | Cartesian Rotation Vector, \(\boldsymbol{\Psi}\). | | type psi: | np.array | | returns: | Tangential operator | | rtype: | np.array | References Geradin and Cardona. Flexible Multibody Dynamics: A Finite Element Approach. Chapter 4. ###### crv_bounds[¶](#module-sharpy.utils.algebra.crv_bounds) Forces the Cartesian rotation vector norm, \(\|\vec{\psi}\|\), to be in the range \([-\pi,\pi]\), i.e. determines the rotation axis orientation, \(\mathbf{\hat{n}}\), so as to ensure “minimal rotation”. 
| param crv_ini: | Cartesian rotation vector, \(\vec{\psi}\) | | type crv_ini: | np.array | | returns: | modified and bounded, equivalent Cartesian rotation vector | | rtype: | np.array | ###### der_CcrvT_by_v[¶](#module-sharpy.utils.algebra.der_CcrvT_by_v) Being C=C(fv0) the rotation matrix depending on the Cartesian rotation vector fv0 and defined as C=crv2rotation(fv0), the function returns the derivative, w.r.t. the CRV components, of the vector dot(C.T,v), where v is a constant vector. The elements of the resulting derivative matrix D are ordered such that: \[d(C.T*v) = D*d(fv0)\] where \(d(.)\) is a delta operator. ###### der_Ccrv_by_v[¶](#module-sharpy.utils.algebra.der_Ccrv_by_v) Being C=C(fv0) the rotational matrix depending on the Cartesian rotation vector fv0 and defined as C=crv2rotation(fv0), the function returns the derivative, w.r.t. the CRV components, of the vector dot(C,v), where v is a constant vector. The elements of the resulting derivative matrix D are ordered such that: \[d(C*v) = D*d(fv0)\] where \(d(.)\) is a delta operator. ###### der_Ceuler_by_v[¶](#module-sharpy.utils.algebra.der_Ceuler_by_v) Provides the derivative of the product between the rotation matrix \(C^{AG}(\mathbf{\Theta})\) and a constant vector, \(\mathbf{v}\), with respect to the Euler angles, \(\mathbf{\Theta}=[\phi,\theta,\psi]^T\): \[\frac{\partial}{\partial\Theta}(C^{AG}(\Theta)\mathbf{v}^G) = \frac{\partial \mathbf{f}}{\partial\mathbf{\Theta}}\] where \(\frac{\partial \mathbf{f}}{\partial\mathbf{\Theta}}\) is the resulting 3 by 3 matrix. 
Being \(C^{AG}(\Theta)\) the rotation matrix from the G frame to the A frame in terms of the Euler angles \(\Theta\) as: \[\begin{split}C^{AG}(\Theta) = \begin{bmatrix} \cos\theta\cos\psi & -\cos\theta\sin\psi & \sin\theta \\ \cos\phi\sin\psi + \sin\phi\sin\theta\cos\psi & \cos\phi\cos\psi - \sin\phi\sin\theta\sin\psi & -\sin\phi\cos\theta \\ \sin\phi\sin\psi - \cos\phi\sin\theta\cos\psi & \sin\phi\cos\psi + \cos\phi\sin\theta\sin\psi & \cos\phi\cos\theta \end{bmatrix}\end{split}\] the components of the derivative at hand are the following, where \(f_{1\theta} = \frac{\partial \mathbf{f}_1}{\partial\theta}\). \[\begin{split}f_{1\phi} =&0 \\ f_{1\theta} = &-v_1\sin\theta\cos\psi \\ &+v_2\sin\theta\sin\psi \\ &+v_3\cos\theta \\ f_{1\psi} = &-v_1\cos\theta\sin\psi \\ &- v_2\cos\theta\cos\psi\end{split}\] \[\begin{split}f_{2\phi} = &+v_1(-\sin\phi\sin\psi + \cos\phi\sin\theta\cos\psi) + \\ &+v_2(-\sin\phi\cos\psi - \cos\phi\sin\theta\sin\psi) + \\ &+v_3(-\cos\phi\cos\theta)\\ f_{2\theta} = &+v_1(\sin\phi\cos\theta\cos\psi) + \\ &+v_2(-\sin\phi\cos\theta\sin\psi) +\\ &+v_3(\sin\phi\sin\theta) \\ f_{2\psi} = &+v_1(\cos\phi\cos\psi - \sin\phi\sin\theta\sin\psi) + \\ &+v_2(-\cos\phi\sin\psi - \sin\phi\sin\theta\cos\psi)\end{split}\] \[\begin{split}f_{3\phi} = &+v_1(\cos\phi\sin\psi+\sin\phi\sin\theta\cos\psi) + \\ &+v_2(\cos\phi\cos\psi - \sin\phi\sin\theta\sin\psi) + \\ &+v_3(-\sin\phi\cos\theta)\\ f_{3\theta} = &+v_1(-\cos\phi\cos\theta\cos\psi)+\\ &+v_2(\cos\phi\cos\theta\sin\psi) + \\ &+v_3(-\cos\phi\sin\theta)\\ f_{3\psi} = &+v_1(\sin\phi\cos\psi+\cos\phi\sin\theta\sin\psi) + \\ &+v_2(-\sin\phi\sin\psi + \cos\phi\sin\theta\cos\psi)\end{split}\] | param euler: | Vector of Euler angles, \(\mathbf{\Theta} = [\phi, \theta, \psi]\), in radians. | | type euler: | np.ndarray | | param v: | 3 dimensional vector in G frame. | | type v: | np.ndarray | | returns: | Resulting 3 by 3 matrix \(\frac{\partial \mathbf{f}}{\partial\mathbf{\Theta}}\). 
| | rtype: | np.ndarray | ###### der_Ceuler_by_v_NED[¶](#module-sharpy.utils.algebra.der_Ceuler_by_v_NED) Provides the derivative of the product between the rotation matrix \(C^{AG}(\mathbf{\Theta})\) and a constant vector, \(\mathbf{v}\), with respect to the Euler angles, \(\mathbf{\Theta}=[\phi,\theta,\psi]^T\): \[\frac{\partial}{\partial\Theta}(C^{AG}(\Theta)\mathbf{v}^G) = \frac{\partial \mathbf{f}}{\partial\mathbf{\Theta}}\] where \(\frac{\partial \mathbf{f}}{\partial\mathbf{\Theta}}\) is the resulting 3 by 3 matrix. Being \(C^{AG}(\Theta)\) the rotation matrix from the G frame to the A frame in terms of the Euler angles \(\Theta\) as: \[\begin{split}C^{AG}(\Theta) = \begin{bmatrix} \cos\theta\cos\psi & \cos\theta\sin\psi & -\sin\theta \\ -\cos\phi\sin\psi + \sin\phi\sin\theta\cos\psi & \cos\phi\cos\psi + \sin\phi\sin\theta\sin\psi & \sin\phi\cos\theta \\ \sin\phi\sin\psi + \cos\phi\sin\theta\cos\psi & -\sin\phi\cos\psi + \cos\psi\sin\theta\sin\psi & \cos\phi\cos\theta \end{bmatrix}\end{split}\] the components of the derivative at hand are the following, where \(f_{1\theta} = \frac{\partial \mathbf{f}_1}{\partial\theta}\). 
\[\begin{split}f_{1\phi} =&0 \\ f_{1\theta} = &-v_1\sin\theta\cos\psi \\ &-v_2\sin\theta\sin\psi \\ &-v_3\cos\theta \\ f_{1\psi} = &-v_1\cos\theta\sin\psi + v_2\cos\theta\cos\psi\end{split}\] \[\begin{split}f_{2\phi} = &+v_1(\sin\phi\sin\psi + \cos\phi\sin\theta\cos\psi) + \\ &+v_2(-\sin\phi\cos\psi + \cos\phi\sin\theta\sin\psi) + \\ &+v_3(\cos\phi\cos\theta) \\ f_{2\theta} = &+v_1(\sin\phi\cos\theta\cos\psi) + \\ &+v_2(\sin\phi\cos\theta\sin\psi) +\\ &-v_3(\sin\phi\sin\theta) \\ f_{2\psi} = &+v_1(-\cos\phi\cos\psi - \sin\phi\sin\theta\sin\psi) + \\ &+v_2(-\cos\phi\sin\psi + \sin\phi\sin\theta\cos\psi)\end{split}\] \[\begin{split}f_{3\phi} = &+v_1(\cos\phi\sin\psi-\sin\phi\sin\theta\cos\psi) + \\ &+v_2(-\cos\phi\cos\psi - \sin\phi\sin\theta\sin\psi) + \\ &+v_3(-\sin\phi\cos\theta) \\ f_{3\theta} = &+v_1(\cos\phi\cos\theta\cos\psi)+\\ &+v_2(\cos\phi\cos\theta\sin\psi) + \\ &+v_3(-\cos\phi\sin\theta) \\ f_{3\psi} = &+v_1(\sin\phi\cos\psi-\cos\phi\sin\theta\sin\psi) + \\ &+v_2(\sin\phi\sin\psi + \cos\phi\sin\theta\cos\psi)\end{split}\] Note This function is defined in a North East Down frame which is not the typically used one in SHARPy. | param euler: | Vector of Euler angles, \(\mathbf{\Theta} = [\phi, \theta, \psi]\), in radians. | | type euler: | np.ndarray | | param v: | 3 dimensional vector in G frame. | | type v: | np.ndarray | | returns: | Resulting 3 by 3 matrix \(\frac{\partial \mathbf{f}}{\partial\mathbf{\Theta}}\). | | rtype: | np.ndarray | ###### der_CquatT_by_v[¶](#module-sharpy.utils.algebra.der_CquatT_by_v) Returns the derivative with respect to quaternion components of a projection matrix times a constant vector. 
Being \(\mathbf{C}=\mathbf{R}(\boldsymbol{\chi})^\top\) the projection matrix depending on the quaternion \(\boldsymbol{\chi}\) and obtained through the function defined as `C=quat2rotation(q).T`, this function returns the derivative, with respect to the quaternion components, of the vector \((\mathbf{C\cdot v})\), where \(\mathbf{v}\) is a constant vector. The derivative operation is defined as: \[\delta(\mathbf{C}\cdot \mathbf{v}) = \frac{\partial}{\partial\boldsymbol{\chi}}\left(\mathbf{C\cdot v}\right)\delta\boldsymbol{\chi}\] where, for simplicity, we define \[\mathbf{D} = \frac{\partial}{\partial\boldsymbol{\chi}}\left(\mathbf{C\cdot v}\right) \in \mathbb{R}^{3\times4}\] and \(\delta(\bullet)\) is a delta operator. The members of \(\mathbf{D}\) are the following: \[\begin{split}\mathbf{D}_{11} &= 2 (q_0 v_x - q_2 v_z + q_3 v_y)\\ \mathbf{D}_{12} &= 2 (q_1 v_x - q_2 v_y + q_3 v_z)\\ \mathbf{D}_{13} &= 2 (-q_0 v_z + q_1 v_y - q_2 v_x)\\ \mathbf{D}_{14} &= 2 (q_0 v_y + q_1 v_z - q_3 v_x)\end{split}\] \[\begin{split}\mathbf{D}_{21} &= 2 (q_0 v_y + q_1 v_z - q_3 v_x)\\ \mathbf{D}_{22} &= 2 (q_0 v_z - q_1 v_y + q_2 v_x)\\ \mathbf{D}_{23} &= 2 (q_1 v_x + q_2 v_y + q_3 v_z)\\ \mathbf{D}_{24} &= 2 (-q_0 v_x + q_2 v_z - q_3 v_y)\end{split}\] \[\begin{split}\mathbf{D}_{31} &= 2 (q_0 v_z - q_1 v_y + q_2 v_x)\\ \mathbf{D}_{32} &= 2 (-q_0 v_y - q_1 v_z + q_3 v_x)\\ \mathbf{D}_{33} &= 2 (q_0 v_x - q_2 v_z + q_3 v_y)\\ \mathbf{D}_{34} &= 2 (q_1 v_x + q_2 v_y + q_3 v_z)\\\end{split}\] | returns: | \(\mathbf{D}\) matrix. | | rtype: | np.array | ###### der_Cquat_by_v[¶](#module-sharpy.utils.algebra.der_Cquat_by_v) Being C=C(quat) the rotational matrix depending on the quaternion q and defined as C=quat2rotation(q), the function returns the derivative, w.r.t. the quaternion components, of the vector dot(C,v), where v is a constant vector. The elements of the resulting derivative matrix D are ordered such that: \[d(C*v) = D*d(q)\] where \(d(.)\) is a delta operator.
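The docstrings in this module note that these derivative expressions were verified by finite differences. The sketch below shows what such a check looks like: `quat2rotation` is re-implemented from the matrix documented further down this page, and the Jacobian \(\mathbf{D}\) in \(d(C\mathbf{v}) = \mathbf{D}\,d(q)\) is approximated column by column with central differences. The function names are illustrative, not SHARPy's own.

```python
import numpy as np

def quat2rotation(q):
    """Rotation matrix C^{AB}(q), per the convention documented for
    algebra.quat2rotation further down this page."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0**2 + q1**2 - q2**2 - q3**2, 2*(q1*q2 - q0*q3), 2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3), q0**2 - q1**2 + q2**2 - q3**2, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2), 2*(q2*q3 + q0*q1), q0**2 - q1**2 - q2**2 + q3**2],
    ])

def fd_der_Cquat_by_v(q, v, eps=1e-6):
    """Central-difference estimate of D in d(C(q) v) = D d(q): a 3x4 matrix
    with one column per quaternion component."""
    D = np.zeros((3, 4))
    for k in range(4):
        dq = np.zeros(4)
        dq[k] = eps
        D[:, k] = (quat2rotation(q + dq) @ v - quat2rotation(q - dq) @ v) / (2 * eps)
    return D
```

Since every entry of \(C(q)\) is quadratic in the quaternion components, the central difference is exact up to round-off, and the linear prediction \(\mathbf{D}\,dq\) matches the actual change in \(C(q)\mathbf{v}\) to second order in \(dq\).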
###### der_Peuler_by_v[¶](#module-sharpy.utils.algebra.der_Peuler_by_v) Provides the derivative of the product between the projection matrix \(P^{AG}(\mathbf{\Theta})\) (that projects a vector in G frame onto A frame) and a constant vector expressed in G frame of reference, \(\mathbf{v}_G\), with respect to the Euler angles, \(\mathbf{\Theta}=[\phi,\theta,\psi]^T\): \[\frac{\partial}{\partial\Theta}(P^{AG}(\Theta)\mathbf{v}^G) = \frac{\partial \mathbf{f}}{\partial\mathbf{\Theta}}\] where \(\frac{\partial \mathbf{f}}{\partial\mathbf{\Theta}}\) is the resulting 3 by 3 matrix. Being \(P^{AG}(\Theta)\) the projection matrix from the G frame to the A frame in terms of the Euler angles \(\Theta\) as \(P^{AG}(\Theta) = \tau_x(-\Phi)\tau_y(-\Theta)\tau_z(-\Psi)\), where the rotation matrix is expressed as: \[\begin{split}C^{AG}(\Theta) = \begin{bmatrix} \cos\theta\cos\psi & -\cos\theta\sin\psi & \sin\theta \\ \cos\phi\sin\psi + \sin\phi\sin\theta\cos\psi & \cos\phi\cos\psi - \sin\phi\sin\theta\sin\psi & -\sin\phi\cos\theta \\ \sin\phi\sin\psi - \cos\phi\sin\theta\cos\psi & \sin\phi\cos\psi + \cos\phi\sin\theta\sin\psi & \cos\phi\cos\theta \end{bmatrix}\end{split}\] and the projection matrix as: \[\begin{split}P^{AG}(\Theta) = \begin{bmatrix} \cos\theta\cos\psi & \cos\theta\sin\psi & -\sin\theta \\ -\cos\phi\sin\psi + \sin\phi\sin\theta\cos\psi & \cos\phi\cos\psi + \sin\phi\sin\theta\sin\psi & \sin\phi\cos\theta \\ \sin\phi\sin\psi + \cos\phi\sin\theta\cos\psi & -\sin\phi\cos\psi + \cos\phi\sin\theta\sin\psi & \cos\phi\cos\theta \end{bmatrix}\end{split}\] the components of the derivative at hand are the following, where \(f_{1\theta} = \frac{\partial \mathbf{f}_1}{\partial\theta}\). 
\[\begin{split}f_{1\phi} =&0 \\ f_{1\theta} = &-v_1\sin\theta\cos\psi \\ &+v_2\sin\theta\sin\psi \\ &+v_3\cos\theta \\ f_{1\psi} = &-v_1\cos\theta\sin\psi \\ &- v_2\cos\theta\cos\psi\end{split}\] \[\begin{split}f_{2\phi} = &+v_1(-\sin\phi\sin\psi + \cos\phi\sin\theta\cos\psi) + \\ &+v_2(-\sin\phi\cos\psi - \cos\phi\sin\theta\sin\psi) + \\ &+v_3(-\cos\phi\cos\theta)\\ f_{2\theta} = &+v_1(\sin\phi\cos\theta\cos\psi) + \\ &+v_2(-\sin\phi\cos\theta\sin\psi) +\\ &+v_3(\sin\phi\sin\theta)\\ f_{2\psi} = &+v_1(\cos\phi\cos\psi - \sin\phi\sin\theta\sin\psi) + \\ &+v_2(-\cos\phi\sin\psi - \sin\phi\sin\theta\cos\psi)\end{split}\] \[\begin{split}f_{3\phi} = &+v_1(\cos\phi\sin\psi+\sin\phi\sin\theta\cos\psi) + \\ &+v_2(\cos\phi\cos\psi - \sin\phi\sin\theta\sin\psi) + \\ &+v_3(-\sin\phi\cos\theta)\\ f_{3\theta} = &+v_1(-\cos\phi\cos\theta\cos\psi)+\\ &+v_2(\cos\phi\cos\theta\sin\psi) + \\ &+v_3(-\cos\phi\sin\theta)\\ f_{3\psi} = &+v_1(\sin\phi\cos\psi+\cos\phi\sin\theta\sin\psi) + \\ &+v_2(-\sin\phi\sin\psi + \cos\phi\sin\theta\cos\psi)\end{split}\] | param euler: | Vector of Euler angles, \(\mathbf{\Theta} = [\phi, \theta, \psi]\), in radians. | | type euler: | np.ndarray | | param v: | 3 dimensional vector in G frame. | | type v: | np.ndarray | | returns: | Resulting 3 by 3 matrix \(\frac{\partial \mathbf{f}}{\partial\mathbf{\Theta}}\). | | rtype: | np.ndarray | ###### der_TanT_by_xv[¶](#module-sharpy.utils.algebra.der_TanT_by_xv) Being fv0 a cartesian rotation vector and Tan the corresponding tangential operator (computed through crv2tan(fv)), the function returns the derivative of dot(Tan^T,xv), where xv is a constant vector. The elements of the resulting derivative matrix D are ordered such that: \[d(Tan^T*xv) = D*d(fv)\] where \(d(.)\) is a delta operator. Note The derivative expression has been derived symbolically and verified by FDs. A more compact expression may be possible. 
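The tangential operator that underlies this derivative (`crv2tan`, documented earlier on this page) can be sketched standalone to show how the series-expansion branch agrees with the closed form near \(\|\boldsymbol{\Psi}\| = 0\). The function names and the tolerance switch below are illustrative, not SHARPy's implementation.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix: skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def crv2tan(psi, tol=1e-8):
    """Tangential operator T(psi) from the documented closed form, switching
    to the series expansion I - (1/2!) pt + (1/3!) pt^2 when the CRV norm is
    below tol."""
    a = np.linalg.norm(psi)
    pt = skew(psi)
    if a < tol:
        return np.eye(3) - 0.5 * pt + (pt @ pt) / 6.0
    return (np.eye(3)
            + ((np.cos(a) - 1.0) / a**2) * pt
            + (1.0 - np.sin(a) / a) * (pt @ pt) / a**2)
```

Evaluating both branches for a CRV with a tiny norm shows they agree to round-off, which is why the singularity at \(\|\boldsymbol{\Psi}\|=0\) is harmless in practice.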
###### der_Tan_by_xv[¶](#module-sharpy.utils.algebra.der_Tan_by_xv) Being fv0 a cartesian rotation vector and Tan the corresponding tangential operator (computed through crv2tan(fv)), the function returns the derivative of dot(Tan,xv), where xv is a constant vector. The elements of the resulting derivative matrix D are ordered such that: \[d(Tan*xv) = D*d(fv)\] where \(d(.)\) is a delta operator. Note The derivative expression has been derived symbolically and verified by FDs. A more compact expression may be possible. ###### der_Teuler_by_w[¶](#module-sharpy.utils.algebra.der_Teuler_by_w) Calculates the matrix \[\frac{\partial}{\partial\Theta}\left.\left(T^{GA}(\mathbf{\Theta}) \mathbf{\omega}^A\right)\right|_{\Theta_0,\omega^A_0}\] from the linearised euler propagation equations \[\delta\mathbf{\dot{\Theta}} = \frac{\partial}{\partial\Theta}\left.\left(T^{GA}(\mathbf{\Theta}) \mathbf{\omega}^A\right)\right|_{\Theta_0,\omega^A_0}\delta\mathbf{\Theta} + T^{GA}(\mathbf{\Theta_0}) \delta\mathbf{\omega}^A\] where \(T^{GA}\) is the nonlinear relation between the euler angle rates and the rotational velocities and is provided by `deuler_dt()`. The concerned matrix is calculated as follows: \[\begin{split}\frac{\partial}{\partial\Theta}\left.\left(T^{GA}(\mathbf{\Theta}) \mathbf{\omega}^A\right)\right|_{\Theta_0,\omega^A_0} = \\ \begin{bmatrix} q\cos\phi\tan\theta-r\sin\phi\tan\theta & q\sin\phi\sec^2\theta + r\cos\phi\sec^2\theta & 0 \\ -q\sin\phi - r\cos\phi & 0 & 0 \\ q\frac{\cos\phi}{\cos\theta}-r\frac{\sin\phi}{\cos\theta} & q\sin\phi\tan\theta\sec\theta + r\cos\phi\tan\theta\sec\theta & 0 \end{bmatrix}_{\Theta_0, \omega^A_0}\end{split}\] Note This function is defined in a North East Down frame which is not the typically used one in SHARPy. | param euler: | Euler angles at the linearisation point \(\mathbf{\Theta}_0 = [\phi,\theta,\psi]\) or roll, pitch and yaw angles, respectively. 
| | type euler: | np.ndarray | | param w: | Rotational velocities at the linearisation point in A frame \(\omega^A_0\). | | type w: | np.ndarray | | returns: | Computed \(\frac{\partial}{\partial\Theta}\left.\left(T^{GA}(\mathbf{\Theta})\mathbf{\omega}^A\right)\right|_{\Theta_0,\omega^A_0}\) | | rtype: | np.ndarray | ###### der_Teuler_by_w_NED[¶](#module-sharpy.utils.algebra.der_Teuler_by_w_NED) Warning Based on a NED G frame Calculates the matrix \[\frac{\partial}{\partial\Theta}\left.\left(T^{GA}(\mathbf{\Theta}) \mathbf{\omega}^A\right)\right|_{\Theta_0,\omega^A_0}\] from the linearised euler propagation equations \[\delta\mathbf{\dot{\Theta}} = \frac{\partial}{\partial\Theta}\left.\left(T^{GA}(\mathbf{\Theta}) \mathbf{\omega}^A\right)\right|_{\Theta_0,\omega^A_0}\delta\mathbf{\Theta} + T^{GA}(\mathbf{\Theta_0}) \delta\mathbf{\omega}^A\] where \(T^{GA}\) is the nonlinear relation between the euler angle rates and the rotational velocities and is provided by `deuler_dt()`. The concerned matrix is calculated as follows: \[\begin{split}\frac{\partial}{\partial\Theta}\left.\left(T^{GA}(\mathbf{\Theta}) \mathbf{\omega}^A\right)\right|_{\Theta_0,\omega^A_0} = \\ \begin{bmatrix} q\cos\phi\tan\theta-r\sin\phi\tan\theta & q\sin\phi\sec^2\theta + r\cos\phi\sec^2\theta & 0 \\ -q\sin\phi - r\cos\phi & 0 & 0 \\ q\frac{\cos\phi}{\cos\theta}-r\frac{\sin\phi}{\cos\theta} & q\sin\phi\tan\theta\sec\theta + r\cos\phi\tan\theta\sec\theta & 0 \end{bmatrix}_{\Theta_0, \omega^A_0}\end{split}\] | param euler: | Euler angles at the linearisation point \(\mathbf{\Theta}_0 = [\phi,\theta,\psi]\) or roll, pitch and yaw angles, respectively. | | type euler: | np.ndarray | | param w: | Rotational velocities at the linearisation point in A frame \(\omega^A_0\). 
| | type w: | np.ndarray | | returns: | Computed \(\frac{\partial}{\partial\Theta}\left.\left(T^{GA}(\mathbf{\Theta})\mathbf{\omega}^A\right)\right|_{\Theta_0,\omega^A_0}\) | | rtype: | np.ndarray | ###### der_quat_wrt_crv[¶](#module-sharpy.utils.algebra.der_quat_wrt_crv) Provides the change of quaternion, dquat, due to an elementary rotation, dcrv, expressed as a 3-component Cartesian rotation vector such that \[C(quat + dquat) = C(quat0)C(dw)\] where C are rotation matrices. Examples Assume 3 FoRs, G, A and B where: * G is the initial FoR * quat0 defines the rotation required to obtain A from G, namely: Cga=quat2rotation(quat0) * dcrv is an infinitesimal Cartesian rotation vector, defined in A components, which describes an infinitesimal rotation A -> B, namely: `Cab=crv2rotation(dcrv)` * The total rotation G -> B is: Cgb = Cga * Cab * As dcrv -> 0, Cgb is equal to: \[algebra.quat2rotation(quat0 + dquat),\] where dquat is the output of this function. ###### deuler_dt[¶](#module-sharpy.utils.algebra.deuler_dt) Rate of change of the Euler angles in time for a given angular velocity in A frame \(\omega^A=[p, q, r]\). \[\begin{split}\begin{bmatrix}\dot{\phi} \\ \dot{\theta} \\ \dot{\psi}\end{bmatrix} = \begin{bmatrix} 1 & \sin\phi\tan\theta & -\cos\phi\tan\theta \\ 0 & \cos\phi & \sin\phi \\ 0 & -\frac{\sin\phi}{\cos\theta} & \frac{\cos\phi}{\cos\theta} \end{bmatrix} \begin{bmatrix} p \\ q \\ r \end{bmatrix}\end{split}\] | param euler: | Euler angles \([\phi, \theta, \psi]\) for roll, pitch and yaw, respectively. | | type euler: | np.ndarray | | returns: | Propagation matrix relating the rotational velocities to the euler angles. | | rtype: | np.ndarray | ###### deuler_dt_NED[¶](#module-sharpy.utils.algebra.deuler_dt_NED) Warning Based on a NED frame Rate of change of the Euler angles in time for a given angular velocity in A frame \(\omega^A=[p, q, r]\).
\[\begin{split}\begin{bmatrix}\dot{\phi} \\ \dot{\theta} \\ \dot{\psi}\end{bmatrix} = \begin{bmatrix} 1 & \sin\phi\tan\theta & \cos\phi\tan\theta \\ 0 & \cos\phi & -\sin\phi \\ 0 & \frac{\sin\phi}{\cos\theta} & \frac{\cos\phi}{\cos\theta} \end{bmatrix} \begin{bmatrix} p \\ q \\ r \end{bmatrix}\end{split}\] Note This function is defined in a North East Down frame which is not the typically used one in SHARPy. | param euler: | Euler angles \([\phi, \theta, \psi]\) for roll, pitch and yaw, respectively. | | type euler: | np.ndarray | | returns: | Propagation matrix relating the rotational velocities to the euler angles. | | rtype: | np.ndarray | ###### euler2quat[¶](#module-sharpy.utils.algebra.euler2quat) | param euler: | Euler angles | | returns: | Equivalent quaternion. | | rtype: | np.ndarray | ###### euler2rot[¶](#module-sharpy.utils.algebra.euler2rot) Transforms Euler angles (roll, pitch and yaw \(\Phi, \Theta, \Psi\)) into a 3x3 rotation matrix that rotates a vector in yaw, pitch and roll. The rotations are performed successively, first in yaw, then in pitch and finally in roll. \[\mathbf{T}_{AG} = \mathbf{\tau}_x(\Phi) \mathbf{\tau}_y(\Theta) \mathbf{\tau}_z(\Psi)\] where \(\mathbf{\tau}\) represents the rotation about the subscripted axis. | param euler: | 1x3 array with the Euler angles in the form `[roll, pitch, yaw]` in radians | | type euler: | np.array | | returns: | 3x3 transformation matrix describing the rotation by the input Euler angles. | | rtype: | np.array | ###### get_triad[¶](#module-sharpy.utils.algebra.get_triad) Generates two unit vectors in body FoR that define the local FoR for a beam element. These vectors are calculated using frame_of_reference_delta :return: ###### mat2quat[¶](#module-sharpy.utils.algebra.mat2quat) Rotation matrix to quaternion function. Warning This function is deprecated and no longer supported. Please use `algebra.rotation2quat(rot.T)` instead.
| param rot: | Rotation matrix | | returns: | equivalent quaternion | | rtype: | np.array | ###### multiply_matrices[¶](#module-sharpy.utils.algebra.multiply_matrices) multiply_matrices Multiply a series of matrices from left to right | param *argv: | series of numpy arrays | | returns: | product of all the given matrices | | rtype: | sol(numpy array) | Examples solution = multiply_matrices(A, B, C) ###### norm3d[¶](#module-sharpy.utils.algebra.norm3d) Norm of a 3D vector Notes Faster than np.linalg.norm | param v: | 3D vector | | type v: | np.ndarray | | returns: | Norm of the vector | | rtype: | np.ndarray | ###### normsq3d[¶](#module-sharpy.utils.algebra.normsq3d) Square of the norm of a 3D vector | param v: | 3D vector | | type v: | np.ndarray | | returns: | Square of the norm of the vector | | rtype: | np.ndarray | ###### quadskew[¶](#module-sharpy.utils.algebra.quadskew) Generates the matrix needed to obtain the quaternion in the following time step through integration of the FoR angular velocity. | param vector: | FoR angular velocity | | type vector: | np.array | Notes The angular velocity is assumed to be constant in the time interval Equivalent to lib_xbeam function Quaternion ODE to compute orientation of body-fixed frame a See Shearer and Cesnik (2007) for definition | returns: | matrix | | rtype: | np.array | ###### quat2euler[¶](#module-sharpy.utils.algebra.quat2euler) Quaternion to Euler angles transformation. Transforms a normalised quaternion \(\chi\longrightarrow[\phi, \theta, \psi]\) to roll, pitch and yaw angles respectively. The transformation is valid away from the singularity present at: \[\Delta = \frac{1}{2}\] where \(\Delta = q_0 q_2 - q_1 q_3\). The transformation is carried out as follows: \[\begin{split}\psi &= \arctan{\left(2\frac{q_0q_3+q_1q_2}{1-2(q_2^2+q_3^2)}\right)} \\ \theta &= \arcsin(2\Delta) \\ \phi &= \arctan\left(2\frac{q_0q_1 + q_2q_3}{1-2(q_1^2+q_2^2)}\right)\end{split}\] | param quat: | Normalised quaternion. 
| | type quat: | np.ndarray | | returns: | Array containing the Euler angles \([\phi, \theta, \psi]\) for roll, pitch and yaw, respectively. | | rtype: | np.ndarray | References <NAME>. - A tutorial on SE(3) transformation parameterizations and on-manifold optimization. Technical Report 012010. ETS Ingenieria Informatica. Universidad de Malaga. 2013. ###### quat2rotation[¶](#module-sharpy.utils.algebra.quat2rotation) Calculate rotation matrix based on quaternions. If B is a FoR obtained rotating a FoR A by an angle \(\phi\) about an axis \(\mathbf{n}\) (recall \(\mathbf{n}\) will be invariant during the rotation), and \(\mathbf{q}\) is the related quaternion, \(\mathbf{q}(\phi,\mathbf{n})\), the function will return the matrix \(C^{AB}\) such that: > * \(C^{AB}\) rotates FoR A onto FoR B. > * \(C^{AB}\) transforms the coordinates of a vector defined in B components to > A components, i.e. \(\mathbf{v}^A = C^{AB}(\mathbf{q})\mathbf{v}^B\). \[\begin{split}C^{AB}(\mathbf{q}) = \begin{pmatrix} q_0^2 + q_1^2 - q_2^2 -q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & q_0^2 - q_1^2 + q_2^2 - q_3^2 & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2 -q_1^2 -q_2^2 +q_3^2 \end{pmatrix}\end{split}\] Notes The inverse rotation is defined as the transpose of the matrix, \(C^{BA} = (C^{AB})^\top\). In typical SHARPy applications, the quaternion relation between the A and G frames is expressed as \(C^{GA}(\mathbf{q})\), and in the context of this function it corresponds to: ``` >>> C_ga = quat2rotation(q1) >>> C_ag = quat2rotation(q1).T ``` | param q: | Quaternion \(\mathbf{q}(\phi, \mathbf{n})\). | | type q: | np.ndarray | | returns: | \(C^{AB}\) rotation matrix from FoR B to FoR A. | | rtype: | np.ndarray | References Stevens, L. Aircraft Control and Simulation. 1985.
pg 41 ###### quat_bound[¶](#module-sharpy.utils.algebra.quat_bound) Given a quaternion, \(\vec{\chi}\), associated to a rotation of angle \(\psi\) about an axis \(\mathbf{\hat{n}}\), the function “bounds” the quaternion, i.e. sets the rotation axis \(\mathbf{\hat{n}}\) such that \(\psi\) in \([-\pi,\pi]\). Notes As quaternions are defined as: > \[\vec{\chi}= > \left[\cos\left(\frac{\psi}{2}\right),\, > \sin\left(\frac{\psi}{2}\right)\mathbf{\hat{n}}\right]\] this is equivalent to enforcing \(\chi_0\ge0\). | param quat: | quaternion to bound | | type quat: | np.array | | returns: | bounded quaternion | | rtype: | np.array | ###### rotation2crv[¶](#module-sharpy.utils.algebra.rotation2crv) Given a rotation matrix \(C^{AB}\) rotating the frame A onto B, the function returns the minimal size Cartesian rotation vector, \(\vec{\psi}\) representing this rotation. | param Cab: | rotation matrix \(C^{AB}\) | | type Cab: | np.array | | returns: | equivalent Cartesian rotation vector, \(\vec{\psi}\). | | rtype: | np.array | Notes this is the inverse of `algebra.crv2rotation` for Cartesian rotation vectors associated to rotations in the range \([-\pi,\,\pi]\), i.e.: > `fv == algebra.rotation2crv(algebra.crv2rotation(fv))` for each Cartesian rotation vector of the form \(\vec{\psi} = \psi\,\mathbf{\hat{n}}\) represented as `fv=a*nv` such that `nv` is a unit vector and the scalar `a` is in the range \([-\pi,\,\pi]\). 
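The conventions of `crv2quat`, `crv2rotation` and `quat2rotation` described above can be cross-checked numerically. The snippet below re-implements the three documented formulas as standalone helpers (illustrative stand-ins, not the SHARPy functions) and verifies that the quaternion route and the Rodrigues route produce the same rotation matrix.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix: skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def crv2quat(psi):
    """Minimal-rotation quaternion [cos(a/2), sin(a/2) n] for a CRV psi = a n."""
    a = np.linalg.norm(psi)
    if a < 1e-15:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate(([np.cos(a / 2.0)], np.sin(a / 2.0) * psi / a))

def crv2rotation(psi):
    """Rodrigues formula as documented for algebra.crv2rotation, with the
    series expansion near psi = 0."""
    a = np.linalg.norm(psi)
    pt = skew(psi)
    if a < 1e-8:
        return np.eye(3) + pt + 0.5 * (pt @ pt)
    return np.eye(3) + (np.sin(a) / a) * pt + ((1.0 - np.cos(a)) / a**2) * (pt @ pt)

def quat2rotation(q):
    """Rotation matrix C^{AB}(q) as documented for algebra.quat2rotation."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0**2 + q1**2 - q2**2 - q3**2, 2*(q1*q2 - q0*q3), 2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3), q0**2 - q1**2 + q2**2 - q3**2, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2), 2*(q2*q3 + q0*q1), q0**2 - q1**2 - q2**2 + q3**2],
    ])
```

For any CRV in the minimal range, `quat2rotation(crv2quat(psi))` and `crv2rotation(psi)` agree, and the result is a proper orthogonal matrix, which is the consistency the "inverse" notes above rely on.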
###### rotation2quat[¶](#module-sharpy.utils.algebra.rotation2quat)

Given a rotation matrix \(C^{AB}\) rotating the frame A onto B, the function returns the minimal “positive angle” quaternion representing this rotation, where the quaternion, \(\vec{\chi}\), is defined as:

> \[\vec{\chi}=
> \left[\cos\left(\frac{\psi}{2}\right),\,
> \sin\left(\frac{\psi}{2}\right)\mathbf{\hat{n}}\right]\]

| param Cab: | rotation matrix \(C^{AB}\) from frame A to B |
| type Cab: | np.array |
| returns: | equivalent quaternion \(\vec{\chi}\) |
| rtype: | np.array |

Notes

This is the inverse of `algebra.quat2rotation` for rotations in the range \([-\pi,\pi]\), i.e.:

> `quat == algebra.rotation2quat(algebra.quat2rotation(quat))`

for every quaternion \(\vec{\chi}\) built from a rotation angle \(\psi\) in the range \([-\pi,\,\pi]\) about a unit axis \(\mathbf{\hat{n}}\) (equivalently, every quaternion with \(\chi_0 \ge 0\)).

###### rotation3d_x[¶](#module-sharpy.utils.algebra.rotation3d_x)

Rotation matrix about the x axis by the input angle \(\Phi\)

\[\begin{split}\mathbf{\tau}_x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(\Phi) & -\sin(\Phi) \\ 0 & \sin(\Phi) & \cos(\Phi) \end{bmatrix}\end{split}\]

| param angle: | angle of rotation in radians about the x axis |
| type angle: | float |
| returns: | 3x3 rotation matrix about the x axis |
| rtype: | np.array |

###### rotation3d_y[¶](#module-sharpy.utils.algebra.rotation3d_y)

Rotation matrix about the y axis by the input angle \(\Theta\)

\[\begin{split}\mathbf{\tau}_y = \begin{bmatrix} \cos(\Theta) & 0 & -\sin(\Theta) \\ 0 & 1 & 0 \\ \sin(\Theta) & 0 & \cos(\Theta) \end{bmatrix}\end{split}\]

| param angle: | angle of rotation in radians about the y axis |
| type angle: | float |
| returns: | 3x3 rotation matrix about the y axis |
| rtype: | np.array |

###### rotation3d_z[¶](#module-sharpy.utils.algebra.rotation3d_z)

Rotation matrix about the z axis by the input angle \(\Psi\)
\[\begin{split}\mathbf{\tau}_z = \begin{bmatrix} \cos(\Psi) & -\sin(\Psi) & 0 \\ \sin(\Psi) & \cos(\Psi) & 0 \\ 0 & 0 & 1 \end{bmatrix}\end{split}\]

| param angle: | angle of rotation in radians about the z axis |
| type angle: | float |
| returns: | 3x3 rotation matrix about the z axis |
| rtype: | np.array |

###### skew[¶](#module-sharpy.utils.algebra.skew)

Returns a skew-symmetric matrix such that

\[\boldsymbol{v} \times \boldsymbol{u} = \tilde{\boldsymbol{v}}\,\boldsymbol{u}\]

where

\[\begin{split}\tilde{\boldsymbol{v}} = \begin{bmatrix} 0 & -v_z & v_y \\ v_z & 0 & -v_x \\ -v_y & v_x & 0 \end{bmatrix}.\end{split}\]

| param vector: | 3-dimensional vector |
| type vector: | np.ndarray |
| returns: | Skew-symmetric matrix. |
| rtype: | np.array |

###### tangent_vector[¶](#module-sharpy.utils.algebra.tangent_vector)

Tangent vector calculation for 2+ noded elements.

Calculates the tangent vector interpolating every dimension separately. It uses a polynomial of degree (n_nodes - 1), and the differentiation is analytical.

Calculation method:

> 1. A polynomial of degree n_nodes-1 is fitted through the nodes per dimension.
> 2. Those polynomials are analytically differentiated with respect to the node index.
> 3. The tangent vector is given by:
> \[\vec{t} = \frac{s_x'\vec{i} + s_y'\vec{j} + s_z'\vec{k}}{\left| s_x'\vec{i} + s_y'\vec{j} + s_z'\vec{k}\right|}\]
> where \('\) denotes differentiation with respect to the node index.

| param in_coord: | array of coordinates of the nodes. Dimensions = `[n_nodes, ndim]` |
| type in_coord: | np.ndarray |

Notes

Dimensions are treated independently from each other; the interpolating polynomials are computed individually.

###### triad2rotation[¶](#module-sharpy.utils.algebra.triad2rotation)

If the input triad is the “b” coord system given in the “a” frame (the vectors of the triad are xb, yb, zb), this function returns Rab, i.e. the rotation matrix required to rotate the FoR A onto B.
:param xb:
:param yb:
:param zb:
:return: rotation matrix Rab

###### unit_vector[¶](#module-sharpy.utils.algebra.unit_vector)

Transforms the input vector into a unit vector

\[\mathbf{\hat{v}} = \frac{\mathbf{v}}{\|\mathbf{v}\|}\]

| param vector: | vector to normalise |
| type vector: | np.array |
| returns: | unit vector |
| rtype: | np.array |

##### Analytical Functions[¶](#analytical-functions)

Analytical solutions for a 2D aerofoil based on thin-plate theory

Author: <NAME>

Date: 23 May 2017

References:

1. Simpson, R.J.S., <NAME>. & <NAME>., 2013. Induced-Drag Calculations in the Unsteady Vortex Lattice Method. AIAA Journal, 51(7), pp.1775–1779.
2. <NAME>., 2009. Propulsive Force of a Flexible Flapping Thin Airfoil. Journal of Aircraft, 46(2), pp.465–473.

###### flat_plate_analytical[¶](#module-sharpy.utils.analytical.flat_plate_analytical)

Computes the analytical frequency response of a flat plate for the input/output sequences in `input_seq` and `output_seq` over the frequency points `kv`, where available. The output array of complex values `Yan` has shape `(Nout, Nin, Nk)`; if an analytical solution is not available, the response is assumed to be zero. If `plunge_deriv` is `True`, the plunge response is expressed in terms of the first derivative dh.

| param kv: | Frequency range of length `Nk`. |
| type kv: | np.array |
| param x_ea_perc: | Elastic axis location along the chord as chord length percentage. |
| type x_ea_perc: | float |
| param x_fh_perc: | Flap hinge location along the chord as chord length percentage. |
| type x_fh_perc: | float |
| param input_seq: | List of `Nin` number of inputs. Supported inputs include: * `gust_sears`: Response to a continuous sinusoidal gust. * `pitch`: Response to an oscillatory pitching motion. * `plunge`: Response to an oscillatory plunging motion. |
| type input_seq: | list(str) |
| param output_seq: | List of `Nout` number of outputs. Supported outputs include: * `Fy`: Vertical force. * `Mz`: Pitching moment.
| type output_seq: | list(str) |
| param output_scal: | Array of factors by which to divide the desired outputs. Dimensions of `Nout`. |
| type output_scal: | np.array |
| param plunge_deriv: | If `True`, expresses the plunge response in terms of the first derivative, i.e. the rate of change of plunge \(d\dot{h}\). |
| type plunge_deriv: | bool |
| returns: | A `(Nout, Nin, Nk)` array containing the scaled frequency response for the inputs and outputs specified. |
| rtype: | np.array |

See also

The lift coefficient due to pitch and plunging motions is calculated using [`sharpy.utils.analytical.theo_CL_freq_resp()`](index.html#module-sharpy.utils.analytical.theo_CL_freq_resp). In turn, the pitching moment is found using [`sharpy.utils.analytical.theo_CM_freq_resp()`](index.html#module-sharpy.utils.analytical.theo_CM_freq_resp). The response to the continuous sinusoidal gust is calculated using [`sharpy.utils.analytical.sears_CL_freq_resp()`](index.html#module-sharpy.utils.analytical.sears_CL_freq_resp).

###### garrick_drag_pitch[¶](#module-sharpy.utils.analytical.garrick_drag_pitch)

Returns the Garrick solution for the drag coefficient at a specific time. Ref.[1], eq.(9), (10) and (11)

The aerofoil pitching motion is assumed to be:

> \[a(t)=A\sin(\omega t)=A\sin(ks)\]

The \(C_d\) is such that:

> * \(C_d>0\): drag
> * \(C_d<0\): suction

###### garrick_drag_plunge[¶](#module-sharpy.utils.analytical.garrick_drag_plunge)

Returns the Garrick solution for the drag coefficient at a specific time. Ref.[1], eq.(8) (see also eq.(1) and (2)) or Ref[2], eq.(2)

The aerofoil vertical motion is assumed to be:

\[h(t)=-H\cos(\omega t)\]

The \(C_d\) is such that:

> * \(C_d>0\): drag
> * \(C_d<0\): suction

###### nc_derivs[¶](#module-sharpy.utils.analytical.nc_derivs)

Provides the non-circulatory aerodynamic lift and moment coefficient derivatives

Ref. Palacios and Cesnik, Chap 3.
| param x_ea_perc: | position of axis of rotation in percentage of chord (measured from LE) |
| param x_fc_perc: | position of flap axis of rotation in percentage of chord (measured from LE) |

###### qs_derivs[¶](#module-sharpy.utils.analytical.qs_derivs)

Provides the quasi-steady aerodynamic lift and moment coefficient derivatives

Ref. Palacios and Cesnik, Chap 3.

| param x_ea_perc: | position of axis of rotation in percentage of chord (measured from LE) |
| param x_fc_perc: | position of flap axis of rotation in percentage of chord (measured from LE) |

###### sears_CL_freq_resp[¶](#module-sharpy.utils.analytical.sears_CL_freq_resp)

Frequency response of the lift coefficient according to Sears’s solution. Ref. Palacios and Cesnik, Chap.3

###### sears_fun[¶](#module-sharpy.utils.analytical.sears_fun)

Produces the Sears function

###### sears_lift_sin_gust[¶](#module-sharpy.utils.analytical.sears_lift_sin_gust)

Returns the lift coefficient for a sinusoidal gust (see set_gust.sin) as the imaginary part of the CL complex function defined below. The input gust must be the imaginary part of

\[wgust = w0*\exp(1.0j*C*(Ux*S.time[tt] - xcoord) )\]

with:

\[C=2\pi/L\]

and `xcoord=0` at the aerofoil half-chord.

###### theo_CL_freq_resp[¶](#module-sharpy.utils.analytical.theo_CL_freq_resp)

Frequency response of the lift coefficient according to Theodorsen’s theory.

The output is a 3-element array containing the CL frequency response w.r.t. pitch, plunge and flap motion, respectively. Sign conventions are as follows:

> * plunge: positive when moving upward
> * x_ea_perc: position of axis of rotation in percentage of chord (measured from LE)
> * x_fc_perc: position of flap axis of rotation in percentage of chord (measured from LE)

Warning

This function uses different inputs/outputs with respect to theo_lift.

###### theo_CM_freq_resp[¶](#module-sharpy.utils.analytical.theo_CM_freq_resp)

Frequency response of the moment coefficient according to Theodorsen’s theory.
The output is a 3-element array containing the \(C_M\) frequency response w.r.t. pitch, plunge and flap motion, respectively.

###### theo_fun[¶](#module-sharpy.utils.analytical.theo_fun)

Returns the value of Theodorsen’s function at a reduced frequency \(k\).

\[\mathcal{C}(jk) = \frac{H_1^{(2)}(k)}{H_1^{(2)}(k) + jH_0^{(2)}(k)}\]

where \(H_0^{(2)}(k)\) and \(H_1^{(2)}(k)\) are Hankel functions of the second kind.

| param k: | Reduced frequency/frequencies at which to evaluate the function. |
| type k: | np.array |
| returns: | Value of Theodorsen’s function evaluated at the desired reduced frequencies. |
| rtype: | np.array |

###### theo_lift[¶](#module-sharpy.utils.analytical.theo_lift)

Theodorsen’s solution for the lift of an aerofoil undergoing sinusoidal motion.

Time histories are built assuming:

> * `a(t)=+/- A cos(w t) ??? not verified`
> * \(h(t)=-H\cos(w t)\)

| param w: | frequency (rad/sec) of oscillation |
| param A: | amplitude of angle of attack change |
| param H: | amplitude of plunge motion |
| param c: | aerofoil chord |
| param rhoinf: | flow density |
| param uinf: | flow speed |
| param x12: | distance of elastic axis from mid-point of aerofoil (positive if the elastic axis is ahead) |

###### wagner_imp_start[¶](#module-sharpy.utils.analytical.wagner_imp_start)

Lift coefficient resulting from the impulsive-start solution.

##### Controller Utilities[¶](#controller-utilities)

###### PID[¶](#pid)

*class* `sharpy.utils.control_utils.``PID`(*gain_p*, *gain_i*, *gain_d*, *dt*)[[source]](_modules/sharpy/utils/control_utils.html#PID)[¶](#sharpy.utils.control_utils.PID)

Class implementing a classic PID controller

Instance attributes:

:param gain_p: Proportional gain.
:param gain_i: Integral gain.
:param gain_d: Derivative gain.
:param dt: Simulation time step.
The class should be used as:

```
>>> pid = PID(100, 10, 0.1, 0.1)
>>> pid.set_point(target_point)
>>> control = pid(current_point)
```

##### Data Management Structures[¶](#data-management-structures)

Classes for the Aerotimestep and Structuraltimestep, amongst others

###### LinearTimeStepInfo[¶](#lineartimestepinfo)

*class* `sharpy.utils.datastructures.``LinearTimeStepInfo`[[source]](_modules/sharpy/utils/datastructures.html#LinearTimeStepInfo)[¶](#sharpy.utils.datastructures.LinearTimeStepInfo)

Linear timestep info containing the state, input and output variables for a given timestep

##### Documentation Generator[¶](#documentation-generator)

Functions to automatically document the code.

Comments and complaints: <NAME>

###### check_folder_in_ignore[¶](#module-sharpy.utils.docutils.check_folder_in_ignore)

Checks whether a folder is in the `ignore_list`.

| param folder: | Absolute path to folder |
| type folder: | str |
| param ignore_list: | Ignore list |
| type ignore_list: | list |
| returns: | Bool stating whether the file/folder is in the ignore list. |
| rtype: | bool |

###### generate_documentation[¶](#module-sharpy.utils.docutils.generate_documentation)

Main routine that generates the documentation in `./docs/source/includes`

###### output_documentation_module_page[¶](#module-sharpy.utils.docutils.output_documentation_module_page)

Generates the documentation for a package with a single page per module in the desired folder

###### write_file[¶](#module-sharpy.utils.docutils.write_file)

Writes the contents of a python file with one module per page.

Warning

If the function to be written does not have a docstring, no output will be produced and a warning will be given.

| param file: | Absolute path to file |
| type file: | str |

###### write_folder[¶](#module-sharpy.utils.docutils.write_folder)

Creates the documentation for the contents in a folder. It checks that the file or folder is not in the `ignore_list`.
If there is a subfolder in the folder, this gets opened, written and an index file is created. | param folder: | Absolute path to folder | | type folder: | str | | param ignore_list: | | | List with filenames and folders to ignore and skip | | type ignore_list: | | | list | | returns: | Tuple containing the title and body of the docstring found for it to be added to the index of the current folder. | | rtype: | tuple | ##### SHARPy Exception Classes[¶](#sharpy-exception-classes) ###### DocumentationError[¶](#documentationerror) *class* `sharpy.utils.exceptions.``DocumentationError`[[source]](_modules/sharpy/utils/exceptions.html#DocumentationError)[¶](#sharpy.utils.exceptions.DocumentationError) Error in documentation ###### NotConvergedSolver[¶](#notconvergedsolver) *class* `sharpy.utils.exceptions.``NotConvergedSolver`[[source]](_modules/sharpy/utils/exceptions.html#NotConvergedSolver)[¶](#sharpy.utils.exceptions.NotConvergedSolver) To be raised when the solver does not converge. Before this, SHARPy would add a pdb trace, but this causes problems when using SHARPy as a black box. 
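Because `NotConvergedSolver` is raised as an ordinary exception instead of dropping into `pdb`, a driver script can treat non-convergence as regular control flow. The following self-contained sketch illustrates that pattern; the exception class here is a local stand-in for `sharpy.utils.exceptions.NotConvergedSolver`, and `run_case`/`sweep` are hypothetical helpers, not SHARPy API:

```python
# Stand-in for sharpy.utils.exceptions.NotConvergedSolver, for illustration only.
class NotConvergedSolver(Exception):
    """Raised when the solver does not converge."""

def run_case(max_residual, tolerance):
    # Hypothetical driver step: pretend the solver finished with this residual.
    if max_residual > tolerance:
        raise NotConvergedSolver(
            f'residual {max_residual:.2e} above tolerance {tolerance:.2e}')
    return 'converged'

def sweep(residuals, tolerance=1e-6):
    # Black-box usage: record failures per case and carry on with the next one.
    results = {}
    for name, res in residuals.items():
        try:
            results[name] = run_case(res, tolerance)
        except NotConvergedSolver as err:
            results[name] = f'failed: {err}'
    return results
```

Since the exception propagates rather than starting an interactive debugger, batch runs over many cases can log the failures and continue.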
###### NotValidSetting[¶](#notvalidsetting)

*class* `sharpy.utils.exceptions.``NotValidSetting`(*setting*, *variable*, *options*, *value=None*, *message=''*)[[source]](_modules/sharpy/utils/exceptions.html#NotValidSetting)[¶](#sharpy.utils.exceptions.NotValidSetting)

Raised when a user gives a setting an invalid value

##### Generate cases[¶](#generate-cases)

This library provides functions and classes to help in the definition of SHARPy cases

Examples:

> tests in: tests/utils/generate_cases
> examples: test/coupled/multibody/fix_node_velocity_wrtG/test_fix_node_velocity_wrtG
> test/coupled/multibody/fix_node_velocity_wrtA/test_fix_node_velocity_wrtA
> test/coupled/multibody/double_pendulum/test_double_pendulum_geradin
> test/coupled/prescribed/WindTurbine/test_rotor

Notes:

> To use this library: import sharpy.utils.generate_cases as generate_cases

###### AerodynamicInformation[¶](#aerodynamicinformation)

*class* `sharpy.utils.generate_cases.``AerodynamicInformation`[[source]](_modules/sharpy/utils/generate_cases.html#AerodynamicInformation)[¶](#sharpy.utils.generate_cases.AerodynamicInformation)

Aerodynamic information needed to build a case

Note

It should be defined after the StructuralInformation of the case

`assembly_aerodynamics`(**args*)[[source]](_modules/sharpy/utils/generate_cases.html#AerodynamicInformation.assembly_aerodynamics)[¶](#sharpy.utils.generate_cases.AerodynamicInformation.assembly_aerodynamics)

This function concatenates aerodynamic properties to be written in the same h5 file

| Parameters: | ***args** – list of AerodynamicInformation() to be merged into ‘self’ |

`change_airfoils_discretezation`(*airfoils*, *new_num_nodes*)[[source]](_modules/sharpy/utils/generate_cases.html#AerodynamicInformation.change_airfoils_discretezation)[¶](#sharpy.utils.generate_cases.AerodynamicInformation.change_airfoils_discretezation)

Changes the discretization of the matrix of airfoil coordinates

| Parameters: | * **airfoils** (*np.array*) –
Matrix with the x-y coordinates of all the airfoils to be modified
* **new_num_nodes** (*int*) – Number of points that the output coordinates will have |
| Returns: | Matrix with the x-y coordinates of all the airfoils with the new discretization |
| Return type: | new_airfoils (np.array) |

`check_AerodynamicInformation`(*StructuralInformation*)[[source]](_modules/sharpy/utils/generate_cases.html#AerodynamicInformation.check_AerodynamicInformation)[¶](#sharpy.utils.generate_cases.AerodynamicInformation.check_AerodynamicInformation)

Checks some properties of the AerodynamicInformation()

Notes

These conditions have to be met to correctly define a case, but they are not the only ones

`copy`()[[source]](_modules/sharpy/utils/generate_cases.html#AerodynamicInformation.copy)[¶](#sharpy.utils.generate_cases.AerodynamicInformation.copy)

Returns a copy of the object

| Returns: | new object with the same properties |
| Return type: | copied([AerodynamicInformation](index.html#sharpy.utils.generate_cases.AerodynamicInformation)) |

`create_aerodynamics_from_vec`(*StructuralInformation*, *vec_aero_node*, *vec_chord*, *vec_twist*, *vec_sweep*, *vec_surface_m*, *vec_surface_distribution*, *vec_m_distribution*, *vec_elastic_axis*, *vec_airfoil_distribution*, *airfoils*, *user_defined_m_distribution=None*)[[source]](_modules/sharpy/utils/generate_cases.html#AerodynamicInformation.create_aerodynamics_from_vec)[¶](#sharpy.utils.generate_cases.AerodynamicInformation.create_aerodynamics_from_vec)

Defines the whole case from the appropriate variables in vector form (associated to nodes)

| Parameters: | * **StructuralInformation** ([*StructuralInformation*](index.html#sharpy.utils.generate_cases.StructuralInformation)) – Structural information of the case
* **vec_aero_node** (*np.array*) – defines if a node has aerodynamic properties or not
* **vec_chord** (*np.array*) – chord of the nodes
* **vec_twist** (*np.array*) – twist of the nodes
* **vec_sweep** (*np.array*) – sweep of the nodes
*
**vec_surface_m** (*np.array*) – Number of panels in the chord direction
* **vec_surface_distribution** (*np.array*) – Surface at which each element belongs
* **vec_m_distribution** (*np.array*) – distribution of the panels along the chord
* **vec_elastic_axis** (*np.array*) – position of the elastic axis in the chord
* **vec_airfoil_distribution** (*np.array*) – airfoil at each element node
* **airfoils** (*np.array*) – coordinates of the camber lines of the airfoils |

`create_one_uniform_aerodynamics`(*StructuralInformation*, *chord*, *twist*, *sweep*, *num_chord_panels*, *m_distribution*, *elastic_axis*, *num_points_camber*, *airfoil*)[[source]](_modules/sharpy/utils/generate_cases.html#AerodynamicInformation.create_one_uniform_aerodynamics)[¶](#sharpy.utils.generate_cases.AerodynamicInformation.create_one_uniform_aerodynamics)

Defines the whole case from the appropriate variables, constant at every point

| Parameters: | * **StructuralInformation** ([*StructuralInformation*](index.html#sharpy.utils.generate_cases.StructuralInformation)) – Structural information of the case
* **chord** (*float*) – chord
* **twist** (*float*) – twist
* **sweep** (*float*) – sweep
* **num_chord_panels** (*int*) – Number of panels in the chord direction
* **m_distribution** (*str*) – distribution of the panels along the chord
* **elastic_axis** (*float*) – position of the elastic axis in the chord
* **num_points_camber** (*int*) – Number of points to define the camber line
* **airfoils** (*np.array*) – coordinates of the camber lines of the airfoils |

`generate_aero_file`(*route*, *case_name*, *StructuralInformation*)[[source]](_modules/sharpy/utils/generate_cases.html#AerodynamicInformation.generate_aero_file)[¶](#sharpy.utils.generate_cases.AerodynamicInformation.generate_aero_file)

Writes the h5 file with the aerodynamic information

| Parameters: | * **route** (*string*) – path of the case
* **case_name** (*string*) – name of the case |

`generate_full_aerodynamics`(*aero_node*,
*chord*, *twist*, *sweep*, *surface_m*, *surface_distribution*, *m_distribution*, *elastic_axis*, *airfoil_distribution*, *airfoils*)[[source]](_modules/sharpy/utils/generate_cases.html#AerodynamicInformation.generate_full_aerodynamics)[¶](#sharpy.utils.generate_cases.AerodynamicInformation.generate_full_aerodynamics)

Defines the whole case from the appropriate variables

| Parameters: | * **aero_node** (*np.array*) – defines if a node has aerodynamic properties or not
* **chord** (*np.array*) – chord of the elements
* **twist** (*np.array*) – twist of the elements
* **sweep** (*np.array*) – sweep of the elements
* **surface_m** (*np.array*) – Number of panels in the chord direction
* **surface_distribution** (*np.array*) – Surface at which each element belongs
* **m_distribution** (*str*) – distribution of the panels along the chord
* **elastic_axis** (*np.array*) – position of the elastic axis in the chord
* **airfoil_distribution** (*np.array*) – airfoil at each element node
* **airfoils** (*np.array*) – coordinates of the camber lines of the airfoils |

`interpolate_airfoils_camber`(*pure_airfoils_camber*, *r_pure_airfoils*, *r*, *n_points_camber*)[[source]](_modules/sharpy/utils/generate_cases.html#AerodynamicInformation.interpolate_airfoils_camber)[¶](#sharpy.utils.generate_cases.AerodynamicInformation.interpolate_airfoils_camber)

Creates the camber of the airfoil at each node position from the camber of the pure airfoils present in the blade

| Parameters: | * **pure_airfoils_camber** (*np.array*) – xy coordinates of the camber lines of the pure airfoils
* **r_pure_airfoils** (*np.array*) – radial position of the pure airfoils
* **r** (*np.array*) – radial positions to compute the camber lines through linear interpolation |
| Returns: | camber lines at the new radial positions |
| Return type: | airfoils_camber (np.array) |

`interpolate_airfoils_camber_thickness`(*pure_airfoils_camber*, *thickness_pure_airfoils*, *blade_thickness*,
*n_points_camber*)[[source]](_modules/sharpy/utils/generate_cases.html#AerodynamicInformation.interpolate_airfoils_camber_thickness)[¶](#sharpy.utils.generate_cases.AerodynamicInformation.interpolate_airfoils_camber_thickness)

Creates the camber of the airfoil at each node position from the camber of the pure airfoils present in the blade, based on the thickness

| Parameters: | * **pure_airfoils_camber** (*np.array*) – xy coordinates of the camber lines of the pure airfoils
* **thickness_pure_airfoils** (*np.array*) – thickness of the pure airfoils
* **blade_thickness** (*np.array*) – thickness of the blade positions |
| Returns: | camber lines at the new radial positions |
| Return type: | airfoils_camber (np.array) |

`set_to_zero`(*num_node_elem*, *num_node*, *num_elem*, *num_airfoils=1*, *num_surfaces=0*, *num_points_camber=100*)[[source]](_modules/sharpy/utils/generate_cases.html#AerodynamicInformation.set_to_zero)[¶](#sharpy.utils.generate_cases.AerodynamicInformation.set_to_zero)

Sets all the variables to zero

| Parameters: | * **num_node_elem** (*int*) – number of nodes per element
* **num_node** (*int*) – number of nodes
* **num_elem** (*int*) – number of elements
* **num_airfoils** (*int*) – number of different airfoils
* **num_surfaces** (*int*) – number of aerodynamic surfaces
* **num_points_camber** (*int*) – number of points to define the camber line of the airfoil |

###### AeroelasticInformation[¶](#aeroelasticinformation)

*class* `sharpy.utils.generate_cases.``AeroelasticInformation`[[source]](_modules/sharpy/utils/generate_cases.html#AeroelasticInformation)[¶](#sharpy.utils.generate_cases.AeroelasticInformation)

Structural and aerodynamic information needed to build a case

`assembly`(**args*)[[source]](_modules/sharpy/utils/generate_cases.html#AeroelasticInformation.assembly)[¶](#sharpy.utils.generate_cases.AeroelasticInformation.assembly)

This function concatenates structural and aerodynamic properties to be written in the same h5 file

| Parameters:
| ***args** – list of AeroelasticInformation() to be merged into ‘self’ |

`copy`()[[source]](_modules/sharpy/utils/generate_cases.html#AeroelasticInformation.copy)[¶](#sharpy.utils.generate_cases.AeroelasticInformation.copy)

Returns a copy of the object

| Returns: | new object with the same properties |
| Return type: | copied([AeroelasticInformation](index.html#sharpy.utils.generate_cases.AeroelasticInformation)) |

`generate`(*StructuralInformation*, *AerodynamicInformation*)[[source]](_modules/sharpy/utils/generate_cases.html#AeroelasticInformation.generate)[¶](#sharpy.utils.generate_cases.AeroelasticInformation.generate)

Generates an object from the structural and the aerodynamic information

| Parameters: | * **StructuralInformation** ([*StructuralInformation*](index.html#sharpy.utils.generate_cases.StructuralInformation)) – structural information
* **AerodynamicInformation** ([*AerodynamicInformation*](index.html#sharpy.utils.generate_cases.AerodynamicInformation)) – aerodynamic information |

`generate_h5_files`(*route*, *case_name*)[[source]](_modules/sharpy/utils/generate_cases.html#AeroelasticInformation.generate_h5_files)[¶](#sharpy.utils.generate_cases.AeroelasticInformation.generate_h5_files)

Writes the structural and aerodynamic h5 files

`remove_duplicated_points`(*tol*)[[source]](_modules/sharpy/utils/generate_cases.html#AeroelasticInformation.remove_duplicated_points)[¶](#sharpy.utils.generate_cases.AeroelasticInformation.remove_duplicated_points)

Removes the points that are closer than ‘tol’ and modifies the aeroelastic information accordingly

| Parameters: | **tol** (*float*) – tolerance.
Maximum distance between nodes to be merged |

Notes

This function will not work if an element or an aerodynamic surface is completely eliminated.

This function only checks geometrical proximity, not aeroelastic properties, as a merging criterion.

###### SimulationInformation[¶](#simulationinformation)

*class* `sharpy.utils.generate_cases.``SimulationInformation`[[source]](_modules/sharpy/utils/generate_cases.html#SimulationInformation)[¶](#sharpy.utils.generate_cases.SimulationInformation)

Simulation information needed to build a case

`define_num_steps`(*num_steps*)[[source]](_modules/sharpy/utils/generate_cases.html#SimulationInformation.define_num_steps)[¶](#sharpy.utils.generate_cases.SimulationInformation.define_num_steps)

Sets the number of steps in the simulation for all the solvers

| Parameters: | **num_steps** (*int*) – number of steps |

`define_uinf`(*unit_vector*, *norm*)[[source]](_modules/sharpy/utils/generate_cases.html#SimulationInformation.define_uinf)[¶](#sharpy.utils.generate_cases.SimulationInformation.define_uinf)

Sets the inflow velocity in the simulation for all the solvers

| Parameters: | * **unit_vector** (*np.array*) – direction of the inflow velocity
* **norm** (*float*) – Norm of the inflow velocity |

`generate_dyn_file`(*num_steps*)[[source]](_modules/sharpy/utils/generate_cases.html#SimulationInformation.generate_dyn_file)[¶](#sharpy.utils.generate_cases.SimulationInformation.generate_dyn_file)

Generates the dynamic file

| Parameters: | * **route** (*string*) – path of the case
* **case_name** (*string*) – name of the case
* **num_steps** (*int*) – number of steps |

`generate_solver_file`()[[source]](_modules/sharpy/utils/generate_cases.html#SimulationInformation.generate_solver_file)[¶](#sharpy.utils.generate_cases.SimulationInformation.generate_solver_file)

Generates the solver file

| Parameters: | * **route** (*string*) – path of the case
* **case_name** (*string*) – name of the case |
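The geometric-proximity merging criterion used by `remove_duplicated_points` above can be sketched with a simplified standalone helper. This version only merges coordinates and builds an old-to-new index map; the real method additionally updates the rest of the aeroelastic information, and `merge_close_nodes` is an illustrative name, not SHARPy API:

```python
import numpy as np

def merge_close_nodes(coords, tol):
    # Greedily keep the first node of every cluster of nodes closer than tol.
    # Returns the reduced coordinate array and a map old index -> new index.
    kept = []          # indices (into coords) of the retained nodes
    index_map = {}
    for i, point in enumerate(coords):
        for new_idx, j in enumerate(kept):
            if np.linalg.norm(point - coords[j]) < tol:
                index_map[i] = new_idx   # merged into an already-kept node
                break
        else:
            index_map[i] = len(kept)     # no neighbour within tol: keep it
            kept.append(i)
    return coords[kept], index_map
```

As in the real method, only geometric proximity is checked: two coincident nodes are merged even if their aeroelastic properties differ, which is why the docstring warns about it.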
`set_default_values`()[[source]](_modules/sharpy/utils/generate_cases.html#SimulationInformation.set_default_values)[¶](#sharpy.utils.generate_cases.SimulationInformation.set_default_values)

Sets the default values for all the solvers

`set_variable_all_dicts`(*variable*, *value*)[[source]](_modules/sharpy/utils/generate_cases.html#SimulationInformation.set_variable_all_dicts)[¶](#sharpy.utils.generate_cases.SimulationInformation.set_variable_all_dicts)

Defines the value of a variable in all the available solvers

| Parameters: | * **variable** (*str*) – variable name
* **value** – value to assign to the variable |

###### StructuralInformation[¶](#structuralinformation)

*class* `sharpy.utils.generate_cases.``StructuralInformation`[[source]](_modules/sharpy/utils/generate_cases.html#StructuralInformation)[¶](#sharpy.utils.generate_cases.StructuralInformation)

Structural information needed to build a case

`assembly_structures`(**args*)[[source]](_modules/sharpy/utils/generate_cases.html#StructuralInformation.assembly_structures)[¶](#sharpy.utils.generate_cases.StructuralInformation.assembly_structures)

This function concatenates structures to be written in the same h5 file

| Parameters: | ***args** – list of StructuralInformation() to be merged into ‘self’ |

Notes

No nodes are merged between the concatenated structures (even if nodes are defined at the same coordinates)

`check_StructuralInformation`()[[source]](_modules/sharpy/utils/generate_cases.html#StructuralInformation.check_StructuralInformation)[¶](#sharpy.utils.generate_cases.StructuralInformation.check_StructuralInformation)

Checks some properties of the StructuralInformation()

Notes

These conditions have to be met to correctly define a case, but they are not the only ones

`compute_basic_num_elem`()[[source]](_modules/sharpy/utils/generate_cases.html#StructuralInformation.compute_basic_num_elem)[¶](#sharpy.utils.generate_cases.StructuralInformation.compute_basic_num_elem)

It computes the number of elements when no nodes are shared between beams
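The counting done by `compute_basic_num_elem` (and its companion `compute_basic_num_node`) follows standard beam-FEM bookkeeping. A sketch of the relation, restated here under the assumption that consecutive elements within a beam share one end node and that beams share none:

```python
def basic_num_node(num_elem, num_node_elem=3):
    # A chain of num_elem elements with num_node_elem nodes each, where
    # neighbouring elements share a single end node, has this many nodes.
    return num_elem * (num_node_elem - 1) + 1

def basic_num_elem(num_node, num_node_elem=3):
    # Inverse relation: elements needed to cover num_node nodes in one beam.
    return (num_node - 1) // (num_node_elem - 1)
```

For the default 3-noded elements, two elements span five nodes; with several beams the per-beam counts simply add up, since no nodes are shared between beams.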
`compute_basic_num_node`()[[source]](_modules/sharpy/utils/generate_cases.html#StructuralInformation.compute_basic_num_node)[¶](#sharpy.utils.generate_cases.StructuralInformation.compute_basic_num_node) It computes the number of nodes when no nodes are shared between beams `copy`()[[source]](_modules/sharpy/utils/generate_cases.html#StructuralInformation.copy)[¶](#sharpy.utils.generate_cases.StructuralInformation.copy) Returns a copy of the object | Returns: | new object with the same properties | | Return type: | copied([StructuralInformation](index.html#sharpy.utils.generate_cases.StructuralInformation)) | `create_frame_of_reference_delta`(*y_BFoR='y_AFoR'*)[[source]](_modules/sharpy/utils/generate_cases.html#StructuralInformation.create_frame_of_reference_delta)[¶](#sharpy.utils.generate_cases.StructuralInformation.create_frame_of_reference_delta) Define the coordinates of the yB axis in the AFoR | Parameters: | **y_BFoR** (*string*) – Direction of the yB axis | `create_mass_db_from_vector`(*vec_mass_per_unit_length*, *vec_mass_iner_x*, *vec_mass_iner_y*, *vec_mass_iner_z*, *vec_pos_cg_B*, *vec_mass_iner_yz=None*)[[source]](_modules/sharpy/utils/generate_cases.html#StructuralInformation.create_mass_db_from_vector)[¶](#sharpy.utils.generate_cases.StructuralInformation.create_mass_db_from_vector) Create the mass matrices from the vectors of properties | Parameters: | * **vec_mass_per_unit_length** (*np.array*) – masses per unit length * **vec_mass_iner_x** (*np.array*) – inertias around the x axis * **vec_mass_iner_y** (*np.array*) – inertias around the y axis * **vec_mass_iner_z** (*np.array*) – inertias around the z axis * **vec_pos_cg_B** (*np.array*) – position of the masses * **vec_mass_iner_yz** (*np.array*) – inertias around the yz axis | `create_simple_connectivities`()[[source]](_modules/sharpy/utils/generate_cases.html#StructuralInformation.create_simple_connectivities)[¶](#sharpy.utils.generate_cases.StructuralInformation.create_simple_connectivities) 
Create the matrix of connectivities for a single beam with the nodes ordered in increasing xB direction `create_stiff_db_from_vector`(*vec_EA*, *vec_GAy*, *vec_GAz*, *vec_GJ*, *vec_EIy*, *vec_EIz*, *vec_EIyz=None*)[[source]](_modules/sharpy/utils/generate_cases.html#StructuralInformation.create_stiff_db_from_vector)[¶](#sharpy.utils.generate_cases.StructuralInformation.create_stiff_db_from_vector) Create the stiffness matrices from the vectors of properties | Parameters: | * **vec_EA** (*np.array*) – Axial stiffness * **vec_GAy** (*np.array*) – Shear stiffness in the y direction * **vec_GAz** (*np.array*) – Shear stiffness in the z direction * **vec_GJ** (*np.array*) – Torsional stiffness * **vec_EIy** (*np.array*) – Bending stiffness in the y direction * **vec_EIz** (*np.array*) – Bending stiffness in the z direction * **vec_EIyz** (*np.array*) – Bending stiffness in the yz direction | `generate_fem_file`(*route*, *case_name*)[[source]](_modules/sharpy/utils/generate_cases.html#StructuralInformation.generate_fem_file)[¶](#sharpy.utils.generate_cases.StructuralInformation.generate_fem_file) Writes the h5 file with the structural information | Parameters: | * **route** (*string*) – path of the case * **case_name** (*string*) – name of the case | `generate_full_structure`(*num_node_elem*, *num_node*, *num_elem*, *coordinates*, *connectivities*, *elem_stiffness*, *stiffness_db*, *elem_mass*, *mass_db*, *frame_of_reference_delta*, *structural_twist*, *boundary_conditions*, *beam_number*, *app_forces*, *lumped_mass_nodes=None*, *lumped_mass=None*, *lumped_mass_inertia=None*, *lumped_mass_position=None*)[[source]](_modules/sharpy/utils/generate_cases.html#StructuralInformation.generate_full_structure)[¶](#sharpy.utils.generate_cases.StructuralInformation.generate_full_structure) Defines the whole case from the appropriate variables | Parameters: | * **num_node_elem** (*int*) – number of nodes per element * **num_node** (*int*) – number of nodes * **num_elem** (*int*) –
number of elements * **coordinates** (*np.array*) – nodes coordinates * **connectivities** (*np.array*) – element connectivities * **elem_stiffness** (*np.array*) – element stiffness index * **stiffness_db** (*np.array*) – Stiffness matrices * **elem_mass** (*np.array*) – element mass index * **mass_db** (*np.array*) – Mass matrices * **frame_of_reference_delta** (*np.array*) – element direction of the y axis in the BFoR wrt the AFoR * **structural_twist** (*np.array*) – element based twist * **boundary_conditions** (*np.array*) – node boundary condition * **beam_number** (*np.array*) – node beam number * **app_forces** (*np.array*) – steady applied follower forces at the nodes * **lumped_mass_nodes** (*np.array*) – nodes with lumped masses * **lumped_mass** (*np.array*) – value of the lumped masses * **lumped_mass_inertia** (*np.array*) – inertia of the lumped masses * **lumped_mass_position** (*np.array*) – position of the lumped masses | `generate_uniform_beam`(*node_pos*, *mass_per_unit_length*, *mass_iner_x*, *mass_iner_y*, *mass_iner_z*, *pos_cg_B*, *EA*, *GAy*, *GAz*, *GJ*, *EIy*, *EIz*, *num_node_elem=3*, *y_BFoR='y_AFoR'*, *num_lumped_mass=0*)[[source]](_modules/sharpy/utils/generate_cases.html#StructuralInformation.generate_uniform_beam)[¶](#sharpy.utils.generate_cases.StructuralInformation.generate_uniform_beam) Generates the input data for SHARPy of a uniform beam | Parameters: | * **node_pos** (*np.array*) – coordinates of the nodes * **mass_per_unit_length** (*float*) – mass per unit length * **mass_iner_x** (*float*) – Inertia of the mass in the x direction * **mass_iner_y** (*float*) – Inertia of the mass in the y direction * **mass_iner_z** (*float*) – Inertia of the mass in the z direction * **pos_cg_B** (*np.array*) – position of the masses * **EA** (*np.array*) – Axial stiffness * **GAy** (*np.array*) – Shear stiffness in the y direction * **GAz** (*np.array*) – Shear stiffness in the z direction * **GJ** (*np.array*) – Torsional stiffness * 
**EIy** (*np.array*) – Bending stiffness in the y direction * **EIz** (*np.array*) – Bending stiffness in the z direction * **num_node_elem** (*int*) – number of nodes per element * **y_BFoR** (*str*) – orientation of the yB axis * **num_lumped_mass** (*int*) – number of lumped masses | `generate_uniform_sym_beam`(*node_pos*, *mass_per_unit_length*, *mass_iner*, *EA*, *GA*, *GJ*, *EI*, *num_node_elem=3*, *y_BFoR='y_AFoR'*, *num_lumped_mass=0*)[[source]](_modules/sharpy/utils/generate_cases.html#StructuralInformation.generate_uniform_sym_beam)[¶](#sharpy.utils.generate_cases.StructuralInformation.generate_uniform_sym_beam) Generates the input data for SHARPy of a uniform symmetric beam | Parameters: | * **node_pos** (*np.array*) – coordinates of the nodes * **mass_per_unit_length** (*float*) – mass per unit length * **mass_iner** (*float*) – Inertia of the mass * **EA** (*float*) – Axial stiffness * **GA** (*float*) – Shear stiffness * **GJ** (*float*) – Torsional stiffness * **EI** (*float*) – Bending stiffness * **num_node_elem** (*int*) – number of nodes per element * **y_BFoR** (*str*) – orientation of the yB axis * **num_lumped_mass** (*int*) – number of lumped masses | `rotate_around_origin`(*axis*, *angle*)[[source]](_modules/sharpy/utils/generate_cases.html#StructuralInformation.rotate_around_origin)[¶](#sharpy.utils.generate_cases.StructuralInformation.rotate_around_origin) Rotates a structure | Parameters: | * **axis** (*np.array*) – axis of rotation * **angle** (*float*) – angle of rotation in radians | `set_to_zero`(*num_node_elem*, *num_node*, *num_elem*, *num_mass_db=None*, *num_stiffness_db=None*, *num_lumped_mass=0*)[[source]](_modules/sharpy/utils/generate_cases.html#StructuralInformation.set_to_zero)[¶](#sharpy.utils.generate_cases.StructuralInformation.set_to_zero) Sets to zero all the variables | Parameters: | * **num_node_elem** (*int*) – number of nodes per element * **num_node** (*int*) – number of nodes * **num_elem** (*int*) – number of 
elements * **num_mass_db** (*int*) – number of different mass matrices in the case * **num_stiffness_db** (*int*) – number of different stiffness matrices in the case * **num_lumped_mass** (*int*) – number of lumped masses in the case | ###### clean_test_files[¶](#module-sharpy.utils.generate_cases.clean_test_files) clean_test_files Removes the previous h5 files | param route: | path of the case | | type route: | string | | param case_name: | | | name of the case | | type case_name: | string | ###### from_node_array_to_elem_matrix[¶](#module-sharpy.utils.generate_cases.from_node_array_to_elem_matrix) from_node_array_to_elem_matrix Same as the previous function but with an array as input ###### from_node_list_to_elem_matrix[¶](#module-sharpy.utils.generate_cases.from_node_list_to_elem_matrix) from_node_list_to_elem_matrix Convert a list of properties associated to nodes to a matrix of properties associated to elements based on the connectivities. The ‘ith’ value of the ‘node_list’ array stores the property of the ‘ith’ node.
The ‘jth’ ‘kth’ value of the ‘elem_matrix’ array stores the property of the ‘kth’ node within the ‘jth’ element | param node_list: | | --- | | | Properties of the nodes | | type node_list: | np.array | | param connectivities: | | | Connectivities between the nodes to form elements | | type connectivities: | | | np.array | | returns: | Properties of the elements | | rtype: | elem_matrix (np.array) | ###### get_airfoil_camber[¶](#module-sharpy.utils.generate_cases.get_airfoil_camber) get_airfoil_camber Define the camber of an airfoil based on its coordinates | param x: | x coordinates of the airfoil surface | | type x: | np.array | | param y: | y coordinates of the airfoil surface | | type y: | np.array | | param n_points_camber: | | | number of points to define the camber line | | type n_points_camber: | | | int | | returns: | x coordinates of the camber line camber_y (np.array): y coordinates of the camber line | | rtype: | camber_x (np.array) | Notes The x and y vectors are expected in XFOIL format: TE - suction side - LE - pressure side - TE ###### get_aoacl0_from_camber[¶](#module-sharpy.utils.generate_cases.get_aoacl0_from_camber) This function provides the angle of attack of zero lift for a thin airfoil whose camber line is defined by ‘x’ and ‘y’ coordinates. Check Theory of wing sections, Abbott, pg 69 ###### get_factor_geometric_progression[¶](#module-sharpy.utils.generate_cases.get_factor_geometric_progression) This function provides the factor in a geometric series whose first element is ‘a0’, has ‘n’ points and the sum of the spacings is ‘Sn_target’ approximately. \[\sum_{k=1}^n a_0 r^{k-1} = \frac{a_0 (1 - r^n)}{1 - r}\] ###### get_mu0_from_camber[¶](#module-sharpy.utils.generate_cases.get_mu0_from_camber) This function provides the constant \(\mu_0\) for a thin airfoil whose camber line is defined by ‘x’ and ‘y’ coordinates. Check Theory of wing sections, Abbott,
pg 69 ###### read_column_sheet_type01[¶](#module-sharpy.utils.generate_cases.read_column_sheet_type01) read_column_sheet_type01 This function reads a column from an Excel file with the following format: > * First row: column_name > * Second row: units (not read, not checked) > * Third row: type of data (see below) | param excel_file_name: | | --- | | | File name | | type excel_file_name: | | | string | | param excel_sheet: | | | Name of the sheet inside the Excel file | | type excel_sheet: | | | string | | param column_name: | | | Name of the column | | type column_name: | | | string | | returns: | Data in the Excel file according to the type of data defined in the third row | | rtype: | var | ##### Generator Interface[¶](#generator-interface) ###### output_documentation[¶](#module-sharpy.utils.generator_interface.output_documentation) Creates the `.rst` files for the generators that have a docstring such that they can be parsed by Sphinx | param route: | Path to folder where generator files are to be created. | | type route: | str | ##### Airfoil Geometry Utils[¶](#airfoil-geometry-utils) ###### generate_naca_camber[¶](#module-sharpy.utils.geo_utils.generate_naca_camber) Defines the x and y coordinates of a 4-digit NACA profile’s camber line (i.e. no thickness). The NACA 4-series airfoils follow the nomenclature: NACA MPTT where: * M indicates the maximum camber \(M = 100m\) * P indicates the position of the maximum camber \(P=10p\) * TT indicates the thickness to chord ratio \(TT=(t/c)*100\) | param M: | maximum camber times 100 (i.e. the first of the 4 digits) | | type M: | float | | param P: | position of the maximum camber times 10 (i.e. the second of the 4 digits) | | type P: | float | | returns: | `x` and `y` coordinates of the chosen airfoil | | rtype: | (x_vec,y_vec) | Example The NACA2400 airfoil would have 2% camber with the maximum at 40% of the chord and 0 thickness.
To plot the camber line one would use this function as: > `x_vec, y_vec = generate_naca_camber(M = 2, P = 4)` ###### interpolate_naca_camber[¶](#module-sharpy.utils.geo_utils.interpolate_naca_camber) Interpolate aerofoil camber at non-dimensional coordinate eta in (0,1), where (M00,P00) and (M01,P01) define the camber properties at eta=0 and eta=1 respectively. Notes For two surfaces, eta can be in (-1,1). In this case, the root is eta=0 and the tips are at eta=±1. ##### H5 File Management Utilities[¶](#h5-file-management-utilities) Set of utilities for opening/reading files ###### add_array_to_grp[¶](#module-sharpy.utils.h5utils.add_array_to_grp) Add numpy array (data) as dataset ‘name’ to the group grp. If compress is True, 64-bit float arrays are converted to 32-bit ###### add_as_grp[¶](#module-sharpy.utils.h5utils.add_as_grp) Given a class, dictionary, list or tuple instance ‘obj’, the routine adds it as a sub-group of name grpname to the parent group grpParent. An attribute _read_as, specifying the type of obj, is added to the group so as to allow the h5 file to be read correctly. Usage and Remarks: * if obj contains dictionaries, lists or tuples, these are automatically saved * if a list only contains scalars or arrays of the same dimension, it will be saved as a numpy array * if obj contains classes, only those that are instances of the classes specified in ClassesToSave will be saved * If grpParent already contains a sub-group with name grpname, this will not be overwritten. However, pre-existing attributes of the sub-group will be overwritten if obj contains attributes with the same names. * attributes belonging to SkipAttr will not be saved - This functionality needs improving * if compress_float is True, numpy arrays will be saved in single precision. ###### check_file_exists[¶](#module-sharpy.utils.h5utils.check_file_exists) Checks if the file exists and throws a FileNotFoundError exception that includes the route to the non-existing file.
| param file_name: | | --- | | | path to the HDF5 file | | type file_name: | str | | returns: | if the file does not exist, an error is raised with path to the non-existent file | | rtype: | FileNotFoundError | ###### read_group[¶](#module-sharpy.utils.h5utils.read_group) Read an hdf5 group ###### readh5[¶](#module-sharpy.utils.h5utils.readh5) Read the HDF5 file ‘filename’ into a class. Groups within the hdf5 file are by default loaded as sub classes, unless they include a _read_as attribute (see sharpy.postproc.savedata). In this case, groups can be loaded as classes, dictionaries, lists or tuples. filename: string to file location GroupName = string or list of strings. Default is None: if given, allows reading a specific group of the h5 file. Warning Groups that need to be read as lists and tuples are assumed to conform to the format used in sharpy.postproc.savedata ###### save_list_as_array[¶](#module-sharpy.utils.h5utils.save_list_as_array) Works for both lists and tuples. Returns True if the saving was successful. ###### saveh5[¶](#module-sharpy.utils.h5utils.saveh5) Creates h5filename and saves all the classes specified in class_inst Args savedir: target directory h5filename: file name class_inst: a number of classes to save permission=[‘a’,’w’]: append or overwrite, according to h5py.File ClassesToSave: if the classes in class_inst contain sub-classes, these will be saved only if instances of the classes in this list ##### Modelling Utilities[¶](#modelling-utilities) Modelling Utilities ###### mass_matrix_generator[¶](#module-sharpy.utils.model_utils.mass_matrix_generator) This function takes the mass, position of the center of gravity wrt the elastic axis and the inertia matrix J (3x3) and returns the complete 6x6 mass matrix.
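The construction performed by `mass_matrix_generator` can be sketched with the standard rigid-body expression. This is a hedged illustration, not the library code; in particular, the sign convention of the coupling blocks is an assumption:

```python
import numpy as np

def skew(v):
    # Skew-symmetric matrix such that skew(v) @ u == np.cross(v, u)
    return np.array([[0., -v[2], v[1]],
                     [v[2], 0., -v[0]],
                     [-v[1], v[0], 0.]])

def mass_matrix(m, pos_cg, J):
    # 6x6 mass matrix: translational block m*I, coupling blocks built from
    # the centre of gravity offset wrt the elastic axis, and the 3x3 inertia J.
    pos_cg = np.asarray(pos_cg, dtype=float)
    M = np.zeros((6, 6))
    M[:3, :3] = m * np.eye(3)
    M[:3, 3:] = -m * skew(pos_cg)
    M[3:, :3] = m * skew(pos_cg)
    M[3:, 3:] = J
    return M

M = mass_matrix(2.0, [0., 0.1, 0.], np.eye(3))
```

Because `skew(v)` is antisymmetric, the resulting 6x6 matrix is symmetric, which is a quick sanity check on any mass matrix you build.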
##### Multibody library[¶](#multibody-library) Multibody library: library used to manipulate multibody systems. To use this library: import sharpy.utils.multibody as mb ###### disp2state[¶](#module-sharpy.utils.multibody.disp2state) disp2state Fills the vector of states according to the displacements information | param MB_beam: | each entry represents a body | | type MB_beam: | list of beam | | param MB_tstep: | each entry represents a body | | type MB_tstep: | list of StructTimeStepInfo | | param q: | Vector of states | | type q: | numpy array | | param dqdt: | Time derivatives of states | | type dqdt: | numpy array | | param dqddt: | Second time derivatives of states | | type dqddt: | numpy array | ###### merge_multibody[¶](#module-sharpy.utils.multibody.merge_multibody) merge_multibody This function merges a series of bodies into a multibody system at a certain time step | param MB_beam: | each entry represents a body | | type MB_beam: | list of beam | | param MB_tstep: | each entry represents a body | | type MB_tstep: | list of StructTimeStepInfo | | param beam: | structural information of the multibody system | | type beam: | beam | | param tstep: | timestep information of the multibody system | | type tstep: | StructTimeStepInfo | | param mb_data_dict: | | | Dictionary including the multibody information | | param dt: | time step | | type dt: | int | | returns: | structural information of the multibody system tstep (StructTimeStepInfo): timestep information of the multibody system | | rtype: | beam (beam) | ###### split_multibody[¶](#module-sharpy.utils.multibody.split_multibody) split_multibody This function splits a structure at a certain time step into its different bodies | param beam: | structural information of the multibody system | | type beam: | beam | | param tstep: | timestep information of the multibody system | |
type tstep: | StructTimeStepInfo | | param mb_data_dict: | | | Dictionary including the multibody information | | returns: | each entry represents a body MB_tstep (list of StructTimeStepInfo): each entry represents a body | | rtype: | MB_beam (list of beam) | ###### state2disp[¶](#module-sharpy.utils.multibody.state2disp) state2disp Recovers the displacements from the states | param MB_beam: | each entry represents a body | | type MB_beam: | list of beam | | param MB_tstep: | each entry represents a body | | type MB_tstep: | list of StructTimeStepInfo | | param q: | Vector of states | | type q: | numpy array | | param dqdt: | Time derivatives of states | | type dqdt: | numpy array | | param dqddt: | Second time derivatives of states | | type dqddt: | numpy array | ###### update_mb_dB_before_merge[¶](#module-sharpy.utils.multibody.update_mb_dB_before_merge) update_mb_db_before_merge Updates the FoR information database before merging the bodies | param tstep: | timestep information of the multibody system | | type tstep: | StructTimeStepInfo | | param MB_tstep: | each entry represents a body | | type MB_tstep: | list of StructTimeStepInfo | ###### update_mb_db_before_split[¶](#module-sharpy.utils.multibody.update_mb_db_before_split) update_mb_db_before_split Updates the FoR information database before splitting the system | param tstep: | timestep information of the multibody system | | type tstep: | StructTimeStepInfo | Notes At this point, this function does nothing, but we might need it at some point ##### Plotting utilities[¶](#plotting-utilities) ###### set_axes_equal[¶](#module-sharpy.utils.plotutils.set_axes_equal) Make axes of a 3D plot have equal scale so that spheres appear as spheres, cubes as cubes, etc. This is one possible solution to Matplotlib’s ax.set_aspect(‘equal’) and ax.axis(‘equal’) not working for 3D.
Input ax: a matplotlib axis, e.g., as output from plt.gca(). ##### Settings Generator Utilities[¶](#settings-generator-utilities) Settings Generator Utilities ###### SettingsTable[¶](#settingstable) *class* `sharpy.utils.settings.``SettingsTable`[[source]](_modules/sharpy/utils/settings.html#SettingsTable)[¶](#sharpy.utils.settings.SettingsTable) Generates the documentation’s setting table at runtime. Sphinx is our chosen documentation manager and takes docstrings in reStructuredText format. Given that the SHARPy solvers contain several settings, this class produces a table in reStructuredText format with the solver’s settings and adds it to the solver’s docstring. This table will then be printed alongside the remaining docstrings. To generate the table, parse the setting’s description to a solver dictionary named `settings_description`, in a similar fashion to what is done with `settings_types` and `settings_default`. If no description is given it will be left blank. Then, add at the end of the solver’s class declaration method an instance of the `SettingsTable` class and a call to the `SettingsTable.generate()` method. Examples The end of the solver’s class declaration should contain ``` # Generate documentation table settings_table = settings.SettingsTable() __doc__ += settings_table.generate(settings_types, settings_default, settings_description) ``` to generate the settings table. `generate`(*settings_types*, *settings_default*, *settings_description*, *settings_options={}*, *header_line=None*)[[source]](_modules/sharpy/utils/settings.html#SettingsTable.generate)[¶](#sharpy.utils.settings.SettingsTable.generate) Returns a rst-format table with the settings’ names, types, description and default values | Parameters: | * **settings_types** (*dict*) – Setting types. * **settings_default** (*dict*) – Settings default value. * **settings_description** (*dict*) – Setting description. 
* **header_line** (*str*) – Header line description (optional) | | Returns: | .rst formatted string with a table containing the settings’ information. | | Return type: | str | ###### check_settings_in_options[¶](#module-sharpy.utils.settings.check_settings_in_options) Checks that settings of type `str` or `int` with allowable options are indeed valid. | param settings: | Dictionary of processed settings | | type settings: | dict | | param settings_types: | | | Dictionary of settings types | | type settings_types: | | | dict | | param settings_options: | | | Dictionary of options (may be empty) | | type settings_options: | | | dict | | raises: | `exception.NotValidSetting` – if the setting is not allowed. | ###### load_config_file[¶](#module-sharpy.utils.settings.load_config_file) This function reads the flight condition and solver input files. | param file_name: | | --- | | | contains the path and file name of the file to be read by the `configparser` reader. | | type file_name: | str | | returns: | a `ConfigParser` object that behaves like a dictionary | | rtype: | config (dict) | ### `SHARPy` Test Cases[¶](#sharpy-test-cases) The following test cases are provided as a tutorial and introduction to `SHARPy` as well as for code validation purposes. * Geradin and <NAME> - See [Installation](https://ic-sharpy.readthedocs.io/en/dev_doc/content/installation.html#running-and-modifiying-a-test-case) and see the test case in `./sharpy/tests/xbeam/` ### A Short Debugging Guide[¶](#a-short-debugging-guide) We have put together a list of common traps you may fall into; hopefully you will find the tools here to get yourself out of them! * Did you forget conda activate sharpy_env and source bin/sharpy_vars.sh? > + If you do in the terminal: which sharpy, do you get the one you want? > + If you do which python, does the result point to anaconda3/envs/sharpy_env/bin (or similar)? > * Wrong input (inconsistent connectivities, mass = 0…) > + Sometimes not easy to detect.
For the structural model, run BeamLoader and BeamPlot with no structural solver > in between. Go over the structure in Paraview. Check the fem.h5 file with HDFView. > + Remember that connectivities are ordered as \([0, 2, 1]\) (the central node goes last). > + Make sure the num_elem and num_node variables are actually your correct number of elements and nodes. > * Not running the actual case you want to. > + Clean up the folder and regenerate the case > * Not running the SHARPy version you want. > + Check at the beginning of the execution the path to the SHARPy folder. > * Not running the correct branch of the code. > + You probably want to use develop. Again, check the first few lines of SHARPy output. > * Very different (I’m talking orders of magnitude) stiffnesses between nodes or directions? * Maybe the UVLM requires a smaller vortex core cutoff (only for linear UVLM simulations, as the nonlinear uses another vortex core model). * Newmark damping is not enough for this case? * Do you have an element with almost 0 mass or inertia? * Are your mass matrices consistent? Check that \(I_{xx} = I_{yy} + I_{zz}\). * Have a look at the \(\dot{\Gamma}\) filtering and numerical parameters in the settings of StepUvlm and DynamicCoupled. * Add more relaxation to the StaticCoupled or DynamicCoupled solvers. * The code has a bug (depending on where, it may be likely). > + Go over the rest of the list. Plot the case in paraview. Go over the rest of the list again. Prepare the simplest > example that reproduces the problem and raise an issue. > * The code diverges because it has to (physically unstable behaviour) + Then don’t complain * Your model still doesn’t work and you don’t know why. + import pdb; pdb.set_trace() and patience * If nothing else works… get a rubber duck (or a very very patient good friend) and go over every step If your model doesn’t do what it is supposed to do: * Check for symmetric response where the model is symmetric.
> + If it is not, run the beam solver first and make sure your properties are correct. Make sure the matrices for mass > and stiffness are rotated if they need to be (remember the Material FoR definition and the for_delta?) > + Now run the aerodynamic solver only and double check that the forces are symmetric. > + Make sure your tolerances are low enough so that at least 4 FSI iterations are performed in StaticCoupled or > DynamicCoupled. > * Make sure your inputs are correct. For example: a dynamic case can be run with \(u_\infty = 0\) and the plane moving forwards, or \(u_\infty\) set to whatever value and the plane velocity = 0. It is very easy to mix both, and end up with double the effective incoming speed (or none). * Run simple stuff before coupling it. For example, if your wing tip deflections don’t match what you’d expect, calculate the deflection under a small tip force (not too small, make sure the deflection is > 1% of the length!) by hand, and compare. * It is more difficult to do the same with the UVLM, as you need a VERY VERY high aspect ratio to get close to the 2D potential solutions. You are going to have to take my word for it: the UVLM works. * But check the aero grid geometry in Paraview, including chord lengths and angles. Citing SHARPy[¶](#citing-sharpy) --- SHARPy has been published in the Journal of Open Source Software (JOSS) and the relevant paper can be found [here](https://joss.theoj.org/papers/10.21105/joss.01885). If you are using SHARPy for your work, please remember to cite it using the paper in JOSS as: > del Carre et al., (2019). SHARPy: A dynamic aeroelastic simulation toolbox for very flexible aircraft and wind > turbines.
Journal of Open Source Software, 4(44), 1885, <https://doi.org/10.21105/joss.01885>. The bibtex entry for this citation is: ``` @Article{delCarre2019, doi = {10.21105/joss.01885}, url = {https://doi.org/10.21105/joss.01885}, year = {2019}, month = dec, publisher = {The Open Journal}, volume = {4}, number = {44}, pages = {1885}, author = {<NAME> and <NAME>{\~{n}}oz-Sim\'on and <NAME> and <NAME>ios}, title = {{SHARPy}: A dynamic aeroelastic simulation toolbox for very flexible aircraft and wind turbines}, journal = {Journal of Open Source Software} } ``` Indices and tables[¶](#indices-and-tables) --- * [Index](genindex.html) * [Module Index](py-modindex.html) * [Search Page](search.html) Contact[¶](#contact) --- SHARPy is developed at the Department of Aeronautics, Imperial College London. To get in touch, visit the [Loads Control and Aeroelastics Lab](http://imperial.ac.uk/aeroelastics) website. sharpy_intro Introduction to SHARPy[¶](#Introduction-to-SHARPy) === Version: *June 2019* Overview[¶](#Overview) === SHARPy (Simulation of High Aspect Ratio Planes in Python) is a framework for linear and nonlinear aeroelastic analysis of flexible structures. It is developed by the Loads Control and Aeroelasticity lab at the Department of Aeronautics. All the code is open source and readily available in GitHub. **Important links:** * Loads Control and Aeroelasticity lab: <https://imperial.ac.uk/aeroelastics> * Main github repository: <https://github.com/imperialcollegelondon/sharpy> * Documentation (!!!): <https://ic-sharpy.readthedocs.io> * UVLM solver (C++): <https://github.com/imperialcollegelondon/uvlm> * Structural solver (Fortran): <https://github.com/imperialcollegelondon/xbeam> SHARPy is mainly coded in Python, but the expensive routines, such as the aero and structural solvers, are coded in faster languages such as C++ and Fortran. A number of different structures and analysis methods can be run.
For example: Very flexible aircraft nonlinear aeroelasticity (Alfonso)[¶](#Very-flexible-aircraft-nonlinear-aeroelasticity-(Alfonso)) --- The modular design of SHARPy allows the simulation of complex aeroelastic cases involving very flexible aircraft. The structural solver supports very complex beam arrangements, while retaining geometrical nonlinearity. The UVLM solver features different wake modelling fidelities while supporting large lifting surface deformations in a native way. Among the problems studied, a few interesting ones, in no particular order, are: * Catapult take-off analysis of a very flexible aircraft [[Paper]](https://arc.aiaa.org/doi/abs/10.2514/6.2019-2038). In this type of simulations, a PID controller was used in order to enforce displacements and velocities in a number of structural nodes (the clamping points). Then, several take-off strategies were studied in order to analyse the influence of the structural stiffness in this kind of procedures. This case is a very good example of the type of problems where nonlinear aeroelasticity is essential. * Flight in a full 3D atmospheric boundary layer (to be published). A very flexible aircraft is flown immersed in a turbulent boundary layer obtained from HPC LES simulations. The results are compared against simpler turbulence models such as von Karman and Kaimal. Intermittency and coherence features in the LES field are absent or less remarkable in the synthetic turbulence fields. * Lateral gust response of a realistic very flexible aircraft. For this problem (to be published), a realistic very flexible aircraft (University of Michigan X-HALE) model has been created in SHARPy and validated against their own aeroelastic solver for static and dynamic cases. A set of vertical and lateral gust responses have been simulated. (Results to be presented at IFASD 2019). Wind turbine aeroelasticity (Arturo)[¶](#Wind-turbine-aeroelasticity-(Arturo)) --- SHARPy is suitable for simulating wind turbine aeroelasticity.
On the structural side, it accounts for the material anisotropy which is needed to characterize composite blades and for the geometrically non-linear deformations observed in current blades due to their increasing length and flexibility. Both rigid and flexible simulations can be performed and the structural modes can be computed accounting for rotational effects (Campbell diagrams). The rotor-tower interaction is modelled through a multibody approach based on the theory of Lagrange multipliers. Finally, the tower base can be fixed or subjected to prescribed linear and angular velocities. On the aerodynamic side, the use of potential flow theory allows the characterization of flow unsteadiness at a reasonable computational cost. Specifically, steady and dynamic simulations can be performed. The steady simulations are carried out in a non-inertial frame of reference linked to the rotor under uniform steady wind with the assumption of a prescribed helicoidal wake. On the other hand, dynamic simulations can be enriched with a wide variety of incoming winds such as shear and yaw. Moreover, the wake shape can be freely computed under no assumptions, accounting for self-induction and wake expansion, or can be prescribed to a helicoidal shape for computational efficiency. P.S.: aft-loaded airfoils can be included through the definition of the camber line of the blades. Model Order Reduction[¶](#Model-Order-Reduction) --- Numerical models of physical phenomena require fine discretisations to show convergence and agreement with their real counterparts, and, in the case of SHARPy's aeroelastic systems, hundreds of thousands of states are not an uncommon encounter. However, modern hardware or the use of these models for other applications such as controller synthesis may limit their size, and we must turn to model order reduction techniques to achieve lower dimensional representations that can then be used.
SHARPy offers several model order reduction methods to reduce the initially large system to a lower dimension, attending to the user's requirements of numerical efficiency or global error bound. ### Krylov Methods for Model Order Reduction - Moment Matching[¶](#Krylov-Methods-for-Model-Order-Reduction---Moment-Matching) Model reduction by moment matching can be seen as approximating a transfer function through a power series expansion about a user defined point in the complex plane. The reduction by projection retains the moments between the full and reduced systems as long as the projection matrices span certain Krylov subspaces dependent on the expansion point and the system's matrices. This can be taken advantage of: in particular, for aeroelastic applications where the interest resides in the low frequency behaviour of the system, the ROM can be expanded about these low frequency points, discarding accuracy higher up the frequency spectrum. #### Example 1 - Aerodynamics - Frequency response of a high AR flat plate subject to a sinusoidal gust[¶](#Example-1---Aerodynamics---Frequency-response-of-a-high-AR-flat-plate-subject-to-a-sinusoidal-gust) The objective is to compare SHARPy's solution of a very high aspect ratio flat plate subject to a sinusoidal gust to the closed form solution obtained by Sears (1944 - Ref). SHARPy's inherent 3D nature means that comparing results to the 2D solution requires very high aspect ratio wings with fine discretisations, resulting in very large state space models. In this case, we would like to utilise a Krylov ROM to approximate the low frequency behaviour and perform a frequency response analysis on the reduced system, since it would represent too much computational cost if it were performed on the full system. The full order model was reduced utilising Krylov methods, in particular the Arnoldi iteration, with an expansion about zero frequency to produce the following result.
As can be seen from the image above, the ROM approximates well the low frequency, quasi-steady state and loses accuracy as the frequency is increased, just as intended. Still, perfect matching is never achieved, even at the expansion frequency, given the 3D nature of the wing compared to the 2D analytical solution. #### Example 2 - Aeroelastics - Flutter analysis of a Goland wing with modal projection[¶](#Example-2---Aeroelastics---Flutter-analysis-of-a-Goland-wing-with-modal-projection) The Goland wing flutter example is presented next. The aerodynamic surface is finely discretised for the UVLM solution, resulting in not only a large state space but also in large input/output dimensionality. Therefore, to reduce the number of inputs and outputs, the UVLM is projected onto the structural mode shapes, the first four in this particular case. The resulting multi input multi output system (mode shapes -> UVLM -> modal forces) was subsequently reduced using Krylov methods aimed at MIMO systems, which use variations of the block Arnoldi iteration. Again, the expansion frequency selected was the zero frequency. As a sample, the transfer function from two inputs to two outputs is shown to illustrate the performance of the reduced model against the full order UVLM. The reduced aerodynamic model projected onto the modal shapes was then coupled to the linearised beam model, and the stability analysed against a change in velocity. Note that the UVLM model and its ROM are actually scaled to be independent of the freestream velocity, hence only one UVLM and ROM need to be computed. The structural model needs to be updated at each test velocity, but it is a lot less costly in computational terms. The resulting stability of the aeroelastic system is plotted on the Argand diagram below with changing freestream velocity. 
| Flutter speed [m/s] | Frequency [rad/s] | | --- | --- | | 164 | 70.27 | SHARPy installation[¶](#SHARPy-installation) === Check <https://ic-sharpy.readthedocs.io/en/latest/content/installation.html> for an in-depth guide on how to install SHARPy. This assumes a few things about your computer: * It is Linux or MacOS (definitely not Windows - if you only have Windows, check VirtualBox and make an Ubuntu or CentOS virtual machine). * It has an up-to-date compiler (this might not be straightforward, run `g++ --version`. If lower than 5.0, you will need to update it). An up-to-date Intel Compiler for C++ and Fortran is a good option as well. SHARPy relies on Anaconda <https://www.anaconda.com/distribution/> to handle all the python packages. Install the Python 3 version. You then need to download the code: 1) Create a folder. Here it will be called `code` ``` mkdir code cd code ``` 2) Clone all the necessary repos **and make sure you are in the correct branch -- usually develop** ``` git clone https://github.com/imperialcollegelondon/sharpy --branch=develop git clone https://github.com/imperialcollegelondon/xbeam --branch=develop git clone https://github.com/imperialcollegelondon/uvlm --branch=develop ``` 3) Create the conda environment and activate it ``` conda env create -f sharpy/utils/environment_linux.yml ``` (or `environment_macos.yml` if on MacOS). Now activate the conda environment: `conda activate sharpy_env`. You are going to have to do this every time you start a new terminal and want to run SHARPy. It sometimes fails in the `pip` stage when installing. If it says something about `distwheel` failed in `mayavi`, activate the environment and run `pip install mayavi` manually. 4) Compile `xbeam` and `uvlm`. Move to the `xbeam` folder (`cd xbeam`) and run `sh run_make.sh`. Wait until it finishes. Please make sure your anaconda env is active before running this. Now the same for `uvlm`: `cd ../uvlm` and `sh run_make.sh`. 5) SHARPy is now hopefully ready to go! 
Navigate to the `sharpy` folder: `cd ../sharpy` and run: ``` source bin/sharpy_vars.sh ``` This command is important, as it loads the program in the terminal variables. We can run the tests now: ``` python -m unittest ``` If everything has been installed properly, the tests should pass. ``` --- Ran 28 tests in 23.465s OK ``` **Remember to run every time you start a new terminal:** ``` conda activate sharpy_env source <path-to-sharpy>/bin/sharpy_vars.sh ``` Tip: edit your `~/.bashrc` (Linux) or `~/.bash_profile` (MacOS) and add the following: ``` alias load_sharpy="conda activate sharpy_env && source <path-to-sharpy>/bin/sharpy_vars.sh" # Remember to replace <path-to-sharpy> with your path ``` Then you just need to run in the console `load_sharpy` to load everything in one go. Basic cases[¶](#Basic-cases) === A cantilever beam (Geradin)[¶](#A-cantilever-beam-(Geradin)) --- This case can be found in `sharpy/tests/xbeam/geradin`; it is part of the test suite. Basically, it is a 5 metres long cantilevered beam with a mass at the tip. Stiffness properties: $\mathcal{S} = \mathrm{diag}(EA, GA_y, GA_z, GJ, EI_y, EI_z) = \mathrm{diag}(4.8e8, 3.231e8, 3.231e8, 1e6, 9.346e6, 9.346e6)$ There is no distributed mass, only one at the tip. $M = 6e5/9.81$. The main point about the Geradin beam is that the deflections are past the linear range. If you open `sharpy/tests/xbeam/geradin/generate_geradin.py`, you are going to see a basic input file for a structural only case. The last part is the one that is the most important (function `generate_solver_file` from line 134). This is where the solver settings and the overall flow of the program are given. * **The `flow` variable** controls the solvers and postprocessors that are going to be run in this simulation (and in which order). 
When you run sharpy with a valid header file, you will see a complete list of available solvers: ``` --- ###### ## ## ### ######## ######## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #### ###### ######### ## ## ######## ######## ## ## ## ## ######### ## ## ## ## ## ## ## ## ## ## ## ## ## ## ###### ## ## ## ## ## ## ## ## --- Aeroelastics Lab, Aeronautics Department. Copyright (c), Imperial College London. All rights reserved. Running SHARPy from /home/ad214/run/code/sharpy/tests/xbeam/geradin SHARPy being run is in /home/ad214/run/code/sharpy The branch being run is develop The version and commit hash are: v0.1-731-g85eb5ab-85eb5ab The available solvers on this session are: PreSharpy AerogridLoader BeamLoader Modal NonLinearDynamic NonLinearDynamicCoupledStep NonLinearDynamicPrescribedStep NonLinearStatic PIDTrajectoryControl PrescribedUvlm StaticCoupled StaticTrim StaticUvlm StepUvlm Trim RigidDynamicPrescribedStep DynamicUVLM StepLinearUVLM StaticLinearUvlm InitializeMultibody LagrangeMultipliersTrajectoryControl NonLinearDynamicMultibody SHWUvlm StaticCoupledRBM SteadyHelicoidalWake DynamicCoupled AeroForcesCalculator AerogridPlot BeamLoads BeamPlot Cleanup SaveData StallCheck WriteVariablesTime PlotFlowField CreateSnapshot LiftDistribution ``` For our simple structural simulation, we only need a few blocks: `BeamLoader` for reading the input file and generating the beam data structure, `NonLinearStatic` is the structural solver for *nonlinear*, *static* beams, `BeamPlot` outputs the deformed shape and other quantities to a Paraview file structure (more on this later), and `WriteVariablesTime` outputs the data we want to text files. * **The `case`** is a name for the simulation so we can identify results. * **The `route`** is the path to the `.sharpy` file, so that SHARPy can find all the other necessary files. 
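The `flow`, `case` and `route` settings described above end up in the case's `.sharpy` file. A minimal sketch of writing one with the standard library follows; note that SHARPy's own generation scripts typically build this file with `configobj`, and the case name and solver list below are purely illustrative:

```python
# Minimal sketch: write the case's .sharpy settings file by hand.
# NOTE: real generation scripts usually build this with configobj;
# the case name and flow list below are just illustrative.
import os
import tempfile

case_name = 'geradin'   # hypothetical case name
route = './'            # folder containing the other input files
flow = ['BeamLoader', 'NonLinearStatic', 'BeamPlot', 'WriteVariablesTime']

def write_sharpy_file(path):
    with open(path, 'w') as fid:
        fid.write('[SHARPy]\n')
        fid.write('flow = ' + ', '.join(flow) + '\n')
        fid.write('case = ' + case_name + '\n')
        fid.write('route = ' + route + '\n')
        fid.write('write_screen = off\n')
        # one header per solver in `flow`, even if it has no settings
        for solver in flow:
            fid.write('\n[' + solver + ']\n')

path = os.path.join(tempfile.mkdtemp(), case_name + '.sharpy')
write_sharpy_file(path)
```

The key point the sketch illustrates is that every solver named in `flow` gets its own `[...]` header, even an empty one.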
Then, the other parts of the `.sharpy` file are based on the following structure: ``` [Header_name] variable1 = 1 variable2 = a variable3 = [a, a, b] ``` It is important to note that you need a header for every solver indicated in `flow`. If that solver needs no settings, you still need to indicate the header. There is also a main `[SHARPy]` header that contains the main program settings, such as the `flow` and the `case`. It also has a setting called `write_screen`. It is equal to `off` because this is a test case, and you don't want tons of output. If you modify it to `'on'`, you'll be able to see what's going on on the screen. ### Simple modifications of the case[¶](#Simple-modifications-of-the-case) If we want to (for example) run a stiffer beam, we can do so easily: Open `generate_geradin.py` and move to around line 50, where there are a few lines looking like: ``` ea = ... ga = ... #... ei = ... ``` and modify them to look like: ``` stiffness_multiplier = 10 ea = ... * stiffness_multiplier ga = ... * stiffness_multiplier #... ei = ... * stiffness_multiplier ``` You can also change the name of the case so that the results are not overwritten: in line `6`, `case_name = 'geradin_stiff'`. Now run the case again. First, generate the files: `python generate_geradin.py`, and then run the case: `sharpy geradin_stiff.sharpy`. You can now have a look at the wing tip deformations for both cases in `output/<case_name>/WriteVariablesTime/struct_pos_node-1.dat`. Guide to model definition in SHARPy[¶](#Guide-to-model-definition-in-SHARPy) === This section will take a bit of time and is quite tough to follow, but keep this as a reference. Structural data[¶](#Structural-data) --- The `case.fem.h5` file has several components. We go one by one: * `num_node_elem [int]` is always 3 in our case (3 nodes per structural element - quadratic beam elements). * `num_elem [int]` number of structural elements. * `num_node [int]` number of nodes. 
For simple structures, it is `num_elem*(num_node_elem - 1) + 1`. For more complicated ones, you need to calculate it properly. * `coordinates [num_node, 3]` coordinates of the nodes in body-attached FoR. * `connectivities [num_elem, num_node_elem]` a tricky one. Every row refers to an element, and the three integers in that row are the indices of the three nodes belonging to that element. Now, the catch: the ordering is not as you'd think. Order them as $[0, 2, 1]$. That means: first one, last one, central one. The following image shows the node indices inside the circles representing the nodes, the element indices in blue and the resulting connectivities matrix next to it. Connectivities are tricky when considering complex configurations. Pay attention at the beginning and you'll save yourself a lot of trouble. * `stiffness_db [:, 6, 6]` database of stiffness matrices. The first dimension has as many elements as different stiffness matrices are in the model. * `elem_stiffness [num_elem]` array of indices (starting at 0). Basically, it links every element (index) to the stiffness matrix index in `stiffness_db`. For example `elem_stiffness[0] = 0; elem_stiffness[2] = 1` means that element `0` has a stiffness matrix equal to `stiffness_db[0, :, :]`, and element `2` has a stiffness matrix equal to `stiffness_db[1, :, :]`. The shape of a stiffness matrix, $\mathrm{S}$ is: $$\mathrm{S} = \begin{bmatrix} EA & & & & & \\ & GA_y & & & & \\ & & GA_z & & & \\ & & & GJ & & \\ & & & & EI_y & \\ & & & & & EI_z \\ \end{bmatrix} $$ with the cross terms added if needed. 
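The node count and the $[0, 2, 1]$ connectivity ordering above can be generated programmatically. A minimal sketch in plain Python (the real generation scripts build numpy arrays and write them to `fem.h5`; the 4-element beam here is hypothetical):

```python
# Connectivities for a single straight beam with quadratic
# (3-noded) elements, using the [first, last, central] ordering.
num_node_elem = 3
num_elem = 4
num_node = num_elem * (num_node_elem - 1) + 1   # 9 nodes for 4 elements

connectivities = []
for ielem in range(num_elem):
    first = 2 * ielem                  # first node of this element
    connectivities.append([first, first + 2, first + 1])

# every element points at stiffness matrix 0 in stiffness_db
elem_stiffness = [0] * num_elem
```

For one element this gives the row `[0, 2, 1]` exactly as in the figure, and adjacent elements share their end nodes.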
* `mass_db` and `elem_mass` follow the same scheme as the stiffness, but the mass matrix is given by: $$\mathrm{M} = \begin{bmatrix} m\mathbf{I} & -\tilde{\xi}_{cg}m \\ \tilde{\xi}_{cg}m & J\\ \end{bmatrix} $$ where $m$ is the distributed mass per unit length [kg/m], $\tilde{\bullet}$ is the skew-symmetric matrix of a vector and $\xi_{cg}$ is the location of the centre of gravity with respect to the elastic axis in **MATERIAL (local) FoR**. And what is the Material FoR? This is an important point, because all the inputs that move WITH the beam are in material FoR. For example: follower forces, stiffness, mass, lumped masses... The material frame of reference is noted as $B$. Essentially, the $x$ component is tangent to the beam in the increasing node ordering, $z$ looks up generally and $y$ is oriented such that the FoR is right handed. In practice (vertical surfaces, structural twist effects...) it is more complicated than this. The only sure thing about $B$ is that its $x$ direction is tangent to the beam in the increasing node number direction. However, with just this, we have an infinite number of potential reference frames, with $y$ and $z$ being normal to $x$ but rotating around it. The solution is to indicate a `for_delta`, or frame of reference delta vector ($\Delta$). Now we can define unequivocally the material frame of reference. With $x_B$ and $\Delta$ defining a plane, $y_B$ is chosen such that the $z$ component is oriented upwards with respect to the lifting surface. From this definition comes the only constraint on $\Delta$: it cannot be parallel to $x_B$. * `frame_of_reference_delta [num_elem, num_node_elem, 3]` contains the $\Delta$ vector in body-attached ($A$) frame of reference. As a rule of thumb: $$\Delta = \left\{ \begin{matrix} [-1, 0, 0], \quad \text{if right wing} \\ [1, 0, 0], \quad \text{if left wing} \\ [0, 1, 0], \quad \text{if fuselage} \\ [-1, 0, 0], \quad \text{if vertical fin} \\ \end{matrix} \right. 
$$ These rules of thumb only work if the nodes increase towards the tip of the surfaces (and the tail in the case of the fuselage). * `structural_twist [num_elem, num_node_elem]` is technically not necessary, as the same effect can be achieved with `for_delta`. **CAUTION** previous versions of SHARPy had structural twist defined differently: ``` structural_twist = np.zeros((num_node, num_node_elem)) # this is wrong now, and will trigger an error in SHARPy, change it! structural_twist = np.zeros((num_elem, num_node_elem)) # this is right. ``` * `boundary_conditions [num_node]` is an array of integers (`np.zeros((num_node, ), dtype=int)`) and contains all `0` EXCEPT FOR: + One node NEEDS to have a `1`, this is the reference node. Usually, the first node has `1` and is located in `[0, 0, 0]`. This makes things much easier. + If the node is a tip of a beam (attached to just 1 element, not 2), it needs to have a `-1`. * `beam_number [num_elem]` is another array of integers. Usually you don't need to modify its value. Leave it at 0. * `app_forces [num_node, 6]` contains the applied forces `app_forces[:, 0:3]` and moments `app_forces[:, 3:6]` in a given node. Important points: the forces are given in Material FoR (check above). That means that in a symmetrical model, a thrust force oriented upstream would have the shape $[0, T, 0, 0, 0, 0]$ in the right wing, while the left would be $[0, -T, 0, 0, 0, 0]$. Likewise, a torsional moment for twisting the wing leading edge up would be $[0, 0, 0, M, 0, 0]$ for the right, and $[0, 0, 0, -M, 0, 0]$ for the left. But careful, because an out-of-plane bending moment (wing tip up) has the same sign (think about it). * `lumped_mass [:]` is an array with as many masses as needed (in kg this time). Their order is important, as more information is required to implement them in a model. * `lumped_mass_nodes [:]` is an array of integers. It contains the index of the nodes related to the masses given in `lumped_mass` in order. 
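The 6x6 element mass matrix defined earlier can be assembled directly from $m$, $\xi_{cg}$ and $J$. A minimal sketch in plain Python (the values are hypothetical; in a real case generation script this would be a numpy array stored in `mass_db`):

```python
# Sketch: assemble one 6x6 entry of `mass_db` from the definition above.
# m [kg/m], xi_cg in material FoR [m], J the 3x3 rotational inertia.
def skew(v):
    """Skew-symmetric (cross-product) matrix of a 3-vector."""
    x, y, z = v
    return [[0.0, -z, y],
            [z, 0.0, -x],
            [-y, x, 0.0]]

def element_mass_matrix(m, xi_cg, J):
    S = skew(xi_cg)
    M = [[0.0] * 6 for _ in range(6)]
    for i in range(3):
        M[i][i] = m                        # m * I (translational block)
        for j in range(3):
            M[i][3 + j] = -m * S[i][j]     # -m * skew(xi_cg)
            M[3 + i][j] = m * S[i][j]      #  m * skew(xi_cg)
            M[3 + i][3 + j] = J[i][j]      # rotational inertia
    return M

# hypothetical values: 10 kg/m, cg offset 0.1 m along local y
M = element_mass_matrix(10.0, [0.0, 0.1, 0.0],
                        [[1.0, 0.0, 0.0], [0.0, 0.5, 0.0], [0.0, 0.0, 0.5]])
```

The off-diagonal blocks couple translation and rotation whenever the centre of gravity is offset from the elastic axis.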
* `lumped_mass_inertia [:, 3, 3]` is an array of 3x3 inertial tensors. The relationship is set by the ordering as well. * `lumped_mass_position [:, 3]` is the relative position of the lumped mass wrt the node (given in `lumped_mass_nodes`) coordinates. **ATTENTION:** the lumped mass is solidly attached to the node, and thus, its position is given in Material FoR. Aerodynamic data[¶](#Aerodynamic-data) --- All the aero data is contained in `case.aero.h5`. It is important to know that the input for aero is usually based on elements (and inside the elements, their nodes). This sometimes causes an overlap in information, as some nodes are shared by two adjacent elements (like in the connectivities graph in the previous section). The easiest way of dealing with this is to make sure the data is consistent, so that the properties of the last node of the first element are the same as the first node of the second element. Item by item: * In the `aero.h5` file, there is a Group called `airfoils`. The airfoils are stored in this group (which acts as a folder) as a two-column matrix with $x/c$ and $y/c$ in each column. They are named `'0', '1'`, and so on. * `chords [num_elem, num_node_elem]` is an array with the chords of every airfoil given in an element/node basis. * `twist [num_elem, num_node_elem]` has the twist angle in radians. It is implemented as a rotation around the local $x$ axis. * `sweep [num_elem, num_node_elem]` same here, just a rotation around $z$. * `airfoil_distribution_input [num_elem, num_node_elem]` contains the indices of the airfoils that you put previously in `airfoils`. * `surface_distribution_input [num_elem]` is an integer array. It contains the index of the surface the element belongs to. Surfaces need to be continuous, so please note that if your beam numbering is not continuous, you need to make a surface per continuous section. * `surface_m [num_surfaces]` is an integer array with the number of chordwise panels for every surface. 
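The node-sharing consistency described above (last node of one element matches the first node of the next) is easiest to guarantee by sampling any spanwise property per global node index. A sketch for a linearly tapered chord distribution (the dimensions and taper are hypothetical):

```python
# Sketch: element/node-based aero inputs for a single 4-element wing,
# keeping shared nodes consistent between adjacent elements.
num_elem, num_node_elem = 4, 3
root_chord, tip_chord = 1.0, 0.5          # hypothetical taper
num_node = num_elem * (num_node_elem - 1) + 1

def chord_at(inode):
    """Linear taper sampled at a global node index."""
    return root_chord + (tip_chord - root_chord) * inode / (num_node - 1)

chords = []
for ielem in range(num_elem):
    first = 2 * ielem
    # same [first, last, central] node ordering as the connectivities
    chords.append([chord_at(first), chord_at(first + 2), chord_at(first + 1)])

# the last node of element i now equals the first node of element i + 1
```

Because every entry comes from the same per-node function, the overlapping nodes automatically carry identical values.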
* `m_distribution [string]` is a string with the chordwise panel distribution. In almost all cases, leave it at `uniform`. * `aero_node_input [num_node]` is a boolean (True/False) array that indicates if that node has a lifting surface attached to it. * `elastic_axis [num_elem, num_node_elem]` indicates the elastic axis location with respect to the leading edge as a fraction of the chord of that rib. Note that the elastic axis is already determined, as the beam is fixed now, so this setting controls the location of the lifting surface wrt the beam. * `control_surface [num_elem, num_node_elem]` is an integer array containing `-1` if that section has no control surface associated to it, and `0, 1, 2 ...` if the section belongs to the control surface `0, 1, 2 ...` respectively. * `control_surface_type [num_control_surface]` contains `0` if the control surface deflection is static, and `1` if it is dynamic (if you need to run dynamic control surfaces, come see me). * `control_surface_chord [num_control_surface]` is an INTEGER array with the number of panels belonging to the control surface. For example, if $M = 4$ and you want your control surface to be $0.25c$, you need to put `1`. * `control_surface_hinge_coord [num_control_surface]` only necessary for lifting surfaces that are deflected as a whole, like some horizontal tails in some aircraft. Leave it at 0 if you are not modelling this. If you are, come see me. Common solver settings[¶](#Common-solver-settings) --- The solver settings are covered almost entirely in the documentation (again: <https://ic-sharpy.readthedocs.io>). I'm going to explain in more detail the important ones. The default settings are generally good enough for a majority of cases. ### BeamLoader[¶](#BeamLoader) BeamLoader reads the solver.txt and the fem.h5 files and generates the `beam` data structure. Its settings are simple: * `unsteady` leave it on * `orientation` is what is used to set the attitude angles. 
It is given as a quaternion (CAREFUL: a null rotation in quaternions is $[1, 0, 0, 0]$). You can give it in Euler angles as: `'orientation' = algebra.euler2quat(np.array([roll, alpha, beta]))`. ### AerogridLoader[¶](#AerogridLoader) * `mstar` number of chordwise panels for the wake. A good value is $$M^* = \frac{L_{\mathrm{wake}}}{\Delta t u_\infty}$$, which means that the wake panels are the same size as the main wing ones. * `freestream_dir` is a different approach to modifying the attitude angles. I'd leave it alone unless you really need it. ### NonLinearStatic[¶](#NonLinearStatic) The static beam solver settings. Important ones: * `max_iterations` maximum number of iterations for the structural solver. These are not the same as the ones in `DynamicCoupled`. * `num_load_steps` if > 1, the applied forces and gravity are applied progressively in several steps in order to avoid numerical divergence. Leave it at 1 unless you have problems with convergence of static cases. * `delta_curved` leave it at $1e-1$. * `min_delta` this one is more tricky. Usually $1e-6$ works well for flexible structures. If you are running stiffer structures, you might need to lower it even more. Don't go under $1e-11$. If you don't know, start at $1e-6$, note the wing tip deflection, then lower it and check that the deflection does not change. A too low value will cause the beam solver to reach `max_iterations` without convergence. When this happens, note the residual value and set something larger than that. * `gravity_on` `on` if gravity, `off` if not. * `gravity` $9.81$ if you are on Earth. Setting it to 0 is the same as `gravity_on` = `off`, but the latter is quicker. ### StaticUvlm[¶](#StaticUvlm) * `horseshoe` if this is `on`, `mstar` (in AerogridLoader) has to be 1. It controls the wake modelling. Usual wakes with `mstar > 1` are discretised and are of finite length. The horseshoe modelling is derived from the analytical expansion of a discretised wake of infinite length. ATTENTION: use only in static simulations. 
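The $M^*$ rule of thumb from the AerogridLoader section above, together with the panel-convection time step, can be checked with a couple of lines (all numbers here are illustrative, not recommendations):

```python
# Quick check of the wake discretisation rule of thumb above.
# Hypothetical numbers: chord c, M chordwise panels, freestream u_inf.
c = 1.0                          # chord [m]
M = 4                            # chordwise panels on the wing
u_inf = 10.0                     # freestream speed [m/s]
wake_length = 10.0 * c           # e.g. a 10-chord wake

dt = (c / M) / u_inf             # time for the flow to cross one panel
mstar = int(round(wake_length / (dt * u_inf)))   # wake panels, same size as the wing's
```

With these numbers, the wake gets 40 panels, each the size of a wing panel.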
* `n_rollup` how many steps are carried out to convect the wake with full free-wake. This usually should be `0`, but if you want a pretty picture, you can use `n_rollup = int(mstar*1.5)`. * `rollup_dt` if `n_rollup > 0`, set `rollup_dt` to $$ \Delta t_{\mathrm{rollup}} = \frac{c/M}{u_\infty} $$ * `velocity_field_generator`: a few options available here. This paragraph is applicable to every aero solver that has a `velocity_field_generator` setting. + `SteadyVelocityField`: quite straightforward. Give a u_inf value in `u_inf` and a direction in `u_inf_direction`. + `GustVelocityField`: this one generates gusts in several profiles. The ones you will probably use: `gust_shape: '1-cos'`, or `'continuous_sin'`. - `u_inf` and `u_inf_direction` already explained - `gust_length`: equivalent to $2H$ in metres. - `gust_intensity`: reference (peak) velocity of the gust, in m/s. - `offset`: x coord. of the first point of the gust with respect to the $[0, 0, 0]$ in inertial. - `span`: span of the aircraft (you are probably not going to use this, it is implemented for DARPA gusts). + `VelocityFieldGenerator`: a full unsteady 3D field of velocities is input. Ask me if you want to use it. ### StaticCoupled[¶](#StaticCoupled) * `print_info`: set it to `on` in almost all cases. * `structural_solver` and `aero_solver`: these are strings with the name of the structural and aerodynamic solvers you want to use for the coupled simulation. A solver with that name needs to exist in SHARPy (check the list of available solvers at the start of a simulation). * `structural_solver_settings` and `aero_solver_settings`: a dictionary (each) that is basically the same you added before if you wanted to run a standalone structural or aero simulation. A code example: ``` settings = dict() settings['NonLinearStatic'] = { 'print_info': 'on', 'max_iterations': 150 # ... } settings['StaticUvlm'] = { 'print_info': 'on', 'horseshoe': 'off' # ... 
} settings['StaticCoupled'] = { 'structural_solver': 'NonLinearStatic', 'structural_solver_settings': settings['NonLinearStatic'], 'aero_solver': 'StaticUvlm', 'aero_solver_settings': settings['StaticUvlm'], # ... ``` * `max_iter`: max number of FSI iterations * `n_load_steps`: if > 1, it ramps the aero and gravity forces slowly to improve convergence. Leave at 0 unless you really need it, then try 4 or 5. * `tolerance`: threshold for convergence of the FSI iteration. Make sure you are choosing a reasonable value for the case. If it converges in 1 iteration: lower it. If it takes more than 10: unless it is a very complicated case (next to flutter or overspeed conditions), raise it. * `relaxation_factor` a real number $\omega \in [0, 1)$. $\omega = 0$ means no relaxation, $\omega \to 1$ means every iteration changes the state of the system very little. Usually 0.3 is a good value. If you are (again) close to overspeed, flutter... you are going to need to raise it to 0.6 or even more. ### Static trim[¶](#Static-trim) This solver acts like a wrapper of `StaticCoupled`, just like `StaticCoupled` is a wrapper for the structural and aero solver. That means that when we initialise the `StaticTrim` solver, we also create a `StaticCoupled`, a `NonlinearStatic` and a `StaticUvlm` instance. It is important to know that StaticTrim only trims the longitudinal variables ($F_x, F_z, M_y$) by modifying the angle of attack, tail deflection and thrust. No lateral/directional variables are considered. * `solver`: probably you want `StaticCoupled` * `solver_settings`: most likely, something similar to `settings['StaticCoupled']`. * `initial_alpha`, `initial_deflection`, `initial_thrust`: initial values for the angle of attack, elevator deflection and thrust per engine. ### DynamicCoupled[¶](#DynamicCoupled) This is where things get interesting. `DynamicCoupled` performs the time stepping and FSI iteration processes. 
Just as `StaticCoupled`, it requires a structural solver (for free-flight elastic aircraft it will be `NonLinearDynamicCoupledStep`, ask for others), and an aero solver, which will be `StepUvlm`. The `structural_solver` and `structural_solver_settings` (and the aero equivalents) are set up the same way I showed in the `StaticCoupled`. * `fsi_substeps`: max iterations in FSI loop * `fsi_tolerance`: quite descriptive. What I said for the static coupled tolerance still applies. * `relaxation_factor`: exactly the same. There are more settings to control the relaxation, and make it vary as the simulation progresses, but you probably won't need it. * `minimum_steps`: minimum FSI steps to run even if convergence is reached. * `n_time_steps`: quite descriptive. * `dt`: $\Delta t$ * `include_unsteady_force_contribution`: this activates the added mass effects calculation. It is good to have it, but it makes the simulation a bit more challenging from a numerical point of view. Run some numbers by hand and decide if it is worth it. Removing this contribution makes the code faster. * `postprocessors`: the fun begins. `postprocessors` are modules run every time step after convergence. For example, to calculate the beam loads, or output to paraview. This variable is an array of strings (`['one_module', 'another_module']`) and the modules are run in the order they are indicated. A typical workflow would be: ``` 'postprocessors': [ 'BeamLoads', # Calculate the loads at every beam element 'BeamPlot', # Output the beam data to paraview (including the beam loads - that's why beamloads goes first) 'AerogridPlot', # Output the aero grid to paraview ] ``` * `postprocessor_settings`: hopefully by this time you already get the `_settings` thing. I won't explain the settings of the processors here; you can look them up in the code. 
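Putting the settings above together, a `DynamicCoupled` block follows the same nesting pattern as the `StaticCoupled` example earlier. A sketch with the setting names from this section (the numeric values are illustrative only, not recommendations):

```python
# Sketch of a DynamicCoupled settings block, same nesting pattern as
# the StaticCoupled example. All numeric values are illustrative.
settings = {}
settings['NonLinearDynamicCoupledStep'] = {'num_steps': 100, 'dt': 0.01}
settings['StepUvlm'] = {'convection_scheme': 2, 'dt': 0.01}
settings['DynamicCoupled'] = {
    'structural_solver': 'NonLinearDynamicCoupledStep',
    'structural_solver_settings': settings['NonLinearDynamicCoupledStep'],
    'aero_solver': 'StepUvlm',
    'aero_solver_settings': settings['StepUvlm'],
    'fsi_substeps': 100,          # max iterations in the FSI loop
    'fsi_tolerance': 1e-6,
    'relaxation_factor': 0.3,
    'minimum_steps': 1,
    'n_time_steps': 100,
    'dt': 0.01,
    'postprocessors': ['BeamLoads', 'BeamPlot', 'AerogridPlot'],
    'postprocessor_settings': {'BeamLoads': {},
                               'BeamPlot': {},
                               'AerogridPlot': {}},
}
```

Note how the inner solver dictionaries are reused by reference, so `dt` only needs to be kept consistent in one place per solver.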
### NonLinearDynamicCoupledStep[¶](#NonLinearDynamicCoupledStep) Almost the same settings as `NonLinearStatic`, so I'm going to explain only the settings that are different. * `newmark_damp`: artificial damping parameter. Increasing this damps the higher frequencies while leaving the low frequency modes relatively untouched (please note the relatively). Start with $5\times 10^{-4}$ and increase it if needed. No more than $10^{-2}$. * `num_steps`: number of time steps * `dt`: $\Delta t$ * `initial_velocity`: if you want the aircraft to fly with a velocity, this is the place to put it. Alternatively, you can leave it at 0 and put the velocity as a `u_inf` contribution. Results will be the same. ### StepUvlm[¶](#StepUvlm) * `convection_scheme`: you probably want to leave it at `2`. This convects the wake with the background flow (no influence from the aircraft). `3` is a full free-wake, which looks very good, takes very long and results don't change compared to `2` in most of the cases. * `gamma_dot_filtering`: if you added the `include_unsteady_force_contribution` in `DynamicCoupled`, you probably want to put `7` or `9` here. If not, it won't be considered. A very short intro to the Command Line[¶](#A-very-short-intro-to-the-Command-Line) === The command line is where you are going to spend quite a lot of your time. Make your life easier and learn how to use it. When you open a terminal, the default location to land is your `HOME` folder. In the terminal, your home folder is identified as `~` or `$HOME`. You can navigate to the folder of your choice using `cd` (*change directory*). For example, if you want to go to `Downloads`: ``` cd Downloads ``` Extra tip: write `cd Dow` and press Tab to autocomplete. If you haven't created the folder you want to move to, you can *make dir*: `mkdir`. 
For example, to create a folder called `Code` in your Home directory: ``` # go to your home folder if you are not there yet cd ~ # or `cd ` or `cd $HOME`, all equivalent # create the dir mkdir Code ``` If you have a file in `Downloads` called `very_private_stuff.txt`, you can *copy* it (`cp`) or *move* it (`mv`) to your `Code` folder: ``` cd Code # copy it cp ../Downloads/very_private_stuff.txt ./ # or move it mv ../Downloads/very_private_stuff.txt ./ ``` Things that require explanation: * `..` denotes the parent folder of a location. For example: `../Downloads` means "go to the previous folder and then to Downloads" * `.` (a single period) denotes the current location. For example: `./` means "right here" You can also use `mv` to rename files: ``` mv very_private_stuff.txt homework.txt ``` If you want to copy a **folder**, you need to add `-r` (this is called a *flag*) after `cp`. Example: we have a folder in `Downloads` called `justin_bieber_complete_discography` and we want to copy it to `Music` and call it `arctic_monkeys` instead: ``` cd ~ mkdir Music # note the space between cp and -r cp -r Downloads/justin_bieber_complete_discography Music/arctic_monkeys ``` You can also delete or *remove* (`rm`) files and folders. Now you don't want your Justin Bieber tunes in your `Downloads` folder, so you run: ``` cd ~/Downloads # note the -r here too rm -r justin_bieber_complete_discography ``` **ATTENTION** the `rm` command IS NOT REVERSIBLE. No Rubbish Bin, Trash, helmet or parachute. Make sure you are very very sure you actually want to delete exactly that (and nothing else). A note: some files cannot be deleted with a simple `rm file`, and the computer will say it is *protected*. Then add the flag `-f` (*force*), but this makes `rm` even more dangerous, so try not to use it. The `git` internal files are sometimes protected, so if you want to delete an old copy of SHARPy, you probably will need to use it. These are the very basic commands for command line survival. 
I'm going to add a few non-essential ones, but useful nevertheless: * `which`: it tells you which executable will be run. For example `which sharpy` returns: `/home/user/code/sharpy/bin/sharpy`. It is useful for knowing which version of anything you are running and where it comes from. For example, it is useful for knowing if you are using the updated compilers (usually in `/usr/local/bin`) or the default ones (`/usr/bin` probably). * `top`: shows you the processes running and how much RAM they're using. * `touch`: creates an empty file with the name you give after the command. * `ipython`: starts a nice Python terminal with autocomplete and some colours. Much better than `python`. * `history`: if you forgot *that nice command that did what I wanted and now I can't remember*. Extra: `history | grep "command"` (without the quotes) will only show you the history lines that contain `command`. * `*`: wildcard for copying, removing, moving... Everything. Example: `cp *.txt ./my_files/`. Again: careful with `rm` and `*` together. Be VERY careful with `rm` and `-rf` and `*` together. * `pwd`: for when you get lost and don't know where you are in the folder structure. * `nano`: simple text editor. Save with `ctrl+o`, exit with `ctrl+x`. * `more`: have a look at a file quickly * `head`: show the first lines of a file * `tail`: show the last lines of a file Debugging guide[¶](#Debugging-guide) === When the program fails, there are a number of typical reasons: * Did you forget `conda activate sharpy_env` and `source bin/sharpy_vars.sh`? + Check my alias tip at the beginning of the document. + If you do in the terminal: `which sharpy`, do you get the one you want? + If you do `which python`, does the result point to `anaconda3/envs/sharpy_env/bin` (or similar)? * Wrong input (inconsistent connectivities, mass = 0...) + Sometimes not easy to detect. For the structural model, run `BeamLoader` and `BeamPlot` with no structural solver in between. 
Go over the structure in Paraview. Check the `fem.h5` file with HDFView.
  + Remember that connectivities are ordered as $[0, 2, 1]$ (the central node goes last).
  + Make sure the `num_elem` and `num_node` variables are actually your correct number of elements and nodes.
* Not running the actual case you want to.
  + Clean up the folder and regenerate the case.
* Not running the SHARPy version you want.
  + Check the path to the SHARPy folder at the beginning of the execution.
* Not running the correct branch of the code.
  + You probably want to use `develop`. Again, check the first few lines of SHARPy output.
* Very different (I'm talking orders of magnitude) stiffnesses between nodes or directions?
* The UVLM requires a smaller vortex core cutoff?
* Newmark damping is not enough for this case?
* Do you have an element with almost 0 mass?
* Have a look at the $\dot{\Gamma}$ filtering and numerical parameters in the settings of `StepUvlm` and `DynamicCoupled`.
* Add more relaxation to the `StaticCoupled` or `DynamicCoupled` solvers.
* The code has a bug (depending on where, it may be likely).
  + Go over the rest of the list. Plot the case in Paraview. Go over the rest of the list again. Prepare the simplest example that reproduces the problem and come see us.
* The code diverges because it has to (physically unstable behaviour)
  + Then don't complain
* Your model still doesn't work and you don't know why.
  + `import pdb; pdb.set_trace()` and patience
* If nothing else works... get a rubber duck (or a very, very patient good friend) and go over every step

If your model doesn't do what it is supposed to do:
* Check for symmetric response where the model is symmetric.
  + If it is not, run the beam solver first and make sure your properties are correct. Make sure the matrices for mass and stiffness are rotated if they need to be (remember the Material FoR definition and the `for_delta`?)
  + Now run the aerodynamic solver only and double check that the forces are symmetric.
  + Make sure your tolerances are low enough so that at least 4 FSI iterations are performed in `StaticCoupled` or `DynamicCoupled`.
* Make sure your inputs are correct. For example: a dynamic case can be run with $u_\infty = 0$ and the plane moving forwards, or $u_\infty =$ whatever and the plane velocity = 0. It is very easy to mix both up and end up with double the effective incoming speed (or none).
* Run simple stuff before coupling it. For example, if your wing tip deflections don't match what you'd expect, calculate the deflection under a small tip force (not too small, make sure the deflection is > 1% of the length!) by hand, and compare.
* It is more difficult to do the same with the UVLM, as you need a VERY VERY high aspect ratio to get close to the 2D potential solutions. You are going to have to take my word for it: the UVLM works.
* But check the aero grid geometry in Paraview, including chord lengths and angles.

Other useful software[¶](#Other-useful-software)
===
* [HDFView](https://support.hdfgroup.org/products/java/release/download.html) for opening and inspecting the `.h5` files
* GitKraken is a good graphical interface for Git.
* PyCharm for editing Python and running cases.
* Jupyter notebook (this was made with it) for results and tests.
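The by-hand tip-deflection check suggested in the debugging guide above can be sketched as follows. This is a minimal sketch, not SHARPy code: it uses the Euler-Bernoulli formula for a uniform cantilever under a point tip force, delta = F L^3 / (3 E I), and the numerical values are purely illustrative.

```python
# Hand check for a cantilever tip deflection under a point tip force:
#   delta = F * L**3 / (3 * E * I)   (Euler-Bernoulli, small deflections)
# Values below are illustrative only, not from any particular SHARPy case.

def tip_deflection(force, length, EI):
    """Static tip deflection of a uniform cantilever under a tip force."""
    return force * length ** 3 / (3.0 * EI)

L = 10.0               # beam length [m] (illustrative)
EI = 2.0e5             # bending stiffness E*I [N m^2] (illustrative)
F = 100.0              # tip force [N], chosen to give a small but visible deflection

delta = tip_deflection(F, L, EI)
# Check the deflection is above ~1% of the span, as recommended above
print(delta, delta / L)
```

Compare the printed deflection with the tip displacement reported by the beam solver for the same force; if they disagree by more than a few percent, revisit the stiffness input before blaming the coupled solution.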
angus.ai Documentation
Release
<NAME>, <NAME>
Jul 23, 2018

This documentation is here to help you integrate our product / API. We are always happy to help, so feel free to contact us at <EMAIL>.

CHAPTER 1 Audience Analytics

Angus.ai Audience Analytics computes the traffic, interest and demographics metrics of a device (a screen, a kiosk, a specific shelf, etc...). This data is automatically stored on a secured database and can be visualised in real time on an online dashboard and/or retrieved programmatically through our API. This documentation is meant for developers wanting to install, configure and launch the Angus.ai audience analytics application on a screen player.

Step 1 - Introduction

What data can be retrieved

Angus.ai's anonymous audience analytics solution computes (from each camera stream) the following metrics:
• The number of people passing by the camera/device,
• The number of people interested in the camera/device,
• The time spent stopped in front of the camera/device,
• The time spent looking at the camera/device,
• The number of people interested in the camera/device, broken down by demographics:
  – Age
  – Gender
  – Emotion

For more information about the metrics, see the page dedicated to the metrics.

How it works

Angus.ai's audience analytics solution is based on a (lightweight) Client / Server architecture, as seen on the figure below.
All CPU-expensive computation is done on our dedicated servers, making it possible to run the solution from about any CPU board that can retrieve a camera stream and connect to a server (e.g. a Raspberry Pi).

Once properly installed and configured, this application will interact with Angus.ai cloud based algorithms to provide audience metrics that can be retrieved through a REST API. This tutorial will show how to do it.

Requirements

As you go through this tutorial, you will need:
• a computer. Every operating system is ok provided that you can configure a proper Python stack.
• a camera (e.g. webcam) plugged into that computer. USB and IP cameras are supported, although IP cams can be more challenging to interface.
• a working internet connection. An upload bandwidth of about 400 kB/sec is advised. If this is a problem, we are able to provide a "hybrid" version of our solution, where part of the CPU-expensive computation is done locally, alleviating connection bandwidth requirements. Please contact us at <EMAIL>.

Step 2 - Set up your player

Create an account

To use Angus.ai services, you need to create an account. This can be done very easily by visiting https://console.angus.ai and filling the form shown below.

When done, you are ready to create your first camera stream as shown below.

Get credentials for your camera

After creating your personal account on https://console.angus.ai/, you will be asked to create a "stream". This procedure will allow for private "access_token" and "client_id" keys to be generated for you. This can be done by pressing the "Add a stream" button on the top right hand corner as shown below.

After clicking, you will be asked to choose between a free developer stream and a paying enterprise stream.
Please note that the free developer stream is only for non commercial use and will block after 3 hours of video stream computed every month, as seen below.

For a non-restricted enterprise stream, you will need to enter a valid credit card number. Press "Continue" at the bottom of the page and you will soon get the following page. Press "Show Details" and take note of your client_id (called Login on the interface) and access_token (called Password on the interface) as they will be needed later on.

The credentials that you have just created will be used to configure the Angus.ai SDK. You are now ready to proceed to the next step.

Download and configure the SDK

Requirements

• The SDK is Python3 compatible but the documentation code snippets are only Python2 compatible.
• Also, you might want (not mandatory) to create a Python virtual environment with virtualenv in order to install the SDK in there. To do so, please refer to the virtualenv guide for more information.

Install the SDK

Open a terminal and install the angus python sdk with pip. If you do not use virtualenv you may need to be root, administrator or super user depending on your platform (use sudo on linux platforms).

$ pip install angus-sdk-python

Configure your SDK

You must configure your sdk with the keys you received by creating a stream here. These keys are used to authenticate the requests you are about to send. Your API credentials can be retrieved by clicking on "Show details" on the stream you just created. In a terminal, type:

$ angusme
Please choose your gateway (current: https://gate.angus.ai):
Please copy/paste your client_id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Please copy/paste your access_token: x<KEY>xxxx

Fill in the "client_id" prompt with the "login" given on the interface and the "access_token" prompt with the "password" given on the interface.
On Windows systems, if angusme does not work, please refer to the FAQ for more details.

You can check this setup went well by typing the following command and checking that our server sees you:

$ angusme -t
Server: https://gate.angus.ai
Status: OK

If this command gives you an error, check that you entered the right "client_id" and "access_token". You can do this by re-typing "angusme" in a command prompt. If you need help, contact us here: <EMAIL>!

Download and launch the client application

Our client app is a lightweight, open source Python script. It performs two basic tasks:
1. retrieve a valid video stream. By default, one of the connected USB cameras will be chosen, but you can easily modify the client app to open a different camera and even open a video file.
2. package and send the video stream over https to our computation servers. This part can also be optimized for your needs (image resolution, frame rate, etc...). If you need help to perform these optimizations, please contact us at <EMAIL>.

Prerequisite

• you have a working webcam plugged into your PC
• you have installed OpenCV2 and the OpenCV2 python bindings. Please refer to the OpenCV documentation to proceed, or check the FAQ chapter. On Debian-like platforms, OpenCV2 comes pre-installed, you just need to run

$ sudo apt-get install python-opencv

Note also that OpenCV2 is not an absolute pre-requisite; the following code sample can easily be adapted to any other way of retrieving successive frames from a video stream.

Client App

Please copy/paste the following code sample into a file and run it.
# -*- coding: utf-8 -*-
import cv2
import numpy as np
import StringIO
import datetime
import pytz
from math import cos, sin

import angus.client


def main(stream_index):
    camera = cv2.VideoCapture(stream_index)
    camera.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640)
    camera.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480)
    camera.set(cv2.cv.CV_CAP_PROP_FPS, 10)

    if not camera.isOpened():
        print("Cannot open stream of index {}".format(stream_index))
        exit(1)

    print("Video stream is of resolution {} x {}".format(camera.get(3), camera.get(4)))

    conn = angus.client.connect()
    service = conn.services.get_service("scene_analysis", version=1)
    service.enable_session()

    while camera.isOpened():
        ret, frame = camera.read()
        if not ret:
            break

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ret, buff = cv2.imencode(".jpg", gray, [cv2.IMWRITE_JPEG_QUALITY, 80])
        buff = StringIO.StringIO(np.array(buff).tostring())

        t = datetime.datetime.now(pytz.utc)
        job = service.process({"image": buff,
                               "timestamp": t.isoformat(),
                               "store": True})
        res = job.result

        if "error" in res:
            print(res["error"])
        else:
            # This parses the entities data
            for key, val in res["entities"].iteritems():
                # display only gaze vectors
                # retrieving eyes points
                eyel, eyer = val["face_eye"]
                eyel = tuple(eyel)
                eyer = tuple(eyer)

                # retrieving gaze vectors
                psi = 0
                g_yaw, g_pitch = val["gaze"]
                theta = - g_yaw
                phi = g_pitch

                # Computing projection on screen
                # and drawing vectors on current frame
                length = 150
                xvec = int(length * (sin(phi) * sin(psi) - cos(phi) * sin(theta) * cos(psi)))
                yvec = int(- length * (sin(phi) * cos(psi) - cos(phi) * sin(theta) * sin(psi)))
                cv2.line(frame, eyel, (eyel[0] + xvec, eyel[1] + yvec), (0, 140, 0), 3)

                xvec = int(length * (sin(phi) * sin(psi) - cos(phi) * sin(theta) * cos(psi)))
                yvec = int(- length * (sin(phi) * cos(psi) - cos(phi) * sin(theta) * sin(psi)))
                cv2.line(frame, eyer, (eyer[0] + xvec, eyer[1] + yvec), (0, 140, 0), 3)

        cv2.imshow('original', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    service.disable_session()
    camera.release()
    cv2.destroyAllWindows()


if __name__ == '__main__':
    ### Web cam index might be different from 0 on your setup.
    ### To grab a given video file instead of the host computer cam, try:
    ### main("/path/to/myvideo.avi")
    main(0)

To run it:

$ python yourcopiedfile.py

You should see two green vectors displayed on your screen showing where you are looking.

The application displays by default a live view of your stream, with gaze vectors super-imposed. If you need it, it is also possible to display age, gender, emotion, etc... Please refer to the app real-time API here: (Output API).

Step 3 - Online Dashboard

The client app you just ran is now feeding a personal and secured database with audience analytics data that you can check by following the steps below.

How to view your dashboard

The collected data are meant to be retrieved programmatically through the Angus.ai Data API (see Retrieve your data). But for demonstration purposes, we have put together a standard dashboard that allows for a simple visualization of your collected data. We will use this default dashboard to check that your installation is properly set up and that your data are properly stored. But you can also use it for demonstration and even for real world deployment purposes, if it suits your needs.

To view your dashboard:
1. Go back to your personal account here: https://console.angus.ai/
2. Click on the "Show Dashboard" button on the stream you created above.
3. You should see a page showing a dashboard (see example below). If you just launched the client app as explained here (apps), your dashboard might still be empty. Indeed, there is about a 1 min delay between what happens in front of your camera and the dashboard refreshing with these data.
After waiting for the next automatic refresh (see the watch icon on the top right hand corner), your first collected data should appear (as shown on the screenshot below).

4. If you don't see data appear, please try to get out of the camera field of view and re-enter again.

What are these metrics?

People passing by: Count of people who passed (not necessarily stopping or looking) in front of the camera for at least 1 second.

People Interested: Count of people who stopped for at least 3 seconds and looked in the direction of the camera for more than 1 second, during the specified time duration.

Average stopping time: Average time a person, among the "interested" people (see above), stays still in front of the camera (in seconds).

Average attention time: Average time a person, among the "interested" people (see above), spends looking at the camera (in seconds).

Age Pie Chart: Population segmentation counts of all the "interested" people (see above) for each category.

Gender Chart: The gender repartition of all the "interested" people (see above).

Congratulations, you now have a properly running installation of our audience analytics solution. If you want to retrieve these data programmatically (for further integration into your own dashboard, for example), you have one more step to go.

Step 4 - Retrieve your Data

Here is a short section to help you get started in retrieving your audience data programmatically. Check our API reference for further details. (Retrieve your data)

Getting your JWT Token

You need a JSON Web Token ("JWT") in order to securely call the data API endpoint. Your personal JWT is provided by programmatically calling the appropriate endpoint documented below. Please use your angus.ai credentials in the command line below:
• account username (it should be your email address)
• stream client_id
• stream access_token

You can find these credentials on http://console.angus.ai.
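For readers who prefer Python over curl, here is a minimal sketch of how the JSON body for the token endpoint could be assembled. The credential values are placeholders and nothing is actually sent over the network; a real client would POST this body to the endpoint with Content-Type: application/json.

```python
import json

# URL of the token endpoint shown in the curl example in this section
TOKEN_URL = "https://console.angus.ai/api-token-authstream/"

def token_request_body(username, client_id, access_token):
    """Build the JSON body expected by the JWT token endpoint."""
    return json.dumps({
        "username": username,
        "client_id": client_id,
        "access_token": access_token,
    })

# Placeholder credentials (use your own account values instead)
body = token_request_body("[email protected]",
                          "xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx",
                          "placeholder-access-token")
print(body)
```

The server's response contains a single "token" field; that value goes into the Authorization: Bearer header of subsequent Data API requests.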
Request:

curl -X POST -H "Content-Type: application/json" \
    -d '{"username": "aurelien.<EMAIL>", "client_id": "xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx", "access_token": "<KEY>"}' \
    https://console.angus.ai/api-token-authstream/

You should get a response as shown below; if this is not the case, contact us.

Response:

{
    "token": "<KEY>.<KEY>K70YXQYMAcdeW7dfscFGxUhenoXXGBAQTiWhNv-9cVc"
}

Once you have obtained your personal JWT, you can start retrieving your data by calling the API endpoints documented in the Data API Reference page.

Example

Here is an example of a request for all entities from 5:45 GMT+2 on September 3rd, 2017 until now, using a time bucket of "one day".

Request:

curl -X GET -H 'Authorization: Bearer <KEY><KEY>K70YXQYMAcdeW7dfscFGxUhenoXXGBAQTiWhNv-9cVc' \
    'https://data.angus.ai/api/1/entities?metrics=satisfaction,gender,category,passing_by,interested&from_date=2017-09-03T05%3A45%3A00%2B0200&time=by_day'

Response:

{
    "entities": {
        "2017-09-03T00:00:00+00:00": {
            "category": { "senior_female": 0, "senior_male": 0, "young_female": 0, "young_male": 0 },
            "gender": { "?": 0, "female": 0, "male": 0 },
            "interested": { "value": 0 },
            "passing_by": { "value": 0 }
        },
        "2017-09-04T00:00:00+00:00": {
            "category": { "senior_female": 0, "senior_male": 0, "young_female": 0, "young_male": 8 },
            "gender": { "?": 0, "female": 0, "male": 10 },
            "interested": { "value": 10 },
            "passing_by": { "value": 18 }
        },
        "2017-09-05T00:00:00+00:00": {
            "category": { "senior_female": 0, "senior_male": 0, "young_female": 4, "young_male": 52 },
            "gender": { "?": 0, "female": 4, "male": 56 },
            "interested": { "value": 60 },
            "passing_by": { "value": 152 }
        },
        "2017-09-06T00:00:00+00:00": {
            "category": { "senior_female": 0, "senior_male": 0, "young_female": 0, "young_male": 3 },
            "gender": { "?": 0, "female": 0, "male": 4 },
            "interested": { "value": 4 },
            "passing_by": { "value": 20 }
        },
        ...
"2017-09-13T00:00:00+00:00": { "category": { "senior_female": 0, "senior_male": 0, "young_female": 0, "young_male": 0 }, "gender": { "?": 0, "female": 0, "male": 0 }, "interested": { "value": 0 }, "passing_by": { "value": 0 }, }, "2017-09-14T00:00:00+00:00": { "category": { 1.4. Step 4 - Retrieve your Data 17 angus.ai Documentation, Release "senior_female": 0, "senior_male": 0, "young_female": 0, "young_male": 43 }, "gender": { "?": 1, "female": 0, "male": 59 }, "interested": { "value": 60 }, "passing_by": { "value": 153 }, } }, "from_date": "2017-09-03T05:45:00+02:00", "total_results": 12, "nb_of_pages": 1, "next_page": "", "page": 1, "time": "by_day", "to_date": "2017-09-14T16:53:11+02:00" } What next? You have a running installation of Angus.ai audience analytics solution. Congratulations! • When time comes, you can plug more cameras by creating additional stream as shown here (create-stream). • If you need to deploy your system in a situation where internet bandwidth is a problem, or for any issues please contact Angus.ai team at: <EMAIL>, and if possible, please specify your operating system, python version, as well as the error backtrace if any. Thanks! CHAPTER 2 Full API Reference Main API Reference Introduction and RESTful architecture Angus.ai provides a RESTful API for its services. That implies: • Use of HTTP protocol • A standard protocol for encryption: ssl (HTTPs) • A resource oriented programmation style • A common resource representation: JSON • A linked resources approach In the rest of this documentation, we will use command line curl to interact with angus.ai gateway and present each of these features, one by one. Encryption and Authentication All requests to an angus.ai gateway needs to be done through Basic Authentication and https protocol (http over ssl). As a user, you need to signup first at https://console.angus.ai/register to get your credentials. These credentials are an equivalent of a login/password but for a device. 
If you do not have your credentials yet, you can use the following ones for this tutorial:
• client id: 7f5933d2-cd7c-11e4-9fe6-490467a5e114
• access token: db19c01e-18e5-4fc2-8b81-7b3d1f44533b

To check your credentials you can make a simple GET request on the service list resource https://gate.angus.ai/services (we will see the content of this resource in Service directory). Curl accepts the option -u, which computes the value for the Authorization HTTP header in order to conform to the Basic Authentication protocol.

$ curl -u 7f5933d2-cd7c-11e4-9fe6-490467a5e114:db19c01e-18e5-4fc2-8b81-7b3d1f44533b \
    -s -o /dev/null -w "%{http_code}" \
    https://gate.angus.ai/services
200

You just made your first call to angus.ai and got the response code 200. All communications were encrypted (because we use the https protocol) and you were authenticated thanks to your credentials.

Resources

Angus.ai provides a "resource oriented" API. Each image, piece of sound, document and other asset is represented as a resource with at least one URL. Currently, most angus.ai resources only have a JSON representation. This means that when you get a resource (with GET) from angus.ai, only the value application/json is available for the HTTP header Accept. The response body will be a JSON object. You can have a look at the body of a response by, for example, using the previous curl command and removing the extra options:

$ curl -u 7f5933d2-cd7c-11e4-9fe6-490467a5e114:db19c01e-18e5-4fc2-8b81-7b3d1f44533b \
    https://gate.angus.ai/services
{
    "services": {
        "dummy": { "url": "/services/dummy" },
        (...)
        "face_detection": { "url": "/services/face_detection" }
    }
}

This response body is a JSON object; its content is not important right now, we will describe it in the next sections.

Service directory

We chose to follow the HATEOAS constraints by linking resources via URLs provided dynamically instead of providing an a priori description of all resources with their URLs.
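What curl's -u option does under the hood can be sketched in a few lines of Python: it base64-encodes the "client_id:access_token" pair and sends it in a Basic Authentication header. The credentials below are the tutorial ones given above; this sketch only builds the header, it does not make a request.

```python
import base64

# Tutorial credentials from the text above
client_id = "7f5933d2-cd7c-11e4-9fe6-490467a5e114"
access_token = "db19c01e-18e5-4fc2-8b81-7b3d1f44533b"

# Basic Authentication: base64("client_id:access_token")
credentials = "{}:{}".format(client_id, access_token)
header_value = "Basic " + base64.b64encode(credentials.encode("ascii")).decode("ascii")

# This is the value curl puts in the "Authorization" HTTP header
print(header_value)
```

Any HTTP client library can then attach this value as the Authorization header instead of relying on curl's -u shortcut.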
But you must have an entry point to start the navigation. The entry point for angus.ai services is https://gate.angus.ai/services. This resource describes a service directory. By requesting it, you get back a list of the available services provided by angus.ai.

$ curl -u 7f5933d2-cd7c-11e4-9fe6-490467a5e114:db19c01e-18e5-4fc2-8b81-7b3d1f44533b \
    https://gate.angus.ai/services
{
    "services": {
        "face_expression_estimation": { "url": "/services/face_expression_estimation" },
        "dummy": { "url": "/services/dummy" },
        "gaze_analysis": { "url": "/services/gaze_analysis" },
        "motion_detection": { "url": "/services/motion_detection" },
        "age_and_gender_estimation": { "url": "/services/age_and_gender_estimation" },
        "sound_localization": { "url": "/services/sound_localization" },
        "face_detection": { "url": "/services/face_detection" }
    }
}

This request reveals, for example, a service named dummy. A service is a resource too, so let's get it:

$ curl -u 7f5933d2-cd7c-11e4-9fe6-490467a5e114:db19c01e-18e5-4fc2-8b81-7b3d1f44533b \
    https://gate.angus.ai/services/dummy
{
    "versions": {
        "1": { "url": "/services/dummy/1" }
    }
}

The response shows that there is only one version of the dummy service. Let's continue and get the newly given url:

$ curl -u 7f5933d2-cd7c-11e4-9fe6-490467a5e114:db19c01e-18e5-4fc2-8b81-7b3d1f44533b \
    https://gate.angus.ai/services/dummy/1
{
    "url": "https://gate.angus.ai/services/dummy/1",
    "version": 1,
    "description": "\nA simple dummy service. You can send {\"echo\": \"Hello world\"} to get back the\nmessage \"Hello world\" as result. Moreover, the dummy service enables statefull\nfeatures",
    "jobs": "https://gate.angus.ai/services/dummy/1/jobs"
}

We started at the entry endpoint of the service directory and finally got an endpoint on a "jobs" resource. In the next section we will see how to use this resource to request new computations from angus.ai.

Jobs (compute)

The previous "jobs" resource is a collection of job resources.
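The directory walk just described, from the entry point down to the jobs collection, can be sketched offline. The JSON documents below are the ones shown in the curl examples above, inlined instead of fetched over HTTPS; a real client would GET each URL with Basic Authentication.

```python
import json

GATE = "https://gate.angus.ai"

# Responses from the curl examples above, inlined for illustration
directory = json.loads('{"services": {"dummy": {"url": "/services/dummy"}}}')
versions = json.loads('{"versions": {"1": {"url": "/services/dummy/1"}}}')

# HATEOAS navigation: each response provides the next relative URL
service_url = GATE + directory["services"]["dummy"]["url"]
version_url = GATE + versions["versions"]["1"]["url"]
jobs_url = version_url + "/jobs"

print(jobs_url)  # https://gate.angus.ai/services/dummy/1/jobs
```

The point of this style is that the client hardcodes only the entry point; every other URL is discovered from the previous response.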
As a user, you can create a new job by making a POST request on it. To make a valid request you must comply with these constraints:
• the body of the request must be a JSON message whose format matches the documentation of the service
• the Content-Type header of the request must be set to application/json
• you must specify the synchronous or asynchronous type of request you wish to make. Please see Asynchronous call for more details

The new curl command is as follows:

$ curl -u 7f5933d2-cd7c-11e4-9fe6-490467a5e114:db19c01e-18e5-4fc2-8b81-7b3d1f44533b \
    -H "Content-Type: application/json" \
    -d '{ "echo": "Hello world!", "async": false}' \
    https://gate.angus.ai/services/dummy/1/jobs
{
    "url": "https://gate.angus.ai/services/dummy/1/jobs/db77e78e-0dd8-11e5-a743-19d95545b6ca",
    "status": 201,
    "echo": "Hello world!"
}

The response contains an absolute url for the resource (the job), its status (201: CREATED), and its result, as a synchronous job was requested. Note that a new url is provided to get back to the job later (for example, to access its result asynchronously).

$ curl -u 7f5933d2-cd7c-11e4-9fe6-490467a5e114:db19c01e-18e5-4fc2-8b81-7b3d1f44533b \
    https://gate.angus.ai/services/dummy/1/jobs/db77e78e-0dd8-11e5-a743-19d95545b6ca
{
    "url": "https://gate.angus.ai/services/dummy/1/jobs/db77e78e-0dd8-11e5-a743-19d95545b6ca",
    "status": 201,
    "echo": "Hello world!"
}

Asynchronous call

All job requests are asynchronous by default if no async parameter is set.

$ curl -u 7f5933d2-cd7c-11e4-9fe6-490467a5e114:db19c01e-18e5-4fc2-8b81-7b3d1f44533b \
    -H "Content-Type: application/json" \
    -d '{ "echo": "Hello world!"}' \
    https://gate.angus.ai/services/dummy/1/jobs
{
    "url": "https://gate.angus.ai/services/dummy/1/jobs/db77e78e-0dd8-11e5-a743-19d95545b6ca",
    "status": 202
}

The response status is 202 (HTTP status code ACCEPTED), and the returned url allows you to get back to the result in the future.
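Retrieving an asynchronous result therefore amounts to polling the job URL until its status becomes 200. A minimal sketch of that loop is below; the fetch function is stubbed with canned responses rather than real authenticated HTTPS requests, so the status values are the ones shown in the curl examples.

```python
import time

def wait_for_result(fetch, job_url, delay=0.0, max_tries=10):
    """Poll a job URL until its JSON response reports status 200.

    `fetch` is any callable mapping a URL to a decoded JSON response;
    in a real client it would issue an authenticated GET request.
    """
    for _ in range(max_tries):
        response = fetch(job_url)
        if response["status"] == 200:
            return response
        time.sleep(delay)
    raise RuntimeError("job did not finish: " + job_url)

# Stub: the job is still pending (202) on the first call, done (200) on the second
responses = iter([{"status": 202},
                  {"status": 200, "echo": "Hello world!"}])
result = wait_for_result(lambda url: next(responses),
                         "/services/dummy/1/jobs/db77e78e")
print(result["echo"])  # Hello world!
```

In production, a small delay between polls (and a sensible max_tries) avoids hammering the gateway.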
$ curl -u 7f5933d2-cd7c-11e4-9fe6-490467a5e114:db19c01e-18e5-4fc2-8b81-7b3d1f44533b \
    https://gate.angus.ai/services/dummy/1/jobs/db77e78e-0dd8-11e5-a743-19d95545b6ca
{
    "url": "https://gate.angus.ai/services/dummy/1/jobs/db77e78e-0dd8-11e5-a743-19d95545b6ca",
    "status": 200,
    "echo": "Hello world!"
}

If you want a synchronous job with the result, you must specify async as false.

$ curl -u 7f5933d2-cd7c-11e4-9fe6-490467a5e114:db19c01e-18e5-4fc2-8b81-7b3d1f44533b \
    -H "Content-Type: application/json" \
    -d '{ "echo": "Hello world!", "async": false}' \
    https://gate.angus.ai/services/dummy/1/jobs
{
    "url": "https://gate.angus.ai/services/dummy/1/jobs/db77e78e-0dd8-11e5-a743-19d95545b6ca",
    "status": 201,
    "echo": "Hello world!"
}

Binary attachment

Most requests to Angus.ai will need you to attach binary files for sound, images, videos or other raw data from various sensors. Angus.ai provides two ways to upload them:
• attached in the request
• or by referring to a previously created resource

Make a request with an attached binary file

You need to create a multipart request to send a binary file to angus.ai, as follows:
• the name and type of the binary part are specified with: attachment://<name_of_the_resource>
• the JSON body part is prefixed with meta
• the JSON body part refers to the attachment attachment://<name_of_the_resource>

For example, the service face_detection must be provided an image as input.
You can upload it as an attachment as follows:

$ curl -u 7f5933d2-cd7c-11e4-9fe6-490467a5e114:db19c01e-18e5-4fc2-8b81-7b3d1f44533b \
    -F "attachment://bar=@<EMAIL>.jpg;type=image/jpg" \
    -F 'meta={"async" : false, "image": "attachment://bar"};type=application/json' \
    https://gate.angus.ai/services/face_detection/1/jobs
{
    "url": "https://gate.angus.ai/services/face_detection/1/jobs/1944556c-baf8-11e5-85c3-0242ac110001",
    "status": 201,
    "input_size": [480, 640],
    "nb_faces": 1,
    "faces": [{"roi": [262, 76, 127, 127], "roi_confidence": 0.8440000414848328}]
}

Create a binary resource

Angus.ai provides a "blob storage" to upload a binary resource once and use it later for one or more services. This service is available at https://gate.angus.ai/blobs. Binaries need to be sent as an attachment to a request (as shown above) made on the "blob storage" resource. The JSON body part needs to contain a key content whose value matches the attached file.

$ curl -u 7f5933d2-cd7c-11e4-9fe6-490467a5e114:db19c01e-18e5-4fc2-8b81-7b3d1f44533b \
    -F "attachment://[email protected];type=image/jpg" \
    -F 'meta={"async": false, "content": "attachment://bar"};type=application/json' \
    https://gate.angus.ai/blobs
{
    "status": 201,
    "url": "https://gate.angus.ai/blobs/a5bca2da-baf6-11e5-ad97-0242ac110001"
}

The response contains the url of the newly created blob resource.
You can now use this (binary) resource in all angus.ai services by referring to it in your requests:

$ curl -u 7f5933d2-cd7c-11e4-9fe6-490467a5e114:db19c01e-18e5-4fc2-8b81-7b3d1f44533b \
    -F 'meta={"async": false, "image": "https://gate.angus.ai/blobs/a5bca2da-baf6-11e5-ad97-0242ac110001"};type=application/json' \
    https://gate.angus.ai/services/face_detection/1/jobs
{
    "url": "http://localhost/services/face_detection/1/jobs/1944556c-baf8-11e5-85c3-0242ac110001",
    "status": 201,
    "input_size": [480, 640],
    "nb_faces": 1,
    "faces": [{"roi": [262, 76, 127, 127], "roi_confidence": 0.8440000414848328}]
}

Session / State

Although the angus.ai API aims at RESTful and hence stateless services, some services can currently and optionally be made stateful. In that case, the state is kept by the client and attached to each request in a state JSON parameter. For the stateful services, states are currently represented as a session_id generated on the client side. In the following example, we generate a uuid session id with the uuidgen linux tool and we loop 4 times over the same image that contains a face, sending it to the face detection service.

$ export SESSION=`uuidgen`
> for i in `seq 1 4`; do
>     curl -su 7f5933d2-cd7c-11e4-9fe6-490467a5e114:db19c01e-18e5-4fc2-8b81-7b3d1f44533b \
>     -F "attachment://[email protected];type=image/jpg" \
>     -F 'meta={"async" : false, "image": "attachment://bar", "state": { "session_id": "'$SESSION'"}};type=application/json' \
>     https://gate.angus.ai/services/face_detection/1/jobs | python -m json.tool | grep "nb_faces"
> done;
"nb_faces": 0
"nb_faces": 0
"nb_faces": 0
"nb_faces": 1

When a session is requested, the service tries to track faces across successive images, but returns no result at first. Indeed, the first three calls return 0 faces while the fourth one (for the same image) finds a face. That confirms the session id parameter is taken into account.
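The same client-side session state can be built in Python with the standard library, mirroring what uuidgen and the curl meta parameter do above. This is a sketch of the meta dictionary only; the image attachment and HTTP transport are omitted.

```python
import uuid

# Generate the session id once, client side (equivalent of `uuidgen`)
session_id = str(uuid.uuid4())

def job_meta(session_id):
    """Build the `meta` JSON payload with the state parameter attached."""
    return {
        "async": False,
        "image": "attachment://bar",
        "state": {"session_id": session_id},
    }

# The same session_id is reused on every request so the service can
# track faces across successive frames
meta = job_meta(session_id)
print(meta["state"]["session_id"])
```

Reusing one session_id per camera stream, rather than one per request, is what allows the service to accumulate tracking state.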
Building Blocks Tutorial

Step 1 - Introduction

This documentation is meant for developers wanting to use Angus.ai building blocks API services.

What is the difference with other AI building blocks providers?

Angus.ai is focused 100% on turning existing 2D cameras into a new generation of monitoring and alerting tools. As a consequence, these building blocks are optimized to work:
• on video streams
• in real time
• and with low resolution 2D cameras

How it works

Angus.ai's audience analytics solution is based on a (lightweight) Client / Server architecture, as seen on the figure below. All CPU-expensive computation is done on our dedicated servers, making it possible to run the solution from about any CPU board that can retrieve a camera stream and connect to a server (e.g. a Raspberry Pi).

List of the available building blocks

Blocks

Scene Analysis

This is Angus.ai's main API, meant to help you leverage your video streams by extracting:
• how many people are visible?
• who is looking at what?
• are people only passing by or do they stop?
• do they look happy?
• what are their age and gender?
• etc...

Besides the code samples provided on this page, you can find a first way to use the API on GitHub here.

Getting Started

Using the Python SDK:

# -*- coding: utf-8 -*-
import angus.client
from pprint import pprint

conn = angus.client.connect()
service = conn.services.get_service("scene_analysis", version=1)
service.enable_session()

while True:
    job = service.process({
        "image": open("./image.jpg", 'rb'),
        "timestamp": "2016-10-26T16:21:01.136287+00:00",
        "camera_position": "ceiling",
        "sensitivity": {},
    })
    pprint(job.result)

service.disable_session()

Input

The API takes a stream of 2D still images as input, in "jpg" or "png" format, without constraints on resolution. However, 640 x 480 pixels tends to be a good trade-off for both precision/recall and latencies.
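The timestamp field in the Getting Started sample above is an iso 8601 UTC string. With Python 3 it can be generated from the standard library as sketched below (note the SDK snippets in this documentation are themselves Python 2):

```python
import datetime

def utc_timestamp():
    """Return the current time as an iso 8601 UTC string,
    the shape expected by the "timestamp" field of process()."""
    return datetime.datetime.now(datetime.timezone.utc).isoformat()

ts = utc_timestamp()
print(ts)  # e.g. 2016-10-26T16:21:01.136287+00:00
```

Sending the capture time of each frame, rather than the send time, keeps the analytics buckets accurate when frames are queued or retried.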
Note also that the bigger the resolution, the longer the API will take to process and give a result.

The function process() takes a dictionary as input, formatted as follows:

{
  "image" : binary file,
  "timestamp" : "2016-10-26T16:21:01.136287+00:00",
  "camera_position" : "ceiling" or "facing",
  "sensitivity": {
    "appearance": 0.7,
    "disappearance": 0.7,
    "age_estimated": 0.4,
    "gender_estimated": 0.5,
    "focus_locked": 0.9,
    "emotion_detected": 0.4,
    "direction_estimated" : 0.8
  },
}

• image: a python File Object returned for example by open() or a StringIO buffer.
• timestamp: a string formatted using the ISO 8601 UTC date format.
• camera_position: a preset, i.e. a list of parameters set in advance. This list of parameters is used to calibrate the API based on the camera position.
• sensitivity: an optional dictionary that sets the sensitivity (between 0 and 1) of the system regarding each event. For instance, if you feel that the "appearance" event is triggered too often, you can decrease its value.
• store: store process results (not video data) for the analytics dashboard (Beta)

Here is the list of the different presets that are available:
• ceiling: this preset has to be used if the camera is a ceiling camera or if it is placed at ceiling height.
• facing: this preset has to be used if the camera is placed at human height.

Fig. 2.1: The "facing" preset should be used in this situation
Fig. 2.2: The "ceiling" preset should be used in this situation

Output

API events will be pushed to your client following that format. Note that if nothing happened, the events list will be empty, but the timestamp will still be updated.
{
  "timestamp" : "2016-10-26T16:21:01.136287+00:00",
  "events" : [
    {
      "entity_id" : "16fd2706-8baf-433b-82eb-8c7fada847da",
      "entity_type" : "human",
      "type" : "age_estimated",
      "confidence" : 0.96,
      "key" : "age"
    }
  ],
  "entities" : {"16fd2706-8baf-433b-82eb-8c7fada847da": {
      "face_roi": [339, 264, 232, 232],
      "face_roi_confidence": 0.71,
      "full_body_roi": [59, 14, 791, 1798],
      "full_body_roi_confidence": 0.71,
      "age": 25,
      "age_confidence": 0.34,
      "gender": "male",
      "gender_confidence": 0.99,
      "emotion_anger": 0.04,
      "emotion_surprise": 0.06,
      "emotion_sadness": 0.14,
      "emotion_neutral": 0.53,
      "emotion_happiness": 0.21,
      "emotion_smiling_degree": 0.42,
      "emotion_confidence": 0.37,
      "face_eye": [[414, 346], [499, 339]],
      "face_mouth": [456, 401],
      "face_nose": [456, 401],
      "face_confidence": 0.37,
      "gaze": [0.02, 0.14],
      "gaze_confidence": 0.37,
      "head": [-0.1751, -0.0544, -0.0564],
      "head_confidence": 0.3765,
      "direction": "unknown"
    }
  }
}

• timestamp: a string formatted using the ISO 8601 UTC date format.
• entity_id : id of the human related to the event.
• entity_type : type of the entity; only "human" is currently supported.
• type : type of the event; the list of event types can be found below.
• confidence : a value between 0 and 1 which reflects the probability that the event has really occurred in the scene.
• key : a string which indicates which value has been updated in the attached entities list.
• face_roi : contains [pt.x, pt.y, width, height] where pt is the upper left point of the rectangle outlining the detected face.
• face_roi_confidence : an estimate of the probability that a real face is indeed located at the given roi.
• full_body_roi : contains [pt.x, pt.y, width, height] where pt is the upper left point of the rectangle outlining the detected human body.
• full_body_roi_confidence : an estimate of the probability that a real human body is indeed located at the given roi.
• age : an age estimate (in years) of the person outlined by roi.
• age_confidence : an estimate of the probability that the outlined person is indeed of age age.
• gender : an estimation of the gender of the person outlined by roi. Value is either "male" or "female".
• gender_confidence : an estimate of the probability that the outlined person is indeed of gender gender.
• emotion_neutral, emotion_happiness, emotion_surprise, emotion_anger, emotion_sadness : a float in [0, 1] measuring the intensity of the corresponding face expression.
• face_eye, face_mouth, face_nose : the coordinates of the detected eyes, nose and mouth in pixels.
• head : head pose orientation (yaw, pitch and roll) in radians
• gaze : gaze orientation (yaw, pitch) in radians
• direction : an indication of the average direction of the person. Value is either "unknown", "up", "right", "left" or "down".

The list of the possible events:
• "appearance" : a new human has just been detected.
• "disappearance" : a known human has just disappeared.
• "age_estimated" : the age of the corresponding human has just been estimated (expect 1 or 2 events of this type for each human)
• "gender_estimated" : gender estimation of the corresponding human (expect 1 or 2 events of this type for each human)
• "focus_locked" : if a human looks in a specific direction for a significant time, this event is triggered with the pitch and yaw of the gaze registered in the data.
• "emotion_detected" : if a remarkable emotion peak is detected, the event is triggered with the related emotion type registered in the data.
• "direction_estimated" : triggered if the human stays long enough for their average direction to be determined.

Code Sample

requirements: opencv2, opencv2 python bindings

This code sample retrieves the stream of a web cam and displays in a GUI the result of the scene_analysis service.
# -*- coding: utf-8 -*-
import StringIO

import angus.client
import cv2
import numpy as np
import datetime
import pytz

def main(stream_index):
    camera = cv2.VideoCapture(stream_index)
    camera.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640)
    camera.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480)
    camera.set(cv2.cv.CV_CAP_PROP_FPS, 10)

    if not camera.isOpened():
        print("Cannot open stream of index {}".format(stream_index))
        exit(1)

    print("Input stream is of resolution: {} x {}".format(camera.get(3), camera.get(4)))

    conn = angus.client.connect()
    service = conn.services.get_service("scene_analysis", version=1)
    service.enable_session()

    while camera.isOpened():
        ret, frame = camera.read()
        if not ret:
            break

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ret, buff = cv2.imencode(".jpg", gray, [cv2.IMWRITE_JPEG_QUALITY, 80])
        buff = StringIO.StringIO(np.array(buff).tostring())

        t = datetime.datetime.now(pytz.utc)
        job = service.process({"image": buff,
                               "timestamp" : t.isoformat(),
                               "camera_position": "facing",
                               "sensitivity": {
                                   "appearance": 0.7,
                                   "disappearance": 0.7,
                                   "age_estimated": 0.4,
                                   "gender_estimated": 0.5,
                                   "focus_locked": 0.9,
                                   "emotion_detected": 0.4,
                                   "direction_estimated": 0.8
                               },
        })
        res = job.result

        if "error" in res:
            print(res["error"])
        else:
            # This parses the events
            if "events" in res:
                for event in res["events"]:
                    value = res["entities"][event["entity_id"]][event["key"]]
                    print("{}| {}, {}".format(event["type"], event["key"], value))

            # This parses the entities data
            for key, val in res["entities"].iteritems():
                x, y, dx, dy = map(int, val["face_roi"])
                cv2.rectangle(frame, (x, y), (x+dx, y+dy), (0, 255, 0), 2)

        cv2.imshow("original", frame)
        if cv2.waitKey(1) & 0xFF == 27:
            break

    service.disable_session()
    camera.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    ### Web cam index might be different from 0 on your setup.
    ### To grab a given video file instead of the host computer cam, try:
    ### main("/path/to/myvideo.avi")
    main(0)

Upper Body Detection

Do I see a human? How many? Where? As opposed to the Face Detection service, this service is able to detect a human even if his/her face is not visible.

Getting Started

Using Angus python SDK:

# -*- coding: utf-8 -*-
import angus.client
from pprint import pprint

conn = angus.client.connect()
service = conn.services.get_service('upper_body_detection', version=1)
job = service.process({'image': open('./macgyver.jpg', 'rb')})
pprint(job.result)

Input

The API takes a stream of 2D still images as input, of format jpg or png, without constraints on resolution. Note however that the bigger the resolution, the longer the API will take to process and give a result.

The function process() takes a dictionary as input, formatted as follows:

{'image' : file}

• image: a python File Object as returned for example by open() or a StringIO buffer.

Output

Events will be pushed to your client following that format:

{
  "input_size" : [480, 640],
  "nb_upper_bodies" : 2,
  "upper_bodies" : [
    {
      "upper_body_roi" : [345, 223, 34, 54],
      "upper_body_roi_confidence" : 0.89
    },
    {
      "upper_body_roi" : [35, 323, 45, 34],
      "upper_body_roi_confidence" : 0.56
    }
  ]
}

• input_size : width and height of the input image in pixels (to be used as reference for roi outputs).
• nb_upper_bodies : number of upper bodies detected in the given image
• upper_body_roi : contains [pt.x, pt.y, width, height] where pt is the upper left point of the rectangle outlining the detected upper body.
• upper_body_roi_confidence : an estimate of the probability that a real human upper body is indeed located at the given upper_body_roi.

Code Sample

requirements: opencv2, opencv2 python bindings

This code sample retrieves the stream of a web cam and displays in a GUI the result of the upper_body_detection service.
# -*- coding: utf-8 -*-
import StringIO

import cv2
import numpy as np
from pprint import pprint
import angus.client

def main(stream_index):
    camera = cv2.VideoCapture(stream_index)
    camera.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640)
    camera.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480)
    camera.set(cv2.cv.CV_CAP_PROP_FPS, 10)

    if not camera.isOpened():
        print("Cannot open stream of index {}".format(stream_index))
        exit(1)

    print("Input stream is of resolution: {} x {}".format(camera.get(3), camera.get(4)))

    conn = angus.client.connect()
    service = conn.services.get_service("upper_body_detection", version=1)
    service.enable_session()

    while camera.isOpened():
        ret, frame = camera.read()
        if not ret:
            break

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ret, buff = cv2.imencode(".jpg", gray, [cv2.IMWRITE_JPEG_QUALITY, 80])
        buff = StringIO.StringIO(np.array(buff).tostring())

        job = service.process({"image": buff})
        res = job.result
        pprint(res)

        for body in res['upper_bodies']:
            x, y, dx, dy = body['upper_body_roi']
            cv2.rectangle(frame, (x, y), (x+dx, y+dy), (0,255,0))

        cv2.imshow('original', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    service.disable_session()
    camera.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    ### Web cam index might be different from 0 on your setup.
    ### To grab a given video file instead of the host computer cam, try:
    ### main("/path/to/myvideo.avi")
    main(0)

Age and Gender Estimation

How old are people in front of my object? Are they male or female?

Getting Started

Using the Python SDK:

# -*- coding: utf-8 -*-
import angus.client
from pprint import pprint

conn = angus.client.connect()
service = conn.services.get_service('age_and_gender_estimation', version=1)
job = service.process({'image': open('./macgyver.jpg', 'rb')})
pprint(job.result)

Input

The API takes a stream of 2D still images as input, of format jpg or png, without constraints on resolution.
Note however that the bigger the resolution, the longer the API will take to process and give a result.

The function process() takes a dictionary as input, formatted as follows:

{'image' : file}

• image: a python File Object as returned for example by open() or a StringIO buffer.

Output

Events will be pushed to your client following that format:

{
  "input_size" : [480, 640],
  "nb_faces" : 1,
  "faces" : [
    {
      "roi" : [345, 223, 34, 54],
      "roi_confidence" : 0.89,
      "age" : 32,
      "age_confidence" : 0.87,
      "gender" : "male",
      "gender_confidence" : 0.95
    }
  ]
}

• input_size : width and height of the input image in pixels (to be used as reference for roi outputs).
• nb_faces : number of faces detected in the given image
• roi : contains [pt.x, pt.y, width, height] where pt is the upper left point of the rectangle outlining the detected face.
• roi_confidence : an estimate of the probability that a real face is indeed located at the given roi.
• age : an age estimate (in years) of the person outlined by roi.
• age_confidence : an estimate of the probability that the outlined person is indeed of age age.
• gender : an estimation of the gender of the person outlined by roi. Value is either "male" or "female".
• gender_confidence : an estimate of the probability that the outlined person is indeed of gender gender.

Code Sample

requirements: opencv2, opencv2 python bindings

This code sample retrieves the stream of a web cam and displays in a GUI the result of the age_and_gender_estimation service.
# -*- coding: utf-8 -*-
import cv2
import numpy as np
import StringIO
import angus.client

def main(stream_index):
    camera = cv2.VideoCapture(stream_index)
    camera.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640)
    camera.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480)
    camera.set(cv2.cv.CV_CAP_PROP_FPS, 10)

    if not camera.isOpened():
        print("Cannot open stream of index {}".format(stream_index))
        exit(1)

    print("Video stream is of resolution {} x {}".format(camera.get(3), camera.get(4)))

    conn = angus.client.connect()
    service = conn.services.get_service("age_and_gender_estimation", version=1)
    service.enable_session()

    while camera.isOpened():
        ret, frame = camera.read()
        if not ret:
            break

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ret, buff = cv2.imencode(".jpg", gray, [cv2.IMWRITE_JPEG_QUALITY, 80])
        buff = StringIO.StringIO(np.array(buff).tostring())

        job = service.process({"image": buff})
        res = job.result

        for face in res['faces']:
            x, y, dx, dy = face['roi']
            age = face['age']
            gender = face['gender']
            cv2.rectangle(frame, (x, y), (x+dx, y+dy), (0,255,0))
            cv2.putText(frame, "(age, gender) = ({:.1f}, {})".format(age, gender),
                        (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255))

        cv2.imshow('original', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    service.disable_session()
    camera.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    ### Web cam index might be different from 0 on your setup.
    ### To grab a given video file instead of the host computer cam, try:
    ### main("/path/to/myvideo.avi")
    main(0)

Face Expression Estimation

Are people in front looking happy or surprised?
Getting Started

Using Angus Python SDK:

# -*- coding: utf-8 -*-
from pprint import pprint
import angus.client

conn = angus.client.connect()
service = conn.services.get_service('face_expression_estimation', version=1)
job = service.process({'image': open('./macgyver.jpg', 'rb')})
pprint(job.result)

Input

The API takes a stream of 2D still images as input, of format jpg or png, without constraints on resolution. Note however that the bigger the resolution, the longer the API will take to process and give a result.

The function process() takes a dictionary as input, formatted as follows:

{'image' : file}

• image: a python File Object as returned for example by open() or a StringIO buffer.

Output

Events will be pushed to your client following that format:

{
  "input_size" : [480, 640],
  "nb_faces" : 1,
  "faces" : [
    {
      "roi" : [345, 223, 34, 54],
      "roi_confidence" : 0.89,
      "neutral" : 0.1,
      "happiness" : 0.2,
      "surprise" : 0.7,
      "anger" : 0.01,
      "sadness" : 0.1
    }
  ]
}

• input_size : width and height of the input image in pixels (to be used as reference for roi outputs).
• nb_faces : number of faces detected in the given image
• roi : contains [pt.x, pt.y, width, height] where pt is the upper left point of the rectangle outlining the detected face.
• roi_confidence : an estimate of the probability that a real face is indeed located at the given roi.
• neutral, happiness, surprise, anger, sadness : a float in [0, 1] measuring the intensity of the corresponding face expression.

Code Sample

requirements: opencv2, opencv2 python bindings

This code sample retrieves the stream of a web cam and displays in a GUI the result of the face_expression_estimation service.
# -*- coding: utf-8 -*-
import StringIO

import cv2
import numpy as np
import angus.client

def main(stream_index):
    camera = cv2.VideoCapture(stream_index)
    camera.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640)
    camera.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480)
    camera.set(cv2.cv.CV_CAP_PROP_FPS, 10)

    if not camera.isOpened():
        print("Cannot open stream of index {}".format(stream_index))
        exit(1)

    print("Input stream is of resolution: {} x {}".format(camera.get(3), camera.get(4)))

    conn = angus.client.connect()
    service = conn.services.get_service('face_expression_estimation', 1)
    service.enable_session()

    while camera.isOpened():
        ret, frame = camera.read()
        if not ret:
            break

        ### angus.ai computer vision services require gray images right now.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ret, buff = cv2.imencode(".jpg", gray, [cv2.IMWRITE_JPEG_QUALITY, 80])
        buff = StringIO.StringIO(np.array(buff).tostring())

        job = service.process({"image": buff})
        res = job.result

        for face in res['faces']:
            x, y, dx, dy = face['roi']
            cv2.rectangle(frame, (x, y), (x+dx, y+dy), (0,255,0))

            ### Sorting of the 5 expressions measures
            ### to display the most likely on the screen
            exps = [(face[exp], exp) for exp in
                    ['sadness', 'happiness', 'neutral', 'surprise', 'anger']]
            exps.sort()
            max_exp = exps[-1]

            cv2.putText(frame, str(max_exp[1]), (x, y),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255))

        cv2.imshow('original', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    ### Disabling session on the server
    service.disable_session()
    camera.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    ### Web cam index might be different from 0 on your setup.
    ### To grab a given video file instead of the host computer cam, try:
    ### main("/path/to/myvideo.avi")
    main(0)

Face Detection

Do I see human faces? How many? Where?
Getting Started

Using Angus python SDK:

# -*- coding: utf-8 -*-
import angus.client
from pprint import pprint

conn = angus.client.connect()
service = conn.services.get_service('face_detection', version=1)
job = service.process({'image': open('./macgyver.jpg', 'rb')})
pprint(job.result)

Input

The API takes a stream of 2D still images as input, of format jpg or png, without constraints on resolution. Note however that the bigger the resolution, the longer the API will take to process and give a result.

The function process() takes a dictionary as input, formatted as follows:

{'image' : file}

• image: a python File Object as returned for example by open() or a StringIO buffer.

Output

Events will be pushed to your client following that format:

{
  "input_size" : [480, 640],
  "nb_faces" : 2,
  "faces" : [
    {
      "roi" : [345, 223, 34, 54],
      "roi_confidence" : 0.89
    },
    {
      "roi" : [35, 323, 45, 34],
      "roi_confidence" : 0.56
    }
  ]
}

• input_size : width and height of the input image in pixels (to be used as reference for roi outputs).
• nb_faces : number of faces detected in the given image
• roi : contains [pt.x, pt.y, width, height] where pt is the upper left point of the rectangle outlining the detected face.
• roi_confidence : an estimate of the probability that a real face is indeed located at the given roi.

Code Sample

requirements: opencv2, opencv2 python bindings

This code sample retrieves the stream of a web cam and displays in a GUI the result of the face_detection service.
# -*- coding: utf-8 -*-
import cv2
import numpy as np
import StringIO
import angus.client

def main(stream_index):
    camera = cv2.VideoCapture(stream_index)
    camera.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640)
    camera.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480)
    camera.set(cv2.cv.CV_CAP_PROP_FPS, 10)

    if not camera.isOpened():
        print("Cannot open stream of index {}".format(stream_index))
        exit(1)

    print("Video stream is of resolution {} x {}".format(camera.get(3), camera.get(4)))

    conn = angus.client.connect()
    service = conn.services.get_service("face_detection", version=1)
    service.enable_session()

    while camera.isOpened():
        ret, frame = camera.read()
        if not ret:
            break

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ret, buff = cv2.imencode(".jpg", gray, [cv2.IMWRITE_JPEG_QUALITY, 80])
        buff = StringIO.StringIO(np.array(buff).tostring())

        job = service.process({"image": buff})
        res = job.result

        for face in res['faces']:
            x, y, dx, dy = face['roi']
            cv2.rectangle(frame, (x, y), (x+dx, y+dy), (0,255,0))

        cv2.imshow('original', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    service.disable_session()
    camera.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    ### Web cam index might be different from 0 on your setup.
    ### To grab a given video file instead of the host computer cam, try:
    ### main("/path/to/myvideo.avi")
    main(0)

Face Recognition

This service spots a specified set of people in images or videos. To be able to recognize people, this service needs to be first provided with a few pictures of each person's face.

How to prepare face samples?

Here are a few tips to make sure you get the most of Angus face_recognition service:
• make sure the resolution of these samples is high enough.
• make sure these samples show a unique face only, in order to avoid any ambiguity.
• the service will perform better if you provide more than 1 sample for each person, with different face expressions.
For example, the code sample shown below makes use of the following face samples (only 1 sample per person to recognize is used in that case).

Getting Started

Using the Angus python SDK:

# -*- coding: utf-8 -*-
import angus.client
from pprint import pprint

conn = angus.client.connect()
service = conn.services.get_service('face_recognition', version=1)

PATH = "/path/to/your/face/samples/"
w1_s1 = conn.blobs.create(open(PATH + "jamel/1.jpeg", 'rb'))
w1_s2 = conn.blobs.create(open(PATH + "jamel/2.jpg", 'rb'))
w1_s3 = conn.blobs.create(open(PATH + "jamel/3.jpg", 'rb'))
w1_s4 = conn.blobs.create(open(PATH + "jamel/4.jpg", 'rb'))

w2_s1 = conn.blobs.create(open(PATH + "melissa/1.jpg", 'rb'))
w2_s2 = conn.blobs.create(open(PATH + "melissa/2.jpg", 'rb'))
w2_s3 = conn.blobs.create(open(PATH + "melissa/3.jpg", 'rb'))
w2_s4 = conn.blobs.create(open(PATH + "melissa/4.jpg", 'rb'))

album = {'jamel': [w1_s1, w1_s2, w1_s3, w1_s4],
         'melissa': [w2_s1, w2_s2, w2_s3, w2_s4]}

job = service.process({'image': open(PATH + "melissa/5.jpg", 'rb'), "album" : album})
pprint(job.result)

Input

The API captures a stream of 2D still images as input, under jpg or png format, without any constraint on resolution. Note however that the bigger the resolution, the longer the API takes to process and give a result.

The function process() takes a dictionary as input, formatted as follows:

{
  'image' : file,
  'album' : {"people1": [sample_1, sample_2], "people2" : [sample_1, sample_2]}
}

• image: a python File Object as returned for example by open() or a StringIO buffer.
• album : a dictionary containing samples of the faces that need to be spotted. Samples need first to be provided to the service using the function blobs.create() as per the example above. The more samples the better, although 1 sample per person is enough.
Output

Events will be pushed to your client following that format:

{
  "input_size" : [480, 640],
  "nb_faces" : 1,
  "faces" : [
    {
      "roi" : [345, 223, 34, 54],
      "roi_confidence" : 0.89,
      "names" : [
        {
          "key" : "jamel",
          "confidence" : 0.75
        },
        {
          "key" : "melissa",
          "confidence" : 0.10
        }
      ]
    }
  ]
}

• input_size : width and height of the input image in pixels (to be used as reference for roi outputs).
• nb_faces : number of faces detected in the given image
• roi : Region Of Interest containing [pt.x, pt.y, width, height], where pt is the upper left point of the rectangle outlining the detected face.
• roi_confidence : probability that a real face is indeed located at the given roi.
• key : the key identifying a given group of samples (as specified in the album input).
• confidence : probability that the corresponding person was spotted in the image / video stream.

Code Sample

requirements: opencv2, opencv2 python bindings

This code sample captures the stream of a web cam and displays the result of the face_recognition service in a GUI.

# -*- coding: utf-8 -*-
import StringIO

import cv2
import numpy as np
import angus.client

def main(stream_index):
    camera = cv2.VideoCapture(stream_index)
    camera.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640)
    camera.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480)
    camera.set(cv2.cv.CV_CAP_PROP_FPS, 10)

    if not camera.isOpened():
        print("Cannot open stream of index {}".format(stream_index))
        exit(1)

    print("Input stream is of resolution: {} x {}".format(camera.get(3), camera.get(4)))

    conn = angus.client.connect()
    service = conn.services.get_service("face_recognition", version=1)

    ### Choose here the appropriate pictures.
    ### Pictures given as samples for the album should only contain 1 visible face.
    ### You can provide the API with more than 1 photo for a given person.
    w1_s1 = conn.blobs.create(open("./images/gwenn.jpg", 'rb'))
    w2_s1 = conn.blobs.create(open("./images/aurelien.jpg", 'rb'))
    w3_s1 = conn.blobs.create(open("./images/sylvain.jpg", 'rb'))
    album = {'gwenn': [w1_s1], 'aurelien': [w2_s1], 'sylvain': [w3_s1]}

    service.enable_session({"album" : album})

    while camera.isOpened():
        ret, frame = camera.read()
        if not ret:
            break

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ret, buff = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
        buff = StringIO.StringIO(np.array(buff).tostring())

        job = service.process({"image": buff})
        res = job.result

        for face in res['faces']:
            x, y, dx, dy = face['roi']
            cv2.rectangle(frame, (x, y), (x+dx, y+dy), (0,255,0))
            if len(face['names']) > 0:
                name = face['names'][0]['key']
                cv2.putText(frame, "Name = {}".format(name), (x, y),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255))

        cv2.imshow('original', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    service.disable_session()
    camera.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    ### Web cam index might be different from 0 on your setup.
    ### To grab a given video file instead of the host computer cam, try:
    ### main("/path/to/myvideo.avi")
    main(0)

Gaze Analysis

What are people in front of my object looking at?

Getting Started

Using the Python SDK:

# -*- coding: utf-8 -*-
import angus.client
from pprint import pprint

conn = angus.client.connect()
service = conn.services.get_service('gaze_analysis', version=1)
job = service.process({'image': open('./macgyver.jpg', 'rb')})
pprint(job.result)

Input

The API takes a stream of 2D still images as input, of format jpg or png, without constraints on resolution. Note however that the bigger the resolution, the longer the API will take to process and give a result.

The function process() takes a dictionary as input, formatted as follows:

{'image' : file}

• image: a python File Object as returned for example by open() or a StringIO buffer.
Output

Events will be pushed to your client following that format:

{
  "input_size" : [480, 640],
  "nb_faces" : 1,
  "faces" : [
    {
      "roi" : [250, 142, 232, 232],
      "roi_confidence" : 0.89,
      "eye_left" : [123, 253],
      "eye_right" : [345, 253],
      "nose" : [200, 320],
      "head_yaw" : 0.03,
      "head_pitch" : 0.23,
      "head_roll" : 0.14,
      "gaze_yaw" : 0.05,
      "gaze_pitch" : 0.12
    }
  ]
}

• input_size : width and height of the input image in pixels (to be used as reference for roi outputs).
• nb_faces : number of faces detected in the given image
• roi : contains [pt.x, pt.y, width, height] where pt is the upper left point of the rectangle outlining the detected face.
• roi_confidence : an estimate of the probability that a real face is indeed located at the given roi.
• head_yaw, head_pitch, head_roll : head pose orientation in radians.
• gaze_yaw, gaze_pitch : gaze (eyes) orientation in radians.
• eye_left, eye_right, nose : the coordinates of the eyes and nose in the given image.

Code Sample

requirements: opencv2, opencv2 python bindings

This code sample retrieves the stream of a web cam and displays in a GUI the result of the gaze_analysis service.

# -*- coding: utf-8 -*-
import StringIO
from math import cos, sin

import cv2
import numpy as np
import angus.client

def main(stream_index):
    camera = cv2.VideoCapture(stream_index)
    camera.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640)
    camera.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480)
    camera.set(cv2.cv.CV_CAP_PROP_FPS, 10)

    if not camera.isOpened():
        print("Cannot open stream of index {}".format(stream_index))
        exit(1)

    print("Input stream is of resolution: {} x {}".format(camera.get(3), camera.get(4)))

    conn = angus.client.connect()
    service = conn.services.get_service('gaze_analysis', 1)
    service.enable_session()

    while camera.isOpened():
        ret, frame = camera.read()
        if not ret:
            break

        ### angus.ai computer vision services require gray images right now.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ret, buff = cv2.imencode(".jpg", gray, [cv2.IMWRITE_JPEG_QUALITY, 80])
        buff = StringIO.StringIO(np.array(buff).tostring())

        job = service.process({"image": buff})
        res = job.result

        for face in res['faces']:
            x, y, dx, dy = map(int, face['roi'])

            nose = face['nose']
            nose = (nose[0], nose[1])
            eyel = face['eye_left']
            eyel = (eyel[0], eyel[1])
            eyer = face['eye_right']
            eyer = (eyer[0], eyer[1])

            psi = face['head_roll']
            theta = - face['head_yaw']
            phi = face['head_pitch']

            ### head orientation
            length = 150
            xvec = int(length*(sin(phi)*sin(psi) - cos(phi)*sin(theta)*cos(psi)))
            yvec = int(- length*(sin(phi)*cos(psi) - cos(phi)*sin(theta)*sin(psi)))
            cv2.line(frame, nose, (nose[0]+xvec, nose[1]+yvec), (0, 140, 255), 3)

            psi = 0
            theta = - face['gaze_yaw']
            phi = face['gaze_pitch']

            ### gaze orientation
            length = 150

            xvec = int(length*(sin(phi)*sin(psi) - cos(phi)*sin(theta)*cos(psi)))
            yvec = int(- length*(sin(phi)*cos(psi) - cos(phi)*sin(theta)*sin(psi)))
            cv2.line(frame, eyel, (eyel[0]+xvec, eyel[1]+yvec), (0, 140, 0), 3)

            xvec = int(length*(sin(phi)*sin(psi) - cos(phi)*sin(theta)*cos(psi)))
            yvec = int(- length*(sin(phi)*cos(psi) - cos(phi)*sin(theta)*sin(psi)))
            cv2.line(frame, eyer, (eyer[0]+xvec, eyer[1]+yvec), (0, 140, 0), 3)

        cv2.imshow('original', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    ### Disabling session on the server
    service.disable_session()
    camera.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    ### Web cam index might be different from 0 on your setup.
    ### To grab a given video file instead of the host computer cam, try:
    ### main("/path/to/myvideo.avi")
    main(0)

Motion Detection

Is there anything moving in front of my object? Where exactly?
Getting Started

Using Angus python SDK:

# -*- coding: utf-8 -*-
import angus.client
from pprint import pprint

conn = angus.client.connect()
service = conn.services.get_service('motion_detection', version=1)
service.enable_session()

for i in range(200):
    job = service.process({'image': open('./photo-{}.jpg'.format(i), 'rb')})
    pprint(job.result)

service.disable_session()

Input

The API takes a stream of 2D still images as input, of format jpg or png, without constraints on resolution. Note however that the bigger the resolution, the longer the API will take to process and give a result.

The function process() takes a dictionary as input, formatted as follows:

{'image' : file}

• image: a python File Object as returned for example by open() or a StringIO buffer.

Output

Events will be pushed to your client following that format:

{
  "input_size" : [480, 640],
  "nb_targets": 1,
  "targets": [
    {
      "mean_position" : [34, 54],
      "mean_velocity" : [5, 10],
      "confidence" : 0.45
    }
  ]
}

• input_size : width and height of the input image in pixels.
• mean_position : [pt.x, pt.y] where pt is the center of gravity of the moving pixels.
• mean_velocity : [v.x, v.y] where v is the average velocity of the moving pixels.
• confidence : in [0, 1], measures how significant the motion is (a function of the number of keypoints moving in the same direction).

Code Sample

requirements: opencv2, opencv2 python bindings

This code sample retrieves the stream of a web cam and displays in a GUI the result of the motion_detection service.
# -*- coding: utf-8 -*- import StringIO import cv2 import numpy as np import angus.client def main(stream_index): camera = cv2.VideoCapture(stream_index) camera.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640) camera.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480) camera.set(cv2.cv.CV_CAP_PROP_FPS, 10) angus.ai Documentation, Release if not camera.isOpened(): print("Cannot open stream of index {}".format(stream_index)) exit(1) print("Input stream is of resolution: {} x {}".format(camera.get(3), camera. ˓→ get(4))) conn = angus.client.connect() service = conn.services.get_service("motion_detection", 1) service.enable_session() while camera.isOpened(): ret, frame = camera.read() if not ret: break gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) ret, buff = cv2.imencode(".jpg", gray, [cv2.IMWRITE_JPEG_QUALITY, 80]) buff = StringIO.StringIO(np.array(buff).tostring()) job = service.process({"image": buff}) res = job.result for target in res['targets']: x, y = map(int, target['mean_position']) vx, vy = map(int, target['mean_velocity']) cv2.circle(frame, (x, y), 5, (255,255,255)) cv2.line(frame, (x, y), (x + vx, y + vy), (255,255,255)) cv2.imshow('original', frame) if cv2.waitKey(1) & 0xFF == ord('q'): break service.disable_session() camera.release() cv2.destroyAllWindows() if __name__ == '__main__': ### Web cam index might be different from 0 on your setup. 
    ### To grab a given video file instead of the host computer cam, try:
    ### main("/path/to/myvideo.avi")
    main(0)

Text To Speech

This service generates a sound file (".wav") from any given text in the following languages:

• English (US)
• English (GB)
• German
• Spanish (ES)
• French
• Italian

Getting Started

Using Angus python SDK:

# -*- coding: utf-8 -*-
import angus.client

conn = angus.client.connect()
service = conn.services.get_service('text_to_speech', version=1)
job = service.process({'text': "Hi guys, how are you today?", 'lang' : "en-US"})

### The output wav file is available as a compressed (zlib), base64 string.
sound = job.result["sound"]

Input

The function process() takes a dictionary formatted as follows:

{'text' : "Hello guys", 'lang' : "en-US"}

• text: a string containing the text to be synthesized.
• lang: the code of the language to be used for synthesis. Languages currently available are:
– English (US) : en-US
– English (GB) : en-GB
– German : de-DE
– Spanish (ES) : es-ES
– French : fr-FR
– Italian : it-IT

Output

Events will be pushed to your client following the format below:

{
  "status" : 201,
  "sound" : "'eJzsvHdUFNm6N1yhEzQ ... jzf//+T/jj/A8b0r/9"
}

• status : the http status code of the request.
• sound : contains the synthesized sound file (.wav) as a compressed (zlib), base64 string. Please refer to the code sample below to see how to decode it in Python.

Code Sample

This code sample uses the Angus text_to_speech service to synthesize "Hi guys, how are you today?".
# -*- coding: utf-8 -*-
import angus.client
import base64
import zlib
import subprocess

def decode_output(sound, filename):
    sound = base64.b64decode(sound)
    sound = zlib.decompress(sound)
    with open(filename, "wb") as f:
        f.write(sound)

conn = angus.client.connect()
service = conn.services.get_service('text_to_speech', version=1)
job = service.process({'text': "Hi guys, how are you today?", 'lang' : "en-US"})

### The output wav file is available as a compressed (zlib), base64 string.
### Here, the string is decoded and played back by Linux "aplay".
decode_output(job.result["sound"], "output.wav")
subprocess.call(["/usr/bin/aplay", "./output.wav"])

Sound Detection

Is there any noticeable sound?

Getting Started

Using the Angus python SDK:

# -*- coding: utf-8 -*-
import angus.client
from pprint import pprint

conn = angus.client.connect()
service = conn.services.get_service('sound_detection', version=1)
job = service.process({'sound': open("./sound.wav", 'rb'), 'sensitivity': 0.7})
pprint(job.result)

Input

{'sound' : file, 'sensitivity' : 0.3}

• sound : a python File Object as returned for example by open() or a StringIO buffer describing a wav file with the following format: PCM 16bit, Mono, without constraints on sample rate.
• sensitivity : modifies the ability of the algorithms to detect quiet sounds. [0, 1]. The higher the value, the better the algorithm will detect quiet sounds, but the more sensitive it will be to background noise.

Output

Events will be pushed to your client following that format:

{
  "input_size" : 8192,
  "nb_events" : 2,
  "events" : [
    {
      "index" : 3454,
      "type" : 'sound_on'
    },
    {
      "index" : 6544,
      "type" : 'sound_off'
    }
  ]
}

• input_size : number of frames given as input (e.g. in a stereo file, 1 frame = 1 left sample + 1 right sample).
• nb_events : number of events detected in the given input buffer.
• index : the frame index in the given input buffer where the event has been detected.
• type : sound_on if the beginning of a sound is detected, sound_off if the end of a sound is detected. Note that an event of type sound_on is always followed by an event of type sound_off.

Code Sample

requirements: PyAudio

This code sample retrieves the audio stream of a microphone (e.g. a webcam's built-in mic) and displays the result of the sound_detection service.

# -*- coding: utf-8 -*-
import Queue
import StringIO
import wave
import time
import sys
from pprint import pprint

import pyaudio
import numpy as np

import angus.client

CHUNK = 8192
PYAUDIO_FORMAT = pyaudio.paInt16
NUMPY_FORMAT = np.int16
TARGET_RATE = 16000
TARGET_CHANNELS = 1

def list_inputs():
    p = pyaudio.PyAudio()
    for i in range(p.get_device_count()):
        info = p.get_device_info_by_index(i)
        if info['maxInputChannels'] > 0:
            print("Device index={} name={}".format(info['index'], info['name']))

def prepare(in_data, channels, rate):
    # Extract first channel
    in_data = np.fromstring(in_data, dtype=NUMPY_FORMAT)
    in_data = np.reshape(in_data, (CHUNK, channels))
    in_data = in_data[:,0]

    # Re-sample if needed (only for mono stream)
    srcx = np.arange(0, in_data.size, 1)
    tgtx = np.arange(0, in_data.size, float(rate) / float(TARGET_RATE))
    in_data = np.interp(tgtx, srcx, in_data).astype(NUMPY_FORMAT)

    return in_data.tostring()

def main(stream_index):
    p = pyaudio.PyAudio()

    # Device configuration
    conf = p.get_device_info_by_index(stream_index)

    channels = int(conf['maxInputChannels'])
    if channels < TARGET_CHANNELS:
        raise RuntimeError("Bad device, no input channel")

    rate = int(conf['defaultSampleRate'])
    if rate < TARGET_RATE:
        raise RuntimeError("Bad device, sample rate is too low")

    # Angus
    conn = angus.client.connect()
    service = conn.services.get_service('sound_detection', version=1)
    service.enable_session()

    # Record Process
    stream_queue = Queue.Queue()

    def chunk_callback(in_data, frame_count, time_info, status):
        in_data = prepare(in_data, channels, rate)
        stream_queue.put(in_data)
        return (in_data, pyaudio.paContinue)
stream = p.open(format=PYAUDIO_FORMAT, channels=channels, rate=rate, input=True, frames_per_buffer=CHUNK, input_device_index=stream_index, stream_callback=chunk_callback) stream.start_stream() # Get data and send to Angus.ai while True: nb_buffer_available = stream_queue.qsize() if nb_buffer_available > 0: print("nb buffer available = {}".format(nb_buffer_available)) if nb_buffer_available == 0: time.sleep(0.01) continue data = stream_queue.get() buff = StringIO.StringIO() wf = wave.open(buff, 'wb') wf.setnchannels(TARGET_CHANNELS) wf.setsampwidth(p.get_sample_size(PYAUDIO_FORMAT)) wf.setframerate(TARGET_RATE) wf.writeframes(data) wf.close() job = service.process( angus.ai Documentation, Release {'sound': StringIO.StringIO(buff.getvalue()), 'sensitivity': 0.7}) pprint(job.result) stream.stop_stream() stream.close() p.terminate() if __name__ == "__main__": if len(sys.argv) < 2: list_inputs() INDEX = raw_input("Please select a device number:") else: INDEX = sys.argv[1] try: main(int(INDEX)) except ValueError: print("Not a valid index") exit(1) Voice Detection (Beta) This service takes an audio stream as an input and tries to discriminate what is human voice and what is not. If detecting noise in general, and not specifically human voice, use Sound Detection instead. Getting Started Using the Angus python SDK: # -*- coding: utf-8 -*- import angus.client.cloud conn = angus.client.connect() service = conn.services.get_service('voice_detection', version=1) job = service.process({'sound': open("./sound.wav", 'rb'), 'sensitivity':0.7}) print job.result Input {'sound' : file, 'sensitivity' : 0.3} • sound : a python File Object as returned for example by open() or a StringIO buffer describing a wav file with the following format : PCM 16bit, Mono, without constraints on sample rate. • sensitivity : modifies the ability of the algorithms to detect quiet voices. [0, 1]. 
The higher the value, the better the algorithm will detect quiet voices, but the more sensitive it will be to background noise.

Output

Events will be pushed to your client following that format:

{
  "voice_activity" : "SILENCE"
}

• voice_activity : this field takes 4 different values: SILENCE when no voice is detected, VOICE when voice is detected, ON when a transition occurs between SILENCE and VOICE, and OFF when a transition occurs between VOICE and SILENCE.

Code Sample

requirements: PyAudio

This code sample retrieves the audio stream of a microphone and displays the result of the voice_detection service.

# -*- coding: utf-8 -*-
import Queue
import StringIO
import wave
import time
import angus.client
import pyaudio
import sys
import numpy as np

CHUNK = 8192
PYAUDIO_FORMAT = pyaudio.paInt16
NUMPY_FORMAT = np.int16

def list_inputs():
    p = pyaudio.PyAudio()
    for i in range(p.get_device_count()):
        info = p.get_device_info_by_index(i)
        if info['maxInputChannels'] > 0:
            print("Device index={} name={}".format(info['index'], info['name']))

def prepare(in_data, channels, rate):
    # Extract first channel
    in_data = np.fromstring(in_data, dtype=NUMPY_FORMAT)
    in_data = np.reshape(in_data, (CHUNK, channels))
    in_data = in_data[:,0]

    # Down sample if needed
    srcx = np.arange(0, in_data.size, 1)
    tgtx = np.arange(0, in_data.size, float(rate) / float(16000))
    in_data = np.interp(tgtx, srcx, in_data).astype(NUMPY_FORMAT)

    return in_data

def main(stream_index):
    p = pyaudio.PyAudio()

    # Device configuration
    conf = p.get_device_info_by_index(stream_index)

    channels = int(conf['maxInputChannels'])
    if channels < 1:
        raise RuntimeError("Bad device, no input channel")

    rate = int(conf['defaultSampleRate'])
    if rate < 16000:
        raise RuntimeError("Bad device, sample rate is too low")

    # Angus
    conn = angus.client.connect()
    service = conn.services.get_service('voice_detection', version=1)
    service.enable_session()

    # Record Process
    stream_queue =
Queue.Queue()

    def chunk_callback(in_data, frame_count, time_info, status):
        in_data = prepare(in_data, channels, rate)
        stream_queue.put(in_data.tostring())
        return (in_data, pyaudio.paContinue)

    stream = p.open(format=PYAUDIO_FORMAT,
                    channels=channels,
                    rate=rate,
                    input=True,
                    frames_per_buffer=CHUNK,
                    input_device_index=stream_index,
                    stream_callback=chunk_callback)
    stream.start_stream()

    # Get data and send to Angus.ai
    while True:
        nb_buffer_available = stream_queue.qsize()
        if nb_buffer_available == 0:
            time.sleep(0.01)
            continue

        data = stream_queue.get()
        buff = StringIO.StringIO()
        wf = wave.open(buff, 'wb')
        wf.setnchannels(1)
        wf.setsampwidth(p.get_sample_size(PYAUDIO_FORMAT))
        wf.setframerate(16000)
        wf.writeframes(data)
        wf.close()

        job = service.process(
            {'sound': StringIO.StringIO(buff.getvalue()), 'sensitivity': 0.2})
        res = job.result["voice_activity"]

        if res == "VOICE":
            print "\033[A \033[A"
            print "***************************"
            print "*****   VOICE !!!!   ******"
            print "***************************"

    stream.stop_stream()
    stream.close()
    p.terminate()

if __name__ == "__main__":
    if len(sys.argv) < 2:
        list_inputs()
        index = raw_input("Please select a device number:")
    else:
        index = sys.argv[1]
    try:
        index = int(index)
        main(index)
    except ValueError:
        print("Not a valid index")
        exit(1)

Sound Localization (Beta)

Where is the sound coming from?

Getting Started

Using the Angus python SDK:

# -*- coding: utf-8 -*-
import angus.client
from pprint import pprint

conn = angus.client.connect()
service = conn.services.get_service('sound_localization', version=1)
job = service.process({'sound': open("./sound.wav", 'rb'), 'baseline': 0.7,
                       'sensitivity': 0.5})
pprint(job.result)

Input

{'sound' : file, 'baseline' : 0.7, 'sensitivity' : 0.3}

• sound : a python File Object as returned for example by open() or a StringIO buffer describing a wav file with the following format: PCM 16bit, 48kHz, Stereo.
• baseline : distance between the 2 microphones of the array in meters.
angus.ai Documentation, Release • sensitivity : modifies the ability of the algorithms to locate quiet sounds. [0, 1]. The higher the value is, the better the algorithm will locate quiet sounds, but the more it will be sensitive to background noise. Output Events will be pushed to your client following that format: { "input_size" : 8192, "nb_sources" : 1, "sources" : [ { "index" : 345, "yaw" : 0.156, "confidence" : 0.53, } ] } • input_size : number of frame given as input (in a stereo file, 1 frame = 1 left sample + 1 right sample). • nb_sources : number of sound sources located. • yaw : angle of the sound source in radian as shown below: • confidence : an estimate of the probability that a real sound source is indeed located at the given yaw. Code Sample This sample assumes that you have a sound card able to record in stereo. requirements: PyAudio This code sample retrieve the audio stream of a recording device and display the result of the sound_localization service. # -*- coding: utf-8 -*- import Queue import StringIO import wave import time import sys from pprint import pprint import pyaudio import numpy as np import angus.client CHUNK = 8192 PYAUDIO_FORMAT = pyaudio.paInt16 NUMPY_FORMAT = np.int16 TARGET_RATE = 48000 TARGET_CHANNELS = 2 def list_inputs(): p = pyaudio.PyAudio() for i in range(p.get_device_count()): info = p.get_device_info_by_index(i) angus.ai Documentation, Release if info['maxInputChannels'] > 0: print("Device index={} name={}".format(info['index'], info['name'])) def prepare(in_data, channels, rate): # Extract first channel in_data = np.fromstring(in_data, dtype=NUMPY_FORMAT) in_data = np.reshape(in_data, (CHUNK, channels)) # Re-sample if needed srcx = np.arange(0, CHUNK, 1) tgtx = np.arange(0, CHUNK, float(rate) / float(TARGET_RATE)) print ((in_data[:,0]).size) left = np.interp(tgtx, srcx, in_data[:,0]).astype(NUMPY_FORMAT) right = np.interp(tgtx, srcx, in_data[:,1]).astype(NUMPY_FORMAT) print left.size print CHUNK c = np.empty((left.size + 
right.size), dtype=NUMPY_FORMAT)
    c[0::2] = left
    c[1::2] = right
    return c.tostring()

def main(stream_index):
    p = pyaudio.PyAudio()

    # Device configuration
    conf = p.get_device_info_by_index(stream_index)

    channels = int(conf['maxInputChannels'])
    if channels < TARGET_CHANNELS:
        raise RuntimeError("Bad device, no input channel")

    rate = int(conf['defaultSampleRate'])

    # Angus
    conn = angus.client.connect()
    service = conn.services.get_service('sound_localization', version=1)
    service.enable_session()

    # Record Process
    stream_queue = Queue.Queue()

    def chunk_callback(in_data, frame_count, time_info, status):
        in_data = prepare(in_data, channels, rate)
        stream_queue.put(in_data)
        return (in_data, pyaudio.paContinue)

    stream = p.open(format=PYAUDIO_FORMAT,
                    channels=channels,
                    rate=rate,
                    input=True,
                    frames_per_buffer=CHUNK,
                    input_device_index=stream_index,
                    stream_callback=chunk_callback)
    stream.start_stream()

    while True:
        nb_buffer_available = stream_queue.qsize()
        if nb_buffer_available > 0:
            print("nb buffer available = {}".format(nb_buffer_available))
        if nb_buffer_available == 0:
            time.sleep(0.01)
            continue

        data = stream_queue.get()
        buff = StringIO.StringIO()
        wf = wave.open(buff, 'wb')
        wf.setnchannels(TARGET_CHANNELS)
        wf.setsampwidth(p.get_sample_size(PYAUDIO_FORMAT))
        wf.setframerate(TARGET_RATE)
        wf.writeframes(data)
        wf.close()

        job = service.process(
            {'sound': StringIO.StringIO(buff.getvalue()), 'baseline': 0.14,
             'sensitivity': 0.7})
        pprint(job.result['sources'])

    stream.stop_stream()
    stream.close()
    p.terminate()

if __name__ == "__main__":
    if len(sys.argv) < 2:
        list_inputs()
        INDEX = raw_input("Please select a device number:")
    else:
        INDEX = sys.argv[1]
    try:
        main(int(INDEX))
    except ValueError:
        print("Not a valid index")
        exit(1)

Qrcode decoder

Do I see a QR code? What is its content?
Getting Started You can use this qrcode for example: angus.ai Documentation, Release Using Angus python SDK: # -*- coding: utf-8 -*- import angus.client from pprint import pprint conn = angus.client.connect() service = conn.services.get_service('qrcode_decoder', version=1) job = service.process({'image': open('./qrcode.jpg', 'rb')}) pprint(job.result) Input The API takes a stream of 2d still images as input, of format jpg or png, without constraints on resolution. Note however that the bigger the resolution, the longer the API will take to process and give a result. The function process() takes a dictionary as input formatted as follows: {'image' : file} • image: a python File Object as returned for example by open() or a StringIO buffer. angus.ai Documentation, Release Output Events will be pushed to your client following that format: { "type": "QRCODE", "data": "http://www.angus.ai" } • type : qrcode data type • data : content Code Sample requirements: opencv2, opencv2 python bindings This code sample retrieves the stream of a webcam and print on standard output the qrcode content data. # -*- coding: utf-8 -*- import StringIO import cv2 import numpy as np from pprint import pprint import angus.client def main(stream_index): camera = cv2.VideoCapture(stream_index) camera.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640) camera.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480) camera.set(cv2.cv.CV_CAP_PROP_FPS, 10) if not camera.isOpened(): print("Cannot open stream of index {}".format(stream_index)) exit(1) print("Input stream is of resolution: {} x {}".format(camera.get(3), camera. 
get(4)))

    conn = angus.client.connect()
    service = conn.services.get_service("qrcode_decoder", version=1)
    service.enable_session()

    while camera.isOpened():
        ret, frame = camera.read()
        if not ret:
            break

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ret, buff = cv2.imencode(".jpg", gray, [cv2.IMWRITE_JPEG_QUALITY, 80])
        buff = StringIO.StringIO(np.array(buff).tostring())

        job = service.process({"image": buff})
        if "data" in job.result:
            pprint(job.result["data"])

        cv2.imshow('original', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    service.disable_session()
    camera.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    ### Web cam index might be different from 0 on your setup.
    ### To grab a given video file instead of the host computer cam, try:
    ### main("/path/to/myvideo.avi")
    main(0)

Dummy

Is my configuration correct? I want to make my first call to the angus.ai cloud.

Getting Started

Using Angus python SDK:

# -*- coding: utf-8 -*-
import angus.client
from pprint import pprint

conn = angus.client.connect()
service = conn.services.get_service('dummy', version=1)
job = service.process({'echo': 'Hello world'})
pprint(job.result)

Input

The API takes an optional string as parameter and returns a result equal to that string.

The function process() takes a dictionary as input formatted as follows:

{'echo' : 'Hello world!'}

• echo: a python string or unicode object.

Output

The service simply returns the input parameter if defined, or the string "echo" if no parameter was given.

{'echo': 'Hello world!'}

• echo : a copy of the input string, or "echo" if none was defined.

Requirements

As you go through this tutorial, you will need:

• a computer. Every operating system is ok, provided that you can configure a Python or Java stack.
• a camera (e.g. webcam) plugged into that computer. USB and IP cameras are supported, although IP cams can be more challenging to interface. If you need help doing so, please contact us at <EMAIL>.
• a working internet connection.

Step 2 - Install our SDK

Create an account

To use Angus.ai services, you need to create an account. This can be done very easily by visiting https://console.angus.ai and filling the form shown below.

When done, you are ready to create your first camera stream as shown below.

Get credentials for your camera

After creating your personal account on https://console.angus.ai/, you will be asked to create a “stream”. This procedure will allow private “access_token” and “client_id” keys to be generated for you. This can be done by pressing the “Add a stream” button on the top right hand corner as shown below.

After clicking, you will be asked to choose between a free developer stream and a paying enterprise stream. Please note that the free developer stream is only for non commercial use and will block after 3 hours of video stream computed every month, as seen below.

For a non-restricted enterprise stream, you will need to enter a valid credit card number. Press “Continue” at the bottom of the page and you will soon get the following page. Press “Show Details” and take note of your client_id (called Login on the interface) and access_token (called Password on the interface) as they will be needed later on.

The credentials that you have just created will be used to configure the Angus.ai SDK. You are now ready to proceed to the next step.

Download and configure the SDK

Requirements

• The SDK is Python3 compatible but the documentation code snippets are only Python2 compatible.
• Also, you might want (not mandatory) to create a python virtual environment with virtualenv in order to install the sdk in there. To do so, please refer to the following virtualenv guide for more information.

Install the SDK

Open a terminal and install the angus python sdk with pip.
If you do not use virtualenv, you may need to be root, administrator or super user depending on your platform (use sudo on linux platforms).

$ pip install angus-sdk-python

Configure your SDK

You must configure your sdk with the keys you received by creating a stream here. These keys are used to authenticate the requests you are about to send. Your API credentials can be retrieved by clicking on “Show details” on the stream you just created.

In a terminal, type:

$ angusme
Please choose your gateway (current: https://gate.angus.ai):
Please copy/paste your client_id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Please copy/paste your access_token: <KEY>

Fill in the “client_id” prompt with the “login” given on the interface and the “access_token” prompt with the “password” given on the interface.

On Windows systems, if angusme does not work, please refer to the FAQ for more details.

You can check this setup went well by typing the following command and checking that our server sees you:

$ angusme -t
Server: https://gate.angus.ai
Status: OK

If this command gives you an error, check that you entered the right “client_id” and “access_token”. You can do this by re-typing “angusme” in a command prompt. If you need help, contact us here: <EMAIL>!

Step 3 - Pick your building block

What next?

Congratulations! You went through all the steps to use our building blocks.

• When time comes, you can plug in more cameras by creating additional streams as shown here (create-stream).
• If you need to deploy your system in a situation where internet bandwidth is a problem, please contact us at <EMAIL>.

For any issue, please contact the Angus.ai team at: <EMAIL>, and if possible, specify your operating system, Python version, as well as the error backtrace if any. Thanks!

Python SDK

Our SDKs are here to help you call the Angus.ai http API easily, without drafting the appropriate HTTP request yourself.
Installing and configuring one of our SDKs is needed to run: • the audience analytics client applications shown here (apps) • and/or the building blocks code samples documented here (Building Blocks) Don’t want to use Python? If the SDK in the language of your choice is not provided here, you can: • contact us at <EMAIL>. • or use our http API directly by referring to our full API reference (Main API Reference) Requirements • The SDK is Python3 compatible but the documentation code snippets are only Python2 compatible. • Also, you might want (not mandatory) to create a python virtual environnement with virtualenv in order to install the sdk in there. angus.ai Documentation, Release To do so, please refer to the following virtualenv guide for more information. Install the SDK Open a terminal and install the angus python sdk with pip. If you do not use virtualenv you may need to be root, administrator or super user depending on your platform (use sudo on linux platform). $ pip install angus-sdk-python Configure your SDK You must configure your sdk with the keys you received by creating a stream here. These keys are used to authenticate the requests you are about to send. Your API credentials can be retrieved by clicking on “Show details” on the stream you just created. In a terminal, type: $ angusme Please choose your gateway (current: https://gate.angus.ai): Please copy/paste your client_id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Please copy/paste your access_token: x<KEY> Fill in the “client_id” prompt with the “login” given on the interface and the “access_token” prompt with the “pass- word” given on the interface. On Windows system, if angusme does not work, please refer to the FAQ for more details. You can check this setup went well by typing the following command and checking that our server sees you: $ angusme -t Server: https://gate.angus.ai Status: OK If this command gives you an error, check that you enter the right “client_id” and “acccess_token”. 
You can do this by re-typing “angusme” in a command prompt. If you need help, contact us here : <EMAIL> ! Access your sensor stream Angus.ai API is specifically designed to process a video stream. This section will show you a way to access the stream of a webcam plugged to your computer by using OpenCV2. Note that the following code sample can be adapted to process a video file instead. Note also that OpenCV2 is not an absolute pre-requisite, the following code sample can easily be adapted to be used with any other way of retrieving successive frames from a video stream. If you need assistance, please contact us at <EMAIL> Prerequisite • you have a working webcam plugged into your PC • you have installed OpenCV2 and OpenCV2 python bindings. Please refer to OpenCV documentation to pro- ceed, or check FAQ chapter. angus.ai Documentation, Release On Debian-like platform, OpenCV2 comes pre-installed, you just need to run $ sudo apt-get install python-opencv Then copy this code snippet in a file and run it. # -*- coding: utf-8 -*- import cv2 def main(stream_index): camera = cv2.VideoCapture(stream_index) camera.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640) camera.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480) camera.set(cv2.cv.CV_CAP_PROP_FPS, 10) if not camera.isOpened(): print("Cannot open stream of index {}".format(stream_index)) exit(1) print("Video stream is of resolution {} x {}".format(camera.get(3), camera. ˓→ get(4))) while camera.isOpened(): ret, frame = camera.read() if not ret: break cv2.imshow('original', frame) if cv2.waitKey(1) & 0xFF == ord('q'): break camera.release() cv2.destroyAllWindows() if __name__ == '__main__': ### Web cam index might be different from 0 on your setup. ### To grab a given video file instead of the host computer cam, try: ### main("/path/to/myvideo.avi") main(0) $ python yourcopiedfile.py Check that your web cam video stream is correctly displayed on your screen. 
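The samples in this documentation JPEG-encode each frame and wrap the resulting bytes in an in-memory file object before handing them to process(). On Python 2 that object is StringIO.StringIO; io.BytesIO works on both Python 2.7 and Python 3 and is a drop-in replacement here. Below is a minimal sketch of that buffering step; jpeg_bytes is a hypothetical placeholder standing in for the output of cv2.imencode, not a real JPEG:

```python
import io

def to_file_object(jpeg_bytes):
    """Wrap already-encoded image bytes in a file-like object,
    as expected by service.process({'image': ...})."""
    return io.BytesIO(jpeg_bytes)

# Placeholder bytes standing in for cv2.imencode(".jpg", gray)[1].tostring()
jpeg_bytes = b"\xff\xd8\xff\xe0 fake jpeg payload \xff\xd9"

buff = to_file_object(jpeg_bytes)
print(buff.read() == jpeg_bytes)  # True: the service reads the same bytes back
```

With OpenCV, the argument would be the encoded buffer itself, e.g. to_file_object(cv2.imencode(".jpg", gray, [cv2.IMWRITE_JPEG_QUALITY, 80])[1].tostring()).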
You are set up to start using Angus.ai services:

• our plug and play audience analytics solution here (Audience Analytics).
• one of our building blocks here (Building Blocks).

Retrieve your data

API Authentication

This documentation is aimed at developers wanting to programmatically retrieve the data computed by the Angus.ai audience analytics solution through our Data REST API (see diagram below).

Prerequisite

This procedure requires that you already have a properly configured audience analytics client application running on your device. If this is not the case, please follow our step by step instructions here: (Audience Analytics).

API Authentication Info

You need a JSON Web Token (“JWT”) in order to securely call the data api endpoint. Your personal JWT is provided by programmatically calling the appropriate endpoint documented below.

Endpoint and parameters

To retrieve a JWT token, you have to make a POST request to: https://console.angus.ai/api-token-authstream

• Description: retrieve a JWT token associated with your stream
• Authentication: none
• Parameters:
– username: your console login (email)
– client_id: the client_id associated with your stream
– access_token: the access_token associated with your stream
• Response Code: 200 OK
• Response: JSON

Example

Request:

$ curl -X POST -H "Content-Type: application/json" -d '{"username": "<EMAIL>", "client_id": "3bd15f50-c69f-11e5-ae3c-0242ad110002", "access_token": "543eb007-1bfe-89d7-b092-e127a78fe91c"}' https://console.angus.ai/api-token-authstream/

Response:

{
  "token": "<KEY><KEY>K70YXQYMAcdeW7dfscFGxUhenoXXGBAQTiWhNv-9cVc"
}

Once provided, you will need to put this token in an HTTP header Authorization: Bearer [YOURJWTTOKEN] (see the Python example in Retrieving the data) in every HTTP request you make.
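The Data API reference notes that from_date must be percent-encoded (2017-09-03T05:45:00+0200 becomes 2017-09-03T05%3A45%3A00%2B0200). Here is a small sketch of building the entities request URL and the Authorization header with the standard library (Python 3 urllib.parse shown; the docs' own examples are Python 2 and use the requests module, which applies this encoding for you):

```python
from urllib.parse import urlencode

ENTITIES_URL = "https://data.angus.ai/api/1/entities"

def build_request(token, metrics, from_date, time="global"):
    """Build the entities URL and headers; from_date is an ISO-8601 string."""
    params = urlencode({
        "metrics": ",".join(metrics),  # ',' becomes %2C, which decodes back to ','
        "from_date": from_date,        # urlencode percent-encodes ':' and '+'
        "time": time,
    })
    headers = {"Authorization": "Bearer {}".format(token)}
    return "{}?{}".format(ENTITIES_URL, params), headers

url, headers = build_request("MYTOKEN", ["interested", "passing_by"],
                             "2017-09-03T05:45:00+0200", time="by_day")
print(url)
print(headers["Authorization"])  # Bearer MYTOKEN
```

The '+' in the timezone offset is the important case: left unencoded, the server would decode it as a space and reject the date.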
Retrieving the data Once you obtained your personal JWT, you can start retrieving your data by calling the endpoint documented in the Data API Reference page. Python example For this example, you will need to install requests and pytz modules import requests import pytz import datetime import json def get_token(): data = { "username": "<EMAIL>", "client_id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "access_token": "<KEY>", } req = requests.post('https://console.angus.ai/api-token-authstream/', json=data) req.raise_for_status() req = req.json() return req['token'] def get(token, metrics, from_date, to_date, size): entities_url = 'https://data.angus.ai/api/1/entities' params = { "metrics": ",".join(metrics), "from_date": from_date.isoformat(), "to_date": to_date.isoformat(), angus.ai Documentation, Release "time": size, } headers = { "Authorization": "Bearer {}".format(token) } req = requests.get(entities_url, params=params, headers=headers) req.raise_for_status() req = req.json() return req def get_overall(token): to_date = datetime.datetime.now(pytz.UTC) from_date = to_date - datetime.timedelta(hours=24) metrics = [ "passing_by", "interested", "stop_time", "attention_time", ] return get(token, metrics, from_date, to_date, "global") def main(): token = get_token() overall = get_overall(token) print(json.dumps(overall, indent=2)) if __name__ == "__main__": main() The metrics 1. Passing By • Keyword: passing_by • Schema: { "value": 0 } • Description: Count of people who passed (not necessarily stopping or looking) in front of the camera during at least 1 second. 2. Interested • Keyword: interested • Schema: angus.ai Documentation, Release { "value": 0 } • Description: Count of people who stopped for at least 3 seconds and looked in the direction of the camera more than 1 second. 3. Average stopping time • Keyword: stop_time • Schema: { "value": null } • Description: Average time a person, among the “interested” people (see above), stay still in front of the camera. 
(in seconds)

4. Average attention time

• Keyword: attention_time
• Schema:

{ "value": null }

• Description: Average time a person, among the “interested” people (see above), spends looking at the camera. (in seconds)

5. Category

• Keyword: category
• Schema:

{
  "senior_female": 0,
  "senior_male": 0,
  "young_female": 0,
  "young_male": 0
}

• Description: Population segmentation counts of all the “interested” people (see above) for each category. Note: when no age or gender has been found for an interested person, it will not be included in any of these categories.

6. Gender

• Keyword: gender
• Schema:

{
  "?": 0,
  "female": 0,
  "male": 0
}

• Description: The gender distribution of all the “interested” people (see above).

Data API Reference

Get entities

Api endpoint for fetching filtered aggregated data (called entities)

• URL: /api/1/entities
• Method: GET
• URL Params

Required:
– metrics=[string]: a list of desired information from the db (comma separated, without whitespace)
* Possible values: interested, passing_by, stop_time, attention_time, category, gender, satisfaction
* Default value: none
– from_date=[iso-date]: the date to start the search from, in iso format, urlencoded (ex: 2017-09-03T05:45:00+0200 becomes 2017-09-03T05%3A45%3A00%2B0200)
* Default value: none

Optional:
– to_date=[iso-date]: the date to end the search at, in iso format, urlencoded
* Default value: the current date
– time=[string]: a time bucket to aggregate data into
* Possible values: by_hour, by_day, global
* Default value: global
– page=[integer]: a page number if there are enough results for pagination.
* Default value: 1
• Success Response:
– Code: 200
– Json content:
* entities=[json]: the actual data returned by the api
* time=[str]: the actual time bucket used to return data
* from_date=[iso-date]: the date from which the search has been made
* to_date=[iso-date]: the date to which the search has been made
* total_results=[integer]: the total number of results for this search
* nb_of_pages=[integer]: the total number of pages for paginated results
* page=[integer]: the current retrieved page
* next_page=[str]: the complete URL to call to get the results for the next page

{
  "entities": {
    "date1": {
      "metric1": {
        "value": 3
      }
    },
    "date2": {
      "metric1": {
        "value": 5
      }
    },
    ...
  },
  "from_date": "2017-09-03T05:45:00+02:00",
  "to_date": "2017-09-14T16:53:11+02:00",
  "time": "by_day",
  "total_results": 63,
  "nb_of_pages": 2,
  "next_page": "https://data.angus.ai/api/1/........&page=2",
  "page": 1
}

• Error Response:
– Code: 401 UNAUTHORIZED
– Explanation: If no "Authorization" header is provided, or if there is a problem with the JWT token, the error message will explain the problem.
OR
– Code: 400 BAD REQUEST
– Explanation: If the request is not well formed (for instance, a required param is missing) or there is any other kind of problem with the request, the error message should be self-explanatory.

• Sample Call:
Here is an example of a request for all the metrics from September 3rd 2017, 5:45 GMT+2 until now, using a time bucket of "one day":

$ curl -X GET -H 'Authorization: Bearer <KEY>' 'https://data.angus.ai/api/1/entities?metrics=satisfaction,gender,category,passing_by,interested&from_date=2017-09-03T05%3A45%3A00%2B0200&time=by_day'

CHAPTER 3

FAQ

Cameras / Images Requirements

Do I need a specific camera?
No, our solutions are made to work with any camera (IP cam or USB webcam).
At Angus.ai, we use $50 Logitech USB webcams on a daily basis, with no problems at all.

What are the supported image formats?
The supported formats are JPEG and PNG.

What image resolution should I use?
640x480 (aka VGA) images are a good start. Using bigger images increases the ability of the system to detect faces that are further away from the camera, but also leads to bigger latencies.

What frame rate should I use?
To ensure proper analysis from our services, make sure to provide about 10 frames per second.

Angus SDK, Python, OpenCV

What are the requirements to run Angus SDKs?
None: the SDKs come with their dependencies (managed by pip). However, in order to access your webcam stream, you will need a dependency that is not packaged in our SDK. We tend to use OpenCV a lot for this (see the questions below).

Is the Python SDK Python 3 compatible?
Yes, it is. But the documentation code snippets and OpenCV2 are Python 2 only. Sorry for the inconvenience; the Python 3 documentation is in progress.

How do I install OpenCV2 and its Python bindings on Debian-like systems?
Please use:
$ apt-get install python-opencv

How do I install OpenCV2 on other systems?
Please follow the official documentation here. For Windows, check the complete guide in this FAQ.

Windows related questions

How can I install pip on Windows?
Pip is installed by default when you install Python 2.7.x; please use the latest Python 2.x version available.

How can I run all Python code snippets on Windows?
Please use the latest Python 2.x version (with pip), 2.7.12.
The Windows installer puts Python in C:\Python27 by default; if you chose another directory, replace "C:\Python27" with your chosen directory in the following instructions.

In a Command Prompt, go to Python's \Scripts directory:
$ cd C:\Python27\Scripts

Install numpy and the Angus Python SDK:
$ pip install numpy angus-sdk-python

Configure the Angus SDK:
$ cd C:\Python27
$ python Scripts\angusme

To install OpenCV, download OpenCV for Windows from http://opencv.org/ and execute (or unzip) it. Copy <opencv_directory>\build\python\2.7\[x86|x64]\cv2.pyd into C:\Python27\Lib. Now you can run all Python snippets of the documentation.

Message "Input does not appear to be valid...." on Windows?
Make sure you use binary file mode when opening images:
open("/path/to/your/image.png", "rb")
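The urlencoding rule for from_date/to_date shown in the Data API Reference can be reproduced with Python's standard library. The sketch below is illustrative only: build_entities_query is a hypothetical helper, not part of the Angus.ai SDK, and requests performs this encoding automatically when you pass params= as in the Python example above.

```python
from urllib.parse import quote, urlencode

# Hypothetical helper (not part of the Angus.ai SDK): builds the query string
# for GET /api/1/entities, with the urlencoded iso dates the API expects.
def build_entities_query(metrics, from_date_iso, to_date_iso, time_bucket="global", page=1):
    return urlencode({
        "metrics": ",".join(metrics),   # comma separated, no whitespace
        "from_date": from_date_iso,
        "to_date": to_date_iso,
        "time": time_bucket,            # by_hour, by_day or global
        "page": page,
    })

# The encoding from the reference: ':' becomes %3A and '+' becomes %2B.
print(quote("2017-09-03T05:45:00+0200"))  # 2017-09-03T05%3A45%3A00%2B0200
```

When a response reports nb_of_pages greater than 1, follow the next_page URL (or increment page) until every page has been retrieved.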
Package ‘paws.database’

September 11, 2023

Title 'Amazon Web Services' Database Services
Version 0.4.0
Description Interface to 'Amazon Web Services' database services, including 'Relational Database Service' ('RDS'), 'DynamoDB' 'NoSQL' database, and more <https://aws.amazon.com/>.
License Apache License (>= 2.0)
URL https://github.com/paws-r/paws
BugReports https://github.com/paws-r/paws/issues
Imports paws.common (>= 0.6.0)
Suggests testthat
Encoding UTF-8
RoxygenNote 7.2.3
Collate 'dax_service.R' 'dax_interfaces.R' 'dax_operations.R' 'docdb_service.R' 'docdb_interfaces.R' 'docdb_operations.R' 'dynamodb_service.R' 'dynamodb_interfaces.R' 'dynamodb_operations.R' 'dynamodbstreams_service.R' 'dynamodbstreams_interfaces.R' 'dynamodbstreams_operations.R' 'elasticache_service.R' 'elasticache_interfaces.R' 'elasticache_operations.R' 'keyspaces_service.R' 'keyspaces_interfaces.R' 'keyspaces_operations.R' 'lakeformation_service.R' 'lakeformation_interfaces.R' 'lakeformation_operations.R' 'memorydb_service.R' 'memorydb_interfaces.R' 'memorydb_operations.R' 'neptune_service.R' 'neptune_interfaces.R' 'neptune_operations.R' 'qldb_service.R' 'qldb_interfaces.R' 'qldb_operations.R' 'qldbsession_service.R' 'qldbsession_interfaces.R' 'qldbsession_operations.R' 'rds_service.R' 'rds_operations.R' 'rds_custom.R' 'rds_interfaces.R' 'rdsdataservice_service.R' 'rdsdataservice_interfaces.R' 'rdsdataservice_operations.R' 'redshift_service.R' 'redshift_interfaces.R' 'redshift_operations.R' 'redshiftdataapiservice_service.R' 'redshiftdataapiservice_interfaces.R' 'redshiftdataapiservice_operations.R' 'redshiftserverless_service.R' 'redshiftserverless_interfaces.R' 'redshiftserverless_operations.R' 'reexports_paws.common.R' 'simpledb_service.R' 'simpledb_interfaces.R' 'simpledb_operations.R' 'timestreamquery_service.R' 'timestreamquery_interfaces.R' 'timestreamquery_operations.R' 'timestreamwrite_service.R' 'timestreamwrite_interfaces.R' 'timestreamwrite_operations.R'
NeedsCompilation no
Author <NAME> [aut], <NAME> [aut], <NAME> [cre], Amazon.com, Inc. [cph]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-09-11 18:10:05 UTC

R topics documented:
dax, docdb, dynamodb, dynamodbstreams, elasticache, keyspaces, lakeformation, memorydb, neptune, qldb, qldbsession, rds, rdsdataservice, redshift, redshiftdataapiservice, redshiftserverless, simpledb, timestreamquery, timestreamwrite

dax Amazon DynamoDB Accelerator (DAX)

Description
DAX is a managed caching service engineered for Amazon DynamoDB. DAX dramatically speeds up database reads by caching frequently-accessed data from DynamoDB, so applications can access that data with sub-millisecond latency. You can create a DAX cluster easily, using the AWS Management Console. With a few simple modifications to your code, your application can begin taking advantage of the DAX cluster and realize significant improvements in read performance.

Usage
dax(config = list(), credentials = list(), endpoint = NULL, region = NULL)

Arguments
config Optional configuration of credentials, endpoint, and/or region.
• credentials:
– creds:
* access_key_id: AWS access key ID
* secret_access_key: AWS secret access key
* session_token: AWS temporary session token
– profile: The name of a profile to use. If not given, then the default profile is used.
– anonymous: Set anonymous credentials.
– endpoint: The complete URL to use for the constructed client.
– region: The AWS Region used in instantiating the client.
• close_connection: Immediately close all HTTP connections.
• timeout: The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.
• s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY.
• sts_regional_endpoint: Set sts regional endpoint resolver to regional or legacy https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html
credentials Optional credentials shorthand for the config parameter
• creds:
– access_key_id: AWS access key ID
– secret_access_key: AWS secret access key
– session_token: AWS temporary session token
• profile: The name of a profile to use. If not given, then the default profile is used.
• anonymous: Set anonymous credentials.
endpoint Optional shorthand for complete URL to use for the constructed client.
region Optional shorthand for AWS Region used in instantiating the client.

Value
A client for the service. You can call the service’s operations using syntax like svc$operation(...), where svc is the name you’ve assigned to the client. The available operations are listed in the Operations section.

Service syntax
svc <- dax(
  config = list(
    credentials = list(
      creds = list(
        access_key_id = "string",
        secret_access_key = "string",
        session_token = "string"
      ),
      profile = "string",
      anonymous = "logical"
    ),
    endpoint = "string",
    region = "string",
    close_connection = "logical",
    timeout = "numeric",
    s3_force_path_style = "logical",
    sts_regional_endpoint = "string"
  ),
  credentials = list(
    creds = list(
      access_key_id = "string",
      secret_access_key = "string",
      session_token = "string"
    ),
    profile = "string",
    anonymous = "logical"
  ),
  endpoint = "string",
  region = "string"
)

Operations
create_cluster Creates a DAX cluster
create_parameter_group Creates a new parameter group
create_subnet_group Creates a new subnet group
decrease_replication_factor Removes one or more nodes from a DAX cluster
delete_cluster Deletes a previously provisioned DAX cluster
delete_parameter_group Deletes the specified parameter group
delete_subnet_group Deletes a subnet group
describe_clusters Returns information about all provisioned DAX clusters if no cluster identifier is specified, or
describe_default_parameters Returns the default system
parameter information for the DAX caching software
describe_events Returns events related to DAX clusters and parameter groups
describe_parameter_groups Returns a list of parameter group descriptions
describe_parameters Returns the detailed parameter list for a particular parameter group
describe_subnet_groups Returns a list of subnet group descriptions
increase_replication_factor Adds one or more nodes to a DAX cluster
list_tags List all of the tags for a DAX cluster
reboot_node Reboots a single node of a DAX cluster
tag_resource Associates a set of tags with a DAX resource
untag_resource Removes the association of tags from a DAX resource
update_cluster Modifies the settings for a DAX cluster
update_parameter_group Modifies the parameters of a parameter group
update_subnet_group Modifies an existing subnet group

Examples
## Not run:
svc <- dax()
svc$create_cluster(
  Foo = 123
)
## End(Not run)

docdb Amazon DocumentDB with MongoDB compatibility

Description
Amazon DocumentDB is a fast, reliable, and fully managed database service. Amazon DocumentDB makes it easy to set up, operate, and scale MongoDB-compatible databases in the cloud. With Amazon DocumentDB, you can run the same application code and use the same drivers and tools that you use with MongoDB.

Usage
docdb(config = list(), credentials = list(), endpoint = NULL, region = NULL)

Arguments
config Optional configuration of credentials, endpoint, and/or region.
• credentials:
– creds:
* access_key_id: AWS access key ID
* secret_access_key: AWS secret access key
* session_token: AWS temporary session token
– profile: The name of a profile to use. If not given, then the default profile is used.
– anonymous: Set anonymous credentials.
– endpoint: The complete URL to use for the constructed client.
– region: The AWS Region used in instantiating the client.
• close_connection: Immediately close all HTTP connections.
• timeout: The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.
• s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY.
• sts_regional_endpoint: Set sts regional endpoint resolver to regional or legacy https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html
credentials Optional credentials shorthand for the config parameter
• creds:
– access_key_id: AWS access key ID
– secret_access_key: AWS secret access key
– session_token: AWS temporary session token
• profile: The name of a profile to use. If not given, then the default profile is used.
• anonymous: Set anonymous credentials.
endpoint Optional shorthand for complete URL to use for the constructed client.
region Optional shorthand for AWS Region used in instantiating the client.

Value
A client for the service. You can call the service’s operations using syntax like svc$operation(...), where svc is the name you’ve assigned to the client. The available operations are listed in the Operations section.
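The credentials, endpoint, and region arguments are only shorthand for the matching entries of config. One plausible reading of that merge, sketched in Python rather than R for brevity (resolve_config is a hypothetical illustration, not the actual paws implementation, and the precedence shown is an assumption the manual does not spell out):

```python
# Hypothetical sketch of the shorthand-vs-config merge described above.
# Assumption (not stated in the manual): an explicit config entry wins
# over the shorthand argument for the same key.
def resolve_config(config=None, credentials=None, endpoint=None, region=None):
    resolved = dict(config or {})
    shorthand = {"credentials": credentials, "endpoint": endpoint, "region": region}
    for key, value in shorthand.items():
        if value is not None:
            resolved.setdefault(key, value)  # fill only the keys config left unset
    return resolved

print(resolve_config({"region": "eu-west-1"}, region="us-east-1"))  # {'region': 'eu-west-1'}
```

Under this reading, passing region = "us-east-1" alone is simply a shorter way of writing config = list(region = "us-east-1").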
Service syntax
svc <- docdb(
  config = list(
    credentials = list(
      creds = list(
        access_key_id = "string",
        secret_access_key = "string",
        session_token = "string"
      ),
      profile = "string",
      anonymous = "logical"
    ),
    endpoint = "string",
    region = "string",
    close_connection = "logical",
    timeout = "numeric",
    s3_force_path_style = "logical",
    sts_regional_endpoint = "string"
  ),
  credentials = list(
    creds = list(
      access_key_id = "string",
      secret_access_key = "string",
      session_token = "string"
    ),
    profile = "string",
    anonymous = "logical"
  ),
  endpoint = "string",
  region = "string"
)

Operations
add_source_identifier_to_subscription Adds a source identifier to an existing event notification subscription
add_tags_to_resource Adds metadata tags to an Amazon DocumentDB resource
apply_pending_maintenance_action Applies a pending maintenance action to a resource (for example, to an Amaz
copy_db_cluster_parameter_group Copies the specified cluster parameter group
copy_db_cluster_snapshot Copies a snapshot of a cluster
create_db_cluster Creates a new Amazon DocumentDB cluster
create_db_cluster_parameter_group Creates a new cluster parameter group
create_db_cluster_snapshot Creates a snapshot of a cluster
create_db_instance Creates a new instance
create_db_subnet_group Creates a new subnet group
create_event_subscription Creates an Amazon DocumentDB event notification subscription
create_global_cluster Creates an Amazon DocumentDB global cluster that can span multiple multip
delete_db_cluster Deletes a previously provisioned cluster
delete_db_cluster_parameter_group Deletes a specified cluster parameter group
delete_db_cluster_snapshot Deletes a cluster snapshot
delete_db_instance Deletes a previously provisioned instance
delete_db_subnet_group Deletes a subnet group
delete_event_subscription Deletes an Amazon DocumentDB event notification subscription
delete_global_cluster Deletes a global cluster
describe_certificates Returns a list of certificate authority (CA) certificates provided by
Amazon Do
describe_db_cluster_parameter_groups Returns a list of DBClusterParameterGroup descriptions
describe_db_cluster_parameters Returns the detailed parameter list for a particular cluster parameter group
describe_db_clusters Returns information about provisioned Amazon DocumentDB clusters
describe_db_cluster_snapshot_attributes Returns a list of cluster snapshot attribute names and values for a manual DB c
describe_db_cluster_snapshots Returns information about cluster snapshots
describe_db_engine_versions Returns a list of the available engines
describe_db_instances Returns information about provisioned Amazon DocumentDB instances
describe_db_subnet_groups Returns a list of DBSubnetGroup descriptions
describe_engine_default_cluster_parameters Returns the default engine and system parameter information for the cluster da
describe_event_categories Displays a list of categories for all event source types, or, if specified, for a spe
describe_events Returns events related to instances, security groups, snapshots, and DB param
describe_event_subscriptions Lists all the subscription descriptions for a customer account
describe_global_clusters Returns information about Amazon DocumentDB global clusters
describe_orderable_db_instance_options Returns a list of orderable instance options for the specified engine
describe_pending_maintenance_actions Returns a list of resources (for example, instances) that have at least one pendi
failover_db_cluster Forces a failover for a cluster
list_tags_for_resource Lists all tags on an Amazon DocumentDB resource
modify_db_cluster Modifies a setting for an Amazon DocumentDB cluster
modify_db_cluster_parameter_group Modifies the parameters of a cluster parameter group
modify_db_cluster_snapshot_attribute Adds an attribute and values to, or removes an attribute and values from, a ma
modify_db_instance Modifies settings for an instance
modify_db_subnet_group Modifies an existing subnet group
modify_event_subscription Modifies an
existing Amazon DocumentDB event notification subscription
modify_global_cluster Modify a setting for an Amazon DocumentDB global cluster
reboot_db_instance You might need to reboot your instance, usually for maintenance reasons
remove_from_global_cluster Detaches an Amazon DocumentDB secondary cluster from a global cluster
remove_source_identifier_from_subscription Removes a source identifier from an existing Amazon DocumentDB event not
remove_tags_from_resource Removes metadata tags from an Amazon DocumentDB resource
reset_db_cluster_parameter_group Modifies the parameters of a cluster parameter group to the default value
restore_db_cluster_from_snapshot Creates a new cluster from a snapshot or cluster snapshot
restore_db_cluster_to_point_in_time Restores a cluster to an arbitrary point in time
start_db_cluster Restarts the stopped cluster that is specified by DBClusterIdentifier
stop_db_cluster Stops the running cluster that is specified by DBClusterIdentifier

Examples
## Not run:
svc <- docdb()
svc$add_source_identifier_to_subscription(
  Foo = 123
)
## End(Not run)

dynamodb Amazon DynamoDB

Description
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don’t have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of request traffic. You can scale up or scale down your tables’ throughput capacity without downtime or performance degradation, and use the Amazon Web Services Management Console to monitor resource utilization and performance metrics.
DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid state disks (SSDs) and automatically replicated across multiple Availability Zones in an Amazon Web Services Region, providing built-in high availability and data durability.

Usage
dynamodb(config = list(), credentials = list(), endpoint = NULL, region = NULL)

Arguments
config Optional configuration of credentials, endpoint, and/or region.
• credentials:
– creds:
* access_key_id: AWS access key ID
* secret_access_key: AWS secret access key
* session_token: AWS temporary session token
– profile: The name of a profile to use. If not given, then the default profile is used.
– anonymous: Set anonymous credentials.
– endpoint: The complete URL to use for the constructed client.
– region: The AWS Region used in instantiating the client.
• close_connection: Immediately close all HTTP connections.
• timeout: The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.
• s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY.
• sts_regional_endpoint: Set sts regional endpoint resolver to regional or legacy https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html
credentials Optional credentials shorthand for the config parameter
• creds:
– access_key_id: AWS access key ID
– secret_access_key: AWS secret access key
– session_token: AWS temporary session token
• profile: The name of a profile to use. If not given, then the default profile is used.
• anonymous: Set anonymous credentials.
endpoint Optional shorthand for complete URL to use for the constructed client.
region Optional shorthand for AWS Region used in instantiating the client.

Value
A client for the service.
You can call the service’s operations using syntax like svc$operation(...), where svc is the name you’ve assigned to the client. The available operations are listed in the Operations section.

Service syntax
svc <- dynamodb(
  config = list(
    credentials = list(
      creds = list(
        access_key_id = "string",
        secret_access_key = "string",
        session_token = "string"
      ),
      profile = "string",
      anonymous = "logical"
    ),
    endpoint = "string",
    region = "string",
    close_connection = "logical",
    timeout = "numeric",
    s3_force_path_style = "logical",
    sts_regional_endpoint = "string"
  ),
  credentials = list(
    creds = list(
      access_key_id = "string",
      secret_access_key = "string",
      session_token = "string"
    ),
    profile = "string",
    anonymous = "logical"
  ),
  endpoint = "string",
  region = "string"
)

Operations
batch_execute_statement This operation allows you to perform batch reads or writes on data stored in Dynam
batch_get_item The BatchGetItem operation returns the attributes of one or more items from one or
batch_write_item The BatchWriteItem operation puts or deletes multiple items in one or more tables
create_backup Creates a backup for an existing table
create_global_table Creates a global table from an existing table
create_table The CreateTable operation adds a new table to your account
delete_backup Deletes an existing backup of a table
delete_item Deletes a single item in a table by primary key
delete_table The DeleteTable operation deletes a table and all of its items
describe_backup Describes an existing backup of a table
describe_continuous_backups Checks the status of continuous backups and point in time recovery on the specified
describe_contributor_insights Returns information about contributor insights for a given table or global secondary
describe_endpoints Returns the regional endpoint information
describe_export Describes an existing table export
describe_global_table Returns information about the specified global table
describe_global_table_settings Describes Region-specific settings
for a global table at once
describe_import Represents the properties of the import
describe_kinesis_streaming_destination Returns information about the status of Kinesis streaming
describe_limits Returns the current provisioned-capacity quotas for your Amazon Web Services acc
describe_table Returns information about the table, including the current status of the table, when
describe_table_replica_auto_scaling Describes auto scaling settings across replicas of the global table at once
describe_time_to_live Gives a description of the Time to Live (TTL) status on the specified table
disable_kinesis_streaming_destination Stops replication from the DynamoDB table to the Kinesis data stream
enable_kinesis_streaming_destination Starts table data replication to the specified Kinesis data stream at a timestamp chos
execute_statement This operation allows you to perform reads and singleton writes on data stored in D
execute_transaction This operation allows you to perform transactional reads or writes on data stored in
export_table_to_point_in_time Exports table data to an S3 bucket
get_item The GetItem operation returns a set of attributes for the item with the given primary
import_table Imports table data from an S3 bucket
list_backups List backups associated with an Amazon Web Services account
list_contributor_insights Returns a list of ContributorInsightsSummary for a table and all its global secondar
list_exports Lists completed exports within the past 90 days
list_global_tables Lists all global tables that have a replica in the specified Region
list_imports Lists completed imports within the past 90 days
list_tables Returns an array of table names associated with the current account and endpoint
list_tags_of_resource List all tags on an Amazon DynamoDB resource
put_item Creates a new item, or replaces an old item with a new item
query You must provide the name of the partition key attribute and a single value for that
restore_table_from_backup Creates a new table from an
existing backup
restore_table_to_point_in_time Restores the specified table to the specified point in time within EarliestRestorableD
scan The Scan operation returns one or more items and item attributes by accessing ever
tag_resource Associate a set of tags with an Amazon DynamoDB resource
transact_get_items TransactGetItems is a synchronous operation that atomically retrieves multiple item
transact_write_items TransactWriteItems is a synchronous write operation that groups up to 100 action r
untag_resource Removes the association of tags from an Amazon DynamoDB resource
update_continuous_backups UpdateContinuousBackups enables or disables point in time recovery for the specif
update_contributor_insights Updates the status for contributor insights for a specific table or index
update_global_table Adds or removes replicas in the specified global table
update_global_table_settings Updates settings for a global table
update_item Edits an existing item’s attributes, or adds a new item to the table if it does not alrea
update_table Modifies the provisioned throughput settings, global secondary indexes, or Dynamo
update_table_replica_auto_scaling Updates auto scaling settings on your global tables at once
update_time_to_live The UpdateTimeToLive method enables or disables Time to Live (TTL) for the spe

Examples
## Not run:
svc <- dynamodb()
# This example reads multiple items from the Music table using a batch of
# three GetItem requests. Only the AlbumTitle attribute is returned.
svc$batch_get_item(
  RequestItems = list(
    Music = list(
      Keys = list(
        list(
          Artist = list(
            S = "No One You Know"
          ),
          SongTitle = list(
            S = "Call Me Today"
          )
        ),
        list(
          Artist = list(
            S = "Acme Band"
          ),
          SongTitle = list(
            S = "Happy Day"
          )
        ),
        list(
          Artist = list(
            S = "No One You Know"
          ),
          SongTitle = list(
            S = "Scared of My Shadow"
          )
        )
      ),
      ProjectionExpression = "AlbumTitle"
    )
  )
)
## End(Not run)

dynamodbstreams Amazon DynamoDB Streams

Description
Amazon DynamoDB Streams provides API actions for accessing streams and processing stream records. To learn more about application development with Streams, see Capturing Table Activity with DynamoDB Streams in the Amazon DynamoDB Developer Guide.

Usage
dynamodbstreams(
  config = list(),
  credentials = list(),
  endpoint = NULL,
  region = NULL
)

Arguments
config Optional configuration of credentials, endpoint, and/or region.
• credentials:
– creds:
* access_key_id: AWS access key ID
* secret_access_key: AWS secret access key
* session_token: AWS temporary session token
– profile: The name of a profile to use. If not given, then the default profile is used.
– anonymous: Set anonymous credentials.
– endpoint: The complete URL to use for the constructed client.
– region: The AWS Region used in instantiating the client.
• close_connection: Immediately close all HTTP connections.
• timeout: The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.
• s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY.
• sts_regional_endpoint: Set sts regional endpoint resolver to regional or legacy https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html
credentials Optional credentials shorthand for the config parameter
• creds:
– access_key_id: AWS access key ID
– secret_access_key: AWS secret access key
– session_token: AWS temporary session token
• profile: The name of a profile to use. If not given, then the default profile is used.
• anonymous: Set anonymous credentials.
endpoint Optional shorthand for complete URL to use for the constructed client.
region Optional shorthand for AWS Region used in instantiating the client.

Value
A client for the service. You can call the service’s operations using syntax like svc$operation(...), where svc is the name you’ve assigned to the client. The available operations are listed in the Operations section.

Service syntax
svc <- dynamodbstreams(
  config = list(
    credentials = list(
      creds = list(
        access_key_id = "string",
        secret_access_key = "string",
        session_token = "string"
      ),
      profile = "string",
      anonymous = "logical"
    ),
    endpoint = "string",
    region = "string",
    close_connection = "logical",
    timeout = "numeric",
    s3_force_path_style = "logical",
    sts_regional_endpoint = "string"
  ),
  credentials = list(
    creds = list(
      access_key_id = "string",
      secret_access_key = "string",
      session_token = "string"
    ),
    profile = "string",
    anonymous = "logical"
  ),
  endpoint = "string",
  region = "string"
)

Operations
describe_stream Returns information about a stream, including the current status of the stream, its Amazon Resource Nam
get_records Retrieves the stream records from a given shard
get_shard_iterator Returns a shard iterator
list_streams Returns an array of stream ARNs associated with the current account and endpoint

Examples
## Not run:
svc <- dynamodbstreams()
# The following example describes a stream with a given stream ARN.
svc$describe_stream(
  StreamArn = "arn:aws:dynamodb:us-west-2:111122223333:table/Forum/stream/2..."
)
## End(Not run)

elasticache Amazon ElastiCache

Description
Amazon ElastiCache is a web service that makes it easier to set up, operate, and scale a distributed cache in the cloud. With ElastiCache, customers get all of the benefits of a high-performance, in-memory cache with less of the administrative burden involved in launching and managing a distributed cache. The service makes setup, scaling, and cluster failure handling much simpler than in a self-managed cache deployment. In addition, through integration with Amazon CloudWatch, customers get enhanced visibility into the key performance statistics associated with their cache and can receive alarms if a part of their cache runs hot.

Usage
elasticache(
  config = list(),
  credentials = list(),
  endpoint = NULL,
  region = NULL
)

Arguments
config Optional configuration of credentials, endpoint, and/or region.
• credentials:
– creds:
* access_key_id: AWS access key ID
* secret_access_key: AWS secret access key
* session_token: AWS temporary session token
– profile: The name of a profile to use. If not given, then the default profile is used.
– anonymous: Set anonymous credentials.
– endpoint: The complete URL to use for the constructed client.
– region: The AWS Region used in instantiating the client.
• close_connection: Immediately close all HTTP connections.
• timeout: The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.
• s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY.
• sts_regional_endpoint: Set sts regional endpoint resolver to regional or legacy https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html
credentials Optional credentials shorthand for the config parameter
• creds:
– access_key_id: AWS access key ID
– secret_access_key: AWS secret access key
– session_token: AWS temporary session token
• profile: The name of a profile to use.
If not given, then the default profile is used.
• anonymous: Set anonymous credentials.

endpoint Optional shorthand for complete URL to use for the constructed client.
region Optional shorthand for AWS Region used in instantiating the client.

Value

A client for the service. You can call the service’s operations using syntax like svc$operation(...), where svc is the name you’ve assigned to the client. The available operations are listed in the Operations section.

Service syntax

svc <- elasticache(
  config = list(
    credentials = list(
      creds = list(
        access_key_id = "string",
        secret_access_key = "string",
        session_token = "string"
      ),
      profile = "string",
      anonymous = "logical"
    ),
    endpoint = "string",
    region = "string",
    close_connection = "logical",
    timeout = "numeric",
    s3_force_path_style = "logical",
    sts_regional_endpoint = "string"
  ),
  credentials = list(
    creds = list(
      access_key_id = "string",
      secret_access_key = "string",
      session_token = "string"
    ),
    profile = "string",
    anonymous = "logical"
  ),
  endpoint = "string",
  region = "string"
)

Operations

add_tags_to_resource A tag is a key-value pair where the key and value are case-sensitive
authorize_cache_security_group_ingress Allows network ingress to a cache security group
batch_apply_update_action Apply the service update
batch_stop_update_action Stop the service update
complete_migration Complete the migration of data
copy_snapshot Makes a copy of an existing snapshot
create_cache_cluster Creates a cluster
create_cache_parameter_group Creates a new Amazon ElastiCache cache parameter group
create_cache_security_group Creates a new cache security group
create_cache_subnet_group Creates a new cache subnet group
create_global_replication_group Global Datastore for Redis offers fully managed, fast, reliable and secu
create_replication_group Creates a Redis (cluster mode disabled) or a Redis (cluster mode enabl
create_snapshot Creates a copy of an entire cluster or replication group at a specific mo
create_user For Redis
engine version 6 create_user_group For Redis engine version 6 decrease_node_groups_in_global_replication_group Decreases the number of node groups in a Global datastore decrease_replica_count Dynamically decreases the number of replicas in a Redis (cluster mode delete_cache_cluster Deletes a previously provisioned cluster delete_cache_parameter_group Deletes the specified cache parameter group delete_cache_security_group Deletes a cache security group delete_cache_subnet_group Deletes a cache subnet group delete_global_replication_group Deleting a Global datastore is a two-step process: delete_replication_group Deletes an existing replication group delete_snapshot Deletes an existing snapshot delete_user For Redis engine version 6 delete_user_group For Redis engine version 6 describe_cache_clusters Returns information about all provisioned clusters if no cluster identifi describe_cache_engine_versions Returns a list of the available cache engines and their versions describe_cache_parameter_groups Returns a list of cache parameter group descriptions describe_cache_parameters Returns the detailed parameter list for a particular cache parameter gro describe_cache_security_groups Returns a list of cache security group descriptions describe_cache_subnet_groups Returns a list of cache subnet group descriptions describe_engine_default_parameters Returns the default engine and system parameter information for the sp describe_events Returns events related to clusters, cache security groups, and cache par describe_global_replication_groups Returns information about a particular global replication group describe_replication_groups Returns information about a particular replication group describe_reserved_cache_nodes Returns information about reserved cache nodes for this account, or ab describe_reserved_cache_nodes_offerings Lists available reserved cache node offerings describe_service_updates Returns details of the service updates describe_snapshots Returns information 
about cluster or replication group snapshots describe_update_actions Returns details of the update actions describe_user_groups Returns a list of user groups describe_users Returns a list of users disassociate_global_replication_group Remove a secondary cluster from the Global datastore using the Globa failover_global_replication_group Used to failover the primary region to a secondary region increase_node_groups_in_global_replication_group Increase the number of node groups in the Global datastore increase_replica_count Dynamically increases the number of replicas in a Redis (cluster mode list_allowed_node_type_modifications Lists all available node types that you can scale your Redis cluster’s or list_tags_for_resource Lists all tags currently on a named resource modify_cache_cluster Modifies the settings for a cluster modify_cache_parameter_group Modifies the parameters of a cache parameter group modify_cache_subnet_group Modifies an existing cache subnet group modify_global_replication_group Modifies the settings for a Global datastore modify_replication_group Modifies the settings for a replication group modify_replication_group_shard_configuration Modifies a replication group’s shards (node groups) by allowing you to modify_user Changes user password(s) and/or access string modify_user_group Changes the list of users that belong to the user group purchase_reserved_cache_nodes_offering Allows you to purchase a reserved cache node offering rebalance_slots_in_global_replication_group Redistribute slots to ensure uniform distribution across existing shards reboot_cache_cluster Reboots some, or all, of the cache nodes within a provisioned cluster remove_tags_from_resource Removes the tags identified by the TagKeys list from the named resour reset_cache_parameter_group Modifies the parameters of a cache parameter group to the engine or sy revoke_cache_security_group_ingress Revokes ingress from a cache security group start_migration Start the migration of data 
test_failover Represents the input of a TestFailover operation which test automatic f
test_migration Async API to test connection between source and target replication gro

Examples

## Not run:
svc <- elasticache()
svc$add_tags_to_resource(
  Foo = 123
)
## End(Not run)

keyspaces Amazon Keyspaces

Description

Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra-compatible database service. Amazon Keyspaces makes it easy to migrate, run, and scale Cassandra workloads in the Amazon Web Services Cloud. With just a few clicks on the Amazon Web Services Management Console or a few lines of code, you can create keyspaces and tables in Amazon Keyspaces, without deploying any infrastructure or installing software.

In addition to supporting Cassandra Query Language (CQL) requests via open-source Cassandra drivers, Amazon Keyspaces supports data definition language (DDL) operations to manage keyspaces and tables using the Amazon Web Services SDK and CLI, as well as infrastructure as code (IaC) services and tools such as CloudFormation and Terraform. This API reference describes the supported DDL operations in detail.

For the list of all supported CQL APIs, see Supported Cassandra APIs, operations, and data types in Amazon Keyspaces in the Amazon Keyspaces Developer Guide.

To learn how Amazon Keyspaces API actions are recorded with CloudTrail, see Amazon Keyspaces information in CloudTrail in the Amazon Keyspaces Developer Guide.

For more information about Amazon Web Services APIs, for example how to implement retry logic or how to sign Amazon Web Services API requests, see Amazon Web Services APIs in the General Reference.

Usage

keyspaces(
  config = list(),
  credentials = list(),
  endpoint = NULL,
  region = NULL
)

Arguments

config Optional configuration of credentials, endpoint, and/or region.
• credentials:
– creds:
* access_key_id: AWS access key ID
* secret_access_key: AWS secret access key
* session_token: AWS temporary session token
– profile: The name of a profile to use. If not given, then the default profile is used.
– anonymous: Set anonymous credentials.
– endpoint: The complete URL to use for the constructed client.
– region: The AWS Region used in instantiating the client.
• close_connection: Immediately close all HTTP connections.
• timeout: The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.
• s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY.
• sts_regional_endpoint: Set sts regional endpoint resolver to regional or legacy https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html

credentials Optional credentials shorthand for the config parameter
• creds:
– access_key_id: AWS access key ID
– secret_access_key: AWS secret access key
– session_token: AWS temporary session token
• profile: The name of a profile to use. If not given, then the default profile is used.
• anonymous: Set anonymous credentials.

endpoint Optional shorthand for complete URL to use for the constructed client.
region Optional shorthand for AWS Region used in instantiating the client.

Value

A client for the service. You can call the service’s operations using syntax like svc$operation(...), where svc is the name you’ve assigned to the client. The available operations are listed in the Operations section.
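As with the other clients in this package, the shorthand arguments above can be combined directly with an operation call. A minimal sketch (not run; it assumes valid AWS credentials, and the response field names follow the Amazon Keyspaces ListKeyspaces API shape):

```r
## Not run:
# Construct a Keyspaces client pinned to a Region via the shorthand argument.
svc <- keyspaces(region = "us-east-1")

# List the keyspaces in the account and print each name.
resp <- svc$list_keyspaces()
for (ks in resp$keyspaces) {
  cat(ks$keyspaceName, "\n")
}
## End(Not run)
```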
Service syntax

svc <- keyspaces(
  config = list(
    credentials = list(
      creds = list(
        access_key_id = "string",
        secret_access_key = "string",
        session_token = "string"
      ),
      profile = "string",
      anonymous = "logical"
    ),
    endpoint = "string",
    region = "string",
    close_connection = "logical",
    timeout = "numeric",
    s3_force_path_style = "logical",
    sts_regional_endpoint = "string"
  ),
  credentials = list(
    creds = list(
      access_key_id = "string",
      secret_access_key = "string",
      session_token = "string"
    ),
    profile = "string",
    anonymous = "logical"
  ),
  endpoint = "string",
  region = "string"
)

Operations

create_keyspace The CreateKeyspace operation adds a new keyspace to your account
create_table The CreateTable operation adds a new table to the specified keyspace
delete_keyspace The DeleteKeyspace operation deletes a keyspace and all of its tables
delete_table The DeleteTable operation deletes a table and all of its data
get_keyspace Returns the name and the Amazon Resource Name (ARN) of the specified keyspace
get_table Returns information about the table, including the table’s name and current status, the keyspace name
list_keyspaces Returns a list of keyspaces
list_tables Returns a list of tables for a specified keyspace
list_tags_for_resource Returns a list of all tags associated with the specified Amazon Keyspaces resource
restore_table Restores the specified table to the specified point in time within the earliest_restorable_timestamp an
tag_resource Associates a set of tags with an Amazon Keyspaces resource
untag_resource Removes the association of tags from an Amazon Keyspaces resource
update_table Adds new columns to the table or updates one of the table’s settings, for example capacity mode, enc

Examples

## Not run:
svc <- keyspaces()
svc$create_keyspace(
  Foo = 123
)
## End(Not run)

lakeformation AWS Lake Formation

Description

Lake Formation
Defines the public endpoint for the Lake Formation service.
Usage

lakeformation(
  config = list(),
  credentials = list(),
  endpoint = NULL,
  region = NULL
)

Arguments

config Optional configuration of credentials, endpoint, and/or region.
• credentials:
– creds:
* access_key_id: AWS access key ID
* secret_access_key: AWS secret access key
* session_token: AWS temporary session token
– profile: The name of a profile to use. If not given, then the default profile is used.
– anonymous: Set anonymous credentials.
– endpoint: The complete URL to use for the constructed client.
– region: The AWS Region used in instantiating the client.
• close_connection: Immediately close all HTTP connections.
• timeout: The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.
• s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY.
• sts_regional_endpoint: Set sts regional endpoint resolver to regional or legacy https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html

credentials Optional credentials shorthand for the config parameter
• creds:
– access_key_id: AWS access key ID
– secret_access_key: AWS secret access key
– session_token: AWS temporary session token
• profile: The name of a profile to use. If not given, then the default profile is used.
• anonymous: Set anonymous credentials.

endpoint Optional shorthand for complete URL to use for the constructed client.
region Optional shorthand for AWS Region used in instantiating the client.

Value

A client for the service. You can call the service’s operations using syntax like svc$operation(...), where svc is the name you’ve assigned to the client. The available operations are listed in the Operations section.
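The config parameter described above can carry a credentials profile and a Region together. A sketch (not run; the profile name "analytics" is hypothetical and assumes a matching entry in your AWS config):

```r
## Not run:
# Build a Lake Formation client from a named profile and an explicit Region.
svc <- lakeformation(
  config = list(
    credentials = list(profile = "analytics"),
    region = "us-west-2"
  )
)

# List the resources registered to be managed by the Data Catalog.
resp <- svc$list_resources()
## End(Not run)
```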
Service syntax svc <- lakeformation( config = list( credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string", close_connection = "logical", timeout = "numeric", s3_force_path_style = "logical", sts_regional_endpoint = "string" ), credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string" ) Operations add_lf_tags_to_resource Attaches one or more LF-tags to an existing resource assume_decorated_role_with_saml Allows a caller to assume an IAM role decorated as the SAML user specified in t batch_grant_permissions Batch operation to grant permissions to the principal batch_revoke_permissions Batch operation to revoke permissions from the principal cancel_transaction Attempts to cancel the specified transaction commit_transaction Attempts to commit the specified transaction create_data_cells_filter Creates a data cell filter to allow one to grant access to certain columns on certain create_lf_tag Creates an LF-tag with the specified name and values delete_data_cells_filter Deletes a data cell filter delete_lf_tag Deletes the specified LF-tag given a key name delete_objects_on_cancel For a specific governed table, provides a list of Amazon S3 objects that will be w deregister_resource Deregisters the resource as managed by the Data Catalog describe_resource Retrieves the current data access role for the given resource registered in Lake Fo describe_transaction Returns the details of a single transaction extend_transaction Indicates to the service that the specified transaction is still active and should not get_data_cells_filter Returns a data cells filter get_data_lake_settings Retrieves the list of the data lake administrators of a Lake Formation-managed da 
get_effective_permissions_for_path Returns the Lake Formation permissions for a specified table or database resourc get_lf_tag Returns an LF-tag definition get_query_state Returns the state of a query previously submitted get_query_statistics Retrieves statistics on the planning and execution of a query get_resource_lf_tags Returns the LF-tags applied to a resource get_table_objects Returns the set of Amazon S3 objects that make up the specified governed table get_temporary_glue_partition_credentials This API is identical to GetTemporaryTableCredentials except that this is used w get_temporary_glue_table_credentials Allows a caller in a secure environment to assume a role with permission to acces get_work_unit_results Returns the work units resulting from the query get_work_units Retrieves the work units generated by the StartQueryPlanning operation grant_permissions Grants permissions to the principal to access metadata in the Data Catalog and da list_data_cells_filter Lists all the data cell filters on a table list_lf_tags Lists LF-tags that the requester has permission to view list_permissions Returns a list of the principal permissions on the resource, filtered by the permiss list_resources Lists the resources registered to be managed by the Data Catalog list_table_storage_optimizers Returns the configuration of all storage optimizers associated with a specified tab list_transactions Returns metadata about transactions and their status put_data_lake_settings Sets the list of data lake administrators who have admin privileges on all resource register_resource Registers the resource as managed by the Data Catalog remove_lf_tags_from_resource Removes an LF-tag from the resource revoke_permissions Revokes permissions to the principal to access metadata in the Data Catalog and search_databases_by_lf_tags This operation allows a search on DATABASE resources by TagCondition search_tables_by_lf_tags This operation allows a search on TABLE resources by LFTags 
start_query_planning Submits a request to process a query statement
start_transaction Starts a new transaction and returns its transaction ID
update_data_cells_filter Updates a data cell filter
update_lf_tag Updates the list of possible values for the specified LF-tag key
update_resource Updates the data access role used for vending access to the given (registered) reso
update_table_objects Updates the manifest of Amazon S3 objects that make up the specified governed
update_table_storage_optimizer Updates the configuration of the storage optimizers for a table

Examples

## Not run:
svc <- lakeformation()
svc$add_lf_tags_to_resource(
  Foo = 123
)
## End(Not run)

memorydb Amazon MemoryDB

Description

MemoryDB for Redis is a fully managed, Redis-compatible, in-memory database that delivers ultrafast performance and Multi-AZ durability for modern applications built using microservices architectures. MemoryDB stores the entire database in-memory, enabling low latency and high throughput data access. It is compatible with Redis, a popular open source data store, enabling you to leverage Redis’ flexible and friendly data structures, APIs, and commands.

Usage

memorydb(config = list(), credentials = list(), endpoint = NULL, region = NULL)

Arguments

config Optional configuration of credentials, endpoint, and/or region.
• credentials:
– creds:
* access_key_id: AWS access key ID
* secret_access_key: AWS secret access key
* session_token: AWS temporary session token
– profile: The name of a profile to use. If not given, then the default profile is used.
– anonymous: Set anonymous credentials.
– endpoint: The complete URL to use for the constructed client.
– region: The AWS Region used in instantiating the client.
• close_connection: Immediately close all HTTP connections.
• timeout: The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.
• s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY.
• sts_regional_endpoint: Set sts regional endpoint resolver to regional or legacy https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html

credentials Optional credentials shorthand for the config parameter
• creds:
– access_key_id: AWS access key ID
– secret_access_key: AWS secret access key
– session_token: AWS temporary session token
• profile: The name of a profile to use. If not given, then the default profile is used.
• anonymous: Set anonymous credentials.

endpoint Optional shorthand for complete URL to use for the constructed client.
region Optional shorthand for AWS Region used in instantiating the client.

Value

A client for the service. You can call the service’s operations using syntax like svc$operation(...), where svc is the name you’ve assigned to the client. The available operations are listed in the Operations section.
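To illustrate the timeout and region elements of the config parameter documented above, a sketch (not run; assumes valid AWS credentials):

```r
## Not run:
# MemoryDB client with a connection timeout shorter than the 60-second default.
svc <- memorydb(
  config = list(
    region = "eu-west-1",
    timeout = 30
  )
)

# Describe all provisioned clusters in the Region.
resp <- svc$describe_clusters()
## End(Not run)
```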
Service syntax svc <- memorydb( config = list( credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string", close_connection = "logical", timeout = "numeric", s3_force_path_style = "logical", sts_regional_endpoint = "string" ), credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string" ) Operations batch_update_cluster Apply the service update to a list of clusters supplied copy_snapshot Makes a copy of an existing snapshot create_acl Creates an Access Control List create_cluster Creates a cluster create_parameter_group Creates a new MemoryDB parameter group create_snapshot Creates a copy of an entire cluster at a specific moment in time create_subnet_group Creates a subnet group create_user Creates a MemoryDB user delete_acl Deletes an Access Control List delete_cluster Deletes a cluster delete_parameter_group Deletes the specified parameter group delete_snapshot Deletes an existing snapshot delete_subnet_group Deletes a subnet group delete_user Deletes a user describe_ac_ls Returns a list of ACLs describe_clusters Returns information about all provisioned clusters if no cluster identifier is specified, or describe_engine_versions Returns a list of the available Redis engine versions describe_events Returns events related to clusters, security groups, and parameter groups describe_parameter_groups Returns a list of parameter group descriptions describe_parameters Returns the detailed parameter list for a particular parameter group describe_reserved_nodes Returns information about reserved nodes for this account, or about a specified reserved describe_reserved_nodes_offerings Lists available reserved node offerings describe_service_updates Returns details of the service updates 
describe_snapshots Returns information about cluster snapshots
describe_subnet_groups Returns a list of subnet group descriptions
describe_users Returns a list of users
failover_shard Used to failover a shard
list_allowed_node_type_updates Lists all available node types that you can scale to from your cluster’s current node type
list_tags Lists all tags currently on a named resource
purchase_reserved_nodes_offering Allows you to purchase a reserved node offering
reset_parameter_group Modifies the parameters of a parameter group to the engine or system default value
tag_resource A tag is a key-value pair where the key and value are case-sensitive
untag_resource Use this operation to remove tags on a resource
update_acl Changes the list of users that belong to the Access Control List
update_cluster Modifies the settings for a cluster
update_parameter_group Updates the parameters of a parameter group
update_subnet_group Updates a subnet group
update_user Changes user password(s) and/or access string

Examples

## Not run:
svc <- memorydb()
svc$batch_update_cluster(
  Foo = 123
)
## End(Not run)

neptune Amazon Neptune

Description

Amazon Neptune is a fast, reliable, fully-managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Amazon Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with milliseconds latency. Amazon Neptune supports popular graph models Property Graph and W3C’s RDF, and their respective query languages Apache TinkerPop Gremlin and SPARQL, allowing you to easily build queries that efficiently navigate highly connected datasets. Neptune powers graph use cases such as recommendation engines, fraud detection, knowledge graphs, drug discovery, and network security.
This interface reference for Amazon Neptune contains documentation for a programming or command line interface you can use to manage Amazon Neptune. Note that Amazon Neptune is asynchronous, which means that some interfaces might require techniques such as polling or callback functions to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a command is applied immediately, on the next instance reboot, or during the maintenance window. The reference structure is as follows, and we list some related topics from the user guide.

Usage

neptune(config = list(), credentials = list(), endpoint = NULL, region = NULL)

Arguments

config Optional configuration of credentials, endpoint, and/or region.
• credentials:
– creds:
* access_key_id: AWS access key ID
* secret_access_key: AWS secret access key
* session_token: AWS temporary session token
– profile: The name of a profile to use. If not given, then the default profile is used.
– anonymous: Set anonymous credentials.
– endpoint: The complete URL to use for the constructed client.
– region: The AWS Region used in instantiating the client.
• close_connection: Immediately close all HTTP connections.
• timeout: The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.
• s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY.
• sts_regional_endpoint: Set sts regional endpoint resolver to regional or legacy https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html

credentials Optional credentials shorthand for the config parameter
• creds:
– access_key_id: AWS access key ID
– secret_access_key: AWS secret access key
– session_token: AWS temporary session token
• profile: The name of a profile to use. If not given, then the default profile is used.
• anonymous: Set anonymous credentials.
endpoint Optional shorthand for complete URL to use for the constructed client.
region Optional shorthand for AWS Region used in instantiating the client.

Value

A client for the service. You can call the service’s operations using syntax like svc$operation(...), where svc is the name you’ve assigned to the client. The available operations are listed in the Operations section.

Service syntax

svc <- neptune(
  config = list(
    credentials = list(
      creds = list(
        access_key_id = "string",
        secret_access_key = "string",
        session_token = "string"
      ),
      profile = "string",
      anonymous = "logical"
    ),
    endpoint = "string",
    region = "string",
    close_connection = "logical",
    timeout = "numeric",
    s3_force_path_style = "logical",
    sts_regional_endpoint = "string"
  ),
  credentials = list(
    creds = list(
      access_key_id = "string",
      secret_access_key = "string",
      session_token = "string"
    ),
    profile = "string",
    anonymous = "logical"
  ),
  endpoint = "string",
  region = "string"
)

Operations

add_role_to_db_cluster Associates an Identity and Access Management (IAM) role with an Neptune D
add_source_identifier_to_subscription Adds a source identifier to an existing event notification subscription
add_tags_to_resource Adds metadata tags to an Amazon Neptune resource
apply_pending_maintenance_action Applies a pending maintenance action to a resource (for example, to a DB inst
copy_db_cluster_parameter_group Copies the specified DB cluster parameter group
copy_db_cluster_snapshot Copies a snapshot of a DB cluster
copy_db_parameter_group Copies the specified DB parameter group
create_db_cluster Creates a new Amazon Neptune DB cluster
create_db_cluster_endpoint Creates a new custom endpoint and associates it with an Amazon Neptune DB
create_db_cluster_parameter_group Creates a new DB cluster parameter group
create_db_cluster_snapshot Creates a snapshot of a DB cluster
create_db_instance Creates a new DB instance
create_db_parameter_group Creates a new DB parameter group
create_db_subnet_group Creates a new DB
subnet group create_event_subscription Creates an event notification subscription create_global_cluster Creates a Neptune global database spread across multiple Amazon Regions delete_db_cluster The DeleteDBCluster action deletes a previously provisioned DB cluster delete_db_cluster_endpoint Deletes a custom endpoint and removes it from an Amazon Neptune DB clust delete_db_cluster_parameter_group Deletes a specified DB cluster parameter group delete_db_cluster_snapshot Deletes a DB cluster snapshot delete_db_instance The DeleteDBInstance action deletes a previously provisioned DB instance delete_db_parameter_group Deletes a specified DBParameterGroup delete_db_subnet_group Deletes a DB subnet group delete_event_subscription Deletes an event notification subscription delete_global_cluster Deletes a global database describe_db_cluster_endpoints Returns information about endpoints for an Amazon Neptune DB cluster describe_db_cluster_parameter_groups Returns a list of DBClusterParameterGroup descriptions describe_db_cluster_parameters Returns the detailed parameter list for a particular DB cluster parameter group describe_db_clusters Returns information about provisioned DB clusters, and supports pagination describe_db_cluster_snapshot_attributes Returns a list of DB cluster snapshot attribute names and values for a manual describe_db_cluster_snapshots Returns information about DB cluster snapshots describe_db_engine_versions Returns a list of the available DB engines describe_db_instances Returns information about provisioned instances, and supports pagination describe_db_parameter_groups Returns a list of DBParameterGroup descriptions describe_db_parameters Returns the detailed parameter list for a particular DB parameter group describe_db_subnet_groups Returns a list of DBSubnetGroup descriptions describe_engine_default_cluster_parameters Returns the default engine and system parameter information for the cluster da describe_engine_default_parameters Returns the 
default engine and system parameter information for the specified describe_event_categories Displays a list of categories for all event source types, or, if specified, for a spe describe_events Returns events related to DB instances, DB security groups, DB snapshots, an describe_event_subscriptions Lists all the subscription descriptions for a customer account describe_global_clusters Returns information about Neptune global database clusters describe_orderable_db_instance_options Returns a list of orderable DB instance options for the specified engine describe_pending_maintenance_actions Returns a list of resources (for example, DB instances) that have at least one p describe_valid_db_instance_modifications You can call DescribeValidDBInstanceModifications to learn what modificatio failover_db_cluster Forces a failover for a DB cluster failover_global_cluster Initiates the failover process for a Neptune global database list_tags_for_resource Lists all tags on an Amazon Neptune resource modify_db_cluster Modify a setting for a DB cluster modify_db_cluster_endpoint Modifies the properties of an endpoint in an Amazon Neptune DB cluster modify_db_cluster_parameter_group Modifies the parameters of a DB cluster parameter group modify_db_cluster_snapshot_attribute Adds an attribute and values to, or removes an attribute and values from, a ma modify_db_instance Modifies settings for a DB instance modify_db_parameter_group Modifies the parameters of a DB parameter group modify_db_subnet_group Modifies an existing DB subnet group modify_event_subscription Modifies an existing event notification subscription modify_global_cluster Modify a setting for an Amazon Neptune global cluster promote_read_replica_db_cluster Not supported reboot_db_instance You might need to reboot your DB instance, usually for maintenance reasons remove_from_global_cluster Detaches a Neptune DB cluster from a Neptune global database remove_role_from_db_cluster Disassociates an Identity and Access 
Management (IAM) role from a DB clus
remove_source_identifier_from_subscription Removes a source identifier from an existing event notification subscription
remove_tags_from_resource Removes metadata tags from an Amazon Neptune resource
reset_db_cluster_parameter_group Modifies the parameters of a DB cluster parameter group to the default value
reset_db_parameter_group Modifies the parameters of a DB parameter group to the engine/system default
restore_db_cluster_from_snapshot Creates a new DB cluster from a DB snapshot or DB cluster snapshot
restore_db_cluster_to_point_in_time Restores a DB cluster to an arbitrary point in time
start_db_cluster Starts an Amazon Neptune DB cluster that was stopped using the Amazon con
stop_db_cluster Stops an Amazon Neptune DB cluster

Examples

## Not run:
svc <- neptune()
svc$add_role_to_db_cluster(
  Foo = 123
)
## End(Not run)

qldb Amazon QLDB

Description

The resource management API for Amazon QLDB

Usage

qldb(config = list(), credentials = list(), endpoint = NULL, region = NULL)

Arguments

config Optional configuration of credentials, endpoint, and/or region.
• credentials:
– creds:
* access_key_id: AWS access key ID
* secret_access_key: AWS secret access key
* session_token: AWS temporary session token
– profile: The name of a profile to use. If not given, then the default profile is used.
– anonymous: Set anonymous credentials.
– endpoint: The complete URL to use for the constructed client.
– region: The AWS Region used in instantiating the client.
• close_connection: Immediately close all HTTP connections.
• timeout: The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.
• s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY.
• sts_regional_endpoint: Set sts regional endpoint resolver to regional or legacy https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html credentials Optional credentials shorthand for the config parameter • creds: – access_key_id: AWS access key ID – secret_access_key: AWS secret access key – session_token: AWS temporary session token • profile: The name of a profile to use. If not given, then the default profile is used. • anonymous: Set anonymous credentials. endpoint Optional shorthand for complete URL to use for the constructed client. region Optional shorthand for AWS Region used in instantiating the client. Value A client for the service. You can call the service’s operations using syntax like svc$operation(...), where svc is the name you’ve assigned to the client. The available operations are listed in the Operations section. Service syntax svc <- qldb( config = list( credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string", close_connection = "logical", timeout = "numeric", s3_force_path_style = "logical", sts_regional_endpoint = "string" ), credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string" ) Operations cancel_journal_kinesis_stream Ends a given Amazon QLDB journal stream create_ledger Creates a new ledger in your Amazon Web Services account in the current Region delete_ledger Deletes a ledger and all of its contents describe_journal_kinesis_stream Returns detailed information about a given Amazon QLDB journal stream describe_journal_s3_export Returns information about a journal export job, including the ledger name, export I describe_ledger Returns information about a ledger, including its state, permissions mode, encrypti
export_journal_to_s3 Exports journal contents within a date and time range from a ledger into a specified get_block Returns a block object at a specified address in a journal get_digest Returns the digest of a ledger at the latest committed block in the journal get_revision Returns a revision data object for a specified document ID and block address list_journal_kinesis_streams_for_ledger Returns all Amazon QLDB journal streams for a given ledger list_journal_s3_exports Returns all journal export jobs for all ledgers that are associated with the current A list_journal_s3_exports_for_ledger Returns all journal export jobs for a specified ledger list_ledgers Returns all ledgers that are associated with the current Amazon Web Services acco list_tags_for_resource Returns all tags for a specified Amazon QLDB resource stream_journal_to_kinesis Creates a journal stream for a given Amazon QLDB ledger tag_resource Adds one or more tags to a specified Amazon QLDB resource untag_resource Removes one or more tags from a specified Amazon QLDB resource update_ledger Updates properties on a ledger update_ledger_permissions_mode Updates the permissions mode of a ledger Examples ## Not run: svc <- qldb() svc$cancel_journal_kinesis_stream( Foo = 123 ) ## End(Not run) qldbsession Amazon QLDB Session Description The transactional data APIs for Amazon QLDB Instead of interacting directly with this API, we recommend using the QLDB driver or the QLDB shell to execute data transactions on a ledger. • If you are working with an AWS SDK, use the QLDB driver. The driver provides a high-level abstraction layer above this QLDB Session data plane and manages send_command API calls for you. For information and a list of supported programming languages, see Getting started with the driver in the Amazon QLDB Developer Guide. • If you are working with the AWS Command Line Interface (AWS CLI), use the QLDB shell. 
The shell is a command line interface that uses the QLDB driver to interact with a ledger. For information, see Accessing Amazon QLDB using the QLDB shell. Usage qldbsession( config = list(), credentials = list(), endpoint = NULL, region = NULL ) Arguments config Optional configuration of credentials, endpoint, and/or region. • credentials: – creds: * access_key_id: AWS access key ID * secret_access_key: AWS secret access key * session_token: AWS temporary session token – profile: The name of a profile to use. If not given, then the default profile is used. – anonymous: Set anonymous credentials. – endpoint: The complete URL to use for the constructed client. – region: The AWS Region used in instantiating the client. • close_connection: Immediately close all HTTP connections. • timeout: The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds. • s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY. • sts_regional_endpoint: Set sts regional endpoint resolver to regional or legacy https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html credentials Optional credentials shorthand for the config parameter • creds: – access_key_id: AWS access key ID – secret_access_key: AWS secret access key – session_token: AWS temporary session token • profile: The name of a profile to use. If not given, then the default profile is used. • anonymous: Set anonymous credentials. endpoint Optional shorthand for complete URL to use for the constructed client. region Optional shorthand for AWS Region used in instantiating the client. Value A client for the service. You can call the service’s operations using syntax like svc$operation(...), where svc is the name you’ve assigned to the client. The available operations are listed in the Operations section.
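As a concrete illustration of the send_command flow described above (not an example from the manual; the ledger name and Region are hypothetical, and AWS credentials are required, so the code is not run), a session is started first and its SessionToken is then passed to subsequent commands. Recall that the QLDB driver is the recommended way to do this rather than calling the API directly.

```r
## Not run:
# Hypothetical sketch: start a session on a ledger named "my-ledger"
# and capture the session token for later commands.
library(paws)

svc <- qldbsession(config = list(region = "us-east-1"))

# StartSession returns a SessionToken that identifies this session
# in subsequent send_command calls (StartTransaction, ExecuteStatement, ...).
resp <- svc$send_command(
  StartSession = list(LedgerName = "my-ledger")
)
token <- resp$StartSession$SessionToken
## End(Not run)
```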
Service syntax svc <- qldbsession( config = list( credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string", close_connection = "logical", timeout = "numeric", s3_force_path_style = "logical", sts_regional_endpoint = "string" ), credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string" ) Operations send_command Sends a command to an Amazon QLDB ledger Examples ## Not run: svc <- qldbsession() svc$send_command( Foo = 123 ) ## End(Not run) rds Amazon Relational Database Service Description Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizeable capacity for an industry-standard relational database and manages common database administration tasks, freeing up developers to focus on what makes their applications and businesses unique. Amazon RDS gives you access to the capabilities of a MySQL, MariaDB, PostgreSQL, Microsoft SQL Server, Oracle, or Amazon Aurora database server. These capabilities mean that the code, applications, and tools you already use today with your existing databases work with Amazon RDS without modification. Amazon RDS automatically backs up your database and maintains the database software that powers your DB instance. Amazon RDS is flexible: you can scale your DB instance’s compute resources and storage capacity to meet your application’s demand. As with all Amazon Web Services, there are no up-front investments, and you pay only for the resources you use. This interface reference for Amazon RDS contains documentation for a programming or command line interface you can use to manage Amazon RDS. 
Amazon RDS is asynchronous, which means that some interfaces might require techniques such as polling or callback functions to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a command is applied immediately, on the next instance reboot, or during the maintenance window. The reference structure is as follows, and we list following some related topics from the user guide. Amazon RDS API Reference • For the alphabetical list of API actions, see API Actions. • For the alphabetical list of data types, see Data Types. • For a list of common query parameters, see Common Parameters. • For descriptions of the error codes, see Common Errors. Amazon RDS User Guide • For a summary of the Amazon RDS interfaces, see Available RDS Interfaces. • For more information about how to use the Query API, see Using the Query API. Usage rds(config = list(), credentials = list(), endpoint = NULL, region = NULL) Arguments config Optional configuration of credentials, endpoint, and/or region. • credentials: – creds: * access_key_id: AWS access key ID * secret_access_key: AWS secret access key * session_token: AWS temporary session token – profile: The name of a profile to use. If not given, then the default profile is used. – anonymous: Set anonymous credentials. – endpoint: The complete URL to use for the constructed client. – region: The AWS Region used in instantiating the client. • close_connection: Immediately close all HTTP connections. • timeout: The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds. • s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY.
• sts_regional_endpoint: Set sts regional endpoint resolver to regional or legacy https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html credentials Optional credentials shorthand for the config parameter • creds: – access_key_id: AWS access key ID – secret_access_key: AWS secret access key – session_token: AWS temporary session token • profile: The name of a profile to use. If not given, then the default profile is used. • anonymous: Set anonymous credentials. endpoint Optional shorthand for complete URL to use for the constructed client. region Optional shorthand for AWS Region used in instantiating the client. Value A client for the service. You can call the service’s operations using syntax like svc$operation(...), where svc is the name you’ve assigned to the client. The available operations are listed in the Operations section. Service syntax svc <- rds( config = list( credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string", close_connection = "logical", timeout = "numeric", s3_force_path_style = "logical", sts_regional_endpoint = "string" ), credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string" ) Operations add_role_to_db_cluster Associates an Identity and Access Management (IAM) role with a DB cl add_role_to_db_instance Associates an Amazon Web Services Identity and Access Management ( add_source_identifier_to_subscription Adds a source identifier to an existing RDS event notification subscriptio add_tags_to_resource Adds metadata tags to an Amazon RDS resource apply_pending_maintenance_action Applies a pending maintenance action to a resource (for example, to a D authorize_db_security_group_ingress Enables ingress to a DBSecurityGroup using
one of two forms of authori backtrack_db_cluster Backtracks a DB cluster to a specific time, without creating a new DB cl build_auth_token Return an authentication token for a database connection cancel_export_task Cancels an export task in progress that is exporting a snapshot or cluster copy_db_cluster_parameter_group Copies the specified DB cluster parameter group copy_db_cluster_snapshot Copies a snapshot of a DB cluster copy_db_parameter_group Copies the specified DB parameter group copy_db_snapshot Copies the specified DB snapshot copy_option_group Copies the specified option group create_blue_green_deployment Creates a blue/green deployment create_custom_db_engine_version Creates a custom DB engine version (CEV) create_db_cluster Creates a new Amazon Aurora DB cluster or Multi-AZ DB cluster create_db_cluster_endpoint Creates a new custom endpoint and associates it with an Amazon Aurora create_db_cluster_parameter_group Creates a new DB cluster parameter group create_db_cluster_snapshot Creates a snapshot of a DB cluster create_db_instance Creates a new DB instance create_db_instance_read_replica Creates a new DB instance that acts as a read replica for an existing sour create_db_parameter_group Creates a new DB parameter group create_db_proxy Creates a new DB proxy create_db_proxy_endpoint Creates a DBProxyEndpoint create_db_security_group Creates a new DB security group create_db_snapshot Creates a snapshot of a DB instance create_db_subnet_group Creates a new DB subnet group create_event_subscription Creates an RDS event notification subscription create_global_cluster Creates an Aurora global database spread across multiple Amazon Web create_option_group Creates a new option group delete_blue_green_deployment Deletes a blue/green deployment delete_custom_db_engine_version Deletes a custom engine version delete_db_cluster The DeleteDBCluster action deletes a previously provisioned DB cluster delete_db_cluster_automated_backup Deletes automated backups using 
the DbClusterResourceId value of the delete_db_cluster_endpoint Deletes a custom endpoint and removes it from an Amazon Aurora DB c delete_db_cluster_parameter_group Deletes a specified DB cluster parameter group delete_db_cluster_snapshot Deletes a DB cluster snapshot delete_db_instance The DeleteDBInstance action deletes a previously provisioned DB instan delete_db_instance_automated_backup Deletes automated backups using the DbiResourceId value of the source delete_db_parameter_group Deletes a specified DB parameter group delete_db_proxy Deletes an existing DB proxy delete_db_proxy_endpoint Deletes a DBProxyEndpoint delete_db_security_group Deletes a DB security group delete_db_snapshot Deletes a DB snapshot delete_db_subnet_group Deletes a DB subnet group delete_event_subscription Deletes an RDS event notification subscription delete_global_cluster Deletes a global database cluster delete_option_group Deletes an existing option group deregister_db_proxy_targets Remove the association between one or more DBProxyTarget data struct describe_account_attributes Lists all of the attributes for a customer account describe_blue_green_deployments Describes one or more blue/green deployments describe_certificates Lists the set of CA certificates provided by Amazon RDS for this Amazo describe_db_cluster_automated_backups Displays backups for both current and deleted DB clusters describe_db_cluster_backtracks Returns information about backtracks for a DB cluster describe_db_cluster_endpoints Returns information about endpoints for an Amazon Aurora DB cluster describe_db_cluster_parameter_groups Returns a list of DBClusterParameterGroup descriptions describe_db_cluster_parameters Returns the detailed parameter list for a particular DB cluster parameter describe_db_clusters Describes existing Amazon Aurora DB clusters and Multi-AZ DB cluste describe_db_cluster_snapshot_attributes Returns a list of DB cluster snapshot attribute names and values for a ma 
describe_db_cluster_snapshots Returns information about DB cluster snapshots describe_db_engine_versions Returns a list of the available DB engines describe_db_instance_automated_backups Displays backups for both current and deleted instances describe_db_instances Describes provisioned RDS instances describe_db_log_files Returns a list of DB log files for the DB instance describe_db_parameter_groups Returns a list of DBParameterGroup descriptions describe_db_parameters Returns the detailed parameter list for a particular DB parameter group describe_db_proxies Returns information about DB proxies describe_db_proxy_endpoints Returns information about DB proxy endpoints describe_db_proxy_target_groups Returns information about DB proxy target groups, represented by DBPr describe_db_proxy_targets Returns information about DBProxyTarget objects describe_db_security_groups Returns a list of DBSecurityGroup descriptions describe_db_snapshot_attributes Returns a list of DB snapshot attribute names and values for a manual D describe_db_snapshots Returns information about DB snapshots describe_db_subnet_groups Returns a list of DBSubnetGroup descriptions describe_engine_default_cluster_parameters Returns the default engine and system parameter information for the clus describe_engine_default_parameters Returns the default engine and system parameter information for the spe describe_event_categories Displays a list of categories for all event source types, or, if specified, for describe_events Returns events related to DB instances, DB clusters, DB parameter grou describe_event_subscriptions Lists all the subscription descriptions for a customer account describe_export_tasks Returns information about a snapshot or cluster export to Amazon S3 describe_global_clusters Returns information about Aurora global database clusters describe_option_group_options Describes all available options describe_option_groups Describes the available option groups 
describe_orderable_db_instance_options Returns a list of orderable DB instance options for the specified DB engi describe_pending_maintenance_actions Returns a list of resources (for example, DB instances) that have at least describe_reserved_db_instances Returns information about reserved DB instances for this account, or abo describe_reserved_db_instances_offerings Lists available reserved DB instance offerings describe_source_regions Returns a list of the source Amazon Web Services Regions where the cur describe_valid_db_instance_modifications You can call DescribeValidDBInstanceModifications to learn what modif download_db_log_file_portion Downloads all or a portion of the specified log file, up to 1 MB in size failover_db_cluster Forces a failover for a DB cluster failover_global_cluster Promotes the specified secondary DB cluster to be the primary DB cluste list_tags_for_resource Lists all tags on an Amazon RDS resource modify_activity_stream Changes the audit policy state of a database activity stream to either lock modify_certificates Override the system-default Secure Sockets Layer/Transport Layer Secu modify_current_db_cluster_capacity Set the capacity of an Aurora Serverless v1 DB cluster to a specific value modify_custom_db_engine_version Modifies the status of a custom engine version (CEV) modify_db_cluster Modifies the settings of an Amazon Aurora DB cluster or a Multi-AZ D modify_db_cluster_endpoint Modifies the properties of an endpoint in an Amazon Aurora DB cluster modify_db_cluster_parameter_group Modifies the parameters of a DB cluster parameter group modify_db_cluster_snapshot_attribute Adds an attribute and values to, or removes an attribute and values from, modify_db_instance Modifies settings for a DB instance modify_db_parameter_group Modifies the parameters of a DB parameter group modify_db_proxy Changes the settings for an existing DB proxy modify_db_proxy_endpoint Changes the settings for an existing DB proxy endpoint 
modify_db_proxy_target_group Modifies the properties of a DBProxyTargetGroup modify_db_snapshot Updates a manual DB snapshot with a new engine version modify_db_snapshot_attribute Adds an attribute and values to, or removes an attribute and values from, modify_db_subnet_group Modifies an existing DB subnet group modify_event_subscription Modifies an existing RDS event notification subscription modify_global_cluster Modifies a setting for an Amazon Aurora global database cluster modify_option_group Modifies an existing option group promote_read_replica Promotes a read replica DB instance to a standalone DB instance promote_read_replica_db_cluster Promotes a read replica DB cluster to a standalone DB cluster purchase_reserved_db_instances_offering Purchases a reserved DB instance offering reboot_db_cluster You might need to reboot your DB cluster, usually for maintenance reaso reboot_db_instance You might need to reboot your DB instance, usually for maintenance rea register_db_proxy_targets Associate one or more DBProxyTarget data structures with a DBProxyTa remove_from_global_cluster Detaches an Aurora secondary cluster from an Aurora global database cl remove_role_from_db_cluster Removes the association of an Amazon Web Services Identity and Acce remove_role_from_db_instance Disassociates an Amazon Web Services Identity and Access Managemen remove_source_identifier_from_subscription Removes a source identifier from an existing RDS event notification sub remove_tags_from_resource Removes metadata tags from an Amazon RDS resource reset_db_cluster_parameter_group Modifies the parameters of a DB cluster parameter group to the default v reset_db_parameter_group Modifies the parameters of a DB parameter group to the engine/system d restore_db_cluster_from_s3 Creates an Amazon Aurora DB cluster from MySQL data stored in an A restore_db_cluster_from_snapshot Creates a new DB cluster from a DB snapshot or DB cluster snapshot restore_db_cluster_to_point_in_time Restores a
DB cluster to an arbitrary point in time restore_db_instance_from_db_snapshot Creates a new DB instance from a DB snapshot restore_db_instance_from_s3 Amazon Relational Database Service (Amazon RDS) supports importing restore_db_instance_to_point_in_time Restores a DB instance to an arbitrary point in time revoke_db_security_group_ingress Revokes ingress from a DBSecurityGroup for previously authorized IP r start_activity_stream Starts a database activity stream to monitor activity on the database start_db_cluster Starts an Amazon Aurora DB cluster that was stopped using the Amazon start_db_instance Starts an Amazon RDS DB instance that was stopped using the Amazon start_db_instance_automated_backups_replication Enables replication of automated backups to a different Amazon Web Se start_export_task Starts an export of DB snapshot or DB cluster data to Amazon S3 stop_activity_stream Stops a database activity stream that was started using the Amazon Web stop_db_cluster Stops an Amazon Aurora DB cluster stop_db_instance Stops an Amazon RDS DB instance stop_db_instance_automated_backups_replication Stops automated backup replication for a DB instance switchover_blue_green_deployment Switches over a blue/green deployment switchover_global_cluster Switches over the specified secondary DB cluster to be the new primary switchover_read_replica Switches over an Oracle standby database in an Oracle Data Guard envir Examples ## Not run: svc <- rds() svc$add_role_to_db_cluster( Foo = 123 ) ## End(Not run) rdsdataservice AWS RDS DataService Description Amazon RDS Data Service Amazon RDS provides an HTTP endpoint to run SQL statements on an Amazon Aurora Serverless v1 DB cluster. To run these statements, you work with the Data Service API. The Data Service API isn’t supported on Amazon Aurora Serverless v2 DB clusters. For more information about the Data Service API, see Using the Data API in the Amazon Aurora User Guide. 
Usage rdsdataservice( config = list(), credentials = list(), endpoint = NULL, region = NULL ) Arguments config Optional configuration of credentials, endpoint, and/or region. • credentials: – creds: * access_key_id: AWS access key ID * secret_access_key: AWS secret access key * session_token: AWS temporary session token – profile: The name of a profile to use. If not given, then the default profile is used. – anonymous: Set anonymous credentials. – endpoint: The complete URL to use for the constructed client. – region: The AWS Region used in instantiating the client. • close_connection: Immediately close all HTTP connections. • timeout: The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds. • s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY. • sts_regional_endpoint: Set sts regional endpoint resolver to regional or legacy https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html credentials Optional credentials shorthand for the config parameter • creds: – access_key_id: AWS access key ID – secret_access_key: AWS secret access key – session_token: AWS temporary session token • profile: The name of a profile to use. If not given, then the default profile is used. • anonymous: Set anonymous credentials. endpoint Optional shorthand for complete URL to use for the constructed client. region Optional shorthand for AWS Region used in instantiating the client. Value A client for the service. You can call the service’s operations using syntax like svc$operation(...), where svc is the name you’ve assigned to the client. The available operations are listed in the Operations section.
Service syntax svc <- rdsdataservice( config = list( credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string", close_connection = "logical", timeout = "numeric", s3_force_path_style = "logical", sts_regional_endpoint = "string" ), credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string" ) Operations batch_execute_statement Runs a batch SQL statement over an array of data begin_transaction Starts a SQL transaction commit_transaction Ends a SQL transaction started with the BeginTransaction operation and commits the changes execute_sql Runs one or more SQL statements execute_statement Runs a SQL statement against a database rollback_transaction Performs a rollback of a transaction Examples ## Not run: svc <- rdsdataservice() svc$batch_execute_statement( Foo = 123 ) ## End(Not run) redshift Amazon Redshift Description Overview This is an interface reference for Amazon Redshift. It contains documentation for one of the programming or command line interfaces you can use to manage Amazon Redshift clusters. Note that Amazon Redshift is asynchronous, which means that some interfaces may require techniques, such as polling or asynchronous callback handlers, to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a change is applied immediately, on the next instance reboot, or during the next maintenance window. For a summary of the Amazon Redshift cluster management interfaces, go to Using the Amazon Redshift Management Interfaces.
Amazon Redshift manages all the work of setting up, operating, and scaling a data warehouse: provisioning capacity, monitoring and backing up the cluster, and applying patches and upgrades to the Amazon Redshift engine. You can focus on using your data to acquire new insights for your business and customers. If you are a first-time user of Amazon Redshift, we recommend that you begin by reading the Amazon Redshift Getting Started Guide. If you are a database developer, the Amazon Redshift Database Developer Guide explains how to design, build, query, and maintain the databases that make up your data warehouse. Usage redshift(config = list(), credentials = list(), endpoint = NULL, region = NULL) Arguments config Optional configuration of credentials, endpoint, and/or region. • credentials: – creds: * access_key_id: AWS access key ID * secret_access_key: AWS secret access key * session_token: AWS temporary session token – profile: The name of a profile to use. If not given, then the default profile is used. – anonymous: Set anonymous credentials. – endpoint: The complete URL to use for the constructed client. – region: The AWS Region used in instantiating the client. • close_connection: Immediately close all HTTP connections. • timeout: The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds. • s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY. • sts_regional_endpoint: Set sts regional endpoint resolver to regional or legacy https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html credentials Optional credentials shorthand for the config parameter • creds: – access_key_id: AWS access key ID – secret_access_key: AWS secret access key – session_token: AWS temporary session token • profile: The name of a profile to use. If not given, then the default profile is used. • anonymous: Set anonymous credentials.
endpoint Optional shorthand for complete URL to use for the constructed client. region Optional shorthand for AWS Region used in instantiating the client. Value A client for the service. You can call the service’s operations using syntax like svc$operation(...), where svc is the name you’ve assigned to the client. The available operations are listed in the Operations section. Service syntax svc <- redshift( config = list( credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string", close_connection = "logical", timeout = "numeric", s3_force_path_style = "logical", sts_regional_endpoint = "string" ), credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string" ) Operations accept_reserved_node_exchange Exchanges a DC1 Reserved Node for a DC2 Reserved Node with no c add_partner Adds a partner integration to a cluster associate_data_share_consumer From a datashare consumer account, associates a datashare with the ac authorize_cluster_security_group_ingress Adds an inbound (ingress) rule to an Amazon Redshift security group authorize_data_share From a data producer account, authorizes the sharing of a datashare wi authorize_endpoint_access Grants access to a cluster authorize_snapshot_access Authorizes the specified Amazon Web Services account to restore the batch_delete_cluster_snapshots Deletes a set of cluster snapshots batch_modify_cluster_snapshots Modifies the settings for a set of cluster snapshots cancel_resize Cancels a resize operation for a cluster copy_cluster_snapshot Copies the specified automated cluster snapshot to a new manual clust create_authentication_profile Creates an authentication profile with the specified parameters create_cluster Creates a new cluster with the
specified parameters create_cluster_parameter_group Creates an Amazon Redshift parameter group create_cluster_security_group Creates a new Amazon Redshift security group create_cluster_snapshot Creates a manual snapshot of the specified cluster create_cluster_subnet_group Creates a new Amazon Redshift subnet group create_custom_domain_association Used to create a custom domain name for a cluster create_endpoint_access Creates a Redshift-managed VPC endpoint create_event_subscription Creates an Amazon Redshift event notification subscription create_hsm_client_certificate Creates an HSM client certificate that an Amazon Redshift cluster will create_hsm_configuration Creates an HSM configuration that contains the information required b create_scheduled_action Creates a scheduled action create_snapshot_copy_grant Creates a snapshot copy grant that permits Amazon Redshift to use an create_snapshot_schedule Create a snapshot schedule that can be associated to a cluster and whic create_tags Adds tags to a cluster create_usage_limit Creates a usage limit for a specified Amazon Redshift feature on a clus deauthorize_data_share From a datashare producer account, removes authorization from the sp delete_authentication_profile Deletes an authentication profile delete_cluster Deletes a previously provisioned cluster without its final snapshot bein delete_cluster_parameter_group Deletes a specified Amazon Redshift parameter group delete_cluster_security_group Deletes an Amazon Redshift security group delete_cluster_snapshot Deletes the specified manual snapshot delete_cluster_subnet_group Deletes the specified cluster subnet group delete_custom_domain_association Contains information about deleting a custom domain association for a delete_endpoint_access Deletes a Redshift-managed VPC endpoint delete_event_subscription Deletes an Amazon Redshift event notification subscription delete_hsm_client_certificate Deletes the specified HSM client certificate delete_hsm_configuration 
Deletes the specified Amazon Redshift HSM configuration delete_partner Deletes a partner integration from a cluster delete_scheduled_action Deletes a scheduled action delete_snapshot_copy_grant Deletes the specified snapshot copy grant delete_snapshot_schedule Deletes a snapshot schedule delete_tags Deletes tags from a resource delete_usage_limit Deletes a usage limit from a cluster describe_account_attributes Returns a list of attributes attached to an account describe_authentication_profiles Describes an authentication profile describe_cluster_db_revisions Returns an array of ClusterDbRevision objects describe_cluster_parameter_groups Returns a list of Amazon Redshift parameter groups, including parame describe_cluster_parameters Returns a detailed list of parameters contained within the specified Am describe_clusters Returns properties of provisioned clusters including general cluster pro describe_cluster_security_groups Returns information about Amazon Redshift security groups describe_cluster_snapshots Returns one or more snapshot objects, which contain metadata about y describe_cluster_subnet_groups Returns one or more cluster subnet group objects, which contain metad describe_cluster_tracks Returns a list of all the available maintenance tracks describe_cluster_versions Returns descriptions of the available Amazon Redshift cluster versions describe_custom_domain_associations Contains information for custom domain associations for a cluster describe_data_shares Shows the status of any inbound or outbound datashares available in th describe_data_shares_for_consumer Returns a list of datashares where the account identifier being called is describe_data_shares_for_producer Returns a list of datashares when the account identifier being called is describe_default_cluster_parameters Returns a list of parameter settings for the specified parameter group f describe_endpoint_access Describes a Redshift-managed VPC endpoint describe_endpoint_authorization Describes an 
endpoint authorization describe_event_categories Displays a list of event categories for all event source types, or for a sp describe_events Returns events related to clusters, security groups, snapshots, and para describe_event_subscriptions Lists descriptions of all the Amazon Redshift event notification subscr describe_hsm_client_certificates Returns information about the specified HSM client certificate describe_hsm_configurations Returns information about the specified Amazon Redshift HSM config describe_logging_status Describes whether information, such as queries and connection attemp describe_node_configuration_options Returns properties of possible node configurations such as node type, n describe_orderable_cluster_options Returns a list of orderable cluster options describe_partners Returns information about the partner integrations defined for a cluster describe_reserved_node_exchange_status Returns exchange status details and associated metadata for a reserved describe_reserved_node_offerings Returns a list of the available reserved node offerings by Amazon Red describe_reserved_nodes Returns the descriptions of the reserved nodes describe_resize Returns information about the last resize operation for the specified clu describe_scheduled_actions Describes properties of scheduled actions describe_snapshot_copy_grants Returns a list of snapshot copy grants owned by the Amazon Web Serv describe_snapshot_schedules Returns a list of snapshot schedules describe_storage Returns account level backups storage size and provisional storage describe_table_restore_status Lists the status of one or more table restore requests made using the R describe_tags Returns a list of tags describe_usage_limits Shows usage limits on a cluster disable_logging Stops logging information, such as queries and connection attempts, fo disable_snapshot_copy Disables the automatic copying of snapshots from one region to anothe disassociate_data_share_consumer From a datashare consumer 
account, remove association for the specifi enable_logging Starts logging information, such as queries and connection attempts, fo enable_snapshot_copy Enables the automatic copy of snapshots from one region to another re get_cluster_credentials Returns a database user name and temporary password with temporary get_cluster_credentials_with_iam Returns a database user name and temporary password with temporary get_reserved_node_exchange_configuration_options Gets the configuration options for the reserved-node exchange get_reserved_node_exchange_offerings Returns an array of DC2 ReservedNodeOfferings that matches the pay modify_aqua_configuration This operation is retired modify_authentication_profile Modifies an authentication profile modify_cluster Modifies the settings for a cluster modify_cluster_db_revision Modifies the database revision of a cluster modify_cluster_iam_roles Modifies the list of Identity and Access Management (IAM) roles that modify_cluster_maintenance Modifies the maintenance settings of a cluster modify_cluster_parameter_group Modifies the parameters of a parameter group modify_cluster_snapshot Modifies the settings for a snapshot modify_cluster_snapshot_schedule Modifies a snapshot schedule for a cluster modify_cluster_subnet_group Modifies a cluster subnet group to include the specified list of VPC su modify_custom_domain_association Contains information for changing a custom domain association modify_endpoint_access Modifies a Redshift-managed VPC endpoint modify_event_subscription Modifies an existing Amazon Redshift event notification subscription modify_scheduled_action Modifies a scheduled action modify_snapshot_copy_retention_period Modifies the number of days to retain snapshots in the destination Am modify_snapshot_schedule Modifies a snapshot schedule modify_usage_limit Modifies a usage limit in a cluster pause_cluster Pauses a cluster purchase_reserved_node_offering Allows you to purchase reserved nodes reboot_cluster Reboots a 
cluster reject_data_share From a datashare consumer account, rejects the specified datashare reset_cluster_parameter_group Sets one or more parameters of the specified parameter group to their d resize_cluster Changes the size of the cluster restore_from_cluster_snapshot Creates a new cluster from a snapshot restore_table_from_cluster_snapshot Creates a new table from a table in an Amazon Redshift cluster snapsh resume_cluster Resumes a paused cluster revoke_cluster_security_group_ingress Revokes an ingress rule in an Amazon Redshift security group for a pr revoke_endpoint_access Revokes access to a cluster revoke_snapshot_access Removes the ability of the specified Amazon Web Services account to rotate_encryption_key Rotates the encryption keys for a cluster update_partner_status Updates the status of a partner integration Examples ## Not run: svc <- redshift() svc$accept_reserved_node_exchange( Foo = 123 ) ## End(Not run) redshiftdataapiservice Redshift Data API Service Description You can use the Amazon Redshift Data API to run queries on Amazon Redshift tables. You can run SQL statements, which are committed if the statement succeeds. For more information about the Amazon Redshift Data API and CLI usage examples, see Using the Amazon Redshift Data API in the Amazon Redshift Management Guide. Usage redshiftdataapiservice( config = list(), credentials = list(), endpoint = NULL, region = NULL ) Arguments config Optional configuration of credentials, endpoint, and/or region. • credentials: – creds: * access_key_id: AWS access key ID * secret_access_key: AWS secret access key * session_token: AWS temporary session token – profile: The name of a profile to use. If not given, then the default profile is used. – anonymous: Set anonymous credentials. – endpoint: The complete URL to use for the constructed client. – region: The AWS Region used in instantiating the client. • close_connection: Immediately close all HTTP connections. 
• timeout: The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds. • s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY. • sts_regional_endpoint: Set sts regional endpoint resolver to regional or legacy https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html credentials Optional credentials shorthand for the config parameter • creds: – access_key_id: AWS access key ID – secret_access_key: AWS secret access key – session_token: AWS temporary session token • profile: The name of a profile to use. If not given, then the default profile is used. • anonymous: Set anonymous credentials. endpoint Optional shorthand for complete URL to use for the constructed client. region Optional shorthand for AWS Region used in instantiating the client. Value A client for the service. You can call the service’s operations using syntax like svc$operation(...), where svc is the name you’ve assigned to the client. The available operations are listed in the Operations section.
Service syntax svc <- redshiftdataapiservice( config = list( credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string", close_connection = "logical", timeout = "numeric", s3_force_path_style = "logical", sts_regional_endpoint = "string" ), credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string" ) Operations batch_execute_statement Runs one or more SQL statements, which can be data manipulation language (DML) or data defini cancel_statement Cancels a running query describe_statement Describes the details about a specific instance when a query was run by the Amazon Redshift Data describe_table Describes the detailed information about a table from metadata in the cluster execute_statement Runs an SQL statement, which can be data manipulation language (DML) or data definition langua get_statement_result Fetches the temporarily cached result of an SQL statement list_databases List the databases in a cluster list_schemas Lists the schemas in a database list_statements List of SQL statements list_tables List the tables in a database Examples ## Not run: svc <- redshiftdataapiservice() svc$batch_execute_statement( Foo = 123 ) ## End(Not run) redshiftserverless Redshift Serverless Description This is an interface reference for Amazon Redshift Serverless. It contains documentation for one of the programming or command line interfaces you can use to manage Amazon Redshift Serverless. Amazon Redshift Serverless automatically provisions data warehouse capacity and intelligently scales the underlying resources based on workload demands. 
Amazon Redshift Serverless adjusts capacity in seconds to deliver consistently high performance and simplified operations for even the most demanding and volatile workloads. Amazon Redshift Serverless lets you focus on using your data to acquire new insights for your business and customers. To learn more about Amazon Redshift Serverless, see What is Amazon Redshift Serverless. Usage redshiftserverless( config = list(), credentials = list(), endpoint = NULL, region = NULL ) Arguments config Optional configuration of credentials, endpoint, and/or region. • credentials: – creds: * access_key_id: AWS access key ID * secret_access_key: AWS secret access key * session_token: AWS temporary session token – profile: The name of a profile to use. If not given, then the default profile is used. – anonymous: Set anonymous credentials. – endpoint: The complete URL to use for the constructed client. – region: The AWS Region used in instantiating the client. • close_connection: Immediately close all HTTP connections. • timeout: The time in seconds till a timeout exception is thrown when at- tempting to make a connection. The default is 60 seconds. • s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY. • sts_regional_endpoint: Set sts regional endpoint resolver to regional or legacy https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-e html credentials Optional credentials shorthand for the config parameter • creds: – access_key_id: AWS access key ID – secret_access_key: AWS secret access key – session_token: AWS temporary session token • profile: The name of a profile to use. If not given, then the default profile is used. • anonymous: Set anonymous credentials. endpoint Optional shorthand for complete URL to use for the constructed client. region Optional shorthand for AWS Region used in instantiating the client. Value A client for the service. 
You can call the service’s operations using syntax like svc$operation(...), where svc is the name you’ve assigned to the client. The available operations are listed in the Operations section. Service syntax svc <- redshiftserverless( config = list( credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string", close_connection = "logical", timeout = "numeric", s3_force_path_style = "logical", sts_regional_endpoint = "string" ), credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string" ) Operations convert_recovery_point_to_snapshot Converts a recovery point to a snapshot create_endpoint_access Creates an Amazon Redshift Serverless managed VPC endpoint create_namespace Creates a namespace in Amazon Redshift Serverless create_snapshot Creates a snapshot of all databases in a namespace create_usage_limit Creates a usage limit for a specified Amazon Redshift Serverless usage type create_workgroup Creates a workgroup in Amazon Redshift Serverless delete_endpoint_access Deletes an Amazon Redshift Serverless managed VPC endpoint delete_namespace Deletes a namespace from Amazon Redshift Serverless delete_resource_policy Deletes the specified resource policy delete_snapshot Deletes a snapshot from Amazon Redshift Serverless delete_usage_limit Deletes a usage limit from Amazon Redshift Serverless delete_workgroup Deletes a workgroup get_credentials Returns a database user name and temporary password with temporary authorization to get_endpoint_access Returns information, such as the name, about a VPC endpoint get_namespace Returns information about a namespace in Amazon Redshift Serverless get_recovery_point Returns information about a recovery point get_resource_policy Returns a resource
policy get_snapshot Returns information about a specific snapshot get_table_restore_status Returns information about a TableRestoreStatus object get_usage_limit Returns information about a usage limit get_workgroup Returns information about a specific workgroup list_endpoint_access Returns an array of EndpointAccess objects and relevant information list_namespaces Returns information about a list of specified namespaces list_recovery_points Returns an array of recovery points list_snapshots Returns a list of snapshots list_table_restore_status Returns information about an array of TableRestoreStatus objects list_tags_for_resource Lists the tags assigned to a resource list_usage_limits Lists all usage limits within Amazon Redshift Serverless list_workgroups Returns information about a list of specified workgroups put_resource_policy Creates or updates a resource policy restore_from_recovery_point Restore the data from a recovery point restore_from_snapshot Restores a namespace from a snapshot restore_table_from_snapshot Restores a table from a snapshot to your Amazon Redshift Serverless instance tag_resource Assigns one or more tags to a resource untag_resource Removes a tag or set of tags from a resource update_endpoint_access Updates an Amazon Redshift Serverless managed endpoint update_namespace Updates a namespace with the specified settings update_snapshot Updates a snapshot update_usage_limit Update a usage limit in Amazon Redshift Serverless update_workgroup Updates a workgroup with the specified configuration settings Examples ## Not run: svc <- redshiftserverless() svc$convert_recovery_point_to_snapshot( Foo = 123 ) ## End(Not run) simpledb Amazon SimpleDB Description Amazon SimpleDB is a web service providing the core database functions of data indexing and querying in the cloud. By offloading the time and effort associated with building and operating a web-scale database, SimpleDB provides developers the freedom to focus on application develop- ment. 
A traditional, clustered relational database requires a sizable upfront capital outlay, is complex to design, and often requires extensive and repetitive database administration. Amazon SimpleDB is dramatically simpler, requiring no schema, automatically indexing your data and providing a simple API for storage and access. This approach eliminates the administrative burden of data modeling, index maintenance, and performance tuning. Developers gain access to this functionality within Amazon’s proven computing environment, are able to scale instantly, and pay only for what they use. Visit http://aws.amazon.com/simpledb/ for more information. Usage simpledb(config = list(), credentials = list(), endpoint = NULL, region = NULL) Arguments config Optional configuration of credentials, endpoint, and/or region. • credentials: – creds: * access_key_id: AWS access key ID * secret_access_key: AWS secret access key * session_token: AWS temporary session token – profile: The name of a profile to use. If not given, then the default profile is used. – anonymous: Set anonymous credentials. – endpoint: The complete URL to use for the constructed client. – region: The AWS Region used in instantiating the client. • close_connection: Immediately close all HTTP connections. • timeout: The time in seconds till a timeout exception is thrown when at- tempting to make a connection. The default is 60 seconds. • s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY. • sts_regional_endpoint: Set sts regional endpoint resolver to regional or legacy https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-e html credentials Optional credentials shorthand for the config parameter • creds: – access_key_id: AWS access key ID – secret_access_key: AWS secret access key – session_token: AWS temporary session token • profile: The name of a profile to use. If not given, then the default profile is used. 
• anonymous: Set anonymous credentials. endpoint Optional shorthand for complete URL to use for the constructed client. region Optional shorthand for AWS Region used in instantiating the client. Value A client for the service. You can call the service’s operations using syntax like svc$operation(...), where svc is the name you’ve assigned to the client. The available operations are listed in the Op- erations section. Service syntax svc <- simpledb( config = list( credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string", close_connection = "logical", timeout = "numeric", s3_force_path_style = "logical", sts_regional_endpoint = "string" ), credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string" ) Operations batch_delete_attributes Performs multiple DeleteAttributes operations in a single call, which reduces round trips and latencie batch_put_attributes The BatchPutAttributes operation creates or replaces attributes within one or more items create_domain The CreateDomain operation creates a new domain delete_attributes Deletes one or more attributes associated with an item delete_domain The DeleteDomain operation deletes a domain domain_metadata Returns information about the domain, including when the domain was created, the number of items get_attributes Returns all of the attributes associated with the specified item list_domains The ListDomains operation lists all domains associated with the Access Key ID put_attributes The PutAttributes operation creates or replaces attributes in an item select The Select operation returns a set of attributes for ItemNames that match the select expression Examples ## Not run: svc <- simpledb() svc$batch_delete_attributes( Foo = 123 ) ## End(Not 
run) timestreamquery Amazon Timestream Query Description Amazon Timestream Query Usage timestreamquery( config = list(), credentials = list(), endpoint = NULL, region = NULL ) Arguments config Optional configuration of credentials, endpoint, and/or region. • credentials: – creds: * access_key_id: AWS access key ID * secret_access_key: AWS secret access key * session_token: AWS temporary session token – profile: The name of a profile to use. If not given, then the default profile is used. – anonymous: Set anonymous credentials. – endpoint: The complete URL to use for the constructed client. – region: The AWS Region used in instantiating the client. • close_connection: Immediately close all HTTP connections. • timeout: The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds. • s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY. • sts_regional_endpoint: Set sts regional endpoint resolver to regional or legacy https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html credentials Optional credentials shorthand for the config parameter • creds: – access_key_id: AWS access key ID – secret_access_key: AWS secret access key – session_token: AWS temporary session token • profile: The name of a profile to use. If not given, then the default profile is used. • anonymous: Set anonymous credentials. endpoint Optional shorthand for complete URL to use for the constructed client. region Optional shorthand for AWS Region used in instantiating the client. Value A client for the service. You can call the service’s operations using syntax like svc$operation(...), where svc is the name you’ve assigned to the client. The available operations are listed in the Operations section.
Service syntax svc <- timestreamquery( config = list( credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string", close_connection = "logical", timeout = "numeric", s3_force_path_style = "logical", sts_regional_endpoint = "string" ), credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string" ) Operations cancel_query Cancels a query that has been issued create_scheduled_query Create a scheduled query that will be run on your behalf at the configured schedule delete_scheduled_query Deletes a given scheduled query describe_endpoints DescribeEndpoints returns a list of available endpoints to make Timestream API calls against describe_scheduled_query Provides detailed information about a scheduled query execute_scheduled_query You can use this API to run a scheduled query manually list_scheduled_queries Gets a list of all scheduled queries in the caller’s Amazon account and Region list_tags_for_resource List all tags on a Timestream query resource prepare_query A synchronous operation that allows you to submit a query with parameters to be stored by Time query Query is a synchronous operation that enables you to run a query against your Amazon Timestre tag_resource Associate a set of tags with a Timestream resource untag_resource Removes the association of tags from a Timestream query resource update_scheduled_query Update a scheduled query Examples ## Not run: svc <- timestreamquery() svc$cancel_query( Foo = 123 ) ## End(Not run) timestreamwrite Amazon Timestream Write Description Amazon Timestream is a fast, scalable, fully managed time-series database service that makes it easy to store and analyze trillions of time-series data points per day. 
With Timestream, you can eas- ily store and analyze IoT sensor data to derive insights from your IoT applications. You can analyze industrial telemetry to streamline equipment management and maintenance. You can also store and analyze log data and metrics to improve the performance and availability of your applications. Timestream is built from the ground up to effectively ingest, process, and store time-series data. It organizes data to optimize query processing. It automatically scales based on the volume of data ingested and on the query volume to ensure you receive optimal performance while inserting and querying data. As your data grows over time, Timestream’s adaptive query processing engine spans across storage tiers to provide fast analysis while reducing costs. Usage timestreamwrite( config = list(), credentials = list(), endpoint = NULL, region = NULL ) Arguments config Optional configuration of credentials, endpoint, and/or region. • credentials: – creds: * access_key_id: AWS access key ID * secret_access_key: AWS secret access key * session_token: AWS temporary session token – profile: The name of a profile to use. If not given, then the default profile is used. – anonymous: Set anonymous credentials. – endpoint: The complete URL to use for the constructed client. – region: The AWS Region used in instantiating the client. • close_connection: Immediately close all HTTP connections. • timeout: The time in seconds till a timeout exception is thrown when at- tempting to make a connection. The default is 60 seconds. • s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY. 
• sts_regional_endpoint: Set sts regional endpoint resolver to regional or legacy https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-e html credentials Optional credentials shorthand for the config parameter • creds: – access_key_id: AWS access key ID – secret_access_key: AWS secret access key – session_token: AWS temporary session token • profile: The name of a profile to use. If not given, then the default profile is used. • anonymous: Set anonymous credentials. endpoint Optional shorthand for complete URL to use for the constructed client. region Optional shorthand for AWS Region used in instantiating the client. Value A client for the service. You can call the service’s operations using syntax like svc$operation(...), where svc is the name you’ve assigned to the client. The available operations are listed in the Op- erations section. Service syntax svc <- timestreamwrite( config = list( credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string", close_connection = "logical", timeout = "numeric", s3_force_path_style = "logical", sts_regional_endpoint = "string" ), credentials = list( creds = list( access_key_id = "string", secret_access_key = "string", session_token = "string" ), profile = "string", anonymous = "logical" ), endpoint = "string", region = "string" ) Operations create_batch_load_task Creates a new Timestream batch load task create_database Creates a new Timestream database create_table Adds a new table to an existing database in your account delete_database Deletes a given Timestream database delete_table Deletes a given Timestream table describe_batch_load_task Returns information about the batch load task, including configurations, mappings, progress, and describe_database Returns information about the database, including the database name, time that the database was 
describe_endpoints Returns a list of available endpoints to make Timestream API calls against describe_table Returns information about the table, including the table name, database name, retention duration list_batch_load_tasks Provides a list of batch load tasks, along with the name, status, when the task is resumable until, a list_databases Returns a list of your Timestream databases list_tables Provides a list of tables, along with the name, status, and retention properties of each table list_tags_for_resource Lists all tags on a Timestream resource resume_batch_load_task Resume batch load task tag_resource Associates a set of tags with a Timestream resource untag_resource Removes the association of tags from a Timestream resource update_database Modifies the KMS key for an existing database update_table Modifies the retention duration of the memory store and magnetic store for your Timestream tabl write_records Enables you to write your time-series data into Timestream Examples ## Not run: svc <- timestreamwrite() svc$create_batch_load_task( Foo = 123 ) ## End(Not run)
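As with the manual's own examples, the sketch below cannot run without live AWS credentials and an existing Timestream database. The database name, table name, and record fields are illustrative assumptions based on the shape of the underlying WriteRecords API, not values taken from this manual:

## Not run:
svc <- timestreamwrite()
svc$write_records(
  DatabaseName = "sensors",       # hypothetical database
  TableName = "temperature",      # hypothetical table
  Records = list(
    list(
      Dimensions = list(list(Name = "device_id", Value = "dev-001")),
      MeasureName = "temp_c",
      MeasureValue = "21.5",
      MeasureValueType = "DOUBLE",
      # Record time as epoch milliseconds, passed as a string
      Time = sprintf("%.0f", as.numeric(Sys.time()) * 1000)
    )
  )
)
## End(Not run)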
Package ‘cforward’ October 12, 2022 Title Forward Selection using Concordance/C-Index Version 0.1.0 Description Performs forward model selection, using the C-index/concordance in survival analysis models. License GPL-3 Encoding UTF-8 LazyData true RoxygenNote 7.1.1 Imports survival, dplyr, stats, magrittr, tibble URL https://github.com/muschellij2/cforward BugReports https://github.com/muschellij2/cforward/issues Depends R (>= 2.10) Suggests testthat NeedsCompilation no Author <NAME> [aut, cre] (<https://orcid.org/0000-0001-6469-1750>), <NAME> [aut] Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2021-03-29 14:20:08 UTC R topics documented: cforward, estimate_concordance, nhanes_example cforward Forward Selection Based on C-Index/Concordance Description Forward Selection Based on C-Index/Concordance Usage cforward( data, event_time = "event_time_years", event_status = "mortstat", weight_column = "WTMEC4YR_norm", variables = NULL, included_variables = NULL, n_folds = 10, seed = 1989, max_model_size = 50, c_threshold = NULL, verbose = TRUE, cfit_args = list(), save_memory = FALSE, ... ) cforward_one( data, event_time = "event_time_years", event_status = "mortstat", weight_column = "WTMEC4YR_norm", variables, included_variables = NULL, verbose = TRUE, cfit_args = list(), save_memory = FALSE, ... ) make_folds(data, event_status = "mortstat", n_folds = 10, verbose = TRUE) Arguments data A data set to perform model selection and cross-validation. event_time Character vector of length 1 with event times, passed to Surv event_status Character vector of length 1 with event status, passed to Surv weight_column Character vector of length 1 with weights for model. If no weights are available, set to NULL variables Character vector of variables to perform selection. Must be in data. included_variables Character vector of variables forced to have in the model. Must be in data n_folds Number of folds for Cross-validation.
If you want to run on the full data, set to 1 seed Seed set before folds are created. max_model_size maximum number of variables in the model. Selection will stop if reached. Note, this does not correspond to the number of coefficients, due to categorical variables. c_threshold threshold for concordance. If the difference in the best concordance and this one does not reach a certain threshold, break. verbose print diagnostic messages cfit_args Arguments passed to concordancefit. If strata is to be passed, set strata_column in this list. save_memory save only a minimal amount of information, discard the fitted models ... Additional arguments to pass to coxph Value A list of lists, with elements of: full_concordance Concordance when fit on the full data models Cox model from full data set fit, stripped of large memory elements cv_concordance Cross-validated Concordance included_variables Variables included in the model, other than those being selection upon Examples variables = c("gender", "age_years_interview", "education_adult") res = cforward(nhanes_example, event_time = "event_time_years", event_status = "mortstat", weight_column = "WTMEC4YR_norm", variables = variables, included_variables = NULL, n_folds = 5, c_threshold = 0.02, seed = 1989, max_model_size = 50, verbose = TRUE) conc = sapply(res, `[[`, "best_concordance") res = cforward(nhanes_example, event_time = "event_time_years", event_status = "mortstat", weight_column = "WTMEC4YR_norm", variables = variables, included_variables = NULL, n_folds = 5, seed = 1989, max_model_size = 50, verbose = TRUE) conc = sapply(res, `[[`, "best_concordance") threshold = 0.01 included_variables = names(conc)[c(1, diff(conc)) > threshold] new_variables = c("diabetes", "stroke") second_level = cforward(nhanes_example, event_time = "event_time_years", event_status = "mortstat", weight_column = "WTMEC4YR_norm", variables = new_variables, included_variables = included_variables, n_folds = 5, seed = 1989, max_model_size = 50, 
                        verbose = TRUE)
second_conc = sapply(second_level, `[[`, "best_concordance")
result = second_level[[which.max(second_conc)]]
final_model = result$models[[which.max(result$cv_concordance)]]

estimate_concordance: Estimate Out-of-Sample Concordance

Description

Estimate Out-of-Sample Concordance

Usage

estimate_concordance(
  train,
  test = train,
  event_time = "event_time_years",
  event_status = "mortstat",
  weight_column = "WTMEC4YR_norm",
  all_variables = NULL,
  cfit_args = list(),
  ...
)

Arguments

train: A data set to perform model training on.
test: A data set to estimate concordance on, using the model fit with train. Set to train if estimating on the same data.
event_time: Character vector of length 1 with event times, passed to Surv.
event_status: Character vector of length 1 with event status, passed to Surv.
weight_column: Character vector of length 1 with weights for the model. If no weights are available, set to NULL.
all_variables: Character vector of variables to put in the model. All must be in data.
cfit_args: Arguments passed to concordancefit. If strata is to be passed, set strata_column in this list.
...: Additional arguments to pass to coxph.

Value

A list of the concordance and the model fit with the training data.

nhanes_example: Example Data from the National Health and Nutrition Examination Survey (’NHANES’)

Description

Example Data from the National Health and Nutrition Examination Survey (’NHANES’)

Usage

nhanes_example

Format

A data.frame with 7 columns, which are:

SEQN: ID of participant
mortstat: mortality status; 1 = died, 0 = censored
event_time_years: time observed
WTMEC4YR_norm: weights normalized for survey
gender: gender
age_years_interview: age in years at interview
education_adult: educational status
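Both cforward and estimate_concordance score models by the concordance (Harrell's C-index): among usable subject pairs, i.e. pairs where the subject with the shorter observed time actually had the event, it is the fraction in which that subject also received the higher predicted risk. The package computes this via survival's concordancefit; as a language-agnostic illustration only, here is a minimal sketch (the `harrellC` helper is hypothetical and not part of the package; it ignores weights and strata):

```go
package main

import "fmt"

// harrellC computes Harrell's concordance index for right-censored
// survival data. A pair (i, j) is usable when subject i has the
// earlier observed time and an observed event; it is concordant when
// i also has the higher predicted risk, and ties in risk count half.
func harrellC(time []float64, event []bool, risk []float64) float64 {
	var conc, ties, usable float64
	n := len(time)
	for i := 0; i < n; i++ {
		for j := 0; j < n; j++ {
			// i must have the earlier time and an observed event.
			if !event[i] || time[i] >= time[j] {
				continue
			}
			usable++
			switch {
			case risk[i] > risk[j]:
				conc++
			case risk[i] == risk[j]:
				ties++
			}
		}
	}
	return (conc + 0.5*ties) / usable
}

func main() {
	time := []float64{2, 4, 6, 8}
	event := []bool{true, true, false, true} // third subject is censored
	risk := []float64{0.9, 0.7, 0.5, 0.1}    // higher risk, earlier event
	fmt.Println(harrellC(time, event, risk)) // prints 1: perfect concordance
}
```

Forward selection, as in cforward, simply adds at each step the candidate variable whose model attains the highest cross-validated value of this quantity.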
github.com/amimof/huego
README [¶](#section-readme)

---

[![Go](https://github.com/amimof/huego/actions/workflows/go.yaml/badge.svg)](https://github.com/amimof/huego/actions/workflows/go.yaml) [![Go Report Card](https://goreportcard.com/badge/github.com/amimof/huego)](https://goreportcard.com/report/github.com/amimof/huego) [![codecov](https://codecov.io/gh/amimof/huego/branch/master/graph/badge.svg)](https://codecov.io/gh/amimof/huego) [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/avelino/awesome-go)

### Huego

An extensive Philips Hue client library for [`Go`](https://golang.org/) with an emphasis on simplicity. It is designed to be clean, unbloated and extensible. With `Huego` you can interact with any Philips Hue bridge and its resources, including `Lights`, `Groups`, `Scenes`, `Sensors`, `Rules`, `Schedules`, `Resourcelinks`, `Capabilities` and `Configuration`.

![](https://github.com/amimof/huego/raw/v1.2.1/logo/logo.png)

#### Installation

Get the package and import it in your code.

```
go get github.com/amimof/huego
```

You may use [`New()`](https://godoc.org/github.com/amimof/huego#New) if you have already created a user and know the IP address of your bridge.

```
package main

import (
	"fmt"

	"github.com/amimof/huego"
)

func main() {
	bridge := huego.New("192.168.1.59", "username")
	l, err := bridge.GetLights()
	if err != nil {
		panic(err)
	}
	fmt.Printf("Found %d lights", len(l))
}
```

Or discover a bridge on your network with [`Discover()`](https://godoc.org/github.com/amimof/huego#Discover) and create a new user with [`CreateUser()`](https://godoc.org/github.com/amimof/huego#Bridge.CreateUser). To successfully create a user, the link button on your bridge must have been pressed before calling `CreateUser()` in order to authorise the request.
```
func main() {
	bridge, _ := huego.Discover()
	user, _ := bridge.CreateUser("my awesome hue app") // Link button needs to be pressed
	bridge = bridge.Login(user)
	light, _ := bridge.GetLight(3)
	light.Off()
}
```

#### Documentation

See [godoc.org/github.com/amimof/huego](https://godoc.org/github.com/amimof/huego) for the full package documentation.

#### Contributing

All help in any form is highly appreciated and you are welcome to participate in developing `Huego` together. To contribute, submit a `Pull Request`. If you want to provide feedback, open up a Github `Issue` or contact me personally.

Documentation [¶](#section-documentation)

---

### Overview [¶](#pkg-overview)

Package huego provides an extensive, easy to use interface to the Philips Hue bridge.

### Index [¶](#pkg-index)

* [func ConvertRGBToXy(newcolor color.Color) ([]float32, uint8)](#ConvertRGBToXy) * [type APIError](#APIError) * + [func (a *APIError) Error() string](#APIError.Error) + [func (a *APIError) UnmarshalJSON(data []byte) error](#APIError.UnmarshalJSON) * [type APIResponse](#APIResponse) * [type AutoInstall](#AutoInstall) * [type Backup](#Backup) * [type Bridge](#Bridge) * + [func Discover() (*Bridge, error)](#Discover) + [func DiscoverAll() ([]Bridge, error)](#DiscoverAll) + [func DiscoverAllContext(ctx context.Context) ([]Bridge, error)](#DiscoverAllContext) + [func DiscoverContext(ctx context.Context) (*Bridge, error)](#DiscoverContext) + [func New(h, u string) *Bridge](#New) * + [func (b *Bridge) CreateGroup(g Group) (*Response, error)](#Bridge.CreateGroup) + [func (b *Bridge) CreateGroupContext(ctx context.Context, g Group) (*Response, error)](#Bridge.CreateGroupContext) + [func (b *Bridge) CreateResourcelink(s *Resourcelink) (*Response, error)](#Bridge.CreateResourcelink) + [func (b *Bridge) CreateResourcelinkContext(ctx context.Context, s *Resourcelink) (*Response, error)](#Bridge.CreateResourcelinkContext) + [func (b *Bridge) CreateRule(s *Rule) (*Response, error)](#Bridge.CreateRule) +
[func (b *Bridge) CreateRuleContext(ctx context.Context, s *Rule) (*Response, error)](#Bridge.CreateRuleContext) + [func (b *Bridge) CreateScene(s *Scene) (*Response, error)](#Bridge.CreateScene) + [func (b *Bridge) CreateSceneContext(ctx context.Context, s *Scene) (*Response, error)](#Bridge.CreateSceneContext) + [func (b *Bridge) CreateSchedule(s *Schedule) (*Response, error)](#Bridge.CreateSchedule) + [func (b *Bridge) CreateScheduleContext(ctx context.Context, s *Schedule) (*Response, error)](#Bridge.CreateScheduleContext) + [func (b *Bridge) CreateSensor(s *Sensor) (*Response, error)](#Bridge.CreateSensor) + [func (b *Bridge) CreateSensorContext(ctx context.Context, s *Sensor) (*Response, error)](#Bridge.CreateSensorContext) + [func (b *Bridge) CreateUser(n string) (string, error)](#Bridge.CreateUser) + [func (b *Bridge) CreateUserContext(ctx context.Context, n string) (string, error)](#Bridge.CreateUserContext) + [func (b *Bridge) CreateUserWithClientKey(deviceType string) (*Whitelist, error)](#Bridge.CreateUserWithClientKey) + [func (b *Bridge) CreateUserWithClientKeyContext(ctx context.Context, deviceType string) (*Whitelist, error)](#Bridge.CreateUserWithClientKeyContext) + [func (b *Bridge) DeleteGroup(i int) error](#Bridge.DeleteGroup) + [func (b *Bridge) DeleteGroupContext(ctx context.Context, i int) error](#Bridge.DeleteGroupContext) + [func (b *Bridge) DeleteLight(i int) error](#Bridge.DeleteLight) + [func (b *Bridge) DeleteLightContext(ctx context.Context, i int) error](#Bridge.DeleteLightContext) + [func (b *Bridge) DeleteResourcelink(i int) error](#Bridge.DeleteResourcelink) + [func (b *Bridge) DeleteResourcelinkContext(ctx context.Context, i int) error](#Bridge.DeleteResourcelinkContext) + [func (b *Bridge) DeleteRule(i int) error](#Bridge.DeleteRule) + [func (b *Bridge) DeleteRuleContext(ctx context.Context, i int) error](#Bridge.DeleteRuleContext) + [func (b *Bridge) DeleteScene(id string) error](#Bridge.DeleteScene) + [func (b *Bridge) 
DeleteSceneContext(ctx context.Context, id string) error](#Bridge.DeleteSceneContext) + [func (b *Bridge) DeleteSchedule(i int) error](#Bridge.DeleteSchedule) + [func (b *Bridge) DeleteScheduleContext(ctx context.Context, i int) error](#Bridge.DeleteScheduleContext) + [func (b *Bridge) DeleteSensor(i int) error](#Bridge.DeleteSensor) + [func (b *Bridge) DeleteSensorContext(ctx context.Context, i int) error](#Bridge.DeleteSensorContext) + [func (b *Bridge) DeleteUser(n string) error](#Bridge.DeleteUser) + [func (b *Bridge) DeleteUserContext(ctx context.Context, n string) error](#Bridge.DeleteUserContext) + [func (b *Bridge) FindLights() (*Response, error)](#Bridge.FindLights) + [func (b *Bridge) FindLightsContext(ctx context.Context) (*Response, error)](#Bridge.FindLightsContext) + [func (b *Bridge) FindSensors() (*Response, error)](#Bridge.FindSensors) + [func (b *Bridge) FindSensorsContext(ctx context.Context) (*Response, error)](#Bridge.FindSensorsContext) + [func (b *Bridge) GetCapabilities() (*Capabilities, error)](#Bridge.GetCapabilities) + [func (b *Bridge) GetCapabilitiesContext(ctx context.Context) (*Capabilities, error)](#Bridge.GetCapabilitiesContext) + [func (b *Bridge) GetConfig() (*Config, error)](#Bridge.GetConfig) + [func (b *Bridge) GetConfigContext(ctx context.Context) (*Config, error)](#Bridge.GetConfigContext) + [func (b *Bridge) GetFullState() (map[string]interface{}, error)](#Bridge.GetFullState) + [func (b *Bridge) GetFullStateContext(ctx context.Context) (map[string]interface{}, error)](#Bridge.GetFullStateContext) + [func (b *Bridge) GetGroup(i int) (*Group, error)](#Bridge.GetGroup) + [func (b *Bridge) GetGroupContext(ctx context.Context, i int) (*Group, error)](#Bridge.GetGroupContext) + [func (b *Bridge) GetGroups() ([]Group, error)](#Bridge.GetGroups) + [func (b *Bridge) GetGroupsContext(ctx context.Context) ([]Group, error)](#Bridge.GetGroupsContext) + [func (b *Bridge) GetLight(i int) (*Light, error)](#Bridge.GetLight) + [func (b 
*Bridge) GetLightContext(ctx context.Context, i int) (*Light, error)](#Bridge.GetLightContext) + [func (b *Bridge) GetLights() ([]Light, error)](#Bridge.GetLights) + [func (b *Bridge) GetLightsContext(ctx context.Context) ([]Light, error)](#Bridge.GetLightsContext) + [func (b *Bridge) GetNewLights() (*NewLight, error)](#Bridge.GetNewLights) + [func (b *Bridge) GetNewLightsContext(ctx context.Context) (*NewLight, error)](#Bridge.GetNewLightsContext) + [func (b *Bridge) GetNewSensors() (*NewSensor, error)](#Bridge.GetNewSensors) + [func (b *Bridge) GetNewSensorsContext(ctx context.Context) (*NewSensor, error)](#Bridge.GetNewSensorsContext) + [func (b *Bridge) GetResourcelink(i int) (*Resourcelink, error)](#Bridge.GetResourcelink) + [func (b *Bridge) GetResourcelinkContext(ctx context.Context, i int) (*Resourcelink, error)](#Bridge.GetResourcelinkContext) + [func (b *Bridge) GetResourcelinks() ([]*Resourcelink, error)](#Bridge.GetResourcelinks) + [func (b *Bridge) GetResourcelinksContext(ctx context.Context) ([]*Resourcelink, error)](#Bridge.GetResourcelinksContext) + [func (b *Bridge) GetRule(i int) (*Rule, error)](#Bridge.GetRule) + [func (b *Bridge) GetRuleContext(ctx context.Context, i int) (*Rule, error)](#Bridge.GetRuleContext) + [func (b *Bridge) GetRules() ([]*Rule, error)](#Bridge.GetRules) + [func (b *Bridge) GetRulesContext(ctx context.Context) ([]*Rule, error)](#Bridge.GetRulesContext) + [func (b *Bridge) GetScene(i string) (*Scene, error)](#Bridge.GetScene) + [func (b *Bridge) GetSceneContext(ctx context.Context, i string) (*Scene, error)](#Bridge.GetSceneContext) + [func (b *Bridge) GetScenes() ([]Scene, error)](#Bridge.GetScenes) + [func (b *Bridge) GetScenesContext(ctx context.Context) ([]Scene, error)](#Bridge.GetScenesContext) + [func (b *Bridge) GetSchedule(i int) (*Schedule, error)](#Bridge.GetSchedule) + [func (b *Bridge) GetScheduleContext(ctx context.Context, i int) (*Schedule, error)](#Bridge.GetScheduleContext) + [func (b *Bridge) 
GetSchedules() ([]*Schedule, error)](#Bridge.GetSchedules) + [func (b *Bridge) GetSchedulesContext(ctx context.Context) ([]*Schedule, error)](#Bridge.GetSchedulesContext) + [func (b *Bridge) GetSensor(i int) (*Sensor, error)](#Bridge.GetSensor) + [func (b *Bridge) GetSensorContext(ctx context.Context, i int) (*Sensor, error)](#Bridge.GetSensorContext) + [func (b *Bridge) GetSensors() ([]Sensor, error)](#Bridge.GetSensors) + [func (b *Bridge) GetSensorsContext(ctx context.Context) ([]Sensor, error)](#Bridge.GetSensorsContext) + [func (b *Bridge) GetUsers() ([]Whitelist, error)](#Bridge.GetUsers) + [func (b *Bridge) IdentifyLight(i int) (*Response, error)](#Bridge.IdentifyLight) + [func (b *Bridge) IdentifyLightContext(ctx context.Context, i int) (*Response, error)](#Bridge.IdentifyLightContext) + [func (b *Bridge) Login(u string) *Bridge](#Bridge.Login) + [func (b *Bridge) RecallScene(id string, gid int) (*Response, error)](#Bridge.RecallScene) + [func (b *Bridge) RecallSceneContext(ctx context.Context, id string, gid int) (*Response, error)](#Bridge.RecallSceneContext) + [func (b *Bridge) SetGroupState(i int, l State) (*Response, error)](#Bridge.SetGroupState) + [func (b *Bridge) SetGroupStateContext(ctx context.Context, i int, l State) (*Response, error)](#Bridge.SetGroupStateContext) + [func (b *Bridge) SetLightState(i int, l State) (*Response, error)](#Bridge.SetLightState) + [func (b *Bridge) SetLightStateContext(ctx context.Context, i int, l State) (*Response, error)](#Bridge.SetLightStateContext) + [func (b *Bridge) SetSceneLightState(id string, iid int, l *State) (*Response, error)](#Bridge.SetSceneLightState) + [func (b *Bridge) SetSceneLightStateContext(ctx context.Context, id string, iid int, l *State) (*Response, error)](#Bridge.SetSceneLightStateContext) + [func (b *Bridge) UpdateConfig(c *Config) (*Response, error)](#Bridge.UpdateConfig) + [func (b *Bridge) UpdateConfigContext(ctx context.Context, c *Config) (*Response, 
error)](#Bridge.UpdateConfigContext) + [func (b *Bridge) UpdateGroup(i int, l Group) (*Response, error)](#Bridge.UpdateGroup) + [func (b *Bridge) UpdateGroupContext(ctx context.Context, i int, l Group) (*Response, error)](#Bridge.UpdateGroupContext) + [func (b *Bridge) UpdateLight(i int, light Light) (*Response, error)](#Bridge.UpdateLight) + [func (b *Bridge) UpdateLightContext(ctx context.Context, i int, light Light) (*Response, error)](#Bridge.UpdateLightContext) + [func (b *Bridge) UpdateResourcelink(i int, resourcelink *Resourcelink) (*Response, error)](#Bridge.UpdateResourcelink) + [func (b *Bridge) UpdateResourcelinkContext(ctx context.Context, i int, resourcelink *Resourcelink) (*Response, error)](#Bridge.UpdateResourcelinkContext) + [func (b *Bridge) UpdateRule(i int, rule *Rule) (*Response, error)](#Bridge.UpdateRule) + [func (b *Bridge) UpdateRuleContext(ctx context.Context, i int, rule *Rule) (*Response, error)](#Bridge.UpdateRuleContext) + [func (b *Bridge) UpdateScene(id string, s *Scene) (*Response, error)](#Bridge.UpdateScene) + [func (b *Bridge) UpdateSceneContext(ctx context.Context, id string, s *Scene) (*Response, error)](#Bridge.UpdateSceneContext) + [func (b *Bridge) UpdateSchedule(i int, schedule *Schedule) (*Response, error)](#Bridge.UpdateSchedule) + [func (b *Bridge) UpdateScheduleContext(ctx context.Context, i int, schedule *Schedule) (*Response, error)](#Bridge.UpdateScheduleContext) + [func (b *Bridge) UpdateSensor(i int, sensor *Sensor) (*Response, error)](#Bridge.UpdateSensor) + [func (b *Bridge) UpdateSensorConfig(i int, c interface{}) (*Response, error)](#Bridge.UpdateSensorConfig) + [func (b *Bridge) UpdateSensorConfigContext(ctx context.Context, i int, c interface{}) (*Response, error)](#Bridge.UpdateSensorConfigContext) + [func (b *Bridge) UpdateSensorContext(ctx context.Context, i int, sensor *Sensor) (*Response, error)](#Bridge.UpdateSensorContext) * [type BridgeConfig](#BridgeConfig) * [type Capabilities](#Capabilities) * 
[type Capability](#Capability) * [type Command](#Command) * [type Condition](#Condition) * [type Config](#Config) * [type DeviceTypes](#DeviceTypes) * [type Group](#Group) * + [func (g *Group) Alert(new string) error](#Group.Alert) + [func (g *Group) AlertContext(ctx context.Context, new string) error](#Group.AlertContext) + [func (g *Group) Bri(new uint8) error](#Group.Bri) + [func (g *Group) BriContext(ctx context.Context, new uint8) error](#Group.BriContext) + [func (g *Group) Col(new color.Color) error](#Group.Col) + [func (g *Group) ColContext(ctx context.Context, new color.Color) error](#Group.ColContext) + [func (g *Group) Ct(new uint16) error](#Group.Ct) + [func (g *Group) CtContext(ctx context.Context, new uint16) error](#Group.CtContext) + [func (g *Group) DisableStreaming() error](#Group.DisableStreaming) + [func (g *Group) DisableStreamingContext(ctx context.Context) error](#Group.DisableStreamingContext) + [func (g *Group) Effect(new string) error](#Group.Effect) + [func (g *Group) EffectContext(ctx context.Context, new string) error](#Group.EffectContext) + [func (g *Group) EnableStreaming() error](#Group.EnableStreaming) + [func (g *Group) EnableStreamingContext(ctx context.Context) error](#Group.EnableStreamingContext) + [func (g *Group) Hue(new uint16) error](#Group.Hue) + [func (g *Group) HueContext(ctx context.Context, new uint16) error](#Group.HueContext) + [func (g *Group) IsOn() bool](#Group.IsOn) + [func (g *Group) Off() error](#Group.Off) + [func (g *Group) OffContext(ctx context.Context) error](#Group.OffContext) + [func (g *Group) On() error](#Group.On) + [func (g *Group) OnContext(ctx context.Context) error](#Group.OnContext) + [func (g *Group) Rename(new string) error](#Group.Rename) + [func (g *Group) RenameContext(ctx context.Context, new string) error](#Group.RenameContext) + [func (g *Group) Sat(new uint8) error](#Group.Sat) + [func (g *Group) SatContext(ctx context.Context, new uint8) error](#Group.SatContext) + [func (g *Group) 
Scene(scene string) error](#Group.Scene) + [func (g *Group) SceneContext(ctx context.Context, scene string) error](#Group.SceneContext) + [func (g *Group) SetState(s State) error](#Group.SetState) + [func (g *Group) SetStateContext(ctx context.Context, s State) error](#Group.SetStateContext) + [func (g *Group) TransitionTime(new uint16) error](#Group.TransitionTime) + [func (g *Group) TransitionTimeContext(ctx context.Context, new uint16) error](#Group.TransitionTimeContext) + [func (g *Group) Xy(new []float32) error](#Group.Xy) + [func (g *Group) XyContext(ctx context.Context, new []float32) error](#Group.XyContext) * [type GroupState](#GroupState) * [type InternetService](#InternetService) * [type Light](#Light) * + [func (l *Light) Alert(new string) error](#Light.Alert) + [func (l *Light) AlertContext(ctx context.Context, new string) error](#Light.AlertContext) + [func (l *Light) Bri(new uint8) error](#Light.Bri) + [func (l *Light) BriContext(ctx context.Context, new uint8) error](#Light.BriContext) + [func (l *Light) Col(new color.Color) error](#Light.Col) + [func (l *Light) ColContext(ctx context.Context, new color.Color) error](#Light.ColContext) + [func (l *Light) Ct(new uint16) error](#Light.Ct) + [func (l *Light) CtContext(ctx context.Context, new uint16) error](#Light.CtContext) + [func (l *Light) Effect(new string) error](#Light.Effect) + [func (l *Light) EffectContext(ctx context.Context, new string) error](#Light.EffectContext) + [func (l *Light) Hue(new uint16) error](#Light.Hue) + [func (l *Light) HueContext(ctx context.Context, new uint16) error](#Light.HueContext) + [func (l *Light) IsOn() bool](#Light.IsOn) + [func (l *Light) Off() error](#Light.Off) + [func (l *Light) OffContext(ctx context.Context) error](#Light.OffContext) + [func (l *Light) On() error](#Light.On) + [func (l *Light) OnContext(ctx context.Context) error](#Light.OnContext) + [func (l *Light) Rename(new string) error](#Light.Rename) + [func (l *Light) RenameContext(ctx 
context.Context, new string) error](#Light.RenameContext) + [func (l *Light) Sat(new uint8) error](#Light.Sat) + [func (l *Light) SatContext(ctx context.Context, new uint8) error](#Light.SatContext) + [func (l *Light) SetState(s State) error](#Light.SetState) + [func (l *Light) SetStateContext(ctx context.Context, s State) error](#Light.SetStateContext) + [func (l *Light) TransitionTime(new uint16) error](#Light.TransitionTime) + [func (l *Light) TransitionTimeContext(ctx context.Context, new uint16) error](#Light.TransitionTimeContext) + [func (l *Light) Xy(new []float32) error](#Light.Xy) + [func (l *Light) XyContext(ctx context.Context, new []float32) error](#Light.XyContext) * [type NewLight](#NewLight) * [type NewSensor](#NewSensor) * [type PortalState](#PortalState) * [type Resourcelink](#Resourcelink) * [type Response](#Response) * [type Rule](#Rule) * [type RuleAction](#RuleAction) * [type Scene](#Scene) * + [func (s *Scene) Recall(id int) error](#Scene.Recall) + [func (s *Scene) RecallContext(ctx context.Context, id int) error](#Scene.RecallContext) * [type Schedule](#Schedule) * [type Sensor](#Sensor) * [type State](#State) * [type Stream](#Stream) * + [func (s *Stream) Active() bool](#Stream.Active) + [func (s *Stream) Owner() string](#Stream.Owner) * [type SwUpdate](#SwUpdate) * [type SwUpdate2](#SwUpdate2) * [type Whitelist](#Whitelist)

#### Examples [¶](#pkg-examples)

* [Bridge.CreateUser](#example-Bridge.CreateUser)

### Constants [¶](#pkg-constants)

This section is empty.

### Variables [¶](#pkg-variables)

This section is empty.

### Functions [¶](#pkg-functions)

#### func [ConvertRGBToXy](https://github.com/amimof/huego/blob/v1.2.1/light.go#L283) [¶](#ConvertRGBToXy) added in v1.2.1

```
func ConvertRGBToXy(newcolor [color](/image/color).[Color](/image/color#Color)) ([][float32](/builtin#float32), [uint8](/builtin#uint8))
```

ConvertRGBToXy converts a given RGB color to the xy color of the light.
Implemented as in <https://developers.meethue.com/develop/application-design-guidance/color-conversion-formulas-rgb-to-xy-and-back/>.

### Types [¶](#pkg-types)

#### type [APIError](https://github.com/amimof/huego/blob/v1.2.1/huego.go#L25) [¶](#APIError)

```
type APIError struct {
	Type        [int](/builtin#int)
	Address     [string](/builtin#string)
	Description [string](/builtin#string)
}
```

APIError defines the error response object returned from the bridge after an invalid API request.

#### func (*APIError) [Error](https://github.com/amimof/huego/blob/v1.2.1/huego.go#L50) [¶](#APIError.Error)

```
func (a *[APIError](#APIError)) Error() [string](/builtin#string)
```

Error returns an error string.

#### func (*APIError) [UnmarshalJSON](https://github.com/amimof/huego/blob/v1.2.1/huego.go#L37) [¶](#APIError.UnmarshalJSON)

```
func (a *[APIError](#APIError)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```

UnmarshalJSON makes sure that types are correct when unmarshalling. Implements package encoding/json.

#### type [APIResponse](https://github.com/amimof/huego/blob/v1.2.1/huego.go#L19) [¶](#APIResponse)

```
type APIResponse struct {
	Success map[[string](/builtin#string)]interface{} `json:"success,omitempty"`
	Error   *[APIError](#APIError)                    `json:"error,omitempty"`
}
```

APIResponse holds the response data returned from the bridge after a request has been made.
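The RGB-to-xy conversion linked above can be sketched end to end. The helper below is an illustrative reading of the published Philips formulas (sRGB gamma expansion, the Wide RGB D65 matrix, then projection to CIE xy chromaticity); it is not huego's own implementation, and it omits the brightness component that ConvertRGBToXy also returns:

```go
package main

import (
	"fmt"
	"math"
)

// rgbToXy sketches the Philips color-conversion formulas: expand the
// sRGB gamma, apply the Wide RGB D65 matrix, then project the XYZ
// tristimulus values to xy chromaticity. Inputs are 8-bit channels.
func rgbToXy(r, g, b uint8) (x, y float64) {
	gamma := func(c float64) float64 {
		if c > 0.04045 {
			return math.Pow((c+0.055)/1.055, 2.4)
		}
		return c / 12.92
	}
	R := gamma(float64(r) / 255)
	G := gamma(float64(g) / 255)
	B := gamma(float64(b) / 255)
	X := R*0.649926 + G*0.103455 + B*0.197109
	Y := R*0.234327 + G*0.743075 + B*0.022598
	Z := G*0.053077 + B*1.035763
	sum := X + Y + Z
	if sum == 0 {
		return 0, 0 // black has no defined chromaticity
	}
	return X / sum, Y / sum
}

func main() {
	x, y := rgbToXy(255, 255, 255)
	fmt.Printf("%.4f %.4f\n", x, y) // white lands near the D65 white point (~0.3127, ~0.3290)
}
```

The xy pair is what Light.Xy and Group.Xy accept; in practice you would call ConvertRGBToXy rather than reimplementing this.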
#### type [AutoInstall](https://github.com/amimof/huego/blob/v1.2.1/config.go#L70) [¶](#AutoInstall)

```
type AutoInstall struct {
	On         [bool](/builtin#bool) `json:"on,omitempty"`
	UpdateTime [string](/builtin#string) `json:"updatetime,omitempty"`
}
```

AutoInstall holds automatic update configuration.

#### type [Backup](https://github.com/amimof/huego/blob/v1.2.1/config.go#L84) [¶](#Backup)

```
type Backup struct {
	Status    [string](/builtin#string) `json:"backup,omitempty"`
	ErrorCode [int](/builtin#int) `json:"errorcode,omitempty"`
}
```

Backup holds configuration backup status information.

#### type [Bridge](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L15) [¶](#Bridge)

```
type Bridge struct {
	Host [string](/builtin#string) `json:"internalipaddress,omitempty"`
	User [string](/builtin#string)
	ID   [string](/builtin#string) `json:"id,omitempty"`
}
```

Bridge exposes a hardware bridge through a struct.

#### func [Discover](https://github.com/amimof/huego/blob/v1.2.1/huego.go#L245) [¶](#Discover)

```
func Discover() (*[Bridge](#Bridge), [error](/builtin#error))
```

Discover performs a discovery on the network looking for bridges using the <https://www.meethue.com/api/nupnp> service. Discover uses DiscoverAll() but only returns the first instance in the array of bridges, if any.

#### func [DiscoverAll](https://github.com/amimof/huego/blob/v1.2.1/huego.go#L204) [¶](#DiscoverAll)

```
func DiscoverAll() ([][Bridge](#Bridge), [error](/builtin#error))
```

DiscoverAll performs a discovery on the network looking for bridges using the <https://www.meethue.com/api/nupnp> service. DiscoverAll returns a list of Bridge objects.
#### func [DiscoverAllContext](https://github.com/amimof/huego/blob/v1.2.1/huego.go#L210) [¶](#DiscoverAllContext) added in v1.1.0

```
func DiscoverAllContext(ctx [context](/context).[Context](/context#Context)) ([][Bridge](#Bridge), [error](/builtin#error))
```

DiscoverAllContext performs a discovery on the network looking for bridges using the <https://www.meethue.com/api/nupnp> service. DiscoverAllContext returns a list of Bridge objects.

#### func [DiscoverContext](https://github.com/amimof/huego/blob/v1.2.1/huego.go#L251) [¶](#DiscoverContext) added in v1.1.0

```
func DiscoverContext(ctx [context](/context).[Context](/context#Context)) (*[Bridge](#Bridge), [error](/builtin#error))
```

DiscoverContext performs a discovery on the network looking for bridges using the <https://www.meethue.com/api/nupnp> service. DiscoverContext uses DiscoverAllContext() but only returns the first instance in the array of bridges, if any.

#### func [New](https://github.com/amimof/huego/blob/v1.2.1/huego.go#L271) [¶](#New)

```
func New(h, u [string](/builtin#string)) *[Bridge](#Bridge)
```

New instantiates and returns a new Bridge. New accepts the hostname/IP address of the bridge (h) as well as a username (u). h may or may not be prefixed with http(s)://, for example <http://192.168.1.20/> or 192.168.1.20. u is a username known to the bridge. Use Discover() and CreateUser() to create a user.
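Since h may arrive with or without a scheme or a trailing slash, a caller can normalize it up front before comparing or logging bridge addresses. The helper below is purely illustrative of the accepted forms and is not part of huego:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeHost illustrates the host forms New() accepts: a bare IP
// or hostname, optionally prefixed with http:// or https:// and
// optionally carrying a trailing slash. (Hypothetical helper, not
// huego's actual implementation.)
func normalizeHost(h string) string {
	h = strings.TrimSuffix(h, "/")
	if !strings.HasPrefix(h, "http://") && !strings.HasPrefix(h, "https://") {
		h = "http://" + h
	}
	return h
}

func main() {
	fmt.Println(normalizeHost("192.168.1.20"))       // prints http://192.168.1.20
	fmt.Println(normalizeHost("http://192.168.1.20/")) // prints http://192.168.1.20
}
```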
#### func (*Bridge) [CreateGroup](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L428) [¶](#Bridge.CreateGroup)

```
func (b *[Bridge](#Bridge)) CreateGroup(g [Group](#Group)) (*[Response](#Response), [error](/builtin#error))
```

CreateGroup creates one new group with attributes defined by g.

#### func (*Bridge) [CreateGroupContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L433) [¶](#Bridge.CreateGroupContext) added in v1.1.0

```
func (b *[Bridge](#Bridge)) CreateGroupContext(ctx [context](/context).[Context](/context#Context), g [Group](#Group)) (*[Response](#Response), [error](/builtin#error))
```

CreateGroupContext creates one new group with attributes defined by g.

#### func (*Bridge) [CreateResourcelink](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L875) [¶](#Bridge.CreateResourcelink)

```
func (b *[Bridge](#Bridge)) CreateResourcelink(s *[Resourcelink](#Resourcelink)) (*[Response](#Response), [error](/builtin#error))
```

CreateResourcelink creates one new resourcelink on the bridge.

#### func (*Bridge) [CreateResourcelinkContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L880) [¶](#Bridge.CreateResourcelinkContext) added in v1.1.0

```
func (b *[Bridge](#Bridge)) CreateResourcelinkContext(ctx [context](/context).[Context](/context#Context), s *[Resourcelink](#Resourcelink)) (*[Response](#Response), [error](/builtin#error))
```

CreateResourcelinkContext creates one new resourcelink on the bridge.

#### func (*Bridge) [CreateRule](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1059) [¶](#Bridge.CreateRule)

```
func (b *[Bridge](#Bridge)) CreateRule(s *[Rule](#Rule)) (*[Response](#Response), [error](/builtin#error))
```

CreateRule creates one rule with attributes defined in s.

#### func (*Bridge) [CreateRuleContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1064) [¶](#Bridge.CreateRuleContext) added in v1.1.0

```
func (b *[Bridge](#Bridge)) CreateRuleContext(ctx [context](/context).[Context](/context#Context), s *[Rule](#Rule)) (*[Response](#Response), [error](/builtin#error))
```

CreateRuleContext creates one rule with attributes defined in s.

#### func (*Bridge) [CreateScene](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1364) [¶](#Bridge.CreateScene)

```
func (b *[Bridge](#Bridge)) CreateScene(s *[Scene](#Scene)) (*[Response](#Response), [error](/builtin#error))
```

CreateScene creates one new scene with its attributes defined in s.

#### func (*Bridge) [CreateSceneContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1369) [¶](#Bridge.CreateSceneContext) added in v1.1.0

```
func (b *[Bridge](#Bridge)) CreateSceneContext(ctx [context](/context).[Context](/context#Context), s *[Scene](#Scene)) (*[Response](#Response), [error](/builtin#error))
```

CreateSceneContext creates one new scene with its attributes defined in s.

#### func (*Bridge) [CreateSchedule](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1509) [¶](#Bridge.CreateSchedule)

```
func (b *[Bridge](#Bridge)) CreateSchedule(s *[Schedule](#Schedule)) (*[Response](#Response), [error](/builtin#error))
```

CreateSchedule creates one schedule and sets its attributes defined in s.

#### func (*Bridge) [CreateScheduleContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1514) [¶](#Bridge.CreateScheduleContext) added in v1.1.0

```
func (b *[Bridge](#Bridge)) CreateScheduleContext(ctx [context](/context).[Context](/context#Context), s *[Schedule](#Schedule)) (*[Response](#Response), [error](/builtin#error))
```

CreateScheduleContext creates one schedule and sets its attributes defined in s.

#### func (*Bridge) [CreateSensor](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1692) [¶](#Bridge.CreateSensor)

```
func (b *[Bridge](#Bridge)) CreateSensor(s *[Sensor](#Sensor)) (*[Response](#Response), [error](/builtin#error))
```

CreateSensor creates one new sensor.

#### func (*Bridge) [CreateSensorContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1697) [¶](#Bridge.CreateSensorContext) added in v1.1.0

```
func (b *[Bridge](#Bridge)) CreateSensorContext(ctx [context](/context).[Context](/context#Context), s *[Sensor](#Sensor)) (*[Response](#Response), [error](/builtin#error))
```

CreateSensorContext creates one new sensor.

#### func (*Bridge) [CreateUser](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L90) [¶](#Bridge.CreateUser)

```
func (b *[Bridge](#Bridge)) CreateUser(n [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error))
```

CreateUser creates a user by adding n to the list of whitelists in the bridge. The link button on the bridge must have been pressed before calling CreateUser.

Example [¶](#example-Bridge.CreateUser)

```
bridge, _ := Discover()
user, err := bridge.CreateUser("my awesome hue app") // Link button needs to be pressed
if err != nil {
	fmt.Printf("Error creating user: %s", err.Error())
}
bridge = bridge.Login(user)
light, _ := bridge.GetLight(1)
light.Off()
```

```
Output:
```

#### func (*Bridge) [CreateUserContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L96) [¶](#Bridge.CreateUserContext) added in v1.1.0

```
func (b *[Bridge](#Bridge)) CreateUserContext(ctx [context](/context).[Context](/context#Context), n [string](/builtin#string)) ([string](/builtin#string), [error](/builtin#error))
```

CreateUserContext creates a user by adding n to the list of whitelists in the bridge. The link button on the bridge must have been pressed before calling CreateUser.

#### func (*Bridge) [CreateUserWithClientKey](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L107) [¶](#Bridge.CreateUserWithClientKey) added in v1.2.0

```
func (b *[Bridge](#Bridge)) CreateUserWithClientKey(deviceType [string](/builtin#string)) (*[Whitelist](#Whitelist), [error](/builtin#error))
```

CreateUserWithClientKey creates a user by adding deviceType to the list of whitelisted users on the bridge. The link button on the bridge must have been pressed before calling CreateUser.

#### func (*Bridge) [CreateUserWithClientKeyContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L113) [¶](#Bridge.CreateUserWithClientKeyContext) added in v1.2.0

```
func (b *[Bridge](#Bridge)) CreateUserWithClientKeyContext(ctx [context](/context).[Context](/context#Context), deviceType [string](/builtin#string)) (*[Whitelist](#Whitelist), [error](/builtin#error))
```

CreateUserWithClientKeyContext creates a user by adding deviceType to the list of whitelisted users on the bridge. The link button on the bridge must have been pressed before calling CreateUser.

#### func (*Bridge) [DeleteGroup](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L466) [¶](#Bridge.DeleteGroup)

```
func (b *[Bridge](#Bridge)) DeleteGroup(i [int](/builtin#int)) [error](/builtin#error)
```

DeleteGroup deletes one group with the id of i.

#### func (*Bridge) [DeleteGroupContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L471) [¶](#Bridge.DeleteGroupContext) added in v1.1.0

```
func (b *[Bridge](#Bridge)) DeleteGroupContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int)) [error](/builtin#error)
```

DeleteGroupContext deletes one group with the id of i.

#### func (*Bridge) [DeleteLight](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L727) [¶](#Bridge.DeleteLight)

```
func (b *[Bridge](#Bridge)) DeleteLight(i [int](/builtin#int)) [error](/builtin#error)
```

DeleteLight deletes one light from the bridge.

#### func (*Bridge) [DeleteLightContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L732) [¶](#Bridge.DeleteLightContext) added in v1.1.0

```
func (b *[Bridge](#Bridge)) DeleteLightContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int)) [error](/builtin#error)
```

DeleteLightContext deletes one light from the bridge.

#### func (*Bridge) [DeleteResourcelink](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L951) [¶](#Bridge.DeleteResourcelink)

```
func (b *[Bridge](#Bridge)) DeleteResourcelink(i [int](/builtin#int)) [error](/builtin#error)
```

DeleteResourcelink deletes one resourcelink with the id of i.

#### func (*Bridge) [DeleteResourcelinkContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L956) [¶](#Bridge.DeleteResourcelinkContext) added in v1.1.0

```
func (b *[Bridge](#Bridge)) DeleteResourcelinkContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int)) [error](/builtin#error)
```

DeleteResourcelinkContext deletes one resourcelink with the id of i.

#### func (*Bridge) [DeleteRule](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1136) [¶](#Bridge.DeleteRule)

```
func (b *[Bridge](#Bridge)) DeleteRule(i [int](/builtin#int)) [error](/builtin#error)
```

DeleteRule deletes one rule from the bridge.

#### func (*Bridge) [DeleteRuleContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1141) [¶](#Bridge.DeleteRuleContext) added in v1.1.0

```
func (b *[Bridge](#Bridge)) DeleteRuleContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int)) [error](/builtin#error)
```

DeleteRuleContext deletes one rule from the bridge.

#### func (*Bridge) [DeleteScene](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1402) [¶](#Bridge.DeleteScene)

```
func (b *[Bridge](#Bridge)) DeleteScene(id [string](/builtin#string)) [error](/builtin#error)
```

DeleteScene deletes one scene from the bridge.

#### func (*Bridge) [DeleteSceneContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1407) [¶](#Bridge.DeleteSceneContext) added in v1.1.0

```
func (b *[Bridge](#Bridge)) DeleteSceneContext(ctx [context](/context).[Context](/context#Context), id [string](/builtin#string)) [error](/builtin#error)
```

DeleteSceneContext deletes one scene from the bridge.

#### func (*Bridge) [DeleteSchedule](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1586) [¶](#Bridge.DeleteSchedule)

```
func (b *[Bridge](#Bridge)) DeleteSchedule(i [int](/builtin#int)) [error](/builtin#error)
```

DeleteSchedule deletes one schedule from the bridge by its id of i.

#### func (*Bridge) [DeleteScheduleContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1591) [¶](#Bridge.DeleteScheduleContext) added in v1.1.0

```
func (b *[Bridge](#Bridge)) DeleteScheduleContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int)) [error](/builtin#error)
```

DeleteScheduleContext deletes one schedule from the bridge by its id of i.

#### func (*Bridge) [DeleteSensor](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1852) [¶](#Bridge.DeleteSensor)

```
func (b *[Bridge](#Bridge)) DeleteSensor(i [int](/builtin#int)) [error](/builtin#error)
```

DeleteSensor deletes one sensor from the bridge.

#### func (*Bridge) [DeleteSensorContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1857) [¶](#Bridge.DeleteSensorContext) added in v1.1.0

```
func (b *[Bridge](#Bridge)) DeleteSensorContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int)) [error](/builtin#error)
```

DeleteSensorContext deletes one sensor from the bridge.

#### func (*Bridge) [DeleteUser](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L213) [¶](#Bridge.DeleteUser)

```
func (b *[Bridge](#Bridge)) DeleteUser(n [string](/builtin#string)) [error](/builtin#error)
```

DeleteUser removes a whitelist item from whitelists on the bridge.

#### func (*Bridge) [DeleteUserContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L218) [¶](#Bridge.DeleteUserContext) added in v1.1.0

```
func (b *[Bridge](#Bridge)) DeleteUserContext(ctx [context](/context).[Context](/context#Context), n [string](/builtin#string)) [error](/builtin#error)
```

DeleteUserContext removes a whitelist item from whitelists on the bridge.

#### func (*Bridge) [FindLights](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L650) [¶](#Bridge.FindLights)

```
func (b *[Bridge](#Bridge)) FindLights() (*[Response](#Response), [error](/builtin#error))
```

FindLights starts a search for new
lights on the bridge. Use GetNewLights() to verify if new lights have been detected. #### func (*Bridge) [FindLightsContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L656) [¶](#Bridge.FindLightsContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) FindLightsContext(ctx [context](/context).[Context](/context#Context)) (*[Response](#Response), [error](/builtin#error)) ``` FindLightsContext starts a search for new lights on the bridge. Use GetNewLights() to verify if new lights have been detected. #### func (*Bridge) [FindSensors](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1732) [¶](#Bridge.FindSensors) ``` func (b *[Bridge](#Bridge)) FindSensors() (*[Response](#Response), [error](/builtin#error)) ``` FindSensors starts a search for new sensors. Use GetNewSensors() to verify if new sensors have been discovered in the bridge. #### func (*Bridge) [FindSensorsContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1738) [¶](#Bridge.FindSensorsContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) FindSensorsContext(ctx [context](/context).[Context](/context#Context)) (*[Response](#Response), [error](/builtin#error)) ``` FindSensorsContext starts a search for new sensors. Use GetNewSensorsContext() to verify if new sensors have been discovered in the bridge. #### func (*Bridge) [GetCapabilities](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1926) [¶](#Bridge.GetCapabilities) ``` func (b *[Bridge](#Bridge)) GetCapabilities() (*[Capabilities](#Capabilities), [error](/builtin#error)) ``` GetCapabilities returns a list of capabilities of resources supported in the bridge.
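The search-then-check flow can be sketched as follows, in the style of the CreateUser example. The 45-second pause is an assumption (Hue bridges typically scan for about 40 seconds), and reading the discovered ids off a Lights field of NewLight is assumed from that type:

```
bridge, _ := Discover()
bridge = bridge.Login("my-user-token") // placeholder token
if _, err := bridge.FindLights(); err != nil {
	fmt.Printf("Error starting scan: %s", err.Error())
	return
}
time.Sleep(45 * time.Second) // let the bridge finish scanning
newLights, err := bridge.GetNewLights()
if err != nil {
	fmt.Printf("Error fetching scan results: %s", err.Error())
	return
}
fmt.Printf("Found %d new light(s)\n", len(newLights.Lights))
```

The same pattern applies to sensors via FindSensors() and GetNewSensors().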
#### func (*Bridge) [GetCapabilitiesContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1931) [¶](#Bridge.GetCapabilitiesContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) GetCapabilitiesContext(ctx [context](/context).[Context](/context#Context)) (*[Capabilities](#Capabilities), [error](/builtin#error)) ``` GetCapabilitiesContext returns a list of capabilities of resources supported in the bridge. #### func (*Bridge) [GetConfig](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L52) [¶](#Bridge.GetConfig) ``` func (b *[Bridge](#Bridge)) GetConfig() (*[Config](#Config), [error](/builtin#error)) ``` GetConfig returns the bridge configuration #### func (*Bridge) [GetConfigContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L57) [¶](#Bridge.GetConfigContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) GetConfigContext(ctx [context](/context).[Context](/context#Context)) (*[Config](#Config), [error](/builtin#error)) ``` GetConfigContext returns the bridge configuration #### func (*Bridge) [GetFullState](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L244) [¶](#Bridge.GetFullState) ``` func (b *[Bridge](#Bridge)) GetFullState() (map[[string](/builtin#string)]interface{}, [error](/builtin#error)) ``` GetFullState returns the entire bridge configuration. #### func (*Bridge) [GetFullStateContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L249) [¶](#Bridge.GetFullStateContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) GetFullStateContext(ctx [context](/context).[Context](/context#Context)) (map[[string](/builtin#string)]interface{}, [error](/builtin#error)) ``` GetFullStateContext returns the entire bridge configuration. 
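Fetching the configuration is a cheap way to confirm that a username is valid; a minimal sketch (the address and token are placeholders):

```
bridge := New("192.168.1.2", "my-user-token")
config, err := bridge.GetConfig()
if err != nil {
	fmt.Printf("Error reading config: %s", err.Error())
	return
}
fmt.Printf("%s (API %s, software %s)\n", config.Name, config.APIVersion, config.SwVersion)
```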
#### func (*Bridge) [GetGroup](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L318) [¶](#Bridge.GetGroup) ``` func (b *[Bridge](#Bridge)) GetGroup(i [int](/builtin#int)) (*[Group](#Group), [error](/builtin#error)) ``` GetGroup returns one group known to the bridge by its id #### func (*Bridge) [GetGroupContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L323) [¶](#Bridge.GetGroupContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) GetGroupContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int)) (*[Group](#Group), [error](/builtin#error)) ``` GetGroupContext returns one group known to the bridge by its id #### func (*Bridge) [GetGroups](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L278) [¶](#Bridge.GetGroups) ``` func (b *[Bridge](#Bridge)) GetGroups() ([][Group](#Group), [error](/builtin#error)) ``` GetGroups returns all groups known to the bridge #### func (*Bridge) [GetGroupsContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L283) [¶](#Bridge.GetGroupsContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) GetGroupsContext(ctx [context](/context).[Context](/context#Context)) ([][Group](#Group), [error](/builtin#error)) ``` GetGroupsContext returns all groups known to the bridge #### func (*Bridge) [GetLight](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L543) [¶](#Bridge.GetLight) ``` func (b *[Bridge](#Bridge)) GetLight(i [int](/builtin#int)) (*[Light](#Light), [error](/builtin#error)) ``` GetLight returns one light with the id of i #### func (*Bridge) [GetLightContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L548) [¶](#Bridge.GetLightContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) GetLightContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int)) (*[Light](#Light), [error](/builtin#error)) ``` GetLightContext returns one light with the id of i #### func (*Bridge) [GetLights](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L503) 
[¶](#Bridge.GetLights) ``` func (b *[Bridge](#Bridge)) GetLights() ([][Light](#Light), [error](/builtin#error)) ``` GetLights returns all lights known to the bridge #### func (*Bridge) [GetLightsContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L508) [¶](#Bridge.GetLightsContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) GetLightsContext(ctx [context](/context).[Context](/context#Context)) ([][Light](#Light), [error](/builtin#error)) ``` GetLightsContext returns all lights known to the bridge #### func (*Bridge) [GetNewLights](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L685) [¶](#Bridge.GetNewLights) ``` func (b *[Bridge](#Bridge)) GetNewLights() (*[NewLight](#NewLight), [error](/builtin#error)) ``` GetNewLights returns a list of lights that were discovered last time FindLights() was executed. #### func (*Bridge) [GetNewLightsContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L690) [¶](#Bridge.GetNewLightsContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) GetNewLightsContext(ctx [context](/context).[Context](/context#Context)) (*[NewLight](#NewLight), [error](/builtin#error)) ``` GetNewLightsContext returns a list of lights that were discovered last time FindLights() was executed. #### func (*Bridge) [GetNewSensors](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1767) [¶](#Bridge.GetNewSensors) ``` func (b *[Bridge](#Bridge)) GetNewSensors() (*[NewSensor](#NewSensor), [error](/builtin#error)) ``` GetNewSensors returns a list of sensors that were discovered last time FindSensors() was executed.
#### func (*Bridge) [GetNewSensorsContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1772) [¶](#Bridge.GetNewSensorsContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) GetNewSensorsContext(ctx [context](/context).[Context](/context#Context)) (*[NewSensor](#NewSensor), [error](/builtin#error)) ``` GetNewSensorsContext returns a list of sensors that were discovered last time FindSensors() was executed. #### func (*Bridge) [GetResourcelink](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L844) [¶](#Bridge.GetResourcelink) ``` func (b *[Bridge](#Bridge)) GetResourcelink(i [int](/builtin#int)) (*[Resourcelink](#Resourcelink), [error](/builtin#error)) ``` GetResourcelink returns one resourcelink by its id defined by i #### func (*Bridge) [GetResourcelinkContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L849) [¶](#Bridge.GetResourcelinkContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) GetResourcelinkContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int)) (*[Resourcelink](#Resourcelink), [error](/builtin#error)) ``` GetResourcelinkContext returns one resourcelink by its id defined by i #### func (*Bridge) [GetResourcelinks](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L804) [¶](#Bridge.GetResourcelinks) ``` func (b *[Bridge](#Bridge)) GetResourcelinks() ([]*[Resourcelink](#Resourcelink), [error](/builtin#error)) ``` GetResourcelinks returns all resourcelinks known to the bridge #### func (*Bridge) [GetResourcelinksContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L809) [¶](#Bridge.GetResourcelinksContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) GetResourcelinksContext(ctx [context](/context).[Context](/context#Context)) ([]*[Resourcelink](#Resourcelink), [error](/builtin#error)) ``` GetResourcelinksContext returns all resourcelinks known to the bridge #### func (*Bridge) [GetRule](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1028) [¶](#Bridge.GetRule) ``` func
(b *[Bridge](#Bridge)) GetRule(i [int](/builtin#int)) (*[Rule](#Rule), [error](/builtin#error)) ``` GetRule returns one rule by its id of i #### func (*Bridge) [GetRuleContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1033) [¶](#Bridge.GetRuleContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) GetRuleContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int)) (*[Rule](#Rule), [error](/builtin#error)) ``` GetRuleContext returns one rule by its id of i #### func (*Bridge) [GetRules](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L988) [¶](#Bridge.GetRules) ``` func (b *[Bridge](#Bridge)) GetRules() ([]*[Rule](#Rule), [error](/builtin#error)) ``` GetRules returns all rules known to the bridge #### func (*Bridge) [GetRulesContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L993) [¶](#Bridge.GetRulesContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) GetRulesContext(ctx [context](/context).[Context](/context#Context)) ([]*[Rule](#Rule), [error](/builtin#error)) ``` GetRulesContext returns all rules known to the bridge #### func (*Bridge) [GetScene](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1206) [¶](#Bridge.GetScene) ``` func (b *[Bridge](#Bridge)) GetScene(i [string](/builtin#string)) (*[Scene](#Scene), [error](/builtin#error)) ``` GetScene returns one scene by its id of i #### func (*Bridge) [GetSceneContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1211) [¶](#Bridge.GetSceneContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) GetSceneContext(ctx [context](/context).[Context](/context#Context), i [string](/builtin#string)) (*[Scene](#Scene), [error](/builtin#error)) ``` GetSceneContext returns one scene by its id of i #### func (*Bridge) [GetScenes](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1173) [¶](#Bridge.GetScenes) ``` func (b *[Bridge](#Bridge)) GetScenes() ([][Scene](#Scene), [error](/builtin#error)) ``` GetScenes returns all scenes known to the bridge 
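Since scenes are addressed by string ids rather than ints, listing them first is the usual way to find the id to pass to GetScene or RecallScene; a sketch, assuming the ID and Name fields of the Scene type:

```
scenes, err := bridge.GetScenes()
if err != nil {
	fmt.Printf("Error listing scenes: %s", err.Error())
	return
}
for _, scene := range scenes {
	fmt.Printf("%s  %s\n", scene.ID, scene.Name)
}
```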
#### func (*Bridge) [GetScenesContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1178) [¶](#Bridge.GetScenesContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) GetScenesContext(ctx [context](/context).[Context](/context#Context)) ([][Scene](#Scene), [error](/builtin#error)) ``` GetScenesContext returns all scenes known to the bridge #### func (*Bridge) [GetSchedule](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1478) [¶](#Bridge.GetSchedule) ``` func (b *[Bridge](#Bridge)) GetSchedule(i [int](/builtin#int)) (*[Schedule](#Schedule), [error](/builtin#error)) ``` GetSchedule returns one schedule by id defined in i #### func (*Bridge) [GetScheduleContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1483) [¶](#Bridge.GetScheduleContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) GetScheduleContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int)) (*[Schedule](#Schedule), [error](/builtin#error)) ``` GetScheduleContext returns one schedule by id defined in i #### func (*Bridge) [GetSchedules](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1438) [¶](#Bridge.GetSchedules) ``` func (b *[Bridge](#Bridge)) GetSchedules() ([]*[Schedule](#Schedule), [error](/builtin#error)) ``` GetSchedules returns all schedules known to the bridge #### func (*Bridge) [GetSchedulesContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1443) [¶](#Bridge.GetSchedulesContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) GetSchedulesContext(ctx [context](/context).[Context](/context#Context)) ([]*[Schedule](#Schedule), [error](/builtin#error)) ``` GetSchedulesContext returns all schedules known to the bridge #### func (*Bridge) [GetSensor](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1660) [¶](#Bridge.GetSensor) ``` func (b *[Bridge](#Bridge)) GetSensor(i [int](/builtin#int)) (*[Sensor](#Sensor), [error](/builtin#error)) ``` GetSensor returns one sensor by its id of i #### func (*Bridge) 
[GetSensorContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1665) [¶](#Bridge.GetSensorContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) GetSensorContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int)) (*[Sensor](#Sensor), [error](/builtin#error)) ``` GetSensorContext returns one sensor by its id of i #### func (*Bridge) [GetSensors](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1623) [¶](#Bridge.GetSensors) ``` func (b *[Bridge](#Bridge)) GetSensors() ([][Sensor](#Sensor), [error](/builtin#error)) ``` GetSensors returns all sensors known to the bridge #### func (*Bridge) [GetSensorsContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1628) [¶](#Bridge.GetSensorsContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) GetSensorsContext(ctx [context](/context).[Context](/context#Context)) ([][Sensor](#Sensor), [error](/builtin#error)) ``` GetSensorsContext returns all sensors known to the bridge #### func (*Bridge) [GetUsers](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L166) [¶](#Bridge.GetUsers) ``` func (b *[Bridge](#Bridge)) GetUsers() ([][Whitelist](#Whitelist), [error](/builtin#error)) ``` GetUsers returns a list of whitelists from the bridge #### func (*Bridge) [IdentifyLight](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L575) [¶](#Bridge.IdentifyLight) added in v1.2.1 ``` func (b *[Bridge](#Bridge)) IdentifyLight(i [int](/builtin#int)) (*[Response](#Response), [error](/builtin#error)) ``` IdentifyLight allows identifying a light #### func (*Bridge) [IdentifyLightContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L580) [¶](#Bridge.IdentifyLightContext) added in v1.2.1 ``` func (b *[Bridge](#Bridge)) IdentifyLightContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int)) (*[Response](#Response), [error](/builtin#error)) ``` IdentifyLightContext allows identifying a light #### func (*Bridge) 
[Login](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L40) [¶](#Bridge.Login) ``` func (b *[Bridge](#Bridge)) Login(u [string](/builtin#string)) *[Bridge](#Bridge) ``` Login calls New(), passing the Host of this Bridge instance and the username u. #### func (*Bridge) [RecallScene](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1323) [¶](#Bridge.RecallScene) ``` func (b *[Bridge](#Bridge)) RecallScene(id [string](/builtin#string), gid [int](/builtin#int)) (*[Response](#Response), [error](/builtin#error)) ``` RecallScene will recall a scene in a group identified by both scene and group identifiers #### func (*Bridge) [RecallSceneContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1328) [¶](#Bridge.RecallSceneContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) RecallSceneContext(ctx [context](/context).[Context](/context#Context), id [string](/builtin#string), gid [int](/builtin#int)) (*[Response](#Response), [error](/builtin#error)) ``` RecallSceneContext will recall a scene in a group identified by both scene and group identifiers #### func (*Bridge) [SetGroupState](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L350) [¶](#Bridge.SetGroupState) ``` func (b *[Bridge](#Bridge)) SetGroupState(i [int](/builtin#int), l [State](#State)) (*[Response](#Response), [error](/builtin#error)) ``` SetGroupState allows for setting the state of one group, controlling the state of all lights in that group. #### func (*Bridge) [SetGroupStateContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L355) [¶](#Bridge.SetGroupStateContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) SetGroupStateContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int), l [State](#State)) (*[Response](#Response), [error](/builtin#error)) ``` SetGroupStateContext allows for setting the state of one group, controlling the state of all lights in that group.
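A sketch of driving a whole group, either directly with a State or by recalling a stored scene (the group id, scene id, and brightness value are placeholders):

```
// Turn every light in group 1 on at roughly 80% brightness
if _, err := bridge.SetGroupState(1, State{On: true, Bri: 200}); err != nil {
	fmt.Printf("Error setting group state: %s", err.Error())
}
// Or recall a stored scene into the same group
if _, err := bridge.RecallScene("scene-id", 1); err != nil {
	fmt.Printf("Error recalling scene: %s", err.Error())
}
```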
#### func (*Bridge) [SetLightState](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L608) [¶](#Bridge.SetLightState) ``` func (b *[Bridge](#Bridge)) SetLightState(i [int](/builtin#int), l [State](#State)) (*[Response](#Response), [error](/builtin#error)) ``` SetLightState allows for controlling one light's state #### func (*Bridge) [SetLightStateContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L613) [¶](#Bridge.SetLightStateContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) SetLightStateContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int), l [State](#State)) (*[Response](#Response), [error](/builtin#error)) ``` SetLightStateContext allows for controlling one light's state #### func (*Bridge) [SetSceneLightState](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1283) [¶](#Bridge.SetSceneLightState) ``` func (b *[Bridge](#Bridge)) SetSceneLightState(id [string](/builtin#string), iid [int](/builtin#int), l *[State](#State)) (*[Response](#Response), [error](/builtin#error)) ``` SetSceneLightState allows for setting the state of a light in a scene. SetSceneLightState accepts the id of the scene, the id of a light associated with the scene and the state object. #### func (*Bridge) [SetSceneLightStateContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1289) [¶](#Bridge.SetSceneLightStateContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) SetSceneLightStateContext(ctx [context](/context).[Context](/context#Context), id [string](/builtin#string), iid [int](/builtin#int), l *[State](#State)) (*[Response](#Response), [error](/builtin#error)) ``` SetSceneLightStateContext allows for setting the state of a light in a scene. SetSceneLightStateContext accepts the id of the scene, the id of a light associated with the scene and the state object. 
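Overriding a single light inside a scene can be sketched as follows (the scene id, light id, and state values are placeholders):

```
warm := State{On: true, Bri: 180, Ct: 400} // Ct is a color temperature value
if _, err := bridge.SetSceneLightState("scene-id", 3, &warm); err != nil {
	fmt.Printf("Error updating scene light state: %s", err.Error())
}
```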
#### func (*Bridge) [UpdateConfig](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L175) [¶](#Bridge.UpdateConfig) ``` func (b *[Bridge](#Bridge)) UpdateConfig(c *[Config](#Config)) (*[Response](#Response), [error](/builtin#error)) ``` UpdateConfig updates the bridge configuration with c #### func (*Bridge) [UpdateConfigContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L180) [¶](#Bridge.UpdateConfigContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) UpdateConfigContext(ctx [context](/context).[Context](/context#Context), c *[Config](#Config)) (*[Response](#Response), [error](/builtin#error)) ``` UpdateConfigContext updates the bridge configuration with c #### func (*Bridge) [UpdateGroup](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L389) [¶](#Bridge.UpdateGroup) ``` func (b *[Bridge](#Bridge)) UpdateGroup(i [int](/builtin#int), l [Group](#Group)) (*[Response](#Response), [error](/builtin#error)) ``` UpdateGroup updates one group known to the bridge #### func (*Bridge) [UpdateGroupContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L394) [¶](#Bridge.UpdateGroupContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) UpdateGroupContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int), l [Group](#Group)) (*[Response](#Response), [error](/builtin#error)) ``` UpdateGroupContext updates one group known to the bridge #### func (*Bridge) [UpdateLight](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L759) [¶](#Bridge.UpdateLight) ``` func (b *[Bridge](#Bridge)) UpdateLight(i [int](/builtin#int), light [Light](#Light)) (*[Response](#Response), [error](/builtin#error)) ``` UpdateLight updates one light's attributes and state properties #### func (*Bridge) [UpdateLightContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L764) [¶](#Bridge.UpdateLightContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) UpdateLightContext(ctx [context](/context).[Context](/context#Context), i 
[int](/builtin#int), light [Light](#Light)) (*[Response](#Response), [error](/builtin#error)) ``` UpdateLightContext updates one light's attributes and state properties #### func (*Bridge) [UpdateResourcelink](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L914) [¶](#Bridge.UpdateResourcelink) ``` func (b *[Bridge](#Bridge)) UpdateResourcelink(i [int](/builtin#int), resourcelink *[Resourcelink](#Resourcelink)) (*[Response](#Response), [error](/builtin#error)) ``` UpdateResourcelink updates one resourcelink with attributes defined by resourcelink #### func (*Bridge) [UpdateResourcelinkContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L919) [¶](#Bridge.UpdateResourcelinkContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) UpdateResourcelinkContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int), resourcelink *[Resourcelink](#Resourcelink)) (*[Response](#Response), [error](/builtin#error)) ``` UpdateResourcelinkContext updates one resourcelink with attributes defined by resourcelink #### func (*Bridge) [UpdateRule](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1098) [¶](#Bridge.UpdateRule) ``` func (b *[Bridge](#Bridge)) UpdateRule(i [int](/builtin#int), rule *[Rule](#Rule)) (*[Response](#Response), [error](/builtin#error)) ``` UpdateRule updates one rule by its id of i and rule configuration of rule #### func (*Bridge) [UpdateRuleContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1103) [¶](#Bridge.UpdateRuleContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) UpdateRuleContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int), rule *[Rule](#Rule)) (*[Response](#Response), [error](/builtin#error)) ``` UpdateRuleContext updates one rule by its id of i and rule configuration of rule #### func (*Bridge) [UpdateScene](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1244) [¶](#Bridge.UpdateScene) ``` func (b *[Bridge](#Bridge)) UpdateScene(id 
[string](/builtin#string), s *[Scene](#Scene)) (*[Response](#Response), [error](/builtin#error)) ``` UpdateScene updates one scene and its attributes by its id #### func (*Bridge) [UpdateSceneContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1249) [¶](#Bridge.UpdateSceneContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) UpdateSceneContext(ctx [context](/context).[Context](/context#Context), id [string](/builtin#string), s *[Scene](#Scene)) (*[Response](#Response), [error](/builtin#error)) ``` UpdateSceneContext updates one scene and its attributes by its id #### func (*Bridge) [UpdateSchedule](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1548) [¶](#Bridge.UpdateSchedule) ``` func (b *[Bridge](#Bridge)) UpdateSchedule(i [int](/builtin#int), schedule *[Schedule](#Schedule)) (*[Response](#Response), [error](/builtin#error)) ``` UpdateSchedule updates one schedule by its id of i and attributes by schedule #### func (*Bridge) [UpdateScheduleContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1553) [¶](#Bridge.UpdateScheduleContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) UpdateScheduleContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int), schedule *[Schedule](#Schedule)) (*[Response](#Response), [error](/builtin#error)) ``` UpdateScheduleContext updates one schedule by its id of i and attributes by schedule #### func (*Bridge) [UpdateSensor](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1814) [¶](#Bridge.UpdateSensor) ``` func (b *[Bridge](#Bridge)) UpdateSensor(i [int](/builtin#int), sensor *[Sensor](#Sensor)) (*[Response](#Response), [error](/builtin#error)) ``` UpdateSensor updates one sensor by its id and attributes by sensor #### func (*Bridge) [UpdateSensorConfig](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1883) [¶](#Bridge.UpdateSensorConfig) ``` func (b *[Bridge](#Bridge)) UpdateSensorConfig(i [int](/builtin#int), c interface{}) (*[Response](#Response),
[error](/builtin#error)) ``` UpdateSensorConfig updates the configuration of one sensor. The allowed configuration parameters depend on the sensor type #### func (*Bridge) [UpdateSensorConfigContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1888) [¶](#Bridge.UpdateSensorConfigContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) UpdateSensorConfigContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int), c interface{}) (*[Response](#Response), [error](/builtin#error)) ``` UpdateSensorConfigContext updates the configuration of one sensor. The allowed configuration parameters depend on the sensor type #### func (*Bridge) [UpdateSensorContext](https://github.com/amimof/huego/blob/v1.2.1/bridge.go#L1819) [¶](#Bridge.UpdateSensorContext) added in v1.1.0 ``` func (b *[Bridge](#Bridge)) UpdateSensorContext(ctx [context](/context).[Context](/context#Context), i [int](/builtin#int), sensor *[Sensor](#Sensor)) (*[Response](#Response), [error](/builtin#error)) ``` UpdateSensorContext updates one sensor by its id and attributes by sensor #### type [BridgeConfig](https://github.com/amimof/huego/blob/v1.2.1/config.go#L64) [¶](#BridgeConfig) ``` type BridgeConfig struct { State [string](/builtin#string) `json:"state,omitempty"` LastInstall [string](/builtin#string) `json:"lastinstall,omitempty"` } ``` BridgeConfig holds information about software updates #### type [Capabilities](https://github.com/amimof/huego/blob/v1.2.1/capabilities.go#L4) [¶](#Capabilities) ``` type Capabilities struct { Groups [Capability](#Capability) `json:"groups,omitempty"` Lights [Capability](#Capability) `json:"lights,omitempty"` Resourcelinks [Capability](#Capability) `json:"resourcelinks,omitempty"` Schedules [Capability](#Capability) `json:"schedules,omitempty"` Rules [Capability](#Capability) `json:"rules,omitempty"` Scenes [Capability](#Capability) `json:"scenes,omitempty"` Sensors [Capability](#Capability) `json:"sensors,omitempty"` Streaming 
[Capability](#Capability) `json:"streaming,omitempty"` } ``` Capabilities holds a combined model of resource capabilities on the bridge: <https://developers.meethue.com/documentation/lights-api> #### type [Capability](https://github.com/amimof/huego/blob/v1.2.1/capabilities.go#L16) [¶](#Capability) ``` type Capability struct { Available [int](/builtin#int) `json:"available,omitempty"` } ``` Capability defines the resource and subresource capabilities. #### type [Command](https://github.com/amimof/huego/blob/v1.2.1/schedule.go#L17) [¶](#Command) ``` type Command struct { Address [string](/builtin#string) `json:"address"` Method [string](/builtin#string) `json:"method"` Body interface{} `json:"body"` } ``` Command defines the request to be made when the schedule occurs #### type [Condition](https://github.com/amimof/huego/blob/v1.2.1/rule.go#L17) [¶](#Condition) ``` type Condition struct { Address [string](/builtin#string) `json:"address,omitempty"` Operator [string](/builtin#string) `json:"operator,omitempty"` Value [string](/builtin#string) `json:"value,omitempty"` } ``` Condition defines the condition of a rule #### type [Config](https://github.com/amimof/huego/blob/v1.2.1/config.go#L4) [¶](#Config) ``` type Config struct { Name [string](/builtin#string) `json:"name,omitempty"` SwUpdate [SwUpdate](#SwUpdate) `json:"swupdate"` SwUpdate2 [SwUpdate2](#SwUpdate2) `json:"swupdate2"` WhitelistMap map[[string](/builtin#string)][Whitelist](#Whitelist) `json:"whitelist"` Whitelist [][Whitelist](#Whitelist) `json:"-"` PortalState [PortalState](#PortalState) `json:"portalstate"` APIVersion [string](/builtin#string) `json:"apiversion,omitempty"` SwVersion [string](/builtin#string) `json:"swversion,omitempty"` ProxyAddress [string](/builtin#string) `json:"proxyaddress,omitempty"` ProxyPort [uint16](/builtin#uint16) `json:"proxyport,omitempty"` LinkButton [bool](/builtin#bool) `json:"linkbutton,omitempty"` IPAddress [string](/builtin#string) `json:"ipaddress,omitempty"` Mac
[string](/builtin#string) `json:"mac,omitempty"` NetMask [string](/builtin#string) `json:"netmask,omitempty"` Gateway [string](/builtin#string) `json:"gateway,omitempty"` Dhcp [bool](/builtin#bool) `json:"dhcp,omitempty"` PortalServices [bool](/builtin#bool) `json:"portalservices,omitempty"` UTC [string](/builtin#string) `json:"UTC,omitempty"` LocalTime [string](/builtin#string) `json:"localtime,omitempty"` TimeZone [string](/builtin#string) `json:"timezone,omitempty"` ZigbeeChannel [uint8](/builtin#uint8) `json:"zigbeechannel,omitempty"` ModelID [string](/builtin#string) `json:"modelid,omitempty"` BridgeID [string](/builtin#string) `json:"bridgeid,omitempty"` FactoryNew [bool](/builtin#bool) `json:"factorynew,omitempty"` ReplacesBridgeID [string](/builtin#string) `json:"replacesbridgeid,omitempty"` DatastoreVersion [string](/builtin#string) `json:"datastoreversion,omitempty"` StarterKitID [string](/builtin#string) `json:"starterkitid,omitempty"` InternetService [InternetService](#InternetService) `json:"internetservices,omitempty"` } ``` Config holds the bridge hardware configuration #### type [DeviceTypes](https://github.com/amimof/huego/blob/v1.2.1/config.go#L57) [¶](#DeviceTypes) ``` type DeviceTypes struct { Bridge [bool](/builtin#bool) `json:"bridge,omitempty"` Lights [][string](/builtin#string) `json:"lights,omitempty"` Sensors [][string](/builtin#string) `json:"sensors,omitempty"` } ``` DeviceTypes details the type of updates available #### type [Group](https://github.com/amimof/huego/blob/v1.2.1/group.go#L10) [¶](#Group) ``` type Group struct { Name [string](/builtin#string) `json:"name,omitempty"` Lights [][string](/builtin#string) `json:"lights,omitempty"` Type [string](/builtin#string) `json:"type,omitempty"` GroupState *[GroupState](#GroupState) `json:"state,omitempty"` Recycle [bool](/builtin#bool) `json:"recycle,omitempty"` Class [string](/builtin#string) `json:"class,omitempty"` Stream *[Stream](#Stream) `json:"stream,omitempty"` Locations 
map[[string](/builtin#string)][][float64](/builtin#float64) `json:"locations,omitempty"` State *[State](#State) `json:"action,omitempty"` ID [int](/builtin#int) `json:"-"` // contains filtered or unexported fields } ``` Group represents a bridge group <https://developers.meethue.com/documentation/groups-api> #### func (*Group) [Alert](https://github.com/amimof/huego/blob/v1.2.1/group.go#L284) [¶](#Group.Alert) ``` func (g *[Group](#Group)) Alert(new [string](/builtin#string)) [error](/builtin#error) ``` Alert makes the lights in the group blink in their current color. Supported values are: “none” – The light is not performing an alert effect. “select” – The light is performing one breathe cycle. “lselect” – The light is performing breathe cycles for 15 seconds or until alert is set to "none". #### func (*Group) [AlertContext](https://github.com/amimof/huego/blob/v1.2.1/group.go#L292) [¶](#Group.AlertContext) added in v1.1.0 ``` func (g *[Group](#Group)) AlertContext(ctx [context](/context).[Context](/context#Context), new [string](/builtin#string)) [error](/builtin#error) ``` AlertContext makes the lights in the group blink in their current color. Supported values are: “none” – The light is not performing an alert effect. “select” – The light is performing one breathe cycle. “lselect” – The light is performing breathe cycles for 15 seconds or until alert is set to "none". 
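The bridge accepts only the three alert strings listed above; anything else is rejected. A minimal, self-contained sketch of a guard one might add before calling `Alert` (the `validAlert` helper is hypothetical, not part of huego):

```go
package main

import "fmt"

// validAlert reports whether s is one of the alert effects the
// Hue bridge accepts: "none", "select" (one breathe cycle) or
// "lselect" (breathe cycles for 15 seconds).
func validAlert(s string) bool {
	switch s {
	case "none", "select", "lselect":
		return true
	}
	return false
}

func main() {
	for _, s := range []string{"select", "blink"} {
		fmt.Printf("%q valid: %v\n", s, validAlert(s))
	}
}
```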
#### func (*Group) [Bri](https://github.com/amimof/huego/blob/v1.2.1/group.go#L126) [¶](#Group.Bri) ``` func (g *[Group](#Group)) Bri(new [uint8](/builtin#uint8)) [error](/builtin#error) ``` Bri sets the light brightness state property #### func (*Group) [BriContext](https://github.com/amimof/huego/blob/v1.2.1/group.go#L131) [¶](#Group.BriContext) added in v1.1.0 ``` func (g *[Group](#Group)) BriContext(ctx [context](/context).[Context](/context#Context), new [uint8](/builtin#uint8)) [error](/builtin#error) ``` BriContext sets the light brightness state property #### func (*Group) [Col](https://github.com/amimof/huego/blob/v1.2.1/group.go#L211) [¶](#Group.Col) added in v1.2.1 ``` func (g *[Group](#Group)) Col(new [color](/image/color).[Color](/image/color#Color)) [error](/builtin#error) ``` Col sets the light color as RGB (will be converted to xy) #### func (*Group) [ColContext](https://github.com/amimof/huego/blob/v1.2.1/group.go#L216) [¶](#Group.ColContext) added in v1.2.1 ``` func (g *[Group](#Group)) ColContext(ctx [context](/context).[Context](/context#Context), new [color](/image/color).[Color](/image/color#Color)) [error](/builtin#error) ``` ColContext sets the light color as RGB (will be converted to xy) #### func (*Group) [Ct](https://github.com/amimof/huego/blob/v1.2.1/group.go#L194) [¶](#Group.Ct) ``` func (g *[Group](#Group)) Ct(new [uint16](/builtin#uint16)) [error](/builtin#error) ``` Ct sets the light color temperature state property #### func (*Group) [CtContext](https://github.com/amimof/huego/blob/v1.2.1/group.go#L199) [¶](#Group.CtContext) added in v1.1.0 ``` func (g *[Group](#Group)) CtContext(ctx [context](/context).[Context](/context#Context), new [uint16](/builtin#uint16)) [error](/builtin#error) ``` CtContext sets the light color temperature state property #### func (*Group) [DisableStreaming](https://github.com/amimof/huego/blob/v1.2.1/group.go#L332) [¶](#Group.DisableStreaming) added in v1.2.0 ``` func (g *[Group](#Group)) 
DisableStreaming() [error](/builtin#error) ``` DisableStreaming disables streaming for the group by setting the Stream Active property to false #### func (*Group) [DisableStreamingContext](https://github.com/amimof/huego/blob/v1.2.1/group.go#L337) [¶](#Group.DisableStreamingContext) added in v1.2.0 ``` func (g *[Group](#Group)) DisableStreamingContext(ctx [context](/context).[Context](/context#Context)) [error](/builtin#error) ``` DisableStreamingContext disables streaming for the group by setting the Stream Active property to false #### func (*Group) [Effect](https://github.com/amimof/huego/blob/v1.2.1/group.go#L264) [¶](#Group.Effect) ``` func (g *[Group](#Group)) Effect(new [string](/builtin#string)) [error](/builtin#error) ``` Effect sets the dynamic effect of the lights in the group; currently “none” and “colorloop” are supported #### func (*Group) [EffectContext](https://github.com/amimof/huego/blob/v1.2.1/group.go#L269) [¶](#Group.EffectContext) added in v1.1.0 ``` func (g *[Group](#Group)) EffectContext(ctx [context](/context).[Context](/context#Context), new [string](/builtin#string)) [error](/builtin#error) ``` EffectContext sets the dynamic effect of the lights in the group; currently “none” and “colorloop” are supported #### func (*Group) [EnableStreaming](https://github.com/amimof/huego/blob/v1.2.1/group.go#L304) [¶](#Group.EnableStreaming) added in v1.2.0 ``` func (g *[Group](#Group)) EnableStreaming() [error](/builtin#error) ``` EnableStreaming enables streaming for the group by setting the Stream Active property to true #### func (*Group) [EnableStreamingContext](https://github.com/amimof/huego/blob/v1.2.1/group.go#L309) [¶](#Group.EnableStreamingContext) added in v1.2.0 ``` func (g *[Group](#Group)) EnableStreamingContext(ctx [context](/context).[Context](/context#Context)) [error](/builtin#error) ``` EnableStreamingContext enables streaming for the group by setting the Stream Active property to true #### func (*Group) 
[Hue](https://github.com/amimof/huego/blob/v1.2.1/group.go#L143) [¶](#Group.Hue) ``` func (g *[Group](#Group)) Hue(new [uint16](/builtin#uint16)) [error](/builtin#error) ``` Hue sets the light hue state property (0-65535) #### func (*Group) [HueContext](https://github.com/amimof/huego/blob/v1.2.1/group.go#L148) [¶](#Group.HueContext) added in v1.1.0 ``` func (g *[Group](#Group)) HueContext(ctx [context](/context).[Context](/context#Context), new [uint16](/builtin#uint16)) [error](/builtin#error) ``` HueContext sets the light hue state property (0-65535) #### func (*Group) [IsOn](https://github.com/amimof/huego/blob/v1.2.1/group.go#L121) [¶](#Group.IsOn) ``` func (g *[Group](#Group)) IsOn() [bool](/builtin#bool) ``` IsOn returns true if light state On property is true #### func (*Group) [Off](https://github.com/amimof/huego/blob/v1.2.1/group.go#L89) [¶](#Group.Off) ``` func (g *[Group](#Group)) Off() [error](/builtin#error) ``` Off sets the On state of one group to false, turning all lights in the group off #### func (*Group) [OffContext](https://github.com/amimof/huego/blob/v1.2.1/group.go#L94) [¶](#Group.OffContext) added in v1.1.0 ``` func (g *[Group](#Group)) OffContext(ctx [context](/context).[Context](/context#Context)) [error](/builtin#error) ``` OffContext sets the On state of one group to false, turning all lights in the group off #### func (*Group) [On](https://github.com/amimof/huego/blob/v1.2.1/group.go#L105) [¶](#Group.On) ``` func (g *[Group](#Group)) On() [error](/builtin#error) ``` On sets the On state of one group to true, turning all lights in the group on #### func (*Group) [OnContext](https://github.com/amimof/huego/blob/v1.2.1/group.go#L110) [¶](#Group.OnContext) added in v1.1.0 ``` func (g *[Group](#Group)) OnContext(ctx [context](/context).[Context](/context#Context)) [error](/builtin#error) ``` OnContext sets the On state of one group to true, turning all lights in the group on #### func (*Group) 
[Rename](https://github.com/amimof/huego/blob/v1.2.1/group.go#L73) [¶](#Group.Rename) ``` func (g *[Group](#Group)) Rename(new [string](/builtin#string)) [error](/builtin#error) ``` Rename sets the name property of the group #### func (*Group) [RenameContext](https://github.com/amimof/huego/blob/v1.2.1/group.go#L78) [¶](#Group.RenameContext) added in v1.1.0 ``` func (g *[Group](#Group)) RenameContext(ctx [context](/context).[Context](/context#Context), new [string](/builtin#string)) [error](/builtin#error) ``` RenameContext sets the name property of the group #### func (*Group) [Sat](https://github.com/amimof/huego/blob/v1.2.1/group.go#L160) [¶](#Group.Sat) ``` func (g *[Group](#Group)) Sat(new [uint8](/builtin#uint8)) [error](/builtin#error) ``` Sat sets the light saturation state property (0-254) #### func (*Group) [SatContext](https://github.com/amimof/huego/blob/v1.2.1/group.go#L165) [¶](#Group.SatContext) added in v1.1.0 ``` func (g *[Group](#Group)) SatContext(ctx [context](/context).[Context](/context#Context), new [uint8](/builtin#uint8)) [error](/builtin#error) ``` SatContext sets the light saturation state property (0-254) #### func (*Group) [Scene](https://github.com/amimof/huego/blob/v1.2.1/group.go#L231) [¶](#Group.Scene) ``` func (g *[Group](#Group)) Scene(scene [string](/builtin#string)) [error](/builtin#error) ``` Scene recalls the scene with the given identifier #### func (*Group) [SceneContext](https://github.com/amimof/huego/blob/v1.2.1/group.go#L236) [¶](#Group.SceneContext) added in v1.1.0 ``` func (g *[Group](#Group)) SceneContext(ctx [context](/context).[Context](/context#Context), scene [string](/builtin#string)) [error](/builtin#error) ``` SceneContext recalls the scene with the given identifier #### func (*Group) [SetState](https://github.com/amimof/huego/blob/v1.2.1/group.go#L58) [¶](#Group.SetState) ``` func (g *[Group](#Group)) SetState(s [State](#State)) [error](/builtin#error) ``` SetState 
sets the state of the group to s. #### func (*Group) [SetStateContext](https://github.com/amimof/huego/blob/v1.2.1/group.go#L63) [¶](#Group.SetStateContext) added in v1.1.0 ``` func (g *[Group](#Group)) SetStateContext(ctx [context](/context).[Context](/context#Context), s [State](#State)) [error](/builtin#error) ``` SetStateContext sets the state of the group to s. #### func (*Group) [TransitionTime](https://github.com/amimof/huego/blob/v1.2.1/group.go#L248) [¶](#Group.TransitionTime) ``` func (g *[Group](#Group)) TransitionTime(new [uint16](/builtin#uint16)) [error](/builtin#error) ``` TransitionTime sets the duration of the transition from the light’s current state to the new state #### func (*Group) [TransitionTimeContext](https://github.com/amimof/huego/blob/v1.2.1/group.go#L253) [¶](#Group.TransitionTimeContext) added in v1.1.0 ``` func (g *[Group](#Group)) TransitionTimeContext(ctx [context](/context).[Context](/context#Context), new [uint16](/builtin#uint16)) [error](/builtin#error) ``` TransitionTimeContext sets the duration of the transition from the light’s current state to the new state #### func (*Group) [Xy](https://github.com/amimof/huego/blob/v1.2.1/group.go#L177) [¶](#Group.Xy) ``` func (g *[Group](#Group)) Xy(new [][float32](/builtin#float32)) [error](/builtin#error) ``` Xy sets the x and y coordinates of a color in CIE color space. (0-1 per value) #### func (*Group) [XyContext](https://github.com/amimof/huego/blob/v1.2.1/group.go#L182) [¶](#Group.XyContext) added in v1.1.0 ``` func (g *[Group](#Group)) XyContext(ctx [context](/context).[Context](/context#Context), new [][float32](/builtin#float32)) [error](/builtin#error) ``` XyContext sets the x and y coordinates of a color in CIE color space. 
(0-1 per value) #### type [GroupState](https://github.com/amimof/huego/blob/v1.2.1/group.go#L26) [¶](#GroupState) ``` type GroupState struct { AllOn [bool](/builtin#bool) `json:"all_on,omitempty"` AnyOn [bool](/builtin#bool) `json:"any_on,omitempty"` } ``` GroupState defines the state on a group. Can be used to control the state of all lights in a group rather than controlling them individually #### type [InternetService](https://github.com/amimof/huego/blob/v1.2.1/config.go#L76) [¶](#InternetService) ``` type InternetService struct { Internet [string](/builtin#string) `json:"internet,omitempty"` RemoteAccess [string](/builtin#string) `json:"remoteaccess,omitempty"` Time [string](/builtin#string) `json:"time,omitempty"` SwUpdate [string](/builtin#string) `json:"swupdate,omitempty"` } ``` InternetService stores information about the internet connectivity to the bridge #### type [Light](https://github.com/amimof/huego/blob/v1.2.1/light.go#L10) [¶](#Light) ``` type Light struct { State *[State](#State) `json:"state,omitempty"` Type [string](/builtin#string) `json:"type,omitempty"` Name [string](/builtin#string) `json:"name,omitempty"` ModelID [string](/builtin#string) `json:"modelid,omitempty"` ManufacturerName [string](/builtin#string) `json:"manufacturername,omitempty"` UniqueID [string](/builtin#string) `json:"uniqueid,omitempty"` SwVersion [string](/builtin#string) `json:"swversion,omitempty"` SwConfigID [string](/builtin#string) `json:"swconfigid,omitempty"` ProductName [string](/builtin#string) `json:"productname,omitempty"` ID [int](/builtin#int) `json:"-"` // contains filtered or unexported fields } ``` Light represents a bridge light <https://developers.meethue.com/documentation/lights-api> #### func (*Light) [Alert](https://github.com/amimof/huego/blob/v1.2.1/light.go#L262) [¶](#Light.Alert) ``` func (l *[Light](#Light)) Alert(new [string](/builtin#string)) [error](/builtin#error) ``` Alert makes the light blink in its current color. 
Supported values are: “none” – The light is not performing an alert effect. “select” – The light is performing one breathe cycle. “lselect” – The light is performing breathe cycles for 15 seconds or until alert is set to "none". #### func (*Light) [AlertContext](https://github.com/amimof/huego/blob/v1.2.1/light.go#L270) [¶](#Light.AlertContext) added in v1.1.0 ``` func (l *[Light](#Light)) AlertContext(ctx [context](/context).[Context](/context#Context), new [string](/builtin#string)) [error](/builtin#error) ``` AlertContext makes the light blink in its current color. Supported values are: “none” – The light is not performing an alert effect. “select” – The light is performing one breathe cycle. “lselect” – The light is performing breathe cycles for 15 seconds or until alert is set to "none". #### func (*Light) [Bri](https://github.com/amimof/huego/blob/v1.2.1/light.go#L121) [¶](#Light.Bri) ``` func (l *[Light](#Light)) Bri(new [uint8](/builtin#uint8)) [error](/builtin#error) ``` Bri sets the light brightness state property #### func (*Light) [BriContext](https://github.com/amimof/huego/blob/v1.2.1/light.go#L126) [¶](#Light.BriContext) added in v1.1.0 ``` func (l *[Light](#Light)) BriContext(ctx [context](/context).[Context](/context#Context), new [uint8](/builtin#uint8)) [error](/builtin#error) ``` BriContext sets the light brightness state property #### func (*Light) [Col](https://github.com/amimof/huego/blob/v1.2.1/light.go#L206) [¶](#Light.Col) added in v1.2.1 ``` func (l *[Light](#Light)) Col(new [color](/image/color).[Color](/image/color#Color)) [error](/builtin#error) ``` Col sets the light color as RGB (will be converted to xy) #### func (*Light) [ColContext](https://github.com/amimof/huego/blob/v1.2.1/light.go#L211) [¶](#Light.ColContext) added in v1.2.1 ``` func (l *[Light](#Light)) ColContext(ctx [context](/context).[Context](/context#Context), new [color](/image/color).[Color](/image/color#Color)) [error](/builtin#error) ``` ColContext sets the light 
color as RGB (will be converted to xy) #### func (*Light) [Ct](https://github.com/amimof/huego/blob/v1.2.1/light.go#L189) [¶](#Light.Ct) ``` func (l *[Light](#Light)) Ct(new [uint16](/builtin#uint16)) [error](/builtin#error) ``` Ct sets the light color temperature state property #### func (*Light) [CtContext](https://github.com/amimof/huego/blob/v1.2.1/light.go#L194) [¶](#Light.CtContext) added in v1.1.0 ``` func (l *[Light](#Light)) CtContext(ctx [context](/context).[Context](/context#Context), new [uint16](/builtin#uint16)) [error](/builtin#error) ``` CtContext sets the light color temperature state property #### func (*Light) [Effect](https://github.com/amimof/huego/blob/v1.2.1/light.go#L242) [¶](#Light.Effect) ``` func (l *[Light](#Light)) Effect(new [string](/builtin#string)) [error](/builtin#error) ``` Effect sets the dynamic effect of the light; currently “none” and “colorloop” are supported #### func (*Light) [EffectContext](https://github.com/amimof/huego/blob/v1.2.1/light.go#L247) [¶](#Light.EffectContext) added in v1.1.0 ``` func (l *[Light](#Light)) EffectContext(ctx [context](/context).[Context](/context#Context), new [string](/builtin#string)) [error](/builtin#error) ``` EffectContext sets the dynamic effect of the light; currently “none” and “colorloop” are supported #### func (*Light) [Hue](https://github.com/amimof/huego/blob/v1.2.1/light.go#L138) [¶](#Light.Hue) ``` func (l *[Light](#Light)) Hue(new [uint16](/builtin#uint16)) [error](/builtin#error) ``` Hue sets the light hue state property (0-65535) #### func (*Light) [HueContext](https://github.com/amimof/huego/blob/v1.2.1/light.go#L143) [¶](#Light.HueContext) added in v1.1.0 ``` func (l *[Light](#Light)) HueContext(ctx [context](/context).[Context](/context#Context), new [uint16](/builtin#uint16)) [error](/builtin#error) ``` HueContext sets the light hue state property (0-65535) #### func (*Light) [IsOn](https://github.com/amimof/huego/blob/v1.2.1/light.go#L100) [¶](#Light.IsOn) ``` func (l 
*[Light](#Light)) IsOn() [bool](/builtin#bool) ``` IsOn returns true if light state On property is true #### func (*Light) [Off](https://github.com/amimof/huego/blob/v1.2.1/light.go#L68) [¶](#Light.Off) ``` func (l *[Light](#Light)) Off() [error](/builtin#error) ``` Off sets the On state of one light to false, turning it off #### func (*Light) [OffContext](https://github.com/amimof/huego/blob/v1.2.1/light.go#L73) [¶](#Light.OffContext) added in v1.1.0 ``` func (l *[Light](#Light)) OffContext(ctx [context](/context).[Context](/context#Context)) [error](/builtin#error) ``` OffContext sets the On state of one light to false, turning it off #### func (*Light) [On](https://github.com/amimof/huego/blob/v1.2.1/light.go#L84) [¶](#Light.On) ``` func (l *[Light](#Light)) On() [error](/builtin#error) ``` On sets the On state of one light to true, turning it on #### func (*Light) [OnContext](https://github.com/amimof/huego/blob/v1.2.1/light.go#L89) [¶](#Light.OnContext) added in v1.1.0 ``` func (l *[Light](#Light)) OnContext(ctx [context](/context).[Context](/context#Context)) [error](/builtin#error) ``` OnContext sets the On state of one light to true, turning it on #### func (*Light) [Rename](https://github.com/amimof/huego/blob/v1.2.1/light.go#L105) [¶](#Light.Rename) ``` func (l *[Light](#Light)) Rename(new [string](/builtin#string)) [error](/builtin#error) ``` Rename sets the name property of the light #### func (*Light) [RenameContext](https://github.com/amimof/huego/blob/v1.2.1/light.go#L110) [¶](#Light.RenameContext) added in v1.1.0 ``` func (l *[Light](#Light)) RenameContext(ctx [context](/context).[Context](/context#Context), new [string](/builtin#string)) [error](/builtin#error) ``` RenameContext sets the name property of the light #### func (*Light) [Sat](https://github.com/amimof/huego/blob/v1.2.1/light.go#L155) [¶](#Light.Sat) ``` func (l *[Light](#Light)) Sat(new [uint8](/builtin#uint8)) [error](/builtin#error) ``` Sat sets the light saturation state property 
(0-254) #### func (*Light) [SatContext](https://github.com/amimof/huego/blob/v1.2.1/light.go#L160) [¶](#Light.SatContext) added in v1.1.0 ``` func (l *[Light](#Light)) SatContext(ctx [context](/context).[Context](/context#Context), new [uint8](/builtin#uint8)) [error](/builtin#error) ``` SatContext sets the light saturation state property (0-254) #### func (*Light) [SetState](https://github.com/amimof/huego/blob/v1.2.1/light.go#L53) [¶](#Light.SetState) ``` func (l *[Light](#Light)) SetState(s [State](#State)) [error](/builtin#error) ``` SetState sets the state of the light to s. #### func (*Light) [SetStateContext](https://github.com/amimof/huego/blob/v1.2.1/light.go#L58) [¶](#Light.SetStateContext) added in v1.1.0 ``` func (l *[Light](#Light)) SetStateContext(ctx [context](/context).[Context](/context#Context), s [State](#State)) [error](/builtin#error) ``` SetStateContext sets the state of the light to s. #### func (*Light) [TransitionTime](https://github.com/amimof/huego/blob/v1.2.1/light.go#L226) [¶](#Light.TransitionTime) ``` func (l *[Light](#Light)) TransitionTime(new [uint16](/builtin#uint16)) [error](/builtin#error) ``` TransitionTime sets the duration of the transition from the light’s current state to the new state #### func (*Light) [TransitionTimeContext](https://github.com/amimof/huego/blob/v1.2.1/light.go#L231) [¶](#Light.TransitionTimeContext) added in v1.1.0 ``` func (l *[Light](#Light)) TransitionTimeContext(ctx [context](/context).[Context](/context#Context), new [uint16](/builtin#uint16)) [error](/builtin#error) ``` TransitionTimeContext sets the duration of the transition from the light’s current state to the new state #### func (*Light) [Xy](https://github.com/amimof/huego/blob/v1.2.1/light.go#L172) [¶](#Light.Xy) ``` func (l *[Light](#Light)) Xy(new [][float32](/builtin#float32)) [error](/builtin#error) ``` Xy sets the x and y coordinates of a color in CIE color space. 
(0-1 per value) #### func (*Light) [XyContext](https://github.com/amimof/huego/blob/v1.2.1/light.go#L177) [¶](#Light.XyContext) added in v1.1.0 ``` func (l *[Light](#Light)) XyContext(ctx [context](/context).[Context](/context#Context), new [][float32](/builtin#float32)) [error](/builtin#error) ``` XyContext sets the x and y coordinates of a color in CIE color space. (0-1 per value) #### type [NewLight](https://github.com/amimof/huego/blob/v1.2.1/light.go#L47) [¶](#NewLight) ``` type NewLight struct { Lights [][string](/builtin#string) LastScan [string](/builtin#string) `json:"lastscan"` } ``` NewLight defines a list of lights discovered the last time the bridge performed a light discovery. Also stores the timestamp the last time a discovery was performed. #### type [NewSensor](https://github.com/amimof/huego/blob/v1.2.1/sensor.go#L18) [¶](#NewSensor) ``` type NewSensor struct { Sensors []*[Sensor](#Sensor) LastScan [string](/builtin#string) `json:"lastscan"` } ``` NewSensor defines a list of sensors discovered the last time the bridge performed a sensor discovery. Also stores the timestamp the last time a discovery was performed. 
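The setter methods above take bridge-native ranges (hue 0-65535, saturation and brightness 0-254), which rarely match the units an application works in. A self-contained sketch of conversion helpers (`hueFromDegrees` and `briFromPercent` are hypothetical names, not part of huego):

```go
package main

import "fmt"

// hueFromDegrees maps an angle on the color wheel (0-360) to the
// bridge's 16-bit hue range (0-65535).
func hueFromDegrees(deg float64) uint16 {
	return uint16(deg / 360 * 65535)
}

// briFromPercent maps 0-100% to the bridge's brightness range,
// clamping to 1 since 0 is not a meaningful brightness level.
func briFromPercent(p float64) uint8 {
	v := p / 100 * 254
	if v < 1 {
		v = 1
	}
	return uint8(v)
}

func main() {
	fmt.Println(hueFromDegrees(180)) // 32767 (cyan region of the wheel)
	fmt.Println(briFromPercent(50))  // 127
}
```

The resulting values can then be passed straight to `Hue` and `Bri` on a `Light` or `Group`.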
#### type [PortalState](https://github.com/amimof/huego/blob/v1.2.1/config.go#L99) [¶](#PortalState) ``` type PortalState struct { SignedOn [bool](/builtin#bool) `json:"signedon,omitempty"` Incoming [bool](/builtin#bool) `json:"incoming,omitempty"` Outgoing [bool](/builtin#bool) `json:"outgoing,omitempty"` Communication [string](/builtin#string) `json:"communication,omitempty"` } ``` PortalState is a struct representing the portal state #### type [Resourcelink](https://github.com/amimof/huego/blob/v1.2.1/resourcelink.go#L4) [¶](#Resourcelink) ``` type Resourcelink struct { Name [string](/builtin#string) `json:"name,omitempty"` Description [string](/builtin#string) `json:"description,omitempty"` Type [string](/builtin#string) `json:"type,omitempty"` ClassID [uint16](/builtin#uint16) `json:"classid,omitempty"` Owner [string](/builtin#string) `json:"owner,omitempty"` Links [][string](/builtin#string) `json:"links,omitempty"` ID [int](/builtin#int) `json:",omitempty"` } ``` Resourcelink represents a bridge resourcelink <https://developers.meethue.com/documentation/resourcelinks-api> #### type [Response](https://github.com/amimof/huego/blob/v1.2.1/huego.go#L32) [¶](#Response) ``` type Response struct { Success map[[string](/builtin#string)]interface{} } ``` Response is a wrapper struct of the success response returned from the bridge after a successful API call. 
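Since `Response.Success` is a `map[string]interface{}`, callers need a checked type assertion to pull typed values out of it. A self-contained sketch using a local stand-in for the struct (the `successString` helper is hypothetical, not part of huego):

```go
package main

import "fmt"

// Response is a local stand-in for huego's wrapper, whose Success
// field is an untyped map.
type Response struct {
	Success map[string]interface{}
}

// successString extracts a string value from the Success map,
// returning ok=false when the key is absent or not a string.
func successString(r *Response, key string) (string, bool) {
	v, ok := r.Success[key].(string)
	return v, ok
}

func main() {
	resp := &Response{Success: map[string]interface{}{"id": "1"}}
	if id, ok := successString(resp, "id"); ok {
		fmt.Println("created resource with id", id)
	}
}
```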
#### type [Rule](https://github.com/amimof/huego/blob/v1.2.1/rule.go#L4) [¶](#Rule) ``` type Rule struct { Name [string](/builtin#string) `json:"name,omitempty"` LastTriggered [string](/builtin#string) `json:"lasttriggered,omitempty"` CreationTime [string](/builtin#string) `json:"creationtime,omitempty"` TimesTriggered [int](/builtin#int) `json:"timestriggered,omitempty"` Owner [string](/builtin#string) `json:"owner,omitempty"` Status [string](/builtin#string) `json:"status,omitempty"` Conditions []*[Condition](#Condition) `json:"conditions,omitempty"` Actions []*[RuleAction](#RuleAction) `json:"actions,omitempty"` ID [int](/builtin#int) `json:",omitempty"` } ``` Rule represents a bridge rule <https://developers.meethue.com/documentation/rules-api> #### type [RuleAction](https://github.com/amimof/huego/blob/v1.2.1/rule.go#L24) [¶](#RuleAction) ``` type RuleAction struct { Address [string](/builtin#string) `json:"address,omitempty"` Method [string](/builtin#string) `json:"method,omitempty"` Body interface{} `json:"body,omitempty"` } ``` RuleAction defines the action to execute when a rule triggers #### type [Scene](https://github.com/amimof/huego/blob/v1.2.1/scene.go#L6) [¶](#Scene) ``` type Scene struct { Name [string](/builtin#string) `json:"name,omitempty"` Type [string](/builtin#string) `json:"type,omitempty"` Group [string](/builtin#string) `json:"group,omitempty"` Lights [][string](/builtin#string) `json:"lights,omitempty"` Owner [string](/builtin#string) `json:"owner,omitempty"` Recycle [bool](/builtin#bool) `json:"recycle"` Locked [bool](/builtin#bool) `json:"locked,omitempty"` AppData interface{} `json:"appdata,omitempty"` Picture [string](/builtin#string) `json:"picture,omitempty"` LastUpdated [string](/builtin#string) `json:"lastupdated,omitempty"` Version [int](/builtin#int) `json:"version,omitempty"` StoreLightState [bool](/builtin#bool) `json:"storelightstate,omitempty"` LightStates map[[int](/builtin#int)][State](#State) `json:"lightstates,omitempty"` 
TransitionTime [uint16](/builtin#uint16) `json:"transitiontime,omitempty"` ID [string](/builtin#string) `json:"-"` // contains filtered or unexported fields } ``` Scene represents a bridge scene <https://developers.meethue.com/documentation/scenes-api> #### func (*Scene) [Recall](https://github.com/amimof/huego/blob/v1.2.1/scene.go#L26) [¶](#Scene.Recall) ``` func (s *[Scene](#Scene)) Recall(id [int](/builtin#int)) [error](/builtin#error) ``` Recall will recall the scene in the group identified by id #### func (*Scene) [RecallContext](https://github.com/amimof/huego/blob/v1.2.1/scene.go#L31) [¶](#Scene.RecallContext) added in v1.1.0 ``` func (s *[Scene](#Scene)) RecallContext(ctx [context](/context).[Context](/context#Context), id [int](/builtin#int)) [error](/builtin#error) ``` RecallContext will recall the scene in the group identified by id #### type [Schedule](https://github.com/amimof/huego/blob/v1.2.1/schedule.go#L4) [¶](#Schedule) ``` type Schedule struct { Name [string](/builtin#string) `json:"name"` Description [string](/builtin#string) `json:"description"` Command *[Command](#Command) `json:"command"` Time [string](/builtin#string) `json:"time,omitempty"` LocalTime [string](/builtin#string) `json:"localtime"` StartTime [string](/builtin#string) `json:"starttime,omitempty"` Status [string](/builtin#string) `json:"status,omitempty"` AutoDelete [bool](/builtin#bool) `json:"autodelete,omitempty"` ID [int](/builtin#int) `json:"-"` } ``` Schedule represents a bridge schedule <https://developers.meethue.com/documentation/schedules-api-0> #### type [Sensor](https://github.com/amimof/huego/blob/v1.2.1/sensor.go#L4) [¶](#Sensor) ``` type Sensor struct { State map[[string](/builtin#string)]interface{} `json:"state,omitempty"` Config map[[string](/builtin#string)]interface{} `json:"config,omitempty"` Name [string](/builtin#string) `json:"name,omitempty"` Type [string](/builtin#string) `json:"type,omitempty"` ModelID [string](/builtin#string) `json:"modelid,omitempty"` 
ManufacturerName [string](/builtin#string) `json:"manufacturername,omitempty"` UniqueID [string](/builtin#string) `json:"uniqueid,omitempty"` SwVersion [string](/builtin#string) `json:"swversion,omitempty"` ID [int](/builtin#int) `json:",omitempty"` } ``` Sensor represents a bridge sensor <https://developers.meethue.com/documentation/sensors-api> #### type [State](https://github.com/amimof/huego/blob/v1.2.1/light.go#L25) [¶](#State) ``` type State struct { On [bool](/builtin#bool) `json:"on"` Bri [uint8](/builtin#uint8) `json:"bri,omitempty"` Hue [uint16](/builtin#uint16) `json:"hue,omitempty"` Sat [uint8](/builtin#uint8) `json:"sat,omitempty"` Xy [][float32](/builtin#float32) `json:"xy,omitempty"` Ct [uint16](/builtin#uint16) `json:"ct,omitempty"` Alert [string](/builtin#string) `json:"alert,omitempty"` Effect [string](/builtin#string) `json:"effect,omitempty"` TransitionTime [uint16](/builtin#uint16) `json:"transitiontime,omitempty"` BriInc [int](/builtin#int) `json:"bri_inc,omitempty"` SatInc [int](/builtin#int) `json:"sat_inc,omitempty"` HueInc [int](/builtin#int) `json:"hue_inc,omitempty"` CtInc [int](/builtin#int) `json:"ct_inc,omitempty"` XyInc [int](/builtin#int) `json:"xy_inc,omitempty"` ColorMode [string](/builtin#string) `json:"colormode,omitempty"` Reachable [bool](/builtin#bool) `json:"reachable,omitempty"` Scene [string](/builtin#string) `json:"scene,omitempty"` } ``` State defines the attributes and properties of a light #### type [Stream](https://github.com/amimof/huego/blob/v1.2.1/group.go#L32) [¶](#Stream) added in v1.2.0 ``` type Stream struct { ProxyMode [string](/builtin#string) `json:"proxymode,omitempty"` ProxyNode [string](/builtin#string) `json:"proxynode,omitempty"` ActiveRaw *[bool](/builtin#bool) `json:"active,omitempty"` OwnerRaw *[string](/builtin#string) `json:"owner,omitempty"` } ``` Stream defines the stream status of a group #### func (*Stream) [Active](https://github.com/amimof/huego/blob/v1.2.1/group.go#L40) [¶](#Stream.Active) 
added in v1.2.0 ``` func (s *[Stream](#Stream)) Active() [bool](/builtin#bool) ``` Active returns the stream active state, and will return false if ActiveRaw is nil #### func (*Stream) [Owner](https://github.com/amimof/huego/blob/v1.2.1/group.go#L49) [¶](#Stream.Owner) added in v1.2.0 ``` func (s *[Stream](#Stream)) Owner() [string](/builtin#string) ``` Owner returns the stream Owner, and will return an empty string if OwnerRaw is nil #### type [SwUpdate](https://github.com/amimof/huego/blob/v1.2.1/config.go#L36) [¶](#SwUpdate) ``` type SwUpdate struct { CheckForUpdate [bool](/builtin#bool) `json:"checkforupdate,omitempty"` DeviceTypes [DeviceTypes](#DeviceTypes) `json:"devicetypes"` UpdateState [uint8](/builtin#uint8) `json:"updatestate,omitempty"` Notify [bool](/builtin#bool) `json:"notify,omitempty"` URL [string](/builtin#string) `json:"url,omitempty"` Text [string](/builtin#string) `json:"text,omitempty"` } ``` SwUpdate contains information related to software updates. Deprecated in 1.20 #### type [SwUpdate2](https://github.com/amimof/huego/blob/v1.2.1/config.go#L46) [¶](#SwUpdate2) ``` type SwUpdate2 struct { Bridge [BridgeConfig](#BridgeConfig) `json:"bridge"` CheckForUpdate [bool](/builtin#bool) `json:"checkforupdate,omitempty"` State [string](/builtin#string) `json:"state,omitempty"` Install [bool](/builtin#bool) `json:"install,omitempty"` AutoInstall [AutoInstall](#AutoInstall) `json:"autoinstall"` LastChange [string](/builtin#string) `json:"lastchange,omitempty"` LastInstall [string](/builtin#string) `json:"lastinstall,omitempty"` } ``` SwUpdate2 contains information related to software updates #### type [Whitelist](https://github.com/amimof/huego/blob/v1.2.1/config.go#L90) [¶](#Whitelist) ``` type Whitelist struct { Name [string](/builtin#string) `json:"name"` Username [string](/builtin#string) CreateDate [string](/builtin#string) `json:"create date"` LastUseDate [string](/builtin#string) `json:"last use date"` ClientKey [string](/builtin#string) } ``` 
Whitelist represents a whitelist user ID in the bridge
github.com/tdewolff/css (Go)
README [¶](#section-readme) --- ### Parse [Build Status](https://travis-ci.org/tdewolff/parse) [GoDoc](http://godoc.org/github.com/tdewolff/parse) [Coverage Status](https://coveralls.io/github/tdewolff/parse?branch=master)

This package contains several lexers and parsers written in [Go](http://golang.org/ "Go Language"). All subpackages are built to be streaming and high performance, and to be in accordance with the official (latest) specifications. The lexers are implemented using `buffer.Lexer` in <https://github.com/tdewolff/parse/buffer> and the parsers work on top of the lexers. Some subpackages have hashes defined (using [Hasher](https://github.com/tdewolff/hasher)) that speed up common byte-slice comparisons.

#### Buffer

##### Reader

Reader is a wrapper around a `[]byte` that implements the `io.Reader` interface. It is comparable to `bytes.Reader` but has slightly different semantics (and a slightly smaller memory footprint).

##### Writer

Writer is a buffer that implements the `io.Writer` interface and expands the buffer as needed. The reset functionality allows for better memory reuse: after calling `Reset`, it will overwrite the current buffer and thus reduce allocations.

##### Lexer

Lexer is a read buffer specifically designed for building lexers. It keeps track of two positions: a start and an end position. The start position is the beginning of the current token being parsed; the end position is moved forward until a valid token is found. Calling `Shift` will collapse the positions to the end and return the parsed `[]byte`.

The end position can be moved with `Move(int)`, which also accepts negative integers. One can also use `Pos() int` to try and parse a token, and if it fails rewind with `Rewind(int)`, passing the previously saved position.

`Peek(int) byte` will peek forward (relative to the end position) and return the byte at that location. `PeekRune(int) (rune, int)` returns the UTF-8 rune and its length at the given **byte** position.
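As a rough illustration of the two-position design, here is a stripped-down, stdlib-only lexer with `Peek`, `Move`, and `Shift` — the names mirror the `buffer.Lexer` methods described above, but the implementation is a simplified sketch, not the actual package code:

```go
package main

import "fmt"

// lexer illustrates the two-position design: start marks the beginning
// of the current token, end is moved forward until a full token is seen.
type lexer struct {
	buf        []byte
	start, end int
}

// Peek returns the byte at offset i relative to the end position,
// or 0 when that would run past the buffer.
func (l *lexer) Peek(i int) byte {
	if l.end+i >= len(l.buf) {
		return 0
	}
	return l.buf[l.end+i]
}

// Move advances (or, with a negative n, rewinds) the end position.
func (l *lexer) Move(n int) { l.end += n }

// Shift collapses start onto end and returns the bytes in between,
// i.e. the token that was just scanned.
func (l *lexer) Shift() []byte {
	b := l.buf[l.start:l.end]
	l.start = l.end
	return b
}

func main() {
	l := &lexer{buf: []byte("width:100px")}
	// Scan up to the ':' separator, then shift out the property name.
	for l.Peek(0) != ':' && l.Peek(0) != 0 {
		l.Move(1)
	}
	fmt.Printf("%s\n", l.Shift()) // width
}
```

A real lexer refills the buffer from an `io.Reader`; this sketch keeps the whole input in memory to focus on the position bookkeeping.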
Upon an error, `Peek` will return `0`; the **user must peek at every character** and not skip any, otherwise it may skip a `0` and panic on out-of-bounds indexing. `Lexeme() []byte` will return the currently selected bytes, and `Skip()` will collapse the selection. `Shift() []byte` is a combination of `Lexeme() []byte` and `Skip()`. When the passed `io.Reader` returned an error, `Err() error` will return that error even if not at the end of the buffer.

##### StreamLexer

StreamLexer behaves like Lexer but uses a buffer pool to read in chunks from `io.Reader`, retaining old buffers in memory that are still in use and re-using old buffers otherwise. Calling `Free(n int)` frees up `n` bytes from the internal buffer(s). It holds an array of buffers to accommodate keeping everything in memory. Calling `ShiftLen() int` returns the number of bytes that have been shifted since the previous call to `ShiftLen`, which can be used to specify how many bytes need to be freed up from the buffer. If you don't need to keep returned byte slices around, call `Free(ShiftLen())` after every `Shift` call.

#### Strconv

This package contains string conversion functions much like the standard library's `strconv` package, but specifically tailored for the performance needs of the `minify` package. For example, the floating-point to string conversion function is approximately twice as fast as the standard library's, but it is not as precise.

#### CSS

This package is a CSS3 lexer and parser. Both follow the specification at [CSS Syntax Module Level 3](http://www.w3.org/TR/css-syntax-3/). The lexer takes an io.Reader and converts it into tokens until the EOF. The parser returns a parse tree of the full io.Reader input stream, but the low-level `Next` function can be used for stream parsing to return grammar units until the EOF. [See README here](https://github.com/tdewolff/parse/tree/master/css).

#### HTML

This package is an HTML5 lexer.
It follows the specification at [The HTML syntax](http://www.w3.org/TR/html5/syntax.html). The lexer takes an io.Reader and converts it into tokens until the EOF. [See README here](https://github.com/tdewolff/parse/tree/master/html). #### JS This package is a JS lexer (ECMA-262, edition 6.0). It follows the specification at [ECMAScript Language Specification](http://www.ecma-international.org/ecma-262/6.0/). The lexer takes an io.Reader and converts it into tokens until the EOF. [See README here](https://github.com/tdewolff/parse/tree/master/js). #### JSON This package is a JSON parser (ECMA-404). It follows the specification at [JSON](http://json.org/). The parser takes an io.Reader and converts it into tokens until the EOF. [See README here](https://github.com/tdewolff/parse/tree/master/json). #### SVG This package contains common hashes for SVG1.1 tags and attributes. #### XML This package is an XML1.0 lexer. It follows the specification at [Extensible Markup Language (XML) 1.0 (Fifth Edition)](http://www.w3.org/TR/xml/). The lexer takes an io.Reader and converts it into tokens until the EOF. [See README here](https://github.com/tdewolff/parse/tree/master/xml). #### License Released under the [MIT license](https://github.com/tdewolff/css/blob/v2.3.4/LICENSE.md). Documentation [¶](#section-documentation) --- ### Overview [¶](#pkg-overview) Package parse contains a collection of parsers for various formats in its subpackages. 
### Index [¶](#pkg-index)

* [Variables](#pkg-variables)
* [func Copy(src []byte) (dst []byte)](#Copy)
* [func DataURI(dataURI []byte) ([]byte, []byte, error)](#DataURI)
* [func Dimension(b []byte) (int, int)](#Dimension)
* [func EqualFold(s, targetLower []byte) bool](#EqualFold)
* [func IsAllWhitespace(b []byte) bool](#IsAllWhitespace)
* [func IsNewline(c byte) bool](#IsNewline)
* [func IsWhitespace(c byte) bool](#IsWhitespace)
* [func Mediatype(b []byte) ([]byte, map[string]string)](#Mediatype)
* [func Number(b []byte) int](#Number)
* [func Position(r io.Reader, offset int) (line, col int, context string, err error)](#Position)
* [func QuoteEntity(b []byte) (quote byte, n int)](#QuoteEntity)
* [func ReplaceMultipleWhitespace(b []byte) []byte](#ReplaceMultipleWhitespace)
* [func ToLower(src []byte) []byte](#ToLower)
* [func TrimWhitespace(b []byte) []byte](#TrimWhitespace)
* [type Error](#Error)
  * [func NewError(msg string, r io.Reader, offset int) *Error](#NewError)
  * [func NewErrorLexer(msg string, l *buffer.Lexer) *Error](#NewErrorLexer)
  * [func (e *Error) Error() string](#Error.Error)
  * [func (e *Error) Position() (int, int, string)](#Error.Position)

### Constants [¶](#pkg-constants) This section is empty.

### Variables [¶](#pkg-variables) ``` var ErrBadDataURI = [errors](/errors).[New](/errors#New)("not a data URI") ``` ErrBadDataURI is returned by DataURI when the byte slice does not start with 'data:' or is too short.

### Functions [¶](#pkg-functions)

#### func [Copy](https://github.com/tdewolff/css/blob/v2.3.4/util.go#L4) [¶](#Copy) ``` func Copy(src [][byte](/builtin#byte)) (dst [][byte](/builtin#byte)) ``` Copy returns a copy of the given byte slice.

#### func [DataURI](https://github.com/tdewolff/css/blob/v2.3.4/common.go#L148) [¶](#DataURI) ``` func DataURI(dataURI [][byte](/builtin#byte)) ([][byte](/builtin#byte), [][byte](/builtin#byte), [error](/builtin#error)) ``` DataURI parses the given data URI and returns the mediatype, the data and an error.
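To make DataURI's job concrete, here is a stdlib-only sketch of parsing a `data:[mediatype][;base64],data` URI. `parseDataURI` is a hypothetical name and this is not the package's implementation, only an illustration of the format it handles:

```go
package main

import (
	"encoding/base64"
	"errors"
	"fmt"
	"strings"
)

var errBadDataURI = errors.New("not a data URI")

// parseDataURI splits a "data:[mediatype][;base64],data" URI into
// its mediatype and raw data, base64-decoding when flagged.
func parseDataURI(uri string) (string, []byte, error) {
	if !strings.HasPrefix(uri, "data:") {
		return "", nil, errBadDataURI
	}
	// Everything before the first ',' is metadata; the rest is payload.
	meta, payload, ok := strings.Cut(strings.TrimPrefix(uri, "data:"), ",")
	if !ok {
		return "", nil, errBadDataURI
	}
	if strings.HasSuffix(meta, ";base64") {
		raw, err := base64.StdEncoding.DecodeString(payload)
		if err != nil {
			return "", nil, err
		}
		return strings.TrimSuffix(meta, ";base64"), raw, nil
	}
	return meta, []byte(payload), nil
}

func main() {
	mt, data, err := parseDataURI("data:text/plain;base64,aGVsbG8=")
	fmt.Println(mt, string(data), err) // text/plain hello <nil>
}
```

The real function works on `[]byte` and also parses mediatype parameters; this sketch keeps only the prefix/`;base64`/payload split.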
#### func [Dimension](https://github.com/tdewolff/css/blob/v2.3.4/common.go#L68) [¶](#Dimension) ``` func Dimension(b [][byte](/builtin#byte)) ([int](/builtin#int), [int](/builtin#int)) ``` Dimension parses a byte-slice and returns the length of the number and its unit.

#### func [EqualFold](https://github.com/tdewolff/css/blob/v2.3.4/util.go#L21) [¶](#EqualFold) ``` func EqualFold(s, targetLower [][byte](/builtin#byte)) [bool](/builtin#bool) ``` EqualFold returns true when s matches targetLower case-insensitively (targetLower must be lowercase).

#### func [IsAllWhitespace](https://github.com/tdewolff/css/blob/v2.3.4/util.go#L132) [¶](#IsAllWhitespace) ``` func IsAllWhitespace(b [][byte](/builtin#byte)) [bool](/builtin#bool) ``` IsAllWhitespace returns true when the entire byte slice consists of space, \n, \r, \t, \f.

#### func [IsNewline](https://github.com/tdewolff/css/blob/v2.3.4/util.go#L127) [¶](#IsNewline) ``` func IsNewline(c [byte](/builtin#byte)) [bool](/builtin#bool) ``` IsNewline returns true for \n, \r.

#### func [IsWhitespace](https://github.com/tdewolff/css/blob/v2.3.4/util.go#L78) [¶](#IsWhitespace) ``` func IsWhitespace(c [byte](/builtin#byte)) [bool](/builtin#bool) ``` IsWhitespace returns true for space, \n, \r, \t, \f.

#### func [Mediatype](https://github.com/tdewolff/css/blob/v2.3.4/common.go#L86) [¶](#Mediatype) ``` func Mediatype(b [][byte](/builtin#byte)) ([][byte](/builtin#byte), map[[string](/builtin#string)][string](/builtin#string)) ``` Mediatype parses a given mediatype and splits the mimetype from the parameters. It works similarly to mime.ParseMediaType but is faster.

#### func [Number](https://github.com/tdewolff/css/blob/v2.3.4/common.go#L15) [¶](#Number) ``` func Number(b [][byte](/builtin#byte)) [int](/builtin#int) ``` Number returns the number of bytes that parse as a number of the regex format (+|-)?([0-9]+(\.[0-9]+)?|\.[0-9]+)((e|E)(+|-)?[0-9]+)?.
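The contract of EqualFold above — the target must already be lowercase, so only the candidate needs folding — can be sketched in a few lines. `equalFold` here is an illustrative stand-in, not the package function:

```go
package main

import "fmt"

// equalFold reports whether s matches targetLower case-insensitively,
// assuming targetLower is already all-lowercase ASCII — the same
// contract EqualFold documents. Folding only A-Z keeps the loop cheap.
func equalFold(s, targetLower []byte) bool {
	if len(s) != len(targetLower) {
		return false
	}
	for i, c := range s {
		if c >= 'A' && c <= 'Z' {
			c += 'a' - 'A' // fold the candidate byte to lowercase
		}
		if c != targetLower[i] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(equalFold([]byte("DiV"), []byte("div")))  // true
	fmt.Println(equalFold([]byte("span"), []byte("div"))) // false
}
```

Pre-lowercasing the target once (as the hash tables in the subpackages do) avoids folding both sides on every comparison.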
#### func [Position](https://github.com/tdewolff/css/blob/v2.3.4/position.go#L13) [¶](#Position) ``` func Position(r [io](/io).[Reader](/io#Reader), offset [int](/builtin#int)) (line, col [int](/builtin#int), context [string](/builtin#string), err [error](/builtin#error)) ``` Position returns the line and column number for a certain position in a file. It is useful for recovering the position in a file that caused an error. It only treats \n, \r, and \r\n as newlines, which might differ from languages that also recognize \f, \u2028, and \u2029 as newlines.

#### func [QuoteEntity](https://github.com/tdewolff/css/blob/v2.3.4/common.go#L193) [¶](#QuoteEntity) ``` func QuoteEntity(b [][byte](/builtin#byte)) (quote [byte](/builtin#byte), n [int](/builtin#int)) ``` QuoteEntity parses the given byte slice and returns the quote that got matched (' or ") and its entity length.

#### func [ReplaceMultipleWhitespace](https://github.com/tdewolff/css/blob/v2.3.4/util.go#L162) [¶](#ReplaceMultipleWhitespace) added in v1.1.0 ``` func ReplaceMultipleWhitespace(b [][byte](/builtin#byte)) [][byte](/builtin#byte) ``` ReplaceMultipleWhitespace replaces series of space, \n, \t, \f, \r characters with a single space, or with a newline when the series contains a \n or \r.

#### func [ToLower](https://github.com/tdewolff/css/blob/v2.3.4/util.go#L11) [¶](#ToLower) ``` func ToLower(src [][byte](/builtin#byte)) [][byte](/builtin#byte) ``` ToLower converts all characters in the byte slice from A-Z to a-z.

#### func [TrimWhitespace](https://github.com/tdewolff/css/blob/v2.3.4/util.go#L142) [¶](#TrimWhitespace) ``` func TrimWhitespace(b [][byte](/builtin#byte)) [][byte](/builtin#byte) ``` TrimWhitespace removes any leading and trailing whitespace characters.
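Position's approach — re-scan the input and count newlines up to the offset — can be sketched as follows. This is simplified to \n-only newlines, and `position` is an illustrative name, not the package function:

```go
package main

import (
	"fmt"
	"strings"
)

// position re-scans the input counting newlines until the byte offset
// is reached, returning 1-based line and column numbers. Only \n is
// treated as a newline here; the real function also handles \r and \r\n.
func position(src string, offset int) (line, col int) {
	line, col = 1, 1
	for i := 0; i < offset && i < len(src); i++ {
		if src[i] == '\n' {
			line++
			col = 1
		} else {
			col++
		}
	}
	return line, col
}

func main() {
	src := "a { }\ncolor red\n"
	line, col := position(src, strings.Index(src, "red"))
	fmt.Println(line, col) // 2 7
}
```

Re-scanning is O(offset), which is acceptable because the lookup only happens on the error path.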
### Types [¶](#pkg-types)

#### type [Error](https://github.com/tdewolff/css/blob/v2.3.4/error.go#L11) [¶](#Error) ``` type Error struct { Message [string](/builtin#string) Offset [int](/builtin#int) // contains filtered or unexported fields } ``` Error is a parsing error returned by the parser. It contains a message and an offset at which the error occurred.

#### func [NewError](https://github.com/tdewolff/css/blob/v2.3.4/error.go#L21) [¶](#NewError) ``` func NewError(msg [string](/builtin#string), r [io](/io).[Reader](/io#Reader), offset [int](/builtin#int)) *[Error](#Error) ``` NewError creates a new error.

#### func [NewErrorLexer](https://github.com/tdewolff/css/blob/v2.3.4/error.go#L30) [¶](#NewErrorLexer) ``` func NewErrorLexer(msg [string](/builtin#string), l *[buffer](/github.com/tdewolff/parse/buffer).[Lexer](/github.com/tdewolff/parse/buffer#Lexer)) *[Error](#Error) ``` NewErrorLexer creates a new error from a *buffer.Lexer.

#### func (*Error) [Error](https://github.com/tdewolff/css/blob/v2.3.4/error.go#L46) [¶](#Error.Error) ``` func (e *[Error](#Error)) Error() [string](/builtin#string) ``` Error returns the error string, containing the context and line + column number.

#### func (*Error) [Position](https://github.com/tdewolff/css/blob/v2.3.4/error.go#L38) [¶](#Error.Position) ``` func (e *[Error](#Error)) Position() ([int](/builtin#int), [int](/builtin#int), [string](/builtin#string)) ``` Position re-parses the file to determine the line, column, and context of the error. Context is the entire line at which the error occurred.
misclassGLM (CRAN, R)
Package ‘misclassGLM’ October 13, 2022

Type Package
Title Computation of Generalized Linear Models with Misclassified Covariates Using Side Information
Version 0.3.2
Date 2020-02-10
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Depends R (>= 3.0.0)
Imports stats, Matrix, MASS, ucminf, numDeriv, bigmemory, foreach, mlogit
Suggests parallel
Description Estimates models that extend the standard GLM to take misclassification into account. The models require side information from a secondary data set on the misclassification process, i.e. some sort of misclassification probabilities conditional on some common covariates. A detailed description of the algorithm can be found in Dlugosz, Mammen and Wilke (2015) <http://www.zew.de/PU70410>.
License GPL-3
RoxygenNote 7.0.2
NeedsCompilation yes
Repository CRAN
Date/Publication 2020-02-10 22:40:09 UTC

R topics documented: boot.misclassGLM, boot.misclassMlogit, mfx.misclassGLM, mfx.misclassMlogit, misclassGLM, misclassMlogit, predict.misclassGLM, predict.misclassMlogit, simulate_GLM_dataset, simulate_mlogit_dataset

boot.misclassGLM Compute Bootstrapped Standard Errors for misclassGLM Fits

Description Obtain bootstrapped standard errors.

Usage boot.misclassGLM(ret, Y, X, Pmodel, PX, boot.fraction = 1, repetitions = 1000)

Arguments
ret a fitted object of class inheriting from ’misclassGLM’.
Y a vector of integers or numerics. This is the dependent variable.
X a matrix containing the independent variables.
Pmodel a fitted model (e.g. of class ’GLM’ or ’mlogit’) to implicitly produce variations of the predicted true values probabilities. (Usually conditional on the observed misclassified values and additional covariates.)
PX covariates matrix suitable for predicting probabilities from Pmodel, usually including the mismeasured covariate.
boot.fraction fraction of sample to be used for estimating the bootstrapped standard errors, for speedup.
repetitions number of bootstrap samples to be drawn.
See Also misclassGLM

boot.misclassMlogit Compute Bootstrapped Standard Errors for misclassMlogit Fits

Description Obtain bootstrapped standard errors.

Usage boot.misclassMlogit(ret, Y, X, Pmodel, PX, boot.fraction = 1, repetitions = 1000)

Arguments
ret a fitted object of class inheriting from ’misclassMlogit’.
Y a matrix of 0s and 1s, indicating the target class. This is the dependent variable.
X a matrix containing the independent variables.
Pmodel a fitted model (e.g. of class ’GLM’ or ’mlogit’) to implicitly produce variations of the predicted true values probabilities. (Usually conditional on the observed misclassified values and additional covariates.)
PX covariates matrix suitable for predicting probabilities from Pmodel, usually including the mismeasured covariate.
boot.fraction fraction of sample to be used for estimating the bootstrapped standard errors, for speedup.
repetitions number of bootstrap samples to be drawn.

See Also misclassMlogit

mfx.misclassGLM Compute Marginal Effects for misclassGLM Fits

Description Obtain marginal effects.

Usage mfx.misclassGLM(w, x.mean = TRUE, rev.dum = TRUE, digits = 3, ...)

Arguments
w a fitted object of class inheriting from ’misclassGLM’.
x.mean logical, if true computes marginal effects at mean, otherwise average marginal effects.
rev.dum logical, if true, computes differential effects for switch from 0 to 1.
digits number of digits to be presented in output.
... further arguments passed to or from other functions.

See Also misclassGLM

mfx.misclassMlogit Compute Marginal Effects for ’misclassMlogit’ Fits

Description Obtain marginal effects.

Usage mfx.misclassMlogit(w, x.mean = TRUE, rev.dum = TRUE, outcome = 2, baseoutcome = 1, digits = 3, ...)

Arguments
w a fitted object of class inheriting from ’misclassMlogit’.
x.mean logical, if true computes marginal effects at mean, otherwise average marginal effects.
rev.dum logical, if true, computes differential effects for switch from 0 to 1.
outcome for which the ME should be computed.
baseoutcome base outcome, e.g. reference class of the model.
digits number of digits to be presented in output.
... further arguments passed to or from other functions.

See Also misclassMlogit

misclassGLM misclassGLM

Description Estimates models that extend the standard GLM to take misclassification into account. The models require side information from a secondary data set on the misclassification process, i.e. some sort of misclassification probabilities conditional on some common covariates. A detailed description of the algorithm can be found in Dlugosz, Mammen and Wilke (2015) http://www.zew.de/PU70410. misclassGLM computes an estimator for a GLM with a misclassified covariate using additional side information on the misclassification process.

Usage misclassGLM(Y, X, setM, P, na.action = na.omit, family = gaussian(link = "identity"), control = list(), par = NULL, x = FALSE, robust = FALSE)

Arguments
Y a vector of integers or numerics. This is the dependent variable.
X a matrix containing the independent variables.
setM (optional) matrix, rows containing potential patterns for a misclassified (latent) covariate M in any coding for a categorical independent variable, e.g. dummy coding (default: identity).
P probabilities corresponding to each of the potential patterns, conditional on the other covariates denoted in x.
na.action how to treat NAs.
family a description of the error distribution and link function to be used in the model. This can be a character string naming a family function, a family function or the result of a call to a family function. (See family for details of family functions.)
control options for the optimization procedure (see optim, ucminf for options and details).
par (optional) starting parameter vector.
x logical, add covariates matrix to result?
robust logical, if true the computed asymptotic standard errors are replaced by their robust counterparts.
Details The two main functions are misclassGLM and misclassMlogit.

Examples

## simulate data
data <- simulate_GLM_dataset()

## estimate model without misclassification error
summary(lm(Y ~ X + M2, data))

## estimate model with misclassification error
summary(lm(Y ~ X + M, data))

## estimate misclassification probabilities
Pmodel <- glm(M2 ~ M + X, data = data, family = binomial("logit"))
summary(Pmodel)

## construct a-posteriori probabilities from Pmodel
P <- predict(Pmodel, newdata = data, type = "response")
P <- cbind(1 - P, P)
dimnames(P)[[2]] <- c("M0", "M1") ## speaking names

## estimate misclassGLM
est <- misclassGLM(Y = data$Y,
                   X = as.matrix(data[, 2, drop = FALSE]),
                   setM = matrix(c(0, 1), nrow = 2),
                   P = P)
summary(est)

## and bootstrapping the results from dataset
## Not run:
summary(boot.misclassGLM(est,
                         Y = data$Y,
                         X = data.matrix(data[, 2, drop = FALSE]),
                         Pmodel = Pmodel,
                         PX = data,
                         repetitions = 100))
## End(Not run)

misclassMlogit Mlogit estimation under misclassified covariate

Description misclassMlogit computes an estimator for a GLM with a misclassified covariate using additional side information on the misclassification process.

Usage misclassMlogit(Y, X, setM, P, na.action = na.omit, control = list(), par = NULL, baseoutcome = NULL, x = FALSE)

Arguments
Y a matrix of 0s and 1s, indicating the target class. This is the dependent variable.
X a matrix containing the independent variables.
setM matrix, rows containing potential patterns for a misclassified (latent) covariate M in any coding for a categorical independent variable, e.g. dummy coding.
P probabilities corresponding to each of the potential patterns, conditional on the other covariates denoted in x.
na.action how to treat NAs.
control options for the optimization procedure (see optim, ucminf for options and details).
par (optional) starting parameter vector.
baseoutcome reference outcome class.
x logical, add covariates matrix to result?
Examples

## simulate data
data <- simulate_mlogit_dataset()

## estimate model without misclassification error
library(mlogit)
data2 <- mlogit.data(data, varying = NULL, choice = "Y", shape = "wide")
summary(mlogit(Y ~ 1 | X + M2, data2, reflevel = "3"))

## estimate model with misclassification error
summary(mlogit(Y ~ 1 | X + M, data2, reflevel = "3"))

## estimate misclassification probabilities
Pmodel <- glm(M2 ~ M + X, data = data, family = binomial("logit"))
summary(Pmodel)

## construct a-posteriori probabilities from Pmodel
P <- predict(Pmodel, newdata = data, type = "response")
P <- cbind(1 - P, P)
dimnames(P)[[2]] <- c("M0", "M1") ## speaking names

## estimate misclassMlogit
Yneu <- matrix(rep.int(0, nrow(data) * 3), ncol = 3)
for (i in 1:nrow(data)) Yneu[i, data$Y[i]] <- 1
est <- misclassMlogit(Y = Yneu,
                      X = as.matrix(data[, 2, drop = FALSE]),
                      setM = matrix(c(0, 1), nrow = 2),
                      P = P)
summary(est)

## and bootstrapping the results from dataset
## Not run:
summary(boot.misclassMlogit(est,
                            Y = Yneu,
                            X = data.matrix(data[, 2, drop = FALSE]),
                            Pmodel = Pmodel,
                            PX = data,
                            repetitions = 100))
## End(Not run)

predict.misclassGLM Predict Method for misclassGLM Fits

Description Obtains predictions.

Usage
## S3 method for class 'misclassGLM'
predict(object, X, P = NULL, type = c("link", "response"), na.action = na.pass, ...)

Arguments
object a fitted object of class inheriting from ’misclassGLM’.
X matrix of fixed covariates.
P a-posteriori probabilities for the true values of the misclassified variable. If provided, the conditional expectation on X, P is computed, otherwise a set of marginal predictions is provided, one for each alternative.
type the type of prediction required. The default is on the scale of the linear predictors; the alternative "response" is on the scale of the response variable.
Thus for a default binomial model the default predictions are of log-odds (probabilities on logit scale) and type = "response" gives the predicted probabilities. The value of this argument can be abbreviated.
na.action function determining what should be done with missing values in newdata. The default is to predict NA.
... additional arguments (not used at the moment)

See Also misclassGLM

predict.misclassMlogit Predict Method for misclassMlogit Fits

Description Obtains predictions.

Usage
## S3 method for class 'misclassMlogit'
predict(object, X, P = NULL, type = c("link", "response"), na.action = na.pass, ...)

Arguments
object a fitted object of class inheriting from ’misclassMlogit’.
X matrix of fixed covariates.
P a-posteriori probabilities for the true values of the misclassified variable. If provided, the conditional expectation on X, P is computed, otherwise a set of marginal predictions is provided, one for each alternative.
type the type of prediction required. The default is on the scale of the linear predictors; the alternative "response" is on the scale of the response variable. Thus for a default binomial model the default predictions are of log-odds (probabilities on logit scale) and type = "response" gives the predicted probabilities. The value of this argument can be abbreviated.
na.action function determining what should be done with missing values in newdata. The default is to predict NA.
...
additional arguments (not used at the moment)

See Also misclassMlogit

simulate_GLM_dataset Simulate a Data Set to Use With misclassGLM

Description Simulates a data set with: one continuous variable X drawn from a Gaussian distribution; a binary or trinary variable M with misclassification (M2); and a dependent variable either with added Gaussian noise or drawn from a logit distribution.

Usage simulate_GLM_dataset(n = 50000, const = 0, alpha = 1, beta = -2, beta2 = NULL, logit = FALSE)

Arguments
n number of observations.
const constant.
alpha parameter for X.
beta parameter for M(1).
beta2 parameter for M2; if NULL, M is a binary covariate, otherwise a three-valued categorical.
logit logical, if true logit regression, otherwise Gaussian regression.

Details This can be used to demonstrate the abilities of misclassGLM. For an example see misclassGLM.

See Also misclassGLM

simulate_mlogit_dataset Simulate a Data Set to Use With misclassMlogit

Description Simulates a data set with: one continuous variable X drawn from a Gaussian distribution; a binary or trinary variable M with misclassification (M2); and a dependent variable drawn from a multinomial distribution dependent on X and M.

Usage simulate_mlogit_dataset(n = 1000, const = c(0, 0), alpha = c(1, 2), beta = -2 * c(1, 2), beta2 = NULL)

Arguments
n number of observations.
const constants.
alpha parameters for X.
beta parameters for M(1).
beta2 parameters for M2; if NULL, M is a binary covariate, otherwise a three-valued categorical.

Details This can be used to demonstrate the abilities of misclassMlogit. For an example see misclassMlogit.

See Also misclassMlogit
ProcMod (CRAN, R)
Package ‘ProcMod’ October 12, 2022

Type Package
Title Informative Procrustean Matrix Correlation
Version 1.0.8
Author <NAME>, <NAME>
Maintainer <NAME> <<EMAIL>>
Description Estimates corrected Procrustean correlation between matrices for removing overfitting effect. Coissac Eric and <NAME> (2019) <doi:10.1101/842070>.
License CeCILL-2
Encoding UTF-8
LazyData true
RoxygenNote 7.1.1
Depends R (>= 3.1.0)
Imports MASS, permute, Matrix, stats, foreach, Rdpack
Suggests knitr, rmarkdown, roxygen2, vegan, testthat, ade4, doParallel
RdMacros Rdpack
Collate 'internals.R' 'procmod_frame.R' 'multivariate.R' 'procmod.R' 'covls.R' 'corls_test.R' 'procuste.R' 'simulate.R'
NeedsCompilation no
Repository CRAN
Date/Publication 2021-05-12 06:52:11 UTC

R topics documented: .getPermuteMatrix, .procmod_coerce_value, .rep_matrix, .Trace, as.data.frame.dist, as_procmod_frame, bicenter, corls_test, dim.procmod_frame, eukaryotes, is_euclid, is_procmod_frame, names.procmod_corls, names.procmod_varls, nmds, ortho, pca, pcoa, print.procmod_corls, print.procmod_varls, procmod, procmod_frame, protate, simulate_correlation, simulate_matrix, subset.procmod_frame, varls

.getPermuteMatrix Generate permutation matrix according to a schema.

Description The permutation schema is defined using the ‘how‘ function. The implementation of this function is inspired by the VEGAN package and reproduced here to avoid an extra dependency on a hidden vegan function.

Usage .getPermuteMatrix(permutations, n, strata = NULL)

Arguments
permutations a list of control values for the permutations as returned by the function how, or the number of permutations required.
n numeric; the number of observations in the sample set. May also be any object that nobs knows about; see nobs methods.
strata a factor, or an object that can be coerced to a factor via as.factor, specifying the strata for permutation.
Note Internal function, do not use.

.procmod_coerce_value Internal function coercing the data to a matrix.

Description Transforms the x value into a numeric matrix of the correct size or into a dist object.

Usage .procmod_coerce_value(x, nrows = 0, contrasts = NULL)

Arguments
x the data to coerce.
nrows an integer value specifying the number of rows of the returned matrix.
contrasts see the contrasts_arg argument of the procmod_frame constructor.

Value a new numeric matrix with correct size.

Note Internal function, do not use.

Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>>

.rep_matrix Internal function repeating a matrix.

Description Repeats the rows of a matrix several times to create a new matrix with more rows. The final row count must be a multiple of the initial row count.

Usage .rep_matrix(x, nrow)

Arguments
x the matrix to replicate.
nrow an integer value specifying the number of rows of the returned matrix.

Value a new matrix with the same number of columns but with ‘nrow‘ rows.

Note Internal function, do not use.

Author(s) <NAME> <<EMAIL>> <NAME> <<EMAIL>>

.Trace Compute the trace of a square matrix.

Description The trace of a square matrix is defined as the sum of its diagonal elements.

Usage .Trace(X)

Arguments
X a square matrix.

Value the trace of X.

Note Internal function, do not use.

Author(s) <NAME> <NAME>

Examples
m <- matrix(1:16, nrow = 4)
ProcMod:::.Trace(m)

as.data.frame.dist Converts a dist object to a data.frame object.

Description The created data.frame has an attribute is.dist set to the logical value TRUE.

Usage
## S3 method for class 'dist'
as.data.frame(x, row.names = NULL, optional = FALSE, ...)

Arguments
x the dist object to be converted.
row.names NULL or a character vector giving the row names for the data frame. Missing values are not allowed.
optional logical. If TRUE, setting row names and converting column names (to syntactic names: see make.names) is optional. Note that all of R’s base package as.data.frame() methods use optional only for column names treatment, basically with the meaning of data.frame(*, check.names = !optional). See also the make.names argument of the matrix method.
... additional arguments to be passed to or from methods.

Author(s) <NAME> <NAME>

Examples
data(bacteria)
bacteria_rel_freq <- sweep(bacteria, 1, rowSums(bacteria), "/")
bacteria_hellinger <- sqrt(bacteria_rel_freq)
bacteria_dist <- dist(bacteria_hellinger)
bdf <- as.data.frame(bacteria_dist)

as_procmod_frame Coerce to a ProcMod Frame.

Description Conversion methods are proposed for list, matrix and array.

Usage
as_procmod_frame(data, ...)
## S3 method for class 'list'
as_procmod_frame(data, ...)
## S3 method for class 'procmod_frame'
as_procmod_frame(data, ...)
## S3 method for class 'array'
as_procmod_frame(data, ...)
## S3 method for class 'matrix'
as_procmod_frame(data, ...)

Arguments
data a R object to coerce.
... supplementary parameters used in some implementations of that method.

Value a procmod_frame object.

Author(s) <NAME> <NAME>

Examples
# Builds a list containing two random matrices
m1 <- simulate_matrix(10, 20)
m2 <- simulate_matrix(10, 30)
l <- list(m1 = m1, m2 = m2)

# Converts the list to a procmod_frame
pmf1 <- as_procmod_frame(l)

# Builds a procmod_frame from a matrix
m3 <- matrix(1:12, nrow = 3)
pmf2 <- as_procmod_frame(matrix(1:12, nrow = 3))

# Returns 4, the column count of the input matrix
length(pmf2)

# Builds a 3D array
a <- array(1:24, dim = c(3, 4, 2))

# The conversion to a procmod_frame makes
# a procmod element from each third dimension
as_procmod_frame(a)

bicenter Double centering of a matrix.

Description colSums and rowSums of the returned matrix are all equal to zero.
Usage bicenter(m)

Arguments
m a numeric matrix.

Details Inspired by the algorithm described on Stack Overflow: https://stackoverflow.com/questions/43639063/double-centering-in-r

Value a numeric matrix centred by rows and columns.

Author(s) <NAME> <NAME>

Examples
data(bacteria)
bact_bc <- bicenter(bacteria)
sum(rowSums(bact_bc))
sum(colSums(bact_bc))

corls_test Monte-Carlo Test on the sum of the singular values of a Procrustean rotation.

Description Performs a Monte-Carlo test on the sum of the singular values of a Procrustean rotation (see procuste.rtest).

Usage corls_test(..., permutations = permute::how(nperm = 999), p_adjust_method = "holm")

Arguments
... the set of matrices or a procmod_frame object.
permutations a list of control values for the permutations as returned by the function how, or the number of permutations required.
p_adjust_method the multiple test correction method used to adjust p values. p_adjust_method must be one of the following values: "holm", "hochberg", "hommel", "bonferroni", "BH", "BY", "fdr", "none". The default is set to "holm".

Author(s) <NAME> <NAME>-Melodelima

References Jackson DA (1995). “PROTEST: A PROcrustean Randomization TEST of community environment concordance.” Écoscience, 2(3), 297–303.

See Also p.adjust

Examples
A <- simulate_matrix(10, 3)
B <- simulate_matrix(10, 5)
C <- simulate_correlation(B, 10, r2 = 0.6)

# Computes the correlation matrix
data <- procmod_frame(A = A, B = B, C = C)
corls_test(data, permutations = 100)

dim.procmod_frame Dimensions of a ProcMod Frame.

Description Dimension 1 is the number of rows (individuals) shared by the aggregated matrices.
Dimension 2 is the number of aggregated matrices.

Usage
## S3 method for class 'procmod_frame'
dim(x)

Arguments
x       a procmod_frame object

Author(s)
<NAME> <NAME>

Examples
# Builds a procmod_frame with two random matrices
m1 <- simulate_matrix(10,20)
m2 <- simulate_matrix(10,30)
pmf <- procmod_frame(m1 = m1, m2 = m2)
dim(pmf)

eukaryotes        DNA metabarcoding Australia South-North Gradient

Description
This data set of five data.frame objects is a simplified version of a full data set describing biodiversity changes along a South-North gradient on the Australian East Coast, from Sydney to North Cape, using a DNA metabarcoding approach. The gradient is constituted of 21 locations.

Usage
data(eukaryotes)
data(bacteria)
data(climat)
data(soil)
data(geography)

Format
five data.frame of 21 rows
An object of class data.frame with 21 rows and 2150 columns.
An object of class data.frame with 21 rows and 6 columns.
An object of class data.frame with 21 rows and 12 columns.
An object of class data.frame with 21 rows and 2 columns.

Details
bacteria is a 21 x 2150 data.frame describing the bacterial community at each one of the 21 locations. Each number is the relative frequency of a molecular operational taxonomy unit (MOTU) at a site after data cleaning and averaging of 135 point measures.

eukaryotes is a 21 x 1393 data.frame describing the eukaryote community at each one of the 21 locations. Each number is the relative frequency of a molecular operational taxonomy unit (MOTU) at a site after data cleaning and averaging of 135 point measures.

climat is a 21 x 6 data.frame describing climatic conditions at each site using worldclim descriptors (https://www.worldclim.org).
Aspect
TempSeasonality
MaxMonTemp         Max Temperature of Warmest Month
MeanMonTempRange
AnnMeanTemp
Isothermality      Mean Diurnal Range / Temperature Annual Range, with Mean Diurnal Range = Mean of monthly (max temp - min temp) and Temperature Annual Range = Max Temperature of Warmest Month - Min Temperature of Coldest Month

soil is a 21 x 12 data.frame describing soil chemistry at each site. Each variable is centered and scaled:
KLg       Logarithm of the potassium concentration
pH        Soil pH
AlLg      Logarithm of the aluminium concentration
FeLg      Logarithm of the iron concentration
PLg       Logarithm of the phosphorus concentration
SLg       Logarithm of the sulphur concentration
CaLg      Logarithm of the calcium concentration
MgLg      Logarithm of the magnesium concentration
MnLg      Logarithm of the manganese concentration
CNratio   carbon / nitrogen concentration ratio
CLg       Logarithm of the carbon concentration
NLg       Logarithm of the nitrogen concentration

geography is a 21 x 2 data.frame.

Author(s)
<NAME> <NAME>

is_euclid        Test if the distance matrix is euclidean.

Description
Actually a simplified version of the ADE4 implementation (is.euclid).

Usage
is_euclid(distances, tol = 1e-07)

Arguments
distances   an object of class 'dist'
tol         a tolerance threshold: an eigenvalue is considered positive if it is larger than -tol*lambda1, where lambda1 is the largest eigenvalue.

Author(s)
<NAME> <NAME>

Examples
library(vegan)
data(bacteria)
bacteria_rel_freq <- sweep(bacteria, 1, rowSums(bacteria), "/")
bacteria_bray <- vegdist(bacteria_rel_freq, method = "bray")
is_euclid(bacteria_bray)
bacteria_chao <- vegdist(floor(bacteria*10000), method = "chao")
is_euclid(bacteria_chao)

is_procmod_frame        Check if an object is a ProcMod Frame.

Description
Check if an object is a ProcMod Frame.

Usage
is_procmod_frame(x)

Arguments
x       an R object to test

Value
a logical value equal to TRUE if x is a procmod_frame, FALSE otherwise.
Author(s)
<NAME> <NAME>

Examples
# Builds a procmod_frame with two random matrices
m1 <- simulate_matrix(10,20)
m2 <- simulate_matrix(10,30)
pmf <- procmod_frame(m1 = m1, m2 = m2)

# Returns TRUE
is_procmod_frame(pmf)

# Returns FALSE
is_procmod_frame(3)

names.procmod_corls        The Names of the elements of a Correlation Matrix

Description
Returns the names of the elements associated with a procmod_corls object.

Usage
## S3 method for class 'procmod_corls'
names(x)

Arguments
x       a procmod_corls object

Author(s)
<NAME> <NAME>

See Also
corls

Examples
# Builds three matrices of 10 rows.
A <- simulate_matrix(10,3)
B <- simulate_matrix(10,5)
C <- simulate_correlation(B,10,r2=0.6)

# Computes the correlation matrix
data <- procmod_frame(A = A, B = B, C = C)
cls <- corls(data, nrand = 100)
names(cls)

names.procmod_varls        The Names of the elements of a Variance / Covariance Matrix.

Description
Returns the names of the elements associated with a procmod_varls object.

Usage
## S3 method for class 'procmod_varls'
names(x)

Arguments
x       a procmod_varls object

Author(s)
<NAME> <NAME>

See Also
varls

Examples
# Builds three matrices of 10 rows.
A <- simulate_matrix(10,3)
B <- simulate_matrix(10,5)
C <- simulate_correlation(B,10,r2=0.6)

# Computes the variance covariance matrix
data <- procmod_frame(A = A, B = B, C = C)
v <- varls(data, nrand = 100)
names(v)

nmds        Project a distance matrix in a euclidean space (NMDS).

Description
Project a set of points defined by a distance matrix in a Euclidean space using Kruskal's Non-metric Multidimensional Scaling. This function is mainly a simplified interface to the isoMDS function, using as many dimensions as possible to limit the stress. The aim of this NMDS is only to project points in an orthogonal space, hence without any correlation between axes. Because a non-metric method is used, no condition is required on the distance used.
Usage
nmds(distances, maxit = 100, trace = FALSE, tol = 0.001, p = 2)

Arguments
distances   a dist object or a matrix object representing a distance matrix.
maxit       The maximum number of iterations.
trace       Logical for tracing optimization. Default FALSE.
tol         convergence tolerance.
p           Power for Minkowski distance in the configuration space.

Value
a numeric matrix with at most n-1 dimensions, with n the number of observations. This matrix defines the coordinates of each point in the orthogonal space.

Author(s)
<NAME> <NAME>-Melodelima

Examples
data(bacteria)
bacteria_rel_freq <- sweep(bacteria, 1, rowSums(bacteria), "/")
bacteria_hellinger <- sqrt(bacteria_rel_freq)
bacteria_dist <- dist(bacteria_hellinger)
project <- nmds(bacteria_dist)

ortho        Project a dataset in a euclidean space.

Description
Project a set of points defined by a distance matrix or a set of variables in a Euclidean space. If the distance matrix is a metric, this is done using the pcoa function; for other distances the nmds function is used. When points are described by a set of variables, the pca function is used.

Usage
ortho(data, ...)

## S3 method for class 'dist'
ortho(data, tol = 1e-07, ...)

## S3 method for class 'matrix'
ortho(data, scale = FALSE, ...)

## S3 method for class 'data.frame'
ortho(data, scale = FALSE, ...)

## S3 method for class 'procmod_frame'
ortho(data, ...)

Arguments
data    a numeric matrix describing the points
...     other parameters specific to some implementations of that method
tol     a tolerance threshold: an eigenvalue is considered positive if it is larger than -tol*lambda1, where lambda1 is the largest eigenvalue.
scale   a logical value indicating if the dimensions must be scaled to force sd=1 for every column. FALSE by default.

Value
a numeric matrix with at most n-1 dimensions, with n the number of observations. This matrix defines the coordinates of each point in the orthogonal space.
Author(s)
<NAME> <NAME>

Examples
library(vegan)
data(bacteria)
data(eukaryotes)
data(soil)

dataset <- procmod_frame(euk = vegdist(decostand(eukaryotes, method = "hellinger"), method = "euclidean"),
                         bac = vegdist(decostand(bacteria, method = "hellinger"), method = "euclidean"),
                         soil = scale(soil, center = TRUE, scale = TRUE))
dp <- ortho(dataset)

bacteria_rel_freq <- sweep(bacteria, 1, rowSums(bacteria), "/")
bacteria_hellinger <- sqrt(bacteria_rel_freq)
bacteria_dist <- dist(bacteria_hellinger)
project <- ortho(bacteria_dist)

pca        Project a set of points in a euclidean space (PCA).

Description
Project a set of points defined by a set of numeric variables in a Euclidean space using principal component analysis. This function is mainly a simplified interface to the prcomp function, using as many dimensions as possible to keep all the variation. The aim of this PCA is only to project points in an orthogonal space, hence without any correlation between axes. Data are centered but not scaled by default.

Usage
pca(data, scale = FALSE)

Arguments
data    a numeric matrix describing the points
scale   a logical value indicating if the dimensions must be scaled to force sd=1 for every column. FALSE by default.

Value
a numeric matrix with at most n-1 dimensions, with n the number of observations. This matrix defines the coordinates of each point in the orthogonal space.

Author(s)
<NAME> <NAME>

Examples
data(bacteria)
bacteria_rel_freq <- sweep(bacteria, 1, rowSums(bacteria), "/")
bacteria_hellinger <- sqrt(bacteria_rel_freq)
project <- pca(bacteria_hellinger)

pcoa        Project a distance matrix in a euclidean space (PCOA).

Description
Project a set of points defined by a distance matrix in a Euclidean space using the Principal Coordinates Analysis method. This function is mainly a simplified interface to the cmdscale function, using as many dimensions as possible for the projection.
The aim of this PCoA is only to project points in an orthogonal space, hence without any correlation between axes. Because a metric method is used, the distance used must be Euclidean (cf is_euclid).

Usage
pcoa(distances)

Arguments
distances   a dist object or a matrix object representing a distance matrix.

Value
a numeric matrix with at most n-1 dimensions, with n the number of observations. This matrix defines the coordinates of each point in the orthogonal space.

Author(s)
<NAME> <NAME>

Examples
data(bacteria)
bacteria_rel_freq <- sweep(bacteria, 1, rowSums(bacteria), "/")
bacteria_hellinger <- sqrt(bacteria_rel_freq)
bacteria_dist <- dist(bacteria_hellinger)
project <- pcoa(bacteria_dist)

print.procmod_corls        Print a procrustean Correlation Matrix.

Description
Print a procrustean Correlation Matrix.

Usage
## S3 method for class 'procmod_corls'
print(x, ...)

Arguments
x       a procmod_corls object
...     other parameters passed to other functions

Author(s)
<NAME> <NAME>

See Also
corls

Examples
# Builds three matrices of 10 rows.
A <- simulate_matrix(10,3)
B <- simulate_matrix(10,5)
C <- simulate_correlation(B,10,r2=0.6)

# Computes the correlation matrix
data <- procmod_frame(A = A, B = B, C = C)
cls <- corls(data, nrand = 100)
print(cls)

print.procmod_varls        Print a procrustean Variance / Covariance Matrix.

Description
Print a procrustean Variance / Covariance Matrix.

Usage
## S3 method for class 'procmod_varls'
print(x, ...)

Arguments
x       a procmod_varls object
...     other parameters passed to other functions

Author(s)
<NAME> <NAME>

See Also
varls

Examples
# Builds three matrices of 10 rows.
A <- simulate_matrix(10,3)
B <- simulate_matrix(10,5)
C <- simulate_correlation(B,10,r2=0.6)

# Computes the variance covariance matrix
data <- procmod_frame(A = A, B = B, C = C)
v <- varls(data, nrand = 100)
print(v)

procmod        Informative Procrustean Matrix Correlation

Description
Estimates the corrected Procrustean correlation between matrices, removing the overfitting effect.
Details
The functions in the ProcMod package aim to estimate and test correlation between matrices, correcting for the spurious correlations caused by the over-fitting effect.

The ProcMod package is developed on the metabarcoding.org gitlab (https://git.metabarcoding.org/lecasofts/ProcMod). The gitlab of metabarcoding.org provides up-to-date information and forums for bug reports.

Author(s)
<NAME> <NAME>

procmod_frame        The procmod_frame data structure.

Description
A procmod_frame can be considered as the analog of a data.frame for vector data. In a procmod_frame each element, equivalent to a column in a data.frame, is a numeric matrix or a distance matrix object (dist).

Every element must describe the same number of individuals. Therefore every numeric matrix must have the same number of rows (nrow) and every distance matrix must have the same size (attr(d,"Size")). A procmod_frame can simultaneously contain both types of data, numeric and distance matrix.

Usage
procmod_frame(
  ...,
  row_names = NULL,
  check_rows = TRUE,
  reorder_rows = TRUE,
  contrasts_arg = NULL
)

Arguments
...             a set of objects to aggregate into a procmod_frame. These objects can be numeric matrices, or dist objects. Every object must have the same number of rows.
row_names       a character vector containing names associated with each row.
check_rows      a logical value. When set to TRUE, its default value, the number of rows of every element of the procmod_frame is tested for equality. Otherwise no check is done.
reorder_rows    a logical value. When set to TRUE, its default value, every element of the procmod_frame is reordered according to the row_names order. Otherwise nothing is done.
contrasts_arg   a list, whose entries are values (numeric matrices or character strings naming functions) to be used as replacement values for the contrasts replacement function, and whose names are the names of columns of data containing factors.

Value
a procmod_frame instance.
Author(s)
<NAME> <NAME>

Examples
library(vegan)
data(bacteria)
data(eukaryotes)
data(soil)

dataset <- procmod_frame(euk = vegdist(decostand(eukaryotes, method = "hellinger"), method = "euclidean"),
                         bac = vegdist(decostand(bacteria, method = "hellinger"), method = "euclidean"),
                         soil = scale(soil, center = TRUE, scale = TRUE))

length(dataset)
nrow(dataset)
ncol(dataset)
dataset$euk

protate        Rotate the src matrix to fit into the space of the dest matrix.

Description
The optimal rotation is computed according to the Procrustes method. Rotation is based on singular value decomposition (SVD). No scaling and no centering are done before computing the SVD.

Usage
protate(src, dest)

Arguments
src     a numeric matrix to be rotated
dest    a numeric matrix used as reference space

Value
a numeric matrix

Author(s)
<NAME>-Melodelima <NAME>

Examples
# Generates two random matrices of 10 rows
m1 <- simulate_matrix(10, 15)
m2 <- simulate_matrix(10, 20)

# Rotates matrix m1 on m2
mr <- protate(m1, m2)

simulate_correlation        Simulate n points of dimension p correlated to a reference matrix.

Description
Simulates a set of points correlated to another set according to the procrustean correlation definition. Points are simulated by drawing values of each dimension from a normal distribution of mean 0 and standard deviation equal to 1. The mean of each dimension is forced to 0 (data are centred). By default variables are also scaled to enforce a standard deviation strictly equal to 1. Covariances between dimensions are not controlled. Therefore they are expected to be equal to 0 and to reflect only the random distribution of the covariance between two random vectors. The intensity of the correlation is determined by the r2 parameter.
Usage
simulate_correlation(reference, p, r2, equal_var = TRUE)

Arguments
reference   a numeric matrix to which the simulated data will be correlated
p           an int value indicating the number of dimensions (variables) simulated
r2          the fraction of variation shared between the reference and the simulated data
equal_var   a logical value indicating if the dimensions must be scaled to force sd=1. TRUE by default.

Value
a numeric matrix of nrow(reference) rows and p columns

Author(s)
<NAME> <NAME>

Examples
sim1 <- simulate_matrix(25,10)
class(sim1)
dim(sim1)
sim2 <- simulate_correlation(sim1,20,0.8)
corls(sim1, sim2)^2

simulate_matrix        Simulate n points of dimension p.

Description
Points are simulated by drawing values of each dimension from a normal distribution of mean 0 and standard deviation equal to 1. The mean of each dimension is forced to 0 (data are centred). By default variables are also scaled to enforce a standard deviation strictly equal to 1. Covariances between dimensions are not controlled. Therefore they are expected to be equal to 0 and to reflect only the random distribution of the covariance between two random vectors.

Usage
simulate_matrix(n, p, equal_var = TRUE)

Arguments
n           an int value indicating the number of observations.
p           an int value indicating the number of dimensions (variables) simulated
equal_var   a logical value indicating if the dimensions must be scaled to force sd=1. TRUE by default.

Value
a numeric matrix of n rows and p columns

Author(s)
<NAME> <NAME>

Examples
sim1 <- simulate_matrix(25,10)
class(sim1)
dim(sim1)

subset.procmod_frame        Subsetting Procmod Frames

Description
This is the implementation of the subset generic function for procmod_frame.

Usage
## S3 method for class 'procmod_frame'
subset(x, subset, select, drop = FALSE, ...)

Arguments
x           object to be subsetted.
subset      logical expression indicating elements or rows to keep: missing values are taken as false.
select      expression, indicating columns to select from a data frame.
drop        passed on to the [ indexing operator.
...         further arguments to be passed to or from other methods.

Details
The subset argument works on rows. Note that subset will be evaluated in the procmod_frame, so columns can be referred to (by name) as variables in the expression (see the examples).

The select argument, if provided, indicates which matrices have to be kept. It works by first replacing column names in the selection expression with the corresponding column numbers in the procmod_frame and then using the resulting integer vector to index the columns. This allows the use of the standard indexing conventions so that, for example, ranges of columns can be specified easily, or single columns can be dropped (see the examples). Remember that each column of a procmod_frame is actually a matrix.

The drop argument is passed on to the procmod_frame indexing method. The default value is FALSE.

Value
A procmod_frame containing just the selected rows and columns.

Author(s)
<NAME> <NAME>-Melodelima

Examples
library(vegan)
data(bacteria)
data(eukaryotes)
data(soil)

dataset <- procmod_frame(euk = vegdist(decostand(eukaryotes, method = "hellinger"), method = "euclidean"),
                         bac = vegdist(decostand(bacteria, method = "hellinger"), method = "euclidean"),
                         soil = scale(soil, center = TRUE, scale = TRUE))
dim(dataset)

higher_ph = subset(dataset, soil[,"pH"] > 0)
dim(higher_ph)

without_bacteria = subset(dataset, soil[,"pH"] > 0, -bac)
dim(without_bacteria)

varls        Procrustean Correlation, and Variance / Covariance Matrices.

Description
varls and corls compute the procrustean variance / covariance, or correlation matrices between a set of real matrices and dist objects.

Usage
varls(..., nrand = 100, p_adjust_method = "holm")

corls(..., nrand = 100, p_adjust_method = "holm")

Arguments
...     the set of matrices or a procmod_frame object.
nrand   number of randomisations used to estimate the mean covariance observed between two random matrices.
If nrand is NULL or equal to 0, no correction is estimated and the raw procrustean covariances are estimated.

p_adjust_method   the multiple test correction method used to adjust p values. p_adjust_method must be one of the following values: "holm", "hochberg", "hommel", "bonferroni", "BH", "BY", "fdr", "none". The default is set to "holm".

Details
Procrustean covariance between two matrices X and Y is defined as the sum of the singular values of the X'Y matrix (Gower 1971; Lingoes and Schönemann 1974). Both the X and Y matrices must have the same number of rows.

The variances, covariances and correlations are corrected to avoid overfitting (Coissac and Gonindard-Melodelima 2019).

The inputs must be numeric matrices or dist objects. The set of input matrices can be aggregated in a procmod_frame.

Before computing the coefficients, matrices are projected into an orthogonal space using the ortho function.

The denominator n - 1 is used, which gives an unbiased estimator of the (co)variance for i.i.d. observations.

Value
a procmod_varls object which corresponds to a numeric matrix annotated by several attributes.

The following attribute is always added:
- nrand: an integer value indicating the number of randomisations used to estimate the mean of the random covariance.

When nrand is greater than 0, two further attributes are added:
- rcovls: a numeric matrix containing the estimation of the mean of the random covariance.
- p.value: a numeric matrix containing the estimations of the p.values of tests checking that the observed covariance is larger than the mean of the random covariance. p.values are corrected for multiple tests according to the method specified by the p_adjust_method parameter.

Author(s)
<NAME> <NAME>

References
Gower JC (1971). “Statistical methods of comparing different multivariate analyses of the same data.” Mathematics in the archaeological and historical sciences, 138–149.

<NAME>, <NAME> (1974).
“Alternative measures of fit for the Schönemann-Carroll matrix fitting algorithm.” Psychometrika, 39(4), 423–427.

<NAME>, <NAME> (2019). “Assessing the shared variation among high-dimensional data matrices: a modified version of the Procrustean correlation coefficient.” in prep.

See Also
p.adjust

Examples
# Builds three matrices of 10 rows.
A <- simulate_matrix(10,3)
B <- simulate_matrix(10,5)
C <- simulate_correlation(B,10,r2=0.6)

# Computes the variance covariance matrix
varls(A = A, B = B, C = C)
data = procmod_frame(A = A, B = B, C = C)
varls(data)

# Computes the correlation matrix
corls(data, nrand = 100)
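As an illustration of the Details above — the raw (uncorrected) procrustean covariance is the sum of the singular values of X'Y, with the n - 1 denominator — here is a minimal sketch in Python/NumPy. The function name is ours; it performs simple column centering only, and implements neither the ortho projection nor the randomisation-based correction that varls/corls apply:

```python
import numpy as np

def raw_covls(x, y):
    """Raw procrustean covariance: sum of singular values of X'Y over n - 1."""
    x = x - x.mean(axis=0)   # center columns
    y = y - y.mean(axis=0)
    s = np.linalg.svd(x.T @ y, compute_uv=False)
    return s.sum() / (x.shape[0] - 1)

rng = np.random.default_rng(0)
a = rng.normal(size=(10, 3))
b = rng.normal(size=(10, 5))

cov_ab = raw_covls(a, b)
# correlation analogue: normalise by the two "self-covariances"
r = cov_ab / np.sqrt(raw_covls(a, a) * raw_covls(b, b))
```

Because the singular values of X'Y and Y'X coincide, this covariance is symmetric in its two arguments, and the normalised coefficient r lies between 0 and 1.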
MultivariateStats Documentation
Release 0.1.0
<NAME>
Dec 21, 2018

Contents
1 Linear Least Square and Ridge Regression
2 Data Whitening
3 Principal Component Analysis
4 Probabilistic Principal Component Analysis
5 Kernel Principal Component Analysis
6 Canonical Correlation Analysis
7 Classical Multidimensional Scaling
8 Linear Discriminant Analysis
9 Multi-class Linear Discriminant Analysis
10 Independent Component Analysis
11 Factor Analysis
Bibliography

MultivariateStats.jl is a Julia package for multivariate statistical analysis. It provides a rich set of useful analysis techniques, such as PCA, CCA, LDA, PLS, etc.

Contents:

CHAPTER 1
Linear Least Square and Ridge Regression

The package provides functions to perform Linear Least Square and Ridge Regression.

1.1 Linear Least Square

Linear Least Square finds linear combination(s) of given variables to fit the responses by minimizing the squared error between them. This can be formulated as an optimization as follows:

    minimize over (a, b):  ‖y − (X a + b)‖²

Sometimes, the coefficient matrix is given in a transposed form, in which case the optimization is modified as:

    minimize over (a, b):  ‖y − (Xᵀa + b)‖²

The package provides llsq to solve these problems:

llsq(X, y; ...)
    Solve the linear least square problem formulated above. Here, y can be either a vector, or a matrix where each column is a response vector.

    This function accepts two keyword arguments:
    • trans: whether to use the transposed form. (default is false)
    • bias: whether to include the bias term b. (default is true)

    The function returns the solution a. In particular, when y is a vector (matrix), a is also a vector (matrix). If bias is true, then the returned array is augmented as [a; b].
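llsq itself is Julia; as a language-neutral illustration of the underlying least-squares solve, here is a sketch in Python/NumPy. The function and variable names are ours, and the column-of-ones trick mirrors the [a; b] augmentation described above:

```python
import numpy as np

def llsq_sketch(x, y, bias=True):
    """Least-squares fit of min ||y - (X a + b)||^2; with bias=True the result is [a; b]."""
    if bias:
        # augment X with a column of ones so the intercept b becomes the last coefficient
        x = np.hstack([x, np.ones((x.shape[0], 1))])
    sol, *_ = np.linalg.lstsq(x, y, rcond=None)
    return sol

rng = np.random.default_rng(1)
x = rng.normal(size=(1000, 3))
a0, b0 = rng.normal(size=3), 0.5
y = x @ a0 + b0 + 0.01 * rng.normal(size=1000)

sol = llsq_sketch(x, y)   # sol[:3] estimates a0, sol[3] estimates b0
```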
Examples

For a single response vector y (without using bias):

using MultivariateStats

# prepare data
X = rand(1000, 3)               # feature matrix
a0 = rand(3)                    # ground truths
y = X * a0 + 0.1 * randn(1000)  # generate response

# solve using llsq
a = llsq(X, y; bias=false)

# do prediction
yp = X * a

# measure the error
rmse = sqrt(mean(abs2(y - yp)))
print("rmse = $rmse")

For a single response vector y (using bias):

# prepare data
X = rand(1000, 3)
a0, b0 = rand(3), rand()
y = X * a0 + b0 + 0.1 * randn(1000)

# solve using llsq
sol = llsq(X, y)

# extract results
a, b = sol[1:end-1], sol[end]

# do prediction
yp = X * a .+ b

For a matrix Y comprised of multiple columns:

# prepare data
X = rand(1000, 3)
A0, b0 = rand(3, 5), rand(1, 5)
Y = (X * A0 .+ b0) + 0.1 * randn(1000, 5)

# solve using llsq
sol = llsq(X, Y)

# extract results
A, b = sol[1:end-1,:], sol[end,:]

# do prediction
Yp = X * A .+ b'

1.2 Ridge Regression

Compared to linear least square, Ridge Regression uses an additional quadratic term to regularize the problem:

    minimize over (a, b):  (1/2)‖y − (X a + b)‖² + (1/2) aᵀQ a

The transposed form:

    minimize over (a, b):  (1/2)‖y − (Xᵀa + b)‖² + (1/2) aᵀQ a

The package provides ridge to solve these problems:

ridge(X, y, r; ...)
    Solve the ridge regression problem formulated above. Here, y can be either a vector, or a matrix where each column is a response vector.

    The argument r gives the quadratic regularization matrix Q, which can be in either of the following forms:
    • r is a real scalar, then Q is considered to be r * eye(n), where n is the dimension of a.
    • r is a real vector, then Q is considered to be diagm(r).
    • r is a real symmetric matrix, then Q is simply considered to be r.

    This function accepts two keyword arguments:
    • trans: whether to use the transposed form. (default is false)
    • bias: whether to include the bias term b. (default is true)

    The function returns the solution a.
In particular, when y is a vector (matrix), a is also a vector (matrix). If bias is true, then the returned array is augmented as [a; b].

CHAPTER 2
Data Whitening

A whitening transformation is a decorrelation transformation that transforms a set of random variables into a set of new random variables with identity covariance (uncorrelated with unit variances). In particular, suppose a random vector has covariance C; then a whitening transform W is one that satisfies:

    WᵀC W = I

Note that W is generally not unique. In particular, if W is a whitening transform, so is any of its rotations W R with RᵀR = I.

2.1 Whitening

The package uses Whitening, defined below, to represent a whitening transform:

immutable Whitening{T<:FloatingPoint}
    mean::Vector{T}   # mean vector (can be empty to indicate zero mean), of length d
    W::Matrix{T}      # the transform coefficient matrix, of size (d, d)
end

An instance of Whitening can be constructed by Whitening(mean, W). There are several functions to access the properties of a whitening transform f:

indim(f)
    Get the input dimension, i.e. d.

outdim(f)
    Get the output dimension, i.e. d.

mean(f)
    Get the mean vector. Note: if f.mean is empty, this function returns a zero vector of length d.

transform(f, x)
    Apply the whitening transform to a vector or a matrix with samples in columns, as Wᵀ(x − μ).

2.2 Data Analysis

Given a dataset, one can use the fit method to estimate a whitening transform.

fit(Whitening, X; ...)
    Estimate a whitening transform from the data given in X. Here, X should be a matrix, whose columns give the samples. This function returns an instance of Whitening.

    Keyword arguments:

    regcoef   The regularization coefficient. The covariance will be regularized as follows when regcoef is positive: C + (eigmax(C) * regcoef) * eye(d). (default: zero(T))
    mean      The mean vector, which can be either of: (default: nothing)
              • 0: the input data has already been centralized
              • nothing: this function will compute the mean
              • a pre-computed mean vector

    Note: This function internally relies on cov_whitening to derive the transformation W. The function cov_whitening itself is also a useful function.

cov_whitening(C)
    Derive the whitening transform coefficient matrix W given the covariance matrix C. Here, C can be either a square matrix, or an instance of Cholesky.

    Internally, this function solves the whitening transform using Cholesky factorization. The rationale is as follows: let C = UᵀU and W = U⁻¹, then WᵀC W = I.

    Note: The returned matrix W is an upper triangular matrix.

cov_whitening(C, regcoef)
    Derive a whitening transform based on a regularized covariance, as C + (eigmax(C) * regcoef) * eye(d).

In addition, the package also provides cov_whitening!, in which the input matrix C will be overwritten during computation. This can be more efficient when C is no longer used.

invsqrtm(C)
    Compute inv(sqrtm(C)) through symmetric eigenvalue decomposition.

CHAPTER 3
Principal Component Analysis

Principal Component Analysis (PCA) derives an orthogonal projection to convert a given set of observations to linearly uncorrelated variables, called principal components.

This package defines a PCA type to represent a PCA model, and provides a set of methods to access the properties.

3.1 Properties

Let M be an instance of PCA, d be the dimension of observations, and p be the output dimension (i.e. the dimension of the principal subspace).

indim(M)
    Get the input dimension d, i.e. the dimension of the observation space.

outdim(M)
    Get the output dimension p, i.e. the dimension of the principal subspace.

mean(M)
    Get the mean vector (of length d).

projection(M)
    Get the projection matrix (of size (d, p)). Each column of the projection matrix corresponds to a principal component. The principal components are arranged in descending order of the corresponding variances.

principalvars(M)
    The variances of principal components.

tprincipalvar(M)
    The total variance of principal components, which is equal to sum(principalvars(M)).

tresidualvar(M)
    The total residual variance.
The principal components are arranged in descending order of the corresponding variances. principalvars(M) The variances of principal components. tprincipalvar(M) The total variance of principal components, which is equal to sum(principalvars(M)). tresidualvar(M) The total residual variance. MultivariateStats Documentation, Release 0.1.0 tvar(M) The total observation variance, which is equal to tprincipalvar(M) + tresidualvar(M). principalratio(M) The ratio of variance preserved in the principal subspace, which is equal to tprincipalvar(M) / tvar(M). 3.2 Transformation and Construction Given a PCA model M, one can use it to transform observations into principal components, as y = P𝑇 (x − 𝜇) or use it to reconstruct (approximately) the observations from principal components, as x̃ = Py + 𝜇 Here, P is the projection matrix. The package provides methods to do so: transform(M, x) Transform observations x into principal components. Here, x can be either a vector of length d or a matrix where each column is an observation. reconstruct(M, y) Approximately reconstruct observations from the principal components given in y. Here, y can be either a vector of length p or a matrix where each column gives the principal components for an observation. 3.3 Data Analysis One can use the fit method to perform PCA over a given dataset. fit(PCA, X; ...) Perform PCA over the data given in a matrix X. Each column of X is an observation. This method returns an instance of PCA. Keyword arguments: Let (d, n) = size(X) be respectively the input dimension and the number of observations: MultivariateStats Documentation, Release 0.1.0 name description default method The choice of methods: :auto • :auto: use :cov when d < n or :svd otherwise • :cov: based on covariance matrix • :svd: based on SVD of the input data maxoutdim Maximum output dimension. min(d, n) pratio The ratio of variances preserved 0.99 in the principal subspace. 
mean The mean vector, which can be ei- nothing ther of: • 0: the input data has already been centralized • nothing: this function will compute the mean • a pre-computed mean vector Notes: • The output dimension p depends on both maxoutdim and pratio, as follows. Suppose the first k principal components preserve at least pratio of the total variance, while the first k-1 preserves less than pratio, then the actual output dimension will be min(k, maxoutdim). • This function calls pcacov or pcasvd internally, depending on the choice of method. Example: using MultivariateStats # suppose Xtr and Xte are training and testing data matrix, # with each observation in a column # train a PCA model M = fit(PCA, Xtr; maxoutdim=100) # apply PCA model to testing set Yte = transform(M, Xte) # reconstruct testing observations (approximately) Xr = reconstruct(M, Yte) Example with iris dataset and plotting: using MultivariateStats, RDatasets, Plots plotly() # using plotly for 3D-interacive graphing # load iris dataset iris = dataset("datasets", "iris") # split half to training set Xtr = convert(Array,DataArray(iris[1:2:end,1:4]))' Xtr_labels = convert(Array,DataArray(iris[1:2:end,5])) (continues on next page) MultivariateStats Documentation, Release 0.1.0 (continued from previous page) # split other half to testing set Xte = convert(Array,DataArray(iris[2:2:end,1:4]))' Xte_labels = convert(Array,DataArray(iris[2:2:end,5])) # suppose Xtr and Xte are training and testing data matrix, # with each observation in a column # train a PCA model, allowing up to 3 dimensions M = fit(PCA, Xtr; maxoutdim=3) # apply PCA model to testing set Yte = transform(M, Xte) # reconstruct testing observations (approximately) Xr = reconstruct(M, Yte) # group results by testing set labels for color coding setosa = Yte[:,Xte_labels.=="setosa"] versicolor = Yte[:,Xte_labels.=="versicolor"] virginica = Yte[:,Xte_labels.=="virginica"] # visualize first 3 principal components in 3D interacive plot p = 
scatter(setosa[1,:],setosa[2,:],setosa[3,:],marker=:circle,linewidth=0)
scatter!(versicolor[1,:],versicolor[2,:],versicolor[3,:],marker=:circle,linewidth=0)
scatter!(virginica[1,:],virginica[2,:],virginica[3,:],marker=:circle,linewidth=0)
plot!(p,xlabel="PC1",ylabel="PC2",zlabel="PC3")

3.4 Core Algorithms

Two algorithms are implemented in this package: pcacov and pcasvd.

pcacov(C, mean; ...)
    Compute PCA based on eigenvalue decomposition of a given covariance matrix C.

    Parameters
        • C – The covariance matrix.
        • mean – The mean vector of original samples, which can be a vector of length d, or an empty vector Float64[] indicating a zero mean.

    Returns The resultant PCA model.

    Note This function accepts two keyword arguments: maxoutdim and pratio.

pcasvd(Z, mean, tw; ...)
    Compute PCA based on singular value decomposition of a centralized sample matrix Z.

    Parameters
        • Z – provides centralized samples.
        • mean – The mean vector of the original samples, which can be a vector of length d, or an empty vector Float64[] indicating a zero mean.

    Returns The resultant PCA model.

    Note This function accepts two keyword arguments: maxoutdim and pratio.

CHAPTER 4
Probabilistic Principal Component Analysis

Probabilistic Principal Component Analysis (PPCA) represents a constrained form of the Gaussian distribution in which the number of free parameters can be restricted while still allowing the model to capture the dominant correlations in a data set. It is expressed as the maximum likelihood solution of a probabilistic latent variable model [BSHP06].

This package defines a PPCA type to represent a probabilistic PCA model, and provides a set of methods to access the properties.

4.1 Properties

Let M be an instance of PPCA, d be the dimension of observations, and p be the output dimension (i.e the dimension of the principal subspace).

indim(M)
    Get the input dimension d, i.e the dimension of the observation space.
outdim(M)
    Get the output dimension p, i.e the dimension of the principal subspace.

mean(M)
    Get the mean vector (of length d).

projection(M)
    Get the projection matrix (of size (d, p)). Each column of the projection matrix corresponds to a principal component. The principal components are arranged in descending order of the corresponding variances.

loadings(M)
    The factor loadings matrix (of size (d, p)).

var(M)
    The total residual variance.

4.2 Transformation and Construction

Given a probabilistic PCA model M, one can use it to transform observations into latent variables, as

    z = (WᵀW + σ²I)⁻¹Wᵀ(x − μ)

or use it to reconstruct (approximately) the observations from latent variables, as

    x̃ = W E[z] + μ

Here, W is the factor loadings or weight matrix. The package provides methods to do so:

transform(M, x)
    Transform observations x into latent variables. Here, x can be either a vector of length d or a matrix where each column is an observation.

reconstruct(M, z)
    Approximately reconstruct observations from the latent variables given in z. Here, z can be either a vector of length p or a matrix where each column gives the latent variables for an observation.

4.3 Data Analysis

One can use the fit method to perform probabilistic PCA over a given dataset.

fit(PPCA, X; ...)
    Perform probabilistic PCA over the data given in a matrix X. Each column of X is an observation. This method returns an instance of PPCA.

    Keyword arguments. Let (d, n) = size(X) be respectively the input dimension and the number of observations:

    method     The choice of methods (default :ml):
               • :ml: use the maximum likelihood version of probabilistic PCA
               • :em: use the EM version of probabilistic PCA
               • :bayes: use Bayesian PCA
    maxoutdim  Maximum output dimension (default d-1).
    mean       The mean vector (default nothing), which can be either of:
               • 0: the input data has already been centralized
               • nothing: this function will compute the mean
               • a pre-computed mean vector
    tol        Convergence tolerance (default 1.0e-6).
    tot        Maximum number of iterations (default 1000).

    Notes:
    • This function calls ppcaml, ppcaem or bayespca internally, depending on the choice of method.

Example:

using MultivariateStats

# suppose Xtr and Xte are training and testing data matrix,
# with each observation in a column

# train a PPCA model
M = fit(PPCA, Xtr; maxoutdim=100)

# apply PPCA model to testing set
Yte = transform(M, Xte)

# reconstruct testing observations (approximately)
Xr = reconstruct(M, Yte)

4.4 Core Algorithms

Three algorithms are implemented in this package: ppcaml, ppcaem, and bayespca.

ppcaml(Z, mean, tw; ...)
    Compute probabilistic PCA using a maximum likelihood formulation for a centralized sample matrix Z.

    Parameters
        • Z – provides centralized samples.
        • mean – The mean vector of the original samples, which can be a vector of length d, or an empty vector Float64[] indicating a zero mean.

    Returns The resultant PPCA model.

    Note This function accepts two keyword arguments: maxoutdim and tol.

ppcaem(S, mean, n; ...)
    Compute probabilistic PCA based on the expectation-maximization algorithm for a given sample covariance matrix S.

    Parameters
        • S – The sample covariance matrix.
        • mean – The mean vector of original samples, which can be a vector of length d, or an empty vector Float64[] indicating a zero mean.
        • n – The number of observations.

    Returns The resultant PPCA model.

    Note This function accepts three keyword arguments: maxoutdim, tol, and tot.

bayespca(S, mean, n; ...)
    Compute probabilistic PCA based on the Bayesian algorithm for a given sample covariance matrix S.

    Parameters
        • S – The sample covariance matrix.
        • mean – The mean vector of original samples, which can be a vector of length d, or an empty vector Float64[] indicating a zero mean.
        • n – The number of observations.

    Returns The resultant PPCA model.

    Note This function accepts three keyword arguments: maxoutdim, tol, and tot.

    Additional notes:
    • The function uses the maxoutdim parameter as an upper boundary when it automatically determines the latent space dimensionality.

4.5 References

CHAPTER 5
Kernel Principal Component Analysis

Kernel Principal Component Analysis (kernel PCA) is an extension of principal component analysis (PCA) using techniques of kernel methods. Using a kernel, the originally linear operations of PCA are performed in a reproducing kernel Hilbert space.

This package defines a KernelPCA type to represent a kernel PCA model, and provides a set of methods to access the properties.

5.1 Properties

Let M be an instance of KernelPCA, d be the dimension of observations, and p be the output dimension (i.e the dimension of the principal subspace).

indim(M)
    Get the input dimension d, i.e the dimension of the observation space.

outdim(M)
    Get the output dimension p, i.e the dimension of the principal subspace.

projection(M)
    Get the projection matrix (of size (n, p)). Each column of the projection matrix corresponds to an eigenvector, and n is the number of observations. The principal components are arranged in descending order of the corresponding eigenvalues.

principalvars(M)
    The variances of principal components.

5.2 Transformation and Construction

The package provides methods to transform observations into principal components and to reconstruct them:

transform(M, x)
    Transform observations x into principal components. Here, x can be either a vector of length d or a matrix where each column is an observation.

reconstruct(M, y)
    Approximately reconstruct observations from the principal components given in y. Here, y can be either a vector of length p or a matrix where each column gives the principal components for an observation.
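The mechanics behind these methods can be sketched outside the package with plain linear algebra. The helper below is hypothetical (it is not part of the package API): it performs kernel PCA on training data with a linear kernel by double-centering the Gram matrix and eigen-decomposing it, so that component k of sample i equals √λₖ·vₖ[i]:

```julia
using LinearAlgebra

# Hypothetical helper (not the package API): kernel PCA on training data X
# (one observation per column) with a linear kernel, keeping p components.
function kpca_sketch(X::AbstractMatrix, p::Int)
    n = size(X, 2)
    K = X' * X                    # Gram matrix: K[i,j] = kernel(x_i, x_j)
    J = I - fill(1 / n, n, n)     # centering matrix
    Kc = Symmetric(J * K * J)     # kernel matrix of centered feature vectors
    F = eigen(Kc)
    ord = sortperm(F.values, rev=true)[1:p]   # indices of the top-p eigenvalues
    λ, V = F.values[ord], F.vectors[:, ord]
    return sqrt.(λ) .* V'         # principal components, one column per sample
end

X = randn(5, 20)
Y = kpca_sketch(X, 2)             # a 2 × 20 matrix of principal components
```

With the linear kernel this reproduces ordinary PCA scores (up to sign); substituting any kernel from the table below for K gives the corresponding nonlinear embedding.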
5.3 Data Analysis

One can use the fit method to perform kernel PCA over a given dataset.

fit(KernelPCA, X; ...)
    Perform kernel PCA over the data given in a matrix X. Each column of X is an observation. This method returns an instance of KernelPCA.

    Keyword arguments. Let (d, n) = size(X) be respectively the input dimension and the number of observations:

    kernel     The kernel function (default (x,y)->x'y). This function accepts two vector arguments x and y, and returns a scalar value.
    solver     The choice of solver (default :eig):
               • :eig: uses eigfact
               • :eigs: uses eigs (always used for sparse data)
    maxoutdim  Maximum output dimension (default min(d, n)).
    inverse    Whether to perform calculation for inverse transform for non-precomputed kernels (default false).
    𝛽          Hyperparameter of the ridge regression that learns the inverse transform (when inverse is true; default 1.0).
    tol        Convergence tolerance for the eigs solver (default 0.0).
    maxiter    Maximum number of iterations for the eigs solver (default 300).

5.4 Kernels

List of the commonly used kernels:

    (x,y)->x'y                      Linear
    (x,y)->(x'y+c)^d                Polynomial
    (x,y)->exp(-𝛾*norm(x-y)^2.0)    Radial basis function (RBF)

Example:

using MultivariateStats

# suppose Xtr and Xte are training and testing data matrix,
# with each observation in a column

# train a kernel PCA model
M = fit(KernelPCA, Xtr; maxoutdim=100, inverse=true)

# apply kernel PCA model to testing set
Yte = transform(M, Xte)

# reconstruct testing observations (approximately)
Xr = reconstruct(M, Yte)

CHAPTER 6
Canonical Correlation Analysis

Canonical Correlation Analysis (CCA) is a statistical analysis technique to identify correlations between two sets of variables. Given two vector variables X and Y, it finds two projections, one for each, to transform them to a common space with maximum correlations.
The package defines a CCA type to represent a CCA model, and provides a set of methods to access the properties.

6.1 Properties

Let M be an instance of CCA, dx be the dimension of X, dy the dimension of Y, and p the output dimension (i.e the dimension of the common space).

xindim(M)
    Get the dimension of X, the first set of variables.

yindim(M)
    Get the dimension of Y, the second set of variables.

outdim(M)
    Get the output dimension, i.e that of the common space.

xmean(M)
    Get the mean vector of X (of length dx).

ymean(M)
    Get the mean vector of Y (of length dy).

xprojection(M)
    Get the projection matrix for X (of size (dx, p)).

yprojection(M)
    Get the projection matrix for Y (of size (dy, p)).

correlations(M)
    The correlations of the projected components (a vector of length p).

6.2 Transformation

Given a CCA model, one can transform observations from both spaces into a common space, as

    zx = Pxᵀ(x − μx)
    zy = Pyᵀ(y − μy)

Here, Px and Py are projection matrices for X and Y; μx and μy are mean vectors. This package provides methods to do so:

xtransform(M, x)
    Transform observations in the X-space to the common space. Here, x can be either a vector of length dx or a matrix where each column is an observation.

ytransform(M, y)
    Transform observations in the Y-space to the common space. Here, y can be either a vector of length dy or a matrix where each column is an observation.

6.3 Data Analysis

One can use the fit method to perform CCA over given datasets.

fit(CCA, X, Y; ...)
    Perform CCA over the data given in matrices X and Y. Each column of X and Y is an observation. X and Y should have the same number of columns (denoted by n below). This method returns an instance of CCA.
    Keyword arguments:

    method   The choice of methods (default :svd):
             • :cov: based on covariance matrices
             • :svd: based on SVD of the input data
    outdim   The output dimension, i.e the dimension of the common space (default min(dx, dy, n)).
    mean     The mean vector (default nothing), which can be either of:
             • 0: the input data has already been centralized
             • nothing: this function will compute the mean
             • a pre-computed mean vector

    Notes: This function calls ccacov or ccasvd internally, depending on the choice of method.

6.4 Core Algorithms

Two algorithms are implemented in this package: ccacov and ccasvd.

ccacov(Cxx, Cyy, Cxy, xmean, ymean, p)
    Compute CCA based on analysis of the given covariance matrices, using generalized eigenvalue decomposition.

    Parameters
        • Cxx – The covariance matrix of X.
        • Cyy – The covariance matrix of Y.
        • Cxy – The covariance matrix between X and Y.
        • xmean – The mean vector of the original samples of X, which can be a vector of length dx, or an empty vector Float64[] indicating a zero mean.
        • ymean – The mean vector of the original samples of Y, which can be a vector of length dy, or an empty vector Float64[] indicating a zero mean.
        • p – The output dimension, i.e the dimension of the common space.

    Returns The resultant CCA model.

ccasvd(Zx, Zy, xmean, ymean, p)
    Compute CCA based on singular value decomposition of centralized sample matrices Zx and Zy.

    Parameters
        • Zx – The centralized sample matrix for X.
        • Zy – The centralized sample matrix for Y.
        • xmean – The mean vector of the original samples of X, which can be a vector of length dx, or an empty vector Float64[] indicating a zero mean.
        • ymean – The mean vector of the original samples of Y, which can be a vector of length dy, or an empty vector Float64[] indicating a zero mean.
        • p – The output dimension, i.e the dimension of the common space.

    Returns The resultant CCA model.
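As a sketch of what a covariance-based CCA computes, one can whiten both sets with the matrix square roots of Cxx and Cyy and take the SVD of the whitened cross-covariance; the singular values are then the canonical correlations. The helper below is hypothetical (it is not the package API) and uses only the standard library:

```julia
using LinearAlgebra, Statistics

# Hypothetical helper (not the package API): covariance-based CCA.
# X and Y hold observations in columns; returns the projection matrices
# Px, Py and the first p canonical correlations.
function cca_sketch(X::AbstractMatrix, Y::AbstractMatrix, p::Int)
    n = size(X, 2)
    Zx = X .- mean(X, dims=2)               # centralized samples
    Zy = Y .- mean(Y, dims=2)
    Cxx = Symmetric(Zx * Zx' / (n - 1))
    Cyy = Symmetric(Zy * Zy' / (n - 1))
    Cxy = Zx * Zy' / (n - 1)
    Sx, Sy = sqrt(Cxx), sqrt(Cyy)           # whitening via matrix square roots
    F = svd(Sx \ Cxy / Sy)                  # cross-covariance of whitened data
    Px = Sx \ F.U[:, 1:p]                   # projection matrix for X
    Py = Sy \ F.V[:, 1:p]                   # projection matrix for Y
    return Px, Py, F.S[1:p]                 # F.S holds the canonical correlations
end

X = randn(4, 200)
Y = randn(3, 200)
Px, Py, ρ = cca_sketch(X, Y, 2)
```

The projected variables Px'(x − μx) and Py'(y − μy) each have unit variance, and the correlation of the k-th pair equals ρ[k], mirroring what correlations(M) reports for a fitted CCA model.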
CHAPTER 7
Classical Multidimensional Scaling

In general, Multidimensional Scaling (MDS) refers to techniques that transform samples into a lower-dimensional space while preserving the inter-sample distances as well as possible.

7.1 Overview of Classical MDS

Classical MDS is a specific technique in this family that accomplishes the embedding in two steps:

1. Convert the distance matrix to a Gram matrix. This conversion is based on the following relation between a distance matrix D and a Gram matrix G:

       sqr(D) = g1ᵀ + 1gᵀ − 2G

   Here, sqr(D) indicates the element-wise square of D, and g is the vector of diagonal elements of G. This relation is itself based on the following decomposition of squared Euclidean distance:

       ‖x − y‖² = ‖x‖² + ‖y‖² − 2xᵀy

2. Perform eigenvalue decomposition of the Gram matrix to derive the coordinates.

7.2 Functions

This package provides functions related to classical MDS.

gram2dmat(G)
    Convert a Gram matrix G to a distance matrix.

gram2dmat!(D, G)
    Convert a Gram matrix G to a distance matrix, and write the results to D.

dmat2gram(D)
    Convert a distance matrix D to a Gram matrix.

dmat2gram!(G, D)
    Convert a distance matrix D to a Gram matrix, and write the results to G.

classical_mds(D, p[, dowarn=true])
    Perform classical MDS. This function derives a p-dimensional embedding based on a given distance matrix D. It returns a coordinate matrix of size (p, n), where each column is the coordinates for an observation.

    Note: The Gramian derived from D may have nonpositive or degenerate eigenvalues. The subspace of nonpositive eigenvalues is projected out of the MDS solution so that the strain function is minimized in a least-squares sense. If the smallest remaining eigenvalue that is used for the MDS is degenerate, then the solution is not unique, as any linear combination of degenerate eigenvectors will also yield an MDS solution with the same strain value.
    By default, warnings are emitted if either situation is detected, which can be suppressed with dowarn=false. If the MDS uses an eigenspace of dimension m less than p, then the MDS coordinates will be padded with p-m zeros each.

Reference:

@inbook{Borg2005,
  Author = {<NAME> and <NAME>},
  Title = {Modern Multidimensional Scaling: Theory and Applications},
  Edition = {2},
  Year = {2005},
  Chapter = {12},
  Doi = {10.1007/0-387-28981-X},
  Pages = {201--268},
  Series = {Springer Series in Statistics},
  Publisher = {Springer},
}

CHAPTER 8
Linear Discriminant Analysis

Linear Discriminant Analysis is a statistical analysis method to find a linear combination of features that separates observations in two classes.

Note: Please refer to Multi-class Linear Discriminant Analysis for methods that can discriminate between multiple classes.

8.1 Overview of LDA

Suppose the samples in the positive and negative classes have means μp and μn, and covariances Cp and Cn, respectively. Then, based on Fisher's Linear Discriminant Criteria, the optimal projection direction can be expressed as:

    w = α · (Cp + Cn)⁻¹(μp − μn)

Here α is an arbitrary non-negative coefficient.

8.2 Linear Discriminant

A linear discriminant functional can be written as

    f(x) = wᵀx + b

Here, w is the coefficient vector, and b is the bias constant. This package uses the LinearDiscriminant type, defined as below, to capture a linear discriminant functional:

immutable LinearDiscriminant <: Discriminant
    w::Vector{Float64}
    b::Float64
end

This type comes with several methods. Let f be an instance of LinearDiscriminant:

length(f)
    Get the length of the coefficient vector.

evaluate(f, x)
    Evaluate the linear discriminant value, i.e w'x + b. When x is a vector, it returns a real value; when x is a matrix with samples in columns, it returns a vector of length size(x, 2).

predict(f, x)
    Make a prediction. It returns true iff evaluate(f, x) is positive.
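To make the two-class setup concrete, the following self-contained sketch builds a discriminant from sample statistics using the formula of Section 8.1 and the scaling convention used by ldacov (w'μp + b = 1 and w'μn + b = -1). The helper names are hypothetical, not the package API:

```julia
using LinearAlgebra, Statistics

# Hypothetical helper (not the package API): build w and b from positive and
# negative samples (one observation per column), following Section 8.1 and
# scaling so that w'μp + b = 1 and w'μn + b = -1.
function lda_sketch(Xp::AbstractMatrix, Xn::AbstractMatrix)
    μp, μn = vec(mean(Xp, dims=2)), vec(mean(Xn, dims=2))
    w = (cov(Xp, dims=2) + cov(Xn, dims=2)) \ (μp - μn)
    w *= 2 / dot(w, μp - μn)          # enforce the scaling convention
    return w, 1 - dot(w, μp)          # (w, b)
end

evaluate(w, b, x) = dot(w, x) + b     # the discriminant value w'x + b
predict(w, b, x) = evaluate(w, b, x) > 0

Xp = randn(2, 100) .+ [3.0, 3.0]      # positive class centered near (3, 3)
Xn = randn(2, 100) .- [3.0, 3.0]      # negative class centered near (-3, -3)
w, b = lda_sketch(Xp, Xn)
predict(w, b, [3.0, 3.0])             # classifies a clearly positive point
```

By construction, evaluating the discriminant at the two class means gives +1 and -1, so the decision boundary f(x) = 0 sits midway between them in the whitened metric.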
8.3 Data Analysis

The package provides several functions to perform Linear Discriminant Analysis.

ldacov(Cp, Cn, μp, μn)
    Performs LDA given covariances and mean vectors.

    Parameters
        • Cp – The covariance matrix of the positive class.
        • Cn – The covariance matrix of the negative class.
        • μp – The mean vector of the positive class.
        • μn – The mean vector of the negative class.

    Returns The resultant linear discriminant functional of type LinearDiscriminant.

    Note: The coefficient vector is scaled such that w'μp + b = 1 and w'μn + b = -1.

ldacov(C, μp, μn)
    Performs LDA given a covariance matrix and both mean vectors.

    Parameters
        • C – The pooled covariance matrix (i.e (Cp + Cn)/2)
        • μp – The mean vector of the positive class.
        • μn – The mean vector of the negative class.

    Returns The resultant linear discriminant functional of type LinearDiscriminant.

    Note: The coefficient vector is scaled such that w'μp + b = 1 and w'μn + b = -1.

fit(LinearDiscriminant, Xp, Xn)
    Performs LDA given both positive and negative samples.

    Parameters
        • Xp – The sample matrix of the positive class.
        • Xn – The sample matrix of the negative class.

    Returns The resultant linear discriminant functional of type LinearDiscriminant.

CHAPTER 9
Multi-class Linear Discriminant Analysis

Multi-class LDA is a generalization of standard two-class LDA that can handle an arbitrary number of classes.

9.1 Overview

Multi-class LDA is based on the analysis of two scatter matrices: the within-class scatter matrix and the between-class scatter matrix. Given a set of samples x₁, …, xₙ and their class labels y₁, …, yₙ:

The within-class scatter matrix is defined as:

    Sw = Σ_{i=1}^n (x_i − μ_{y_i})(x_i − μ_{y_i})ᵀ

Here, μ_k is the sample mean of the k-th class.

The between-class scatter matrix is defined as:

    Sb = Σ_{k=1}^m n_k (μ_k − μ)(μ_k − μ)ᵀ

Here, m is the number of classes, μ is the overall sample mean, and n_k is the number of samples in the k-th class.
Then, multi-class LDA can be formulated as an optimization problem to find a set of linear combinations (with coefficients w) that maximizes the ratio of the between-class scattering to the within-class scattering, as

    ŵ = argmax_w (wᵀ Sb w) / (wᵀ Sw w)

The solution is given by the following generalized eigenvalue problem:

    Sb w = λ Sw w                                                    (9.1)

Generally, at most m - 1 generalized eigenvectors are useful to discriminate between m classes. When the dimensionality is high, it may not be feasible to construct the scatter matrices explicitly. In such cases, see SubspaceLDA below.

9.2 Normalization by number of observations

An alternative definition of the within- and between-class scatter matrices normalizes for the number of observations in each group:

    Sw* = n Σ_{k=1}^m (1/n_k) Σ_{i|y_i=k} (x_i − μ_k)(x_i − μ_k)ᵀ

    Sb* = n Σ_{k=1}^m (μ_k − μ*)(μ_k − μ*)ᵀ

where

    μ* = (1/m) Σ_{k=1}^m μ_k.

This definition can sometimes be more useful when looking for directions which discriminate among clusters containing widely-varying numbers of observations.

9.3 Multi-class LDA

The package defines a MulticlassLDA type to represent a multi-class LDA model, as:

type MulticlassLDA
    proj::Matrix{Float64}
    pmeans::Matrix{Float64}
    stats::MulticlassLDAStats
end

Here, proj is the projection matrix, pmeans is the projected means of all classes, and stats is an instance of MulticlassLDAStats that captures all statistics computed to train the model (which we will discuss later).

Several methods are provided to access properties of the LDA model. Let M be an instance of MulticlassLDA:

indim(M)
    Get the input dimension (i.e the dimension of the observation space).

outdim(M)
    Get the output dimension (i.e the dimension of the transformed features).

projection(M)
    Get the projection matrix (of size d x p).

mean(M)
    Get the overall sample mean vector (of length d).

classmeans(M)
    Get the matrix comprised of class-specific means as columns (of size (d, m)).
classweights(M)
    Get the weights of individual classes (a vector of length m). If the samples are not weighted, the weight equals the number of samples of each class.

withinclass_scatter(M)
    Get the within-class scatter matrix (of size (d, d)).

betweenclass_scatter(M)
    Get the between-class scatter matrix (of size (d, d)).

transform(M, x)
    Transform input sample(s) in x to the output space. Here, x can be either a sample vector or a matrix comprised of samples in columns.

    In the practice of classification, one can transform testing samples using this transform method, and compare them with M.pmeans.

9.4 Data Analysis

One can use fit to perform multi-class LDA over a set of data:

fit(MulticlassLDA, nc, X, y; ...)
    Perform multi-class LDA over a given data set.

    Parameters
        • nc – the number of classes
        • X – the matrix of input samples, of size (d, n). Each column in X is an observation.
        • y – the vector of class labels, of length n. Each element of y must be an integer between 1 and nc.

    Returns The resultant multi-class LDA model, of type MulticlassLDA.

    Keyword arguments:

    method   The choice of methods (default :gevd):
             • :gevd: based on generalized eigenvalue decomposition
             • :whiten: first derive a whitening transform from Sw and then solve the problem based on eigenvalue decomposition of the whitened Sb
    outdim   The output dimension, i.e the dimension of the transformed space (default min(d, nc-1)).
    regcoef  The regularization coefficient (default 1.0e-6). A positive value regcoef * eigmax(Sw) is added to the diagonal of Sw to improve numerical stability.

    Note: The resultant projection matrix P satisfies:

        Pᵀ(Sw + κI)P = I

    Here, κ equals regcoef * eigmax(Sw). The columns of P are arranged in descending order of the corresponding generalized eigenvalues.

    Note that MulticlassLDA does not currently support the normalized version using Sw* and Sb* (see SubspaceLDA below).
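The statistics and the generalized eigenvalue problem behind :gevd can be sketched directly from the formulas in Section 9.1. The helper below is hypothetical (not the package API); the ridge term mimics the regcoef default:

```julia
using LinearAlgebra, Statistics

# Hypothetical helper (not the package API): build Sw and Sb as in Section 9.1
# and solve Sb*w = λ*Sw*w for the leading p projection directions.
function mclda_sketch(X::AbstractMatrix, y::AbstractVector, nc::Int, p::Int)
    d = size(X, 1)
    μ = vec(mean(X, dims=2))
    Sw, Sb = zeros(d, d), zeros(d, d)
    for k in 1:nc
        Xk = X[:, y .== k]
        μk = vec(mean(Xk, dims=2))
        Zk = Xk .- μk
        Sw += Zk * Zk'                            # within-class scatter
        Sb += size(Xk, 2) * (μk - μ) * (μk - μ)'  # between-class scatter
    end
    κ = 1e-6 * eigmax(Symmetric(Sw))              # ridge, like regcoef * eigmax(Sw)
    F = eigen(Symmetric(Sb), Symmetric(Sw + κ * I))
    ord = sortperm(F.values, rev=true)[1:p]
    return F.vectors[:, ord]                      # projection matrix (d × p)
end

X = randn(3, 90) .+ repeat([-4.0 0.0 4.0], inner=(3, 30))  # three shifted classes
y = repeat(1:3, inner=30)
P = mclda_sketch(X, y, 3, 2)
```

The LAPACK routine behind the symmetric-definite eigen solve normalizes the eigenvectors so that Pᵀ(Sw + κI)P = I, matching the note above.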
9.5 Task Functions

The multi-class LDA consists of several steps:

1. Compute statistics, such as class means, scatter matrices, etc.
2. Solve the projection matrix.
3. Construct the model.

Sometimes, it is useful to only perform one of these tasks. The package exposes several functions for this purpose:

multiclass_lda_stats(nc, X, y)
    Compute statistics required to train a multi-class LDA.

    Parameters
        • nc – the number of classes
        • X – the matrix of input samples.
        • y – the vector of class labels.

    This function returns an instance of MulticlassLDAStats, defined as below, that captures all relevant statistics.

type MulticlassLDAStats
    dim::Int                    # sample dimensions
    nclasses::Int               # number of classes
    cweights::Vector{Float64}   # class weights
    tweight::Float64            # total sample weight
    mean::Vector{Float64}       # overall sample mean
    cmeans::Matrix{Float64}     # class-specific means
    Sw::Matrix{Float64}         # within-class scatter matrix
    Sb::Matrix{Float64}         # between-class scatter matrix
end

This type has the following constructor. Under certain circumstances, one might collect statistics in other ways and want to directly construct this instance.

MulticlassLDAStats(cweights, mean, cmeans, Sw, Sb)
    Construct an instance of type MulticlassLDAStats.

    Parameters
        • cweights – the class weights, a vector of length m.
        • mean – the overall sample mean, a vector of length d.
        • cmeans – the class-specific sample means, a matrix of size (d, m).
        • Sw – the within-class scatter matrix, a matrix of size (d, d).
        • Sb – the between-class scatter matrix, a matrix of size (d, d).

multiclass_lda(S; ...)
    Perform multi-class LDA based on given statistics. Here S is an instance of MulticlassLDAStats.

    This function accepts the following keyword arguments (as above): method, outdim, and regcoef.

mclda_solve(Sb, Sw, method, p, regcoef)
    Solve the projection matrix given both scatter matrices.
    Parameters
        • Sb – the between-class scatter matrix.
        • Sw – the within-class scatter matrix.
        • method – the choice of method, which can be either :gevd or :whiten.
        • p – output dimension.
        • regcoef – regularization coefficient.

mclda_solve!(Sb, Sw, method, p, regcoef)
    Solve the projection matrix given both scatter matrices.

    Note: In this function, Sb and Sw will be overwritten (saving some space).

9.6 Subspace LDA

The package also defines a SubspaceLDA type to represent a multi-class LDA model for high-dimensional spaces. MulticlassLDA, because it stores the scatter matrices, is not well-suited for high-dimensional data. For example, if you are performing LDA on images, and each image has 10^6 pixels, then the scatter matrices would contain 10^12 elements, far too many to store directly.

SubspaceLDA calculates the projection direction without the intermediary of the scatter matrices, by focusing on the subspace that lies within the span of the within-class scatter. This also serves to regularize the computation.

immutable SubspaceLDA{T<:Real}
    projw::Matrix{T}    # P, projects down to the subspace spanned by the within-class scatter
    projLDA::Matrix{T}  # L, LDA directions in the projected subspace
    λ::Vector{T}
    cmeans::Matrix{T}
    cweights::Vector{Int}
end

This supports all the same methods as MulticlassLDA, with the exception of the functions that return a scatter matrix. The overall projection is represented as a factorization P*L, where P'*x projects data points to the subspace spanned by the within-class scatter, and L is the LDA projection in the subspace. The projection directions w (the columns of projection(M)) satisfy the equation

    Pᵀ Sb w = λ Pᵀ Sw w.

When P is of full rank (e.g., if there are more data points than dimensions), then this equation guarantees that Eq. (9.1) will also hold.
SubspaceLDA also supports the normalized version of LDA via the normalize keyword:

    M = fit(SubspaceLDA, X, label; normalize=true)

would perform LDA using the equivalent of Sw* and Sb*.

CHAPTER 10
Independent Component Analysis

Independent Component Analysis (ICA) is a computational technique for separating a multivariate signal into additive subcomponents, with the assumption that the subcomponents are non-Gaussian and independent from each other. There are multiple algorithms for ICA. Currently, this package implements the Fast ICA algorithm.

10.1 ICA

The package uses a type ICA, defined below, to represent an ICA model:

mutable struct ICA{T<:Real}
    mean::Vector{T}   # mean vector, of length m (or empty to indicate zero mean)
    W::Matrix{T}      # component coefficient matrix, of size (m, k)
end

Note: Each column of W here corresponds to an independent component.

Several methods are provided to work with ICA. Let M be an instance of ICA:

indim(M)
    Get the input dimension, i.e the number of observed mixtures.

outdim(M)
    Get the output dimension, i.e the number of independent components.

mean(M)
    Get the mean vector. Note: if M.mean is empty, this function returns a zero vector of length indim(M).

transform(M, x)
    Transform x to the output space to extract independent components, as Wᵀ(x − μ).

10.2 Data Analysis

One can use fit to perform ICA over a given data set.

fit(ICA, X, k; ...)
    Perform ICA over the data set given in X.

    Parameters
        • X – The data matrix, of size (m, n). Each row corresponds to a mixed signal, while each column corresponds to an observation (e.g. all signal values at a particular time step).
        • k – The number of independent components to recover.

    Returns The resultant ICA model, an instance of type ICA.
    Note: If do_whiten is true, the returned W satisfies WᵀCW = I; otherwise W is orthonormal, i.e WᵀW = I.

    Keyword Arguments:

    alg        The choice of algorithm (default :fastica; must be :fastica).
    fun        The approximate neg-entropy functor (default icagfun(:tanh)). It can be obtained using the function icagfun. Now, it accepts the following values:
               • icagfun(:tanh)
               • icagfun(:tanh, a)
               • icagfun(:gaus)
    do_whiten  Whether to perform pre-whitening (default true).
    maxiter    Maximum number of iterations (default 100).
    tol        Tolerable change of W at convergence (default 1.0e-6).
    mean       The mean vector (default nothing), which can be either of:
               • 0: the input data has already been centralized
               • nothing: this function will compute the mean
               • a pre-computed mean vector
    winit      Initial guess of W (default zeros(0,0)), which should be either of:
               • empty matrix: the function will perform random initialization
               • a matrix of size (k, k) (when do_whiten)
               • a matrix of size (m, k) (when !do_whiten)
    verbose    Whether to display iteration information (default false).

10.3 Core Algorithms

The package also exports functions of the core algorithms. Sometimes, it can be more efficient to directly invoke them instead of going through the fit interface.

fastica!(W, X, fun, maxiter, tol, verbose)
    Invoke the Fast ICA algorithm.

    Parameters
        • W – The initial un-mixing matrix, of size (m, k). The function updates this matrix in place.
        • X – The data matrix, of size (m, n). This matrix is input only, and won't be modified.
        • fun – The approximate neg-entropy functor, which can be obtained using icagfun (see above).
        • maxiter – Maximum number of iterations.
        • tol – Tolerable change of W at convergence.
        • verbose – Whether to display iteration information.

    Returns The updated W.

    Note: The number of components is inferred from W as size(W, 2).

CHAPTER 11
Factor Analysis

Factor Analysis (FA) is a linear-Gaussian latent variable model that is closely related to probabilistic PCA.
In contrast to the probabilistic PCA model, the covariance of the conditional distribution of the observed variable given the latent variable is diagonal rather than isotropic [BSHP06].

This package defines a FactorAnalysis type to represent a factor analysis model, and provides a set of methods to access the properties.

11.1 Properties

Let M be an instance of FactorAnalysis, d be the dimension of observations, and p be the output dimension (i.e the dimension of the principal subspace).

indim(M)
    Get the input dimension d, i.e the dimension of the observation space.

outdim(M)
    Get the output dimension p, i.e the dimension of the principal subspace.

mean(M)
    Get the mean vector (of length d).

projection(M)
    Get the projection matrix (of size (d, p)). Each column of the projection matrix corresponds to a principal component. The principal components are arranged in descending order of the corresponding variances.

loadings(M)
    The factor loadings matrix (of size (d, p)).

cov(M)
    The diagonal covariance matrix.

11.2 Transformation and Construction

Given a factor analysis model M, one can use it to transform observations into latent variables, as

    z = Wᵀ Σ⁻¹ (x − μ)

or use it to reconstruct (approximately) the observations from latent variables, as

    x̃ = Σ W (WᵀW)⁻¹ z + μ

Here, W is the factor loadings or weight matrix, and Σ = Ψ + WWᵀ is the covariance matrix of the observations. The package provides methods to do so:

transform(M, x)
    Transform observations x into latent variables. Here, x can be either a vector of length d or a matrix where each column is an observation.

reconstruct(M, z)
    Approximately reconstruct observations from the latent variables given in z. Here, z can be either a vector of length p or a matrix where each column gives the latent variables for an observation.

11.3 Data Analysis

One can use the fit method to perform factor analysis over a given dataset.

fit(FactorAnalysis, X; ...)
Perform factor analysis over the data given in a matrix X. Each column of X is an observation. This method returns an instance of FactorAnalysis.

Keyword arguments:

Let (d, n) = size(X) be respectively the input dimension and the number of observations.

* method: The choice of methods: :em (use the EM version of factor analysis) or :cm (use the CM version of factor analysis). Default: :cm
* maxoutdim: Maximum output dimension. Default: d-1
* mean: The mean vector, which can be either of: 0 (the input data has already been centralized), nothing (this function will compute the mean), or a pre-computed mean vector. Default: nothing
* tol: Convergence tolerance. Default: 1.0e-6
* tot: Maximum number of iterations. Default: 1000
* 𝜂: Variance lower bound. Default: 1.0e-6

Notes:

* This function calls facm or faem internally, depending on the choice of method.

Example:

    using MultivariateStats

    # suppose Xtr and Xte are training and testing data matrices,
    # with each observation in a column

    # train a FactorAnalysis model
    M = fit(FactorAnalysis, Xtr; maxoutdim=100)

    # apply the FactorAnalysis model to the testing set
    Yte = transform(M, Xte)

    # reconstruct testing observations (approximately)
    Xr = reconstruct(M, Yte)

11.4 Core Algorithms

Two algorithms are implemented in this package: faem and facm.

faem(S, mean, n; ...)

Perform factor analysis using an expectation-maximization algorithm for a given sample covariance matrix S [RUBN82].

Parameters:

* S: The sample covariance matrix.
* mean: The mean vector of the original samples, which can be a vector of length d, or an empty vector Float64[] indicating a zero mean.
* n: The number of observations.

Returns: The resultant FactorAnalysis model.

Note: This function accepts three keyword arguments: maxoutdim, tol, and tot.

facm(S, mean, n; ...)

Perform factor analysis using a fast conditional maximization algorithm for a given sample covariance matrix S [ZHAO08].

Parameters:

* S: The sample covariance matrix.
* mean: The mean vector of the original samples, which can be a vector of length d, or an empty vector Float64[] indicating a zero mean.
* n: The number of observations.

Returns: The resultant FactorAnalysis model.

Note: This function accepts four keyword arguments: maxoutdim, tol, tot, and 𝜂.

11.5 References

Notes: All methods implemented in this package adopt the column-major convention of JuliaStats: in a data matrix, each column corresponds to a sample/observation, while each row corresponds to a feature (variable or attribute).

Bibliography

[BSHP06] <NAME>. Pattern Recognition and Machine Learning, 2006.

[RUBN82] <NAME>., and <NAME>. EM algorithms for ML factor analysis. Psychometrika 47.1 (1982): 69-76.

[ZHAO08] <NAME>., <NAME>, and <NAME>. ML estimation for factor analysis: EM or non-EM?. Statistics and Computing 18.2 (2008): 109-123.
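As a numerical illustration of the formulas in Section 11.2, the sketch below checks in NumPy that transforming a reconstructed observation recovers the latent vector, i.e. that Wᵀ Σ⁻¹ (x̃ − μ) = z when x̃ = Σ W (Wᵀ W)⁻¹ z + μ and Σ = Ψ + W Wᵀ. The matrices here are made up for illustration; this is a sketch of the math, not the package's Julia API.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p = 6, 2  # observation dimension, latent dimension

W = rng.normal(size=(d, p))                   # factor loadings (d x p)
Psi = np.diag(rng.uniform(0.1, 0.5, size=d))  # diagonal noise covariance
Sigma = Psi + W @ W.T                         # model covariance (d x d)
mu = rng.normal(size=d)                       # mean vector

def transform(x):
    # z = W' Sigma^{-1} (x - mu)
    return W.T @ np.linalg.solve(Sigma, x - mu)

def reconstruct(z):
    # x_tilde = Sigma W (W' W)^{-1} z + mu
    return Sigma @ W @ np.linalg.solve(W.T @ W, z) + mu

z = rng.normal(size=p)
z_roundtrip = transform(reconstruct(z))
print(np.allclose(z, z_roundtrip))  # True
```

Algebraically, Wᵀ Σ⁻¹ (Σ W (Wᵀ W)⁻¹ z) = Wᵀ W (Wᵀ W)⁻¹ z = z, so the round-trip on latent variables is exact up to floating-point error; reconstruction of an arbitrary observation x is only approximate.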
tld 0.13 documentation

[tld](#id1)[¶](#tld)
===

Extract the top level domain (TLD) from the URL given. The list of TLD names is taken from [Public Suffix](https://publicsuffix.org/list/public_suffix_list.dat).

Optionally raises exceptions on non-existing TLDs or silently fails (if the `fail_silently` argument is set to True).

[Prerequisites](#id2)[¶](#prerequisites)
---

* Python 3.7, 3.8, 3.9, 3.10 or 3.11.

[Documentation](#id3)[¶](#documentation)
---

Documentation is available on [Read the Docs](http://tld.readthedocs.io/).

[Installation](#id4)[¶](#installation)
---

Latest stable version on PyPI:

```
pip install tld
```

Or latest stable version from GitHub:

```
pip install https://github.com/barseghyanartur/tld/archive/stable.tar.gz
```

[Usage examples](#id5)[¶](#usage-examples)
---

In addition to the examples below, see the [jupyter notebook](jupyter/) workbook file.

### [Get the TLD name **as string** from the URL given](#id6)[¶](#get-the-tld-name-as-string-from-the-url-given)

```
from tld import get_tld

get_tld("http://www.google.co.uk")
# 'co.uk'

get_tld("http://www.google.idontexist", fail_silently=True)
# None
```

### [Get the TLD as **an object**](#id7)[¶](#get-the-tld-as-an-object)

```
from tld import get_tld

res = get_tld("http://some.subdomain.google.co.uk", as_object=True)

res
# 'co.uk'

res.subdomain
# 'some.subdomain'

res.domain
# 'google'

res.tld
# 'co.uk'

res.fld
# 'google.co.uk'

res.parsed_url
# SplitResult(
#     scheme='http',
#     netloc='some.subdomain.google.co.uk',
#     path='',
#     query='',
#     fragment=''
# )
```

### [Get TLD name, **ignoring the missing protocol**](#id8)[¶](#get-tld-name-ignoring-the-missing-protocol)

```
from tld import get_tld, get_fld

get_tld("www.google.co.uk", fix_protocol=True)
# 'co.uk'

get_fld("www.google.co.uk", fix_protocol=True)
# 'google.co.uk'
```

### [Return TLD parts as tuple](#id9)[¶](#return-tld-parts-as-tuple)

```
from tld import parse_tld

parse_tld('http://www.google.com')
# 'com', 'google',
# 'www'
```

### [Get the first level domain name **as string** from the URL given](#id10)[¶](#get-the-first-level-domain-name-as-string-from-the-url-given)

```
from tld import get_fld

get_fld("http://www.google.co.uk")
# 'google.co.uk'

get_fld("http://www.google.idontexist", fail_silently=True)
# None
```

### [Check if some tld is a valid tld](#id11)[¶](#check-if-some-tld-is-a-valid-tld)

```
from tld import is_tld

is_tld('co.uk')  # True
is_tld('uk')  # True
is_tld('tld.doesnotexist')  # False
is_tld('www.google.com')  # False
```

[Update the list of TLD names](#id12)[¶](#update-the-list-of-tld-names)
---

To update/sync the tld names with the most recent versions run the following from your terminal:

```
update-tld-names
```

Or simply do:

```
from tld.utils import update_tld_names

update_tld_names()
```

Note that this will update all registered TLD source parsers (not only the list of TLD names taken from Mozilla). In order to run the update for a single parser, append the `uid` of that parser as an argument.

```
update-tld-names mozilla
```

[Custom TLD parsers](#id13)[¶](#custom-tld-parsers)
---

By default the list of TLD names is taken from Mozilla. Parsing is implemented in the `tld.utils.MozillaTLDSourceParser` class. If you want to use another parser, subclass `tld.base.BaseTLDSourceParser`, provide `uid`, `source_url`, `local_path` and implement the `get_tld_names` method. Take `tld.utils.MozillaTLDSourceParser` as a good example of such an implementation. You could then use `get_tld` (as well as the other `tld` module functions) as shown below:

```
from tld import get_tld
from some.module import CustomTLDSourceParser

get_tld(
    "http://www.google.co.uk",
    parser_class=CustomTLDSourceParser
)
```

[Custom list of TLD names](#id14)[¶](#custom-list-of-tld-names)
---

You could maintain your own custom version of the TLD names list (even multiple ones) and use them simultaneously with the built-in TLD names list.
You would then store them locally and provide a path to them as shown below:

```
from tld import get_tld
from tld.utils import BaseMozillaTLDSourceParser

class CustomBaseMozillaTLDSourceParser(BaseMozillaTLDSourceParser):

    uid: str = 'custom_mozilla'
    local_path: str = 'tests/res/effective_tld_names_custom.dat.txt'

get_tld(
    "http://www.foreverchild",
    parser_class=CustomBaseMozillaTLDSourceParser
)
# 'foreverchild'
```

Same goes for first level domain names:

```
from tld import get_fld

get_fld(
    "http://www.foreverchild",
    parser_class=CustomBaseMozillaTLDSourceParser
)
# 'www.foreverchild'
```

Note that in both examples shown above, the original TLD names file has been modified in the following way:

```
...
// ===BEGIN ICANN DOMAINS===

// This one actually does not exist, added for testing purposes
foreverchild
...
```

[Free up resources](#id15)[¶](#free-up-resources)
---

To free up memory occupied by the loading of custom TLD names, use the `reset_tld_names` function with the `tld_names_local_path` parameter.

```
from tld import get_tld, reset_tld_names

# Get TLD from a custom TLD names parser
get_tld(
    "http://www.foreverchild",
    parser_class=CustomBaseMozillaTLDSourceParser
)

# Free resources occupied by the custom TLD names list
reset_tld_names("tests/res/effective_tld_names_custom.dat.txt")
```

[Troubleshooting](#id16)[¶](#troubleshooting)
---

If somehow the domain names listed [here](https://publicsuffix.org/list/public_suffix_list.dat) are not recognised, make sure you have the most recent version of the TLD names in your virtual environment:

```
update-tld-names
```

To update the TLD names list for a single parser, specify it as an argument:

```
update-tld-names mozilla
```

[Testing](#id17)[¶](#testing)
---

Simply type:

```
pytest
```

Or use tox:

```
tox
```

Or use tox to check a specific env:

```
tox -e py39
```

[Writing documentation](#id18)[¶](#writing-documentation)
---

Keep the following hierarchy.
```
=====
title
=====

header
======

sub-header
----------

sub-sub-header
~~~~~~~~~~~~~~

sub-sub-sub-header
^^^^^^^^^^^^^^^^^^

sub-sub-sub-sub-header
++++++++++++++++++++++

sub-sub-sub-sub-sub-header
**************************
```

[License](#id19)[¶](#license)
---

MPL-1.1 OR GPL-2.0-only OR LGPL-2.1-or-later

[Support](#id20)[¶](#support)
---

For security issues contact me at the e-mail given in the [Author](#author) section. For overall issues, go to [GitHub](https://github.com/barseghyanartur/tld/issues).

[Author](#id21)[¶](#author)
---

<NAME> <[<EMAIL>](mailto:<EMAIL>yan%40<EMAIL>)>

[Project documentation](#id22)[¶](#project-documentation)
---

Contents:

Table of Contents

* [tld](#tld)
  + [Prerequisites](#prerequisites)
  + [Documentation](#documentation)
  + [Installation](#installation)
  + [Usage examples](#usage-examples)
    - [Get the TLD name **as string** from the URL given](#get-the-tld-name-as-string-from-the-url-given)
    - [Get the TLD as **an object**](#get-the-tld-as-an-object)
    - [Get TLD name, **ignoring the missing protocol**](#get-tld-name-ignoring-the-missing-protocol)
    - [Return TLD parts as tuple](#return-tld-parts-as-tuple)
    - [Get the first level domain name **as string** from the URL given](#get-the-first-level-domain-name-as-string-from-the-url-given)
    - [Check if some tld is a valid tld](#check-if-some-tld-is-a-valid-tld)
  + [Update the list of TLD names](#update-the-list-of-tld-names)
  + [Custom TLD parsers](#custom-tld-parsers)
  + [Custom list of TLD names](#custom-list-of-tld-names)
  + [Free up resources](#free-up-resources)
  + [Troubleshooting](#troubleshooting)
  + [Testing](#testing)
  + [Writing documentation](#writing-documentation)
  + [License](#license)
  + [Support](#support)
  + [Author](#author)
  + [Project documentation](#project-documentation)
  + [Indices and tables](#indices-and-tables)

### Security Policy[¶](#security-policy)

#### Reporting a Vulnerability[¶](#reporting-a-vulnerability)

**Do not report security issues on GitHub!** Please report security issues by emailing
<NAME> <[<EMAIL>](mailto:artur.<EMAIL>ghyan%40gmail.com)>.

#### Supported Versions[¶](#supported-versions)

**Make sure to use the latest version.** The three most recent `tld` release series receive security support. For example, during the development cycle leading to the release of `tld` 0.12.x, support will be provided for `tld` 0.11.x, 0.10.x and 0.9.x. Upon the release of `tld` 0.13.x, security support for `tld` 0.9.x will end.

```
┌─────────────────┬────────────────┐
│ Version         │ Supported      │
├─────────────────┼────────────────┤
│ 0.12.x          │ Yes            │
├─────────────────┼────────────────┤
│ 0.11.x          │ Yes            │
├─────────────────┼────────────────┤
│ 0.10.x          │ Yes            │
├─────────────────┼────────────────┤
│ 0.9.x           │ Yes            │
├─────────────────┼────────────────┤
│ < 0.9           │ No             │
└─────────────────┴────────────────┘
```

### Contributor Covenant Code of Conduct[¶](#contributor-covenant-code-of-conduct)

#### Our Pledge[¶](#our-pledge)

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
#### Our Standards[¶](#our-standards) Examples of behavior that contributes to a positive environment for our community include: * Demonstrating empathy and kindness toward other people * Being respectful of differing opinions, viewpoints, and experiences * Giving and gracefully accepting constructive feedback * Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience * Focusing on what is best not just for us as individuals, but for the overall community Examples of unacceptable behavior include: * The use of sexualized language or imagery, and sexual attention or advances of any kind * Trolling, insulting or derogatory comments, and personal or political attacks * Public or private harassment * Publishing others’ private information, such as a physical or email address, without their explicit permission * Other conduct which could reasonably be considered inappropriate in a professional setting #### Enforcement Responsibilities[¶](#enforcement-responsibilities) Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful. Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate. #### Scope[¶](#scope) This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. 
#### Enforcement[¶](#enforcement) Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at [<EMAIL>](mailto:<EMAIL>%40<EMAIL>.com). All complaints will be reviewed and investigated promptly and fairly. All community leaders are obligated to respect the privacy and security of the reporter of any incident. #### Enforcement Guidelines[¶](#enforcement-guidelines) Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct: ##### 1. Correction[¶](#correction) **Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community. **Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested. ##### 2. Warning[¶](#warning) **Community Impact**: A violation through a single incident or series of actions. **Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban. ##### 3. Temporary Ban[¶](#temporary-ban) **Community Impact**: A serious violation of community standards, including sustained inappropriate behavior. **Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban. ##### 4. 
Permanent Ban[¶](#permanent-ban) **Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals. **Consequence**: A permanent ban from any sort of public interaction within the community. #### Attribution[¶](#attribution) This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org), version 2.0, available at <https://www.contributor-covenant.org/version/2/0/code_of_conduct.html>. Community Impact Guidelines were inspired by [Mozilla’s code of conduct enforcement ladder](https://github.com/mozilla/diversity). For answers to common questions about this code of conduct, see the FAQ at <https://www.contributor-covenant.org/faq>. Translations are available at <https://www.contributor-covenant.org/translations>. ### Contributor guidelines[¶](#contributor-guidelines) #### Developer prerequisites[¶](#developer-prerequisites) ##### pre-commit[¶](#id1) Refer to [pre-commit](https://pre-commit.com/#installation) for installation instructions. TL;DR: ``` pip install pipx --user # Install pipx pipx install pre-commit # Install pre-commit pre-commit install # Install pre-commit hooks ``` Installing [pre-commit](https://pre-commit.com/#installation) will ensure you adhere to the project code quality standards. #### Code standards[¶](#code-standards) [black](https://black.readthedocs.io/), [isort](https://pycqa.github.io/isort/), [ruff](https://beta.ruff.rs/docs/) and [doc8](https://doc8.readthedocs.io/) will be automatically triggered by [pre-commit](https://pre-commit.com/#installation). Still, if you want to run checks manually: ``` ./scripts/black.sh ./scripts/doc8.sh ./scripts/isort.sh ./scripts/ruff.sh ``` #### Requirements[¶](#requirements) Requirements are compiled using [pip-tools](https://pip-tools.readthedocs.io/). 
``` ./scripts/compile_requirements.sh ``` #### Virtual environment[¶](#virtual-environment) You are advised to work in a virtual environment. TL;DR: ``` python -m venv env pip install -e . pip install -r requirements/test.txt ``` #### Documentation[¶](#id2) Check the [documentation](https://tld.readthedocs.io/#writing-documentation). #### Testing[¶](#id3) Check [testing](https://tld.readthedocs.io/#testing). If you introduce changes or fixes, make sure to test them locally using all supported environments. For that, use tox. ``` tox ``` In any case, GitHub Actions will catch potential errors, but using tox speeds things up. #### Pull requests[¶](#pull-requests) You can contribute to the project by making a [pull request](https://github.com/barseghyanartur/tld/pulls). For example: * To fix documentation typos. * To improve documentation (for instance, to add a new recipe or fix an existing recipe that doesn’t seem to work). * To improve performance. * To introduce a new feature. **General list to go through:** * Does your change require a documentation update? * Does your change require an update to tests? * Does your change rely on a third-party cloud-based service? If so, please make sure it’s added to the tests that should be retried a couple of times. Example: `@pytest.mark.flaky(reruns=5)`. **When fixing bugs (in addition to the general list):** * Make sure to add regression tests. **When adding a new feature (in addition to the general list):** * Check the licenses of added dependencies carefully and make sure to list them in [prerequisites](https://tld.readthedocs.io/#prerequisites). * Make sure to update the documentation (check whether the [installation](https://tld.readthedocs.io/#installation), [usage examples](https://tld.readthedocs.io/#usage-examples) and [prerequisites](https://tld.readthedocs.io/#prerequisites) require changes). #### Questions[¶](#questions) Questions can be asked on GitHub [discussions](https://github.com/barseghyanartur/tld/discussions).
#### Issues[¶](#id4) For reporting a bug or filing a feature request, use GitHub [issues](https://github.com/barseghyanartur/tld/issues). **Do not report security issues on GitHub**. Check the [support](https://tld.readthedocs.io/#support) section. ### Release history and notes[¶](#release-history-and-notes) [Sequence based identifiers](http://en.wikipedia.org/wiki/Software_versioning#Sequence-based_identifiers) are used for versioning (schema follows below): ``` major.minor[.revision] ``` * It’s always safe to upgrade within the same minor version (for example, from 0.3 to 0.3.4). * Minor version changes might be backwards incompatible. Read the release notes carefully before upgrading (for example, when upgrading from 0.3.4 to 0.4). * All backwards incompatible changes are mentioned in this document. #### 0.13[¶](#id1) 2023-02-28 * Drop Python 2.7, 3.5 and 3.6 support. The minimum required version is now Python 3.7. #### 0.12.7[¶](#id2) 2023-02-01 * Make sure to fail silently on bad URL patterns. * Tested against Python 3.11. * Tested against Python 3.10. * Updated bundled tld names. #### 0.12.6[¶](#id3) 2021-06-05 * Move `Registry` class from `tld.registry` to `tld.base`. * Reformat code using `black`. * Log information on updated resources of `update_tld_names`. #### 0.12.5[¶](#id4) 2021-01-11 Note Release dedicated to defenders of Armenia and Artsakh (Nagorno Karabakh) and all the victims of Turkish and Azerbaijani aggression. * Fixed lower-cased parsed_url attributes (SplitResult) when getting tld as object (as_object=True). #### 0.12.4[¶](#id5) 2021-01-02 * Tested against Python 3.9. #### 0.12.3[¶](#id6) 2020-11-26 * Separate parsers for (a) public and private and (b) public-only domains. This fixes a bug. The following code would have raised an exception in the past: ``` from tld import get_tld get_tld( 'http://silly.cc.ua', search_private=False ) ``` Now it would return `ua`:
``` get_tld( 'http://silly.cc.ua', search_private=False ) ``` If you want old behavior, do as follows: ``` from tld.utils import MozillaTLDSourceParser get_tld( 'http://silly.cc.ua', search_private=False, parser_class=MozillaTLDSourceParser ) ``` Same goes for `get_fld`, `process_url`, `parse_tld` and `is_tld` functions. #### 0.12.2[¶](#id7) 2020-05-20 * Add mozilla license to dist. * Fix MyPy issues. #### 0.12.1[¶](#id8) 2020-04-25 Note In commemoration of [Armenian Genocide](https://en.wikipedia.org/wiki/Armenian_Genocide). * Correctly handling domain names ending with dot(s). #### 0.12[¶](#id9) 2020-04-19 * Use Public Suffix list instead of deprecated Mozilla’s MXR. #### 0.11.11[¶](#id10) 2020-03-10 * Minor speed-ups, reduce memory usage. #### 0.11.10[¶](#id11) 2020-02-05 * Python 2.7 and 3.5 fixes. #### 0.11.9[¶](#id12) 2019-12-16 * Adding test TLDs list to the package. #### 0.11.8[¶](#id13) 2019-12-13 * Minor fixes in setup.py. #### 0.11.7[¶](#id14) 2019-12-13 Note There have been no code changes since 0.11.2. The only change is that support for Python 2.7 and 3.5 has been added. * Added support for Python 2.7. #### 0.11.6[¶](#id15) 2019-12-12 * Targeted releases for all supported Python versions. #### 0.11.5[¶](#id16) 2019-12-12 * Targeted releases for all supported Python versions. #### 0.11.4[¶](#id17) 2019-12-12 * Changed order of the releases (Python 3.6 and up come first, then Python 3.5). * Make all distributions except Python 3.5 universal. #### 0.11.3[¶](#id18) 2019-12-12 * Added missing resources to the Python 3.5 release. #### 0.11.2[¶](#id19) 2019-12-12 * Bring back Python 3.5 support. #### 0.11.1[¶](#id20) 2019-12-11 * Minor speed ups. * More on adding typing. #### 0.11[¶](#id21) 2019-12-09 Note Since introduction of parser classes, usage of `NAMES_SOURCE_URL` and `NAMES_LOCAL_PATH` of the `tld.conf` module is deprecated. Also, `tld_names_local_path` and `tld_names_source_url` arguments are deprecated as well. 
If you want to customise things, implement your own parser (inherit from `BaseTLDSourceParser`). * Drop support for Python versions prior to 3.6. * Clean-up dependencies. * Introduce parsers. * Drop `tld_names_source_url` and `tld_names_local_path` introduced in the previous release. * Minor speed-ups (including tests). #### 0.10[¶](#id22) 2019-11-27 Note This is the last release to support Python 2. * Make it possible to provide a custom path to the TLD names file. * Make it possible to free up some resources occupied due to loading custom tld names by calling the `reset_tld_names` function with `tld_names_local_path` parameter. #### 0.9.8[¶](#id23) 2019-11-15 * Fix for occasional issue when some domains are not correctly recognised. #### 0.9.7[¶](#id24) 2019-10-30 Note This release is dedicated to my newborn daughter. Happy birthday, my dear Ani. * Handling urls that are only a TLD. * Accepts already splitted URLs. * Tested against Python 3.8. #### 0.9.6[¶](#id25) 2019-09-12 * Fix for update-tld-names returns a non-zero exit code on success (introduced with optimisations in 0.9.4). * Minor tests improvements. #### 0.9.5[¶](#id26) 2019-09-11 * Tests improvements. #### 0.9.4[¶](#id27) 2019-09-11 * Optimisations in setup.py, tests and console scripts. * Skip testing the update-tld-names functionality if no internet is available. #### 0.9.3[¶](#id28) 2019-04-05 * Added is_tld function. * Docs updated. * Upgrade test suite. #### 0.9.2[¶](#id29) 2019-01-10 * Fix an issue causing certain punycode TLDs to be deemed invalid. * Tested against Python 3.7. * Added tests for commands. * Dropped Python 2.6 support. * TLD source updated to the latest version. #### 0.9.1[¶](#id30) 2018-07-09 * Correctly handling nested TLDs. #### 0.9[¶](#id31) 2018-06-14 Note This release contains backward incompatible changes. You should update your code. The `active_only` option has been removed from `get_tld`, `get_fld` and `parse_url` functions. Update your code accordingly. 
* Removed `active_only` option from `get_tld`, `get_fld` and `parse_url` functions. * Correctly handling exceptions (!) in the original TLD list. * Fixes in documentation. * Added `parse_tld` function. * Fixes the `python setup.py test` command. #### 0.8[¶](#id32) 2018-06-13 Note This release contains backward incompatible changes. You should update your code. Old `get_tld` functionality is moved to `get_fld` (first-level domain definition). The `as_object` argument (False by default) has been deprecated for `get_fld`. ``` res = get_tld("http://www.google.co.uk", as_object=True) ``` **Old behaviour** ``` In: res.domain Out: 'google' In: res.extension Out: 'co.uk' In: res.subdomain Out: 'www' In: res.suffix Out: 'co.uk' In: res.tld Out: 'google.co.uk' ``` **New behaviour** ``` In: res.fld Out: 'google.co.uk' In: res.tld Out: 'co.uk' In: res.domain Out: 'google' In: res.subdomain Out: 'www' ``` When used without `as_object` it returns `co.uk`. **Recap** If you have been happily using the old version of the `get_tld` function without the `as_object` argument set to `True`, you might want to replace the `get_tld` import with the `get_fld` import: ``` # Old from tld import get_tld get_tld('http://google.co.uk') # New from tld import get_fld get_fld('http://google.co.uk') ``` * Move to a Trie to match TLDs. This brings a speed-up of 15-20%. * It’s now possible to search in public, private or all suffixes (old behaviour). Use the `search_public` and `search_private` arguments accordingly. By default (to support old behaviour), both are set to `True`. * Correct TLD definitions. * Domains like `*.xn--fiqs8s` are now recognized as well. * Due to usage of `urlsplit` instead of `urlparse`, the initial list of TLDs is assembled quicker (a speed-up of 15-20%). * The docs/ directory is included in the source distribution tarball. * More tests. #### 0.7.10[¶](#id33) 2018-04-07 * The `fix_protocol` argument respects protocol-relative URLs. * Change year in the license. * Improved docstrings.
* TLD source updated to the latest version. #### 0.7.9[¶](#id34) 2017-05-02 * Added base path override for the local .dat file. * `python setup.py test` can be used to execute the tests. #### 0.7.8[¶](#id35) 2017-02-19 * Fix relative import in non-package for the update-tld-names script. #15 * `get_tld` got a new argument `fix_protocol`, which fixes a missing protocol by prepending "https" when it is missing or incorrect. #### 0.7.7[¶](#id36) 2017-02-09 * Tested against Python 3.5, 3.6 and PyPy. * PEP8 fixes. * Removed the deprecated `tld.update` module. Use the `update-tld-names` command instead. #### 0.7.6[¶](#id37) 2016-01-23 * Minor fixes. #### 0.7.5[¶](#id38) 2015-11-22 * Minor fixes. * Updated the tld names file to the latest version. #### 0.7.4[¶](#id39) 2015-09-24 * Exposed TLD initialization as `get_tld_names`. #### 0.7.3[¶](#id40) 2015-07-18 * Support for wheel packages. * Fixed failure on some unicode domains. * TLD source updated to the latest version. * Documentation updated. #### 0.7.2[¶](#id41) 2014-09-28 * Minor fixes. #### 0.7.1[¶](#id42) 2014-09-23 * Force lower case of the URL for correct search. #### 0.7[¶](#id43) 2014-08-14 * Made it possible to obtain an object instead of just extracting the TLD by setting the `as_object` argument of the `get_tld` function to True. #### 0.6.4[¶](#id44) 2014-05-21 * Softened dependencies and lowered the `six` package version requirement to 1.4.0. * Documentation improvements. #### 0.6.3[¶](#id45) 2013-12-05 * Speed up search. #### 0.6.2[¶](#id46) 2013-12-03 * Fix for URLs with a port not handled correctly. * Adding licenses. #### 0.6.1[¶](#id47) 2013-09-15 * Minor fixes. * Credits added. #### 0.6[¶](#id48) 2013-09-12 * Fixes for Python 3 (Windows encoding). #### 0.5[¶](#id49) 2013-09-13 * Python 3 support added. #### 0.4[¶](#id50) 2013-08-03 * Tiny code improvements. * Tests added.
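The 0.8 release notes above mention moving to a Trie to match TLDs, and the package reference describes an ad hoc Trie storing TLDs in reverse notation order. A minimal, self-contained sketch of such reverse-notation suffix matching might look like this (a hypothetical simplification for illustration, not the library's actual implementation):

```python
class TrieNode:
    """A single node: child labels plus a flag marking a stored suffix."""

    def __init__(self):
        self.children = {}
        self.leaf = False


class Trie:
    """Stores TLDs label-by-label in reverse order, e.g. 'co.uk' as uk -> co."""

    def __init__(self):
        self.root = TrieNode()

    def add(self, tld):
        node = self.root
        for label in reversed(tld.split(".")):
            node = node.children.setdefault(label, TrieNode())
        node.leaf = True

    def longest_suffix(self, hostname):
        # Walk the hostname's labels right-to-left, remembering the
        # deepest stored suffix seen so far.
        labels = hostname.split(".")
        node, match = self.root, None
        for i in range(len(labels) - 1, -1, -1):
            node = node.children.get(labels[i])
            if node is None:
                break
            if node.leaf:
                match = ".".join(labels[i:])
        return match


trie = Trie()
for name in ("uk", "co.uk", "com"):
    trie.add(name)

print(trie.longest_suffix("www.google.co.uk"))  # co.uk
print(trie.longest_suffix("www.google.com"))    # com
```

Matching right-to-left is what makes multi-label suffixes like `co.uk` win over plain `uk`, without scanning the whole suffix list per lookup.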
### tld package[¶](#tld-package) #### Submodules[¶](#submodules) #### tld.base module[¶](#module-tld.base) *class* `tld.base.``BaseTLDSourceParser`[[source]](_modules/tld/base.html#BaseTLDSourceParser)[¶](#tld.base.BaseTLDSourceParser) Bases: `object` Base TLD source parser. *classmethod* `get_tld_names`(*fail_silently: bool = False*, *retry_count: int = 0*)[[source]](_modules/tld/base.html#BaseTLDSourceParser.get_tld_names)[¶](#tld.base.BaseTLDSourceParser.get_tld_names) Get tld names. | Parameters: | * **fail_silently** – * **retry_count** – | | Returns: | | `include_private` *= True*[¶](#tld.base.BaseTLDSourceParser.include_private) `uid` *= None*[¶](#tld.base.BaseTLDSourceParser.uid) *classmethod* `update_tld_names`(*fail_silently: bool = False*) → bool[[source]](_modules/tld/base.html#BaseTLDSourceParser.update_tld_names)[¶](#tld.base.BaseTLDSourceParser.update_tld_names) Update the local copy of the TLD file. | Parameters: | **fail_silently** – | | Returns: | | *classmethod* `validate`()[[source]](_modules/tld/base.html#BaseTLDSourceParser.validate)[¶](#tld.base.BaseTLDSourceParser.validate) Constructor. 
*class* `tld.base.``Registry`[[source]](_modules/tld/base.html#Registry)[¶](#tld.base.Registry) Bases: `type` `REGISTRY` *= {'mozilla': <class 'tld.utils.MozillaTLDSourceParser'>, 'mozilla_public_only': <class 'tld.utils.MozillaPublicOnlyTLDSourceParser'>}*[¶](#tld.base.Registry.REGISTRY) *classmethod* `get`(*key: str*, *default: Optional[tld.base.BaseTLDSourceParser] = None*) → Optional[tld.base.BaseTLDSourceParser][[source]](_modules/tld/base.html#Registry.get)[¶](#tld.base.Registry.get) *classmethod* `items`() → ItemsView[str, tld.base.BaseTLDSourceParser][[source]](_modules/tld/base.html#Registry.items)[¶](#tld.base.Registry.items) *classmethod* `reset`() → None[[source]](_modules/tld/base.html#Registry.reset)[¶](#tld.base.Registry.reset) #### tld.conf module[¶](#module-tld.conf) #### tld.defaults module[¶](#module-tld.defaults) #### tld.exceptions module[¶](#module-tld.exceptions) *exception* `tld.exceptions.``TldBadUrl`(*url*)[[source]](_modules/tld/exceptions.html#TldBadUrl)[¶](#tld.exceptions.TldBadUrl) Bases: `ValueError` TldBadUrl. Supposed to be thrown when bad URL is given. *exception* `tld.exceptions.``TldDomainNotFound`(*domain_name*)[[source]](_modules/tld/exceptions.html#TldDomainNotFound)[¶](#tld.exceptions.TldDomainNotFound) Bases: `ValueError` TldDomainNotFound. Supposed to be thrown when domain name is not found (didn’t match) the local TLD policy. *exception* `tld.exceptions.``TldImproperlyConfigured`[[source]](_modules/tld/exceptions.html#TldImproperlyConfigured)[¶](#tld.exceptions.TldImproperlyConfigured) Bases: `Exception` TldImproperlyConfigured. Supposed to be thrown when code is improperly configured. Typical use-case is when user tries to use get_tld function with both search_public and search_private set to False. *exception* `tld.exceptions.``TldIOError`[[source]](_modules/tld/exceptions.html#TldIOError)[¶](#tld.exceptions.TldIOError) Bases: `OSError` TldIOError. Supposed to be thrown when problems with reading/writing occur. 
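The `TldImproperlyConfigured` use-case described above (calling `get_tld` with both `search_public` and `search_private` set to False) can be illustrated with a small stand-in sketch. The exception class and guard function here are hypothetical re-creations for illustration, not imports from the package:

```python
class TldImproperlyConfigured(Exception):
    """Stand-in for tld.exceptions.TldImproperlyConfigured."""


def check_search_scope(search_public: bool, search_private: bool) -> None:
    # Mirrors the documented rule: at least one search scope must be enabled.
    if not (search_public or search_private):
        raise TldImproperlyConfigured(
            "At least one of `search_public` or `search_private` must be True."
        )


check_search_scope(search_public=True, search_private=False)  # OK, no exception

try:
    check_search_scope(search_public=False, search_private=False)
except TldImproperlyConfigured as exc:
    print(f"improperly configured: {exc}")
```

Raising a dedicated exception type (rather than a bare `ValueError`) lets callers distinguish configuration mistakes from bad input URLs (`TldBadUrl`) or lookup misses (`TldDomainNotFound`).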
#### tld.helpers module[¶](#module-tld.helpers) `tld.helpers.``project_dir`(*base: str*) → str[[source]](_modules/tld/helpers.html#project_dir)[¶](#tld.helpers.project_dir) Project dir. `tld.helpers.``PROJECT_DIR`(*base: str*) → str[¶](#tld.helpers.PROJECT_DIR) Project dir. #### tld.registry module[¶](#module-tld.registry) *class* `tld.registry.``Registry`[[source]](_modules/tld/base.html#Registry)[¶](#tld.registry.Registry) Bases: `type` `REGISTRY` *= {'mozilla': <class 'tld.utils.MozillaTLDSourceParser'>, 'mozilla_public_only': <class 'tld.utils.MozillaPublicOnlyTLDSourceParser'>}*[¶](#tld.registry.Registry.REGISTRY) *classmethod* `get`(*key: str*, *default: Optional[tld.base.BaseTLDSourceParser] = None*) → Optional[tld.base.BaseTLDSourceParser][[source]](_modules/tld/base.html#Registry.get)[¶](#tld.registry.Registry.get) *classmethod* `items`() → ItemsView[str, tld.base.BaseTLDSourceParser][[source]](_modules/tld/base.html#Registry.items)[¶](#tld.registry.Registry.items) *classmethod* `reset`() → None[[source]](_modules/tld/base.html#Registry.reset)[¶](#tld.registry.Registry.reset) #### tld.result module[¶](#module-tld.result) *class* `tld.result.``Result`(*tld: str*, *domain: str*, *subdomain: str*, *parsed_url: urllib.parse.SplitResult*)[[source]](_modules/tld/result.html#Result)[¶](#tld.result.Result) Bases: `object` Container. `domain`[¶](#tld.result.Result.domain) `extension`[¶](#tld.result.Result.extension) Alias of `tld`. | Return str: | | `fld`[¶](#tld.result.Result.fld) First level domain. | Returns: | | | Return type: | str | `parsed_url`[¶](#tld.result.Result.parsed_url) `subdomain`[¶](#tld.result.Result.subdomain) `suffix`[¶](#tld.result.Result.suffix) Alias of `tld`. | Return str: | | `tld`[¶](#tld.result.Result.tld) #### tld.trie module[¶](#module-tld.trie) *class* `tld.trie.``Trie`[[source]](_modules/tld/trie.html#Trie)[¶](#tld.trie.Trie) Bases: `object` An adhoc Trie data structure to store tlds in reverse notation order. 
`add`(*tld: str*, *private: bool = False*) → None[[source]](_modules/tld/trie.html#Trie.add)[¶](#tld.trie.Trie.add) *class* `tld.trie.``TrieNode`[[source]](_modules/tld/trie.html#TrieNode)[¶](#tld.trie.TrieNode) Bases: `object` Class representing a single Trie node. `children`[¶](#tld.trie.TrieNode.children) `exception`[¶](#tld.trie.TrieNode.exception) `leaf`[¶](#tld.trie.TrieNode.leaf) `private`[¶](#tld.trie.TrieNode.private) #### tld.utils module[¶](#module-tld.utils) *class* `tld.utils.``BaseMozillaTLDSourceParser`[[source]](_modules/tld/utils.html#BaseMozillaTLDSourceParser)[¶](#tld.utils.BaseMozillaTLDSourceParser) Bases: [`tld.base.BaseTLDSourceParser`](#tld.base.BaseTLDSourceParser) *classmethod* `get_tld_names`(*fail_silently: bool = False*, *retry_count: int = 0*) → Optional[Dict[str, tld.trie.Trie]][[source]](_modules/tld/utils.html#BaseMozillaTLDSourceParser.get_tld_names)[¶](#tld.utils.BaseMozillaTLDSourceParser.get_tld_names) Parse. | Parameters: | * **fail_silently** – * **retry_count** – | | Returns: | | `tld.utils.``get_fld`(*url: Union[str, urllib.parse.SplitResult], fail_silently: bool = False, fix_protocol: bool = False, search_public: bool = True, search_private: bool = True, parser_class: Type[tld.base.BaseTLDSourceParser] = None, **kwargs*) → Optional[str][[source]](_modules/tld/utils.html#get_fld)[¶](#tld.utils.get_fld) Extract the first level domain. Extract the top level domain based on Mozilla’s effective TLD names dat file. Returns a string. May throw `TldBadUrl` or `TldDomainNotFound` exceptions if a bad URL is provided or no TLD match is found, respectively. | Parameters: | * **url** (*str | SplitResult*) – URL to get top level domain from. * **fail_silently** (*bool*) – If set to True, no exceptions are raised and None is returned on failure. * **fix_protocol** (*bool*) – If set to True, missing or wrong protocol is ignored (https is appended instead). * **search_public** (*bool*) – If set to True, search in public domains. 
* **search_private** (*bool*) – If set to True, search in private domains. * **parser_class** – | | Returns: | String with top level domain (if `as_object` argument is set to False) or a `tld.utils.Result` object (if `as_object` argument is set to True); returns None on failure. | | Return type: | str | `tld.utils.``get_tld`(*url: Union[str, urllib.parse.SplitResult], fail_silently: bool = False, as_object: bool = False, fix_protocol: bool = False, search_public: bool = True, search_private: bool = True, parser_class: Type[tld.base.BaseTLDSourceParser] = None*) → Union[str, tld.result.Result, None][[source]](_modules/tld/utils.html#get_tld)[¶](#tld.utils.get_tld) Extract the top level domain. Extract the top level domain based on Mozilla’s effective TLD names dat file. Returns a string. May throw `TldBadUrl` or `TldDomainNotFound` exceptions if a bad URL is provided or no TLD match is found, respectively. | Parameters: | * **url** (*str | SplitResult*) – URL to get top level domain from. * **fail_silently** (*bool*) – If set to True, no exceptions are raised and None is returned on failure. * **as_object** (*bool*) – If set to True, a `tld.utils.Result` object is returned, with `domain`, `suffix` and `tld` properties. * **fix_protocol** (*bool*) – If set to True, missing or wrong protocol is ignored (https is appended instead). * **search_public** (*bool*) – If set to True, search in public domains. * **search_private** (*bool*) – If set to True, search in private domains. * **parser_class** – | | Returns: | String with top level domain (if `as_object` argument is set to False) or a `tld.utils.Result` object (if `as_object` argument is set to True); returns None on failure. 
| | Return type: | str | `tld.utils.``get_tld_names`(*fail_silently: bool = False*, *retry_count: int = 0*, *parser_class: Type[tld.base.BaseTLDSourceParser] = None*) → Dict[str, tld.trie.Trie][[source]](_modules/tld/utils.html#get_tld_names)[¶](#tld.utils.get_tld_names) Build the `tlds` list if empty. Recursive. | Parameters: | * **fail_silently** (*bool*) – If set to True, no exceptions are raised and None is returned on failure. * **retry_count** (*int*) – If greater than 1, we raise an exception in order to avoid infinite loops. * **parser_class** ([*BaseTLDSourceParser*](index.html#tld.base.BaseTLDSourceParser)) – | | Returns: | List of TLD names | | Return type: | obj:tld.utils.Trie | `tld.utils.``get_tld_names_container`() → Dict[str, tld.trie.Trie][[source]](_modules/tld/utils.html#get_tld_names_container)[¶](#tld.utils.get_tld_names_container) Get container of all tld names. | Returns: | | | Rtype dict: | | `tld.utils.``is_tld`(*value: Union[str, urllib.parse.SplitResult], search_public: bool = True, search_private: bool = True, parser_class: Type[tld.base.BaseTLDSourceParser] = None*) → bool[[source]](_modules/tld/utils.html#is_tld)[¶](#tld.utils.is_tld) Check if given URL is tld. | Parameters: | * **value** (*str*) – URL to get top level domain from. * **search_public** (*bool*) – If set to True, search in public domains. * **search_private** (*bool*) – If set to True, search in private domains. * **parser_class** – | | Returns: | | | Return type: | bool | *class* `tld.utils.``MozillaTLDSourceParser`[[source]](_modules/tld/utils.html#MozillaTLDSourceParser)[¶](#tld.utils.MozillaTLDSourceParser) Bases: [`tld.utils.BaseMozillaTLDSourceParser`](#tld.utils.BaseMozillaTLDSourceParser) Mozilla TLD source. 
`local_path` *= 'res/effective_tld_names.dat.txt'*[¶](#tld.utils.MozillaTLDSourceParser.local_path) `source_url` *= 'https://publicsuffix.org/list/public_suffix_list.dat'*[¶](#tld.utils.MozillaTLDSourceParser.source_url) `uid` *= 'mozilla'*[¶](#tld.utils.MozillaTLDSourceParser.uid) *class* `tld.utils.``MozillaPublicOnlyTLDSourceParser`[[source]](_modules/tld/utils.html#MozillaPublicOnlyTLDSourceParser)[¶](#tld.utils.MozillaPublicOnlyTLDSourceParser) Bases: [`tld.utils.BaseMozillaTLDSourceParser`](#tld.utils.BaseMozillaTLDSourceParser) Mozilla TLD source. `include_private` *= False*[¶](#tld.utils.MozillaPublicOnlyTLDSourceParser.include_private) `local_path` *= 'res/effective_tld_names_public_only.dat.txt'*[¶](#tld.utils.MozillaPublicOnlyTLDSourceParser.local_path) `source_url` *= 'https://publicsuffix.org/list/public_suffix_list.dat?publiconly'*[¶](#tld.utils.MozillaPublicOnlyTLDSourceParser.source_url) `uid` *= 'mozilla_public_only'*[¶](#tld.utils.MozillaPublicOnlyTLDSourceParser.uid) `tld.utils.``parse_tld`(*url: Union[str, urllib.parse.SplitResult], fail_silently: bool = False, fix_protocol: bool = False, search_public: bool = True, search_private: bool = True, parser_class: Type[tld.base.BaseTLDSourceParser] = None*) → Union[Tuple[None, None, None], Tuple[str, str, str]][[source]](_modules/tld/utils.html#parse_tld)[¶](#tld.utils.parse_tld) Parse TLD into parts. | Parameters: | * **url** – * **fail_silently** – * **fix_protocol** – * **search_public** – * **search_private** – * **parser_class** – | | Returns: | Tuple (tld, domain, subdomain) | | Return type: | tuple | `tld.utils.``pop_tld_names_container`(*tld_names_local_path: str*) → None[[source]](_modules/tld/utils.html#pop_tld_names_container)[¶](#tld.utils.pop_tld_names_container) Remove TLD names container item. 
| Parameters: | **tld_names_local_path** – | | Returns: | | `tld.utils.``process_url`(*url: Union[str, urllib.parse.SplitResult], fail_silently: bool = False, fix_protocol: bool = False, search_public: bool = True, search_private: bool = True, parser_class: Type[tld.base.BaseTLDSourceParser] = <class 'tld.utils.MozillaTLDSourceParser'>*) → Union[Tuple[List[str], int, urllib.parse.SplitResult], Tuple[None, None, urllib.parse.SplitResult]][[source]](_modules/tld/utils.html#process_url)[¶](#tld.utils.process_url) Process URL. | Parameters: | * **parser_class** – * **url** – * **fail_silently** – * **fix_protocol** – * **search_public** – * **search_private** – | | Returns: | | `tld.utils.``reset_tld_names`(*tld_names_local_path: str = None*) → None[[source]](_modules/tld/utils.html#reset_tld_names)[¶](#tld.utils.reset_tld_names) Reset the `tld_names` to empty value. If `tld_names_local_path` is given, removes specified entry from `tld_names` instead. | Parameters: | **tld_names_local_path** (*str*) – | | Returns: | | *class* `tld.utils.``Result`(*tld: str*, *domain: str*, *subdomain: str*, *parsed_url: urllib.parse.SplitResult*)[[source]](_modules/tld/result.html#Result)[¶](#tld.utils.Result) Bases: `object` Container. `domain`[¶](#tld.utils.Result.domain) `extension`[¶](#tld.utils.Result.extension) Alias of `tld`. | Return str: | | `fld`[¶](#tld.utils.Result.fld) First level domain. | Returns: | | | Return type: | str | `parsed_url`[¶](#tld.utils.Result.parsed_url) `subdomain`[¶](#tld.utils.Result.subdomain) `suffix`[¶](#tld.utils.Result.suffix) Alias of `tld`. | Return str: | | `tld`[¶](#tld.utils.Result.tld) `tld.utils.``update_tld_names`[[source]](_modules/tld/utils.html#update_tld_names)[¶](#tld.utils.update_tld_names) Update TLD names. 
| Parameters: | * **fail_silently** – * **parser_uid** – | | Returns: | | `tld.utils.``update_tld_names_cli`() → int[[source]](_modules/tld/utils.html#update_tld_names_cli)[¶](#tld.utils.update_tld_names_cli) CLI wrapper for update_tld_names. Since update_tld_names returns True on success, we need to negate the result to match CLI semantics. `tld.utils.``update_tld_names_container`(*tld_names_local_path: str*, *trie_obj: tld.trie.Trie*) → None[[source]](_modules/tld/utils.html#update_tld_names_container)[¶](#tld.utils.update_tld_names_container) Update TLD names container item. | Parameters: | * **tld_names_local_path** – * **trie_obj** – | | Returns: | | #### Module contents[¶](#module-tld) `tld.``get_fld`(*url: Union[str, urllib.parse.SplitResult], fail_silently: bool = False, fix_protocol: bool = False, search_public: bool = True, search_private: bool = True, parser_class: Type[tld.base.BaseTLDSourceParser] = None, **kwargs*) → Optional[str][[source]](_modules/tld/utils.html#get_fld)[¶](#tld.get_fld) Extract the first level domain. Extract the top level domain based on Mozilla’s effective TLD names dat file. Returns a string. May throw `TldBadUrl` or `TldDomainNotFound` exceptions if a bad URL is provided or no TLD match is found, respectively. | Parameters: | * **url** (*str | SplitResult*) – URL to get top level domain from. * **fail_silently** (*bool*) – If set to True, no exceptions are raised and None is returned on failure. * **fix_protocol** (*bool*) – If set to True, missing or wrong protocol is ignored (https is appended instead). * **search_public** (*bool*) – If set to True, search in public domains. * **search_private** (*bool*) – If set to True, search in private domains. * **parser_class** – | | Returns: | String with top level domain (if `as_object` argument is set to False) or a `tld.utils.Result` object (if `as_object` argument is set to True); returns None on failure. 
| | Return type: | str | `tld.``get_tld`(*url: Union[str, urllib.parse.SplitResult], fail_silently: bool = False, as_object: bool = False, fix_protocol: bool = False, search_public: bool = True, search_private: bool = True, parser_class: Type[tld.base.BaseTLDSourceParser] = None*) → Union[str, tld.result.Result, None][[source]](_modules/tld/utils.html#get_tld)[¶](#tld.get_tld) Extract the top level domain. Extract the top level domain based on Mozilla’s effective TLD names dat file. Returns a string. May throw `TldBadUrl` or `TldDomainNotFound` exceptions if a bad URL is provided or no TLD match is found, respectively. | Parameters: | * **url** (*str | SplitResult*) – URL to get top level domain from. * **fail_silently** (*bool*) – If set to True, no exceptions are raised and None is returned on failure. * **as_object** (*bool*) – If set to True, a `tld.utils.Result` object is returned, with `domain`, `suffix` and `tld` properties. * **fix_protocol** (*bool*) – If set to True, missing or wrong protocol is ignored (https is appended instead). * **search_public** (*bool*) – If set to True, search in public domains. * **search_private** (*bool*) – If set to True, search in private domains. * **parser_class** – | | Returns: | String with top level domain (if `as_object` argument is set to False) or a `tld.utils.Result` object (if `as_object` argument is set to True); returns None on failure. | | Return type: | str | `tld.``get_tld_names`(*fail_silently: bool = False*, *retry_count: int = 0*, *parser_class: Type[tld.base.BaseTLDSourceParser] = None*) → Dict[str, tld.trie.Trie][[source]](_modules/tld/utils.html#get_tld_names)[¶](#tld.get_tld_names) Build the `tlds` list if empty. Recursive. | Parameters: | * **fail_silently** (*bool*) – If set to True, no exceptions are raised and None is returned on failure. * **retry_count** (*int*) – If greater than 1, we raise an exception in order to avoid infinite loops. 
* **parser_class** ([*BaseTLDSourceParser*](index.html#tld.base.BaseTLDSourceParser)) – | | Returns: | List of TLD names | | Return type: | obj:tld.utils.Trie | `tld.``is_tld`(*value: Union[str, urllib.parse.SplitResult], search_public: bool = True, search_private: bool = True, parser_class: Type[tld.base.BaseTLDSourceParser] = None*) → bool[[source]](_modules/tld/utils.html#is_tld)[¶](#tld.is_tld) Check if given URL is tld. | Parameters: | * **value** (*str*) – URL to get top level domain from. * **search_public** (*bool*) – If set to True, search in public domains. * **search_private** (*bool*) – If set to True, search in private domains. * **parser_class** – | | Returns: | | | Return type: | bool | `tld.``parse_tld`(*url: Union[str, urllib.parse.SplitResult], fail_silently: bool = False, fix_protocol: bool = False, search_public: bool = True, search_private: bool = True, parser_class: Type[tld.base.BaseTLDSourceParser] = None*) → Union[Tuple[None, None, None], Tuple[str, str, str]][[source]](_modules/tld/utils.html#parse_tld)[¶](#tld.parse_tld) Parse TLD into parts. | Parameters: | * **url** – * **fail_silently** – * **fix_protocol** – * **search_public** – * **search_private** – * **parser_class** – | | Returns: | Tuple (tld, domain, subdomain) | | Return type: | tuple | *class* `tld.``Result`(*tld: str*, *domain: str*, *subdomain: str*, *parsed_url: urllib.parse.SplitResult*)[[source]](_modules/tld/result.html#Result)[¶](#tld.Result) Bases: `object` Container. `domain`[¶](#tld.Result.domain) `extension`[¶](#tld.Result.extension) Alias of `tld`. | Return str: | | `fld`[¶](#tld.Result.fld) First level domain. | Returns: | | | Return type: | str | `parsed_url`[¶](#tld.Result.parsed_url) `subdomain`[¶](#tld.Result.subdomain) `suffix`[¶](#tld.Result.suffix) Alias of `tld`. | Return str: | | `tld`[¶](#tld.Result.tld) `tld.``update_tld_names`[[source]](_modules/tld/utils.html#update_tld_names)[¶](#tld.update_tld_names) Update TLD names. 
| Parameters: | * **fail_silently** – * **parser_uid** – | | Returns: | | [Indices and tables](#id23)[¶](#indices-and-tables) --- * [Index](genindex.html) * [Module Index](py-modindex.html) * [Search Page](search.html)
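The `tld.trie.Trie` documented above stores TLDs in reverse notation order, so a hostname is matched by walking its labels right to left. A minimal, self-contained sketch of that idea follows — the class and attribute names mirror the docs, but the `match` helper is hypothetical, and tld's real trie additionally tracks exception rules and private entries:

```python
# Sketch of a reverse-notation trie for TLD suffixes: "co.uk" is stored
# as uk -> co, so lookup walks a hostname's labels from right to left.

class TrieNode:
    def __init__(self):
        self.children = {}
        self.leaf = False
        self.private = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def add(self, tld, private=False):
        node = self.root
        for label in reversed(tld.split(".")):  # reverse notation order
            node = node.children.setdefault(label, TrieNode())
        node.leaf = True
        node.private = private

    def match(self, hostname):
        """Return the longest registered suffix of hostname, or None."""
        node, suffix, best = self.root, [], None
        for label in reversed(hostname.split(".")):
            if label not in node.children:
                break
            node = node.children[label]
            suffix.insert(0, label)
            if node.leaf:
                best = ".".join(suffix)
        return best

trie = Trie()
trie.add("uk")
trie.add("co.uk")
```

With both `uk` and `co.uk` registered, `trie.match("www.example.co.uk")` keeps extending the suffix as long as labels match, so the longest match `co.uk` wins over `uk`.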
github.com/iwilltry42/k3d-go
go
Go
README [¶](#section-readme) --- ### k3d-go [![Build Status](https://travis-ci.com/iwilltry42/k3d-go.svg?branch=master)](https://travis-ci.com/iwilltry42/k3d-go) [![Go Report Card](https://goreportcard.com/badge/github.com/iwilltry42/k3d-go)](https://goreportcard.com/report/github.com/iwilltry42/k3d-go) #### k3s in docker k3s is the lightweight Kubernetes distribution by Rancher: [rancher/k3s](https://github.com/rancher/k3s) This repository is basically [zeerorg/k3s-in-docker](https://github.com/zeerorg/k3s-in-docker) reimplemented in Golang with some different/new functionality... just because I didn't have time to learn Rust. Thanks to @zeerorg for the original work! #### Requirements * docker #### Install You have several options: * use the install script to grab the latest release: + wget: `wget -q -O - https://raw.githubusercontent.com/iwilltry42/k3d-go/master/install.sh | bash` + curl: `curl -s https://raw.githubusercontent.com/iwilltry42/k3d-go/master/install.sh | bash` * Grab a release from the [release tab](https://github.com/iwilltry42/k3d-go/releases) and install it yourself. * Via go: `go install github.com/iwilltry42/k3d-go` or... #### Build 1. Clone this repo, e.g. via `go get -u github.com/iwilltry42/k3d-go` 2. Inside the repo run * `make` to build for your current system * `go install` to install it to your `GOPATH` * `make build-cross` to build for all systems #### Usage Check out what you can do via `k3d help` Example Workflow: Create a new cluster and use it with `kubectl` 1. `k3d create` to create a new single-node cluster (docker container) 2. `export KUBECONFIG=$(k3d get-kubeconfig)` to make `kubectl` use the kubeconfig for that cluster 3. execute some commands like `kubectl get pods --all-namespaces` 4. 
`k3d delete` to delete the default cluster #### TODO * Use the docker client library instead of commands * Test the docker version * Improve cluster state management * Use [sirupsen/logrus](https://github.com/sirupsen/logrus) for prettier logs * Add install script Documentation [¶](#section-documentation) --- ![The Go Gopher](/static/shared/gopher/airplane-1200x945.svg) There is no documentation for this package.
@bughiding/echarts-gl
npm
JavaScript
ECHARTS-GL === ECharts-GL is an extension pack of [Apache ECharts](http://echarts.apache.org/), which provides 3D plots, globe visualization and WebGL acceleration. Docs --- * [Option Manual](https://echarts.apache.org/zh/option-gl.html) * [Gallery](http://gallery.echartsjs.com/explore.html#tags=echarts-gl) Installing --- ### npm and webpack ``` npm install echarts npm install echarts-gl ``` #### Import all ``` import * as echarts from 'echarts'; import 'echarts-gl'; ``` #### Minimal Import ``` import * as echarts from 'echarts/core'; import { Scatter3DChart } from 'echarts-gl/charts'; import { Grid3DComponent } from 'echarts-gl/components'; echarts.use([Scatter3DChart, Grid3DComponent]); ``` ### Include by scripts ``` <script src="https://cdn.jsdelivr.net/npm/echarts/dist/echarts.min.js"></script> <script src="https://cdn.jsdelivr.net/npm/echarts-gl/dist/echarts-gl.min.js"></script> ``` NOTE: ECharts GL 2.x is compatible with ECharts 5.x. ECharts GL 1.x is compatible with ECharts 4.x. Basic Usage --- ``` var chart = echarts.init(document.getElementById('main')); chart.setOption({ grid3D: {}, xAxis3D: {}, yAxis3D: {}, zAxis3D: {}, series: [{ type: 'scatter3D', symbolSize: 50, data: [[-1, -1, -1], [0, 0, 0], [1, 1, 1]], itemStyle: { opacity: 1 } }] }) ``` License --- ECharts-GL is available under the BSD license. Readme --- ### Keywords none
x12
cran
R
Package ‘x12’ October 14, 2022 Version 1.10.3 Date 2022-05-19 Title Interface to 'X12-ARIMA'/'X13-ARIMA-SEATS' and Structure for Batch Processing of Seasonal Adjustment Author <NAME> <<EMAIL>>, <NAME> Maintainer <NAME> <<EMAIL>> Depends R (>= 2.14.0),stats,utils,grDevices,x13binary Imports stringr,methods Suggests covr, parallel, tinytest Description The 'X13-ARIMA-SEATS' <https://www.census.gov/data/software/x13as.html> methodology and software is widely used and developed by the US Census Bureau. It can be accessed from 'R' with this package, and 'X13-ARIMA-SEATS' binaries are provided by the 'R' package 'x13binary'. License GPL (>= 2) LazyData TRUE ByteCompile TRUE URL https://github.com/statistikat/x12 NeedsCompilation no Repository CRAN Date/Publication 2022-05-19 09:20:02 UTC R topics documented: AirPassengersX12, AirPassengersX12Batch, crossVal, crossValidation-class, diagnostics-class, fbcast-class, getP-methods, loadP, plot-methods, plot.x12work, plotRsdAcf, plotSeasFac, plotSpec, prev-methods, readSpc, spectrum-class, summary-methods, summary.x12work, times, x12, x12BaseInfo-class, x12Batch-class, x12List-class, x12Output-class, x12Parameter-class, x12path, x12Single-class, x12work AirPassengersX12 x12Single object Description x12Single object with the AirPassengers time series Usage data(AirPassengersX12) Examples data(AirPassengersX12) summary(AirPassengersX12) summary(AirPassengersX12,oldOutput=10) AirPassengersX12Batch x12Batch object Description x12Batch object of four AirPassengers series with parameters and output objects Usage data(AirPassengersX12Batch) Examples data(AirPassengersX12Batch) summary(AirPassengersX12Batch) crossVal ~~ Methods for Function crossVal in Package x12 ~~ Description Cross validation with function crossVal in package x12. 
Usage ## S4 method for signature 'ts' crossVal(object, x12Parameter, x12BaseInfo, showCI=FALSE, main="Cross Validation", col_original="black", col_fc="#2020ff", col_bc="#2020ff", col_ci="#d1d1ff", col_cishade="#d1d1ff", lty_original=1, lty_fc=2, lty_bc=2, lty_ci=1, lwd_original=1, lwd_fc=1, lwd_bc=1, lwd_ci=1, ytop=1, points_bc=FALSE, points_fc=FALSE, points_original=FALSE, showLine=TRUE, col_line="grey", lty_line=3, ylab="Value", xlab="Date",ylim=NULL,span=NULL) ## S4 method for signature 'x12Single' crossVal(object, x12BaseInfo=new("x12BaseInfo"), showCI=FALSE, main="Cross Validation", col_original="black", col_fc="#2020ff", col_bc="#2020ff", col_ci="#d1d1ff", col_cishade="#d1d1ff", lty_original=1, lty_fc=2, lty_bc=2, lty_ci=1, lwd_original=1, lwd_fc=1, lwd_bc=1, lwd_ci=1, ytop=1, points_bc=FALSE, points_fc=FALSE, points_original=FALSE, showLine=TRUE, col_line="grey", lty_line=3, ylab="Value", xlab="Date",ylim=NULL,span=NULL) Arguments object object of class ts or x12Single-class. x12Parameter object of class x12Parameter. x12BaseInfo object of class x12BaseInfo. showCI logical specifying if the prediction interval should be plotted. main plot title. col_original color of the original time series. col_fc color of the forecasts. col_bc color of the backcasts. col_ci color of the prediction interval. col_cishade color of the shading of the prediction interval. lty_original line type of the original time series. lty_fc line type of the forecasts. lty_bc line type of the backcasts. lty_ci line type of the prediction interval. lwd_original line width of the original time series. lwd_fc line width of the forecasts. lwd_bc line width of the backcasts. lwd_ci line width of the prediction interval. ytop multiplication factor for ylim. points_bc logical specifying if backcasts should additionally be indicated with points. points_fc logical specifying if forecasts should additionally be indicated with points. 
points_original logical specifying if the original time series should additionally be indicated with points. showLine logical indicating if a boundary line should be drawn before/after fore-/backcasts. col_line color of showLine. lty_line line type of showLine. ylab label of y-axis. xlab label of x-axis. ylim range of the y-axis. span vector of length 4, limiting the data used for the plot. Start and end date of this time interval can be specified by 4 integers in the format c(start year, start seasonal period, end year, end seasonal period) Value An S4 object of class crossValidation-class. Methods signature(object = "ts") signature(object = "x12Single") Author(s) <NAME>, <NAME> See Also x12, plot, plotSpec, plotSeasFac, plotRsdAcf Examples ## Not run: s <- new("x12Single",ts=AirPassengers,tsName="air") s <- setP(s,list(estimate=TRUE,regression.variables="AO1950.1",outlier.types="all", outlier.critical=list(LS=3.5,TC=2.5), backcast_years=1/2,forecast_years=1)) cv<-crossVal(s,showLine=TRUE) cv ## End(Not run) crossValidation-class Class "crossValidation" Description Standardized object for saving the output of crossVal in R. Objects from the Class Objects can be created by calls of the form new("crossValidation", ...). Slots backcast: Object of class "dfOrNULL" ~~ forecast: Object of class "dfOrNULL" ~~ Author(s) <NAME>, <NAME> Examples showClass("crossValidation") diagnostics-class Class "diagnostics" Description The x12 binaries produce a file with the suffix .udg. This class is a list of a selection of its content. Objects from the Class Objects can be created by calls of the form new("diagnostics", ...). It is used internally by the methods for x12Batch and x12Single objects. Slots .Data: Object of class "list" ~~ Extends Class "list", from data part. Author(s) <NAME> Examples showClass("diagnostics") fbcast-class Class "fbcast" Description Objects to save estimate, lowerci and upperci of fore- and/or backcasts in one standardized list. 
Used by the functions in this package. Objects from the Class Objects can be created by calls of the form new("fbcast", ...). Slots estimate: Object of class "ts" ~~ lowerci: Object of class "ts" ~~ upperci: Object of class "ts" ~~ Author(s) <NAME> Examples showClass("fbcast") getP-methods getP and setP for retrieving and setting parameters Description getP and setP for retrieving and setting parameters from an x12Single-class, x12Batch-class or x12Parameter-class object. Usage ## S4 method for signature 'x12Single' getP(object, whichP) ## S4 method for signature 'x12Batch' getP(object, whichP,index=NULL) ## S4 method for signature 'x12Parameter' getP(object, whichP) ## S4 method for signature 'x12Single' setP(object, listP) ## S4 method for signature 'x12Batch' setP(object, listP,index=NULL) ## S4 method for signature 'x12Parameter' setP(object, listP) Arguments object object of class x12Single-class, x12Batch-class or x12Parameter-class. whichP character vector with the names of the parameters to extract listP named list of parameters to change index index of the series in x12Batch-class to change or extract (NULL=all) Methods signature(object = "x12Batch") signature(object = "x12Parameter") signature(object = "x12Single") See Also x12, x12Single, x12Batch Examples ## Not run: #Create new batch object with 4 time series xb <- new("x12Batch",list(AirPassengers,AirPassengers,AirPassengers,AirPassengers)) # change the automdl to FALSE in all 4 elements xb <- setP(xb,list(automdl=FALSE)) #change the arima.model and arima.smodel settings for the first ts object xb <- setP(xb,list(arima.model=c(1,1,0),arima.smodel=c(1,1,0)),1) #change the arima.model and arima.smodel settings for the second ts object xb <- setP(xb,list(arima.model=c(0,1,1),arima.smodel=c(0,1,1)),2) #change the arima.model and arima.smodel settings for the third ts object xb <- setP(xb,list(arima.model=c(0,1,1),arima.smodel=c(1,1,1)),3) #change the arima.model and arima.smodel settings for the fourth ts 
object xb <- setP(xb,list(arima.model=c(1,1,1),arima.smodel=c(1,1,1)),4) #run x12 on all series xb <- x12(xb) summary(xb) #Set automdl=TRUE for the first ts xb <- setP(xb,list(automdl=TRUE),1) getP(xb,"automdl") #rerun x12 on all series (the binaries will only run on the first one) xb <- x12(xb) #summary with oldOutput summary(xb,oldOutput=10) #Change the parameter and output of the first series back to the first run xb <- prev(xb,index=1,n=1) #summary with oldOutput (--- No valid previous runs. ---) summary(xb,oldOutput=10) ## End(Not run) loadP loadP and saveP Description Functions loadP and saveP load and save parameter settings. Usage ## S4 method for signature 'x12Single' loadP(object, file) ## S4 method for signature 'x12Batch' loadP(object, file) ## S4 method for signature 'x12Parameter' loadP(object, file) ## S4 method for signature 'x12Single' saveP(object, file) ## S4 method for signature 'x12Batch' saveP(object, file) ## S4 method for signature 'x12Parameter' saveP(object, file) Arguments object object of class x12Single-class, x12Batch-class or x12Parameter-class. 
file filepath Methods signature(object = "x12Batch") signature(object = "x12Parameter") signature(object = "x12Single") See Also x12, x12Batch Examples ## Not run: #Create new batch object with 4 time series and change some parameters xb <- new("x12Batch",list(AirPassengers,AirPassengers,AirPassengers,AirPassengers)) xb <- setP(xb,list(automdl=FALSE)) xb <- setP(xb,list(arima.model=c(1,1,0),arima.smodel=c(1,1,0)),1) xb <- setP(xb,list(arima.model=c(0,1,1),arima.smodel=c(0,1,1)),2) xb <- setP(xb,list(arima.model=c(0,1,1),arima.smodel=c(1,1,1)),3) xb <- setP(xb,list(arima.model=c(1,1,1),arima.smodel=c(1,1,1)),4) #save all parameters saveP(xb,file="xyz.RData") xb1 <- new("x12Batch",list(AirPassengers,AirPassengers,AirPassengers,AirPassengers)) #load all parameters and save it to the corresponding series inside a x12Batch-object xb1 <- loadP(xb1,file="xyz.RData") xs <- new("x12Single",ts=AirPassengers) xs <- setP(xs,list(arima.model=c(2,1,1),arima.smodel=c(2,1,1))) #Save the parameters saveP(xs,file="xyz1.RData") #Load a saved parameter set to a x12Single object xs <- new("x12Single",ts=AirPassengers) xs <- loadP(xs,file="xyz1.RData") #Replace all parameters in a x12Batch object with one parameter set xb <- new("x12Batch",list(AirPassengers,AirPassengers,AirPassengers,AirPassengers)) xb <- loadP(xb,file="xyz1.RData") ## End(Not run) plot-methods ~~ Methods for Function plot in Package x12 ~~ Description Plot function for x12 output in package x12. 
Usage ## S4 method for signature 'x12Single' plot(x, original=TRUE, sa=FALSE, trend=FALSE, log_transform=FALSE, ylab="Value", xlab="Date", main="TS", col_original="black", col_sa="blue", col_trend="green", lwd_original=1, lwd_sa=1, lwd_trend=1, lty_sa=1, lty_trend=1, ytop=1, showAllout=FALSE, showAlloutLines=FALSE, showOut=NULL, annComp=TRUE, annCompTrend=TRUE, col_ao="red", col_ls="red", col_tc="red", col_annComp="grey", lwd_out=1, cex_out=1.5, pch_ao=4, pch_ls=2, pch_tc=23, plot_legend=TRUE, legend_horiz=TRUE, legend_bty="o", forecast=FALSE, backcast=FALSE, showCI=TRUE, col_fc="#2020ff", col_bc="#2020ff", col_ci="#d1d1ff", col_cishade="#d1d1ff", lty_original=1, lty_fc=2, lty_bc=2, lty_ci=1, lwd_fc=1, lwd_bc=1, lwd_ci=1, points_bc=FALSE, points_fc=FALSE, points_original=FALSE, showLine=FALSE, col_line="grey", lty_line=3, ylim=NULL, span=NULL, ...) ## S4 method for signature 'x12Batch' plot(x, what="ask",original=TRUE, sa=FALSE, trend=FALSE, log_transform=FALSE, ylab="Value", xlab="Date", main="TS", col_original="black", col_sa="blue", col_trend="green", lwd_original=1, lwd_sa=1, lwd_trend=1, lty_sa=1, lty_trend=1, ytop=1, showAllout=FALSE, showAlloutLines=FALSE, showOut=NULL, annComp=TRUE, annCompTrend=TRUE, col_ao="red", col_ls="red", col_tc="red", col_annComp="grey", lwd_out=1, cex_out=1.5, pch_ao=4, pch_ls=2, pch_tc=23, plot_legend=TRUE, legend_horiz=TRUE, legend_bty="o", forecast=FALSE, backcast=FALSE, showCI=TRUE, col_fc="#2020ff", col_bc="#2020ff", col_ci="#d1d1ff", col_cishade="#d1d1ff", lty_original=1, lty_fc=2, lty_bc=2, lty_ci=1, lwd_fc=1, lwd_bc=1, lwd_ci=1, points_bc=FALSE, points_fc=FALSE, points_original=FALSE, showLine=FALSE, col_line="grey", lty_line=3, ylim=NULL, span=NULL, ...) 
## S4 method for signature 'x12Output' plot(x, original=TRUE, sa=FALSE, trend=FALSE, log_transform=FALSE, ylab="Value", xlab="Date", main="TS", col_original="black", col_sa="blue", col_trend="green", lwd_original=1, lwd_sa=1, lwd_trend=1, lty_sa=1, lty_trend=1, ytop=1, showAllout=FALSE, showAlloutLines=FALSE, showOut=NULL, annComp=TRUE, annCompTrend=TRUE, col_ao="red", col_ls="red", col_tc="red", col_annComp="grey", lwd_out=1, cex_out=1.5, pch_ao=4, pch_ls=2, pch_tc=23, plot_legend=TRUE, legend_horiz=TRUE, legend_bty="o", forecast=FALSE, backcast=FALSE, showCI=TRUE, col_fc="#2020ff", col_bc="#2020ff", col_ci="#d1d1ff", col_cishade="#d1d1ff", lty_original=1, lty_fc=2, lty_bc=2, lty_ci=1, lwd_fc=1, lwd_bc=1, lwd_ci=1, points_bc=FALSE, points_fc=FALSE, points_original=FALSE, showLine=FALSE, col_line="grey", lty_line=3, ylim=NULL, span=NULL, ...) Arguments x object of class x12Output-class or x12Single-class. original logical defining whether the original time series should be plotted. sa logical defining whether the seasonally adjusted time series should be plotted. trend logical defining whether the trend should be plotted. log_transform logical defining whether the log transform should be plotted. showAllout logical defining whether all outliers should be plotted. showOut character in the format "TypeYear.Seasonalperiod" defining a specific outlier to be plotted. annComp logical defining whether an annual comparison should be performed for the outlier defined in showOut. forecast logical defining whether the forecasts should be plotted. backcast logical defining whether the backcasts should be plotted. showCI logical defining whether the prediction intervals should be plotted. ylab label of y-axis. xlab label of x-axis. main plot title. col_original color of the original time series. col_sa color of the seasonally adjusted time series. col_trend color of the trend. lwd_original line width of the original time series.
lwd_sa line width of the seasonally adjusted time series. lwd_trend line width of the trend. lty_original line type of the original time series. lty_sa line type of the seasonally adjusted time series. lty_trend line type of the trend. ytop multiplication factor for ylim. showAlloutLines logical specifying if vertical lines should be plotted with the outliers. annCompTrend logical specifying if the trend of the annual comparison should be plotted. col_ao color of additive outliers. col_ls color of level shifts. col_tc color of transitory changes. col_annComp color of annual comparison. lwd_out line width of outliers. cex_out magnification factor for size of symbols used for plotting outliers. pch_ao symbols used for additive outliers. pch_ls symbols used for level shifts. pch_tc symbols used for transitory changes. plot_legend logical specifying if a legend should be plotted. legend_horiz Orientation of the legend legend_bty the type of box to be drawn around the legend. The allowed values are "o" (the default) and "n". col_fc color of forecasts. col_bc color of backcasts. col_ci color of prediction interval. col_cishade color of prediction interval shading. lty_fc line type of forecasts. lty_bc line type of backcasts. lty_ci line type of prediction interval. lwd_fc line width of forecasts. lwd_bc line width of backcasts. lwd_ci line width of prediction interval. points_bc logical specifying if backcasts should additionally be indicated with points. points_fc logical specifying if forecasts should additionally be indicated with points. points_original logical specifying if the original time series should additionally be indicated with points. showLine logical indicating if a boundary line should be drawn before/after fore-/backcasts. col_line color of showLine. lty_line line type of showLine. ylim range of the y-axis. span vector of length 4, limiting the data used for the plot. 
Start and end date of said time interval can be specified by 4 integers in the format c(start year, start seasonal period, end year, end seasonal period). what How multiple plots should be treated. "ask" is the only option at the moment. ... ignored. Methods signature(x = "x12Output") signature(x = "x12Single") Author(s) <NAME>, <NAME> See Also plotSpec, plotSeasFac, plotRsdAcf Examples ## Not run: s <- new("x12Single",ts=AirPassengers,tsName="air") s <- setP(s,list(estimate=TRUE,regression.variables="AO1950.1",outlier.types="all", outlier.critical=list(LS=3.5,TC=2.5),backcast_years=1/2)) s <- x12(s) #w/o outliers plot(s@x12Output,sa=TRUE,trend=TRUE,original=FALSE) plot(s) #with (all) outliers plot(s,showAllout=TRUE,sa=TRUE,trend=TRUE,log_transform=TRUE,lwd_out=1,pch_ao=4) plot(s,showAllout=TRUE,sa=TRUE,trend=TRUE,original=FALSE,showAlloutLines=TRUE, col_tc="purple")#,log_transform=TRUE)#,lwd_out=3) plot(s,showAllout=TRUE,span=c(1951,1,1953,12),points_original=TRUE,cex_out=2) #with showOut plot(s,showOut="AO1960.Jun",sa=FALSE,trend=FALSE,annComp=TRUE,log_transform=TRUE) plot(s,showOut="AO1958.Mar",sa=TRUE,trend=TRUE,annComp=TRUE,annCompTrend=FALSE) plot(s,showOut="AO1950.Jun",annComp=FALSE,cex_out=3,pch_ao=19,col_ao="orange") plot(s,showOut="TC1954.Mar",span=c(1954,1,1955,12)) plot(s,showOut="TC1954.Feb",col_tc="green3") #w/o legend plot(s,showAllout=TRUE,plot_legend=FALSE) plot(s,plot_legend=FALSE) plot(s,showOut="AO1950.1",plot_legend=FALSE,lwd_out=2,col_ao="purple") plot(s,showOut="TC1954.Feb",col_tc="orange",col_ao="magenta",plot_legend=FALSE) plot(s,showOut="AO1950.1",col_tc="orange",col_ao="magenta",plot_legend=FALSE) #Forecasts & Backcasts plot(s,forecast=TRUE) plot(s,backcast=TRUE,showLine=TRUE) plot(s,backcast=TRUE,forecast=TRUE,showCI=FALSE) plot(s,forecast=TRUE,points_fc=TRUE,col_fc="purple",lty_fc=2,lty_original=3, lwd_fc=0.9,lwd_ci=2) plot(s,sa=TRUE,plot_legend=FALSE) #Seasonal Factors and SI Ratios plotSeasFac(s) #Spectra plotSpec(s)
plotSpec(s,highlight=FALSE) #Autocorrelations of the Residuals plotRsdAcf(s) plotRsdAcf(s,col_acf="black",lwd_acf=1) ## End(Not run) plot.x12work Plot method for objects of class x12work Description Plot method for objects of class "x12work". Usage ## S3 method for class 'x12work' plot(x,plots=c(1:9), ...) Arguments x an object of class "x12work". plots a vector containing numbers between 1 and 9. ... further arguments (currently ignored). Details Plots: 1: Original 2: Original Trend Adjusted 3: Log Original 4: Seasonal Factors 5: Seasonal Factors with SI Ratios 6: Spectrum Adjusted Original 7: Spectrum Seasonal Adjusted 8: Spectrum Irregular 9: Spectrum Residuals Author(s) <NAME> See Also x12work Examples data(AirPassengersX12) #plot(AirPassengersX12) plotRsdAcf ~~ Methods for Function plotRsdAcf in Package x12 ~~ Description Plot of the (partial) autocorrelations of the (squared) residuals with function plotRsdAcf in package x12. Usage ## S4 method for signature 'x12Output' plotRsdAcf(x, which="acf", xlab="Lag", ylab="ACF", main="default", col_acf="darkgrey", lwd_acf=4, col_ci="blue", lt_ci=2, ylim="default", ...) ## S4 method for signature 'x12Single' plotRsdAcf(x, which="acf", xlab="Lag", ylab="ACF", main="default", col_acf="darkgrey", lwd_acf=4, col_ci="blue", lt_ci=2, ylim="default", ...) Arguments x object of class x12Output-class or x12Single-class. which character specifying the type of autocorrelation of the residuals that should be plotted, i.e. the autocorrelations or partial autocorrelations of the residuals or the autocorrelations of the squared residuals ("acf", "pacf", "acf2"). xlab label of the x-axis. ylab label of the y-axis. main plot title. col_acf color of the autocorrelations. lwd_acf line width of the autocorrelations. col_ci color of the +- 2 standard error limits. lt_ci line type of the +- 2 standard error limits. ylim range of the y-axis. ... ignored.
Methods signature(x = "x12Output") signature(x = "x12Single") Author(s) <NAME>, <NAME> See Also x12, plot, plotSpec, plotSeasFac Examples ## Not run: s <- new("x12Single",ts=AirPassengers,tsName="air") s <- setP(s,list(estimate=TRUE,regression.variables="AO1950.1",outlier.types="all", outlier.critical=list(LS=3.5,TC=2.5),backcast_years=1/2)) s <- x12(s) #w/o outliers plot(s@x12Output,sa=TRUE,trend=TRUE,original=FALSE) plot(s) #with (all) outliers plot(s,showAllout=TRUE,sa=TRUE,trend=TRUE,log_transform=TRUE,lwd_out=1,pch_ao=4) plot(s,showAllout=TRUE,sa=TRUE,trend=TRUE,original=FALSE,showAlloutLines=TRUE, col_tc="purple")#,log_transform=TRUE)#,lwd_out=3) #with showOut plot(s,showOut="AO1960.Jun",sa=FALSE,trend=FALSE,annComp=TRUE,log_transform=TRUE) plot(s,showOut="AO1958.Mar",sa=TRUE,trend=TRUE,annComp=TRUE,annCompTrend=FALSE) plot(s,showOut="AO1950.Jun",annComp=FALSE,cex_out=3,pch_ao=19,col_ao="orange") plot(s,showOut="TC1954.Feb") plot(s,showOut="TC1954.Feb",col_tc="green3") #w/o legend plot(s,showAllout=TRUE,plot_legend=FALSE) plot(s,plot_legend=FALSE) plot(s,showOut="AO1950.1",plot_legend=FALSE,lwd_out=2,col_ao="purple") plot(s,showOut="TC1954.Feb",col_tc="orange",col_ao="magenta",plot_legend=FALSE) plot(s,showOut="AO1950.1",col_tc="orange",col_ao="magenta",plot_legend=FALSE) #Forecasts & Backcasts plot(s,forecast=TRUE) plot(s,backcast=TRUE,showLine=TRUE) plot(s,backcast=TRUE,forecast=TRUE,showCI=FALSE) plot(s,forecast=TRUE,points_fc=TRUE,col_fc="purple",lty_fc=2,lty_original=3,lwd_fc=0.9, lwd_ci=2) plot(s,sa=TRUE,plot_legend=FALSE) #Seasonal Factors and SI Ratios plotSeasFac(s) #Spectra plotSpec(s) plotSpec(s,highlight=FALSE) #Autocorrelations of the Residuals plotRsdAcf(s) plotRsdAcf(s,col_acf="black",lwd_acf=1) ## End(Not run) plotSeasFac ~~ Methods for Function plotSeasFac in Package x12 ~~ Description Seasonal factor plots with function plotSeasFac in package x12. 
Usage ## S4 method for signature 'x12Output' plotSeasFac(x,SI_Ratios=TRUE, ylab="Value", xlab="", lwd_seasonal=1, col_seasonal="black", lwd_mean=1, col_mean="blue", col_siratio="darkgreen",col_replaced="red", cex_siratio=.9, cex_replaced=.9, SI_Ratios_replaced=TRUE, plot_legend=TRUE,legend_horiz=FALSE,legend_bty="o", ...) ## S4 method for signature 'x12Single' plotSeasFac(x,SI_Ratios=TRUE, ylab="Value", xlab="",lwd_seasonal=1, col_seasonal="black", lwd_mean=1, col_mean="blue", col_siratio="darkgreen", col_replaced="red", cex_siratio=.9, cex_replaced=.9, SI_Ratios_replaced=TRUE, plot_legend=TRUE,legend_horiz=FALSE,legend_bty="o", ...) Arguments x object of class x12Output-class or x12Single-class. SI_Ratios logical specifying if the SI ratios should be plotted. ylab label of the y-axis. xlab label of the x-axis. lwd_seasonal line width of the seasonal factors. col_seasonal color of the seasonal factors. lwd_mean line width of the mean. col_mean color of the mean. col_siratio color of the SI ratios. col_replaced color of the replaced SI ratios. cex_siratio magnification factor for the size of the symbols used for plotting the SI ratios. cex_replaced magnification factor for the size of the symbols used for plotting the replaced SI ratios. SI_Ratios_replaced logical specifying if the replaced SI ratios should be plotted. plot_legend logical specifying if a legend should be plotted. legend_horiz Orientation of the legend legend_bty the type of box to be drawn around the legend. The allowed values are "o" (the default) and "n". ... ignored. 
Methods signature(x = "x12Output") signature(x = "x12Single") Author(s) <NAME>, <NAME> See Also x12, plot, plotSpec, plotRsdAcf Examples ## Not run: s <- new("x12Single",ts=AirPassengers,tsName="air") s <- setP(s,list(estimate=TRUE,regression.variables="AO1950.1",outlier.types="all", outlier.critical=list(LS=3.5,TC=2.5),backcast_years=1/2)) s <- x12(s) #w/o outliers plot(s@x12Output,sa=TRUE,trend=TRUE,original=FALSE) plot(s) #with (all) outliers plot(s,showAllout=TRUE,sa=TRUE,trend=TRUE,log_transform=TRUE,lwd_out=1,pch_ao=4) plot(s,showAllout=TRUE,sa=TRUE,trend=TRUE,original=FALSE,showAlloutLines=TRUE, col_tc="purple")#,log_transform=TRUE)#,lwd_out=3) #with showOut plot(s,showOut="AO1960.Jun",sa=FALSE,trend=FALSE,annComp=TRUE,log_transform=TRUE) plot(s,showOut="AO1958.Mar",sa=TRUE,trend=TRUE,annComp=TRUE,annCompTrend=FALSE) plot(s,showOut="AO1950.Jun",annComp=FALSE,cex_out=3,pch_ao=19,col_ao="orange") plot(s,showOut="TC1954.Feb") plot(s,showOut="TC1954.Feb",col_tc="green3") #w/o legend plot(s,showAllout=TRUE,plot_legend=FALSE) plot(s,plot_legend=FALSE) plot(s,showOut="AO1950.1",plot_legend=FALSE,lwd_out=2,col_ao="purple") plot(s,showOut="TC1954.Feb",col_tc="orange",col_ao="magenta",plot_legend=FALSE) plot(s,showOut="AO1950.1",col_tc="orange",col_ao="magenta",plot_legend=FALSE) #Forecasts & Backcasts plot(s,forecast=TRUE) plot(s,backcast=TRUE,showLine=TRUE) plot(s,backcast=TRUE,forecast=TRUE,showCI=FALSE) plot(s,forecast=TRUE,points_fc=TRUE,col_fc="purple",lty_fc=2,lty_original=3, lwd_fc=0.9,lwd_ci=2) plot(s,sa=TRUE,plot_legend=FALSE) #Seasonal Factors and SI Ratios plotSeasFac(s) #Spectra plotSpec(s) plotSpec(s,highlight=FALSE) #Autocorrelations of the Residuals plotRsdAcf(s) plotRsdAcf(s,col_acf="black",lwd_acf=1) ## End(Not run) plotSpec ~~ Methods for Function plotSpec in Package x12 ~~ Description Spectral plots with function plotSpec in package x12. Arguments x an object of class x12Output-class, x12Single-class or spectrum-class. 
which character specifying the spectrum to be plotted ("sa" for the Spectrum of the Seasonally Adjusted Series, "original" for the Spectrum of the Original Series, "irregular" for the Spectrum of the Irregular Series and "residuals" for the Spectrum of the RegARIMA Residuals). frequency frequency of the time series (has to be specified for objects of class "spectrum" only). xlab label of the x-axis. ylab label of the y-axis. main plot title. col_bar color of bars. col_seasonal color of seasonal frequencies. col_td color of trading day frequencies. lwd_bar line width of bars. lwd_seasonal line width of seasonal frequencies. lwd_td line width of trading day frequencies. plot_legend logical specifying if a legend should be plotted. Methods signature(x = "x12Output",which="sa", xlab="Frequency",ylab="Decibels", main="Spectrum", col_bar="dark signature(x = "x12Single",which="sa", xlab="Frequency",ylab="Decibels", main="Spectrum", col_bar="dark signature(x = "spectrum",frequency, xlab="Frequency",ylab="Decibels", main="Spectrum", col_bar="darkgr Author(s) <NAME>, <NAME> See Also x12, plot, plotSeasFac, plotRsdAcf Examples ## Not run: s <- new("x12Single",ts=AirPassengers,tsName="air") s <- setP(s,list(estimate=TRUE,regression.variables="AO1950.1",outlier.types="all", outlier.critical=list(LS=3.5,TC=2.5),backcast_years=1/2)) s <- x12(s) #w/o outliers plot(s@x12Output,sa=TRUE,trend=TRUE,original=FALSE) plot(s) #with (all) outliers plot(s,showAllout=TRUE,sa=TRUE,trend=TRUE,log_transform=TRUE,lwd_out=1,pch_ao=4) plot(s,showAllout=TRUE,sa=TRUE,trend=TRUE,original=FALSE,showAlloutLines=TRUE, col_tc="purple")#,log_transform=TRUE)#,lwd_out=3) #with showOut plot(s,showOut="AO1960.Jun",sa=FALSE,trend=FALSE,annComp=TRUE,log_transform=TRUE) plot(s,showOut="AO1958.Mar",sa=TRUE,trend=TRUE,annComp=TRUE,annCompTrend=FALSE) plot(s,showOut="AO1950.Jun",annComp=FALSE,cex_out=3,pch_ao=19,col_ao="orange") plot(s,showOut="TC1954.Feb") plot(s,showOut="TC1954.Feb",col_tc="green3") #w/o
legend plot(s,showAllout=TRUE,plot_legend=FALSE) plot(s,plot_legend=FALSE) plot(s,showOut="AO1950.1",plot_legend=FALSE,lwd_out=2,col_ao="purple") plot(s,showOut="TC1954.Feb",col_tc="orange",col_ao="magenta",plot_legend=FALSE) plot(s,showOut="AO1950.1",col_tc="orange",col_ao="magenta",plot_legend=FALSE) #Forecasts & Backcasts plot(s,forecast=TRUE) plot(s,backcast=TRUE,showLine=TRUE) plot(s,backcast=TRUE,forecast=TRUE,showCI=FALSE) plot(s,forecast=TRUE,points_fc=TRUE,col_fc="purple",lty_fc=2,lty_original=3, lwd_fc=0.9,lwd_ci=2) plot(s,sa=TRUE,plot_legend=FALSE) #Seasonal Factors and SI Ratios plotSeasFac(s) #Spectra plotSpec(s) plotSpec(s,highlight=FALSE) #Autocorrelations of the Residuals plotRsdAcf(s) plotRsdAcf(s,col_acf="black",lwd_acf=1) ## End(Not run) prev-methods ~~ Methods for Function prev and cleanArchive in Package x12 ~~ Description Function prev in package x12 reverts to previous parameter settings and output. Function cleanHistory resets x12OldParameter and x12OldOutput. Usage ## S4 method for signature 'x12Single' prev(object,n=NULL) ## S4 method for signature 'x12Batch' prev(object,index=NULL,n=NULL) ## S4 method for signature 'x12Single' cleanHistory(object) ## S4 method for signature 'x12Batch' cleanHistory(object,index=NULL) Arguments object object of class x12Single-class or x12Batch-class. n index corresponding to a previous run. index index corresponding to (an) object(s) of class "x12Single". Methods signature(object = "x12Single") signature(object = "x12Batch") Note cleanHistory is deprecated and cleanArchive should be used instead. 
Author(s) <NAME> See Also x12 Examples data(AirPassengersX12) summary(AirPassengersX12) # a maximum of 10 previous x12 runs are added to the summary summary(AirPassengersX12,oldOutput=10) #the x12Parameter and x12Output of the x12Single is set to the previous run of x12 Ap=prev(AirPassengersX12) summary(AirPassengersX12,oldOutput=10) readSpc Function to read X12-spc Files into x12Parameter R objects Description Still an early beta, so it may not work in specific situations Usage readSpc(file,filename=TRUE) Arguments file character vector containing filenames of spc files filename if TRUE, the filename (without the ".spc" extension) will be used as the name for the series Details Not all arguments of an X12 spc file are supported, but the parameters described in x12 should be covered. Value The function returns an object of class "x12Single" if the file argument has length 1, otherwise it returns an "x12Batch" object. Author(s) <NAME> See Also x12 Examples ## Not run: x12SingleObject1 <- readSpc("D:/aaa.spc") x12SingleObject2 <- readSpc("D:/ak_b.SPC") x12BatchObject1 <- readSpc(c("D:/ak_b.SPC","D:/aaa.spc")) setwd("M:/kowarik/Test/x12test") lf <- list.files() lf <- lf[unlist(lapply(lf,function(x)substr(x,nchar(x)-2,nchar(x)))) %in%c("spc","SPC")] lf <- lf[-c(grep("ind",lf))] allSPC <- readSpc(lf) a <- x12(allSPC) plot(a@x12List[[1]]) summary(a@x12List[[1]]) ## End(Not run) spectrum-class Class "spectrum" Description Standardized object for saving the spectrum output of the x12 binaries in R. Used by functions in this package. Objects from the Class Objects can be created by calls of the form new("spectrum", ...). Slots frequency: Object of class "numeric" ~~ spectrum: Object of class "numeric" ~~ Author(s) <NAME> Examples showClass("spectrum") summary-methods ~~ Methods for Function summary in Package x12 ~~ Description Delivers a diagnostics summary for x12 output.
Usage ## S4 method for signature 'x12Output' summary(object, fullSummary=FALSE, spectra.detail=FALSE, almostout=FALSE, rsd.autocorr=NULL, quality.stat=FALSE, likelihood.stat=FALSE, aape=FALSE, id.rsdseas=FALSE, slidingspans=FALSE, history=FALSE, identify=FALSE, print=TRUE) ## S4 method for signature 'x12Single' summary(object, fullSummary=FALSE, spectra.detail=FALSE, almostout=FALSE, rsd.autocorr=NULL, quality.stat=FALSE, likelihood.stat=FALSE, aape=FALSE, id.rsdseas=FALSE, slidingspans=FALSE, history=FALSE, identify=FALSE, oldOutput=NULL,print=TRUE) ## S4 method for signature 'x12Batch' summary(object, fullSummary=FALSE, spectra.detail=FALSE, almostout=FALSE, rsd.autocorr=NULL, quality.stat=FALSE, likelihood.stat=FALSE, aape=FALSE, id.rsdseas=FALSE, slidingspans=FALSE, history=FALSE, identify=FALSE, oldOutput=NULL,print=TRUE) Arguments object object of class x12Output-class, x12Single-class or x12Batch-class. fullSummary logical defining whether all available optional diagnostics below should be included in the summary. spectra.detail logical defining whether more detail on the spectra should be returned. almostout logical defining whether "almost" outliers should be returned. rsd.autocorr character or character vector specifying the type of autocorrelation of the residuals that should be returned, i.e. the autocorrelations and/or partial autocorrelations of the residuals and/or the autocorrelations of the squared residuals ("acf", "pacf", "acf2"). quality.stat logical defining whether the second Q statistic, i.e. the Q Statistic computed w/o the M2 Quality Control Statistic, and the M statistics for monitoring and quality assessment should be returned as well. likelihood.stat if TRUE, the likelihood statistics AIC, AICC, BIC and HQ are returned as well as the estimated maximum value of the log likelihood function of the model for the untransformed data. aape logical defining whether the average absolute percentage error for forecasts should be returned.
id.rsdseas logical defining whether the presence/absence of residual seasonality should be indicated. slidingspans logical defining whether the diagnostics output of the slidingspans analysis should be returned. history logical defining whether the diagnostics output of the (revision) history analysis should be returned. identify logical defining whether the (partial) autocorrelations of the residuals generated by the "identify" specification should be returned. oldOutput integer specifying the number of previous x12 runs stored in the x12OldOutput slot of an x12Single-class or an x12Batch-class object that should be included in the summary. print TRUE/FALSE if the summary should be printed. Methods signature(x = "x12Output") signature(x = "x12Single") signature(x = "x12Batch") Author(s) <NAME>, <NAME> See Also prev, cleanArchive Examples ## Not run: # Summary of an "x12Single" object x12path("../x12a.exe") s <- new("x12Single",ts=AirPassengers,tsName="air") s <- setP(s,list(estimate=TRUE,regression.variables="AO1950.1",outlier.types="all", outlier.critical=list(LS=3.5,TC=2.5),backcast_years=1/2)) s <- x12(s) summary.output<-summary(s) s <- x12(setP(s,list(arima.model=c(0,1,1),arima.smodel=c(0,2,1)))) summary.output<-summary(s,oldOutput=1) s <- x12(setP(s,list(arima.model=c(0,1,1),arima.smodel=c(1,0,1)))) summary.output<-summary(s,fullSummary=TRUE,oldOutput=2) # Summary of an "x12Batch" object xb <- new("x12Batch",list(AirPassengers,AirPassengers, AirPassengers),tsName=c("air1","air2","air3")) xb <- x12(xb) xb <- setP(xb,list(arima.model=c(1,1,0),arima.smodel=c(1,1,0)),1) xb <- x12(xb) xb <- setP(xb,list(regression.variables=c("AO1955.5","AO1956.1","ao1959.3")),1) xb <- setP(xb,list(regression.variables=c("AO1955.4")),2) xb<- x12(xb) xb <- setP(xb,list(outlier.types="all")) xb <- setP(xb,list(outlier.critical=list(LS=3.5,TC=2.5)),1) xb <- setP(xb,list(regression.variables=c("lpyear")),3) xb<- x12(xb) summary.output<-summary(xb,oldOutput=3) ## End(Not run)
summary.x12work Diagnostics summary for objects of class x12work Description Diagnostics summary for objects of class "x12work". Usage ## S3 method for class 'x12work' summary(object,fullSummary=FALSE, spectra.detail=FALSE,almostout=FALSE, rsd.autocorr=NULL,quality.stat=FALSE,likelihood.stat=FALSE,aape=FALSE,id.rsdseas=FALSE, slidingspans=FALSE,history=FALSE,identify=FALSE,...) Arguments object an object of class "x12work". fullSummary logical defining whether all available optional diagnostics below should be included in the summary. spectra.detail logical defining whether more detail on the spectra should be returned. almostout logical defining whether "almost" outliers should be returned. rsd.autocorr character or character vector specifying the type of autocorrelation of the residuals that should be returned, i.e. the autocorrelations and/or partial autocorrelations of the residuals and/or the autocorrelations of the squared residuals ("acf", "pacf", "acf2"). quality.stat logical defining whether the second Q statistic, i.e. the Q Statistic computed w/o the M2 Quality Control Statistic, and the M statistics for monitoring and quality assessment should be returned as well. likelihood.stat if TRUE, the likelihood statistics AIC, AICC, BIC and HQ are returned as well as the estimated maximum value of the log likelihood function of the model for the untransformed data. aape logical defining whether the average absolute percentage error for forecasts should be returned. id.rsdseas logical defining whether the presence/absence of residual seasonality should be indicated. slidingspans logical defining whether the diagnostics output of the slidingspans analysis should be returned. history logical defining whether the diagnostics output of the (revision) history analysis should be returned. identify logical defining whether the (partial) autocorrelations of the residuals generated by the "identify" specification should be returned. ...
ignored at the moment Details Delivers a diagnostics summary. Author(s) <NAME>, <NAME> See Also x12work, diagnostics-class, x12-methods Examples data(AirPassengers) ## Not run: summary(x12work(AirPassengers,...),quality.stat=TRUE,rsd.autocorr="acf") ## End(Not run) times Read start and end of an x12Single or x12Output object Description Combination of start() and end() for ts objects. Usage times(x) ## S4 method for signature 'x12Output' times(x) ## S4 method for signature 'x12Single' times(x) Arguments x an x12Single or x12Output object Value Returns a list with start and end for original, backcast and forecast time series Methods signature(x = "x12Output") signature(x = "x12Single") Author(s) <NAME> See Also x12, x12Single, x12Batch, x12Parameter, x12List, x12Output, x12BaseInfo, summary.x12work, x12work x12 ~~ Methods for Function x12 in Package x12 ~~ Description ~~ Methods for function x12 in package x12 ~~ Usage x12(object,x12Parameter=new("x12Parameter"),x12BaseInfo=new("x12BaseInfo"),...) Arguments object object of class ts, x12Single-class or x12Batch-class x12Parameter object of class x12Parameter x12BaseInfo object of class x12BaseInfo ... at the moment only forceRun=FALSE Methods signature(object = "ts") signature(object = "x12Single") signature(object = "x12Batch") Value x12Output An S4 object of class x12Output-class if object is of class ts x12Single An S4 object of class x12Single-class if object is of class x12Single-class x12Batch An S4 object of class x12Batch-class if object is of class x12Batch-class Note Parallelization is implemented for x12Batch objects with help of the package ’parallel’. To process in parallel set the option ’x12.parallel’ to an integer value representing the number of cores to use ( options(x12.parallel=2) ). Afterwards all calls to the function ’x12’ on an object of class ’x12Batch’ will be parallelized (For resetting use options(x12.parallel=NULL) ). cleanHistory is deprecated and cleanArchive should be used instead.
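The parallel setup described in the Note can be sketched as follows. This is a minimal sketch, assuming the x12 package and the external X-13ARIMA-SEATS binaries are installed and configured, so it is not run here:

```r
library(x12)

# batch object with three series
xb <- new("x12Batch", list(AirPassengers, AirPassengers, AirPassengers))

options(x12.parallel = 2)    # subsequent x12() calls on x12Batch objects use 2 cores
xb <- x12(xb)

options(x12.parallel = NULL) # reset to serial processing
```

Setting the option once is enough; every following call to x12() on an x12Batch object is parallelized until the option is reset.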
Author(s) <NAME>, <NAME> Source https://www.census.gov/data/software/x13as.html References <NAME>, <NAME>, <NAME>, <NAME> (2014). Seasonal Adjustment with the R Packages x12 and x12GUI. Journal of Statistical Software, 62(2), 1-21. URL http://www.jstatsoft.org/v62/i02/. See Also summary, plot, x12env, setP, getP, loadP, saveP, prev, cleanArchive, crossVal Examples ## Not run: xts <- x12(AirPassengers) summary(xts) xs <- x12(new("x12Single",ts=AirPassengers)) summary(xs) xb<-x12(new("x12Batch",list(AirPassengers,AirPassengers,AirPassengers))) summary(xb) #Create new batch object with 4 time series xb <- new("x12Batch",list(AirPassengers,AirPassengers,AirPassengers,AirPassengers)) # change the automdl to FALSE in all 4 elements xb <- setP(xb,list(automdl=FALSE)) #change the arima.model and arima.smodel setting for the first ts object xb <- setP(xb,list(arima.model=c(1,1,0),arima.smodel=c(1,1,0)),1) #change the arima.model and arima.smodel setting for the second ts object xb <- setP(xb,list(arima.model=c(0,1,1),arima.smodel=c(0,1,1)),2) #change the arima.model and arima.smodel setting for the third ts object xb <- setP(xb,list(arima.model=c(0,1,1),arima.smodel=c(1,1,1)),3) #change the arima.model and arima.smodel setting for the fourth ts object xb <- setP(xb,list(arima.model=c(1,1,1),arima.smodel=c(1,1,1)),4) #run x12 on all series xb <- x12(xb) summary(xb) #Set automdl=TRUE for the first ts xb <- setP(xb,list(automdl=TRUE),1) #rerun x12 on all series (the binaries will only run on the first one) xb <- x12(xb) #summary with oldOutput summary(xb,oldOutput=10) #Change the parameter and output of the first series back to the first run xb <- prev(xb,index=1,n=1) #summary with oldOutput (--- No valid previous runs. ---) summary(xb,oldOutput=10) ## End(Not run) x12BaseInfo-class Class "x12BaseInfo" Description Baseinfo for use with the x12 function and classes. 
Objects from the Class Objects can be created by calls of the form new("x12BaseInfo", x12path, use, showWarnings). Slots x12path: Object of class "characterOrNULL" ~~ use: Object of class "character" ~~ showWarnings: Object of class "logical" ~~ Methods No methods defined with class "x12BaseInfo" in the signature. Author(s) <NAME> See Also x12, x12Single, x12Batch, x12Parameter, x12List, x12Output Examples showClass("x12BaseInfo") x12Batch-class Class "x12Batch" Description Concatenation of multiple x12Single-class objects. Objects from the Class Objects can be created by calls of the form new("x12Batch", tsList, tsName, x12BaseInfo). Slots x12List: Object of class "x12List" ~~ x12BaseInfo: Object of class "x12BaseInfo" ~~ Methods setP signature(object = "x12Batch"): ... getP signature(object = "x12Batch"): ... prev signature(object = "x12Batch"): ... cleanArchive signature(object = "x12Batch"): ... loadP signature(object = "x12Batch"): ... saveP signature(object = "x12Batch"): ... summary signature(object = "x12Batch"): ... x12 signature(object = "x12Batch"): ... dim signature(x = "x12Batch"): ... length signature(x = "x12Batch"): ... cleanHistory signature(object = "x12Batch"): ... Note cleanHistory is deprecated and cleanArchive should be used instead. Author(s) <NAME> References <NAME>, <NAME>, <NAME>, <NAME> (2014). Seasonal Adjustment with the R Packages x12 and x12GUI. Journal of Statistical Software, 62(2), 1-21. URL http://www.jstatsoft.org/v62/i02/. See Also x12, x12Single, x12Parameter, x12List, x12Output, x12BaseInfo, summary, getP, x12work Examples ## Not run: #object containing 4 time series and the corresponding parameters and output data(AirPassengersX12Batch) summary(AirPassengersX12Batch) #summary with oldOutput summary(AirPassengersX12Batch,oldOutput=10) #Change the parameter and output of the first series back to the first run AirPassengersX12Batch <- prev(AirPassengersX12Batch,index=1,n=1) #summary with oldOutput (--- No valid previous runs. 
---) summary(AirPassengersX12Batch,oldOutput=10) #Create new batch object with 4 time series xb <- new("x12Batch",list(AirPassengers,ldeaths,nottem,UKgas), c("Air","ldeaths","nottem","UKgas")) # change outlier.types to "all" in all 4 elements xb <- setP(xb,list(outlier.types="all")) #change the arima.model and arima.smodel setting for the first ts object xb <- setP(xb,list(arima.model=c(0,1,1),arima.smodel=c(0,1,1)),1) #change the arima.model and arima.smodel setting for the second ts object xb <- setP(xb,list(arima.model=c(0,1,1),arima.smodel=c(0,1,1)),2) #change the arima.model and arima.smodel setting for the third ts object xb <- setP(xb,list(arima.model=c(0,1,1),arima.smodel=c(0,1,1)),3) #change the arima.model and arima.smodel setting for the fourth ts object xb <- setP(xb,list(arima.model=c(0,1,1),arima.smodel=c(0,1,1)),4) #run x12 on all series xb <- x12(xb) summary(xb) #Set automdl=TRUE for the first ts xb <- setP(xb,list(automdl=TRUE),1) #rerun x12 on all series (the binaries will only run on the first one) xb <- x12(xb) #summary with oldOutput summary(xb,oldOutput=10) #Change the parameter and output of the first series back to the first run xb <- prev(xb,index=1,n=1) #summary with oldOutput (--- No valid previous runs. ---) summary(xb,oldOutput=10) #Create new batch object by combining objects of class x12Single s1 <- new("x12Single",ts=AirPassengers,tsName="air") s1 <- setP(s1,list(estimate=TRUE,regression.variables="AO1950.1",outlier.types="all", outlier.critical=list(LS=3.5,TC=2.5))) s2 <- new("x12Single",ts=UKgas,tsName="UKgas") s2 <- setP(s2,list(slidingspans=TRUE,history=TRUE, history.estimates=c("sadj","sadjchng","trend","trendchng","seasonal","aic"), history.sadjlags=c(1,2),automdl=TRUE)) b <- new("x12Batch",list(s1,s2)) b <- x12(b) ## End(Not run) x12List-class Class "x12List" Description Support class for x12Batch-class containing multiple x12Single-class. Objects from the Class Objects can be created by calls of the form new("x12List", ...). 
Slots .Data: Object of class "list" Extends Class "list", from data part. Class "vector", by class "list", distance 2. Methods No methods defined with class "x12List" in the signature. Author(s) <NAME> See Also x12, x12Single, x12Batch, x12Parameter, x12Output, x12BaseInfo Examples showClass("x12List") x12Output-class Class "x12Output" Description Output class for x12. Objects from the Class Objects can be created by calls of the form new("x12Output", ...). Slots a1: Object of class "ts" - the original time series. d10: Object of class "ts" - the final seasonal factors. d11: Object of class "ts" - the final seasonally adjusted data. d12: Object of class "ts" - the final trend cycle. d13: Object of class "ts" - the final irregular components. d16: Object of class "ts" - the combined adjustment factors. c17: Object of class "ts" - the final weights for the irregular component. d9: Object of class "ts" - the final replacements for the SI ratios. e2: Object of class "ts" - the differenced, transformed, seasonally adjusted data. d8: Object of class "ts" - the final unmodified SI ratios. b1: Object of class "ts" - the prior adjusted original series. td: Object of class "tsOrNULL" - the trading day component. otl: Object of class "tsOrNULL" - the outlier regression series. sp0: Object of class "spectrum" - the spectrum of the original series. sp1: Object of class "spectrum" - the spectrum of the differenced seasonally adjusted series. sp2: Object of class "spectrum" - the spectrum of the modified irregular series. spr: Object of class "spectrum" - the spectrum of the regARIMA model residuals.
forecast: Object of class "fbcast" - the point forecasts with prediction intervals backcast: Object of class "fbcast" - the point backcasts with prediction intervals dg: Object of class "list", containing several seasonal adjustment and regARIMA modeling diagnostics, i.e.: x11regress, transform, samode, seasonalma, trendma, arimamdl, automdl, regmdl, nout, nautoout, nalmostout, almostoutlier, crit, outlier, userdefined, autooutlier, peaks.seas, peaks.td, id.seas, id.rsdseas, spcrsd, spcori, spcsa, spcirr, m1, m2, m3, m4, m5, m6, m7, m8, m9, m10, m11, q, q2, nmfail, loglikelihood, aic, aicc, bic, hq, aape, autotransform, ifout, rsd.acf, rsd.pacf, rsd.acf2, tsName, frequency, span, ... file: Object of class "character" - path to the output directory and filename tblnames: Object of class "character" - tables read into R Rtblnames: Object of class "character" - names of tables read into R Methods summary signature(object = "x12Output"): ... plot signature(object = "x12Output"): ... plotSpec signature(object = "x12Output"): ... plotSeasFac signature(object = "x12Output"): ... plotRsdAcf signature(object = "x12Output"): ... Author(s) <NAME>, <NAME> See Also x12, x12Single, x12Batch, x12Parameter, x12List, x12Output, x12BaseInfo, summary.x12work, x12work Examples data(AirPassengersX12) summary(AirPassengersX12) showClass("x12Output") x12Parameter-class Class "x12Parameter" Description Parameter class for x12. Objects from the Class Objects can be created by calls of the form new("x12Parameter", ...). Slots series.span: Object of class "numericOrNULLOrcharacter" - vector of length 4, limiting the data used for the calculations and analysis to a certain time interval. Start and end date of said time interval can be specified by 4 integers in the format c(start year, start seasonal period, end year, end seasonal period). If the start or end date of the time series object should be used, the respective year and seasonal period are to be set to NA.
series.modelspan: Object of class "numericOrNULLOrcharacter" - vector of length 4, defining the start and end date of the time interval of the data that should be used to determine all regARIMA model coefficients. Specified in the same way as span. transform.function: Object of class "character" - transform parameter for x12 ("auto", "log", "none"). transform.power: Object of class "numericOrNULL" - numeric value specifying the power of the Box Cox power transformation. transform.adjust: Object of class "characterOrNULL" - determines the type of adjustment to be performed, i.e. transform.adjust="lom" for length-of-month adjustment on monthly data, transform.adjust="loq" for length-of-quarter adjustment on quarterly data or transform.adjust="lpyear" for leap year adjustment of monthly or quarterly data (which is only allowed when either transform.power=0 or transform.function="log"). regression.variables: Object of class "characterOrNULL" - character or character vector representing the names of the regression variables. regression.user: Object of class "characterOrNULL" - character or character vector defining the user parameters in the regression argument. regression.file: Object of class "characterOrNULL" - path to the file containing the data values of all regression.user variables. regression.usertype: Object of class "characterOrNULL" - character or character vector assigning a type of model-estimated regression effect on each user parameter in the regression argument ("seasonal", "td", "lpyear", "user", ...). By specifying a character vector of length greater than one, each variable can be given its own type. Otherwise the same type will be used for all user parameters. regression.centeruser: Object of class "characterOrNULL" - character specifying the removal of the (sample) mean or the seasonal means from the user parameters in the regression argument ("mean", "seasonal"). Default is no modification of the respective user-defined regressors.
regression.start: Object of class "numericOrNULLOrcharacter" - start date for the values of the regression.user variables, specified as a vector of two integers in the format c(year, seasonal period). regression.aictest: Object of class "characterOrNULL" - character vector defining the regression variables for which an AIC test is to be performed. outlier.types: Object of class "characterOrNULL" - to enable the "outlier" specification in the spc file, this parameter has to be defined by a character or character vector determining the method(s) used for outlier detection ("AO", "LS", "TC", "all"). outlier.critical: Object of class "listOrNULLOrnumeric" - number specifying the critical value used for outlier detection (same value used for all types of outliers) or named list (possible names of list elements being AO, LS and TC) where each list element specifies the respective critical value used for detecting the corresponding type of outlier. If not specified, the default critical value is used. outlier.span: Object of class "numericOrNULLOrcharacter" - vector of length 4, defining the span for outlier detection. Specified in the same way as span. outlier.method: Object of class "characterOrNULL" - character determining how detected outliers should be added to the model ("addone", "addall"). If not specified, "addone" is used by default. identify: Object of class "logical" - if TRUE, the "identify" specification will be enabled in the spc file. identify.diff: Object of class "numericOrNULL" - number or vector representing the orders of nonseasonal differences specified, default is 0. identify.sdiff: Object of class "numericOrNULL" - number or vector representing the orders of seasonal differences specified, default is 0. identify.maxlag: Object of class "numericOrNULL" - number of lags specified for the ACFs and PACFs, default is 36 for monthly series and 12 for quarterly series.
arima.model: Object of class "numericOrNULL" - vector of length 3, defining the arima parameters. arima.smodel: Object of class "numericOrNULL" - vector of length 3, defining the sarima parameters. arima.ar: Object of class "numericOrNULLOrcharacter" - numeric or character vector specifying the initial values for nonseasonal and seasonal autoregressive parameters in the order that they appear in the arima.model argument. Empty positions are created with NA. arima.ma: Object of class "numericOrNULLOrcharacter" - numeric or character vector specifying the initial values for all moving average parameters in the order that they appear in the arima.model argument. Empty positions are created with NA. automdl: Object of class "logical" - TRUE/FALSE for activating auto modeling. automdl.acceptdefault: Object of class "logical" - logical for automdl defining whether the default model should be chosen if the Ljung-Box Q statistic for its model residuals is acceptable. automdl.balanced: Object of class "logical" - logical for automdl defining whether the automatic model procedure will tend towards balanced models. TRUE yields the same preference as the TRAMO program. automdl.maxorder: Object of class "numeric" - vector of length 2, specifying the maximum order for automdl. Empty positions are created with NA. automdl.maxdiff: Object of class "numeric" - vector of length 2, specifying the maximum diff. order for automdl. Empty positions are created with NA. forecast_years: Object of class "numericOrNULL" - number of years to forecast, default is 1 year. backcast_years: Object of class "numericOrNULL" - number of years to backcast, default is no backcasts. forecast_conf: Object of class "numeric" - probability for the confidence interval of forecasts. forecast_save: Object of class "character" - either "ftr" (in transformed scaling) or "fct" (in original scaling). estimate: Object of class "logical" - if TRUE, the term "estimate" will be added to the spc file.
estimate.outofsample: Object of class "logical" - logical defining whether "out of sample" or "within sample" forecast errors should be used in calculating the average magnitude of forecast errors over the last three years. check: Object of class "logical" - TRUE/FALSE for activating the "check" specification in the spc file. check.maxlag: Object of class "numericOrNULL" - the number of lags requested for the residual sample ACF and PACF, default is 24 for monthly series and 8 for quarterly series. slidingspans: Object of class "logical" - if TRUE, "slidingspans" specification will be enabled in the spc file. slidingspans.fixmdl: Object of class "characterOrNULL" - ("yes" (default), "no", "clear"). slidingspans.fixreg: Object of class "characterOrNULL" - character or character vector specifying the trading day, holiday, outlier or other user-defined regression effects to be fixed ("td", "holiday", "outlier", "user"). All other regression coefficients will be re-estimated for each sliding span. slidingspans.length: Object of class "numericOrNULL" - numeric value specifying the length of each span in months or quarters (>3 years, <17 years). slidingspans.numspans: Object of class "numericOrNULL" - numeric value specifying the number of sliding spans used to generate output for comparisons (must be between 2 and 4, inclusive). slidingspans.outlier: Object of class "characterOrNULL" - ("keep" (default), "remove", "yes"). slidingspans.additivesa: Object of class "characterOrNULL" - ("difference" (default), "percent"). slidingspans.start: Object of class "numericOrNULLOrcharacter" - specified as a vector of two integers in the format c(start year, start seasonal period). history: if TRUE, the history specification will be enabled.
history.estimates: Object of class "characterOrNULL" - character or character vector determining which estimates from the regARIMA modeling and/or the x11 seasonal adjustment will be analyzed in the history analysis ("sadj" (default), "sadjchng", "trend", "trendchng", "seasonal", "aic", "fcst"). history.fixmdl: Object of class "logical" - logical determining whether the regARIMA model will be re-estimated during the history analysis. history.fixreg: Object of class "characterOrNULL" - character or character vector specifying the trading day, holiday, outlier or other user-defined regression effects to be fixed ("td", "holiday", "outlier", "user"). All other coefficients will be re-estimated for each history span. history.outlier: Object of class "characterOrNULL" - ("keep" (default), "remove", "auto"). history.sadjlags: Object of class "numericOrNULL" - integer or vector specifying up to 5 revision lags (each >0) that will be analyzed in the revisions analysis of lagged seasonal adjustments. history.trendlags: Object of class "numericOrNULL" - integer or vector specifying up to 5 revision lags (each >0) that will be used in the revision history of the lagged trend components. history.start: Object of class "numericOrNULLOrcharacter" - specified as a vector of two integers in the format c(start year, start seasonal period). history.target: Object of class "characterOrNULL" - character determining whether the revisions of the seasonal adjustments and trends calculated at the lags specified in history.sadjlags and history.trendlags should be defined by the deviation from the concurrent estimate or the deviation from the final estimate ("final" (default), "concurrent"). x11.sigmalim: Object of class "numericOrNULL" - vector of length 2, defining the limits for sigma in the x11 methodology, used to downweight extreme irregular values in the internal seasonal adjustment iterations. x11.type: Object of class "characterOrNULL" - character, i.e.
"summary", "trend" or "sa". If x11.type="trend", x11 will only be used to estimate the final trend-cycle as well as the irregular components and to adjust according to trading days. The default setting is type="sa" where a seasonal decomposition of the series is calculated. x11.sfshort: Object of class "logical" - logical controlling the seasonal filter to be used if the series is at most 5 years long. If TRUE, the arguments of the seasonalma filter will be used wherever possible. If FALSE, a stable seasonal filter will be used irrespective of seasonalma. x11.samode: Object of class "characterOrNULL" - character defining the type of seasonal adjustment decomposition calculated ("mult", "add", "pseudoadd", "logadd"). x11.seasonalma: Object of class "characterOrNULL" - character or character vector of the format c("snxm","snxm", ...) defining which seasonal nxm moving average(s) should be used for which calendar months or quarters to estimate the seasonal factors. If only one ma is specified, the same ma will be used for all months or quarters. If not specified, the program will invoke an automatic choice. x11.trendma: Object of class "numericOrNULL" - integer defining the type of Henderson moving average used for estimating the final trend cycle. If not specified, the program will invoke an automatic choice. x11.appendfcst: Object of class "logical" - logical defining whether forecasts should be included in certain x11 tables. x11.appendbcst: Object of class "logical" - logical defining whether backcasts should be included in certain x11 tables. x11.calendarsigma: Object of class "characterOrNULL" - regulates the way the standard errors used for the detection and adjustment of extreme values should be computed ("all", "signif", "select" or no specification).
x11.excludefcst: Object of class "logical" - logical defining if forecasts and backcasts from the regARIMA model should not be used in the generation of extreme values in the seasonal adjustment routines. x11.final: Object of class "character" - character or character vector specifying which type(s) of prior adjustment factors should be removed from the final seasonally adjusted series ("AO", "LS", "TC", "user", "none"). x11regression: Object of class "logical" - if TRUE, x11Regression will be performed using the respective regression and outlier commands above, i.e. regression.variables, regression.user, regression.file, regression.usertype, regression.centeruser and regression.start as well as outlier.critical, outlier.span and outlier.method. Methods getP signature(object = "x12Parameter"): ... setP signature(object = "x12Parameter"): ... Author(s) <NAME>, <NAME> Examples showClass("x12Parameter") x12path Function to interact with the environment x12env Description "x12env" is used to store the x12path and x13path (and more for the GUI). Usage x12env x12path(path=NULL) putd(x,value) getd(x, mode="any") rmd(x) existd(x, mode="any") Arguments path The path to the X12 or X13 binaries. x a character for the name. value value that should be assigned to the object with name x. mode the mode or type of object sought. Author(s) <NAME> See Also get, assign, exists, x12 Examples ## Not run: x12path() x12path("d:/x12/x12a.exe") x12path() getd("x12path") ## End(Not run) x12Single-class Class "x12Single" Description Class consisting of all information for x12. Objects from the Class Objects can be created by calls of the form new("x12Single", ...).
Slots ts: Object of class ts x12Parameter: Object of class x12Parameter-class x12Output: Object of class x12Output-class x12OldParameter: Object of class list x12OldOutput: Object of class list tsName: Object of class characterOrNULL firstRun: Object of class logical Methods setP signature(object = "x12Single") getP signature(object = "x12Single") prev signature(object = "x12Single") cleanArchive signature(object = "x12Single") loadP signature(object = "x12Single") saveP signature(object = "x12Single") summary signature(object = "x12Single") x12 signature(object = "x12Single") plot signature(object = "x12Single") crossVal signature(object = "x12Single") plotSpec signature(object = "x12Single") plotSeasFac signature(object = "x12Single") plotRsdAcf signature(object = "x12Single") cleanHistory signature(object = "x12Single") Note cleanHistory is deprecated and cleanArchive should be used instead. Author(s) <NAME> See Also x12, x12Batch, x12Parameter, x12List, x12Output, x12BaseInfo, summary, getP, x12work Examples ## Not run: s <- new("x12Single",ts=AirPassengers,tsName="air") s <- setP(s,list(estimate=TRUE,regression.variables="AO1950.1",outlier.types="all", outlier.critical=list(LS=3.5,TC=2.5))) s <- x12(s) ## End(Not run) x12work Run x12 on an R TS-object Description A wrapper function for the x12 binaries. It creates a specification file for an R time series and runs x12, afterwards the output is read into R. 
Usage x12work(tso,period=frequency(tso),file="Rout", series.span=NULL,series.modelspan=NULL, transform.function="auto",transform.power=NULL,transform.adjust=NULL, regression.variables=NULL,regression.user=NULL,regression.file=NULL, regression.usertype=NULL,regression.centeruser=NULL,regression.start=NULL, regression.aictest=NULL, outlier.types=NULL,outlier.critical=NULL,outlier.span=NULL,outlier.method=NULL, identify=FALSE,identify.diff=NULL,identify.sdiff=NULL,identify.maxlag=NULL, arima.model=NULL,arima.smodel=NULL,arima.ar=NULL,arima.ma=NULL, automdl=FALSE,automdl.acceptdefault=FALSE,automdl.balanced=TRUE, automdl.maxorder=c(3,2),automdl.maxdiff=c(1,1), forecast_years=NULL,backcast_years=NULL,forecast_conf=.95, forecast_save="ftr", estimate=FALSE,estimate.outofsample=TRUE, check=TRUE,check.maxlag=NULL, slidingspans=FALSE, slidingspans.fixmdl=NULL,slidingspans.fixreg=NULL, slidingspans.length=NULL,slidingspans.numspans=NULL, slidingspans.outlier=NULL, slidingspans.additivesa=NULL,slidingspans.start=NULL, history=FALSE, history.estimates=NULL,history.fixmdl=FALSE, history.fixreg=NULL,history.outlier=NULL, history.sadjlags=NULL,history.trendlags=NULL, history.start=NULL,history.target=NULL, x11.sigmalim=c(1.5,2.5),x11.type=NULL,x11.sfshort=FALSE,x11.samode=NULL, x11.seasonalma=NULL,x11.trendma=NULL, x11.appendfcst=TRUE,x11.appendbcst=FALSE,x11.calendarsigma=NULL, x11.excludefcst=TRUE,x11.final="user", x11regression=FALSE, tblnames=NULL,Rtblnames=NULL, x12path=NULL,use="x12",keep_x12out=TRUE,showWarnings=TRUE) Arguments tso a time series object. period frequency of the time series. file path to the output directory and filename, default is the working directory and Rout.*. series.span vector of length 4, limiting the data used for the calculations and analysis to a certain time interval. 
Start and end date of said time interval can be specified by 4 integers in the format c(start year, start seasonal period, end year, end seasonal period). If the start or end date of the time series object should be used, the respective year and seasonal period are to be set to NA. series.modelspan vector of length 4, defining the start and end date of the time interval of the data that should be used to determine all regARIMA model coefficients. Specified in the same way as span. transform.function transform parameter for x12 ("auto", "log", "none"). transform.power numeric value specifying the power of the Box Cox power transformation. transform.adjust determines the type of adjustment to be performed, i.e. transform.adjust="lom" for length-of-month adjustment on monthly data, transform.adjust="loq" for length-of-quarter adjustment on quarterly data or transform.adjust="lpyear" for leap year adjustment of monthly or quarterly data (which is only allowed when either transform.power=0 or transform.function="log"). regression.variables character or character vector representing the names of the regression variables. regression.user character or character vector defining the user parameters in the regression argument. regression.file path to the file containing the data values of all regression.user variables. regression.usertype character or character vector assigning a type of model-estimated regression effect on each user parameter in the regression argument ("seasonal", "td", "lpyear", "user", ...). By specifying a character vector of length greater than one, each variable can be given its own type. Otherwise the same type will be used for all user parameters. regression.centeruser character specifying the removal of the (sample) mean or the seasonal means from the user parameters in the regression argument ("mean", "seasonal"). Default is no modification of the respective user-defined regressors.
regression.start start date for the values of the regression.user variables, specified as a vector of two integers in the format c(year, seasonal period). regression.aictest character vector defining the regression variables for which an AIC test is to be performed. outlier.types to enable the "outlier" specification in the spc file, this parameter has to be defined by a character or character vector determining the method(s) used for outlier detection ("AO", "LS", "TC", "all"). outlier.critical number specifying the critical value used for outlier detection (same value used for all types of outliers) or named list (possible names of list elements being AO, LS and TC) where each list element specifies the respective critical value used for detecting the corresponding type of outlier. If not specified, the default critical value is used. outlier.span vector of length 2, defining the span for outlier detection. outlier.method character determining how detected outliers should be added to the model ("addone", "addall"). If not specified, "addone" is used by default. identify Object of class "logical" - if TRUE, the "identify" specification will be enabled in the spc file. identify.diff number or vector representing the orders of nonseasonal differences specified, default is 0. identify.sdiff number or vector representing the orders of seasonal differences specified, default is 0. identify.maxlag number of lags specified for the ACFs and PACFs, default is 36 for monthly series and 12 for quarterly series. arima.model vector of length 3, defining the arima parameters. arima.smodel vector of length 3, defining the sarima parameters. arima.ar numeric or character vector specifying the initial values for nonseasonal and seasonal autoregressive parameters in the order that they appear in the arima.model argument. Empty positions are created with NA.
arima.ma numeric or character vector specifying the initial values for all moving average parameters in the order that they appear in the arima.model argument. Empty positions are created with NA. automdl TRUE/FALSE for activating auto modeling. automdl.acceptdefault logical for automdl defining whether the default model should be chosen if the Ljung-Box Q statistic for its model residuals is acceptable. automdl.balanced logical for automdl defining whether the automatic model procedure will tend towards balanced models. TRUE yields the same preference as the TRAMO program. automdl.maxorder vector of length 2, maximum order for automdl. Empty positions are created with NA. automdl.maxdiff vector of length 2, maximum diff. order for automdl. Empty positions are created with NA. forecast_years number of years to forecast, default is 1 year. backcast_years number of years to backcast, default is no backcasts. forecast_conf probability for the confidence interval of forecasts. forecast_save character either "ftr" (in transformed scaling) or "fct" (in original scaling). estimate if TRUE, the term "estimate" will be added to the spc file. estimate.outofsample logical defining whether "out of sample" or "within sample" forecast errors should be used in calculating the average magnitude of forecast errors over the last three years. check TRUE/FALSE for activating the "check" specification in the spc file. check.maxlag the number of lags requested for the residual sample ACF and PACF, default is 24 for monthly series and 8 for quarterly series. slidingspans if TRUE, "slidingspans" specification will be enabled in the spc file. slidingspans.fixmdl ("yes" (default), "no", "clear"). slidingspans.fixreg character or character vector specifying the trading day, holiday, outlier or other user-defined regression effects to be fixed ("td", "holiday", "outlier", "user"). All other regression coefficients will be re-estimated for each sliding span.
slidingspans.length numeric value specifying the length of each span in months or quarters (>3 years, <17 years). slidingspans.numspans numeric value specifying the number of sliding spans used to generate output for comparisons (must be between 2 and 4, inclusive). slidingspans.outlier ("keep" (default), "remove", "yes"). slidingspans.additivesa ("difference" (default), "percent"). slidingspans.start specified as a vector of two integers in the format c(start year, start seasonal period). history if TRUE, the history specification will be enabled. history.estimates character or character vector determining which estimates from the regARIMA modeling and/or the x11 seasonal adjustment will be analyzed in the history analysis ("sadj" (default), "sadjchng", "trend", "trendchng", "seasonal", "aic", "fcst"). history.fixmdl logical determining whether the regARIMA model will be re-estimated during the history analysis. history.fixreg character or character vector specifying the trading day, holiday, outlier or other user-defined regression effects to be fixed ("td", "holiday", "outlier", "user"). All other coefficients will be re-estimated for each history span. history.outlier ("keep" (default), "remove", "auto") history.sadjlags integer or vector specifying up to 5 revision lags (each >0) that will be analyzed in the revisions analysis of lagged seasonal adjustments. history.trendlags integer or vector specifying up to 5 revision lags (each >0) that will be used in the revision history of the lagged trend components. history.start specified as a vector of two integers in the format c(start year, start seasonal period). history.target character determining whether the revisions of the seasonal adjustments and trends calculated at the lags specified in history.sadjlags and history.trendlags should be defined by the deviation from the concurrent estimate or the deviation from the final estimate ("final" (default), "concurrent"). 
x11.sigmalim vector of length 2, defining the limits for sigma in the x11 methodology, used to downweight extreme irregular values in the internal seasonal adjustment iterations. x11.type character, i.e. "summary", "trend" or "sa". If x11.type="trend", x11 will only be used to estimate the final trend-cycle as well as the irregular components and to adjust according to trading days. The default setting is type="sa" where a seasonal decomposition of the series is calculated. x11.sfshort logical controlling the seasonal filter to be used if the series is at most 5 years long. If TRUE, the arguments of the seasonalma filter will be used wherever possible. If FALSE, a stable seasonal filter will be used irrespective of seasonalma. x11.samode character defining the type of seasonal adjustment decomposition calculated ("mult", "add", "pseudoadd", "logadd"). x11.seasonalma character or character vector of the format c("snxm","snxm", ...) defining which seasonal nxm moving average(s) should be used for which calendar months or quarters to estimate the seasonal factors. If only one ma is specified, the same ma will be used for all months or quarters. If not specified, the program will invoke an automatic choice. x11.trendma integer defining the type of Henderson moving average used for estimating the final trend cycle. If not specified, the program will invoke an automatic choice. x11.appendfcst logical defining whether forecasts should be included in certain x11 tables. x11.appendbcst logical defining whether backcasts should be included in certain x11 tables. x11.calendarsigma regulates the way the standard errors used for the detection and adjustment of extreme values should be computed ("all", "signif", "select" or no specification). x11.excludefcst logical defining if forecasts and backcasts from the regARIMA model should not be used in the generation of extreme values in the seasonal adjustment routines.
x11.final character or character vector specifying which type(s) of prior adjustment factors should be removed from the final seasonally adjusted series ("AO", "LS", "TC", "user", "none"). x11regression if TRUE, x11Regression will be performed using the respective regression and outlier commands above, i.e. regression.variables, regression.user, regression.file, regression.usertype, regression.centeruser and regression.start as well as outlier.critical, outlier.span and outlier.method. tblnames character vector of additional tables to be read into R. Rtblnames character vector naming the additional tables. x12path path to the x12 binaries, for example d:\x12a\x12a.exe. use "x12" or "x13", at the moment only "x12" is tested properly. keep_x12out if TRUE, the output files generated by x12 are stored in the folder "gra" in the output directory and are not deleted at the end of a successful run. showWarnings logical defining whether warnings and notes generated by x12 should be returned. Errors will be displayed in any case. Details Generates an x12 specification file, runs x12 and reads the output files. Value x12work returns an object of class "x12". The function summary is used to print a summary of the diagnostics results.
An object of class "x12" is a list containing at least the following components: a1 original time series d10 final seasonal factors d11 final seasonally adjusted data d12 final trend cycle d13 final irregular components d16 combined adjustment factors c17 final weights for irregular component d9 final replacements for SI ratios e2 differenced, transformed, seasonally adjusted data d8 final unmodified SI ratios b1 prior adjusted original series forecast point forecasts with prediction intervals backcast point backcasts with prediction intervals dg a list containing several seasonal adjustment and regARIMA modeling diagnostics, i.e.: x11regress, transform, samode, seasonalma, trendma, arimamdl, automdl, regmdl, nout, nautoout, nalmostout, almostoutlier, crit, outlier, userdefined, autooutlier, peaks.seas, peaks.td, id.seas, id.rsdseas, spcrsd, spcori, spcsa, spcirr, q, q2, nmfail, loglikelihood, aic, aicc, bic, hq, aape, autotransform, ifout, res.acf, res.pacf, res.acf2, ... file path to the output directory and filename tblnames tables read into R Rtblnames names of tables read into R Note Only working with available x12 binaries. Author(s) <NAME>, <NAME> Source https://www.census.gov/data/software/x13as.html References <NAME>, <NAME>, <NAME>, <NAME> (2014). Seasonal Adjustment with the R Packages x12 and x12GUI. Journal of Statistical Software, 62(2), 1-21. URL http://www.jstatsoft.org/v62/i02/. See Also x12, ts, summary.x12work, plot.x12work, x12-methods Examples ### Examples data(AirPassengers) ## Not run: x12out <- x12work(AirPassengers,x12path=".../x12a.exe",transform.function="auto", arima.model=c(0,1,1),arima.smodel=c(0,1,1),regression.variables="lpyear", x11.sigmalim=c(2.0,3.0),outlier.types="all",outlier.critical=list(LS=3.5,TC=3), x11.seasonalma="s3x3") summary(x12out) ## End(Not run)
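The components listed above can be pulled straight out of the returned list. A minimal sketch (assuming the x12 binaries are available and that x12out was produced by a x12work() call as in the example above):

```r
## Not run:
# compare the original series with the final seasonally adjusted series
plot(x12out$a1, ylab = "Passengers")          # a1: original time series
lines(x12out$d11, col = "red")                # d11: final seasonally adjusted data
# inspect a few entries of the diagnostics list dg
x12out$dg$aic
x12out$dg$samode
## End(Not run)
```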
Package ‘regsem’
June 2, 2023
Type Package
Title Regularized Structural Equation Modeling
Version 1.9.5
Maintainer <NAME> <<EMAIL>>
Description Uses both ridge and lasso penalties (and extensions) to penalize specific parameters in structural equation models. The package offers additional cost functions, cross validation, and other extensions beyond traditional structural equation models. Also contains a function to perform exploratory mediation (XMed).
URL https://github.com/Rjacobucci/regsem/
BugReports https://github.com/Rjacobucci/regsem/issues/
License GPL (>= 2)
VignetteBuilder knitr
Depends lavaan, Rcpp, Rsolnp
Suggests snowfall, markdown, MASS, GA, caret, glmnet, ISLR, lbfgs, numDeriv, psych, knitr, nloptr, NlcOptim, optimx, semPlot, colorspace, plyr, matrixStats, stringr
LinkingTo Rcpp, RcppArmadillo
RoxygenNote 7.2.3
NeedsCompilation yes
Author <NAME> [aut, cre], <NAME> [ctb], <NAME> [ctb], <NAME> [ctb], <NAME> [ctb], <NAME> [ctb], <NAME> [ctb], <NAME> [ctb]
Repository CRAN
Date/Publication 2023-06-02 09:00:02 UTC
R topics documented:
cv_regsem, det_range, det_range_par, efaModel, extractMatrices, fit_indices, multi_optim, parse_parameters, pen_mod, plot.cvregsem, rcpp_fit_fun, rcpp_grad_ram, rcpp_quasi_calc, rcpp_RAMmult, regsem, stabsel, stabsel_par, stabsel_thr, summary.cvregsem, summary.regsem, xmed
cv_regsem The main function that runs multiple penalty values.
Description
The main function that runs multiple penalty values.
Usage
cv_regsem(
  model, n.lambda = 40, pars_pen = "regressions",
  metric = ifelse(fit.ret2 == "train", "BIC", "chisq"),
  mult.start = FALSE, multi.iter = 10, jump = 0.01, lambda.start = 0,
  alpha = 0.5, gamma = 3.7, type = "lasso", random.alpha = 0.5,
  fit.ret = c("rmsea", "BIC", "chisq"), fit.ret2 = "train", n.boot = 20,
  data = NULL, optMethod = "rsolnp", gradFun = "ram", hessFun = "none",
  test.cov = NULL, test.n.obs = NULL, prerun = FALSE, parallel = FALSE,
  ncore = 2, Start = "lavaan", subOpt = "nlminb", diff_par = NULL,
  LB = -Inf, UB = Inf, par.lim = c(-Inf, Inf), block = TRUE, full = TRUE,
  calc = "normal", max.iter = 2000, tol = 1e-05, round = 3, solver = FALSE,
  quasi = FALSE, solver.maxit = 5, alpha.inc = FALSE, step = 0.1,
  momentum = FALSE, step.ratio = FALSE, line.search = FALSE,
  nlminb.control = list(), warm.start = FALSE, missing = "listwise",
  verbose = TRUE, ...
)
Arguments
model Lavaan output object. This is a model that was previously run with any of the lavaan main functions: cfa(), lavaan(), sem(), or growth(). It also can be from the efaUnrotate() function from the semTools package. Currently, the parts of the model which cannot be handled in regsem are the use of multiple group models, missing other than listwise, thresholds from categorical variable models, and the use of additional estimators other than ML, most notably WLSMV for categorical variables. Note: the model does not have to actually run (use do.fit=FALSE), converge, etc. regsem() uses the lavaan object as more of a parser and to get the sample covariance matrix.
n.lambda number of penalization values to test.
pars_pen Parameter indicators to penalize. There are multiple ways to specify. The default is to penalize all regression parameters ("regressions"). Additionally, one can specify all loadings ("loadings"), or both c("regressions","loadings"). Next, parameter labels can be assigned in the lavaan syntax and passed to pars_pen.
See the example. Finally, one can take the parameter numbers from the A or S matrices and pass these directly. See extractMatrices(lav.object)$A.
metric Which fit index to use to choose a final model? Note that it chooses the best fit that also achieves convergence (conv=0).
mult.start Logical. Whether to use multi_optim() (TRUE) or regsem() (FALSE).
multi.iter maximum number of random starts for multi_optim
jump Amount to increase penalization each iteration.
lambda.start What value to start the penalty at
alpha Mixture for elastic net. 1 = ridge, 0 = lasso
gamma Additional penalty for MCP and SCAD
type Penalty type. Options include "none", "lasso", "ridge", "enet" for the elastic net, "alasso" for the adaptive lasso and "diff_lasso". diff_lasso penalizes the discrepancy between parameter estimates and some pre-specified values. The values to take the deviation from are specified in diff_par. Two methods for sparser results than lasso are the smooth clipped absolute deviation, "scad", and the minimum concave penalty, "mcp". Last option is "rlasso" which is the randomised lasso to be used for stability selection.
random.alpha Alpha parameter for randomised lasso. Has to be between 0 and 1, with a default of 0.5. Note this is only used for "rlasso", which pairs with stability selection.
fit.ret Fit indices to return.
fit.ret2 Return fits using only dataset "train" or bootstrap "boot"? Have to do 2 sample CV manually.
n.boot Number of bootstrap samples if fit.ret2="boot"
data Optional dataframe. Only required for missing="fiml".
optMethod Solver to use. Two main options for use: rsolnp and coord_desc. Although slightly slower, rsolnp works much better for complex models. coord_desc uses gradient descent with soft thresholding for the type of penalty. Rsolnp is a nonlinear solver that doesn’t rely on gradient information. There is a similar type of solver also available for use, slsqp from the nloptr package.
coord_desc can also be used with Hessian information, either through the use of quasi=TRUE, or specifying a hess_fun. However, this option is not recommended at this time.
gradFun Gradient function to use. Recommended to use "ram", which refers to the method specified in von Oertzen & Brick (2014). Only for use with optMethod="coord_desc".
hessFun Hessian function to use. Currently not recommended.
test.cov Covariance matrix from test dataset. Necessary for CV=T
test.n.obs Number of observations in test set. Used when CV=T
prerun Logical. Use rsolnp to first optimize before passing to gradient descent? Only for use with coord_desc
parallel Logical. Whether to parallelize the processes running models for all values of lambda.
ncore Number of cores to use when parallel=TRUE
Start type of starting values to use.
subOpt type of optimization to use in the optimx package.
diff_par parameter values to deviate from.
LB lower bound vector.
UB upper bound vector
par.lim Vector of minimum and maximum parameter estimates. Used to stop optimization and move to new starting values if violated.
block Whether to use block coordinate descent
full Whether to do full gradient descent or block
calc Type of calc function to use with means or not. Not recommended for use.
max.iter Number of iterations for coordinate descent
tol Tolerance for coordinate descent
round Number of digits to round results to
solver Whether to use solver for coord_desc
quasi Whether to use quasi-Newton
solver.maxit Max iterations for solver in coord_desc
alpha.inc Whether alpha should increase for coord_desc
step Step size
momentum Momentum for step sizes
step.ratio Ratio of step size between A and S. Logical
line.search Use line search for optimization. Default is no, use fixed step size
nlminb.control list of control values to pass to nlminb
warm.start Whether start values are based on previous iteration. This is not recommended.
missing How to handle missing data.
Current options are "listwise" and "fiml".
verbose Print progress bar?
... Any additional arguments to pass to regsem() or multi_optim().
Value
parameters Matrix of parameter estimates across the penalties
fits Fit metrics across penalties
final_pars Parameter estimates from the best fitting model according to metric
pars_pen Parameter indicators that were penalized.
df Degrees of freedom
metric The fit function used to choose a final model
call
Examples
library(regsem)
# put variables on same scale for regsem
HS <- data.frame(scale(HolzingerSwineford1939[,7:15]))
mod <- '
f =~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9
'
outt = cfa(mod, HS)
# increase to > 25
cv.out = cv_regsem(outt, type="lasso", pars_pen=c(1:2,6:8),
n.lambda=5, jump=0.01)
# check parameter numbers
extractMatrices(outt)["A"]
# equivalent to
mod <- '
f =~ 1*x1 + l1*x2 + l2*x3 + l3*x4 + l4*x5 + l5*x6 + l6*x7 + l7*x8 + l8*x9
'
outt = cfa(mod, HS)
# increase to > 25
cv.out = cv_regsem(outt, type="lasso", pars_pen=c("l1","l2","l6","l7","l8"),
n.lambda=5, jump=0.01)
summary(cv.out)
plot(cv.out, show.minimum="BIC")
mod <- '
f =~ x1 + x2 + x3 + x4 + x5 + x6
'
outt = cfa(mod, HS)
# can penalize all loadings
cv.out = cv_regsem(outt, type="lasso", pars_pen="loadings",
n.lambda=5, jump=0.01)
mod2 <- '
f =~ x4 + x5 + x3
#x1 ~ x7 + x8 + x9 + x2
x1 ~ f
x2 ~ f
'
outt2 = cfa(mod2, HS)
extractMatrices(outt2)$A
# if no pars_pen specification, defaults to all
# regressions
cv.out = cv_regsem(outt2, type="lasso",
n.lambda=15, jump=0.03)
# check
cv.out$pars_pen
det_range Determine the initial range for stability selection
Description
This function performs regsem on bootstrap samples to determine the initial range for stability selection. The interquartile range of the bootstrap optimal regularization amounts is used as the final range.
Usage
det_range(data, model, times = 50, ...)
Arguments
data data frame
model lavaan output object.
times number of bootstrap samples used.
...
Any additional arguments to pass to regsem() or cv_regsem().
Value
result the lambda values and the upper bound and lower bound of the interquartile range.
det_range_par Determine the initial range for stability selection, parallel version
Description
This function performs regsem on bootstrap samples to determine the initial range for stability selection. The interquartile range of the bootstrap optimal regularization amounts is used as the final range. Parallelization is used to achieve faster performance.
Usage
det_range_par(data, model, times = 50, ...)
Arguments
data data frame
model lavaan output object.
times number of bootstrap samples used.
... Any additional arguments to pass to regsem() or cv_regsem().
Value
result the lambda values and the upper bound and lower bound of the interquartile range.
efaModel Generates an EFA model to be used by lavaan and regsem. Function created by <NAME> for the paper Should regularization replace simple structure rotation in Exploratory Factor Analysis – Scharf & Nestler (in press at SEM)
Description
Generates an EFA model to be used by lavaan and regsem. Function created by <NAME> for the paper Should regularization replace simple structure rotation in Exploratory Factor Analysis – Scharf & Nestler (in press at SEM)
Usage
efaModel(nFactors, variables)
Arguments
nFactors Number of latent factors to generate.
variables Names of variables to be used as indicators
Value
model Full EFA model parameters.
Examples
## Not run:
HS <- data.frame(scale(HolzingerSwineford1939[,7:15]))
# Note to find number of factors, recommended to use
# fa.parallel() from the psych package
# using the wrong number of factors can distort the results
mod = efaModel(3, colnames(HS))
semFit = sem(mod, data = HS, int.ov.free = FALSE, int.lv.free = FALSE,
std.lv = TRUE, std.ov = TRUE, auto.fix.single = FALSE, se = "none")
# note it requires smaller penalties than other applications
reg.out2 = cv_regsem(model = semFit, pars_pen = "loadings",
mult.start = TRUE, multi.iter = 10,
n.lambda = 100, type = "lasso", jump = 10^-5, lambda.start = 0.001)
reg.out2
plot(reg.out2) # note that the solution jumps around -- make sure best fit makes sense
## End(Not run)
extractMatrices This function extracts RAM matrices from a lavaan object.
Description
This function extracts RAM matrices from a lavaan object.
Usage
extractMatrices(model)
Arguments
model Lavaan model object.
Value
The RAM matrices from model.
Examples
library(lavaan)
data(HolzingerSwineford1939)
HS.model <- ' visual =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
speed =~ x7 + x8 + x9 '
mod <- cfa(HS.model, data=HolzingerSwineford1939)
mats = extractMatrices(mod)
fit_indices Calculates the fit indices
Description
Calculates the fit indices
Usage
fit_indices(model, CV = FALSE, CovMat = NULL, data = NULL, n.obs = NULL)
Arguments
model regsem model object.
CV cross-validation. Note that this requires splitting the dataset into a training and test set prior to running the model. The model should be run on the training set, with the test set held out and then passed to CovMat=.
CovMat If CV=T then test covariance matrix must be supplied. Note that this should be done before running the lavaan model and should not overlap with the data or covariance matrix used to run the model.
data supply the dataset?
n.obs Number of observations in the test set for CV.
Value
fits Full set of fit indices
Examples
## Not run:
fit_indices()
## End(Not run)
multi_optim Multiple starts for Regularized Structural Equation Modeling
Description
Multiple starts for Regularized Structural Equation Modeling
Usage
multi_optim(
  model, max.try = 10, lambda = 0, alpha = 0.5, gamma = 3.7,
  random.alpha = 0.5, LB = -Inf, UB = Inf, par.lim = c(-Inf, Inf),
  block = TRUE, full = TRUE, type = "lasso", optMethod = "rsolnp",
  gradFun = "ram", pars_pen = "regressions", diff_par = NULL,
  hessFun = "none", tol = 1e-05, round = 3, solver = FALSE, quasi = FALSE,
  solver.maxit = 50000, alpha.inc = FALSE, line.search = FALSE,
  prerun = FALSE, step = 0.1, momentum = FALSE, step.ratio = FALSE,
  verbose = FALSE, warm.start = FALSE, Start2 = NULL,
  nlminb.control = NULL, max.iter = 500
)
Arguments
model Lavaan output object. This is a model that was previously run with any of the lavaan main functions: cfa(), lavaan(), sem(), or growth(). It also can be from the efaUnrotate() function from the semTools package. Currently, the parts of the model which cannot be handled in regsem are the use of multiple group models, missing other than listwise, thresholds from categorical variable models, and the use of additional estimators other than ML, most notably WLSMV for categorical variables. Note: the model does not have to actually run (use do.fit=FALSE), converge, etc. regsem() uses the lavaan object as more of a parser and to get the sample covariance matrix.
max.try number of starts to try before convergence.
lambda Penalty value. Note: higher values will result in additional convergence issues.
alpha Mixture for elastic net.
gamma Additional penalty for MCP and SCAD
random.alpha Alpha parameter for randomised lasso. Has to be between 0 and 1, with a default of 0.5. Note this is only used for "rlasso", which pairs with stability selection.
LB lower bound vector. Note: This is very important to specify when using regularization. It greatly increases the chances of converging.
UB Upper bound vector
par.lim Vector of minimum and maximum parameter estimates. Used to stop optimization and move to new starting values if violated.
block Whether to use block coordinate descent
full Whether to do full gradient descent or block
type Penalty type. Options include "none", "lasso", "enet" for the elastic net, "alasso" for the adaptive lasso and "diff_lasso". If ridge penalties are desired, use type="enet" and alpha=1. diff_lasso penalizes the discrepancy between parameter estimates and some pre-specified values. The values to take the deviation from are specified in diff_par. Two methods for sparser results than lasso are the smooth clipped absolute deviation, "scad", and the minimum concave penalty, "mcp". Last option is "rlasso" which is the randomised lasso to be used for stability selection.
optMethod Solver to use. Two main options for use: rsolnp and coord_desc. Although slightly slower, rsolnp works much better for complex models. coord_desc uses gradient descent with soft thresholding for the type of penalty. Rsolnp is a nonlinear solver that doesn’t rely on gradient information. There is a similar type of solver also available for use, slsqp from the nloptr package. coord_desc can also be used with Hessian information, either through the use of quasi=TRUE, or specifying a hess_fun. However, this option is not recommended at this time.
gradFun Gradient function to use. Recommended to use "ram", which refers to the method specified in von Oertzen & Brick (2014). Only for use with optMethod="coord_desc".
pars_pen Parameter indicators to penalize. There are multiple ways to specify. The default is to penalize all regression parameters ("regressions"). Additionally, one can specify all loadings ("loadings"), or both c("regressions","loadings"). Next, parameter labels can be assigned in the lavaan syntax and passed to pars_pen. See the example. Finally, one can take the parameter numbers from the A or S matrices and pass these directly.
See extractMatrices(lav.object)$A.
diff_par Parameter values to deviate from. Only used when type="diff_lasso".
hessFun Hessian function to use. Currently not recommended.
tol Tolerance for coordinate descent
round Number of digits to round results to
solver Whether to use solver for coord_desc
quasi Whether to use quasi-Newton. Currently not recommended.
solver.maxit Max iterations for solver in coord_desc
alpha.inc Whether alpha should increase for coord_desc
line.search Use line search for optimization. Default is no, use fixed step size
prerun Logical. Use rsolnp to first optimize before passing to gradient descent? Only for use with coord_desc.
step Step size
momentum Momentum for step sizes
step.ratio Ratio of step size between A and S. Logical
verbose Whether to print iteration number.
warm.start Whether start values are based on previous iteration. This is not recommended.
Start2 Provided starting values. Not required
nlminb.control list of control values to pass to nlminb
max.iter Number of iterations for coordinate descent
Value
fit Full set of output from regsem()
Examples
## Not run:
# Note that this is not currently recommended.
# Use cv_regsem() instead
library(regsem)
# put variables on same scale for regsem
HS <- data.frame(scale(HolzingerSwineford1939[ ,7:15]))
mod <- '
f =~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9
'
outt = cfa(mod, HS, meanstructure=TRUE)
fit1 <- multi_optim(outt, max.try=40, lambda=0.1, type="lasso")
# growth model
model <- ' i =~ 1*t1 + 1*t2 + 1*t3 + 1*t4
s =~ 0*t1 + s1*t2 + s2*t3 + 3*t4 '
fit <- growth(model, data=Demo.growth)
summary(fit)
fitmeasures(fit)
fit3 <- multi_optim(fit, lambda=0.2, type="lasso")
summary(fit3)
## End(Not run)
parse_parameters Takes either a vector of parameter ids or a vector of named parameters and returns a vector of parameter ids
Description
Takes either a vector of parameter ids or a vector of named parameters and returns a vector of parameter ids
Usage
parse_parameters(x, model)
Arguments
x Parameter labels
model Lavaan model
Value
NULL if undefined input. Else vector of parameter ids
pen_mod Penalized model syntax.
Description
This function creates a lavaan model syntax with paths corresponding to parameters penalized to 0 removed.
Usage
pen_mod(model, nm = NULL, pars_pen = NULL)
Arguments
model lavaan output object.
nm names(regsemOutput$coefficients).
pars_pen a vector of numbers corresponding to paths to be removed (same sequence as regsemOutput$coefficients).
Value
new.mod new model in lavaan syntax.
plot.cvregsem Plot function for cv_regsem
Description
Plot function for cv_regsem
Usage
## S3 method for class 'cvregsem'
plot(
  x, ..., pars = NULL, show.minimum = "BIC", col = NULL, type = "l",
  lwd = 3, h_line = 0, lty = 1, xlab = NULL, ylab = NULL,
  legend.x = NULL, legend.y = NULL, legend.cex = 1,
  legend.bg = par("bg"), grey.out = FALSE
)
Arguments
x An x from cv_regsem.
... Other arguments.
pars Which parameters to plot
show.minimum What fit index to use
col A specification for the default plotting color.
type what type of plot should be drawn.
Possible types are "p" for points, "l" for lines, or "b" for both
lwd line width
h_line Where to draw horizontal line
lty line type
xlab X axis label
ylab Y axis label
legend.x x-coordinate of legend. See ?legend
legend.y y-coordinate of legend. See ?legend
legend.cex cex of legend. See ?legend
legend.bg legend background color. See ?legend
grey.out Add grey to background
Value
Plot of parameter estimates across penalties
rcpp_fit_fun Calculates the objective function values.
Description
Calculates the objective function values.
Usage
rcpp_fit_fun(
  ImpCov, SampCov, type2, lambda, gamma, pen_vec, pen_diff, e_alpha,
  rlasso_pen, pen_vec1, pen_vec2, dual_pen1, dual_pen2
)
Arguments
ImpCov expected covariance matrix.
SampCov Sample covariance matrix.
type2 penalty type.
lambda penalty value.
gamma additional penalty for mcp and scad
pen_vec vector of penalized parameters.
pen_diff Vector of values to take deviation from.
e_alpha Alpha for elastic net
rlasso_pen Alpha for rlasso2
pen_vec1 vector of penalized parameters for lasso penalty.
pen_vec2 vector of penalized parameters for ridge penalty.
dual_pen1 vector of penalized parameters for lasso penalty.
dual_pen2 vector of penalized parameters for ridge penalty.
rcpp_grad_ram Calculates the gradient vector based on von Oertzen and Brick, 2014
Description
Calculates the gradient vector based on von Oertzen and Brick, 2014
Usage
rcpp_grad_ram(
  par, ImpCov, SampCov, Areg, Sreg, A, S, Fmat, lambda, type2, pen_vec, diff_par
)
Arguments
par vector with parameters.
ImpCov expected covariance matrix.
SampCov Sample covariance matrix.
Areg A matrix with current parameter estimates.
Sreg S matrix with current parameter estimates.
A A matrix with parameter labels.
S S matrix with parameter labels.
Fmat Fmat matrix.
lambda penalty value.
type2 penalty type.
pen_vec parameter indicators to be penalized.
diff_par parameter values to take deviations from.
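For orientation, the A (directed paths), S (variances/covariances), and Fmat (filter) matrices named in these arguments combine into the model-implied covariance via the standard RAM identity from the von Oertzen & Brick line of work. A plain-R sketch in base matrix algebra (illustrative only, not a call into the compiled code; ram_implied is a hypothetical helper name):

```r
# implied covariance under RAM notation:
#   ImpCov = F (I - A)^-1 S ((I - A)^-1)' F'
ram_implied <- function(A, S, Fmat) {
  I <- diag(nrow(A))
  B <- solve(I - A)                      # total-effects matrix (I - A)^-1
  Fmat %*% B %*% S %*% t(B) %*% t(Fmat)  # filter down to observed variables
}
```

For example, with one latent variable (variance 1) loading 0.8 and 0.5 on two observed indicators, the implied covariance between the indicators is 0.8 * 0.5 = 0.4.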
rcpp_quasi_calc Compute quasi Hessian
Description
Compute quasi Hessian
Usage
rcpp_quasi_calc(I, s, y, H)
Arguments
I identity matrix.
s s vector.
y y vector.
H previous Hessian.
rcpp_RAMmult Take RAM matrices, multiplies, and returns Implied Covariance matrix.
Description
Take RAM matrices, multiplies, and returns Implied Covariance matrix.
Usage
rcpp_RAMmult(par, A, S, S_fixed, A_fixed, A_est, S_est, Fmat, I)
Arguments
par parameter estimates.
A A matrix with parameter labels.
S S matrix with parameter labels.
S_fixed S matrix with fixed indicators.
A_fixed A matrix with fixed indicators.
A_est A matrix with parameter estimates.
S_est S matrix with parameter estimates.
Fmat Fmat matrix.
I Diagonal matrix of ones.
regsem Regularized Structural Equation Modeling. Tests a single penalty. For testing multiple penalties, see cv_regsem().
Description
Regularized Structural Equation Modeling. Tests a single penalty. For testing multiple penalties, see cv_regsem().
Usage
regsem(
  model, lambda = 0, alpha = 0.5, gamma = 3.7, type = "lasso",
  dual_pen = NULL, random.alpha = 0.5, data = NULL, optMethod = "rsolnp",
  estimator = "ML", gradFun = "none", hessFun = "none", prerun = FALSE,
  parallel = "no", Start = "lavaan", subOpt = "nlminb", longMod = FALSE,
  pars_pen = "regressions", diff_par = NULL, LB = -Inf, UB = Inf,
  par.lim = c(-Inf, Inf), block = TRUE, full = TRUE, calc = "normal",
  max.iter = 500, tol = 1e-05, round = 3, solver = FALSE, quasi = FALSE,
  solver.maxit = 5, alpha.inc = FALSE, line.search = FALSE, step = 0.1,
  momentum = FALSE, step.ratio = FALSE, nlminb.control = list(),
  missing = "listwise"
)
Arguments
model Lavaan output object. This is a model that was previously run with any of the lavaan main functions: cfa(), lavaan(), sem(), or growth(). It also can be from the efaUnrotate() function from the semTools package.
Currently, the parts of the model which cannot be handled in regsem are the use of multiple group models, missing other than listwise, thresholds from categorical variable models, and the use of additional estimators other than ML, most notably WLSMV for categorical variables. Note: the model does not have to actually run (use do.fit=FALSE), converge, etc. regsem() uses the lavaan object as more of a parser and to get the sample covariance matrix.
lambda Penalty value. Note: higher values will result in additional convergence issues. If using values > 0.1, it is recommended to use multi_optim() instead. See multi_optim for more detail.
alpha Mixture for elastic net. 1 = ridge, 0 = lasso
gamma Additional penalty for MCP and SCAD
type Penalty type. Options include "none", "lasso", "enet" for the elastic net, "alasso" for the adaptive lasso and "diff_lasso". If ridge penalties are desired, use type="enet" and alpha=1. diff_lasso penalizes the discrepancy between parameter estimates and some pre-specified values. The values to take the deviation from are specified in diff_par. Two methods for sparser results than lasso are the smooth clipped absolute deviation, "scad", and the minimum concave penalty, "mcp". Last option is "rlasso" which is the randomised lasso to be used for stability selection.
dual_pen Two penalties to be used for type="dual", first is lasso, second ridge
random.alpha Alpha parameter for randomised lasso. Has to be between 0 and 1, with a default of 0.5. Note this is only used for "rlasso", which pairs with stability selection.
data Optional dataframe. Only required for missing="fiml" which is not currently working.
optMethod Solver to use. Two main options for use: rsolnp and coord_desc. Although slightly slower, rsolnp works much better for complex models. coord_desc uses gradient descent with soft thresholding for the type of penalty. Rsolnp is a nonlinear solver that doesn’t rely on gradient information.
There is a similar type of solver also available for use, slsqp from the nloptr package. coord_desc can also be used with Hessian information, either through the use of quasi=TRUE, or specifying a hess_fun. However, this option is not recommended at this time.
estimator Whether to use maximum likelihood (ML) or unweighted least squares (ULS) as a base estimator.
gradFun Gradient function to use. Recommended to use "ram", which refers to the method specified in von Oertzen & Brick (2014). Only for use with optMethod="coord_desc".
hessFun Hessian function to use. Recommended to use "ram", which refers to the method specified in von Oertzen & Brick (2014). This is currently not recommended.
prerun Logical. Use rsolnp to first optimize before passing to gradient descent? Only for use with coord_desc.
parallel Logical. Whether to parallelize the processes?
Start type of starting values to use. Only recommended to use "default". This sets factor loadings and variances to 0.5. Start = "lavaan" uses the parameter estimates from the lavaan model object. This is not recommended as it can increase the chances of getting stuck at the previous parameter estimates.
subOpt Type of optimization to use in the optimx package.
longMod If TRUE, the model is using longitudinal data? This changes the sample covariance used.
pars_pen Parameter indicators to penalize. There are multiple ways to specify. The default is to penalize all regression parameters ("regressions"). Additionally, one can specify all loadings ("loadings"), or both c("regressions","loadings"). Next, parameter labels can be assigned in the lavaan syntax and passed to pars_pen. See the example. Finally, one can take the parameter numbers from the A or S matrices and pass these directly. See extractMatrices(lav.object)$A.
diff_par Parameter values to deviate from. Only used when type="diff_lasso".
LB lower bound vector. Note: This is very important to specify when using regularization.
It greatly increases the chances of converging.
UB Upper bound vector
par.lim Vector of minimum and maximum parameter estimates. Used to stop optimization and move to new starting values if violated.
block Whether to use block coordinate descent
full Whether to do full gradient descent or block
calc Type of calc function to use with means or not. Not recommended for use.
max.iter Number of iterations for coordinate descent
tol Tolerance for coordinate descent
round Number of digits to round results to
solver Whether to use solver for coord_desc
quasi Whether to use quasi-Newton
solver.maxit Max iterations for solver in coord_desc
alpha.inc Whether alpha should increase for coord_desc
line.search Use line search for optimization. Default is no, use fixed step size
step Step size
momentum Momentum for step sizes
step.ratio Ratio of step size between A and S. Logical
nlminb.control list of control values to pass to nlminb
missing How to handle missing data. Current options are "listwise" and "fiml". "fiml" is not currently working well.
Value
out List of return values from optimization program
convergence Convergence status. 0 = converged, 1 or 99 means the model did not converge.
par.ret Final parameter estimates
Imp_Cov Final implied covariance matrix
grad Final gradient.
KKT1 Were final gradient values close enough to 0.
KKT2 Was the final Hessian positive definite.
df Final degrees of freedom. Note that df changes with lasso penalties.
npar Final number of free parameters. Note that this can change with lasso penalties.
SampCov Sample covariance matrix.
fit Final F_ml fit. Note this is the final parameter estimates evaluated with the F_ml fit function.
coefficients Final parameter estimates
nvar Number of variables.
N sample size.
nfac Number of factors
baseline.chisq Baseline chi-square.
baseline.df Baseline degrees of freedom.
Examples
# Note that this is not currently recommended.
# Use cv_regsem() instead
library(lavaan)
# put variables on same scale for regsem
HS <- data.frame(scale(HolzingerSwineford1939[,7:15]))
mod <- '
f =~ 1*x1 + l1*x2 + l2*x3 + l3*x4 + l4*x5 + l5*x6 + l6*x7 + l7*x8 + l8*x9
'
# Recommended to specify meanstructure in lavaan
outt = cfa(mod, HS, meanstructure=TRUE)
fit1 <- regsem(outt, lambda=0.05, type="lasso",
pars_pen=c("l1", "l2", "l6", "l7", "l8"))
#equivalent to pars_pen=c(1:2, 6:8)
#summary(fit1)
stabsel Stability selection
Description
Stability selection
Usage
stabsel(
  data, model, det.range = FALSE, from, to, times = 50, jump = 0.01,
  detr.nlambda = 20, n.lambda = 40, n.boot = 100, det.thr = FALSE,
  p = 0.8, p.from = 0.5, p.to = 1, p.jump = 0.05, p.method = "aic",
  type = "lasso", pars_pen = "regressions", ...
)
Arguments
data data frame
model lavaan syntax model.
det.range Whether to determine the range of penalization values for stability selection through bootstrapping. Default is FALSE, from and to arguments are then needed. If set to TRUE, then jump, times and detr.nlambda arguments will be needed.
from Minimum value of penalization values for stability selection.
to Maximum value of penalization values for stability selection.
times Number of bootstrapping samples used to determine the range. Default is 50.
jump Amount to increase penalization each iteration. Default is 0.01
detr.nlambda Number of penalization values to test for determining range.
n.lambda Number of penalization values to test for stability selection.
n.boot Number of bootstrap samples needed for stability selection.
det.thr Whether to determine the probability threshold value. Default is FALSE, p is then needed. If set to TRUE, p.from, p.to, p.method arguments will be needed.
p Probability threshold: above which selection probability is the path kept in the model. Default value is 0.8.
p.from Lower bound of probability threshold to test. Default is 0.5.
p.to Upper bound of probability threshold to test. Default is 1.
p.jump Amount to increase threshold each iteration. Default is 0.05.
p.method Which fit index to use to choose a final model?
type Penalty type
pars_pen Parameter indicators to penalize.
... Any additional arguments to pass to regsem() or cv_regsem().
Examples
library(regsem)
# put variables on same scale for regsem
HS <- data.frame(scale(HolzingerSwineford1939[,7:15]))
mod <- '
f =~ 1*x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9
x1 ~~ r1*x2; x1 ~~ r2*x3; x1 ~~ r3*x4; x1 ~~ r4*x5
'
outt = cfa(mod, HS)
stabsel.out = stabsel(data=HS, model=mod, det.range=TRUE, detr.nlambda=20,
n.lambda=5, n.boot=10, p=0.9, type="alasso",
p.method="aic", pars_pen=c("r1","r2","r3","r4"))
stabsel.out$selection_results
stabsel_par Stability selection, parallelized version
Description
Stability selection, parallelized version
Usage
stabsel_par(
  data, model, det.range = FALSE, from, to, times = 50, jump = 0.01,
  detr.nlambda = 20, n.lambda = 40, n.boot = 100, det.thr = FALSE,
  p = 0.8, p.from = 0.5, p.to = 1, p.jump = 0.05, p.method = "aic",
  type = "lasso", pars_pen = "regressions", ...
)
Arguments
data data frame
model lavaan syntax model.
det.range Whether to determine the range of penalization values for stability selection through bootstrapping. Default is FALSE, from and to arguments are then needed. If set to TRUE, then jump, times and detr.nlambda arguments will be needed.
from Minimum value of penalization values for stability selection.
to Maximum value of penalization values for stability selection.
times Number of bootstrapping samples used to determine the range. Default is 50.
jump Amount to increase penalization each iteration. Default is 0.01
detr.nlambda Number of penalization values to test for determining range.
n.lambda Number of penalization values to test for stability selection.
n.boot Number of bootstrap samples needed for stability selection.
det.thr Whether to determine the probability threshold value. Default is FALSE, p is then needed.
                  If set to TRUE, the p.from, p.to and p.method arguments will be needed.
    p             Probability threshold: a path is kept in the model if its selection
                  probability is above this value. Default value is 0.8.
    p.from        Lower bound of probability thresholds to test. Default is 0.5.
    p.to          Upper bound of probability thresholds to test. Default is 1.
    p.jump        Amount to increase the threshold each iteration. Default is 0.05.
    p.method      Which fit index to use to choose a final model?
    type          Penalty type
    pars_pen      Parameter indicators to penalize.
    ...           Any additional arguments to pass to regsem() or cv_regsem().

stabsel_thr             Tuning the probability threshold

Description

    This function tunes the probability threshold parameter.

Usage

    stabsel_thr(
      stabsel = NULL,
      data = NULL,
      model = NULL,
      est_model = NULL,
      prob = NULL,
      nm = NULL,
      pars.pen = NULL,
      from = 0.5,
      to = 1,
      jump = 0.01,
      method = "aic"
    )

Arguments

    stabsel    output object from the stabsel function. If specified, the data, model,
               est_model, prob, nm, and pars.pen parameters are not needed.
    data       data frame
    model      lavaan syntax model.
    est_model  lavaan output object.
    prob       matrix of selection probabilities.
    nm         names(regsemOutput$coefficients).
    pars.pen   a vector of numbers corresponding to paths to be removed (same sequence
               as regsemOutput$coefficients).
    from       starting value of the threshold parameter.
    to         end value of the threshold parameter.
    jump       increment of the threshold parameter.
    method     fit index used to tune the parameter.

Value

    rtn        results using the optimal threshold.

summary.cvregsem        Print information about a cvregsem object

Description

    Print information about a cvregsem object.

Usage

    ## S3 method for class 'cvregsem'
    summary(object, ...)

Arguments

    object     cv_regsem object
    ...        Additional arguments

Value

    Details regarding convergence and fit

summary.regsem          Summary results from regsem

Description

    Summary results from regsem.

Usage

    ## S3 method for class 'regsem'
    summary(object, ...)

Arguments

    object     An object from regsem.
    ...        Other arguments.
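Neither stabsel_par nor stabsel_thr includes an Examples section above. The following is a minimal sketch reusing the dataset and penalized covariances from the stabsel example; the argument values are illustrative, and passing a stabsel_par result to the stabsel argument of stabsel_thr is an assumption based on the shared interface of the two functions.

```r
# Sketch only: reuses the HolzingerSwineford1939 setup from the stabsel example.
library(regsem)
HS <- data.frame(scale(HolzingerSwineford1939[, 7:15]))
mod <- '
f =~ 1*x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9
x1 ~~ r1*x2; x1 ~~ r2*x3; x1 ~~ r3*x4; x1 ~~ r4*x5
'

# Parallelized stability selection (same interface as stabsel)
stabsel.par.out <- stabsel_par(data = HS, model = mod,
                               det.range = TRUE, detr.nlambda = 20,
                               n.lambda = 5, n.boot = 10, p = 0.9,
                               type = "alasso", p.method = "aic",
                               pars_pen = c("r1", "r2", "r3", "r4"))

# Re-tune the probability threshold on an existing result, scanning
# thresholds from 0.5 to 1 in steps of 0.05 and choosing by AIC
tuned <- stabsel_thr(stabsel = stabsel.par.out,
                     from = 0.5, to = 1, jump = 0.05, method = "aic")
```

With small n.boot values the bootstrap results will vary from run to run; larger values are needed for stable selection probabilities.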
Value

    Details regarding convergence and fit

xmed                    Function to perform exploratory mediation with continuous and
                        categorical variables

Description

    Function to perform exploratory mediation with continuous and categorical variables.

Usage

    xmed(
      data,
      iv,
      mediators,
      dv,
      covariates = NULL,
      type = "lasso",
      nfolds = 10,
      show.lambda = F,
      epsilon = 0.001,
      seed = NULL
    )

Arguments

    data         Name of the dataset
    iv           Name (or vector of names) of independent variable(s)
    mediators    Name of mediators
    dv           Name of dependent variable
    covariates   Name of covariates to be included in the model.
    type         What type of penalty. Options include lasso, ridge, and enet.
    nfolds       Number of cross-validation folds.
    show.lambda  Displays lambda values in output
    epsilon      Threshold for determining whether an effect is 0 or not.
    seed         Set seed to control CV results

Value

    Coefficients from the best fitting model

Examples

    # example
    library(ISLR)
    College1 = College[which(College$Private=="Yes"),]
    Data = data.frame(scale(College1[c("Grad.Rate","Accept","Outstate",
                                       "Room.Board","Books","Expend")]))
    Data$Grad.Rate <- ifelse(Data$Grad.Rate > 0,1,0)
    Data$Grad.Rate <- as.factor(Data$Grad.Rate)

    # lavaan model with all mediators
    model1 <- '
    # direct effect (c_prime)
    Grad.Rate ~ c_prime*Accept
    # mediators
    Outstate ~ a1*Accept
    Room.Board ~ a2*Accept
    Books ~ a3*Accept
    Expend ~ a6*Accept
    Grad.Rate ~ b1*Outstate + b2*Room.Board + b3*Books + b6*Expend
    # indirect effects (a*b)
    a1b1 := a1*b1
    a2b2 := a2*b2
    a3b3 := a3*b3
    a6b6 := a6*b6
    # total effect (c)
    c := c_prime + (a1*b1) + (a2*b2) + (a3*b3) + (a6*b6)
    '
    # p-value approach using delta method standard errors
    fit.delta = sem(model1, data=Data, fixed.x=TRUE, ordered="Grad.Rate")
    summary(fit.delta)

    # xmed()
    iv <- "Accept"
    dv <- "Grad.Rate"
    mediators <- c("Outstate","Room.Board","Books","Expend")
    out <- xmed(Data, iv, mediators, dv)
    out
create
ctan
TeX
###### Abstract

The package create-theorem provides commands for naming, initializing and configuring theorem-like environments. All of these commands have a key-value based interface and are especially useful in multi-language documents, allowing the easy declaration of theorem-like environments that can automatically adapt to the language settings.

## 1 / How to load it

First, you need a backend to provide the command \newtheorem with the usual behaviour, for example, amsthm or ntheorem. After that, you can simply load the current package with:

    \usepackage[(options)]{create-theorem}

Attention: Since create-theorem uses cleveref internally, it should usually be placed near the end of your preamble -- notably, it needs to be loaded after varioref and hyperref.

It has the following options:

name as context
* When referencing, the resulting names correspond to the current context of your text. For example, the names are displayed in English when you are referencing a theorem-like environment in an English context, no matter in which linguistic context the target environment appeared.
* This is the default behavior.
* Synonymous names: name-as-context | nameascontext | regionalref

name as is
* When referencing, the resulting names correspond to the contexts in which the target environments appeared. For example, if the target environment is written in an English context, then its name is always displayed in English when referencing, regardless of the current linguistic context.
* Synonymous names: name-as-is | nameasis | originalref

name in link
* Include the names in the hyperlinks when referencing.
* Synonymous names: name-in-link | nameinlink

no preset names
* Disable the preset names. Use this option if you want to define your own name set.

## 2 / The main commands
* Synonymous names: no-preset-names | nopresetnames

### 2.1 Naming theorem-like environments with \NameTheorem

The syntax of \NameTheorem is as follows:

    \NameTheorem{_(name of environment)_}{_(key-value configuration)_}

Supported keys are:

heading = _(configuration)_
* The heading of the environment, where _(configuration)_ can be:
  * a single string in monolingual documents: heading = _(string)_;
  * a key-value name list in multilingual documents:

        heading = { (language name) = (string) }

heading style = _(style)_
* The style of the heading; you can specify the font, text style, color, etc.
* Synonymous names: heading-style | headingstyle

crefname = _(configuration)_
* The name used by \cref for the environment, where _(configuration)_ can be:
  * a single name pair in monolingual documents: crefname = {_(singular name)_}{_(plural name)_};
  * a key-value name list in multilingual documents:

        crefname = { (language name) = {(singular name)}{(plural name)} }

* Also supports the syntax of \crefthename, thus you can assign names of the form:

        [(singular definite article)]{(singular name)}[(plural definite article)]{(plural name)}

  This would be useful for languages like French, Italian, Spanish, etc.
* Also supports the syntax of \crefthevariantname, thus you can assign a different set of names for each variant/declension (the first line in the configuration is the default name set, which is used in case no variant is specified when referencing):

        crefname = { (language name) = { [...]{...}[...]{...}
            , (variant 1) = [...]{...}[...]{...}
            , (variant 2) = [...]{...}[...]{...}
            ...
        } }

  This would be useful for languages like German, Russian, etc.

crefname style = _(style)_
* The style of "crefname" when referencing; you may specify the font, text style, color, etc.
* Synonymous names: crefname-style | crefnamestyle

Crefname = _(configuration)_
* The name used by \Cref for the environment; its syntax is the same as that of crefname.
* Also supports the syntax of \Crefthename and \Crefthevariantname.

Crefname style = _(style)_
* The style of "Crefname" when referencing; you may specify the font, text style, color, etc.
* Synonymous names: Crefname-style | Crefnamestyle

numbering style = _(style)_
* The style of numbering in the reference; you can specify the font, text style, color, etc.
* Synonymous names: numbering-style | numberingstyle

use name = _(list of existing environment(s) separated with semicolon ";")_
* Use the name(s) and style(s) of the given environment(s). If multiple ones are specified, the result is a string combining the names, separated with "-".
* The definite articles (if any) are chosen to be those of the last given environment.
* Synonymous names: combined | use-name | usename

Tip: You can also define the names within \CreateTheorem while initializing the theorem-like environments. \NameTheorem is especially useful for package or class authors who wish to preset suitable names (with styles) in their packages or classes.

### 2.2 Initializing theorem-like environments with \CreateTheorem

The syntax of \CreateTheorem is as follows:

    \CreateTheorem{_(list of environment names)_}{_(key-value configuration)_}

Attention: When the _(key-value configuration)_ is empty, don't forget to include the second pair of curly brackets, for example, \CreateTheorem{theorem}{}.

Supported keys are:

name = _(configuration)_ or name style = _(configuration)_
* Setting the names. Same as \NameTheorem{_(name of environment)_}{_(configuration)_}.
* Synonymous names: name-style | namestyle

use name = _(list of existing environment(s) separated with semicolon ";")_
* Using existing name(s). Same as in \NameTheorem.
* Synonymous names: combined | use-name | usename

style = _(theorem style)_
* Specifying the \theoremstyle for the current environment.
* Synonymous names: apply style | apply-style | applystyle

qed or qed = _(Q.E.D. symbol)_
* Specifying the Q.E.D.
symbol for the current environment.
* Note that the Q.E.D. symbol is already put in math mode. If you want regular text such as "Q.E.D.", you need to write qed = \mathrm{Q.E.D.}.
* If you are using ntheorem as the backend, then you need to load it with the option thmmarks.
* Synonymous names: qed symbol | qed-symbol | qedsymbol

parent counter = _(parent counter)_
* Specifying the _(parent counter)_ for the current environment, _i.e._, numbering will restart whenever that sectional level is encountered.
* Synonymous names: parent-counter | parentcounter | number within | number-within | numberwithin

shared counter = _(shared counter)_
* Specifying the _(shared counter)_ for the current environment, _i.e._, numbering will progress sequentially for all theorem-like environments using this counter.
* Synonymous names: shared-counter | sharedcounter | number like | number-like | numberlike

numberless
* Defining the current environment to be unnumbered.

create starred version
* Defining a corresponding starred (unnumbered) version of the current environment.
* It must be placed _before_ qed if you want the starred version to have a Q.E.D. symbol.
* Synonymous names: create-starred-version | createstarredversion | create numberless version | create-numberless-version | createnumberlessversion

copy existed = _(existing environment)_
* Defining the current environment to be the same as _(existing environment)_.
* This key is usually useful in the following two situations:
  1. To use a more concise name. For example, with \CreateTheorem{thm}{copy existed = theorem}, one can then use the name thm to write theorems.
  2. To remove the numbering of some environments. For example, one can remove the numbering of the remark environment with \CreateTheorem{remark}{copy existed = remark*}.
* Synonymous names: copy-existed | copyexisted

Tip: The names for the following environments have been preset: application, assertion, assumption, axiom, claim, conclusion, conjecture, construction, convention, corollary, definition, example, exercise, fact, hypothesis, lemma, notation, observation, postulate, problem, property, proposition, question, recall, remark and theorem. If you are fine with the preset names, then there is no need to specify the key "name" while creating them; otherwise you shall have to use the package option "no preset names" to disable the presets and then define your own ones.

Please note that, for the sake of generality, the environment _(env)_ and its starred relative _(env)_* do _not_ share the same set of names when they are separately defined. However, with proper usage of create starred version and copy existed, you should already be able to produce all of the following combinations that share the same set of names: 1) numbered _(env)_, numbered _(env)_*; 2) numbered _(env)_, unnumbered _(env)_*; 3) unnumbered _(env)_, numbered _(env)_*; and 4) unnumbered _(env)_, unnumbered _(env)_*. It is left as an easy exercise for you ;-) The answer can be found in section 3.2.

### 2.3 Configuring theorem-like environments with \SetTheorem

The previous two commands are especially useful for package or class writers, while this one is more for the users. If you are not satisfied with the preset name styles or numbering settings, then even after initializing the environments, you can still further configure them by means of \SetTheorem, the syntax of which is as follows:

    \SetTheorem{_(list of environment names)_}{_(key-value configuration)_}

Supported keys are:

name = _(configuration)_ and name style = _(configuration)_
* Same as \NameTheorem{_(name of environment)_}{_(configuration)_}.
* Note that this configuration can overwrite those already specified in \NameTheorem.
* Synonymous names: name-style | namestyle

qed = _(Q.E.D.
symbol)_
* Specifying the Q.E.D. symbol for the current environment.
* Note that this configuration only works if you have already enabled the Q.E.D. symbol during the creating phase of the corresponding environment.
* Synonymous names: qed symbol | qed-symbol | qedsymbol

parent counter = _(parent counter)_
* Specifying the _(parent counter)_ for the current environment, _i.e._, numbering will restart whenever that sectional level is encountered.
* Note that this configuration can overwrite those already specified in \CreateTheorem.
* Synonymous names: parent-counter | parentcounter | number within | number-within | numberwithin

shared counter = _(shared counter)_
* Specifying the _(shared counter)_ for the current environment, _i.e._, numbering will progress sequentially for all theorem-like environments using this counter.
* Note that this configuration can overwrite those already specified in \CreateTheorem.
* Synonymous names: shared-counter | sharedcounter | number like | number-like | numberlike

In some cases, you may define an internal environment (for example, a generic version) first and then use it to define the final environment. You may wish to hide the internal names from the users so that they can use \SetTheorem with the name of the final environments. This can be done with the following command:

    \SetTheoremBinding{_(list of environment names)_}{_(the environment to bind with)_}

### 2.4 Setting the names in external language configuration files with \NameTheorems

The command \NameTheorem introduced earlier is for defining the names of a given environment for each language, which is more natural to use within a real-life document. However, for package/class authors wishing to maintain their language configuration files, it would be more convenient to use the following \NameTheorems, which assigns the names for a given language all at once, making it possible to preset the names inside external files.
The syntax of \NameTheorems is as follows (please note that the _(language name)_ here should be consistent with \languagename):

    \NameTheorems{_(language name)_}{_(key-value configuration)_}

Supported keys are (notice that you _cannot_ set the styles via \NameTheorems):

heading = _(configuration)_
* The headings of the environments, where _(configuration)_ is a key-value name list:

        heading = { (name of environment) = (string) }

crefname = _(configuration)_
* The names used by \cref for the environments, where _(configuration)_ is a key-value name list:

        crefname = { (name of environment) = {(singular name)}{(plural name)} }

* Also supports the syntax of \crefthename and \crefthevariantname. Please refer to the description of \NameTheorem for more details.

Crefname = _(configuration)_
* The names used by \Cref for the environments; its syntax is the same as that of crefname.
* Also supports the syntax of \Crefthename and \Crefthevariantname. Please refer to the description of \NameTheorem for more details.

_If you're feeling confused, don't worry. Let's now take a look at some examples._

## 3 / Examples

### 3.1 The environment idea

First, let's get familiar with these two commands by creating the environment idea.

    \NameTheorem{idea}
    {
        heading  = Idea,
        crefname = {idea}{ideas},
        Crefname = {Idea}{Ideas},
    }
    \CreateTheorem{idea}{ parent counter = section }

or to do it in one go:

    \CreateTheorem{idea}
    {
        name = {
            heading  = Idea,
            crefname = {idea}{ideas},
            Crefname = {Idea}{Ideas},
        },
        parent counter = section,
    }

This is not exciting at all. Now, let's say we are writing a trilingual note in English, French and German. (I shall omit the \NameTheorem version and do it all at once in \CreateTheorem.)
    \CreateTheorem{idea}
    {
        name = {
            heading = {
                english = Idea,
                french  = Idée,
                ngerman = Idee,
            },
            crefname = {
                english = {idea}{ideas},
                french  = [l']{idée}[les]{idées},
                ngerman = { {Idee}{Ideen}
                    , Nominativ = [die]{Idee}[die]{Ideen}
                    , Genitiv   = [der]{Idee}[der]{Ideen}
                    , Dativ     = [der]{Idee}[den]{Ideen}
                    , Akkusativ = [die]{Idee}[die]{Ideen}
                },
            },
            Crefname = {
                english = {Idea}{Ideas},
                french  = [L']{idée}[Les]{idées},
                ngerman = { {Idee}{Ideen}
                    , Nominativ = [Die]{Idee}[Die]{Ideen}
                    , Genitiv   = [Der]{Idee}[Der]{Ideen}
                    , Dativ     = [Der]{Idee}[Den]{Ideen}
                    , Akkusativ = [Die]{Idee}[Die]{Ideen}
                },
            },
        },
        parent counter = section,
    }

With this, if you use \selectlanguage{french}, the idea environment shall be automatically displayed as "Idée". And if you \crefthe it, the definite article and the name would show up properly just as expected. The same happens for German with \selectlanguage{ngerman}, and when referencing an idea environment, you may specify the declension as with \crefthe[_(prep)_,variant=Nominativ]{_(label)_}, or simply with a shortcut such as \crefthe[_(prep)_,nom.]{_(label)_}.

Tip: For more detailed usage of the referencing command \crefthe, please refer to the documentation of the package crefthe.

Next we shall deal with the problem of numbering. Let's continue to use this environment idea for demonstration -- suppose that we have already set the names with \NameTheorem.

### 3.2 Let's play with numbering

Remember the exercise I left you in the previous section? Let's do it together now.

#### 3.2.1 Numbered idea and numbered idea*

This is easy, copy existed suffices:

    \CreateTheorem{idea}{ parent counter = section }
    \CreateTheorem{idea*}{ copy existed = idea }

#### 3.2.2 Numbered idea and unnumbered idea*

This is the most common situation, create starred version will do:

    \CreateTheorem{idea}
    {
        parent counter = section,
        create starred version,
    }

Attention: Please note that you cannot use \CreateTheorem{idea*}{numberless} here, since we don't have the names defined for idea*.
#### 3.2.3 Unnumbered idea and numbered idea*

This is a bit tricky: by default we can only create numbered idea or unnumbered idea*, and the question is how to switch them. We shall need an intermediary for this purpose.

    \CreateTheorem{idea}{ create starred version }
    \CreateTheorem{idea-temp}{ copy existed = idea* }
    \CreateTheorem{idea*}{ copy existed = idea }
    \CreateTheorem{idea}{ copy existed = idea-temp }

#### 3.2.4 Unnumbered idea and unnumbered idea*

This is essentially the combination of the first two cases -- we need to create idea* first and then copy it to idea:

    \CreateTheorem{idea}{ create starred version }
    \CreateTheorem{idea}{ copy existed = idea* }

In each case, the two environments idea and idea* share the same set of names.

Attention: The sole purpose of this section is to demonstrate the features of this package -- some combinations are not recommended for use in actual documents.

### 3.3 The _proofless_ version -- theorems with a Q.E.D. symbol

Sometimes you may encounter a theorem without a proof, in which case you might want a Q.E.D. symbol when the theorem is finished. This can be easily achieved via:

    \CreateTheorem { theorem   } { create starred version }
    \CreateTheorem { theorem+  } { copy existed = theorem,  qed }
    \CreateTheorem { theorem+* } { copy existed = theorem*, qed }

The code above defines two new environments theorem+ and theorem+* in addition to theorem and theorem*. The + version behaves exactly the same as the usual version, except that it has a Q.E.D. symbol.
### 3.4 Redefine the proof environment

If you wish to have a proof environment with a custom theorem style, or to have a numbered version proof* of it, the following code could be helpful:

    \ExplSyntaxOn
    \newcounter { proof }
    \tl_new:N \l_mymodule_name_of_proof_tl
    \CreateTheorem { proof_inner }
      {
        name = { heading = { \l_mymodule_name_of_proof_tl } },
        create-starred-version,
        style = remark,
        qed,
        shared-counter = proof,
      }
    \cs_undefine:c { proof }
    \cs_undefine:c { endproof }
    \NewDocumentEnvironment { proof } { O{\proofname} }
      {
        \tl_set:Nn \l_mymodule_name_of_proof_tl { #1 }
        \begin { proof_inner* }
      }
      { \end { proof_inner* } }
    \NewDocumentEnvironment { proof* } { O{\proofname} }
      {
        \tl_set:Nn \l_mymodule_name_of_proof_tl { #1 }
        \begin { proof_inner }
      }
      { \end { proof_inner } }
    \SetTheoremBinding { proof } { proof_inner* }
    \SetTheoremBinding { proof* } { proof_inner }
    \ExplSyntaxOff

It defines an environment proof_inner (with its starred variant) with theorem style remark to mimic the default style (you are welcome to use your own style here), and with the name set to a variable which is later used to define the actual environments proof and proof*. These two environments are defined in such a way that proof is the usual unnumbered version and proof* is the numbered version. The \SetTheoremBinding lines are there to ensure that users can directly write \SetTheorem{proof} instead of \SetTheorem{proof_inner*}.

Attention: The code above requires amsthm. If you are using ntheorem as the backend, then you need to load it with the option amsthm, and remove the \newcounter line.

### 3.5 Advanced topic: setting the names in an external file

A typical configuration looks like this:

    \NameTheorems { english }
      {
        , heading = {
            , theorem     = Theorem
            , proposition = Proposition
            ...
        }
        , crefname = {
            , theorem     = {theorem}{theorems}
            , proposition = {proposition}{propositions}
            ...
        }
        , Crefname = {
            , theorem     = {Theorem}{Theorems}
            , proposition = {Proposition}{Propositions}
            ...
        }
      }

Here is an example for French:

    \NameTheorems { french }
      {
        , heading = {
            , theorem     = Théorème
            , proposition = Proposition
            , example     = Exemple
            ...
        }
        , crefname = {
            , theorem     = [le]{théorème}[les]{théorèmes}
            , proposition = [la]{proposition}[les]{propositions}
            , example     = [l']{exemple}[les]{exemples}
            ...
        }
        , Crefname = {
            , theorem     = [Le]{théorème}[Les]{théorèmes}
            , proposition = [La]{proposition}[Les]{propositions}
            , example     = [L']{exemple}[Les]{exemples}
            ...
        }
      }

And an example for German:

    \NameTheorems { ngerman }
      {
        , heading = {
            , theorem = Satz
            ...
        }
        , crefname = {
            , theorem = { {Satz}{Sätze}
                , Nominativ = [der]{Satz}[die]{Sätze}
                , Genitiv   = [des]{Satzes}[der]{Sätze}
                , Dativ     = [dem]{Satz}[den]{Sätzen}
                , Akkusativ = [den]{Satz}[die]{Sätze}
            }
            ...
        }
        , Crefname = {
            , theorem = { {Satz}{Sätze}
                , Nominativ = [Der]{Satz}[Die]{Sätze}
                , Genitiv   = [Des]{Satzes}[Der]{Sätze}
                , Dativ     = [Dem]{Satz}[Den]{Sätzen}
                , Akkusativ = [Den]{Satz}[Die]{Sätze}
            }
            ...
        }
      }

The configuration using \NameTheorems is compatible with that using \NameTheorem and there is no need to worry about duplicated definitions -- new settings will automatically overwrite the old ones.

## 4 / Known issues

* create-theorem modifies some undocumented internal macros of cleveref, so the behaviour might not be stable if cleveref gets updated.
* The counter aliasing function is still not perfect, (sometimes) causing incorrect ordering in the result of \cref.
* There might be inaccuracies in the translation of those preset names.

If you run into any issues or have ideas for improvement, feel free to discuss on [https://github.com/Jinwen-XU/create-theorem/issues](https://github.com/Jinwen-XU/create-theorem/issues) or email me via <EMAIL>.
CompositeReliability
cran
R
Package ‘CompositeReliability’

August 21, 2023

Title Determine the Composite Reliability of a Naturalistic, Unbalanced Dataset

Version 1.0.3

Description The reliability of assessment tools is a crucial aspect of monitoring student
    performance in various educational settings. It ensures that the assessment outcomes
    accurately reflect a student's true level of performance. However, when assessments are
    combined, determining composite reliability can be challenging, especially for
    naturalistic and unbalanced datasets. This package provides an easy-to-use solution for
    calculating composite reliability for different assessment types. It allows for the
    inclusion of a weight per assessment type and produces extensive G- and D-study results
    with graphical interpretations. Overall, our approach enhances the reliability of
    composite assessments, making it suitable for various education contexts.

License GPL (>= 3)
Encoding UTF-8
RoxygenNote 7.2.3
Imports dplyr, ggplot2, lme4, magrittr, plyr, psych, reshape2, tidyr, Rsolnp
Depends R (>= 2.10)
LazyData true
URL https://github.com/jmoonen/CompositeReliability
BugReports https://github.com/jmoonen/CompositeReliability/issues
NeedsCompilation no
Author <NAME> - <NAME> [aut, cre] (<https://orcid.org/0000-0002-8883-8822>)
Maintainer <NAME> - <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-08-21 13:20:22 UTC

R topics documented:
    calculateReliability
    calculateVarCov
    checkDatasets
    computeCompositeReliability
    computeMaxCompositeReliability
    DStudy
    GStudy
    GStudyPerType
    mydata

calculateReliability    calculateReliability: determine the reliability and SEM per Type

Description

    calculateReliability: determine the reliability and SEM per Type.

Usage

    calculateReliability(mydata, n)

Arguments

    mydata    A dataframe containing columns ID, Type, Score (numeric)
    n         A vector containing for each Type the number of scores or assessments,
              e.g. averages, requirements.
Value

    A list containing 2 vectors: one vector with the reliability coefficient of each Type,
    the other vector with the SEM values for each Type.

Examples

    rel <- calculateReliability(mydata, n=c("A"=3, "B"=3, "C"=2))

calculateVarCov         calculateVarCov: Estimate variance and covariance components of
                        assessee p (S_p) and mean assessment scores i nested in assessees
                        (S_iINp), and determine the error scores (S_delta)

Description

    calculateVarCov: Estimate variance and covariance components of assessee p (S_p) and
    mean assessment scores i nested in assessees (S_iINp), and determine the error scores
    (S_delta).

Usage

    calculateVarCov(mydata, n)

Arguments

    mydata    A dataframe containing columns ID, Type, Score (numeric)
    n         A vector containing for each Type the number of scores or assessments,
              e.g. averages, requirements.

Value

    A list containing the observed variances, covariances and error scores.

Examples

    varcov <- calculateVarCov(mydata, c("A"=3, "B"=3, "C"=2))
    varcov$S_p
    varcov$S_iINp
    varcov$S_delta

checkDatasets           checkDatasets: assert that the given datasets adhere to the
                        assumptions and requirements of this package

Description

    checkDatasets: assert that the given datasets adhere to the assumptions and requirements
    of this package, i.e. the data set 'mydata' is a dataframe with 3 columns, named "ID",
    "Type" and "Score"; column "Score" contains numeric data; each combination of "ID" and
    "Type" exists at least once; data set n contains a numerical value for each "Type"; and
    data set weights contains a numerical value for each "Type", with the sum of all values
    equal to 1.
Usage

    checkDatasets(mydata, n = NULL, weights = NULL)

Arguments

    mydata    A dataframe containing columns ID, Type, Score (numeric)
    n         A vector containing for each Type the number of scores or assessments,
              e.g. averages, requirements.
    weights   A vector containing for each Type the weight assigned to it. The sum of the
              weights should be equal to 1.

Value

    A list with the number of Assessments per ID per Type.

Examples

    checkDatasets(mydata, n=c("A"=10, "B"=5, "C"=2), weights=c("A"=1/3, "B"=1/3, "C"=1/3))

computeCompositeReliability
                        computeCompositeReliability: multivariate generalizability theory
                        approach to estimate the composite reliability of student
                        performance across different types of assessments

Description

    computeCompositeReliability: multivariate generalizability theory approach to estimate
    the composite reliability of student performance across different types of assessments.

Usage

    computeCompositeReliability(mydata, n, weights, optimizeSEM)

Arguments

    mydata       A dataframe containing columns ID, Type, Score (numeric)
    n            A vector containing for each Type the number of scores or assessments,
                 e.g. averages, requirements.
    weights      A vector containing for each Type the weight assigned to it. The sum of
                 the weights should be equal to 1.
    optimizeSEM  Boolean; if TRUE, the weights are adjusted in order to minimize the
                 Standard Error of Measurement (SEM).

Value

    A list containing the composite reliability coefficient, the SEM and the distribution
    of weights. If 'optimizeSEM' is set to TRUE, the vector of weights minimizes the SEM.

Examples

    compRel <- computeCompositeReliability(mydata, n=c("A"=10, "B"=5, "C"=2),
                   weights=c("A"=1/3, "B"=1/3, "C"=1/3), optimizeSEM=TRUE)
    compRel$reliability
    compRel$SEM
    compRel$weights

computeMaxCompositeReliability
                        computeMaxCompositeReliability: multivariate generalizability
                        theory approach to estimate the maximum composite reliability of
                        student performance across different types of assessments
Description

    computeMaxCompositeReliability: multivariate generalizability theory approach to
    estimate the maximum composite reliability of student performance across different
    types of assessments.

Usage

    computeMaxCompositeReliability(mydata, n)

Arguments

    mydata    A dataframe containing columns ID, Type, Score (numeric)
    n         A vector containing for each Type the number of scores or assessments,
              e.g. averages, requirements.

Value

    A list containing the composite reliability coefficient, the SEM and the distribution
    of weights.

Examples

    compMaxRel <- computeMaxCompositeReliability(mydata, n=c("A"=3, "B"=2, "C"=1))
    compMaxRel$reliability
    compMaxRel$SEM
    compMaxRel$weights

DStudy                  DStudy: present the reliability coefficient and the SEM for
                        different numbers of assessments per type

Description

    The program presents the reliability coefficient and the SEM for different numbers of
    assessments per type. Both the reliability coefficient and the SEM are presented in
    graphs for differing numbers of assessments, giving insight into the impact on the
    reliability if more or fewer assessments per type were required or advised.

Usage

    DStudy(mydata, maxNrAssessments = 60)

Arguments

    mydata            A dataframe containing columns ID, Type, Score (numeric)
    maxNrAssessments  The maximum (Int) number of assessments per type on which the D
                      study is executed.

Value

    A list containing 2 plots: reliability (plotRel) and Standard Error of Measurement SEM
    (plotSEM).

Examples

    plots <- DStudy(mydata, maxNrAssessments = 10)

GStudy                  GStudy for a dataset in which every student p has a potentially
                        differing number of scores i on each assessment type m, i.e.
                        model i: (p x m)
Description
GStudy for a dataset in which every student p has a potentially differing number of scores i on each assessment type m, i.e. model i: (p x m). The output gives descriptive statistics, reliability coefficient and SEM for each assessment type.
Usage
GStudy(mydata, nrDigitsOutput = 4)
Arguments
mydata A dataframe containing columns ID, Type, Score (numeric)
nrDigitsOutput Integer, number of digits in the output
Value
Matrix with descriptive statistics for each Type of assessment
Examples
GStudy(mydata, nrDigitsOutput = 4)
GStudyPerType
GStudyPerType: This function is mainly used within calculateVarCov.R, but can be executed on its own to determine the reliability coefficient and SEM for a dataset with a single type of assessment.
Description
GStudyPerType: This function is mainly used within calculateVarCov.R, but can be executed on its own to determine the reliability coefficient and SEM for a dataset with a single type of assessment.
Usage
GStudyPerType(dataPerAssessmentType)
Arguments
dataPerAssessmentType A dataframe containing columns ID, Type, Score (numeric), with only one value in column Type
Value
A matrix presenting the observed variance and residual, the number of IDs, and the percentage of the total variance for each group
mydata
mydata
Description
A dataset that can be used as an example in package CompositeReliability.
Usage
mydata
Format
mydata: A data frame with 7,240 rows and 60 columns:
ID ID of the student
Type The type of assessment
Score The obtained score by this student on this occasion, using the type of assessment ...
webauthn
hex
Erlang
API Reference === [modules](#modules) Modules --- [Webauthn](Webauthn.html) [Webauthn.AttestationStatement.AndroidKey](Webauthn.AttestationStatement.AndroidKey.html) [Webauthn.AttestationStatement.AndroidSafetynet](Webauthn.AttestationStatement.AndroidSafetynet.html) [Webauthn.AttestationStatement.FidoU2F](Webauthn.AttestationStatement.FidoU2F.html) [Webauthn.AttestationStatement.Packed](Webauthn.AttestationStatement.Packed.html) [Webauthn.AttestationStatement.TPM](Webauthn.AttestationStatement.TPM.html) [Webauthn.Authentication.Challenge](Webauthn.Authentication.Challenge.html) [Webauthn.Authentication.Response](Webauthn.Authentication.Response.html) [Webauthn.AuthenticatorData](Webauthn.AuthenticatorData.html) Information about the authenticator data can be found at the link below <https://www.w3.org/TR/webauthn/#authenticator-data> [Webauthn.Cose](Webauthn.Cose.html) [Webauthn.Registration.Challenge](Webauthn.Registration.Challenge.html) This module handles the first step of the Webauthn ceremony, creating a challenge so the client can register their device. The generate/1 function outputs a map of 'publicKey' options that will be passed into the browser's navigator.credentials.create method. 
[Webauthn.Registration.Response](Webauthn.Registration.Response.html) [Webauthn.Utils.Crypto](Webauthn.Utils.Crypto.html) [Webauthn.Utils.TokenBinding](Webauthn.Utils.TokenBinding.html) Webauthn === [Link to this section](#summary) Summary === [Functions](#functions) --- [auth\_challenge(challenge, options)](#auth_challenge/2) [auth\_response(request, params)](#auth_response/2) [challenge()](#challenge/0) [registration\_challenge(challenge, options)](#registration_challenge/2) [registration\_response(request, att\_obj, json)](#registration_response/3) [Link to this section](#functions) Functions === Webauthn.AttestationStatement.AndroidKey === [Link to this section](#summary) Summary === [Functions](#functions) --- [verify(map, auth\_data, client\_hash)](#verify/3) [Link to this section](#functions) Functions === Webauthn.AttestationStatement.AndroidSafetynet === [Link to this section](#summary) Summary === [Functions](#functions) --- [verify(map, auth\_data, client\_hash)](#verify/3) [Link to this section](#functions) Functions === Webauthn.AttestationStatement.FidoU2F === [Link to this section](#summary) Summary === [Functions](#functions) --- [verify(arg1, auth\_data, client\_hash)](#verify/3) [Link to this section](#functions) Functions === Webauthn.AttestationStatement.Packed === [Link to this section](#summary) Summary === [Functions](#functions) --- [verify(map, auth\_data, client\_hash)](#verify/3) [Link to this section](#functions) Functions === Webauthn.AttestationStatement.TPM === [Link to this section](#summary) Summary === [Functions](#functions) --- [verify(att\_stmt, auth\_data, client\_hash)](#verify/3) [Link to this section](#functions) Functions === Webauthn.Authentication.Challenge === [Link to this section](#summary) Summary === [Functions](#functions) --- [generate(challenge, options)](#generate/2) [Link to this section](#functions) Functions === Webauthn.Authentication.Response === [Link to this section](#summary) Summary === 
[Functions](#functions) --- [challenge?(arg1, arg2)](#challenge?/2) [decode\_auth\_data(arg1)](#decode_auth_data/1) [decode\_json(arg1)](#decode_json/1) [decode\_signature(arg1)](#decode_signature/1) [extensions?(map)](#extensions?/1) [find\_credential(arg1, key\_id)](#find_credential/2) [find\_public\_key(arg1)](#find_public_key/1) [origin?(arg1, arg2)](#origin?/2) [parse\_auth\_data(data)](#parse_auth_data/1) [rp\_id\_hash?(arg1, arg2)](#rp_id_hash?/2) [signature\_count(map, credential)](#signature_count/2) [token\_binding?(\_, \_)](#token_binding?/2) [type?(arg1)](#type?/1) [user\_handle?(request, response)](#user_handle?/2) [user\_present?(arg1)](#user_present?/1) [user\_verified?(arg1, auth\_data)](#user_verified?/2) [valid\_signature?(message, digest, signature, key)](#valid_signature?/4) [verify(request, params)](#verify/2) [Link to this section](#functions) Functions === Webauthn.AuthenticatorData === Information about the authenticator data can be found at the link below <https://www.w3.org/TR/webauthn/#authenticator-data> [Link to this section](#summary) Summary === [Functions](#functions) --- [parse(value)](#parse/1) [parse\_rp\_id(arg, ad)](#parse_rp_id/2) [Link to this section](#functions) Functions === Webauthn.Cose === [Link to this section](#summary) Summary === [Functions](#functions) --- [digest\_for(number)](#digest_for/1) [to\_public\_key(arg1)](#to_public_key/1) [Link to this section](#functions) Functions === Webauthn.Registration.Challenge === This module handles the first step of the Webauthn ceremony, creating a challenge so the client can register their device. The generate/1 function outputs a map of 'publicKey' options that will be passed into the browser's navigator.credentials.create method. **Note** The 'challenge' value is encoded as a url safe base64 string. In the front end you will need to decode this value and convert to a Uint8Array. We will include some javascript that will walk you through this process in the demo application. 
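The note above says the challenge travels as a URL-safe base64 string that the client must decode back into raw bytes. A minimal round trip of that encoding looks like this (Python for illustration; the library itself is Elixir, and whether it keeps or strips `=` padding is an assumption not stated here):

```python
import base64
import os

# Server side: generate a random challenge and encode it URL-safely
# so it survives being embedded in JSON / query strings.
challenge = os.urandom(32)
encoded = base64.urlsafe_b64encode(challenge).decode("ascii")

# Client side: decode back to the raw bytes (a Uint8Array in JS).
decoded = base64.urlsafe_b64decode(encoded)
assert decoded == challenge
assert "+" not in encoded and "/" not in encoded  # URL-safe alphabet
```

The same decode step is what the demo application's JavaScript performs before calling navigator.credentials.create.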
[Link to this section](#summary) Summary === [Functions](#functions) --- [generate(challenge, options)](#generate/2) [Link to this section](#functions) Functions === Webauthn.Registration.Response === [Link to this section](#summary) Summary === [Functions](#functions) --- [verify(registration, attestation\_obj, raw\_client\_json)](#verify/3) [Link to this section](#functions) Functions === Webauthn.Utils.Crypto === [Link to this section](#summary) Summary === [Functions](#functions) --- [certificates()](#certificates/0) [find\_root\_certificate(issuer)](#find_root_certificate/1) [secure\_compare(left, right)](#secure_compare/2) Compares the two binaries in constant-time to avoid timing attacks. [Link to this section](#functions) Functions === Webauthn.Utils.TokenBinding === [Link to this section](#summary) Summary === [Functions](#functions) --- [validate(server, client)](#validate/2) [Link to this section](#functions) Functions ===
cryptix-blmq
rust
Rust
Struct cryptix_blmq::BLMQ
===
```
pub struct BLMQ<R: CryptoRngCore> { /* private fields */ }
```
Implementations
---
### impl<R: CryptoRngCore> BLMQ<R>
#### pub fn new(rng: R) -> Self
#### pub fn extract(&self, id: &[u8]) -> (FrElement, BN254Fp)
returns (pk, sk)
#### pub fn sign(&mut self, sk: BN254Fp, msg: &[u8]) -> Sigma
#### pub fn verify(&self, sig: &Sigma, id: &[u8], msg: &[u8]) -> bool
e(Q_1, Q_2)^r e(Q_1, Q_2)^h = e(S, MQ_2 + R)
S = (r + h)(M + s)^-1 Q_1
Trait Implementations
---
### impl<R: Debug + CryptoRngCore> Debug for BLMQ<R>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
Auto Trait Implementations
---
### impl<R> RefUnwindSafe for BLMQ<R> where R: RefUnwindSafe,
### impl<R> Send for BLMQ<R> where R: Send,
### impl<R> Sync for BLMQ<R> where R: Sync,
### impl<R> Unpin for BLMQ<R> where R: Unpin,
### impl<R> UnwindSafe for BLMQ<R> where R: UnwindSafe,
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
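The verification identity documented for `verify`, e(Q_1, Q_2)^r · e(Q_1, Q_2)^h = e(S, MQ_2 + R) with S = (r + h)(M + s)^-1 Q_1, is pure exponent algebra. The toy check below (Python, not Rust, and not the crate's code) models the pairing as e(x·Q_1, y·Q_2) = g^(x·y) in a small multiplicative group mod a prime; the real crate works over BN254 pairings, but the exponent arithmetic being verified is the same:

```python
# Toy model of the BLMQ verification equation. All group elements are
# represented by their discrete-log scalars; the "pairing" of x*Q1 and
# y*Q2 is g^(x*y) mod p. Values M, s, r, h are arbitrary toy scalars.
p, g = 101, 2          # 2 generates the full multiplicative group mod 101
n = p - 1              # group order
M, s = 3, 4            # identity-hash scalar and master secret (toy)
r, h = 10, 5           # commitment randomness and message hash (toy)

# S = (r + h) * (M + s)^-1, the scalar multiplying Q1 in the signature
S = (r + h) * pow(M + s, -1, n) % n

lhs = pow(g, r + h, p)               # e(Q1, Q2)^r * e(Q1, Q2)^h
rhs = pow(g, S * (M + s) % n, p)     # e(S, M*Q2 + s*Q2)
assert lhs == rhs                    # the exponents cancel: S*(M+s) = r+h
```

The cancellation S·(M + s) = r + h (mod group order) is exactly why a verifier holding only the public parameters can check the signature.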
Struct cryptix_blmq::Sigma
===
```
pub struct Sigma {
    pub h: FrElement,
    pub s: BN254Fp,
}
```
Fields
---
`h: FrElement`
`s: BN254Fp`
Trait Implementations
---
### impl Clone for Sigma
#### fn clone(&self) -> Sigma
Returns a copy of the value. Read more
1.0.0 · source
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Sigma
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for Sigma
### impl Send for Sigma
### impl Sync for Sigma
### impl Unpin for Sigma
### impl UnwindSafe for Sigma
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
tidyfast
cran
R
Package ‘tidyfast’
October 14, 2022
Title Fast Tidying of Data
Version 0.2.1
Description Tidying functions built on 'data.table' to provide quick and efficient data manipulation with minimal overhead.
Imports data.table (>= 1.12.4), Rcpp
Suggests remotes, magrittr, tidyr, dplyr, testthat (>= 2.1.0), covr
License GPL-3
Encoding UTF-8
LazyData true
RoxygenNote 7.1.0
LinkingTo Rcpp
NeedsCompilation yes
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-2137-1391>), <NAME> [ctb]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2020-03-20 10:40:02 UTC
R topics documented: dt_case_when, dt_count, dt_fill, dt_hoist, dt_nest, dt_pivot_longer, dt_pivot_wider, dt_print_options, dt_separate, dt_starts_with, dt_uncount, dt_unnest
dt_case_when
Case When with data.table
Description
Does what dplyr::case_when() does, with the same syntax, but with data.table::fifelse() under the hood
Usage
dt_case_when(...)
Arguments
... statements of the form: condition ~ label, where the label is applied if the condition is met
Value
Vector of the same size as the input vector
Examples
x <- rnorm(100)
dt_case_when(
  x < median(x) ~ "low",
  x >= median(x) ~ "high",
  is.na(x) ~ "other"
)
library(data.table)
temp <- data.table(pseudo_id = c(1, 2, 3, 4, 5),
                   x = sample(1:5, 5, replace = TRUE))
temp[, y := dt_case_when(pseudo_id == 1 ~ x * 1,
                         pseudo_id == 2 ~ x * 2,
                         pseudo_id == 3 ~ x * 3,
                         pseudo_id == 4 ~ x * 4,
                         pseudo_id == 5 ~ x * 5)]
dt_count
Count
Description
Count the numbers of observations within groups
Usage
dt_count(dt_, ..., na.rm = FALSE, wt = NULL)
Arguments
dt_ the data table to count within
... groups
na.rm should any rows with missingness be removed before the count? Default is FALSE.
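The dt_case_when() semantics above, the first matching condition wins and supplies the label, can be sketched outside R. This is an illustrative Python analogue, not the package's implementation (which delegates to data.table::fifelse()):

```python
# Illustrative first-match-wins classifier, mirroring the
# condition ~ label pairs accepted by dt_case_when().
def case_when(pairs, default=None):
    """pairs: list of (predicate, label); the first predicate that
    returns True determines the label, later pairs are ignored."""
    def classify(x):
        for pred, label in pairs:
            if pred(x):
                return label
        return default
    return classify

classify = case_when([
    (lambda x: x < 0, "low"),
    (lambda x: x >= 0, "high"),
])
labels = [classify(x) for x in (-1.5, 0.0, 2.3)]
```

As in the R version, ordering matters: a value matching several conditions receives the label of the first one listed.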
wt the weight assigned to the counts (same number of rows as the data)
Value
A data.table with counts for each group (or combination of groups)
Examples
library(data.table)
dt <- data.table(
  x = rnorm(1e5),
  y = runif(1e5),
  grp = sample(1L:3L, 1e5, replace = TRUE),
  wt = runif(1e5, 1, 100)
)
dt_count(dt, grp)
dt_count(dt, grp, na.rm = TRUE)
dt_count(dt, grp, na.rm = TRUE, wt = wt)
dt_fill
Fill with data.table
Description
Fills in values, similar to tidyr::fill(), within data.table. This function relies on the Rcpp functions that drive tidyr::fill() but applies them within data.table.
Usage
dt_fill(dt_, ..., id = NULL, .direction = c("down", "up", "downup", "updown"))
Arguments
dt_ the data table (or if not a data.table then it is coerced with as.data.table)
... the columns to fill
id the grouping variable(s) to fill within
.direction either "down" or "up" (down fills values down, up fills values up), or "downup" (down first then up) or "updown" (up first then down)
Value
A data.table with listed columns having values filled in
Examples
set.seed(84322)
library(data.table)
x = 1:10
dt = data.table(v1 = x,
                v2 = shift(x),
                v3 = shift(x, -1L),
                v4 = sample(c(rep(NA, 10), x), 10),
                grp = sample(1:3, 10, replace = TRUE))
dt_fill(dt, v2, v3, v4, id = grp, .direction = "downup")
dt_fill(dt, v2, v3, v4, id = grp)
dt_fill(dt, .direction = "up")
dt_hoist
Hoist: Fast Unnesting of Vectors
Description
Quickly unnest vectors nested in list columns. Still experimental (has some potentially unexpected behavior in some situations)!
Usage
dt_hoist(dt_, ...)
Arguments
dt_ the data table to unnest
...
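The "down" and "up" fill directions accepted by dt_fill() reduce to two simple passes over a column. The sketch below is an illustrative Python analogue of those semantics, not the package's Rcpp implementation:

```python
# Illustrative fill semantics: None plays the role of R's NA.
def fill_down(values):
    """Replace each None with the last non-None value above it ("down")."""
    out, last = [], None
    for v in values:
        if v is not None:
            last = v
        out.append(last)
    return out

def fill_up(values):
    """Replace each None with the next non-None value below it ("up"),
    i.e. a down-fill run over the reversed sequence."""
    return fill_down(values[::-1])[::-1]
```

"downup" and "updown" are just these two passes composed in either order, which is why a leading gap survives "down" but is closed by a subsequent "up".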
the columns to unnest (must all be the same length when unnested); use bare names of the variables
Examples
library(data.table)
dt <- data.table(
  x = rnorm(1e5),
  y = runif(1e5),
  nested1 = lapply(1:10, sample, 10, replace = TRUE),
  nested2 = lapply(c("thing1", "thing2"), sample, 10, replace = TRUE),
  id = 1:1e5
)
dt_hoist(dt, nested1, nested2)
dt_nest
Fast Nesting
Description
Quickly nest data tables (similar to dplyr::group_nest()).
Usage
dt_nest(dt_, ..., .key = "data")
Arguments
dt_ the data table to nest
... the variables to group by
.key the name of the list column; default is "data"
Value
A data.table with a list column containing data.tables
Examples
library(data.table)
dt <- data.table(
  x = rnorm(1e5),
  y = runif(1e5),
  grp = sample(1L:3L, 1e5, replace = TRUE)
)
dt_nest(dt, grp)
dt_pivot_longer
Pivot data from wide to long
Description
dt_pivot_longer() "lengthens" data, increasing the number of rows and decreasing the number of columns. The inverse transformation is dt_pivot_wider(). Syntax based on the tidyr equivalents.
Usage
dt_pivot_longer(
  dt_,
  cols = NULL,
  names_to = "name",
  values_to = "value",
  values_drop_na = FALSE,
  ...
)
Arguments
dt_ The data table to pivot longer
cols Column selection. If empty, uses all columns. Can use -colname to unselect column(s)
names_to Name of the new "names" column. Must be a string.
values_to Name of the new "values" column. Must be a string.
values_drop_na If TRUE, rows will be dropped that contain NAs.
... Additional arguments to pass to `melt.data.table()`
Value
A reshaped data.table into longer format
Examples
library(data.table)
example_dt <- data.table(x = c(1,2,3), y = c(4,5,6), z = c("a", "b", "c"))
dt_pivot_longer(example_dt, cols = c(x, y), names_to = "stuff", values_to = "things")
dt_pivot_longer(example_dt, cols = -z, names_to = "stuff", values_to = "things")
dt_pivot_wider
Pivot data from long to wide
Description
dt_pivot_wider() "widens" data, increasing the number of columns and decreasing the number of rows.
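The wide-to-long reshape performed by dt_pivot_longer() can be sketched on plain records: each selected column becomes a name/value pair, while the remaining columns are carried along as identifiers. This is an illustrative Python analogue, not the package's melt.data.table()-based implementation:

```python
# Illustrative melt: rows are dicts, cols lists the columns to lengthen.
def pivot_longer(rows, cols, names_to="name", values_to="value"):
    """Turn each listed column of every record into its own row,
    keeping the non-listed columns as identifier fields."""
    out = []
    for row in rows:
        ids = {k: v for k, v in row.items() if k not in cols}
        for c in cols:
            out.append({**ids, names_to: c, values_to: row[c]})
    return out

wide = [{"z": "a", "x": 1, "y": 4}]
long = pivot_longer(wide, cols=["x", "y"])
```

One wide row with two measured columns yields two long rows, which is exactly the row-count increase the Description refers to.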
The inverse transformation is dt_pivot_longer(). Syntax based on the tidyr equivalents.
Usage
dt_pivot_wider(dt_, id_cols = NULL, names_from, names_sep = "_", values_from)
Arguments
dt_ the data table to widen
id_cols A set of columns that uniquely identifies each observation. Defaults to all columns in the data table except for the columns specified in names_from and values_from. Typically used when you have additional variables that are directly related.
names_from A pair of arguments describing which column (or columns) to get the name of the output column (names_from), and which column (or columns) to get the cell values from (values_from).
names_sep the separator between the names of the columns
values_from A pair of arguments describing which column (or columns) to get the name of the output column (names_from), and which column (or columns) to get the cell values from (values_from).
Value
A reshaped data.table into wider format
Examples
library(data.table)
example_dt <- data.table(z = rep(c("a", "b", "c"), 2),
                         stuff = c(rep("x", 3), rep("y", 3)),
                         things = 1:6)
dt_pivot_wider(example_dt, names_from = stuff, values_from = things)
dt_pivot_wider(example_dt, names_from = stuff, values_from = things, id_cols = z)
dt_print_options
Set Print Method
Description
The function allows the user to define options relating to the print method for data.table.
Usage
dt_print_options(
  class = TRUE,
  topn = 5,
  rownames = TRUE,
  nrows = 100,
  trunc.cols = TRUE
)
Arguments
class should the variable class be printed? (options("datatable.print.class"))
topn the number of rows to print (both head and tail) if nrows(DT) > nrows. (options("datatable.print.topn"))
rownames should rownames be printed? (options("datatable.print.rownames"))
nrows total number of rows to print (options("datatable.print.nrows"))
trunc.cols if TRUE, only the columns that fit in the console are printed (with a message stating the variables not shown, similar to tibbles; options("datatable.print.trunc.cols")).
This only works on data.table versions higher than 1.12.6 (i.e. not currently available but anticipating the eventual release).
Value
None. This function is used for its side effect of changing options.
Examples
dt_print_options(
  class = TRUE,
  topn = 5,
  rownames = TRUE,
  nrows = 100,
  trunc.cols = TRUE)
dt_separate
Separate columns with data.table
Description
Separates a column of data into others, by splitting based on a separator or regular expression
Usage
dt_separate(
  dt_,
  col,
  into,
  sep = ".",
  remove = TRUE,
  fill = NA,
  fixed = TRUE,
  immutable = TRUE,
  ...
)
Arguments
dt_ the data table (or if not a data.table then it is coerced with as.data.table)
col the column to separate
into the names of the new columns created from splitting col.
sep the regular expression stating how col should be separated. Default is ".".
remove should col be removed in the returned data table? Default is TRUE
fill if empty, fill is inserted. Default is NA.
fixed logical. If TRUE match split exactly, otherwise use regular expressions. Has priority over perl.
immutable If TRUE, dt_ is treated as immutable (it will not be modified in place). Alternatively, you can set immutable = FALSE to modify the input object.
... arguments passed to data.table::tstrsplit()
Value
A data.table with a column split into multiple columns.
Examples
library(data.table)
d <- data.table(x = c("A.B", "A", "B", "B.A"), y = 1:4)
# defaults
dt_separate(d, x, c("c1", "c2"))
# can keep the original column with `remove = FALSE`
dt_separate(d, x, c("c1", "c2"), remove = FALSE)
# need to assign when `immutable = TRUE` (the default)
separated <- dt_separate(d, x, c("c1", "c2"), immutable = TRUE)
separated
# don't need to assign when `immutable = FALSE`
dt_separate(d, x, c("c1", "c2"), immutable = FALSE)
d
dt_starts_with
Select helpers
Description
These functions allow you to select variables based on their names.
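The core of dt_separate(), split on a separator into a fixed set of named columns, padding short results with the fill value, can be sketched on a single value. This Python analogue is illustrative only; the real function operates on whole data.table columns via tstrsplit():

```python
# Illustrative per-value split, mirroring dt_separate's col/into/sep/fill.
def separate(value, into, sep=".", fill=None):
    """Split value on sep into len(into) named parts; missing parts
    are padded with fill, surplus parts are dropped."""
    parts = value.split(sep)
    parts += [fill] * (len(into) - len(parts))
    return dict(zip(into, parts[: len(into)]))

separate("A.B", ["c1", "c2"])   # both target columns populated
separate("A", ["c1", "c2"])     # second column padded with the fill value
```

Note the R default `fixed = TRUE` corresponds to the literal split used here; with `fixed = FALSE` the separator would be a regular expression instead.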
• dt_starts_with(): Starts with a prefix
• dt_ends_with(): Ends with a suffix
• dt_contains(): Contains a literal string
• dt_everything(): Matches all variables
Usage
dt_starts_with(match)
dt_contains(match)
dt_ends_with(match)
dt_everything()
Arguments
match a character string to match to variable names
Value
None. To be used within the dt_pivot_* functions.
Examples
library(data.table)
# example of using it with `dt_pivot_longer()`
df <- data.table(row = 1, var = c("x", "y"), a = 1:2, b = 3:4)
pv <- dt_pivot_wider(df,
                     names_from = var,
                     values_from = c(dt_starts_with("a"), dt_ends_with("b")))
dt_uncount
Uncount
Description
Uncount a counted data table
Usage
dt_uncount(dt_, weights, .remove = TRUE, .id = NULL)
Arguments
dt_ the data table to uncount
weights the counts for each row
.remove should the weights variable be removed?
.id an optional new id variable, providing a unique id for each row
Value
A data.table with rows duplicated according to the counts.
Examples
library(data.table)
dt_count <- data.table(
  x = LETTERS[1:3],
  w = c(2,1,4)
)
uncount <- dt_uncount(dt_count, w, .id = "id")
uncount[] # note that `[]` forces the printing
dt_unnest
Unnest: Fast Unnesting of Data Tables
Description
Quickly unnest data tables, particularly those nested by dt_nest().
Usage
dt_unnest(dt_, col, ...)
Arguments
dt_ the data table to unnest
col the column to unnest
... any of the other variables in the nested table that you want to keep in the unnested table. Bare variable names. If none are provided, all variables are kept.
Examples
library(data.table)
dt <- data.table(
  x = rnorm(1e5),
  y = runif(1e5),
  grp = sample(1L:3L, 1e5, replace = TRUE)
)
nested <- dt_nest(dt, grp)
dt_unnest(nested, col = data)
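The uncount operation above, repeat each row according to its weight, drop the weight column, and optionally number the copies via `.id`, can be sketched directly. This Python analogue is illustrative, not the package's data.table implementation:

```python
# Illustrative uncount over rows-as-dicts, mirroring dt_uncount's
# weights and .id arguments.
def uncount(rows, weights_key, id_key=None):
    """Repeat each row w times (w taken from weights_key), dropping the
    weight column; if id_key is given, number the copies 1..w."""
    out = []
    for row in rows:
        w = int(row[weights_key])
        base = {k: v for k, v in row.items() if k != weights_key}
        for i in range(w):
            copy = dict(base)
            if id_key:
                copy[id_key] = i + 1
            out.append(copy)
    return out

uncount([{"x": "A", "w": 2}, {"x": "B", "w": 1}], "w")
```

A counted table with weights (2, 1, 4) therefore expands back to 7 rows, inverting what dt_count() produced.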
erd
ruby
Ruby
Erd
===
A Rails engine for drawing your app's ER diagram and operating migrations
Requirements
---
* Rails 7.0, 6.1, 6.0, 5.2, 5.1, 5.0, 4.2, 4.1, 4.0, 3.2, or 3.1
* Graphviz
Installation
---
Bundle 'erd' gem to your existing Rails app's Gemfile:
```
gem 'erd', group: :development
```
Usage
---
Browse at your <http://localhost:3000/erd>
Features
---
### Show Mode
* Erd draws an ER diagram based on your app's database and models.
* You can drag and arrange the positions of each model.
  + Then you can save the positions to a local file `db/erd_positions.json`, so you can share the diagram between the team members.
### Edit Mode
* You can operate DB schema manipulations such as `add column`, `rename column`, `alter column`, `create model (as well as table)`, and `drop table`.
* Then, Erd generates migration files on the server.
* And you can run each migration on your browser super quickly.
TODO
---
* Fix buggy JS
* drop column (need to think of the UI)
* stop depending on Graphviz
* tests
* cleaner code (the code is horrible. Please don't read the code, though of course your patches welcome)
Contributing to Erd
---
* Send me your pull requests!
Team
---
* [<NAME>](https://github.com/amatsuda)
* [<NAME>](http://github.com/machida) (design)
Copyright
---
Copyright (c) 2012 <NAME>. See MIT-LICENSE for further details.
boilex
hex
Erlang
Boilex
===
Boilex is a mix-based Elixir development tool. It
* generates dev tools configurations for
  + static bytecode analysis
  + source code analysis
  + test coverage analysis
* generates Circleci configurations for
  + testing
  + building docker images
  + pushing to Dockerhub
  + building project documentation
  + ERD
* generates Docker configurations
* generates development scripts (for remote debugging etc)
* provides releases versioning
Installation
---
Add the following parameters to `deps` function in `mix.exs` file
```
# development tools
{:excoveralls, "~> 0.8", runtime: false},
{:dialyxir, "~> 0.5", runtime: false},
{:ex_doc, "~> 0.19", runtime: false},
{:credo, "~> 0.9", runtime: false},
{:boilex, "~> 0.2", runtime: false},
```
Usage
---
### boilex.init
Command `mix boilex.init` generates development tools configuration files in an already existing Elixir project. It can be used with any **Elixir** or **Phoenix** application except *umbrella* projects. To generate configuration, execute this command and follow the instructions.
```
cd ./myproject
mix deps.get && mix compile
mix boilex.init
```
* `Coveralls` tool will help you to check test coverage for each module of the new project. Can be configured with `coveralls.json` file. It's recommended to keep minimal test coverage = 100%.
* `Dialyzer` is a static analysis tool for BEAM bytecode. The most useful feature of this tool is type inference, which works in your project out of the box without writing any explicit function specs or any other overhead. Can be configured with `.dialyzer_ignore` file.
* [`ExDoc`](https://hexdocs.pm/ex_doc/0.19.1/ExDoc.html) is a tool to generate beautiful documentation for your Elixir projects.
* `Credo` static code analysis tool will make your code pretty and consistent. Can be configured with `.credo.exs` file.
* `scripts` directory contains auto-generated bash helper scripts.
### boilex.release
The script bumps the version, creates a new release, updates the changelog and pushes a new tag to GitHub. The argument is one of `patch | minor | major`. Example:
```
mix boilex.release patch
```
### boilex.hex.publish
The task is a wrapper around the standard `mix hex.publish`, but it prevents accidental pushing of private organization packages to open-source. Can accept an optional `--confirm-public` flag to enforce an open-source push.
```
mix boilex.hex.publish [--confirm-public]
```
### boilex.ci
Some mix tasks are made to be used by CI. But of course the tasks can be executed locally if needed. List of tasks:
```
mix help | grep "boilex\.ci"
```
### scripts
* `.env` text file contains variables required by some scripts and mix tasks.
* `start.sh` locally starts the compiled application.
* `pre-commit.sh` is a git pre-commit hook. This script will compile the project and execute all possible checks. The script will not let you commit until all issues reported by the compiler and static analysis tools are fixed and all tests pass.
* `remote-iex.sh` provides direct access to a remote erlang node through `iex`.
* `cluster-iex.sh` connects the current erlang node to a remote erlang node. All local debug tools (for example Observer) are available to debug the remote node. Hot code reloading is also available.
* `docs.sh` creates and opens the project documentation.
* `coverage.sh` creates and opens the project test coverage report.
Some system variables are required by some scripts; descriptions of all variables:
* `ERLANG_HOST` remote hostname to connect, example: **www.elixir-app.com**
* `ERLANG_OTP_APPLICATION` lowercase and snakecase standard OTP application name, example: **elixir_app**
* `ERLANG_COOKIE` remote Erlang node cookie, example: **OEBy/p9vFWi85XTeYOUvIwLr/sZctkHPKWNxfTtf81M=**
* `ENABLE_DIALYZER` run Dialyzer checks in pre-commit hooks or not, example: **false**
* `CHANGELOG_GITHUB_TOKEN` is a token for the *github_changelog_generator* utility. The token is **required** for private repos.
Reference is [HERE](https://github.com/skywinder/github-changelog-generator#github-token). Variables can be defined in `scripts/.env` file locally (useful for development) or globally in the system.
TODO
===
* Add standard project template generator.
* Add phoenix project generator with option `--without-crap` to avoid JS, CSS, other unnecessary static stuff and unnecessary Elixir code.
* Make files generator more configurable.
Boilex v0.2.9 Boilex === Some compile-time executed code 1) Fetching submodules. 2) Generation of pre-commit hook link. Boilex v0.2.9 Boilex.Generator.Circleci === [Link to this section](#summary) Summary === [Functions](#functions) --- [run(assigns)](#run/1) [Link to this section](#functions) Functions === [Link to this function](#run/1 "Link to this function") run(assigns) Boilex v0.2.9 Boilex.Generator.DevTools === [Link to this section](#summary) Summary === [Functions](#functions) --- [run(assigns)](#run/1) [Link to this section](#functions) Functions === [Link to this function](#run/1 "Link to this function") run(assigns) Boilex v0.2.9 Boilex.Generator.Docker === [Link to this section](#summary) Summary === [Functions](#functions) --- [run(assigns)](#run/1) [Link to this section](#functions) Functions === [Link to this function](#run/1 "Link to this function") run(assigns) Boilex v0.2.9 Boilex.Generator.Scripts === [Link to this section](#summary) Summary === [Functions](#functions) --- [run(assigns)](#run/1) [Link to this section](#functions) Functions === [Link to this function](#run/1 "Link to this function") run(assigns) Boilex v0.2.9 Boilex.Utils === [Link to this section](#summary) Summary === [Functions](#functions) --- [create_script(name, value)](#create_script/2) [create_symlink(destination_path, symlink_path)](#create_symlink/2) [fetch_elixir_version()](#fetch_elixir_version/0) [Link to this section](#functions) Functions === [Link to this 
function](#create_script/2 "Link to this function") create_script(name, value) [Link to this function](#create_symlink/2 "Link to this function") create_symlink(destination_path, symlink_path) [Link to this function](#fetch_elixir_version/0 "Link to this function") fetch_elixir_version() Boilex v0.2.9 mix boilex.ci.docker.build === Builds application docker image Usage === ``` cd ./myproject mix mix.tasks.boilex.ci.docker.build $args ``` [Link to this section](#summary) Summary === [Functions](#functions) --- [run(args)](#run/1) A task needs to implement `run` which receives a list of command line args [Link to this section](#functions) Functions === [Link to this function](#run/1 "Link to this function") run(args) ``` run([OptionParser.argv](https://hexdocs.pm/elixir/OptionParser.html#t:argv/0)()) :: :ok ``` A task needs to implement `run` which receives a list of command line args. Callback implementation for [`Mix.Task.run/1`](https://hexdocs.pm/mix/Mix.Task.html#c:run/1). Boilex v0.2.9 mix boilex.ci.docker.client.install === Installs docker client Usage === ``` cd ./myproject mix mix.tasks.boilex.ci.docker.client.install $args ``` [Link to this section](#summary) Summary === [Functions](#functions) --- [run(args)](#run/1) A task needs to implement `run` which receives a list of command line args [Link to this section](#functions) Functions === [Link to this function](#run/1 "Link to this function") run(args) ``` run([OptionParser.argv](https://hexdocs.pm/elixir/OptionParser.html#t:argv/0)()) :: :ok ``` A task needs to implement `run` which receives a list of command line args. Callback implementation for [`Mix.Task.run/1`](https://hexdocs.pm/mix/Mix.Task.html#c:run/1). 
Boilex v0.2.9 mix boilex.ci.docker.push === Pushes application docker image to dockerhub Usage === ``` cd ./myproject mix mix.tasks.boilex.ci.docker.push $args ``` [Link to this section](#summary) Summary === [Functions](#functions) --- [run(args)](#run/1) A task needs to implement `run` which receives a list of command line args [Link to this section](#functions) Functions === [Link to this function](#run/1 "Link to this function") run(args) ``` run([OptionParser.argv](https://hexdocs.pm/elixir/OptionParser.html#t:argv/0)()) :: :ok ``` A task needs to implement `run` which receives a list of command line args. Callback implementation for [`Mix.Task.run/1`](https://hexdocs.pm/mix/Mix.Task.html#c:run/1). Boilex v0.2.9 mix boilex.hex.publish === mix hex.publish wrapper Prevents accidental pushing of private code to open-source. Usage === ``` # publish to private organization repo cd ./myproject mix boilex.hex.publish # publish to open-source cd ./myproject mix boilex.hex.publish --confirm-public ``` [Link to this section](#summary) Summary === [Functions](#functions) --- [run(args)](#run/1) A task needs to implement `run` which receives a list of command line args [Link to this section](#functions) Functions === [Link to this function](#run/1 "Link to this function") run(args) ``` run([OptionParser.argv](https://hexdocs.pm/elixir/OptionParser.html#t:argv/0)()) :: :ok ``` A task needs to implement `run` which receives a list of command line args. Callback implementation for [`Mix.Task.run/1`](https://hexdocs.pm/mix/Mix.Task.html#c:run/1). Boilex v0.2.9 mix boilex.init === Creates new (or updates old) configuration files for Elixir dev tools and scripts. 
Usage === ``` cd ./myproject mix boilex.init ``` [Link to this section](#summary) Summary === [Functions](#functions) --- [run(_)](#run/1) A task needs to implement `run` which receives a list of command line args [Link to this section](#functions) Functions === [Link to this function](#run/1 "Link to this function") run(_) ``` run([OptionParser.argv](https://hexdocs.pm/elixir/OptionParser.html#t:argv/0)()) :: :ok ``` A task needs to implement `run` which receives a list of command line args. Callback implementation for [`Mix.Task.run/1`](https://hexdocs.pm/mix/Mix.Task.html#c:run/1). Toggle Theme Boilex v0.2.9 mix boilex.release === Bumps version, updates changelog and pushes new release Argument is release kind: patch | minor | major Usage === ``` cd ./myproject mix boilex.release patch ``` [Link to this section](#summary) Summary === [Functions](#functions) --- [run(list)](#run/1) A task needs to implement `run` which receives a list of command line args [Link to this section](#functions) Functions === [Link to this function](#run/1 "Link to this function") run(list) ``` run([OptionParser.argv](https://hexdocs.pm/elixir/OptionParser.html#t:argv/0)()) :: :ok ``` A task needs to implement `run` which receives a list of command line args. Callback implementation for [`Mix.Task.run/1`](https://hexdocs.pm/mix/Mix.Task.html#c:run/1).
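All of these tasks share the shape described above: a module under `Mix.Tasks.*` whose `run/1` callback receives the raw command-line arguments. A minimal sketch of such a task — the module name and the `--loud` option are hypothetical, not part of Boilex:

```elixir
defmodule Mix.Tasks.Example.Hello do
  @moduledoc "Minimal example task (hypothetical, for illustration only)."
  use Mix.Task

  # A task needs to implement `run/1`, which receives the
  # command-line arguments as a list of strings.
  @impl Mix.Task
  @spec run(OptionParser.argv()) :: :ok
  def run(args) do
    {opts, _rest, _invalid} =
      OptionParser.parse(args, switches: [loud: :boolean])

    message = if opts[:loud], do: "HELLO", else: "hello"
    Mix.shell().info(message)
    :ok
  end
end
```

From a project that compiles this module, it would be invoked as `mix example.hello --loud`.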
Package ‘rpart’
October 10, 2023

Priority: recommended
Version: 4.1.21
Date: 2023-10-09
Title: Recursive Partitioning and Regression Trees
Description: Recursive partitioning for classification, regression and survival trees. An implementation of most of the functionality of the 1984 book by Breiman, Friedman, Olshen and Stone.
Depends: R (>= 2.15.0), graphics, stats, grDevices
Suggests: survival
License: GPL-2 | GPL-3
LazyData: yes
ByteCompile: yes
NeedsCompilation: yes
Author: <NAME> [aut], <NAME> [aut, cre], <NAME> [trl] (producer of the initial R port, maintainer 1999-2017)
Maintainer: <NAME> <<EMAIL>>
Repository: CRAN
URL: https://github.com/bethatkinson/rpart, https://cran.r-project.org/package=rpart
BugReports: https://github.com/bethatkinson/rpart/issues
Date/Publication: 2023-10-09 22:40:02 UTC

R topics documented: car.test.frame, car90, cu.summary, kyphosis, labels.rpart, meanvar.rpart, na.rpart, path.rpart, plot.rpart, plotcp, post.rpart, predict.rpart, print.rpart, printcp, prune.rpart, residuals.rpart, rpart, rpart.control, rpart.exp, rpart.object, rsq.rpart, snip.rpart, solder.balance, stagec, summary.rpart, text.rpart, xpred.rpart

car.test.frame — Automobile Data from ’Consumer Reports’ 1990

Description
The car.test.frame data frame has 60 rows and 8 columns, giving data on makes of cars taken from the April, 1990 issue of Consumer Reports. This is part of a larger dataset, some columns of which are given in cu.summary.

Usage
car.test.frame

Format
This data frame contains the following columns:
Price a numeric vector giving the list price in US dollars of a standard model
Country of origin, a factor with levels ‘France’, ‘Germany’, ‘Japan’, ‘Japan/USA’, ‘Korea’, ‘Mexico’, ‘Sweden’ and ‘USA’
Reliability a numeric vector coded 1 to 5.
Mileage fuel consumption in miles per US gallon, as tested.
Type a factor with levels Compact, Large, Medium, Small, Sporty, Van
Weight kerb weight in pounds.
Disp. the engine capacity (displacement) in litres.
HP the net horsepower of the vehicle.

Source
Consumer Reports, April, 1990, pp. 235–288, quoted in <NAME> and Trevor J. Hastie eds. (1992) Statistical Models in S, Wadsworth and Brooks/Cole, Pacific Grove, CA, pp. 46–47.

See Also
car90, cu.summary

Examples
z.auto <- rpart(Mileage ~ Weight, car.test.frame)
summary(z.auto)

car90 — Automobile Data from ’Consumer Reports’ 1990

Description
Data on 111 cars, taken from pages 235–255, 281–285 and 287–288 of the April 1990 Consumer Reports Magazine.

Usage
data(car90)

Format
The data frame contains the following columns:
Country a factor giving the country in which the car was manufactured
Disp engine displacement in cubic inches
Disp2 engine displacement in liters
Eng.Rev engine revolutions per mile, or engine speed at 60 mph
Front.Hd distance between the car’s head-liner and the head of a 5 ft. 9 in. front seat passenger, in inches, as measured by CU
Frt.Leg.Room maximum front leg room, in inches, as measured by CU
Frt.Shld front shoulder room, in inches, as measured by CU
Gear.Ratio the overall gear ratio, high gear, for manual transmission
Gear2 the overall gear ratio, high gear, for automatic transmission
HP net horsepower
HP.revs the red line, i.e. the maximum safe engine speed in rpm
Height height of car, in inches, as supplied by manufacturer
Length overall length, in inches, as supplied by manufacturer
Luggage luggage space
Mileage a numeric vector of gas mileage in miles/gallon as tested by CU; contains NAs.
Model2 alternate name, if the car was sold under two labels
Price list price with standard equipment, in dollars
Rear.Hd distance between the car’s head-liner and the head of a 5 ft. 9 in. rear seat passenger, in inches, as measured by CU
Rear.Seating rear fore-and-aft seating room, in inches, as measured by CU
RearShld rear shoulder room, in inches, as measured by CU
Reliability an ordered factor with levels ‘Much worse’ < ‘worse’ < ‘average’ < ‘better’ < ‘Much better’; contains NAs.
Rim factor giving the rim size
Sratio.m number of turns of the steering wheel required for a turn of 30 foot radius, manual steering
Sratio.p number of turns of the steering wheel required for a turn of 30 foot radius, power steering
Steering steering type offered: manual, power, or both
Tank fuel refill capacity in gallons
Tires factor giving tire size
Trans1 manual transmission, a factor with levels ‘’, ‘man.4’, ‘man.5’ and ‘man.6’
Trans2 automatic transmission, a factor with levels ‘’, ‘auto.3’, ‘auto.4’, and ‘auto.CVT’. No car is missing both the manual and automatic transmission variables, but several had both as options
Turning the radius of the turning circle in feet
Type a factor giving the general type of car. The levels are: ‘Small’, ‘Sporty’, ‘Compact’, ‘Medium’, ‘Large’, ‘Van’
Weight an order statistic giving the relative weights of the cars; 1 is the lightest and 111 is the heaviest
Wheel.base length of wheelbase, in inches, as supplied by manufacturer
Width width of car, in inches, as supplied by manufacturer

Source
This is derived (with permission) from the data set car.all in S-PLUS, but with some further clean up of variable names and definitions.

See Also
car.test.frame, cu.summary for extracts from other versions of the dataset.

Examples
data(car90)
plot(car90$Price/1000, car90$Weight,
     xlab = "Price (thousands)", ylab = "Weight (lbs)")
mlowess <- function(x, y, ...) {
    keep <- !(is.na(x) | is.na(y))
    lowess(x[keep], y[keep], ...)
}
with(car90, lines(mlowess(Price/1000, Weight, f = 0.5)))

cu.summary — Automobile Data from ’Consumer Reports’ 1990

Description
The cu.summary data frame has 117 rows and 5 columns, giving data on makes of cars taken from the April, 1990 issue of Consumer Reports.

Usage
cu.summary

Format
This data frame contains the following columns:
Price a numeric vector giving the list price in US dollars of a standard model
Country of origin, a factor with levels ‘Brazil’, ‘England’, ‘France’, ‘Germany’, ‘Japan’, ‘Japan/USA’, ‘Korea’, ‘Mexico’, ‘Sweden’ and ‘USA’
Reliability an ordered factor with levels ‘Much worse’ < ‘worse’ < ‘average’ < ‘better’ < ‘Much better’
Mileage fuel consumption in miles per US gallon, as tested.
Type a factor with levels Compact, Large, Medium, Small, Sporty, Van

Source
Consumer Reports, April, 1990, pp. 235–288, quoted in <NAME> and Trevor J. Hastie eds. (1992) Statistical Models in S, Wadsworth and Brooks/Cole, Pacific Grove, CA, pp. 46–47.

See Also
car.test.frame, car90

Examples
fit <- rpart(Price ~ Mileage + Type + Country, cu.summary)
par(xpd = TRUE)
plot(fit, compress = TRUE)
text(fit, use.n = TRUE)

kyphosis — Data on Children who have had Corrective Spinal Surgery

Description
The kyphosis data frame has 81 rows and 4 columns, representing data on children who have had corrective spinal surgery.

Usage
kyphosis

Format
This data frame contains the following columns:
Kyphosis a factor with levels absent and present, indicating if a kyphosis (a type of deformation) was present after the operation.
Age age in months
Number the number of vertebrae involved
Start the number of the first (topmost) vertebra operated on.

Source
<NAME> and Trevor J. Hastie eds. (1992) Statistical Models in S, Wadsworth and Brooks/Cole, Pacific Grove, CA.
Examples
fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis)
fit2 <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis,
              parms = list(prior = c(0.65, 0.35), split = "information"))
fit3 <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis,
              control = rpart.control(cp = 0.05))
par(mfrow = c(1, 2), xpd = TRUE)
plot(fit)
text(fit, use.n = TRUE)
plot(fit2)
text(fit2, use.n = TRUE)

labels.rpart — Create Split Labels For an Rpart Object

Description
This function provides labels for the branches of an rpart tree.

Usage
## S3 method for class 'rpart'
labels(object, digits = 4, minlength = 1L, pretty, collapse = TRUE, ...)

Arguments
object fitted model object of class "rpart". This is assumed to be the result of some function that produces an object with the same named components as that returned by the rpart function.
digits the number of digits to be used for numeric values. All of the rpart functions that call labels explicitly set this value, with options("digits") as the default.
minlength the minimum length for abbreviation of character or factor variables. If 0, no abbreviation is done; if 1, single English letters are used, first lower case, then upper case (with a maximum of 52 levels). If the value is greater than 1, the abbreviate function is used, passed the minlength argument.
pretty an argument included for compatibility with the original Splus tree package: pretty = 0 implies minlength = 0L, pretty = NULL implies minlength = 1L, and pretty = TRUE implies minlength = 4L.
collapse logical. The returned set of labels is always of the same length as the number of nodes in the tree. If collapse = TRUE (default), the returned value is a vector of labels for the branch leading into each node, with "root" as the label for the top node. If FALSE, the returned value is a two column matrix of labels for the left and right branches leading out from each node, with "leaf" as the branch labels for terminal nodes.
... optional arguments to abbreviate.

Value
Vector of split labels (collapse = TRUE) or matrix of left and right splits (collapse = FALSE) for the supplied rpart object. This function is called by printing methods for rpart and is not intended to be called directly by users.

See Also
abbreviate

meanvar.rpart — Mean-Variance Plot for an Rpart Object

Description
Creates a plot on the current graphics device of the deviance of the node divided by the number of observations at the node. Also returns the node number.

Usage
meanvar(tree, ...)
## S3 method for class 'rpart'
meanvar(tree, xlab = "ave(y)", ylab = "ave(deviance)", ...)

Arguments
tree fitted model object of class "rpart". This is assumed to be the result of some function that produces an object with the same named components as that returned by the rpart function.
xlab x-axis label for the plot.
ylab y-axis label for the plot.
... additional graphical parameters may be supplied as arguments to this function.

Value
An invisible list containing the following vectors is returned.
x fitted value at terminal nodes (yval).
y deviance of node divided by number of observations at node.
label node number.

Side Effects
A plot is put on the current graphics device.

See Also
plot.rpart.

Examples
z.auto <- rpart(Mileage ~ Weight, car.test.frame)
meanvar(z.auto, log = 'xy')

na.rpart — Handles Missing Values in an Rpart Object

Description
Handles missing values in an "rpart" object.

Usage
na.rpart(x)

Arguments
x a model frame.

Details
Default function that handles missing values when calling the function rpart. It omits cases where part of the response is missing or all the explanatory variables are missing.

path.rpart — Follow Paths to Selected Nodes of an Rpart Object

Description
Returns a named list where each element contains the splits on the path from the root to the selected nodes.

Usage
path.rpart(tree, nodes, pretty = 0, print.it = TRUE)

Arguments
tree fitted model object of class "rpart". This is assumed to be the result of some function that produces an object with the same named components as that returned by the rpart function.
nodes an integer vector containing indices (node numbers) of all nodes for which paths are desired. If missing, the user selects nodes as described below.
pretty an integer denoting the extent to which factor levels in split labels will be abbreviated. A value of 0 signifies no abbreviation. A NULL, the default, signifies using elements of letters to represent the different factor levels.
print.it logical. Denotes whether paths will be printed out as nodes are interactively selected. Irrelevant if the nodes argument is supplied.

Details
The function takes an rpart object as a required argument and a list of nodes as an optional argument. Omitting a list of nodes will cause the function to wait for the user to select nodes from the dendrogram. It will return a list, with one component for each node specified or selected. The component contains the sequence of splits leading to that node. In the graphical interaction, the individual paths are printed out as nodes are selected.

Value
A named (by node) list, each element of which contains all the splits on the path from the root to the specified or selected nodes.

Graphical Interaction
A dendrogram of the rpart object is expected to be visible on the graphics device, and a graphics input device (e.g. a mouse) is required. Clicking (the selection button) on a node selects that node. This process may be repeated any number of times. Clicking the exit button will stop the selection process and return the list of paths.

References
This function was modified from path.tree in S.

See Also
rpart

Examples
fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis)
print(fit)
path.rpart(fit, nodes = c(11, 22))

plot.rpart — Plot an Rpart Object

Description
Plots an rpart object on the current graphics device.
Usage
## S3 method for class 'rpart'
plot(x, uniform = FALSE, branch = 1, compress = FALSE, nspace,
     margin = 0, minbranch = 0.3, branch.col = 1, branch.lty = 1,
     branch.lwd = 1, ...)

Arguments
x a fitted object of class "rpart", containing a classification, regression, or rate tree.
uniform if TRUE, uniform vertical spacing of the nodes is used; this may be less cluttered when fitting a large plot onto a page. The default is to use a non-uniform spacing proportional to the error in the fit.
branch controls the shape of the branches from parent to child node. Any number from 0 to 1 is allowed. A value of 1 gives square shouldered branches, a value of 0 gives V shaped branches, with other values being intermediate.
compress if FALSE, the leaf nodes will be at the horizontal plot coordinates of 1:nleaves. If TRUE, the routine attempts a more compact arrangement of the tree. The compaction algorithm assumes uniform = TRUE; surprisingly, the result is usually an improvement even when that is not the case.
nspace the amount of extra space between a node with children and a leaf, as compared to the minimal space between leaves. Applies to compressed trees only. The default is the value of branch.
margin an extra fraction of white space to leave around the borders of the tree. (Long labels sometimes get cut off by the default computation.)
minbranch set the minimum length for a branch to minbranch times the average branch length. This parameter is ignored if uniform = TRUE. Sometimes a split will give very little improvement, or even (in the classification case) no improvement at all. A tree with branch lengths strictly proportional to improvement leaves no room to squeeze in node labels.
branch.col set the color of the branches.
branch.lty set the line type of the branches.
branch.lwd set the line width of the branches.
... arguments to be passed to or from other methods.

Details
This function is a method for the generic function plot, for objects of class rpart. The y-coordinate of the top node of the tree will always be 1.

Value
The coordinates of the nodes are returned as a list, with components x and y.

Side Effects
An unlabeled plot is produced on the current graphics device, one being opened if needed. In order to build up a plot in the usual S style, e.g. a separate text command for adding labels, some extra information about the plot needs to be retained. This is kept in an environment in the package.

See Also
rpart, text.rpart

Examples
fit <- rpart(Price ~ Mileage + Type + Country, cu.summary)
par(xpd = TRUE)
plot(fit, compress = TRUE)
text(fit, use.n = TRUE)

plotcp — Plot a Complexity Parameter Table for an Rpart Fit

Description
Gives a visual representation of the cross-validation results in an rpart object.

Usage
plotcp(x, minline = TRUE, lty = 3, col = 1,
       upper = c("size", "splits", "none"), ...)

Arguments
x an object of class "rpart".
minline whether a horizontal line is drawn 1SE above the minimum of the curve.
lty line type for this line.
col colour for this line.
upper what is plotted on the top axis: the size of the tree (the number of leaves), the number of splits, or nothing.
... additional plotting parameters.

Details
The set of possible cost-complexity prunings of a tree form a nested set. For the geometric means of the intervals of values of cp for which a pruning is optimal, a cross-validation has (usually) been done in the initial construction by rpart. The cptable in the fit contains the mean and standard deviation of the errors in the cross-validated prediction against each of the geometric means, and these are plotted by this function. A good choice of cp for pruning is often the leftmost value for which the mean lies below the horizontal line.

Value
None.

Side Effects
A plot is produced on the current graphical device.

See Also
rpart, printcp, rpart.object

post.rpart — PostScript Presentation Plot of an Rpart Object

Description
Generates a PostScript presentation plot of an rpart object.
Usage
post(tree, ...)
## S3 method for class 'rpart'
post(tree, title.,
     filename = paste(deparse(substitute(tree)), ".ps", sep = ""),
     digits = getOption("digits") - 2, pretty = TRUE,
     use.n = TRUE, horizontal = TRUE, ...)

Arguments
tree fitted model object of class "rpart". This is assumed to be the result of some function that produces an object with the same named components as that returned by the rpart function.
title. a title which appears at the top of the plot. By default, the name of the rpart endpoint is printed out.
filename ASCII file to contain the output. By default, the name of the file is the name of the object given by rpart (with the suffix .ps added). If filename = "", the plot appears on the current graphical device.
digits number of significant digits to include in numerical data.
pretty an integer denoting the extent to which factor levels will be abbreviated in the character strings defining the splits; 0 signifies no abbreviation of levels. A NULL signifies using elements of letters to represent the different factor levels. The default (TRUE) indicates the maximum possible abbreviation.
use.n logical. If TRUE (default), adds to the label (#events level1/ #events level2/ etc. for method class, n for method anova, and #events/n for methods poisson and exp).
horizontal logical. If TRUE (default), plot is horizontal. If FALSE, plot appears as landscape.
... other arguments to the postscript function.

Details
The plot created uses the functions plot.rpart and text.rpart (with the fancy option). The settings were chosen because they looked good to us, but other options may be better, depending on the rpart object. Users are encouraged to write their own function containing favorite options.

Side Effects
A plot of rpart is created using the postscript driver, or the current device if filename = "".

See Also
plot.rpart, rpart, text.rpart, abbreviate

Examples
## Not run:
z.auto <- rpart(Mileage ~ Weight, car.test.frame)
post(z.auto, file = "")   # display tree on active device
# now construct postscript version on file "pretty.ps"
# with no title
post(z.auto, file = "pretty.ps", title = " ")
z.hp <- rpart(Mileage ~ Weight + HP, car.test.frame)
post(z.hp)
## End(Not run)

predict.rpart — Predictions from a Fitted Rpart Object

Description
Returns a vector of predicted responses from a fitted rpart object.

Usage
## S3 method for class 'rpart'
predict(object, newdata,
        type = c("vector", "prob", "class", "matrix"),
        na.action = na.pass, ...)

Arguments
object fitted model object of class "rpart". This is assumed to be the result of some function that produces an object with the same named components as that returned by the rpart function.
newdata data frame containing the values at which predictions are required. The predictors referred to in the right side of formula(object) must be present by name in newdata. If missing, the fitted values are returned.
type character string denoting the type of predicted value returned. If the rpart object is a classification tree, then the default is to return prob predictions, a matrix whose columns are the probability of the first, second, etc. class. (This agrees with the default behavior of tree.) Otherwise, a vector result is returned.
na.action a function to determine what should be done with missing values in newdata. The default is to pass them down the tree using surrogates in the way selected when the model was built. Other possibilities are na.omit and na.fail.
... further arguments passed to or from other methods.

Details
This function is a method for the generic function predict for class "rpart". It can be invoked by calling predict for an object of the appropriate class, or directly by calling predict.rpart regardless of the class of the object.

Value
A new object is obtained by dropping newdata down the object.
For factor predictors, if an observation contains a level not used to grow the tree, it is left at the deepest possible node and frame$yval at the node is the prediction.
If type = "vector": vector of predicted responses. For regression trees this is the mean response at the node, for Poisson trees it is the estimated response rate, and for classification trees it is the predicted class (as a number).
If type = "prob": (for a classification tree) a matrix of class probabilities.
If type = "matrix": a matrix of the full responses (frame$yval2 if this exists, otherwise frame$yval). For regression trees, this is the mean response, for Poisson trees it is the response rate and the number of events at that node in the fitted tree, and for classification trees it is the concatenation of at least the predicted class, the class counts at that node in the fitted tree, and the class probabilities (some versions of rpart may contain further columns).
If type = "class": (for a classification tree) a factor of classifications based on the responses.

See Also
predict, rpart.object

Examples
z.auto <- rpart(Mileage ~ Weight, car.test.frame)
predict(z.auto)

fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis)
predict(fit, type = "prob")   # class probabilities (default)
predict(fit, type = "vector") # level numbers
predict(fit, type = "class")  # factor
predict(fit, type = "matrix") # level number, class frequencies, probabilities

sub <- c(sample(1:50, 25), sample(51:100, 25), sample(101:150, 25))
fit <- rpart(Species ~ ., data = iris, subset = sub)
fit
table(predict(fit, iris[-sub,], type = "class"), iris[-sub, "Species"])

print.rpart — Print an Rpart Object

Description
This function prints an rpart object. It is a method for the generic function print of class "rpart".

Usage
## S3 method for class 'rpart'
print(x, minlength = 0, spaces = 2, cp,
      digits = getOption("digits"), nsmall = min(20, digits), ...)

Arguments
x fitted model object of class "rpart". This is assumed to be the result of some function that produces an object with the same named components as that returned by the rpart function.
minlength controls the abbreviation of labels: see labels.rpart.
spaces the number of spaces to indent nodes of increasing depth.
digits the number of digits of numbers to print.
nsmall the number of digits to the right of the decimal. See format.
cp prune all nodes with a complexity less than cp from the printout. Ignored if unspecified.
... arguments to be passed to or from other methods.

Details
This function is a method for the generic function print for class "rpart". It can be invoked by calling print for an object of the appropriate class, or directly by calling print.rpart regardless of the class of the object.

Side Effects
A semi-graphical layout of the contents of x$frame is printed. Indentation is used to convey the tree topology. Information for each node includes the node number, split, size, deviance, and fitted value. For the "class" method, the class probabilities are also printed.

See Also
print, rpart.object, summary.rpart, printcp

Examples
z.auto <- rpart(Mileage ~ Weight, car.test.frame)
z.auto
## Not run:
node), split, n, deviance, yval
      * denotes terminal node

 1) root 60 1354.58300 24.58333
   2) Weight>=2567.5 45  361.20000 22.46667
     4) Weight>=3087.5 22   61.31818 20.40909 *
     5) Weight<3087.5 23  117.65220 24.43478
      10) Weight>=2747.5 15   60.40000 23.80000 *
      11) Weight<2747.5 8    39.87500 25.62500 *
   3) Weight<2567.5 15  186.93330 30.93333 *
## End(Not run)

printcp — Displays CP table for Fitted Rpart Object

Description
Displays the cp table for a fitted rpart object.

Usage
printcp(x, digits = getOption("digits") - 2)

Arguments
x fitted model object of class "rpart". This is assumed to be the result of some function that produces an object with the same named components as that returned by the rpart function.
digits the number of digits of numbers to print.
Details
Prints a table of optimal prunings based on a complexity parameter.

See Also
summary.rpart, rpart.object

Examples
z.auto <- rpart(Mileage ~ Weight, car.test.frame)
printcp(z.auto)
## Not run:
Regression tree:
rpart(formula = Mileage ~ Weight, data = car.test.frame)

Variables actually used in tree construction:
[1] Weight

Root node error: 1354.6/60 = 22.576

        CP nsplit rel error  xerror     xstd
1 0.595349      0   1.00000 1.03436 0.178526
2 0.134528      1   0.40465 0.60508 0.105217
3 0.012828      2   0.27012 0.45153 0.083330
4 0.010000      3   0.25729 0.44826 0.076998
## End(Not run)

prune.rpart — Cost-complexity Pruning of an Rpart Object

Description
Determines a nested sequence of subtrees of the supplied rpart object by recursively snipping off the least important splits, based on the complexity parameter (cp).

Usage
prune(tree, ...)
## S3 method for class 'rpart'
prune(tree, cp, ...)

Arguments
tree fitted model object of class "rpart". This is assumed to be the result of some function that produces an object with the same named components as that returned by the rpart function.
cp complexity parameter to which the rpart object will be trimmed.
... further arguments passed to or from other methods.

Value
A new rpart object that is trimmed to the value cp.

See Also
rpart

Examples
z.auto <- rpart(Mileage ~ Weight, car.test.frame)
zp <- prune(z.auto, cp = 0.1)
plot(zp)   # plot smaller rpart object

residuals.rpart — Residuals From a Fitted Rpart Object

Description
Method for residuals for an rpart object.

Usage
## S3 method for class 'rpart'
residuals(object, type = c("usual", "pearson", "deviance"), ...)

Arguments
object fitted model object of class "rpart".
type indicates the type of residual desired. For regression or anova trees all three residual definitions reduce to y - fitted. This is the residual returned for user method trees as well. For classification trees the usual residuals are the misclassification losses L(actual, predicted), where L is the loss matrix. With default losses this residual is 0/1 for correct/incorrect classification. The pearson residual is (1-fitted)/sqrt(fitted(1-fitted)) and the deviance residual is sqrt(minus twice logarithm of fitted). For poisson and exp (or survival) trees, the usual residual is the observed - expected number of events. The pearson and deviance residuals are as defined in McCullagh and Nelder.
... further arguments passed to or from other methods.

Value
Vector of residuals of type type from a fitted rpart object.

References
<NAME>. and <NAME>. (1989) Generalized Linear Models. London: Chapman and Hall.

Examples
fit <- rpart(skips ~ Opening + Solder + Mask + PadType + Panel,
             data = solder.balance, method = "anova")
summary(residuals(fit))
plot(predict(fit), residuals(fit))

rpart — Recursive Partitioning and Regression Trees

Description
Fit an rpart model.

Usage
rpart(formula, data, weights, subset, na.action = na.rpart, method,
      model = FALSE, x = FALSE, y = TRUE, parms, control, cost, ...)

Arguments
formula a formula, with a response but no interaction terms. If this is a data frame, it is taken as the model frame (see model.frame).
data an optional data frame in which to interpret the variables named in the formula.
weights optional case weights.
subset optional expression saying that only a subset of the rows of the data should be used in the fit.
na.action the default action deletes all observations for which y is missing, but keeps those in which one or more predictors are missing.
method one of "anova", "poisson", "class" or "exp". If method is missing then the routine tries to make an intelligent guess. If y is a survival object, then method = "exp" is assumed, if y has 2 columns then method = "poisson" is assumed, if y is a factor then method = "class" is assumed, otherwise method = "anova" is assumed. It is wisest to specify the method directly, especially as more criteria may be added to the function in future.
Alternatively, method can be a list of functions named init, split and eval. Examples are given in the file ‘tests/usersplits.R’ in the sources, and in the vignettes ‘User Written Split Functions’. model if logical: keep a copy of the model frame in the result? If the input value for model is a model frame (likely from an earlier call to the rpart function), then this frame is used rather than constructing new data. x keep a copy of the x matrix in the result. y keep a copy of the dependent variable in the result. If missing and model is supplied this defaults to FALSE. parms optional parameters for the splitting function. Anova splitting has no parameters. Poisson splitting has a single parameter, the coefficient of variation of the prior distribution on the rates. The default value is 1. Exponential splitting has the same parameter as Poisson. For classification splitting, the list can contain any of: the vector of prior prob- abilities (component prior), the loss matrix (component loss) or the splitting index (component split). The priors must be positive and sum to 1. The loss matrix must have zeros on the diagonal and positive off-diagonal elements. The splitting index can be gini or information. The default priors are proportional to the data counts, the losses default to 1, and the split defaults to gini. control a list of options that control details of the rpart algorithm. See rpart.control. cost a vector of non-negative costs, one for each variable in the model. Defaults to one for all variables. These are scalings to be applied when considering splits, so the improvement on splitting on a variable is divided by its cost in deciding which split to choose. ... arguments to rpart.control may also be specified in the call to rpart. They are checked against the list of valid arguments. Details This differs from the tree function in S mainly in its handling of surrogate variables. In most details it follows Breiman et. al (1984) quite closely. 
R package tree provides a re-implementation of tree.

Value
An object of class rpart. See rpart.object.

References
<NAME>., <NAME>., <NAME>., and <NAME>. (1984) Classification and Regression Trees. Wadsworth.

See Also
rpart.control, rpart.object, summary.rpart, print.rpart

Examples
fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis)
fit2 <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis,
              parms = list(prior = c(.65, .35), split = "information"))
fit3 <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis,
              control = rpart.control(cp = 0.05))
par(mfrow = c(1, 2), xpd = NA) # otherwise on some devices the text is clipped
plot(fit)
text(fit, use.n = TRUE)
plot(fit2)
text(fit2, use.n = TRUE)

rpart.control                    Control for Rpart Fits

Description
Various parameters that control aspects of the rpart fit.

Usage
rpart.control(minsplit = 20, minbucket = round(minsplit/3), cp = 0.01,
              maxcompete = 4, maxsurrogate = 5, usesurrogate = 2, xval = 10,
              surrogatestyle = 0, maxdepth = 30, ...)

Arguments
minsplit    the minimum number of observations that must exist in a node in order for a split to be attempted.
minbucket   the minimum number of observations in any terminal <leaf> node. If only one of minbucket or minsplit is specified, the code either sets minsplit to minbucket*3 or minbucket to minsplit/3, as appropriate.
cp          complexity parameter. Any split that does not decrease the overall lack of fit by a factor of cp is not attempted. For instance, with anova splitting, this means that the overall R-squared must increase by cp at each step. The main role of this parameter is to save computing time by pruning off splits that are obviously not worthwhile. Essentially, the user informs the program that any split which does not improve the fit by cp will likely be pruned off by cross-validation, and that hence the program need not pursue it.
maxcompete  the number of competitor splits retained in the output.
It is useful to know not just which split was chosen, but which variable came in second, third, etc.
maxsurrogate    the number of surrogate splits retained in the output. If this is set to zero the compute time will be reduced, since approximately half of the computational time (other than setup) is used in the search for surrogate splits.
usesurrogate    how to use surrogates in the splitting process. 0 means display only; an observation with a missing value for the primary split rule is not sent further down the tree. 1 means use surrogates, in order, to split subjects missing the primary variable; if all surrogates are missing the observation is not split. For value 2, if all surrogates are missing, then send the observation in the majority direction. A value of 0 corresponds to the action of tree, and 2 to the recommendations of Breiman et al. (1984).
xval            number of cross-validations.
surrogatestyle  controls the selection of a best surrogate. If set to 0 (default) the program uses the total number of correct classifications for a potential surrogate variable, if set to 1 it uses the percent correct, calculated over the non-missing values of the surrogate. The first option more severely penalizes covariates with a large number of missing values.
maxdepth        set the maximum depth of any node of the final tree, with the root node counted as depth 0. Values greater than 30 will cause rpart to give nonsense results on 32-bit machines.
...             mop up other arguments.

Value
A list containing the options.

See Also
rpart

rpart.exp                    Initialization function for exponential fitting

Description
This function does the initialization step for rpart, when the response is a survival object. It rescales the data so as to have an exponential baseline hazard and then uses Poisson methods. This function would rarely if ever be called directly by a user.

Usage
rpart.exp(y, offset, parms, wt)

Arguments
y       the response, which will be of class Surv
offset  optional offset
parms   parameters controlling the fit.
This is a list with components shrink and method. The first is the prior for the coefficient of variation of the predictions. The second is either "deviance" or "sqrt" and is the measure used for cross-validation. If values are missing the defaults are used, which are "deviance" for the method, and a shrinkage of 1.0 for the deviance method and 0 for the square root.
wt      case weights, if present

Value
a list with the necessary initialization components

Author(s)
<NAME>

See Also
rpart

rpart.object                    Recursive Partitioning and Regression Trees Object

Description
These are objects representing fitted rpart trees.

Value
frame  data frame with one row for each node in the tree. The row.names of frame contain the (unique) node numbers that follow a binary ordering indexed by node depth. Columns of frame include var, a factor giving the names of the variables used in the split at each node (leaf nodes are denoted by the level "<leaf>"), n, the number of observations reaching the node, wt, the sum of case weights for observations reaching the node, dev, the deviance of the node, yval, the fitted value of the response at the node, and splits, a two column matrix of left and right split labels for each node. Also included in the frame are complexity, the complexity parameter at which this split will collapse, ncompete, the number of competitor splits recorded, and nsurrogate, the number of surrogate splits recorded. Extra response information which may be present is in yval2, which contains the number of events at the node (poisson tree), or a matrix containing the fitted class, the class counts for each node, the class probabilities and the ‘node probability’ (classification trees).
where  an integer vector of the same length as the number of observations in the root node, containing the row number of frame corresponding to the leaf node that each observation falls into.
call    an image of the call that produced the object, but with the arguments all named and with the actual formula included as the formula argument. To re-evaluate the call, say update(tree).
terms   an object of class c("terms", "formula") (see terms.object) summarizing the formula. Used by various methods, but typically not of direct relevance to users.
splits  a numeric matrix describing the splits: only present if there are any. The row label is the name of the split variable, and columns are count, the number of observations (which are not missing and are of positive weight) sent left or right by the split (for competitor splits this is the number that would have been sent left or right had this split been used, for surrogate splits it is the number missing the primary split variable which were decided using this surrogate), ncat, the number of categories or levels for the variable (+/-1 for a continuous variable), improve, which is the improvement in deviance given by this split, or, for surrogates, the concordance of the surrogate with the primary, and index, the numeric split point. The last column adj gives the adjusted concordance for surrogate splits. For a factor, the index column contains the row number of the csplit matrix. For a continuous variable, the sign of ncat determines whether the subset x < cutpoint or x > cutpoint is sent to the left.
csplit  an integer matrix. (Only present if at least one of the split variables is a factor or ordered factor.) There is a row for each such split, and the number of columns is the largest number of levels in the factors. Which row is given by the index column of the splits matrix. The columns record 1 if that level of the factor goes to the left, 3 if it goes to the right, and 2 if that level is not present at this node of the tree (or not defined for the factor).
method  character string: the method used to grow the tree.
One of "class", "exp", "poisson", "anova" or "user" (if splitting functions were supplied).
cptable              a matrix of information on the optimal prunings based on a complexity parameter.
variable.importance  a named numeric vector giving the importance of each variable. (Only present if there are any splits.) When printed by summary.rpart these are rescaled to add to 100.
numresp              integer number of responses; the number of levels for a factor response.
parms, control       a record of the arguments supplied, with defaults filled in.
functions            the summary, print and text functions for the method used.
ordered              a named logical vector recording for each variable if it was an ordered factor.
na.action            (where relevant) information returned by model.frame on the special handling of NAs derived from the na.action argument.
There may be attributes "xlevels" and "levels" recording the levels of any factor splitting variables and of a factor response respectively. Optional components include the model frame (model), the matrix of predictors (x) and the response variable (y) used to construct the rpart object.

Structure
The following components must be included in a legitimate rpart object.

See Also
rpart

rsq.rpart                    Plots the Approximate R-Square for the Different Splits

Description
Produces 2 plots. The first plots the r-square (apparent and from cross-validation) versus the number of splits. The second plots the relative error (from cross-validation) +/- 1-SE versus the number of splits.

Usage
rsq.rpart(x)

Arguments
x  fitted model object of class "rpart". This is assumed to be the result of some function that produces an object with the same named components as that returned by the rpart function.

Side Effects
Two plots are produced.

Note
The labels are only appropriate for the "anova" method.
Examples
z.auto <- rpart(Mileage ~ Weight, car.test.frame)
rsq.rpart(z.auto)

snip.rpart                    Snip Subtrees of an Rpart Object

Description
Creates a "snipped" rpart object, containing the nodes that remain after selected subtrees have been snipped off. The user can snip nodes using the toss argument, or interactively by clicking the mouse button on specified nodes within the graphics window.

Usage
snip.rpart(x, toss)

Arguments
x     fitted model object of class "rpart". This is assumed to be the result of some function that produces an object with the same named components as that returned by the rpart function.
toss  an integer vector containing indices (node numbers) of all subtrees to be snipped off. If missing, the user selects branches to snip off as described below.

Details
A dendrogram of rpart is expected to be visible on the graphics device, and a graphics input device (e.g., a mouse) is required. Clicking (the selection button) on a node displays the node number, sample size, response y-value, and Error (dev). Clicking a second time on the same node snips that subtree off and visually erases the subtree. This process may be repeated a number of times. Warnings result from selecting the root or leaf nodes. Clicking the exit button will stop the snipping process and return the resulting rpart object. See the documentation for the specific graphics device for details on graphical input techniques.

Value
A rpart object containing the nodes that remain after specified or selected subtrees have been snipped off.

Warning
Visually erasing the plot is done by over-plotting with the background colour. This will do nothing if the background is transparent (often true for screen devices).
See Also
plot.rpart

Examples
## dataset not in R
## Not run:
z.survey <- rpart(market.survey)            # grow the rpart object
plot(z.survey)                              # plot the tree
z.survey2 <- snip.rpart(z.survey, toss = 2) # trim subtree at node 2
plot(z.survey2)                             # plot new tree
# can also interactively select the node using the mouse in the
# graphics window
## End(Not run)

solder.balance                    Soldering of Components on Printed-Circuit Boards

Description
The solder.balance data frame has 720 rows and 6 columns, representing a balanced subset of a designed experiment varying 5 factors on the soldering of components on printed-circuit boards. The solder data frame is the full version of the data with 900 rows. It is located in both the rpart and the survival packages.

Usage
solder

Format
This data frame contains the following columns:
Opening  a factor with levels ‘L’, ‘M’ and ‘S’ indicating the amount of clearance around the mounting pad.
Solder   a factor with levels ‘Thick’ and ‘Thin’ giving the thickness of the solder used.
Mask     a factor with levels ‘A1.5’, ‘A3’, ‘B3’ and ‘B6’ indicating the type and thickness of mask used.
PadType  a factor with levels ‘D4’, ‘D6’, ‘D7’, ‘L4’, ‘L6’, ‘L7’, ‘L8’, ‘L9’, ‘W4’ and ‘W9’ giving the size and geometry of the mounting pad.
Panel    1:3 indicating the panel on a board being tested.
skips    a numeric vector giving the number of visible solder skips.

Source
<NAME> and <NAME>. Hastie eds. (1992) Statistical Models in S, Wadsworth and Brooks/Cole, Pacific Grove, CA.

Examples
fit <- rpart(skips ~ Opening + Solder + Mask + PadType + Panel,
             data = solder.balance, method = "anova")
summary(residuals(fit))
plot(predict(fit), residuals(fit))

stagec                    Stage C Prostate Cancer

Description
A set of 146 patients with stage C prostate cancer, from a study exploring the prognostic value of flow cytometry.

Usage
data(stagec)

Format
A data frame with 146 observations on the following 8 variables.
pgtime   time to progression or last follow-up (years)
pgstat   1 = progression observed, 0 = censored
age      age in years
eet      early endocrine therapy, 1 = no, 2 = yes
g2       percent of cells in G2 phase, as found by flow cytometry
grade    grade of the tumor, Farrow system
gleason  grade of the tumor, Gleason system
ploidy   the ploidy status of the tumor, from flow cytometry. Values are ‘diploid’, ‘tetraploid’, and ‘aneuploid’

Details
A tumor is called diploid (normal complement of dividing cells) if the fraction of cells in G2 phase was determined to be 13% or less. Aneuploid cells have a measurable fraction with a chromosome count that is neither 24 nor 48; for these the G2 percent is difficult or impossible to measure.

Examples
require(survival)
rpart(Surv(pgtime, pgstat) ~ ., stagec)

summary.rpart                    Summarize a Fitted Rpart Object

Description
Returns a detailed listing of a fitted rpart object.

Usage
## S3 method for class 'rpart'
summary(object, cp = 0, digits = getOption("digits"), file, ...)

Arguments
object  fitted model object of class "rpart". This is assumed to be the result of some function that produces an object with the same named components as that returned by the rpart function.
digits  number of significant digits to be used in the result.
cp      trim nodes with a complexity of less than cp from the listing.
file    write the output to a given file name. (Full listings of a tree are often quite long.)
...     arguments to be passed to or from other methods.

Details
This function is a method for the generic function summary for class "rpart". It can be invoked by calling summary for an object of the appropriate class, or directly by calling summary.rpart regardless of the class of the object. It prints the call, the table shown by printcp, the variable importance (summing to 100) and details for each node (the details depending on the type of tree).

See Also
summary, rpart.object, printcp.
Examples
## a regression tree
z.auto <- rpart(Mileage ~ Weight, car.test.frame)
summary(z.auto)
## a classification tree with multiple variables and surrogate splits.
summary(rpart(Kyphosis ~ Age + Number + Start, data = kyphosis))

text.rpart                    Place Text on a Dendrogram Plot

Description
Labels the current plot of the tree dendrogram with text.

Usage
## S3 method for class 'rpart'
text(x, splits = TRUE, label, FUN = text, all = FALSE,
     pretty = NULL, digits = getOption("digits") - 3, use.n = FALSE,
     fancy = FALSE, fwidth = 0.8, fheight = 0.8, bg = par("bg"),
     minlength = 1L, ...)

Arguments
x          fitted model object of class "rpart". This is assumed to be the result of some function that produces an object with the same named components as that returned by the rpart function.
splits     logical flag. If TRUE (default), then the splits in the tree are labeled with the criterion for the split.
label      for compatibility with rpart2; ignored in this version (with a warning).
FUN        the name of a labeling function, e.g. text.
all        logical. If TRUE, all nodes are labeled, otherwise just terminal nodes.
minlength  the length to use for factor labels. A value of 1 causes them to be printed as ‘a’, ‘b’, .... Larger values use abbreviations of the label names. See the labels.rpart function for details.
pretty     an alternative to the minlength argument, see labels.rpart.
digits     number of significant digits to include in numerical labels.
use.n      logical. If TRUE, adds to the label (#events level1/ #events level2/ etc. for class, n for anova, and #events/n for poisson and exp).
fancy      logical. If TRUE, nodes are represented by ellipses (interior nodes) and rectangles (leaves) and labeled by yval. The edges connecting the nodes are labeled by left and right splits.
fwidth     relates to option fancy and the width of the ellipses and rectangles. If fwidth < 1 then it is a scaling factor (default = 0.8). If fwidth > 1 then it represents the number of character widths (for current graphical device) to use.
fheight    relates to option fancy and the height of the ellipses and rectangles. If fheight < 1 then it is a scaling factor (default = 0.8). If fheight > 1 then it represents the number of character heights (for current graphical device) to use.
bg         the color used to paint the background to annotations if fancy = TRUE.
...        graphical parameters may also be supplied as arguments to this function (see par). As labels often extend outside the plot region it can be helpful to specify xpd = TRUE.

Side Effects
The current plot of a tree dendrogram is labeled.

See Also
text, plot.rpart, rpart, labels.rpart, abbreviate

Examples
freen.tr <- rpart(y ~ ., freeny)
par(xpd = TRUE)
plot(freen.tr)
text(freen.tr, use.n = TRUE, all = TRUE)

xpred.rpart                    Return Cross-Validated Predictions

Description
Gives the predicted values for an rpart fit, under cross-validation, for a set of complexity parameter values.

Usage
xpred.rpart(fit, xval = 10, cp, return.all = FALSE)

Arguments
fit         an object of class "rpart".
xval        number of cross-validation groups. This may also be an explicit list of integers that define the cross-validation groups.
cp          the desired list of complexity values. By default it is taken from the cptable component of the fit.
return.all  if FALSE return only the first element of the prediction.

Details
Complexity penalties are actually ranges, not values. If the cp values found in the table were .36, .28, and .13, for instance, this means that the first row of the table holds for all complexity penalties in the range [.36, 1], the second row for cp in the range [.28, .36) and the third row for [.13, .28). By default, the geometric mean of each interval is used for cross-validation.

Value
A matrix with one row for each observation and one column for each complexity value. If return.all is TRUE and the prediction for each node is a vector, then the result will be an array containing all of the predictions.
When the response is categorical, for instance, the result contains the predicted class followed by the class probabilities of the selected terminal node; result[1,,] will be the matrix of predicted classes, result[2,,] the matrix of class 1 probabilities, etc.

See Also
rpart

Examples
fit <- rpart(Mileage ~ Weight, car.test.frame)
xmat <- xpred.rpart(fit)
xerr <- (xmat - car.test.frame$Mileage)^2
apply(xerr, 2, sum)   # cross-validated error estimate
# approx same result as rel. error from printcp(fit)
apply(xerr, 2, sum) / var(car.test.frame$Mileage)
printcp(fit)
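The geometric-mean rule described in the Details of xpred.rpart above can be checked with a few lines of arithmetic. This is an illustrative sketch, not part of the rpart documentation; the cp values .36/.28/.13 are the hypothetical table values from that Details section:

```r
## Worked illustration of the cp intervals described in xpred.rpart's Details.
## The table values .36, .28 and .13 define the ranges [.36, 1], [.28, .36)
## and [.13, .28); by default the geometric mean of each range's endpoints
## is the cp value actually used for cross-validation.
cp    <- c(0.36, 0.28, 0.13)
upper <- c(1, cp[-length(cp)])   # upper endpoint of each range
sqrt(cp * upper)                 # geometric means: 0.6, ~0.318, ~0.191
```

The same computation applied to a fitted tree's cptable gives the cp values at which xpred.rpart evaluates the cross-validated predictions by default.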
Package ‘fstcore’                    January 12, 2023

Type Package
Title R Bindings to the 'Fstlib' Library
Description The 'fstlib' library provides multithreaded serialization of compressed data frames using the 'fst' format. The 'fst' format allows for random access of stored data and compression with the 'LZ4' and 'ZSTD' compressors.
Version 0.9.14
Date 2023-01-11
Depends R (>= 3.0.0)
Imports Rcpp
LinkingTo Rcpp
SystemRequirements little-endian platform
RoxygenNote 7.2.3
Suggests testthat, lintr
License MPL-2.0 | file LICENSE
Encoding UTF-8
Copyright This package includes sources from the LZ4 library owned by Yann Collet, sources of the ZSTD library owned by Facebook, Inc., sources of the libdivsufsort-lite library owned by Yuta Mori and sources of the fstlib library owned by <NAME>
URL https://www.fstpackage.org/fstcore/
BugReports https://github.com/fstpackage/fst/issues
NeedsCompilation yes
Author <NAME> [aut, cre, cph] (<NAME> is author of the fstcore package and author and copyright holder of the bundled 'fstlib' code), Yuta Mori [ctb, cph] (Yuta Mori is author and copyright holder of the bundled 'libdivsufsort-lite' code, part of 'ZSTD'), <NAME> [ctb, cph] (Przemyslaw Skibinski is author and copyright holder of the bundled price functions, part of 'ZSTD'), <NAME> [ctb, cph] (<NAME> is author and copyright holder of bundled sources from the 'zstdmt' library, part of 'ZSTD'), Yann Collet [ctb, cph] (Yann Collet is author of the bundled 'LZ4' and 'ZSTD' code and copyright holder of 'LZ4'), Facebook, Inc. [cph] (Bundled 'ZSTD' code)
Maintainer <NAME> <mark<EMAIL>>
Repository CRAN
Date/Publication 2023-01-12 09:00:12 UTC

R topics documented:
fstcore-package
lib_versions
threads_fstlib

fstcore-package                    R bindings to the fstlib library

Description
R package fstcore contains R bindings to the C++ fstlib library which allows interfacing with fst files. It also contains the LZ4 and ZSTD sources used for compression.
fstcore exists as a package separate from the fst package to facilitate independent updates to the fstlib, LZ4 and ZSTD libraries and is used as a backend to fst.

Details
The fstcore package is based on six C++ libraries or components:
• fstlib: library containing code to write, read and compute on files stored in the fst format. Written and owned by <NAME>.
• LZ4: library containing code to compress data with the LZ4 compressor. Written and owned by <NAME>.
• ZSTD: library containing code to compress data with the ZSTD compressor. Written by Yann Collet and owned by Facebook, Inc.
• libdivsufsort-lite: a lightweight suffix array construction algorithm. This code is bundled with the ZSTD compression library. Written and owned by <NAME>.
• sources from zstdmt: a multithreading library for Brotli, Lizard, LZ4, LZ5 and Zstandard. This code is bundled with the ZSTD library. Written and owned by <NAME>.
• source file zstd_opt.h: price functions for the optimal parser. Written and owned by <NAME> and <NAME>. This code is bundled with the ZSTD library.

The following copyright notice, list of conditions and disclaimer apply to the use of the ZSTD library in the fstcore package:

BSD License
For Zstandard software
Copyright (c) 2016-present, Facebook, Inc. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
• Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
• Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
• Neither the name Facebook nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

The following copyright notice, list of conditions and disclaimer apply to the use of the LZ4 library in the fstcore package:

LZ4 Library
Copyright (c) 2011-2016, <NAME>
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
• Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
• Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

The following copyright notice, list of conditions and disclaimer apply to the use of the fstlib library in the fstcore package:

fstlib - A C++ library for ultra fast storage and retrieval of datasets
Copyright (C) 2017-present, <NAME>
This file is part of fstlib.
This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at https://mozilla.org/MPL/2.0/.
https://www.mozilla.org/en-US/MPL/2.0/FAQ/

The following copyright notice, list of conditions and disclaimer apply to the use of the libdivsufsort-lite library in the fstcore package:

Copyright (c) 2003-2008 <NAME> All Rights Reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

The following copyright notice, list of conditions and disclaimer apply to the use of sources from the zstdmt library included in the fstcore package as part of the ZSTD library:

Copyright (c) 2016 <NAME>
All rights reserved.
This source code is licensed under both the BSD-style license (found in the LICENSE file in the root directory of this source tree) and the GPLv2 (found in the COPYING file in the root directory of this source tree).
You can contact the author at:
• zstdmt source repository: https://github.com/mcmilk/zstdmt

The following copyright notice, list of conditions and disclaimer apply to the use of zstd_opt.h included in the fstcore package as part of the ZSTD library:

Copyright (c) 2016-present, <NAME>, <NAME>, Facebook, Inc. All rights reserved.
This source code is licensed under the BSD-style license found in the LICENSE file in the root directory of https://github.com/facebook/zstd. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License version 2 as published by the Free Software Foundation. This program is dual-licensed; you may select either version 2 of the GNU General Public License ("GPL") or the BSD license ("BSD").

lib_versions                    Display versioning information of fstcore library dependencies

Description
Display versioning information of fstcore library dependencies.

Usage
lib_versions()

Value
a list with library versions

threads_fstlib                    Get or set the number of threads used in parallel operations

Description
For parallel operations, the performance is determined to a great extent by the number of threads used.
More threads will allow the CPU to perform more computationally intensive tasks simultaneously, speeding up the operation. Using more threads also introduces some overhead that will scale with the number of threads used. Therefore, using the maximum number of available threads is not always the fastest solution. With threads_fstlib the number of threads can be adjusted to the user's specific requirements. As a default, fstcore uses a number of threads equal to the number of logical cores in the system.

Usage
threads_fstlib(nr_of_threads = NULL, reset_after_fork = NULL)

Arguments
nr_of_threads     number of threads to use or NULL to get the current number of threads used in multithreaded operations.
reset_after_fork  when fstcore is running in a forked process, the usage of OpenMP can create problems. To prevent these, fstcore switches back to single core usage when it detects a fork. After the fork, the number of threads is reset to its initial setting. However, on some compilers (e.g. Intel), switching back to multi-threaded mode can lead to issues. When reset_after_fork is set to FALSE, fstcore is left in single-threaded mode after the fork ends. After the fork, multithreading can be activated again manually by calling threads_fstlib with an appropriate value for nr_of_threads. The default (reset_after_fork = NULL) leaves the fork behavior unchanged.

Details
The number of threads can also be set with options(fst_threads = N). NOTE: This option is only read when the package's namespace is first loaded, with commands like library, require, or ::. If you have already used one of these, you must use threads_fstlib to set the number of threads.

Value
the number of threads (previously) used

Examples
# get current number of threads
threads_fstlib()

# set the number of threads
threads_fstlib(12)

# leave in single threaded mode after a fork
threads_fstlib(12, reset_after_fork = FALSE)

# reset number of threads after a fork
threads_fstlib(12, reset_after_fork = TRUE)
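The namespace-load caveat in the Details of threads_fstlib is easy to trip over, so a short sketch of the two ways of setting the thread count may help. This is an illustrative sketch based on the Details text above, not an example from the package documentation:

```r
# The fst_threads option is only honored if it is set *before* the
# fstcore namespace is first loaded:
options(fst_threads = 4)   # must come before library(fstcore)
library(fstcore)
threads_fstlib()           # reports the number of threads currently in use

# Once the namespace is loaded, changing the option has no effect;
# call threads_fstlib() directly instead:
options(fst_threads = 8)   # ignored from here on
threads_fstlib(8)          # this is the way to change it after load
```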
Crate lexer_rs
===

Lexer library
---

This library provides a generic mechanism for parsing data into streams of tokens. This is commonly used in human-readable language compilers and interpreters, to convert from a text stream into values that can then be parsed according to the grammar of that language.

A simple example would be for a calculator that operates on a stream of numbers and mathematical symbols; the first step of processing that the calculator must do is to convert the text stream into abstract tokens such as ‘the number 73’ and ‘the plus sign’. Once the calculator has such tokens it can piece them together into a real expression that it can then evaluate.

### Basic concept

The basic concept of a lexer is to convert a stream of (e.g.) char into a stream of ‘Token’ - which will be specific to the lexer. The lexer starts at the beginning of the text, and moves through consuming characters into tokens.

### Lexer implementations

A lexer is not difficult to implement, and there are many alternative approaches to doing so. A very simple approach for a String would be to have a loop that matches the start of the string with possible token values (perhaps using a regular expression), and on finding a match it can ‘trim’ the front of the String, yield the token, and then loop again.

This library provides an implementation option that gives the ability to provide good error messages when things go wrong; it provides a trait that allows abstraction of the lexer from the consumer (so that one can get streams of tokens from a String, a BufRead, etc.); it provides the infrastructure for any lexer using a simple mechanism for parsing tokens.

Positions in files
---

The crate provides some mechanisms for tracking the position of parsing within a stream, so that error messages can be appropriately crafted for the end user. Tracking the position as a minimum is following the byte offset within the file; additionally the line number and column number can also be tracked.
As Rust utilizes UTF8-encoded strings, not all byte offsets correspond to actual chars in a stream, and the column separation between two characters is not the difference between their byte offsets. So traits are provided to manage positions within streams, and to help with reporting them. The bare minimum, though, does not require tracking of lines and columns; only the byte offset tracking *has* to be used. The Lexer is therefore generic on a stream position type: this must be lightweight as it is moved around and copied frequently, and must be static.

Tokens
---

The token type that the Lexer produces from its parsing is supplied by the client; this is normally a simple enumeration. The parsing is managed by the Lexer with the client providing a slice of matching functions; each matching function is applied in turn, and the first that returns an Ok of a Some of a token yields the token and advances the parsing state. The parsers can generate an error if they detect a real error in the stream (not just a mismatch to their token type).

Error reporting
---

With the file position handling used within the Lexer it is possible to display contextual error information - so if the whole text is retained by the Lexer then an error can be displayed with the text from the source with the error point/region highlighted. Support for this is provided by the FmtContext trait, which is implemented particularly for LexerOfString.

Structs
---

* LexerOfStr: A Lexer of a str, using an arbitrary stream position type, lexer token, and lexer error.
* LexerOfString: This provides a type that wraps an allocated String, and which tracks the lines within the string. It then provides a method to create a LexerOfStr that borrows the text, and which can then be used as a crate::Lexer.
* LineColumn: A line and column within a text stream
* ParserIterator: An iterator over a Lexer presenting the parsed Tokens from it
* SimpleParseError: A simple implementation of a type supporting LexerError
* StreamCharPos: This provides the byte offset of a character within a stream, with an associated position that might also accurately provide line and column numbers of the position
* StreamCharSpan: This provides a span between two byte offsets within a stream; the start and end have an associated position that might also accurately provide line and column numbers

Traits
---

* CharStream: The CharStream trait allows a stream of char to provide extra methods
* FmtContext: This trait is provided by types that wish to support context for (e.g.) error messages
* Lexer: The Lexer trait is provided by stream types that support parsing into tokens.
* LexerError: A trait required of an error within a Lexer - a char that does not match any token parser must return an error, and this trait requires that such an error be provided
* PosnInCharStream: Trait for location within a character stream
* UserPosn: Trait for location within a stream

Type Aliases
---

* BoxDynLexerParseFn: The type of a parse function, when Boxed as a dyn trait
* LexerParseFn: The type of a parse function
* LexerParseResult: The return value for a Lexer parse function

Trait lexer_rs::FmtContext
===

```
pub trait FmtContext<P> {
    // Required methods
    fn line_length(&self, line: usize) -> usize;
    fn fmt_line(&self, f: &mut dyn Write, line: usize) -> Result;

    // Provided methods
    fn fmt_context_single_line(
        &self,
        f: &mut dyn Write,
        start: &P,
        num_cols: usize
    ) -> Result
       where P: UserPosn { ... }
    fn fmt_context_multiple_lines(
        &self,
        f: &mut dyn Write,
        start: &P,
        end: &P
    ) -> Result
       where P: UserPosn { ... }
    fn fmt_context(&self, fmt: &mut dyn Write, start: &P, end: &P) -> Result
       where P: UserPosn { ... }
}
```

This trait is provided by types that wish to support context for (e.g.) error messages

It requires the type to have the ability to map from a line number to a position within the file/stream/text of the type, and to provide the length of any specific line number.

With those supplied methods, the trait provides the ‘fmt_context’ method, which outputs to a formatter (which can be an &mut String even) the lines of the text ahead of a provided span of start and end positions within the stream.

Currently the format of the context is fixed - the number of lines ahead is fixed at a maximum of four, the lines are always numbered with a line number of up to 4 digits, and so on.
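To make the two required methods concrete, here is a minimal sketch of the shape they take, implemented over a hypothetical in-memory `Text` holder. The `Text` struct and its line storage are assumptions for illustration only; lexer_rs itself implements this trait for LexerOfString.

```rust
use std::fmt::Write;

// Hypothetical line-indexed text holder, standing in for LexerOfString.
struct Text {
    lines: Vec<String>,
}

impl Text {
    // Shape of FmtContext::line_length: length of the given line, counted
    // in characters so that context markers line up per column.
    fn line_length(&self, line: usize) -> usize {
        self.lines.get(line).map_or(0, |l| l.chars().count())
    }

    // Shape of FmtContext::fmt_line: write the line's text verbatim,
    // preserving column numbers.
    fn fmt_line(&self, f: &mut dyn Write, line: usize) -> std::fmt::Result {
        write!(f, "{}", self.lines[line])
    }
}

fn main() {
    let t = Text {
        lines: vec!["let x = 1;".to_string(), "x += y;".to_string()],
    };
    assert_eq!(t.line_length(1), 7);
    let mut s = String::new();
    t.fmt_line(&mut s, 0).unwrap();
    assert_eq!(s, "let x = 1;");
    println!("ok");
}
```

With these two supplied, the provided `fmt_context` methods can number the lines and highlight a span without any further help from the implementing type.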
Required Methods
---

#### fn line_length(&self, line: usize) -> usize

Return the length of the specified line

#### fn fmt_line(&self, f: &mut dyn Write, line: usize) -> Result

Format the line of text (potentially with coloring and so on). This formatting must preserve the column numbers of characters if context markers are to line up correctly

Provided Methods
---

#### fn fmt_context_single_line(&self, f: &mut dyn Write, start: &P, num_cols: usize) -> Result where P: UserPosn,

Format a line of text with highlight on certain columns

#### fn fmt_context_multiple_lines(&self, f: &mut dyn Write, start: &P, end: &P) -> Result where P: UserPosn,

Format multiple lines of text, highlighting certain lines

#### fn fmt_context(&self, fmt: &mut dyn Write, start: &P, end: &P) -> Result where P: UserPosn,

Format text with highlighting between start and end

This is the main method used by clients of the trait

##### Examples found in repository

examples/calc.rs (line 171)

```
fn main() -> Result<(), String> {
    let args: Vec<String> = env::args().collect();
    if args.len() < 2 {
        return Err(format!("Usage: {} <expression>", args[0]));
    }
    let args_as_string = args[1..].join(" ");
    let c = CalcTokenParser::new();
    let l = LexerOfString::default().set_text(args_as_string);
    let ts = l.lexer();
    // let ts = TextStream::new(&args_as_string);

    println!("Parsing");
    let tokens = c.iter(&ts);
    for t in tokens {
        let t = {
            match t {
                Err(e) => {
                    println!();
                    let mut s = String::new();
                    l.fmt_context(&mut s, &e.pos, &e.pos).unwrap();
                    eprintln!("{}", s);
                    return Err(format!("{}", e));
                }
                Ok(t) => t,
            }
        };
        print!("{}", t);
    }
    println!();
    println!("Text parsed okay");
    Ok(())
}
```

examples/simple.rs (line 225)

```
fn main() -> Result<(), String> {
    let args: Vec<String> = env::args().collect();
    if args.len() < 2 {
        return Err(format!("Usage: {} <expression>", args[0]));
    }
    let args_as_string = args[1..].join(" ");

    let mut parsers = ParserVec::new();
    parsers.add_parser(|a, b, c| LexToken::parse_whitespace(a, b, c));
    parsers.add_parser(|a, b, c| LexToken::parse_comment_line(a, b, c));
    parsers.add_parser(|a, b, c| LexToken::parse_digits(a, b, c));
    parsers.add_parser(|a, b, c| LexToken::parse_char(a, b, c));

    let l = LexerOfString::default().set_text(args_as_string);
    let ts = l.lexer();
    let tokens = ts.iter(&parsers.parsers);

    println!("Parsing");
    for t in tokens {
        let t = {
            match t {
                Err(e) => {
                    println!();
                    let mut s = String::new();
                    l.fmt_context(&mut s, &e.pos, &e.pos).unwrap();
                    eprintln!("{}", s);
                    return Err(format!("{}", e));
                }
                Ok(t) => t,
            }
        };
        println!("{:?}", t);
    }
    println!();
    println!("Text parsed okay");
    Ok(())
}
```

Implementors
---

### impl<P, T, E> FmtContext<P> for LexerOfString<P, T, E> where P: PosnInCharStream, E: LexerError<P>,

Struct lexer_rs::LineColumn
===

```
pub struct LineColumn { /* private fields */ }
```

A line and column within a text stream

This provides the UserPosn trait, which provides methods to retrieve the line and column values of the state.

Trait Implementations
---

### impl Clone for LineColumn

#### fn clone(&self) -> LineColumn
Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

#### fn default() -> Self
Returns the “default value” for a type.

#### fn fmt(&self, fmt: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.

#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`.
#### fn hash_slice<H>(data: &[Self], state: &mut H) where H: Hasher, Self: Sized,
Feeds a slice of this type into the given `Hasher`.

#### fn eq(&self, other: &LineColumn) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.

### impl UserPosn for LineColumn

#### fn line(&self) -> usize
Return the line number (if supported, else 0)

#### fn column(&self) -> usize
Return the column number (if supported, else 0)

#### fn advance_cols(self, _: usize, num_chars: usize) -> Self
Advance the state of the stream by a number of bytes and a number of characters; the characters are guaranteed to *not* be newlines

#### fn advance_line(self, _num_bytes: usize) -> Self
Advance the state of the stream by a number of bytes and to the start of the next line

### impl Eq for LineColumn
### impl StructuralEq for LineColumn
### impl StructuralPartialEq for LineColumn

Auto Trait Implementations
---

### impl RefUnwindSafe for LineColumn
### impl Send for LineColumn
### impl Sync for LineColumn
### impl Unpin for LineColumn
### impl UnwindSafe for LineColumn

Blanket Implementations
---

### impl<T> Any for T where T: 'static + ?Sized,

#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for T where T: ?Sized,

#### fn borrow(&self) -> &T
Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for T where T: ?Sized,

#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.

### impl<T> From<T> for T

#### fn from(t: T) -> T
Returns the argument unchanged.

### impl<T, U> Into<U> for T where U: From<T>,

#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> ToOwned for T where T: Clone,

#### type Owned = T
The resulting type after obtaining ownership.

#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut Self::Owned)
Uses borrowed data to replace owned data, usually by cloning.

### impl<T> ToString for T where T: Display + ?Sized,

#### default fn to_string(&self) -> String
Converts the given value to a `String`.

### impl<T, U> TryFrom<U> for T where U: Into<T>,

#### type Error = Infallible
The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>,

#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.

Struct lexer_rs::SimpleParseError
===

```
pub struct SimpleParseError<P>
where
    P: UserPosn,
{
    pub ch: char,
    pub pos: P,
}
```

A simple implementation of a type supporting LexerError

An error in parsing a token

P : UserPosn

Fields
---

`ch: char` The character which could not be matched to a token
`pos: P` The position of the character in the stream

Trait Implementations
---

### impl<P> Clone for SimpleParseError<P> where P: UserPosn + Clone,

#### fn clone(&self) -> SimpleParseError<P>
Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.

### impl<P> Debug for SimpleParseError<P> where P: UserPosn + Debug,

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl<P> Display for SimpleParseError<P> where P: UserPosn,

#### fn fmt(&self, fmt: &mut Formatter<'_>) -> Result<(), Error>
Formats the value using the given formatter.

### impl<P> Error for SimpleParseError<P> where P: UserPosn,

#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.

#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()

#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting

#### fn provide<'a>(&'a self, request: &mut Request<'a>)
🔬 This is a nightly-only experimental API.
(`error_generic_member_access`) Provides type-based access to context intended for error reports.

### impl<P> LexerError<P> for SimpleParseError<P> where P: UserPosn,

#### fn failed_to_parse(pos: P, ch: char) -> Self
Return an error indicating that a bad character (could not be matched for a token) has occurred at the position indicated by the state

### impl<P> PartialEq<SimpleParseError<P>> for SimpleParseError<P> where P: UserPosn + PartialEq,

#### fn eq(&self, other: &SimpleParseError<P>) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.

### impl<P> Eq for SimpleParseError<P> where P: UserPosn + Eq,
### impl<P> StructuralEq for SimpleParseError<P> where P: UserPosn,
### impl<P> StructuralPartialEq for SimpleParseError<P> where P: UserPosn,

Auto Trait Implementations
---

### impl<P> RefUnwindSafe for SimpleParseError<P> where P: RefUnwindSafe,
### impl<P> Send for SimpleParseError<P> where P: Send,
### impl<P> Sync for SimpleParseError<P> where P: Sync,
### impl<P> Unpin for SimpleParseError<P> where P: Unpin,
### impl<P> UnwindSafe for SimpleParseError<P> where P: UnwindSafe,

Blanket Implementations
---

### impl<T> Any for T where T: 'static + ?Sized,

#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for T where T: ?Sized,

#### fn borrow(&self) -> &T
Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for T where T: ?Sized,

#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.

### impl<T> From<T> for T

#### fn from(t: T) -> T
Returns the argument unchanged.

### impl<T, U> Into<U> for T where U: From<T>,

#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> ToOwned for T where T: Clone,

#### type Owned = T
The resulting type after obtaining ownership.

#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut Self::Owned)
Uses borrowed data to replace owned data, usually by cloning.

### impl<T> ToString for T where T: Display + ?Sized,

#### default fn to_string(&self) -> String
Converts the given value to a `String`.

### impl<T, U> TryFrom<U> for T where U: Into<T>,

#### type Error = Infallible
The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>,

#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.

Trait lexer_rs::CharStream
===

```
pub trait CharStream<P> {
    // Required methods
    fn do_while<F: Fn(usize, char) -> bool>(
        &self,
        state: P,
        ch: char,
        f: &F
    ) -> (P, Option<(P, usize)>);
    fn range_as_bytes(&self, ofs: usize, n: usize) -> &[u8];
    fn matches_bytes(&self, state: &P, s: &[u8]) -> bool;
    fn get_text_span(&self, span: &StreamCharSpan<P>) -> &str
       where P: PosnInCharStream;
    fn get_text(&self, start: P, end: P) -> &str;
    fn matches_str(&self, pos: &P, pat: &str) -> bool;
    fn peek_at(&self, state: &P) -> Option<char>;
    fn consumed(&self, state: P, num_chars: usize) -> P;

    // Provided methods
    fn consumed_char(&self, state: P, ch: char) -> P
       where P: PosnInCharStream { ... }
    unsafe fn consumed_newline(&self, state: P, num_bytes: usize) -> P
       where P: PosnInCharStream { ... }
    fn consumed_ascii_str(&self, state: P, s: &str) -> P
       where P: PosnInCharStream { ... }
    unsafe fn consumed_chars(
        &self,
        state: P,
        num_bytes: usize,
        num_chars: usize
    ) -> P
       where P: PosnInCharStream { ... }
    fn commit_consumed(&self, _up_to: &P) { ...
    }
}
```

The CharStream trait allows a stream of char to provide extra methods

Requires P : PosnInCharStream

Required Methods
---

#### fn do_while<F: Fn(usize, char) -> bool>(&self, state: P, ch: char, f: &F) -> (P, Option<(P, usize)>)

Steps along the stream starting at the provided state (and character) while the provided function returns true; the function is provided with the index and character (starting at 0 / ch), and it returns true if the token continues, otherwise false.

If the first invocation of ‘f’ returns false then the token is said to not match, and ‘do_while’ returns the stream state and Ok(None). If the first N (more than zero) invocations match then the result is the stream state after the matched characters, and Some(initial state, N).

This can be used to match whitespace (where N is probably discarded), or user ‘id’ values in a language. The text can be retrieved with the ‘get_text’ method.

#### fn range_as_bytes(&self, ofs: usize, n: usize) -> &[u8]

Retrieve a range of bytes from the stream

#### fn matches_bytes(&self, state: &P, s: &[u8]) -> bool

Return true if the content of the stream at ‘state’ matches the byte slice

#### fn get_text_span(&self, span: &StreamCharSpan<P>) -> &str where P: PosnInCharStream,

Get the text between the start of a span (inclusive) and the end of the span (exclusive).

#### fn get_text(&self, start: P, end: P) -> &str

Get the text between the start (inclusive) and the end (exclusive).
#### fn matches_str(&self, pos: &P, pat: &str) -> bool

Match the text at the offset with a str; return true if it matches, else false

#### fn peek_at(&self, state: &P) -> Option<char>

Peek at the next character in the stream, returning None if the state is the end of the stream

#### fn consumed(&self, state: P, num_chars: usize) -> P

Move the stream state forward by the specified number of characters

Provided Methods
---

#### fn consumed_char(&self, state: P, ch: char) -> P where P: PosnInCharStream,

Get a stream state after consuming the specified (non-newline) character at its current state

##### Examples found in repository

examples/calc.rs (line 85)

```
fn parse_char_fn(stream: &TextStream, state: TextPos, ch: char) -> CalcLexResult {
    if let Some(t) = {
        match ch {
            '+' => Some(CalcToken::Op(CalcOp::Plus)),
            '-' => Some(CalcToken::Op(CalcOp::Minus)),
            '*' => Some(CalcToken::Op(CalcOp::Times)),
            '/' => Some(CalcToken::Op(CalcOp::Divide)),
            '(' => Some(CalcToken::Open),
            ')' => Some(CalcToken::Close),
            _ => None,
        }
    } {
        Ok(Some((stream.consumed_char(state, ch), t)))
    } else {
        Ok(None)
    }
}
```

examples/simple.rs (line 56)

```
pub fn parse_char<L>(
    stream: &L,
    state: L::State,
    ch: char,
) -> LexerParseResult<P, Self, L::Error>
where
    L: CharStream<P>,
    L: Lexer<Token = Self, State = P>,
{
    let pos = state;
    match ch {
        '\n' => Ok(Some((stream.consumed(state, 1), Self::Newline(pos)))),
        '(' | '[' | '{' => Ok(Some((stream.consumed(state, 1), Self::OpenBra(pos, ch)))),
        ')' | ']' | '}' => Ok(Some((stream.consumed(state, 1), Self::CloseBra(pos, ch)))),
        ch => Ok(Some((stream.consumed_char(state, ch), Self::Char(pos, ch)))),
    }
}
```

#### unsafe fn consumed_newline(&self, state: P, num_bytes: usize) -> P where P: PosnInCharStream,

Get a stream state after consuming a newline at its current state

##### Safety

num_bytes *must* correspond to
the number of bytes that the newline character consists of, and state *must* point to the byte offset of that character

#### fn consumed_ascii_str(&self, state: P, s: &str) -> P where P: PosnInCharStream,

Get a stream state after consuming a particular ASCII string without newlines at its current state.

This is safe as there is no unsafe handling of byte offsets within *state*; however, there is no check that the provided string is ASCII and that it does not contain newlines. If these API rules are broken then the line and column held by *state* may be incorrect (which is not *unsafe*, but potentially a bug)

#### unsafe fn consumed_chars(&self, state: P, num_bytes: usize, num_chars: usize) -> P where P: PosnInCharStream,

Get a stream state after consuming a particular string of known character length

##### Safety

num_bytes *must* correspond to the number of bytes that ‘num_chars’ indicates start at *state*. If this constraint is not met then the byte offset indicated by the returned value may not correspond to a UTF8 character boundary within the stream.
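The reason num_bytes and num_chars are tracked separately is that UTF8 characters occupy one to four bytes each, so byte offsets and column counts advance at different rates over non-ASCII text. A quick sketch using only std (no lexer_rs types):

```rust
// Returns (byte length, character count) for a str; for ASCII the two are
// equal, so consuming non-ASCII text must advance them separately.
fn byte_and_char_counts(s: &str) -> (usize, usize) {
    (s.len(), s.chars().count())
}

fn main() {
    // 'é' is 2 bytes but one char: advancing over "café" adds 5 bytes to
    // the byte offset but only 4 columns.
    assert_eq!(byte_and_char_counts("café"), (5, 4));
    assert_eq!(byte_and_char_counts("cafe"), (4, 4));
    println!("ok");
}
```

This is why the safety contracts above insist that num_bytes be a sum of len_utf8 values for the consumed characters.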
#### fn commit_consumed(&self, _up_to: &P)

Invoked by the Lexer to indicate that the stream has been consumed up to a certain point, and that (for parsing) no state earlier in the stream will be requested in the future.

A truly streaming source can drop earlier data in the stream if this fits the application.

Implementors
---

### impl<'a, P, T, E> CharStream<P> for LexerOfStr<'a, P, T, E> where P: PosnInCharStream,

Trait lexer_rs::LexerError
===

```
pub trait LexerError<P>: Sized + Error {
    // Required method
    fn failed_to_parse(state: P, ch: char) -> Self;
}
```

A trait required of an error within a Lexer - a char that does not match any token parser must return an error, and this trait requires that such an error be provided

It might be nice to have this take the Lexer too, but then there is a cycle in that Lexer::Error will in general depend on Lexer which depends on Lexer::Error… This breaks code (and the compiler tends to hang forever)

Required Methods
---

#### fn failed_to_parse(state: P, ch: char) -> Self

Return an error indicating that a bad character (could not be matched for a token) has occurred at the position indicated by the state

Implementors
---

### impl<P> LexerError<P> for SimpleParseError<P> where P: UserPosn,

Trait lexer_rs::PosnInCharStream
===

```
pub trait PosnInCharStream: UserPosn {
    // Required method
    fn byte_ofs(&self) -> usize;
}
```

Trait for location within a character stream

This tracks a byte offset within the stream so that strings can be retrieved from the stream.
Byte offsets *must* always be on UTF8 boundaries.

Required Methods
---

#### fn byte_ofs(&self) -> usize

Return the byte offset into the stream of the position. This must *always* be a UTF8 character boundary.

Implementations on Foreign Types
---

### impl PosnInCharStream for usize

#### fn byte_ofs(&self) -> usize

Implementors
---

### impl<P> PosnInCharStream for StreamCharPos<P> where P: UserPosn,

Trait lexer_rs::UserPosn
===

```
pub trait UserPosn: Sized + Debug + Copy + Default + PartialEq + Eq + Hash {
    // Provided methods
    fn advance_cols(self, _num_bytes: usize, _num_chars: usize) -> Self { ... }
    fn advance_line(self, _num_bytes: usize) -> Self { ... }
    fn line(&self) -> usize { ... }
    fn column(&self) -> usize { ... }
    fn error_fmt(&self, fmt: &mut Formatter<'_>) -> Result<(), Error> { ... }
}
```

Trait for location within a stream

This base trait is used to enable tracking the position of a token parser within a stream in a manner that is useful for human-readable error messages.

A simple implementation can be null, if the position is not critical for error messages for the token parser - for example, parsing a simple string in a test. For a single file implementation see crate::LineColumn

Provided Methods
---

#### fn advance_cols(self, _num_bytes: usize, _num_chars: usize) -> Self

Advance the state of the stream by a number of bytes and a number of characters; the characters are guaranteed to *not* be newlines.

For character streams (where num_bytes is not the same as num_chars) this *must* only be invoked to move on to a new UTF8 character boundary - hence num_bytes must be a (sum of) len_utf8 values for the text at the byte offset of self.
#### fn advance_line(self, _num_bytes: usize) -> Self

Advance the state of the stream by a number of bytes and to the start of the next line.

For character streams this *must* only be invoked to move on to a new UTF8 character boundary - hence num_bytes must be a (sum of) len_utf8 values for the text at the byte offset of self, the last character of which is a newline.

#### fn line(&self) -> usize

Return the line number (if supported, else 0)

#### fn column(&self) -> usize

Return the column number (if supported, else 0)

#### fn error_fmt(&self, fmt: &mut Formatter<'_>) -> Result<(), Error>

Format self for an error - this can be the same format as Display (if implemented), or Debug, or whatever is desired.

It is required for a Lexer to generate a fail-to-parse-character error

Implementations on Foreign Types
---

### impl UserPosn for usize

#### fn advance_cols(self, byte_ofs: usize, _num_chars: usize) -> Self
#### fn advance_line(self, byte_ofs: usize) -> Self

### impl UserPosn for ()

Implementors
---

### impl UserPosn for LineColumn
### impl<P> UserPosn for StreamCharPos<P> where P: UserPosn,

Type Alias lexer_rs::BoxDynLexerParseFn
===

```
pub type BoxDynLexerParseFn<'a, L> = Box<dyn for<'call> Fn(&'call L, <L as Lexer>::State, char) -> LexerParseResult<<L as Lexer>::State, <L as Lexer>::Token, <L as Lexer>::Error> + 'a>;
```

The type of a parse function, when Boxed as a dyn trait

This type can be used in arrays/slices to allow a Lexer to run through a list of possible token parsers such as:

```
let parsers = [
    Box::new(parse_char_fn) as BoxDynLexerParseFn<OurLexer>,
    Box::new(parse_value_fn),
    Box::new(parse_whitespace_fn),
];
```

Note that the use of ‘as Box…’ is required, as without it type inference will kick in on the Box::new() to infer parse_char_fn as a precise type, whereas the more generic dyn Fn is what is required.
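The type-inference point can be reproduced with plain std types; without the `as` cast on the first element, the array's element type would be inferred from the concrete fn item and the remaining entries would fail to unify. `BoxedPred`, `is_even` and `is_positive` below are illustrative names, not lexer_rs items:

```rust
// The boxed dyn-Fn alias plays the role of BoxDynLexerParseFn.
type BoxedPred = Box<dyn Fn(i32) -> bool>;

fn is_even(x: i32) -> bool { x % 2 == 0 }
fn is_positive(x: i32) -> bool { x > 0 }

fn main() {
    // The `as BoxedPred` on the first entry fixes the array's element
    // type; the later entries then coerce to the same dyn type.
    let preds = [
        Box::new(is_even) as BoxedPred,
        Box::new(is_positive),
    ];
    let matches = preds.iter().filter(|p| p(6)).count();
    assert_eq!(matches, 2);
    println!("ok");
}
```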
Aliased Type
---

```
struct BoxDynLexerParseFn<'a, L>(/* private fields */);
```

Trait Implementations
---

### impl<T, A> Deref for Box<T, A> where A: Allocator, T: ?Sized,

#### type Target = T
The resulting type after dereferencing.

#### fn deref(&self) -> &T
Dereferences the value.

Type Alias lexer_rs::LexerParseFn
===

```
pub type LexerParseFn<L> = fn(lexer: &L, _: <L as Lexer>::State, _: char) -> LexerParseResult<<L as Lexer>::State, <L as Lexer>::Token, <L as Lexer>::Error>;
```

The type of a parse function

Type Alias lexer_rs::LexerParseResult
===

```
pub type LexerParseResult<S, T, E> = Result<Option<(S, T)>, E>;
```

The return value for a Lexer parse function

This *could* have been defined as:

```
pub type LexerParseResult<L: Lexer> = Result<Option<(<L as Lexer>::State, <L as Lexer>::Token)>, <L as Lexer>::Error>;
```

But then clients that have their type L with a lifetime (which is common) would have a parse result that must be indicated by a lifetime, where the actual result *does not*. This causes problems for clients.

Aliased Type
---

```
enum LexerParseResult<S, T, E> {
    Ok(Option<(S, T)>),
    Err(E),
}
```

Variants
---

### Ok(Option<(S, T)>)
Contains the success value

### Err(E)
Contains the error value
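A driver loop distinguishes three outcomes from a parse function returning this type. A sketch over one concrete instantiation of the alias (the `usize` state, `char` token, and `String` error are illustrative choices):

```rust
// Concrete instantiation of LexerParseResult<S, T, E>.
type ParseResult = Result<Option<(usize, char)>, String>;

// Ok(Some(..)) = token matched; Ok(None) = this parser does not match,
// so try the next one; Err(..) = a real error in the stream.
fn classify(r: &ParseResult) -> &'static str {
    match r {
        Ok(Some((_state, _token))) => "token",
        Ok(None) => "no match",
        Err(_) => "error",
    }
}

fn main() {
    assert_eq!(classify(&Ok(Some((1, '+')))), "token");
    assert_eq!(classify(&Ok(None)), "no match");
    assert_eq!(classify(&Err("bad char".to_string())), "error");
    println!("ok");
}
```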
Lumache Release 0.1
Graziella
Jan 11, 2022

CONTENTS

1.1 Initial Setup
1.2 Querying Data Model
1.3 Querying Building-Specific Data

This package provides Python bindings to Onboard Data’s building data API, allowing easy and lightweight access to building data. For example, we can retrieve the last week of temperature data from all Zone Temperature points associated with FCUs in the Laboratory building:

import pandas as pd
import pytz
from datetime import datetime, timezone, timedelta
from onboard.client import OnboardClient
from onboard.client.models import PointSelector, TimeseriesQuery, PointData
from onboard.client.dataframes import points_df_from_streaming_timeseries

client = OnboardClient(api_key='your-api-key-here')

query = PointSelector()
query.point_types = ['Zone Temperature']  # can list multiple
query.equipment_types = ['fcu']
query.buildings = ['Laboratory']
selection = client.select_points(query)

start = datetime.now(pytz.utc) - timedelta(days=7)
end = datetime.now(pytz.utc)

timeseries_query = TimeseriesQuery(point_ids=selection['points'], start=start, end=end)
sensor_data = points_df_from_streaming_timeseries(client.stream_point_timeseries(timeseries_query))

For installation instructions, and to get set up with API access, refer to Initial Setup.

Note: While we are committed to backwards-compatibility, this project is under active development. If you discover a feature that would be helpful, or any unexpected behavior, please contact us at <EMAIL>

CHAPTER ONE

CONTENTS

1.1 Initial Setup

1.1.1 Installation

The Python Onboard API client is available to install through pip:

pip install onboard.client

or by cloning our GitHub repo:

git clone <EMAIL>:onboard-data/client-py

Please note, the client requires Python >= 3.7.

1.1.2 Setting up API access

You’ll need an active API Key with the appropriate scopes in order to use this python client.
If you are an existing Onboard user you can head over to the accounts page and generate a new key and grant scopes for “general” and “buildings:read”. If you would like to get access to Onboard and start prototyping against an example building please request access here.

You can test if your API key is working with the following code:

>>> from onboard.client import OnboardClient
>>> client = OnboardClient(api_key='your-api-key-here')
>>>
>>> # Verify access & connectivity
>>> client.whoami()
{'result': 'ok', 'apiKeyInHeader': True, ... 'authLevel': 4}

You can also retrieve a list of your currently authorized scopes with client.whoami()['apiKeyScopes'].

1.2 Querying Data Model

Onboard’s data model contains both equipment types (e.g. fans, air handling units) and point types (e.g. zone temperature). We can query the full data model within our API.

1.2.1 Equipment types

First, we make an API call with client.get_equipment_types(). This returns a JSON object, which we will convert to a dataframe using Pandas:

>>> from onboard.client import OnboardClient
>>> client = OnboardClient(api_key='')
>>> import pandas as pd

>>> # Get all equipment types from the Data Model
>>> equip_type = pd.json_normalize(client.get_equipment_types())
>>> equip_type.columns
['id', 'tag_name', 'name_long', 'name_abbr', 'active', 'flow_order',
 'critical_point_types', 'sub_types', 'tags']

equip_type now contains a dataframe listing all equipment types in our data model, along with associated attributes (e.g. tags, full names, associated point types, and sub-equipment types). The sub-equipment types are nested as dataframes within each row, and can be listed for an equipment type (e.g.
‘fan’) like so: >>> sub_type = pd.DataFrame(equip_type[equip_type.tag_name == 'fan']['sub_types'].item()) id equipment_type_id tag_name name_long name_abbr 0 12 26 exhaustFan Exhaust Fan EFN 1 13 26 reliefFan Relief Fan RlFN 2 14 26 returnFan Return Fan RFN 3 15 26 supplyFan Supply Fan SFN ... Note that not all equipment types have associated sub types. 1.2.2 Point types Accessing point types is very similar, and can be accessed through client.get_all_point_types(): >>> # Get all point types from the Data Model >>> point_type = pd.DataFrame(client.get_all_point_types()) point_type now contains a dataframe listing all the tags associated with each point type. We can extract the metadata associated with each tag in our data model like so: >>> # Get all tags and their definitions from the Data Model >>> pd.DataFrame(client.get_tags()) id name definition def_source ␣ ˓→ def_url 0 120 battery A container that stores chemical energy that c... brick ␣ ˓→https://brickschema.org/ontology/1.1/classes/B... 1 191 exhaustVAV A device that regulates the volume of air bein... onboard ␣ ˓→ None (continues on next page) Lumache, Release 0.1 (continued from previous page) 2 193 oil A viscous liquid derived from petroleum, espec... brick ␣ ˓→https://brickschema.org/ontology/1.2/classes/Oil/ 3 114 fumeHood A fume-collection device mounted over a work s... brick ␣ ˓→https://brickschema.org/ontology/1.1/classes/F... ... This returns a dataframe containing definitions for all tags in our data model, with attribution where applicable. 1.3 Querying Building-Specific Data 1.3.1 Querying Equipment Using the API, we can retrieve the data from all the buildings that belong to our organization: >>> # Get a list of all the buildings under your Organization >>> pd.json_normalize(client.get_all_buildings()) id org_id name ... point_count info.note info 0 66 6 T`Challa House ... 81 NaN 1 427 6 Office Building ... 4219 NaN NaN 2 428 6 Laboratory ... 2206 NaN NaN 3 429 6 Residential ... 
4394        NaN   NaN

The first column of this dataframe (‘id’) contains the building identifier number. In order to retrieve the equipment for a particular building (e.g. Laboratory, id: 428), we use client.get_building_equipment():

>>> # Get a list of all equipment in a building
>>> all_equipment = pd.DataFrame(client.get_building_equipment(428))
>>> all_equipment[['id', 'building_id', 'equip_id', 'points', 'tags']]
      id  building_id        equip_id                                             points                     tags
0  27293          428      crac-T-105  [{'id': 291731, 'building_id': 428, 'last_upda...             [crac, hvac]
1  27294          428   exhaustFan-01  [{'id': 290783, 'building_id': 428, 'last_upda...  [fan, hvac, exhaustFan]
2  27295          428  exhaustFan-021  [{'id': 289684, 'building_id': 428, 'last_upda...  [fan, hvac, exhaustFan]
3  27296          428  exhaustFan-022  [{'id': 289655, 'building_id': 428, 'last_upda...  [fan, hvac, exhaustFan]
...

1.3.2 Querying Specific Points

In order to query specific points, first we need to import the PointSelector class:

>>> # Set parameters for querying sensor data
>>> from onboard.client.models import PointSelector
>>> query = PointSelector()

There are multiple ways to select points using the PointSelector. The user can select all the points that are associated with one or more lists containing any of the following:

'organizations', 'buildings', 'point_ids', 'point_names', 'point_hashes',
'point_topics', 'equipment', 'equipment_types'

For example, here we make a query that returns all the points of the type ‘Real Power’ OR of the type ‘Zone Temperature’ that belong to the ‘Laboratory’ building:

>>> query = PointSelector()
>>> query.point_types = ['Real Power', 'Zone Temperature']
>>> query.buildings = ['Laboratory']
>>> selection = client.select_points(query)

We can add to our query to e.g.
further require that returned points must be associated with the ‘fcu’ equipment type:

>>> query = PointSelector()
>>> query.point_types = ['Real Power', 'Zone Temperature']
>>> query.equipment_types = ['fcu']
>>> query.buildings = ['Laboratory']
>>> selection = client.select_points(query)
>>> selection
{'buildings': [428],
 'equipment': [27356, 27357],
 'equipment_types': [9],
 'orgs': [6],
 'point_types': [77],
 'points': [289701, 289575]}

In this example, the points with ID=289701 and 289575 are the only ones that satisfy the requirements of our query. We can get more information about these points by calling the function get_points_by_ids() on selection['points']:

>>> # Get Metadata for the sensors you would like to query
>>> sensor_metadata = client.get_points_by_ids(selection['points'])
>>> sensor_metadata_df = pd.DataFrame(sensor_metadata)
>>> sensor_metadata_df[['id', 'building_id', 'first_updated', 'last_updated', 'type', 'value', 'units']]
       id  building_id  first_updated  last_updated              type  value              units
0  289575          428   1.626901e+12  1.641928e+12  Zone Temperature   66.0  degreesFahrenheit
1  289701          428   1.626901e+12  1.641928e+12  Zone Temperature   61.0  degreesFahrenheit

sensor_metadata_df now contains a dataframe with rows for each point. Based on the information about these points, we can observe that none of the points in our list belongs to the point type ‘Real Power’, but only to the point type ‘Zone Temperature’.

1.3.3 Exporting Data to .csv

Data extracted using the API can be exported to a .csv or excel file using Pandas:

>>> # Save Metadata to .csv file
>>> sensor_metadata_df.to_csv('~/metadata_query.csv')

1.3.4 Querying Time-Series Data

To query time-series data first we need to import modules from datetime, models and dataframes.
>>> from datetime import datetime, timezone, timedelta
>>> import pytz
>>> from onboard.client.models import TimeseriesQuery, PointData
>>> from onboard.client.dataframes import points_df_from_streaming_timeseries

We select the range of dates we want to query, in UTC format:

>>> # Enter Start & End Time Stamps in UTC
>>> # Example "2018-06-03T12:00:00Z"
>>>
>>> # get data from the past week
>>> start = datetime.now(pytz.utc) - timedelta(days=7)
>>> end = datetime.now(pytz.utc)

Now we are ready to query the time-series data for the points we previously selected in the specified time period:

>>> # Get time series data for the sensors you would like to query
>>> timeseries_query = TimeseriesQuery(point_ids=selection['points'], start=start, end=end)
>>> sensor_data = points_df_from_streaming_timeseries(client.stream_point_timeseries(timeseries_query))
>>> sensor_data
                     timestamp  289575  289701
0  2022-01-04T19:34:11.741000Z    68.0    None
1  2022-01-04T19:34:19.143000Z    None    62.0
2  2022-01-04T19:35:12.133000Z    68.0    None
...

This returns a dataframe containing columns for the timestamp and for each requested point. Here, we set the timestamp as the index and forward fill the data for plotting:

>>> sensor_data_clean = sensor_data.set_index('timestamp').astype(float).ffill()
>>>
>>> # Edit the indexes just for visualization purposes
>>> indexes = [i.split('T')[0] for i in list(sensor_data_clean.index)]
>>> sensor_data_clean.index = indexes
>>>
>>> fig = sensor_data_clean.plot(figsize=(15, 8), fontsize=12)
>>>
>>> # Adding some formatting
>>> fig.set_ylabel('Fahrenheit', fontdict={'fontsize': 15})
>>> fig.set_xlabel('time stamp', fontdict={'fontsize': 15})

CHAPTER TWO

LICENSE

Copyright 2018-2022 Onboard Data Inc

Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
one\_time\_pass\_ecto v1.1.1 API Reference
===

Modules
---

[OneTimePassEcto](OneTimePassEcto.html) Module to handle one-time passwords, usually for use in two factor authentication

[OneTimePassEcto.Base](OneTimePassEcto.Base.html) Generate and verify HOTP and TOTP one-time passwords

OneTimePassEcto
===

Module to handle one-time passwords, usually for use in two factor authentication.

One-time password options
---

There are the following options for the one-time passwords:

* HMAC-based one-time passwords
  + `:token_length` - the length of the one-time password - the default is 6
  + `:last` - the count when the one-time password was last used - this count needs to be stored server-side
  + `:window` - the number of future attempts allowed - the default is 3
* Time-based one-time passwords
  + `:token_length` - the length of the one-time password - the default is 6
  + `:interval_length` - the length of each timed interval - the default is 30 (seconds)
  + `:window` - the number of attempts, before and after the current one, allowed - the default is 1 (1 interval before and 1 interval after)
* Both HOTP and TOTP
  + `:otp_secret` - name of the Ecto field holding the secret (default :otp\_secret)
  + `:otp_last` - name of the Ecto field holding the last value (default :otp\_last)

See the documentation for the OneTimePassEcto.Base module for more details about generating and verifying one-time passwords.

Implementation details
---

The following notes provide details about how this module implements the verification of one-time passwords.

It is important not to allow the one-time password to be reused within the timeframe that it is valid. For TOTPs, one method of preventing reuse is to compare the output of check\_totp (the `last` value) with the previous output. The output should be greater than the previous `last` value.
In the case of HOTPs, it is important that the database is locked from the time the `last` value is checked until the `last` value is updated.

[Link to this section](#summary)

Summary
===

[Functions](#functions)
---

[verify(params, repo, user\_schema, opts \\ [])](#verify/4) Check the one-time password, and return {:ok, user} if the one-time password is correct or {:error, message} if there is an error

[Link to this section](#functions)

Functions
===

Check the one-time password, and return {:ok, user} if the one-time password is correct or {:error, message} if there is an error.

After this function has been called, you need to either add the user to the session, by running `put_session(conn, :user_id, id)`, or send an API token to the user.

See the `One-time password options` in this module's documentation for available options to be used as the second argument to this function.

OneTimePassEcto.Base
===

Generate and verify HOTP and TOTP one-time passwords.

Module to generate and check HMAC-based one-time passwords and time-based one-time passwords, in accordance with [RFC 4226](https://tools.ietf.org/html/rfc4226) and [RFC 6238](https://tools.ietf.org/html/rfc6238).

Two factor authentication
---

These one-time passwords are often used together with regular passwords to provide two factor authentication (2FA), which forms a layered approach to user authentication. The advantage of 2FA over just using passwords is that an attacker would face an additional challenge to being authorized.
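The reuse-prevention rule described in the Implementation details above — accept a one-time password only if its counter is strictly greater than the stored `last` value — can be sketched in a few lines. This is a simplified Python illustration of the idea, not the library's Elixir implementation; the store and function names are hypothetical:

```python
# Hypothetical store: maps a user id to the interval counter of the last
# accepted one-time password (the module's "last" value).
def accept_otp(store, user_id, otp_counter):
    """Accept the OTP only if its counter is strictly newer than `last`."""
    if otp_counter <= store.get(user_id, -1):
        return False              # replay within the validity window: reject
    store[user_id] = otp_counter  # persist the new "last" value
    return True

store = {}
```

For HOTPs the same comparison applies, with the added requirement (noted above) that the check-and-update of `last` happens under a database lock so two concurrent logins cannot both pass.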
[Link to this section](#summary)

Summary
===

[Functions](#functions)
---

[check\_hotp(token, secret, opts \\ [])](#check_hotp/3) Verify a HMAC-based one-time password

[check\_totp(token, secret, opts \\ [])](#check_totp/3) Verify a time-based one-time password

[gen\_hotp(secret, count, opts \\ [])](#gen_hotp/3) Generate a HMAC-based one-time password

[gen\_secret(secret\_length \\ 16)](#gen_secret/1) Generate a secret key to be used with one-time passwords

[gen\_totp(secret, opts \\ [])](#gen_totp/2) Generate a time-based one-time password

[valid\_token(token, token\_length)](#valid_token/2) Check the one-time password is valid

[Link to this section](#functions)

Functions
===

Verify a HMAC-based one-time password.

There are three options:

* `:token_length` - the length of the one-time password
  + the default is 6
* `:last` - the count when the one-time password was last used
  + this count needs to be stored server-side
* `:window` - the number of future attempts allowed
  + the default is 3

Verify a time-based one-time password.

There are three options:

* `:token_length` - the length of the one-time password
  + the default is 6
* `:interval_length` - the length of each timed interval
  + the default is 30 (seconds)
* `:window` - the number of attempts, before and after the current one, allowed
  + the default is 1 (1 interval before and 1 interval after)
  + you might need to increase this window to allow for clock skew on the server

Generate a HMAC-based one-time password.

Note that the `count` (2nd argument) should be a positive integer.

There is one option:

* `:token_length` - the length of the one-time password
  + the default is 6

Generate a secret key to be used with one-time passwords.

By default, this function creates a 16 character base32 (80-bit) string, which is compatible with Google Authenticator. It is also possible to generate 26 character (128-bit) and 32 character (160-bit) secret keys.
RFC 4226 secret key length recommendations
---

According to RFC 4226, the secret key length must be at least 128 bits long, and the recommended length is 160 bits.

Generate a time-based one-time password.

There are two options:

* `:token_length` - the length of the one-time password
  + the default is 6
* `:interval_length` - the length of each timed interval
  + the default is 30 (seconds)

Check the one-time password is valid.

The one-time password should be at least 6 characters long, and it should be a string which only contains numeric values.
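For reference, the HOTP computation that gen\_hotp and check\_hotp implement (RFC 4226: HMAC-SHA-1 over a big-endian counter, followed by dynamic truncation) can be sketched in Python; this is a stdlib illustration of the algorithm, not the library's Elixir code, and the secret below is the RFC's own test-vector key:

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, count: int, token_length: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA-1 over the 8-byte big-endian counter,
    then dynamic truncation to a token_length-digit decimal code."""
    digest = hmac.new(secret, struct.pack(">Q", count), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # low nibble selects a 4-byte window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** token_length).zfill(token_length)

# RFC 4226 Appendix D test vector: count 0 -> "755224"
print(hotp(b"12345678901234567890", 0))
```

A TOTP (RFC 6238) is then just `hotp(secret, unix_time // interval_length)`, which is where the `:interval_length` and `:window` options above come in: the window widens the set of accepted counters to absorb clock skew.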
Package ‘graphhopper’
October 13, 2022

Title An R Interface to the 'GraphHopper' Directions API
Version 0.1.2
Date 2021-02-06
Maintainer <NAME> <<EMAIL>>
Description Provides a quick and easy access to the 'GraphHopper' Directions API. 'GraphHopper' <https://www.graphhopper.com/> itself is a routing engine based on 'OpenStreetMap' data. API responses can be converted to simple feature (sf) objects in a convenient way.
License MIT + file LICENSE
Encoding UTF-8
LazyData true
Imports magrittr, httr, googlePolylines, jsonlite, tibble, dplyr
Suggests sf, geojsonsf, ggplot2, testthat
RoxygenNote 6.1.1
URL https://github.com/crazycapivara/graphhopper-r
BugReports https://github.com/crazycapivara/graphhopper-r/issues
NeedsCompilation no
Author <NAME> [aut, cre]
Repository CRAN
Date/Publication 2021-02-06 16:50:02 UTC

R topics documented:
gh_as_sf
gh_available_spt_columns
gh_bbox
gh_get_info
gh_get_isochrone
gh_get_route
gh_get_routes
gh_get_spt
gh_instructions
gh_points
gh_set_api_url
gh_spt_as_linestrings_sf
gh_spt_columns
gh_time_distance

gh_as_sf    Convert a gh object into an sf object

Description
Convert a gh object into an sf object

Usage
gh_as_sf(data, ...)
## S3 method for class 'gh_route'
gh_as_sf(data, ..., geom_type = c("linestring", "point"))
## S3 method for class 'gh_spt'
gh_as_sf(data, ...)
## S3 method for class 'gh_isochrone'
gh_as_sf(data, ...)

Arguments
data       A gh_route or gh_spt object.
...        ignored
geom_type  Use geom_type = point to return the points of the route with ids corresponding to the instruction ids.
Examples
if (FALSE) {
  start_point <- c(52.592204, 13.414307)
  end_point <- c(52.539614, 13.364868)
  route_sf <- gh_get_route(list(start_point, end_point)) %>%
    gh_as_sf()
}

gh_available_spt_columns    Get a vector with available columns of the spt endpoint

Description
Get a vector with available columns of the spt endpoint

Usage
gh_available_spt_columns()

gh_bbox    Extract the bounding box from a gh object

Description
Extract the bounding box from a gh object

Usage
gh_bbox(data)
## S3 method for class 'gh_route'
gh_bbox(data)
## S3 method for class 'gh_info'
gh_bbox(data)

Arguments
data  A gh_route or gh_info object.

gh_get_info    Get information about the GraphHopper instance

Description
Get information about the GraphHopper instance

Usage
gh_get_info()

Examples
if (FALSE) {
  info <- gh_get_info()
  message(info$version)
  message(info$data_date)
  print(gh_bbox(info))
}

gh_get_isochrone    Get isochrones for a given start point

Description
Get isochrones for a given start point

Usage
gh_get_isochrone(start_point, time_limit = 180, distance_limit = -1, ...)

Arguments
start_point     The start point as (lat, lon) pair.
time_limit      The travel time limit in seconds. Ignored if distance_limit > 0.
distance_limit  The distance limit in meters.
...             Additional parameters. See https://docs.graphhopper.com/#operation/getIsochrone.

Examples
if (FALSE) {
  start_point <- c(52.53961, 13.36487)
  isochrone_sf <- gh_get_isochrone(start_point, time_limit = 180) %>%
    gh_as_sf()
}

gh_get_route    Get a route for a given set of points

Description
Get a route for a given set of points

Usage
gh_get_route(points, ..., response_only = FALSE)

Arguments
points         A list of 2 or more points as (lat, lon) pairs.
...            Optional parameters that are passed to the query.
response_only  Whether to return the raw response object instead of just its content.

See Also
https://docs.graphhopper.com/#tag/Routing-API for optional parameters.
Examples
if (FALSE) {
  start_point <- c(52.592204, 13.414307)
  end_point <- c(52.539614, 13.364868)
  route_sf <- gh_get_route(list(start_point, end_point)) %>%
    gh_as_sf()
}

gh_get_routes    Get multiple routes

Description
Internally it just calls gh_get_route several times. See also gh_get_spt.

Usage
gh_get_routes(x, y, ..., callback = NULL)

Arguments
x         A single start point as (lat, lon) pair
y         A matrix or a data frame containing columns with latitudes and longitudes that are used as endpoints. Needs (lat, lon) order.
...       Parameters that are passed to gh_get_route.
callback  A callback function that is applied to every calculated route.

Examples
if (FALSE) {
  start_point <- c(52.519772, 13.392334)
  end_points <- rbind(
    c(52.564665, 13.42083),
    c(52.564456, 13.342724),
    c(52.489261, 13.324871),
    c(52.48738, 13.454647)
  )
  time_distance_table <- gh_get_routes(
    start_point, end_points,
    calc_points = FALSE,
    callback = gh_time_distance
  ) %>%
    dplyr::bind_rows()
  routes_sf <- gh_get_routes(start_point, end_points, callback = gh_as_sf) %>%
    do.call(rbind, .)
}

gh_get_spt    Get the shortest path tree for a given start point

Description
Get the shortest path tree for a given start point

Usage
gh_get_spt(start_point, time_limit = 600, distance_limit = -1, columns = gh_spt_columns(), reverse_flow = FALSE, profile = "car")

Arguments
start_point     The start point as (lat, lon) pair.
time_limit      The travel time limit in seconds. Ignored if distance_limit > 0.
distance_limit  The distance limit in meters.
columns         The columns to be returned. See gh_spt_columns and gh_available_spt_columns for available columns.
reverse_flow    Use reverse_flow = TRUE to change the flow direction.
profile         The profile for which the spt should be calculated.
Examples
if (FALSE) {
  start_point <- c(52.53961, 13.36487)
  columns <- gh_spt_columns(
    prev_longitude = TRUE,
    prev_latitude = TRUE,
    prev_time = TRUE
  )
  points_sf <- gh_get_spt(start_point, time_limit = 180, columns = columns) %>%
    gh_as_sf()
}

gh_instructions    Extract the instructions from a gh route object

Description
Extract the instructions from a gh route object

Usage
gh_instructions(data, instructions_only = FALSE)

Arguments
data               A gh_route object.
instructions_only  Whether to return the instructions without the corresponding points.

See Also
gh_get_route

gh_points    Extract the points from a gh route object

Description
Extract the points from a gh route object

Usage
gh_points(data)

Arguments
data  A gh_route object.

gh_set_api_url    Set gh API base url

Description
Set gh API base url

Usage
gh_set_api_url(api_url)

Arguments
api_url  API base url

Note
Internally it calls Sys.setenv to store the API url in an environment variable called GH_API_URL.

Examples
gh_set_api_url("http://localhost:8989")

gh_spt_as_linestrings_sf    Build lines from a gh spt object

Description
Build lines from a gh spt object

Usage
gh_spt_as_linestrings_sf(data)

Arguments
data  A gh_spt object.

Examples
if (FALSE) {
  start_point <- c(52.53961, 13.36487)
  columns <- gh_spt_columns(
    prev_longitude = TRUE,
    prev_latitude = TRUE,
    prev_time = TRUE
  )
  lines_sf <- gh_get_spt(start_point, time_limit = 180, columns = columns) %>%
    gh_spt_as_linestrings_sf()
}

gh_spt_columns    Select the columns to be returned by a spt request

Description
Times are returned in milliseconds and distances in meters.

Usage
gh_spt_columns(longitude = TRUE, latitude = TRUE, time = TRUE, distance = TRUE, prev_longitude = FALSE, prev_latitude = FALSE, prev_time = FALSE, prev_distance = FALSE, node_id = FALSE, prev_node_id = FALSE, edge_id = FALSE, prev_edge_id = FALSE)

Arguments
longitude, latitude  The longitude, latitude of the node.
time, distance       The travel time, distance to the node.
prev_longitude, prev_latitude  The longitude, latitude of the previous node.
prev_time, prev_distance       The travel time, distance to the previous node.
node_id, prev_node_id          The ID of the node, previous node.
edge_id, prev_edge_id          The ID of the edge, previous edge.

gh_time_distance    Extract time and distance from a gh route object

Description
Extract time and distance from a gh route object

Usage
gh_time_distance(data)

Arguments
data  A gh_route object.
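Under the hood, the package's functions issue HTTP requests (via httr) against the GraphHopper instance configured with gh_set_api_url. A rough Python sketch of how such a routing request URL is assembled — repeated `point=lat,lon` query parameters plus a profile, per the GraphHopper Routing API docs; no request is actually sent, and the exact parameter set the R client emits is an assumption here:

```python
from urllib.parse import urlencode

def route_url(api_url: str, points, profile: str = "car") -> str:
    """Build a GET /route URL: each (lat, lon) pair becomes one
    repeated `point=lat,lon` query parameter."""
    params = [("point", f"{lat},{lon}") for lat, lon in points]
    params.append(("profile", profile))
    return f"{api_url.rstrip('/')}/route?" + urlencode(params)

url = route_url("http://localhost:8989",
                [(52.592204, 13.414307), (52.539614, 13.364868)])
```

This mirrors the (lat, lon) ordering the R functions require for their point arguments, and the `urlencode` call percent-encodes the comma in each coordinate pair.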
Date: 2013-03-03

# Critical Evaluation of Record-Playback Automation Tools

This book aims to teach the basics of record-and-playback automation to manual testers who wish to learn automation testing. It covers two simple record-and-playback automation tools: Selenium IDE and Sahi. This critical evaluation was carried out in 2010, so recent versions of Selenium IDE and Sahi may behave slightly differently, but the concepts remain the same. If you are interested in learning, send me an email and I will send you a free copy of this ebook.

This book will cover

* Basics of Test Automation
* Getting Started with Selenium IDE
* Getting Started with Sahi
* Critical Evaluation of Selenium IDE & Sahi

* Preface
* Introduction
* Book Objectives
* Requirement
* Tools Comparison Criteria
* Literature Review
* Test Automation Context
* Manual Vs Automation Testing
* Testing Vs Test Automation
* Tool Support for Testing
* Software Test Automation
* Role of Test Tool
* Introducing a Tool Within an Organization
* Selenium IDE
* Selenium Working
* Sahi
* Sahi Working
* Installation
* Configuration
* Critical Evaluation of Tools
* Evaluation Criteria
* SWOT Analysis of Selenium & Sahi
* Selenium: Critical Success Factor
* SWOT Analysis of Sahi
* Sahi: Critical Success Factor
* Conclusion
* Bibliography

### The Leanpub 60-day 100% Happiness Guarantee

Within 60 days of purchase you can get a 100% refund on any Leanpub purchase, in two clicks. Now, this is technically risky for us, since you'll have the book or course files either way. But we're so confident in our products and services, and in our authors and readers, that we're happy to offer a full money back guarantee for everything we sell. You can only find out how good something is by trying it, and because of our 100% money back guarantee there's literally no risk to do so! So, there's no reason not to click the Add to Cart button, is there? See full terms...
Package ‘snapshot’
October 14, 2022

Type Package
Title Gadget N-body cosmological simulation code snapshot I/O utilities
Version 0.1.2
Date 2013-10-04
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Functions for reading and writing Gadget N-body snapshots. The Gadget code is popular in astronomy for running N-body / hydrodynamical cosmological and merger simulations. To find out more about Gadget see the main distribution page at www.mpa-garching.mpg.de/gadget/
License GPL-2
Depends R (>= 2.13)
NeedsCompilation no
Repository CRAN
Date/Publication 2013-10-22 16:50:41

R topics documented:
snapshot-package
addhead
genparam
snapread
snapwrite

snapshot-package    Gadget N-body cosmological simulation code snapshot I/O utilities

Description
Functions for reading and writing Gadget N-body snapshots. The Gadget code is popular in astronomy for running N-body / hydrodynamical cosmological and merger simulations. To find out more about Gadget see the main distribution page at www.mpa-garching.mpg.de/gadget/

Details
Package: snapshot
Type: Package
Version: 0.1.2
Date: 2013-10-04
License: GPL-2

Author(s)
<NAME>
Maintainer: <NAME> <<EMAIL>>

Examples
## Not run:
temp = snapread('snapshot_XXX')
temp$part[, 'x'] = temp$part[, 'x'] + 10
snapwrite(temp$part, temp$head, 'snapshot_XXX_mod')
## End(Not run)

addhead    Add header information to particle data

Description
Function to add required header information to a Gadget read particle dataframe. This has sensible defaults for a small galaxy merger style simulation.

Usage
addhead(part, Npart = 2, Massarr = 0, Time = 0, z = 0, FlagSfr = 0, FlagFeedback = 0, FlagCooling = 0, BoxSize = 0, OmegaM = 0, OmegaL = 0, h = 1, FlagAge = 0, FlagMetals = 0, NallHW = 0, flag_entr_ics = 0)

Arguments
part  Strictly speaking ’part’ is passed through the function, but to make this a useful object ’part’ should be a data.frame containing the main particle level information.
Columns required are:
ID particle ID
x x position in units of Mpc
y y position in units of Mpc
z z position in units of Mpc
vx x velocity in units of km/s
vy y velocity in units of km/s
vz z velocity in units of km/s
Mass particle mass in units of Msun

Npart  The index on the Npart vector that should contain the particle number, where: gas [1] / collisionless particles [2:6]. The actual value is calculated based on the part data.frame provided with ’part’. Nall is also calculated based on this number and not given as an option, since the same index as Npart must be used.
Massarr  The mass of the particles in the particle index provided to Npart
Time  Time of snapshot in units of km/s and kpc, so 1 unit is ~10 Gyrs
z  Redshift of snapshot
FlagSfr  Star formation turned on/off
FlagFeedback  Feedback turned on/off
FlagCooling  Cooling turned on/off
BoxSize  Size of simulation box edge length in units of kpc
OmegaM  Omega matter of the simulation
OmegaL  Omega lambda of the simulation
h  Hubble constant divided by 100 used in the simulation
FlagAge  Stellar ages on/off
FlagMetals  Stellar metallicities on/off
NallHW  Tell Gadget to use large integers in the particle index provided to Npart - not usually necessary
flag_entr_ics  Entropy for gas on/off

Details
Nall is calculated based on Npart, and therefore it cannot be specified via an input argument. This increases the likelihood that a legal Gadget header will be produced.

Value
part  Strictly speaking ’part’ is passed through the function, but to make this a useful object ’part’ should be a data.frame containing the main particle level information. Assuming ’part’ has been given a sensible input, columns provided are:
ID particle ID
x x position in units of Mpc
y y position in units of Mpc
z z position in units of Mpc
vx x velocity in units of km/s
vy y velocity in units of km/s
vz z velocity in units of km/s
Mass particle mass in units of Msun
head  A list containing various header information as list elements.
These are:
Npart Vector of length 6 containing the number of particles in this snapshot file, where: gas [1] / collisionless particles [2:6]
Massarr Vector of length 6 containing the particle masses for the respective particle types in Npart
Time Time of snapshot in units of km/s and kpc, so 1 unit is ~10 Gyrs
z Redshift of snapshot
FlagSfr Star formation turned on/off
Nall Vector of length 6 containing the number of particles in all snapshot files, where: gas [1] / collisionless particles [2:6]
FlagFeedback Feedback turned on/off
FlagCooling Cooling turned on/off
NumFiles Number of files per snapshot - usually 1
BoxSize Size of simulation box edge length in units of kpc
OmegaM Omega matter of the simulation
OmegaL Omega lambda of the simulation
h Hubble constant divided by 100 used in the simulation
FlagAge Stellar ages on/off
FlagMetals Stellar metallicities on/off
NallHW Tells Gadget to use large integers for the respective particle types in Npart - not usually necessary
flag_entr_ics Entropy for gas on/off

Author(s)
<NAME>

See Also
snapwrite, snapread, genparam

Examples
## Not run:
tempadd=addhead(temp$part)
## End(Not run)

genparam Generates a Gadget parameter file

Description
Function to generate a legal Gadget parameter setup file. This has a sensible selection of defaults chosen for fairly small (non-cosmological) simulations.
Usage
genparam(ParamFile = "galaxy.param", ParamBase = "./HernTest/", InitCondFile = "./HernStart.gdt", OutputDir = "./HernTest/", EnergyFile = "energy.txt", InfoFile = "info.txt", TimingsFile = "timings.txt", CpuFile = "cpu.txt", RestartFile = "restart", SnapshotFileBase = "snapshot", OutputListFilename = "parameterfiles/output_list.txt", TimeLimitCPU = 36000, ResubmitOn = 0, ResubmitCommand = "my-scriptfile", ICFormat = 1, SnapFormat = 1, ComovingIntegrationOn = 0, TypeOfTimestepCriterion = 0, OutputListOn = 0, PeriodicBoundariesOn = 0, TimeBegin = 0, TimeMax = 0.001, Omega0 = 0, OmegaLambda = 0, OmegaBaryon = 0, HubbleParam = 1, BoxSize = 0, TimeBetSnapshot = 1e-05, TimeOfFirstSnapshot = 0, CpuTimeBetRestartFile = 36000, TimeBetStatistics = 0.05, NumFilesPerSnapshot = 1, NumFilesWrittenInParallel = 1, ErrTolIntAccuracy = 0.025, CourantFac = 0.3, MaxSizeTimestep = 0.1, MinSizeTimestep = 0, ErrTolTheta = 0.5, TypeOfOpeningCriterion = 1, ErrTolForceAcc = 0.005, TreeDomainUpdateFrequency = 0.1, DesNumNgb = 32, MaxNumNgbDeviation = 8, ArtBulkViscConst = 1, InitGasTemp = 0, MinGasTemp = 100, PartAllocFactor = 3.0, TreeAllocFactor = 4.8, BufferSize = 25, UnitLength_in_cm = 3.085678e+21, UnitMass_in_g = 1.989e+43, UnitVelocity_in_cm_per_s = 1e+05, GravityConstantInternal = 0, MinGasHsmlFractional = 0.25, SofteningGas = 1e-04, SofteningHalo = 1e-04, SofteningDisk = 0.4, SofteningBulge = 0.8, SofteningStars = 0, SofteningBndry = 0.1, SofteningGasMaxPhys = 1e-04, SofteningHaloMaxPhys = 1e-04, SofteningDiskMaxPhys = 0.4, SofteningBulgeMaxPhys = 0.8, SofteningStarsMaxPhys = 0, SofteningBndryMaxPhys = 0.1, MaxRMSDisplacementFac = 0.2, NFWConcentration = 10, VirialMass = 200, FlatRadius = 1e-05, DeltaVir = 200, addNFW = FALSE)

Arguments
ParamFile Name for the parameter file
ParamBase Base file path for the parameter file
InitCondFile Full path of file containing initial conditions
OutputDir Base directory in which to put the major Gadget outputs, including snapshots etc
EnergyFile Name to give energy file
InfoFile Name to give info file
TimingsFile Name to give timings file
CpuFile Name to give CPU file
RestartFile Name to give restart file
SnapshotFileBase Base name for snapshots, appended by snapshot number
OutputListFilename Name of file containing output times / expansion factors
TimeLimitCPU Max CPU time to use for Gadget run
ResubmitOn Flag to tell super-computer there is a resubmit file
ResubmitCommand Specific to super-computer resubmit command
ICFormat Initial conditions format: PUT OPTIONS IN TABLE HERE
SnapFormat Snapshot format: PUT OPTIONS IN TABLE HERE
ComovingIntegrationOn Allow for expansion of Universe
TypeOfTimestepCriterion Type of particle integrator - leave at 0
OutputListOn Flag to tell it to use OutputListFilename as input
PeriodicBoundariesOn Flag to turn on/off periodic box boundaries, only needed for large cosmological runs
TimeBegin Time at the beginning of simulation
TimeMax Max time to evolve particles to
Omega0 Total energy density
OmegaLambda Cosmological constant energy density
OmegaBaryon Baryonic energy density
HubbleParam Value of H0/100 to be used
BoxSize Length of box edge (important for cosmological runs only)
TimeBetSnapshot Time between snapshots
TimeOfFirstSnapshot Time at which to output first snapshot
CpuTimeBetRestartFile How often to output full restart file
TimeBetStatistics Time between energy.txt updates
NumFilesPerSnapshot How many files to split snapshots over
NumFilesWrittenInParallel How many files to split snapshots over (probably ignore)
ErrTolIntAccuracy Orbital integration accuracy
CourantFac Limit on time step compared to sound crossing time for hydro runs
MaxSizeTimestep Maximum time step allowed
MinSizeTimestep Minimum time step allowed
ErrTolTheta Controls the accuracy of integration (smaller is closer to direct N-body)
TypeOfOpeningCriterion Barnes-Hut or modified opening criteria (probably ignore)
ErrTolForceAcc Only used for modified opening criterion (use default)
TreeDomainUpdateFrequency How often should a tree be constructed
DesNumNgb Number of neighbours to use for density estimation in SPH
MaxNumNgbDeviation How much tolerance is allowed when finding neighbours
ArtBulkViscConst Artificial viscosity term (use default)
InitGasTemp Initial gas temperature
MinGasTemp Minimum gas temperature allowed in the run
PartAllocFactor Memory buffer per particle per processor
TreeAllocFactor Memory buffer for tree calculation
BufferSize Total memory buffer between processors
UnitLength_in_cm Assumed IC distance units in cm (default assumes kpc for input)
UnitMass_in_g Assumed mass of provided IC mass units in grams (default assumes 1e10 Msun for input)
UnitVelocity_in_cm_per_s Assumed velocity of provided units in cm/s (default assumes km/s)
GravityConstantInternal Gravitational constant G in internal units
MinGasHsmlFractional Minimum multiplicative factor for smoothing length in hydro gas
SofteningGas Softening to use for gas particles
SofteningHalo Softening to use for halo particles
SofteningDisk Softening to use for disk particles
SofteningBulge Softening to use for bulge particles
SofteningStars Softening to use for star particles
SofteningBndry Softening to use for boundary particles
SofteningGasMaxPhys Physical softening to use for gas particles (only relevant for Cosmo run)
SofteningHaloMaxPhys Physical softening to use for halo particles (only relevant for Cosmo run)
SofteningDiskMaxPhys Physical softening to use for disk particles (only relevant for Cosmo run)
SofteningBulgeMaxPhys Physical softening to use for bulge particles (only relevant for Cosmo run)
SofteningStarsMaxPhys Physical softening to use for star particles (only relevant for Cosmo run)
SofteningBndryMaxPhys Physical softening to use for boundary particles (only relevant for Cosmo run)
MaxRMSDisplacementFac Biggest distance that a particle can move in a time step
NFWConcentration Concentration of analytic NFW profile, addNFW must be set to TRUE
VirialMass Mass within virial radius
of analytic NFW profile, addNFW must be set to TRUE
FlatRadius Forces the NFW profile to be cored (not cusped), addNFW must be set to TRUE
DeltaVir Virial overdensity of NFW profile, addNFW must be set to TRUE
addNFW Logical; determines whether the analytic NFW specific parameters above are added to the setup file

Value
No value returned, called for the side-effect of writing out a Gadget parameter setup file.

Author(s)
<NAME>

See Also
snapwrite, snapread, addhead

Examples
## Not run:
genparam('example.param','Demo/Example1/')
## End(Not run)

snapread Read in Gadget snapshots

Description
This function allows the user to read in the standard format Gadget binaries. It keeps the particle information and header information in separate components of a list.

Usage
snapread(file)

Arguments
file The full path to the Gadget snapshot to be read in.

Value
part A data.frame containing the main particle level information. Columns included are:
ID particle ID
x x position in units of Mpc
y y position in units of Mpc
z z position in units of Mpc
vx x velocity
vy y velocity
vz z velocity
Mass particle mass in units of Msun
head A list containing various header information as list elements.
These are:
Npart Vector of length 6 containing the number of particles in this snapshot file, where: gas [1] / collisionless particles [2:6]
Massarr Vector of length 6 containing the particle masses for the respective particle types in Npart
Time Time of snapshot in units of km/s and kpc, so 1 unit is ~10 Gyrs
z Redshift of snapshot
FlagSfr Star formation turned on/off
Nall Vector of length 6 containing the number of particles in all snapshot files, where: gas [1] / collisionless particles [2:6]
FlagFeedback Feedback turned on/off
FlagCooling Cooling turned on/off
NumFiles Number of files per snapshot - usually 1
BoxSize Size of simulation box edge length in units of kpc
OmegaM Omega matter of the simulation
OmegaL Omega lambda of the simulation
h Hubble constant divided by 100 used in the simulation
FlagAge Stellar ages on/off
FlagMetals Stellar metallicities on/off
NallHW Tells Gadget to use large integers for the respective particle types in Npart - not usually necessary
flag_entr_ics Entropy for gas on/off

Author(s)
<NAME>

See Also
snapwrite, addhead, genparam

Examples
## Not run:
temp=snapread('somepath/snapshot_XXX')
## End(Not run)

snapwrite Write Gadget snapshots

Description
This function allows the user to write standard format Gadget binaries. It writes the particle information and header information, which are provided as separate R objects.

Usage
snapwrite(part, head, file)

Arguments
part A data.frame containing the main particle level information. Columns required are:
ID particle ID
x x position in units of Mpc
y y position in units of Mpc
z z position in units of Mpc
vx x velocity in units of km/s
vy y velocity in units of km/s
vz z velocity in units of km/s
Mass particle mass in units of Msun
head A list containing various header information as list elements.
These are:
Npart Vector of length 6 containing the number of particles in this snapshot file, where: gas [1] / collisionless particles [2:6]
Massarr Vector of length 6 containing the particle masses for the respective particle types in Npart
Time Time of snapshot in units of km/s and kpc, so 1 unit is ~10 Gyrs
z Redshift of snapshot
FlagSfr Star formation turned on/off
Nall Vector of length 6 containing the number of particles in all snapshot files, where: gas [1] / collisionless particles [2:6]
FlagFeedback Feedback turned on/off
FlagCooling Cooling turned on/off
NumFiles Number of files per snapshot - usually 1
BoxSize Size of simulation box edge length in units of kpc
OmegaM Omega matter of the simulation
OmegaL Omega lambda of the simulation
h Hubble constant divided by 100 used in the simulation
FlagAge Stellar ages on/off
FlagMetals Stellar metallicities on/off
NallHW Tells Gadget to use large integers for the respective particle types in Npart - not usually necessary
flag_entr_ics Entropy for gas on/off
file The full path to the Gadget snapshot to be created.

Value
No value returned, called for the side-effect of writing out a binary Gadget file.

Author(s)
<NAME>

See Also
snapread, addhead, genparam

Examples
## Not run:
snapwrite(snap$part,snap$head,'somepath/snapshot_XXX')
## End(Not run)
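Taken together, the functions above form a simple pipeline: build a particle data.frame, attach a header with addhead, write initial conditions with snapwrite, and generate a matching parameter file with genparam. A minimal sketch of that pipeline follows; it is illustrative only (the particle values and file names are made up, and it assumes the snapshot package is installed):

```r
# Illustrative sketch: two collisionless particles written as Gadget ICs.
library(snapshot)

part = data.frame(
  ID   = 1:2,
  x    = c(0, 0.1), y = c(0, 0), z = c(0, 0),   # positions [Mpc]
  vx   = c(0, 10),  vy = c(0, 0), vz = c(0, 0), # velocities [km/s]
  Mass = c(1e10, 1e10)                          # masses [Msun]
)

# Npart = 2 places the particles in the first collisionless slot
# (gas is slot 1); Nall is derived from the data.frame automatically.
temp = addhead(part, Npart = 2)

# write the initial conditions, then a parameter file pointing at them
snapwrite(temp$part, temp$head, 'snapshot_ics')
genparam('example.param', './Demo/', InitCondFile = 'snapshot_ics')
```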
node-tls
===

A native implementation of [TLS][] (and various other cryptographic tools) in [JavaScript][].

Introduction
---

The node-tls software is a fully native implementation of the [TLS][] protocol in JavaScript, a set of cryptography utilities, and a set of tools for developing Web Apps that utilize many network resources.

Installation
---

### Node.js

If you want to use forge with [Node.js][], it is available through `npm`: <https://npmjs.org/node-tls>

Installation:

```
npm install node-tls
```

You can then use forge as a regular module:

```
const { tls } = require('node-tls');
```

Documentation
---

* [Introduction](#introduction)
* [Performance](#performance)
* [Installation](#installation)
* [Testing](#testing)
* [Contributing](#contributing)

### API

* [Options](#options)

### Transports

* [TLS](#tls)
* [HTTP](#http)
* [SSH](#ssh)
* [XHR](#xhr)
* [Sockets](#socket)

### Ciphers

* [CIPHER](#cipher)
* [AES](#aes)
* [DES](#des)
* [RC2](#rc2)

### PKI

* [ED25519](#ed25519)
* [RSA](#rsa)
* [RSA-KEM](#rsakem)
* [X.509](#x509)
* [PKCS#5](#pkcs5)
* [PKCS#7](#pkcs7)
* [PKCS#8](#pkcs8)
* [PKCS#10](#pkcs10)
* [PKCS#12](#pkcs12)
* [ASN.1](#asn)

### Message Digests

* [SHA1](#sha1)
* [SHA256](#sha256)
* [SHA384](#sha384)
* [SHA512](#sha512)
* [MD5](#md5)
* [HMAC](#hmac)

### Utilities

* [Prime](#prime)
* [PRNG](#prng)
* [Tasks](#task)
* [Utilities](#util)
* [Logging](#log)
* [Flash Networking Support](#flash)

### Other

* [Security Considerations](#security-considerations)
* [Library Background](#library-background)
* [Contact](#contact)
* [Donations](#donations)

The npm package includes pre-built `min.js`, `all.min.js`, and `prime.worker.min.js` using the [UMD][] format.

API
---

### Options

If at any time you wish to disable the use of native code, where available, for particular forge features like its secure random number generator, you may set the `options.usePureJavaScript` flag to `true`.
It is not recommended that you set this flag, as native code is typically more performant and may have stronger security properties. It may be useful to set this flag to test certain features that you plan to run in environments that are different from your testing environment.

To disable native code when including forge in the browser:

```
// run this *after* including the forge script
options.usePureJavaScript = true;
```

To disable native code when using Node.js:

```
const { options } = require('node-tls');
options.usePureJavaScript = true;
```

Transports
---

### TLS

Provides a native JavaScript client and server-side [TLS][] implementation.

**Examples**

``` // create TLS client var client = tls.createConnection({ server: false, caStore: /* Array of PEM-formatted certs or a CA store object */, sessionCache: {}, // supported cipher suites in order of preference cipherSuites: [ tls.CipherSuites.TLS_RSA_WITH_AES_128_CBC_SHA, tls.CipherSuites.TLS_RSA_WITH_AES_256_CBC_SHA], virtualHost: 'example.com', verify: function(connection, verified, depth, certs) { if(depth === 0) { var cn = certs[0].subject.getField('CN').value; if(cn !== 'example.com') { verified = { alert: tls.Alert.Description.bad_certificate, message: 'Certificate common name does not match hostname.'
}; } } return verified; }, connected: function(connection) { console.log('connected'); // send message to server connection.prepare(util.encodeUtf8('Hi server!')); /* NOTE: experimental, start heartbeat retransmission timer myHeartbeatTimer = setInterval(function() { connection.prepareHeartbeatRequest(util.createBuffer('1234')); }, 5*60*1000);*/ }, /* provide a client-side cert if you want */ getCertificate: function(connection, hint) { return myClientCertificate; }, /* the private key for the client-side cert if provided */ getPrivateKey: function(connection, cert) { return myClientPrivateKey; }, tlsDataReady: function(connection) { // TLS data (encrypted) is ready to be sent to the server sendToServerSomehow(connection.tlsData.getBytes()); // if you were communicating with the server below, you'd do: // server.process(connection.tlsData.getBytes()); }, dataReady: function(connection) { // clear data from the server is ready console.log('the server sent: ' + util.decodeUtf8(connection.data.getBytes())); // close connection connection.close(); }, /* NOTE: experimental heartbeatReceived: function(connection, payload) { // restart retransmission timer, look at payload clearInterval(myHeartbeatTimer); myHeartbeatTimer = setInterval(function() { connection.prepareHeartbeatRequest(util.createBuffer('1234')); }, 5*60*1000); payload.getBytes(); },*/ closed: function(connection) { console.log('disconnected'); }, error: function(connection, error) { console.log('uh oh', error); } }); // start the handshake process client.handshake(); // when encrypted TLS data is received from the server, process it client.process(encryptedBytesFromServer); // create TLS server var server = tls.createConnection({ server: true, caStore: /* Array of PEM-formatted certs or a CA store object */, sessionCache: {}, // supported cipher suites in order of preference cipherSuites: [ tls.CipherSuites.TLS_RSA_WITH_AES_128_CBC_SHA, tls.CipherSuites.TLS_RSA_WITH_AES_256_CBC_SHA], // require a client-side
certificate if you want verifyClient: true, verify: function(connection, verified, depth, certs) { if(depth === 0) { var cn = certs[0].subject.getField('CN').value; if(cn !== 'the-client') { verified = { alert: tls.Alert.Description.bad_certificate, message: 'Certificate common name does not match expected client.' }; } } return verified; }, connected: function(connection) { console.log('connected'); // send message to client connection.prepare(util.encodeUtf8('Hi client!')); /* NOTE: experimental, start heartbeat retransmission timer myHeartbeatTimer = setInterval(function() { connection.prepareHeartbeatRequest(util.createBuffer('1234')); }, 5*60*1000);*/ }, getCertificate: function(connection, hint) { return myServerCertificate; }, getPrivateKey: function(connection, cert) { return myServerPrivateKey; }, tlsDataReady: function(connection) { // TLS data (encrypted) is ready to be sent to the client sendToClientSomehow(connection.tlsData.getBytes()); // if you were communicating with the client above you'd do: // client.process(connection.tlsData.getBytes()); }, dataReady: function(connection) { // clear data from the client is ready console.log('the client sent: ' + util.decodeUtf8(connection.data.getBytes())); // close connection connection.close(); }, /* NOTE: experimental heartbeatReceived: function(connection, payload) { // restart retransmission timer, look at payload clearInterval(myHeartbeatTimer); myHeartbeatTimer = setInterval(function() { connection.prepareHeartbeatRequest(util.createBuffer('1234')); }, 5*60*1000); payload.getBytes(); },*/ closed: function(connection) { console.log('disconnected'); }, error: function(connection, error) { console.log('uh oh', error); } }); // when encrypted TLS data is received from the client, process it server.process(encryptedBytesFromClient); ``` Connect to a TLS server using node's net.Socket: ``` var socket = new net.Socket(); var client = tls.createConnection({ server: false, verify: function(connection, verified, 
depth, certs) { // skip verification for testing console.log('[tls] server certificate verified'); return true; }, connected: function(connection) { console.log('[tls] connected'); // prepare some data to send (note that the string is interpreted as // 'binary' encoded, which works for HTTP which only uses ASCII, use // util.encodeUtf8(str) otherwise) client.prepare('GET / HTTP/1.0\r\n\r\n'); }, tlsDataReady: function(connection) { // encrypted data is ready to be sent to the server var data = connection.tlsData.getBytes(); socket.write(data, 'binary'); // encoding should be 'binary' }, dataReady: function(connection) { // clear data from the server is ready var data = connection.data.getBytes(); console.log('[tls] data received from the server: ' + data); }, closed: function() { console.log('[tls] disconnected'); }, error: function(connection, error) { console.log('[tls] error', error); } }); socket.on('connect', function() { console.log('[socket] connected'); client.handshake(); }); socket.on('data', function(data) { client.process(data.toString('binary')); // encoding should be 'binary' }); socket.on('end', function() { console.log('[socket] disconnected'); }); // connect to google.com socket.connect(443, 'google.com'); // or connect to gmail's imap server (but don't send the HTTP header above) //socket.connect(993, 'imap.gmail.com'); ```

### HTTP

Provides a native [JavaScript][] mini-implementation of an HTTP client that uses pooled sockets.
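Like any streaming client, the mini HTTP implementation has to cope with response headers arriving split across TCP chunks. The sketch below is stdlib-only and illustrative (`createHeaderReader` is a hypothetical helper, not part of the node-tls API); it shows the buffer-until-`\r\n\r\n` idea behind incremental header parsing:

```javascript
// Minimal incremental HTTP response header detector (illustrative only).
// Feed it chunks as they arrive; it reports when the full header block
// (terminated by "\r\n\r\n") has been received, then exposes the parsed header.
function createHeaderReader() {
  let buffer = '';
  let header = null;
  return {
    push(chunk) {
      if (header) return true;              // header already complete
      buffer += chunk;
      const end = buffer.indexOf('\r\n\r\n');
      if (end === -1) return false;         // need more data
      const lines = buffer.slice(0, end).split('\r\n');
      const parts = lines[0].split(' ');    // e.g. "HTTP/1.0 200 OK"
      header = { version: parts[0], code: Number(parts[1]), fields: {} };
      for (const line of lines.slice(1)) {
        const i = line.indexOf(':');
        header.fields[line.slice(0, i).toLowerCase()] = line.slice(i + 1).trim();
      }
      return true;
    },
    header: () => header
  };
}

const reader = createHeaderReader();
reader.push('HTTP/1.0 200 OK\r\nContent-Ty');     // header not complete yet
reader.push('pe: text/html\r\n\r\n<body>');        // header complete
console.log(reader.header().code);                 // 200
console.log(reader.header().fields['content-type']); // text/html
```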
**Examples** ``` // create an HTTP GET request var request = http.createRequest({method: 'GET', path: url.path}); // send the request somewhere sendSomehow(request.toString()); // receive response var buffer = util.createBuffer(); var response = http.createResponse(); var someAsyncDataHandler = function(bytes) { if(!response.bodyReceived) { buffer.putBytes(bytes); if(!response.headerReceived) { if(response.readHeader(buffer)) { console.log('HTTP response header: ' + response.toString()); } } if(response.headerReceived && !response.bodyReceived) { if(response.readBody(buffer)) { console.log('HTTP response body: ' + response.body); } } } }; ``` ### SSH Provides some SSH utility functions. **Examples** ``` // encodes (and optionally encrypts) a private RSA key as a Putty PPK file ssh.privateKeyToPutty(privateKey, passphrase, comment); // encodes a public RSA key as an OpenSSH file ssh.publicKeyToOpenSSH(key, comment); // encodes a private RSA key as an OpenSSH file ssh.privateKeyToOpenSSH(privateKey, passphrase); // gets the SSH public key fingerprint in a byte buffer ssh.getPublicKeyFingerprint(key); // gets a hex-encoded, colon-delimited SSH public key fingerprint ssh.getPublicKeyFingerprint(key, {encoding: 'hex', delimiter: ':'}); ``` ### XHR Provides an XmlHttpRequest implementation using http as a backend. **Examples** ``` // TODO ``` ### Sockets Provides an interface to create and use raw sockets provided via Flash. **Examples** ``` // TODO ``` Ciphers --- ### CIPHER Provides a basic API for block encryption and decryption. There is built-in support for the ciphers: [AES][], [3DES][], and [DES][], and for the modes of operation: [ECB][], [CBC][], [CFB][], [OFB][], [CTR][], and [GCM][]. These algorithms are currently supported: * AES-ECB * AES-CBC * AES-CFB * AES-OFB * AES-CTR * AES-GCM * 3DES-ECB * 3DES-CBC * DES-ECB * DES-CBC When using an [AES][] algorithm, the key size will determine whether AES-128, AES-192, or AES-256 is used (all are supported). 
When a [DES][] algorithm is used, the key size will determine whether [3DES][] or regular [DES][] is used. Use a [3DES][] algorithm to enforce Triple-DES.

**Examples**

``` // generate a random key and IV // Note: a key size of 16 bytes will use AES-128, 24 => AES-192, 32 => AES-256 var key = random.getBytesSync(16); var iv = random.getBytesSync(16); /* alternatively, generate a password-based 16-byte key var salt = random.getBytesSync(128); var key = pkcs5.pbkdf2('password', salt, numIterations, 16); */ // encrypt some bytes using CBC mode // (other modes include: ECB, CFB, OFB, CTR, and GCM) // Note: CBC and ECB modes use PKCS#7 padding as default // (named aesCipher to avoid shadowing the `cipher` module) var aesCipher = cipher.createCipher('AES-CBC', key); aesCipher.start({iv: iv}); aesCipher.update(util.createBuffer(someBytes)); aesCipher.finish(); var encrypted = aesCipher.output; // outputs encrypted hex console.log(encrypted.toHex()); // decrypt some bytes using CBC mode // (other modes include: CFB, OFB, CTR, and GCM) var decipher = cipher.createDecipher('AES-CBC', key); decipher.start({iv: iv}); decipher.update(encrypted); var result = decipher.finish(); // check 'result' for true/false // outputs decrypted hex console.log(decipher.output.toHex()); // decrypt bytes using CBC mode and streaming // Performance can suffer for large multi-MB inputs due to buffer // manipulations. Stream processing in chunks can offer significant // improvement. CPU intensive update() calls could also be performed with // setImmediate/setTimeout to avoid blocking the main browser UI thread (not // shown here). Optimal block size depends on the JavaScript VM and other // factors. Encryption can use a simple technique for increased performance.
var encryptedBytes = encrypted.bytes(); var decipher = cipher.createDecipher('AES-CBC', key); decipher.start({iv: iv}); var length = encryptedBytes.length; var chunkSize = 1024 * 64; var index = 0; var decrypted = ''; do { decrypted += decipher.output.getBytes(); var buf = util.createBuffer(encryptedBytes.substr(index, chunkSize)); decipher.update(buf); index += chunkSize; } while(index < length); var result = decipher.finish(); assert(result); decrypted += decipher.output.getBytes(); console.log(util.bytesToHex(decrypted)); // encrypt some bytes using GCM mode var gcmCipher = cipher.createCipher('AES-GCM', key); gcmCipher.start({ iv: iv, // should be a 12-byte binary-encoded string or byte buffer additionalData: 'binary-encoded string', // optional tagLength: 128 // optional, defaults to 128 bits }); gcmCipher.update(util.createBuffer(someBytes)); gcmCipher.finish(); var encrypted = gcmCipher.output; var tag = gcmCipher.mode.tag; // outputs encrypted hex console.log(encrypted.toHex()); // outputs authentication tag console.log(tag.toHex()); // decrypt some bytes using GCM mode var decipher = cipher.createDecipher('AES-GCM', key); decipher.start({ iv: iv, additionalData: 'binary-encoded string', // optional tagLength: 128, // optional, defaults to 128 bits tag: tag // authentication tag from encryption }); decipher.update(encrypted); var pass = decipher.finish(); // pass is false if there was a failure (eg: authentication tag didn't match) if(pass) { // outputs decrypted hex console.log(decipher.output.toHex()); } ```

Using forge in Node.js to match openssl's "enc" command line tool (**Note**: OpenSSL "enc" uses a non-standard file format with a custom key derivation function and a fixed iteration count of 1, which some consider less secure than alternatives such as [OpenPGP](https://tools.ietf.org/html/rfc4880)/[GnuPG](https://www.gnupg.org/)):

``` var forge = require('node-tls'); var fs = require('fs'); // openssl enc -des3 -in input.txt -out input.enc function encrypt(password) {
var input = fs.readFileSync('input.txt', {encoding: 'binary'}); // 3DES key and IV sizes var keySize = 24; var ivSize = 8; // get derived bytes // Notes: // 1. If using an alternative hash (eg: "-md sha1") pass // "md.sha1.create()" as the final parameter. // 2. If using "-nosalt", set salt to null. var salt = random.getBytesSync(8); // var digest = md.sha1.create(); // "-md sha1" var derivedBytes = pbe.opensslDeriveBytes( password, salt, keySize + ivSize/*, digest*/); var buffer = util.createBuffer(derivedBytes); var key = buffer.getBytes(keySize); var iv = buffer.getBytes(ivSize); var desCipher = cipher.createCipher('3DES-CBC', key); desCipher.start({iv: iv}); desCipher.update(util.createBuffer(input, 'binary')); desCipher.finish(); var output = util.createBuffer(); // if using a salt, prepend this to the output: if(salt !== null) { output.putBytes('Salted__'); // (add to match openssl tool output) output.putBytes(salt); } output.putBuffer(desCipher.output); fs.writeFileSync('input.enc', output.getBytes(), {encoding: 'binary'}); } // openssl enc -d -des3 -in input.enc -out input.dec.txt function decrypt(password) { var input = fs.readFileSync('input.enc', {encoding: 'binary'}); // parse salt from input input = util.createBuffer(input, 'binary'); // skip "Salted__" (if known to be present) input.getBytes('Salted__'.length); // read 8-byte salt var salt = input.getBytes(8); // Note: if using "-nosalt", skip above parsing and use // var salt = null; // 3DES key and IV sizes var keySize = 24; var ivSize = 8; var derivedBytes = pbe.opensslDeriveBytes( password, salt, keySize + ivSize); var buffer = util.createBuffer(derivedBytes); var key = buffer.getBytes(keySize); var iv = buffer.getBytes(ivSize); var decipher = cipher.createDecipher('3DES-CBC', key); decipher.start({iv: iv}); decipher.update(input); var result = decipher.finish(); // check 'result' for true/false fs.writeFileSync( 'input.dec.txt', decipher.output.getBytes(), {encoding: 'binary'}); } ```

### AES

Provides [AES][]
encryption and decryption in [CBC][], [CFB][], [OFB][], [CTR][], and [GCM][] modes. See [CIPHER](#cipher) for examples. ### DES Provides [3DES][] and [DES][] encryption and decryption in [ECB][] and [CBC][] modes. See [CIPHER](#cipher) for examples. ### RC2 **Examples** ``` // generate a random key and IV var key = random.getBytesSync(16); var iv = random.getBytesSync(8); // encrypt some bytes var cipher = rc2.createEncryptionCipher(key); cipher.start(iv); cipher.update(util.createBuffer(someBytes)); cipher.finish(); var encrypted = cipher.output; // outputs encrypted hex console.log(encrypted.toHex()); // decrypt some bytes var cipher = rc2.createDecryptionCipher(key); cipher.start(iv); cipher.update(encrypted); cipher.finish(); // outputs decrypted hex console.log(cipher.output.toHex()); ``` PKI --- Provides [X.509][] certificate support, ED25519 key generation and signing/verifying, and RSA public and private key encoding, decoding, encryption/decryption, and signing/verifying. ### ED25519 Special thanks to [TweetNaCl.js][] for providing the bulk of the implementation. 
**Examples**

``` var ed25519 = pki.ed25519; // generate a random ED25519 keypair var keypair = ed25519.generateKeyPair(); // `keypair.publicKey` is a node.js Buffer or Uint8Array // `keypair.privateKey` is a node.js Buffer or Uint8Array // generate a random ED25519 keypair based on a random 32-byte seed var seed = random.getBytesSync(32); var keypair = ed25519.generateKeyPair({seed: seed}); // generate a random ED25519 keypair based on a "password" 32-byte seed var password = 'Mai9ohgh6ahxee0jutheew0pungoozil'; var seed = new util.ByteBuffer(password, 'utf8'); var keypair = ed25519.generateKeyPair({seed: seed}); // sign a UTF-8 message var signature = ed25519.sign({ message: 'test', // also accepts `binary` if you want to pass a binary string encoding: 'utf8', // node.js Buffer, Uint8Array, forge ByteBuffer, binary string privateKey: privateKey }); // `signature` is a node.js Buffer or Uint8Array // sign a message passed as a buffer var signature = ed25519.sign({ // also accepts a forge ByteBuffer or Uint8Array message: Buffer.from('test', 'utf8'), privateKey: privateKey }); // sign a message digest (shorter "message" == better performance) // (named `digest` to avoid shadowing the `md` module) var digest = md.sha256.create(); digest.update('test', 'utf8'); var signature = ed25519.sign({ md: digest, privateKey: privateKey }); // verify a signature on a UTF-8 message var verified = ed25519.verify({ message: 'test', encoding: 'utf8', // node.js Buffer, Uint8Array, forge ByteBuffer, or binary string signature: signature, // node.js Buffer, Uint8Array, forge ByteBuffer, or binary string publicKey: publicKey }); // `verified` is true/false // verify a signature on a message passed as a buffer var verified = ed25519.verify({ // also accepts a forge ByteBuffer or Uint8Array message: Buffer.from('test', 'utf8'), // node.js Buffer, Uint8Array, forge ByteBuffer, or binary string signature: signature, // node.js Buffer, Uint8Array, forge ByteBuffer, or binary string publicKey: publicKey }); // verify a signature on a message digest var digest = md.sha256.create(); digest.update('test', 'utf8'); var verified = ed25519.verify({ md: digest, // node.js Buffer, Uint8Array, forge ByteBuffer, or binary string signature: signature, // node.js Buffer, Uint8Array, forge ByteBuffer, or binary string publicKey: publicKey }); ```

### RSA

**Examples**

``` var rsa = pki.rsa; // generate an RSA key pair synchronously // *NOT RECOMMENDED*: Can be significantly slower than async and may block // JavaScript execution. Will use native Node.js 10.12.0+ API if possible. var keypair = rsa.generateKeyPair({bits: 2048, e: 0x10001}); // generate an RSA key pair asynchronously (uses web workers if available) // use workers: -1 to run a fast core estimator to optimize # of workers // *RECOMMENDED*: Can be significantly faster than sync. Will use native // Node.js 10.12.0+ or WebCrypto API if possible. rsa.generateKeyPair({bits: 2048, workers: 2}, function(err, keypair) { // keypair.privateKey, keypair.publicKey }); // generate an RSA key pair in steps that attempt to run for a specified period // of time on the main JS thread var state = rsa.createKeyPairGenerationState(2048, 0x10001); var step = function() { // run for 100 ms if(!rsa.stepKeyPairGenerationState(state, 100)) { setTimeout(step, 1); } else { // done, turn off progress indicator, use state.keys } }; // turn on progress indicator, schedule generation to run setTimeout(step); // sign data with a private key and output DigestInfo DER-encoded bytes // (defaults to RSASSA PKCS#1 v1.5) var digest = md.sha1.create(); digest.update('sign this', 'utf8'); var signature = privateKey.sign(digest); // verify data with a public key // (defaults to RSASSA PKCS#1 v1.5) var verified = publicKey.verify(digest.digest().bytes(), signature); // sign data using RSASSA-PSS where PSS uses a SHA-1 hash, a SHA-1 based // masking function MGF1, and a 20 byte salt var digest = md.sha1.create(); digest.update('sign this', 'utf8'); var pssParams = pss.create({ md: md.sha1.create(), mgf: mgf.mgf1.create(md.sha1.create()), saltLength: 20 // optionally pass 'prng' with a custom PRNG implementation // optionally pass 'salt' with a util.ByteBuffer w/custom salt }); var signature = privateKey.sign(digest, pssParams); // verify RSASSA-PSS signature var pssParams = pss.create({ md: md.sha1.create(), mgf: mgf.mgf1.create(md.sha1.create()), saltLength: 20 // optionally pass 'prng' with a custom PRNG implementation }); var digest = md.sha1.create(); digest.update('sign this', 'utf8'); publicKey.verify(digest.digest().getBytes(), signature, pssParams); // encrypt data with a public key (defaults to RSAES PKCS#1 v1.5) var encrypted = publicKey.encrypt(bytes); // decrypt data with a private key (defaults to RSAES PKCS#1 v1.5) var decrypted = privateKey.decrypt(encrypted); // encrypt data with a public key using RSAES PKCS#1 v1.5 var encrypted = publicKey.encrypt(bytes, 'RSAES-PKCS1-V1_5'); // decrypt data with a private key using RSAES PKCS#1 v1.5 var decrypted = privateKey.decrypt(encrypted, 'RSAES-PKCS1-V1_5'); // encrypt data with a public key using RSAES-OAEP var encrypted = publicKey.encrypt(bytes, 'RSA-OAEP'); // decrypt data with a private key using RSAES-OAEP var decrypted = privateKey.decrypt(encrypted, 'RSA-OAEP'); // encrypt data with a public key using RSAES-OAEP/SHA-256 var encrypted = publicKey.encrypt(bytes, 'RSA-OAEP', { md: md.sha256.create() }); // decrypt data with a private key using RSAES-OAEP/SHA-256 var decrypted = privateKey.decrypt(encrypted, 'RSA-OAEP', { md: md.sha256.create() }); // encrypt data with a public key using RSAES-OAEP/SHA-256/MGF1-SHA-1 // compatible with Java's RSA/ECB/OAEPWithSHA-256AndMGF1Padding var encrypted = publicKey.encrypt(bytes, 'RSA-OAEP', { md: md.sha256.create(), mgf1: { md: md.sha1.create() } }); // decrypt data with a private key using RSAES-OAEP/SHA-256/MGF1-SHA-1 // compatible with Java's RSA/ECB/OAEPWithSHA-256AndMGF1Padding var decrypted = privateKey.decrypt(encrypted, 'RSA-OAEP', { md: md.sha256.create(), mgf1: { md: md.sha1.create() } }); ```

### RSA-KEM

**Examples**

``` // generate
// an RSA key pair asynchronously (uses web workers if available)
// use workers: -1 to run a fast core estimator to optimize # of workers
rsa.generateKeyPair({bits: 2048, workers: -1}, function(err, keypair) {
  // keypair.privateKey, keypair.publicKey
});

// generate and encapsulate a 16-byte secret key
var kdf1 = new kem.kdf1(md.sha1.create());
var rsaKem = kem.rsa.create(kdf1);
var result = rsaKem.encrypt(keypair.publicKey, 16);
// result has 'encapsulation' and 'key'

// encrypt some bytes
var iv = random.getBytesSync(12);
var someBytes = 'hello world!';
var aesCipher = cipher.createCipher('AES-GCM', result.key);
aesCipher.start({iv: iv});
aesCipher.update(util.createBuffer(someBytes));
aesCipher.finish();
var encrypted = aesCipher.output.getBytes();
var tag = aesCipher.mode.tag.getBytes();

// send 'encrypted', 'iv', 'tag', and result.encapsulation to recipient

// decrypt encapsulated 16-byte secret key
var kdf1 = new kem.kdf1(md.sha1.create());
var rsaKem = kem.rsa.create(kdf1);
var key = rsaKem.decrypt(keypair.privateKey, result.encapsulation, 16);

// decrypt some bytes
var decipher = cipher.createDecipher('AES-GCM', key);
decipher.start({iv: iv, tag: tag});
decipher.update(util.createBuffer(encrypted));
var pass = decipher.finish();

// pass is false if there was a failure (eg: authentication tag didn't match)
if(pass) {
  // outputs 'hello world!'
  console.log(decipher.output.getBytes());
}
```
### X.509

**Examples**

```
// convert a PEM-formatted public key to a node-tls public key
var publicKey = pki.publicKeyFromPem(pem);

// convert a node-tls public key to PEM-format
var pem = pki.publicKeyToPem(publicKey);

// convert an ASN.1 SubjectPublicKeyInfo to a node-tls public key
var publicKey = pki.publicKeyFromAsn1(subjectPublicKeyInfo);

// convert a node-tls public key to an ASN.1 SubjectPublicKeyInfo
var subjectPublicKeyInfo = pki.publicKeyToAsn1(publicKey);

// gets a SHA-1 RSAPublicKey fingerprint as a byte buffer
pki.getPublicKeyFingerprint(key);

// gets a SHA-1 SubjectPublicKeyInfo fingerprint as a byte buffer
pki.getPublicKeyFingerprint(key, {type: 'SubjectPublicKeyInfo'});

// gets a hex-encoded, colon-delimited SHA-1 RSAPublicKey public key fingerprint
pki.getPublicKeyFingerprint(key, {encoding: 'hex', delimiter: ':'});

// gets a hex-encoded, colon-delimited SHA-1 SubjectPublicKeyInfo public key fingerprint
pki.getPublicKeyFingerprint(key, {
  type: 'SubjectPublicKeyInfo',
  encoding: 'hex',
  delimiter: ':'
});

// gets a hex-encoded, colon-delimited MD5 RSAPublicKey public key fingerprint
pki.getPublicKeyFingerprint(key, {
  md: md.md5.create(),
  encoding: 'hex',
  delimiter: ':'
});

// creates a CA store
var caStore = pki.createCaStore([/* PEM-encoded cert */, ...]);

// add a certificate to the CA store
caStore.addCertificate(certObjectOrPemString);

// gets the issuer (its certificate) for the given certificate
var issuerCert = caStore.getIssuer(subjectCert);

// verifies a certificate chain against a CA store
pki.verifyCertificateChain(caStore, chain, customVerifyCallback);

// signs a certificate using the given private key
cert.sign(privateKey);

// signs a certificate using SHA-256 instead of SHA-1
cert.sign(privateKey, md.sha256.create());

// verifies an issued certificate using the certificate's public key
var verified = issuer.verify(issued);

// generate a keypair and create an X.509v3 certificate
var
keys = pki.rsa.generateKeyPair(2048); var cert = pki.createCertificate(); cert.publicKey = keys.publicKey; // alternatively set public key from a csr //cert.publicKey = csr.publicKey; // NOTE: serialNumber is the hex encoded value of an ASN.1 INTEGER. // Conforming CAs should ensure serialNumber is: // - no more than 20 octets // - non-negative (prefix a '00' if your value starts with a '1' bit) cert.serialNumber = '01'; cert.validity.notBefore = new Date(); cert.validity.notAfter = new Date(); cert.validity.notAfter.setFullYear(cert.validity.notBefore.getFullYear() + 1); var attrs = [{ name: 'commonName', value: 'example.org' }, { name: 'countryName', value: 'US' }, { shortName: 'ST', value: 'Virginia' }, { name: 'localityName', value: 'Blacksburg' }, { name: 'organizationName', value: 'Test' }, { shortName: 'OU', value: 'Test' }]; cert.setSubject(attrs); // alternatively set subject from a csr //cert.setSubject(csr.subject.attributes); cert.setIssuer(attrs); cert.setExtensions([{ name: 'basicConstraints', cA: true }, { name: 'keyUsage', keyCertSign: true, digitalSignature: true, nonRepudiation: true, keyEncipherment: true, dataEncipherment: true }, { name: 'extKeyUsage', serverAuth: true, clientAuth: true, codeSigning: true, emailProtection: true, timeStamping: true }, { name: 'nsCertType', client: true, server: true, email: true, objsign: true, sslCA: true, emailCA: true, objCA: true }, { name: 'subjectAltName', altNames: [{ type: 6, // URI value: 'http://example.org/webid#me' }, { type: 7, // IP ip: '127.0.0.1' }] }, { name: 'subjectKeyIdentifier' }]); /* alternatively set extensions from a csr var extensions = csr.getAttribute({name: 'extensionRequest'}).extensions; // optionally add more extensions extensions.push.apply(extensions, [{ name: 'basicConstraints', cA: true }, { name: 'keyUsage', keyCertSign: true, digitalSignature: true, nonRepudiation: true, keyEncipherment: true, dataEncipherment: true }]); cert.setExtensions(extensions); */ // self-sign 
// certificate
cert.sign(keys.privateKey);

// convert a node-tls certificate to PEM
var pem = pki.certificateToPem(cert);

// convert a node-tls certificate from PEM
var cert = pki.certificateFromPem(pem);

// convert an ASN.1 X.509v3 object to a node-tls certificate
var cert = pki.certificateFromAsn1(obj);

// convert a node-tls certificate to an ASN.1 X.509v3 object
var asn1Cert = pki.certificateToAsn1(cert);
```
### PKCS#5

Provides the password-based key-derivation function from [PKCS#5][].

**Examples**

```
// generate a password-based 16-byte key
// note an optional message digest can be passed as the final parameter
var salt = random.getBytesSync(128);
var derivedKey = pkcs5.pbkdf2('password', salt, numIterations, 16);

// generate key asynchronously
// note an optional message digest can be passed before the callback
pkcs5.pbkdf2('password', salt, numIterations, 16, function(err, derivedKey) {
  // do something w/derivedKey
});
```
### PKCS#7

Provides cryptographically protected messages from [PKCS#7][].
**Examples** ``` // convert a message from PEM var p7 = pkcs7.messageFromPem(pem); // look at p7.recipients // find a recipient by the issuer of a certificate var recipient = p7.findRecipient(cert); // decrypt p7.decrypt(p7.recipients[0], privateKey); // create a p7 enveloped message var p7 = pkcs7.createEnvelopedData(); // add a recipient var cert = pki.certificateFromPem(certPem); p7.addRecipient(cert); // set content p7.content = util.createBuffer('Hello'); // encrypt p7.encrypt(); // convert message to PEM var pem = pkcs7.messageToPem(p7); // create a degenerate PKCS#7 certificate container // (CRLs not currently supported, only certificates) var p7 = pkcs7.createSignedData(); p7.addCertificate(certOrCertPem1); p7.addCertificate(certOrCertPem2); var pem = pkcs7.messageToPem(p7); // create PKCS#7 signed data with authenticatedAttributes // attributes include: PKCS#9 content-type, message-digest, and signing-time var p7 = pkcs7.createSignedData(); p7.content = util.createBuffer('Some content to be signed.', 'utf8'); p7.addCertificate(certOrCertPem); p7.addSigner({ key: privateKeyAssociatedWithCert, certificate: certOrCertPem, digestAlgorithm: pki.oids.sha256, authenticatedAttributes: [{ type: pki.oids.contentType, value: pki.oids.data }, { type: pki.oids.messageDigest // value will be auto-populated at signing time }, { type: pki.oids.signingTime, // value can also be auto-populated at signing time value: new Date() }] }); p7.sign(); var pem = pkcs7.messageToPem(p7); // PKCS#7 Sign in detached mode. // Includes the signature and certificate without the signed data. 
p7.sign({detached: true});
```
### PKCS#8

**Examples**

```
// convert a PEM-formatted private key to a node-tls private key
var privateKey = pki.privateKeyFromPem(pem);

// convert a node-tls private key to PEM-format
var pem = pki.privateKeyToPem(privateKey);

// convert an ASN.1 PrivateKeyInfo or RSAPrivateKey to a node-tls private key
var privateKey = pki.privateKeyFromAsn1(rsaPrivateKey);

// convert a node-tls private key to an ASN.1 RSAPrivateKey
var rsaPrivateKey = pki.privateKeyToAsn1(privateKey);

// wrap an RSAPrivateKey ASN.1 object in a PKCS#8 ASN.1 PrivateKeyInfo
var privateKeyInfo = pki.wrapRsaPrivateKey(rsaPrivateKey);

// convert a PKCS#8 ASN.1 PrivateKeyInfo to PEM
var pem = pki.privateKeyInfoToPem(privateKeyInfo);

// encrypts a PrivateKeyInfo using a custom password and
// outputs an EncryptedPrivateKeyInfo
var encryptedPrivateKeyInfo = pki.encryptPrivateKeyInfo(
  privateKeyInfo, 'myCustomPasswordHere', {
    algorithm: 'aes256', // 'aes128', 'aes192', 'aes256', '3des'
  });

// decrypts an ASN.1 EncryptedPrivateKeyInfo that was encrypted
// with a custom password
var privateKeyInfo = pki.decryptPrivateKeyInfo(
  encryptedPrivateKeyInfo, 'myCustomPasswordHere');

// converts an EncryptedPrivateKeyInfo to PEM
var pem = pki.encryptedPrivateKeyToPem(encryptedPrivateKeyInfo);

// converts a PEM-encoded EncryptedPrivateKeyInfo to ASN.1 format
var encryptedPrivateKeyInfo = pki.encryptedPrivateKeyFromPem(pem);

// wraps and encrypts a node-tls private key and outputs it in PEM format
var pem = pki.encryptRsaPrivateKey(privateKey, 'password');

// encrypts a node-tls private key and outputs it in PEM format using OpenSSL's
// proprietary legacy format + encapsulated PEM headers (DEK-Info)
var pem = pki.encryptRsaPrivateKey(privateKey, 'password', {legacy: true});

// decrypts a PEM-formatted, encrypted private key
var privateKey = pki.decryptRsaPrivateKey(pem, 'password');

// sets an RSA public key from a private key
var publicKey =
pki.setRsaPublicKey(privateKey.n, privateKey.e); ``` ### PKCS#10 Provides certification requests or certificate signing requests (CSR) from [PKCS#10][]. **Examples** ``` // generate a key pair var keys = pki.rsa.generateKeyPair(1024); // create a certification request (CSR) var csr = pki.createCertificationRequest(); csr.publicKey = keys.publicKey; csr.setSubject([{ name: 'commonName', value: 'example.org' }, { name: 'countryName', value: 'US' }, { shortName: 'ST', value: 'Virginia' }, { name: 'localityName', value: 'Blacksburg' }, { name: 'organizationName', value: 'Test' }, { shortName: 'OU', value: 'Test' }]); // set (optional) attributes csr.setAttributes([{ name: 'challengePassword', value: 'password' }, { name: 'unstructuredName', value: 'My Company, Inc.' }, { name: 'extensionRequest', extensions: [{ name: 'subjectAltName', altNames: [{ // 2 is DNS type type: 2, value: 'test.domain.com' }, { type: 2, value: 'other.domain.com', }, { type: 2, value: 'www.domain.net' }] }] }]); // sign certification request csr.sign(keys.privateKey); // verify certification request var verified = csr.verify(); // convert certification request to PEM-format var pem = pki.certificationRequestToPem(csr); // convert a node-tls certification request from PEM-format var csr = pki.certificationRequestFromPem(pem); // get an attribute csr.getAttribute({name: 'challengePassword'}); // get extensions array csr.getAttribute({name: 'extensionRequest'}).extensions; ``` ### PKCS#12 Provides the cryptographic archive file format from [PKCS#12][]. **Note for Chrome/Firefox/iOS/similar users**: If you have trouble importing a PKCS#12 container, try using the TripleDES algorithm. It can be passed to `pkcs12.toPkcs12Asn1` using the `{algorithm: '3des'}` option. 
**Examples** ``` // decode p12 from base64 var p12Der = util.decode64(p12b64); // get p12 as ASN.1 object var p12Asn1 = asn1.fromDer(p12Der); // decrypt p12 using the password 'password' var p12 = pkcs12.pkcs12FromAsn1(p12Asn1, 'password'); // decrypt p12 using non-strict parsing mode (resolves some ASN.1 parse errors) var p12 = pkcs12.pkcs12FromAsn1(p12Asn1, false, 'password'); // decrypt p12 using literally no password (eg: Mac OS X/apple push) var p12 = pkcs12.pkcs12FromAsn1(p12Asn1); // decrypt p12 using an "empty" password (eg: OpenSSL with no password input) var p12 = pkcs12.pkcs12FromAsn1(p12Asn1, ''); // p12.safeContents is an array of safe contents, each of // which contains an array of safeBags // get bags by friendlyName var bags = p12.getBags({friendlyName: 'test'}); // bags are key'd by attribute type (here "friendlyName") // and the key values are an array of matching objects var cert = bags.friendlyName[0]; // get bags by localKeyId var bags = p12.getBags({localKeyId: buffer}); // bags are key'd by attribute type (here "localKeyId") // and the key values are an array of matching objects var cert = bags.localKeyId[0]; // get bags by localKeyId (input in hex) var bags = p12.getBags({localKeyIdHex: '<KEY>'}); // bags are key'd by attribute type (here "localKeyId", *not* "localKeyIdHex") // and the key values are an array of matching objects var cert = bags.localKeyId[0]; // get bags by type var bags = p12.getBags({bagType: pki.oids.certBag}); // bags are key'd by bagType and each bagType key's value // is an array of matches (in this case, certificate objects) var cert = bags[pki.oids.certBag][0]; // get bags by friendlyName and filter on bag type var bags = p12.getBags({ friendlyName: 'test', bagType: pki.oids.certBag }); // get key bags var bags = p12.getBags({bagType: pki.oids.keyBag}); // get key var bag = bags[pki.oids.keyBag][0]; var key = bag.key; // if the key is in a format unrecognized by forge then // bag.key will be `null`, use bag.asn1 to 
// get the ASN.1 representation of the key
if(bag.key === null) {
  var keyAsn1 = bag.asn1;
  // can now convert back to DER/PEM/etc for export
}

// generate a p12 using AES (default)
var p12Asn1 = pkcs12.toPkcs12Asn1(
  privateKey, certificateChain, 'password');

// generate a p12 that can be imported by Chrome/Firefox/iOS
// (requires the use of Triple DES instead of AES)
var p12Asn1 = pkcs12.toPkcs12Asn1(
  privateKey, certificateChain, 'password',
  {algorithm: '3des'});

// base64-encode p12
var p12Der = asn1.toDer(p12Asn1).getBytes();
var p12b64 = util.encode64(p12Der);

// create download link for p12
var a = document.createElement('a');
a.download = 'example.p12';
a.setAttribute('href', 'data:application/x-pkcs12;base64,' + p12b64);
a.appendChild(document.createTextNode('Download'));
```
### ASN.1

Provides [ASN.1][] DER encoding and decoding.

**Examples**

```
// create a SubjectPublicKeyInfo
var subjectPublicKeyInfo =
  asn1.create(asn1.Class.UNIVERSAL, asn1.Type.SEQUENCE, true, [
    // AlgorithmIdentifier
    asn1.create(asn1.Class.UNIVERSAL, asn1.Type.SEQUENCE, true, [
      // algorithm
      asn1.create(asn1.Class.UNIVERSAL, asn1.Type.OID, false,
        asn1.oidToDer(pki.oids['rsaEncryption']).getBytes()),
      // parameters (null)
      asn1.create(asn1.Class.UNIVERSAL, asn1.Type.NULL, false, '')
    ]),
    // subjectPublicKey
    asn1.create(asn1.Class.UNIVERSAL, asn1.Type.BITSTRING, false, [
      // RSAPublicKey
      asn1.create(asn1.Class.UNIVERSAL, asn1.Type.SEQUENCE, true, [
        // modulus (n)
        asn1.create(asn1.Class.UNIVERSAL, asn1.Type.INTEGER, false,
          _bnToBytes(key.n)),
        // publicExponent (e)
        asn1.create(asn1.Class.UNIVERSAL, asn1.Type.INTEGER, false,
          _bnToBytes(key.e))
      ])
    ])
  ]);

// serialize an ASN.1 object to DER format
var derBuffer = asn1.toDer(subjectPublicKeyInfo);

// deserialize to an ASN.1 object from a byte buffer filled with DER data
var object = asn1.fromDer(derBuffer);

// convert an OID dot-separated string to a byte buffer
var derOidBuffer = asn1.oidToDer('1.2.840.113549.1.1.5');

// convert a
// byte buffer with a DER-encoded OID to a dot-separated string
console.log(asn1.derToOid(derOidBuffer));
// output: 1.2.840.113549.1.1.5

// validates that an ASN.1 object matches a particular ASN.1 structure and
// captures data of interest from that structure for easy access
var publicKeyValidator = {
  name: 'SubjectPublicKeyInfo',
  tagClass: asn1.Class.UNIVERSAL,
  type: asn1.Type.SEQUENCE,
  constructed: true,
  captureAsn1: 'subjectPublicKeyInfo',
  value: [{
    name: 'SubjectPublicKeyInfo.AlgorithmIdentifier',
    tagClass: asn1.Class.UNIVERSAL,
    type: asn1.Type.SEQUENCE,
    constructed: true,
    value: [{
      name: 'AlgorithmIdentifier.algorithm',
      tagClass: asn1.Class.UNIVERSAL,
      type: asn1.Type.OID,
      constructed: false,
      capture: 'publicKeyOid'
    }]
  }, {
    // subjectPublicKey
    name: 'SubjectPublicKeyInfo.subjectPublicKey',
    tagClass: asn1.Class.UNIVERSAL,
    type: asn1.Type.BITSTRING,
    constructed: false,
    value: [{
      // RSAPublicKey
      name: 'SubjectPublicKeyInfo.subjectPublicKey.RSAPublicKey',
      tagClass: asn1.Class.UNIVERSAL,
      type: asn1.Type.SEQUENCE,
      constructed: true,
      optional: true,
      captureAsn1: 'rsaPublicKey'
    }]
  }]
};

var capture = {};
var errors = [];
if(!asn1.validate(
  publicKeyValidator, subjectPublicKeyInfo, capture, errors)) {
  throw 'ASN.1 object is not a SubjectPublicKeyInfo.';
}
// capture.subjectPublicKeyInfo contains the full ASN.1 object
// capture.rsaPublicKey contains the full ASN.1 object for the RSA public key
// capture.publicKeyOid only contains the value for the OID
var oid = asn1.derToOid(capture.publicKeyOid);
if(oid !== pki.oids['rsaEncryption']) {
  throw 'Unsupported OID.';
}

// pretty print an ASN.1 object to a string for debugging purposes
asn1.prettyPrint(object);
```
Message Digests
---
### SHA1

Provides [SHA-1][] message digests.

**Examples**

```
var digest = md.sha1.create();
digest.update('The quick brown fox jumps over the lazy dog');
console.log(digest.digest().toHex());
// output: 2fd4e1c67a2d28fced849ee1bb76e7391b93eb12
```
### SHA256

Provides [SHA-256][] message digests.
**Examples**

```
var digest = md.sha256.create();
digest.update('The quick brown fox jumps over the lazy dog');
console.log(digest.digest().toHex());
// output: d7a8fbb307d7809469ca9abcb0082e4f8d5651e46d3cdb762d02d0bf37c9e592
```
### SHA384

Provides [SHA-384][] message digests.

**Examples**

```
var digest = md.sha384.create();
digest.update('The quick brown fox jumps over the lazy dog');
console.log(digest.digest().toHex());
// output: ca737f1014a48f4c0b6dd43cb177b0afd9e5169367544c494011e3317dbf9a509cb1e5dc1e85a941bbee3d7f2afbc9b1
```
### SHA512

Provides [SHA-512][] message digests.

**Examples**

```
// SHA-512
var digest = md.sha512.create();
digest.update('The quick brown fox jumps over the lazy dog');
console.log(digest.digest().toHex());
// output: 07e547d9586f6a73f73fbac0435ed76951218fb7d0c8d788a309d785436bbb642e93a252a954f23912547d1e8a3b5ed6e1bfd7097821233fa0538f3db854fee6

// SHA-512/224
var digest = md.sha512.sha224.create();
digest.update('The quick brown fox jumps over the lazy dog');
console.log(digest.digest().toHex());
// output: 944cd2847fb54558d4775db0485a50003111c8e5daa63fe722c6aa37

// SHA-512/256
var digest = md.sha512.sha256.create();
digest.update('The quick brown fox jumps over the lazy dog');
console.log(digest.digest().toHex());
// output: dd9d67b371519c339ed8dbd25af90e976a1eeefd4ad3d889005e532fc5bef04d
```
### MD5

Provides [MD5][] message digests.

**Examples**

```
var digest = md.md5.create();
digest.update('The quick brown fox jumps over the lazy dog');
console.log(digest.digest().toHex());
// output: 9e107d9d372bb6826bd81d3542a419d6
```
### HMAC

Provides [HMAC][] w/any supported message digest algorithm.

**Examples**

```
var mac = hmac.create();
mac.start('sha1', 'Jefe');
mac.update('what do ya want for nothing?');
console.log(mac.digest().toHex());
// output: effcdf6ae5eb2fa2d27416d5f184df9c259a7c79
```
Utilities
---
### Prime

Provides an API for generating large, random, probable primes.
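A "probable prime" is a number that passes a randomized primality test such as Miller–Rabin. As background for the API below, here is a self-contained illustration of that test using JavaScript `BigInt`; it is not the node-tls implementation (which uses the PRIMEINC algorithm shown in the examples), just a sketch of the underlying idea:

```javascript
// Miller–Rabin probable-prime test over BigInt (educational sketch only).
function modPow(base, exp, mod) {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

function isProbablePrime(n) {
  const bases = [2n, 3n, 5n, 7n, 11n, 13n, 17n, 19n, 23n, 29n, 31n, 37n];
  if (n < 2n) return false;
  for (const p of bases) {
    if (n === p) return true;
    if (n % p === 0n) return false;
  }
  // write n - 1 as d * 2^r with d odd
  let d = n - 1n, r = 0n;
  while ((d & 1n) === 0n) { d >>= 1n; r++; }
  witness: for (const a of bases) {
    let x = modPow(a, d, n);
    if (x === 1n || x === n - 1n) continue;
    for (let i = 0n; i < r - 1n; i++) {
      x = (x * x) % n;
      if (x === n - 1n) continue witness;
    }
    return false; // definitely composite
  }
  return true; // probably prime
}
```

With this fixed base set the test is deterministic for inputs well beyond 64 bits; for cryptographic sizes, randomized bases and many rounds are used instead.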
**Examples**

```
// generate a random prime on the main JS thread
var bits = 1024;
prime.generateProbablePrime(bits, function(err, num) {
  console.log('random prime', num.toString(16));
});

// generate a random prime using Web Workers (if available, otherwise
// falls back to the main thread)
var bits = 1024;
var options = {
  algorithm: {
    name: 'PRIMEINC',
    workers: -1 // auto-optimize # of workers
  }
};
prime.generateProbablePrime(bits, options, function(err, num) {
  console.log('random prime', num.toString(16));
});
```
### PRNG

Provides a [Fortuna][]-based cryptographically-secure pseudo-random number generator, to be used with a cryptographic function backend, e.g. [AES][]. An implementation using [AES][] as a backend is provided. An API for collecting entropy is given, though if window.crypto.getRandomValues is available, it will be used automatically.

**Examples**

```
// get some random bytes synchronously
var bytes = random.getBytesSync(32);
console.log(util.bytesToHex(bytes));

// get some random bytes asynchronously
random.getBytes(32, function(err, bytes) {
  console.log(util.bytesToHex(bytes));
});

// collect some entropy if you'd like
random.collect(someRandomBytes);
jQuery().mousemove(function(e) {
  random.collectInt(e.clientX, 16);
  random.collectInt(e.clientY, 16);
});

// specify a seed file for use with the synchronous API if you'd like
random.seedFileSync = function(needed) {
  // get 'needed' number of random bytes from somewhere
  return fetchedRandomBytes;
};

// specify a seed file for use with the asynchronous API if you'd like
random.seedFile = function(needed, callback) {
  // get the 'needed' number of random bytes from somewhere
  callback(null, fetchedRandomBytes);
};

// register the main thread to send entropy or a Web Worker to receive
// entropy on demand from the main thread
random.registerWorker(self);

// generate a new instance of a PRNG with no collected entropy
var myPrng = random.createInstance();
```
### Tasks

Provides queuing and synchronizing
tasks in a web application.

**Examples**

```
// TODO
```
### Utilities

Provides utility functions, including byte buffer support, base64, bytes to/from hex, zlib inflate/deflate, etc.

**Examples**

```
// encode/decode base64
var encoded = util.encode64(str);
var str = util.decode64(encoded);

// encode/decode UTF-8
var encoded = util.encodeUtf8(str);
var str = util.decodeUtf8(encoded);

// bytes to/from hex
var bytes = util.hexToBytes(hex);
var hex = util.bytesToHex(bytes);

// create an empty byte buffer
var buffer = util.createBuffer();

// create a byte buffer from raw binary bytes
var buffer = util.createBuffer(input, 'raw');

// create a byte buffer from utf8 bytes
var buffer = util.createBuffer(input, 'utf8');

// get the length of the buffer in bytes
buffer.length();

// put bytes into the buffer
buffer.putBytes(bytes);

// put a 32-bit integer into the buffer
buffer.putInt32(10);

// buffer to hex
buffer.toHex();

// get a copy of the bytes in the buffer
buffer.bytes(/* count */);

// empty this buffer and get its contents
buffer.getBytes(/* count */);

// convert a forge buffer into a Node.js Buffer
// make sure you specify the encoding as 'binary'
var forgeBuffer = util.createBuffer();
var nodeBuffer = Buffer.from(forgeBuffer.getBytes(), 'binary');

// convert a Node.js Buffer into a forge buffer
// make sure you specify the encoding as 'binary'
var nodeBuffer = Buffer.from('CAFE', 'hex');
var forgeBuffer = util.createBuffer(nodeBuffer.toString('binary'));

// parse a URL
var parsed = util.parseUrl('http://example.com/foo?bar=baz');
// parsed.scheme, parsed.host, parsed.port, parsed.path, parsed.fullHost
```
### Logging

Provides logging to a javascript console using various categories and levels of verbosity.
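The library's own logging interface is not documented here, but category- and level-based console logging generally has the shape sketched below. Everything in this block (including `makeLogger` and the level table) is illustrative, not part of the node-tls API:

```javascript
// Generic sketch of category + verbosity-level logging.
// NOT the node-tls API; names here are illustrative only.
const LEVELS = { none: 0, error: 1, warning: 2, info: 3, debug: 4 };

function makeLogger(category, maxLevel, sink = console.log) {
  const threshold = LEVELS[maxLevel];
  const emit = (level) => (...args) => {
    // suppress messages more verbose than the configured threshold
    if (LEVELS[level] <= threshold) {
      sink(`[${category}] ${level}:`, ...args);
    }
  };
  return {
    error: emit('error'),
    warning: emit('warning'),
    info: emit('info'),
    debug: emit('debug')
  };
}

// a logger for a 'tls' category that shows info-level messages and above
const log = makeLogger('tls', 'info');
log.info('handshake complete'); // printed
log.debug('record details');    // suppressed at the 'info' threshold
```

The injectable `sink` keeps the verbosity filtering separate from the output target, which makes such loggers easy to test or redirect.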
**Examples**

```
// TODO
```
### LICENSE

This is a fork of the [node-forge](https://github.com/digitalbazaar/forge) project, licensed under its [LICENSE](https://github.com/digitalbazaar/forge/blob/master/LICENSE).

Readme
---
### Keywords

* aes
* asn
* asn.1
* cbc
* crypto
* cryptography
* csr
* des
* gcm
* hmac
* http
* https
* md5
* network
* pkcs
* pki
* prng
* rc2
* rsa
* sha1
* sha256
* sha384
* sha512
* ssh
* tls
* x.509
* x509
@hotosm/id
npm
JavaScript
iD - friendly JavaScript editor for [OpenStreetMap](https://www.openstreetmap.org/)
===

Basics
---

* iD is a JavaScript [OpenStreetMap](https://www.openstreetmap.org/) editor.
* It's intentionally simple. It lets you do the most basic tasks while not breaking other people's data.
* It supports all popular modern desktop browsers: Chrome, Firefox, Safari, Opera, and Edge.
* iD is not yet designed for mobile browsers, but this is something we hope to add!
* Data is rendered with [d3.js](https://d3js.org/).

Participate!
---

* Read the project [Code of Conduct](https://github.com/openstreetmap/iD/blob/HEAD/CODE_OF_CONDUCT.md) and remember to be nice to one another.
* Read up on [Contributing and the code style of iD](https://github.com/openstreetmap/iD/blob/HEAD/CONTRIBUTING.md).
* See [open issues in the issue tracker](https://github.com/openstreetmap/iD/issues?state=open) if you're looking for something to do.
* [Translate!](https://github.com/openstreetmap/iD/blob/develop/CONTRIBUTING.md#translating)
* Test a prerelease version of iD:
  + Stable mirror of `release` branch: <https://ideditor-release.netlify.app>
  + Development mirror of `develop` branch + latest translations: <https://ideditor.netlify.app>
  + Development mirror of `v3-prototype` branch: <https://preview.ideditor.com/master>

Come on in, the water's lovely. More help?
Ping `<NAME>`/`tyr_asd` or `bhousel` on: * [OpenStreetMap US Slack](https://slack.openstreetmap.us/) (`#id` channel) * [OpenStreetMap Discord](https://discord.gg/openstreetmap) (`#id` channel) * [OpenStreetMap IRC](https://wiki.openstreetmap.org/wiki/IRC) (`irc.oftc.net`, in `#osm-dev`) * [OpenStreetMap `dev` mailing list](https://wiki.openstreetmap.org/wiki/Mailing_lists) Prerequisites --- * [Node.js](https://nodejs.org/) version 12 or newer * [`git`](https://www.atlassian.com/git/tutorials/install-git/) for your platform + Note for Windows users: - Edit `$HOME\.gitconfig`: Add these lines to avoid checking in files with CRLF newlines ``` [core] autocrlf = input ``` Installation --- Note: Windows users should run these steps in a shell started with "Run as administrator". This is only necessary the first time so that the build process can create symbolic links. To run the current development version of iD on your own computer: #### Cloning the repository The repository is reasonably large, and it's unlikely that you need the full history (~200 MB). If you are happy to wait for it all to download, run: ``` git clone https://github.com/openstreetmap/iD.git ``` To clone only the most recent version, instead use a 'shallow clone': ``` git clone --depth=1 https://github.com/openstreetmap/iD.git ``` If you want to add in the full history later on, perhaps to run `git blame` or `git log`, run `git fetch --depth=1000000` #### Building iD 1. `cd` into the newly cloned project folder 2. Run `npm install` 3. Run `npm run all` 4. Run `npm start` 5. Open `http://127.0.0.1:8080/` in a web browser For guidance on building a packaged version, running tests, and contributing to development, see [CONTRIBUTING.md](https://github.com/openstreetmap/iD/blob/HEAD/CONTRIBUTING.md). License --- iD is available under the [ISC License](https://opensource.org/licenses/ISC). See the [LICENSE.md](https://github.com/openstreetmap/iD/blob/HEAD/LICENSE.md) file for more details. 
iD also bundles portions of the following open source software. * [D3.js (BSD-3-Clause)](https://github.com/d3/d3) * [CLDR (Unicode Consortium Terms of Use)](https://github.com/unicode-cldr/cldr-json) * [editor-layer-index (CC-BY-SA 3.0)](https://github.com/osmlab/editor-layer-index) * [Font Awesome (CC-BY 4.0)](https://fontawesome.com/license) * [Maki (CC0 1.0)](https://github.com/mapbox/maki) * [Temaki (CC0 1.0)](https://github.com/ideditor/temaki) * [Mapillary JS (MIT)](https://github.com/mapillary/mapillary-js) * [iD Tagging Schema (ISC)](https://github.com/openstreetmap/id-tagging-schema) * [name-suggestion-index (BSD-3-Clause)](https://github.com/osmlab/name-suggestion-index) * [osm-community-index (ISC)](https://github.com/osmlab/osm-community-index) Thank you --- Initial development of iD was made possible by a [grant of the Knight Foundation](https://www.mapbox.com/blog/knight-invests-openstreetmap/). Readme --- ### Keywords * editor * openstreetmap
iroh-metrics
rust
Rust
Crate iroh_metrics
===

Metrics library for iroh

Re-exports
---

* `pub use struct_iterable;`

Modules
---

* `core`: Expose core types and traits
* `metrics`: Metrics collection

Macros
---

* `inc`: Increment the given counter by 1.
* `inc_by`: Increment the given counter by `n`.

Module iroh_metrics::core
===

Expose core types and traits

Structs
---

* `Core`: Core is the base metrics struct. It manages the mapping between the metrics name and the actual metrics. It also carries a single prometheus registry to be used by all metrics.
* `Counter`: Open Metrics `Counter` to measure discrete events.

Traits
---

* `HistogramType`: Interface for all distribution based metrics.
* `Metric`: Description of a group of metrics.
* `MetricType`: Interface for all single value based metrics.

Module iroh_metrics::metrics
===

Metrics collection

Enables and manages a global registry of metrics. Divided up into modules, each module has its own metrics. Starting the metrics service will expose the metrics on an OpenMetrics HTTP endpoint.

To enable metrics collection, call `init_metrics()` before starting the service.

* To increment a **counter** by 1, use the `crate::inc` macro.
* To increment a **counter** by a value `n`, use the `crate::inc_by` macro.

To expose the metrics, start the metrics service with `start_metrics_server()`.
Example:
---

```
use iroh_metrics::{inc, inc_by};
use iroh_metrics::core::{Core, Metric, Counter};
use struct_iterable::Iterable;

#[derive(Debug, Clone, Iterable)]
pub struct Metrics {
    pub things_added: Counter,
}

impl Default for Metrics {
    fn default() -> Self {
        Self {
            things_added: Counter::new("things_added tracks the number of things we have added"),
        }
    }
}

impl Metric for Metrics {
    fn name() -> &'static str {
        "my_metrics"
    }
}

Core::init(|reg, metrics| {
    metrics.insert(Metrics::new(reg));
});

inc_by!(Metrics, things_added, 2);
inc!(Metrics, things_added);
```

Functions
---

* `start_metrics_server`: Start a server to serve the OpenMetrics endpoint.

Macro iroh_metrics::inc
===

```
macro_rules! inc {
    ($m:ty, $f:ident) => { ... };
}
```

Increment the given counter by 1.

Macro iroh_metrics::inc_by
===

```
macro_rules! inc_by {
    ($m:ty, $f:ident, $n:expr) => { ... };
}
```

Increment the given counter by `n`.
Package ‘yuima’ December 20, 2022

Type Package
Title The YUIMA Project Package for SDEs
Version 1.15.22
Depends R(>= 2.10.0), methods, zoo, stats4, utils, expm, cubature, mvtnorm
Imports Rcpp (>= 0.12.1), boot (>= 1.3-2), glassoFast, coda, calculus (>= 0.2.0)
Author YUIMA Project Team
Maintainer <NAME> <<EMAIL>>
Description Simulation and Inference for SDEs and Other Stochastic Processes.
License GPL-2
URL https://yuimaproject.com
BugReports https://github.com/yuimaproject/yuima/issues
LinkingTo Rcpp, RcppArmadillo
NeedsCompilation yes
Repository CRAN
Date/Publication 2022-12-20 18:30:02 UTC

R topics documented:
adaBayes, ae, aeCharacteristic, aeDensity, aeExpectation, aeKurtosis, aeMarginal, aeMean, aeMoment, aeSd, aeSkewness, asymptotic_term, bns.test, carma.info-class, CarmaNoise, cce, cce.factor, Class for Quasi Maximum Likelihood Estimation of Point Process Regression Models, cogarch.est.-class, cogarch.est.incr-class, cogarch.info-class, cogarchNoise, CPoint, DataPPR, Diagnostic.Carma, Diagnostic.Cogarch, fitCIR, get.counting.data, gmm, hyavar, IC, info.Map-class, info.PPR, Integral.sde, Integrand, Intensity.PPR, JBtest, lambdaFromData, lasso, LawMethods, limiting.gamma, llag, llag.test, lm.jumptest, LogSPX, lseBayes, mllag, mmfrac, model.parameter-class, mpv, MWK151, noisy.sampling, ntv, param.Integral, param.Map-class, phi.test, poisson.random.sampling, pz.test, qgv, qmle, qmleLevy, rconst, rng, setCarma, setCharacteristic, setCogarch, setData, setFunctional, setHawkes, setIntegral, setLaw, setMap, setModel, setPoisson, setPPR, setSampling, setYuima, simBmllag, simCIR, simFunctional, simulate, snr, spectralcov,
subsampling, toLatex, variable.Integral, wllag, ybook, yuima-class, yuima.ae-class, yuima.carma-class, yuima.carma.qmle-class, yuima.characteristic-class, yuima.cogarch-class, yuima.CP.qmle-class, yuima.data-class, yuima.functional-class, yuima.Hawkes, yuima.Integral-class, yuima.law-class, yuima.Map-class, yuima.model-class, yuima.multimodel-class, yuima.poisson-class, yuima.PPR, yuima.qmleLevy.incr, yuima.sampling-class, yuima.snr-class

adaBayes Adaptive Bayes estimator for the parameters in sde model

Description
The adabayes.mcmc class is a class of the yuima package that extends the mle-class.

Usage
adaBayes(yuima, start, prior, lower, upper, method = "mcmc", iteration = NULL, mcmc, rate = 1, rcpp = TRUE, algorithm = "randomwalk", center = NULL, sd = NULL, rho = NULL, path = FALSE)

Arguments
yuima a ’yuima’ object.
start initial suggestion for parameter values.
prior a list of prior distributions for the parameters specified by ’code’. Currently, dunif(z, min, max), dnorm(z, mean, sd), dbeta(z, shape1, shape2), dgamma(z, shape, rate) are available.
lower a named list for specifying lower bounds of parameters.
upper a named list for specifying upper bounds of parameters.
method "nomcmc" requires package cubature.
iteration number of iterations of the Markov chain Monte Carlo method.
mcmc number of iterations of the Markov chain Monte Carlo method.
rate a thinning parameter. Only the first n^rate observations will be used for inference.
rcpp Logical value. If rcpp = TRUE (default), Rcpp code will be performed. Otherwise, usual R code will be performed.
algorithm If algorithm = "randomwalk" (default), the random-walk Metropolis algorithm will be performed. If algorithm = "MpCN", the Mixed preconditioned Crank-Nicolson algorithm will be performed.
center A list of parameters used to center the MpCN algorithm.
sd A list for specifying the standard deviations of proposal distributions.
path Logical value when method = "mcmc". If path=TRUE, then the sample path for each variable will be included in the MCMC object in the output.
rho A parameter used for the MpCN algorithm.

Details
Calculate the Bayes estimator for stochastic processes by using the quasi-likelihood function. The calculation is performed by the Markov chain Monte Carlo method. Currently, the random-walk Metropolis algorithm and the Mixed preconditioned Crank-Nicolson algorithm are implemented.

Slots
mcmc: is a list of MCMC objects for all estimated parameters.
accept_rate: is a list of acceptance rates for the diffusion and drift parts.
call: is an object of class language.
fullcoef: is an object of class list that contains estimated parameters.
vcov: is an object of class matrix.
coefficients: is an object of class vector that contains estimated parameters.

Note
algorithm = nomcmc is unstable.

Author(s)
<NAME> with YUIMA Project Team

References
<NAME>. (2011). Polynomial type large deviation inequalities and quasi-likelihood analysis for stochastic differential equations. Annals of the Institute of Statistical Mathematics, 63(3), 431-479.
<NAME>., & <NAME>. (2014). Adaptive Bayes type estimators of ergodic diffusion processes from discrete observations. Statistical Inference for Stochastic Processes, 17(2), 181-219.
<NAME>. (2017). Ergodicity of Markov chain Monte Carlo with reversible proposal. Journal of Applied Probability, 54(2).
Examples ## Not run: set.seed(123) b <- c("-theta1*x1+theta2*sin(x2)+50","-theta3*x2+theta4*cos(x1)+25") a <- matrix(c("4+theta5","1","1","2+theta6"),2,2) true = list(theta1 = 0.5, theta2 = 5,theta3 = 0.3, theta4 = 5, theta5 = 1, theta6 = 1) lower = list(theta1=0.1,theta2=0.1,theta3=0, theta4=0.1,theta5=0.1,theta6=0.1) upper = list(theta1=1,theta2=10,theta3=0.9, theta4=10,theta5=10,theta6=10) start = list(theta1=runif(1), theta2=rnorm(1), theta3=rbeta(1,1,1), theta4=rnorm(1), theta5=rgamma(1,1,1), theta6=rexp(1)) yuimamodel <- setModel(drift=b,diffusion=a,state.variable=c("x1", "x2"),solve.variable=c("x1","x2")) yuimasamp <- setSampling(Terminal=50,n=50*10) yuima <- setYuima(model = yuimamodel, sampling = yuimasamp) yuima <- simulate(yuima, xinit = c(100,80), true.parameter = true,sampling = yuimasamp) prior <- list( theta1=list(measure.type="code",df="dunif(z,0,1)"), theta2=list(measure.type="code",df="dnorm(z,0,1)"), theta3=list(measure.type="code",df="dbeta(z,1,1)"), theta4=list(measure.type="code",df="dgamma(z,1,1)"), theta5=list(measure.type="code",df="dnorm(z,0,1)"), theta6=list(measure.type="code",df="dnorm(z,0,1)") ) set.seed(123) mle <- qmle(yuima, start = start, lower = lower, upper = upper, method = "L-BFGS-B",rcpp=TRUE) print(mle@coef) center<-list(theta1=0.5,theta2=5,theta3=0.3,theta4=4,theta5=3,theta6=3) sd<-list(theta1=0.001,theta2=0.001,theta3=0.001,theta4=0.01,theta5=0.5,theta6=0.5) bayes <- adaBayes(yuima, start=start, prior=prior,lower=lower,upper=upper, method="mcmc",mcmc=1000,rate = 1, rcpp = TRUE, algorithm = "randomwalk",center = center,sd=sd, path=TRUE) print(bayes@fullcoef) print(bayes@accept_rate) print(bayes@mcmc$theta1[1:10]) ## End(Not run) ae Asymptotic Expansion Description Asymptotic expansion of uni-dimensional and multi-dimensional diffusion processes. 
Usage
ae(
  model,
  xinit,
  order = 1L,
  true.parameter = list(),
  sampling = NULL,
  eps.var = "eps",
  solver = "rk4",
  verbose = FALSE
)

Arguments
model an object of yuima-class or yuima.model-class.
xinit initial value vector of state variables.
order integer. The asymptotic expansion order. Higher orders lead to better approximations but longer computational times.
true.parameter named list of parameters.
sampling a yuima.sampling-class object.
eps.var character. The perturbation variable.
solver the solver for ordinary differential equations. One of "rk4" (more accurate) or "euler" (faster).
verbose logical. Print on progress? Default FALSE.

Details
If sampling is not provided, then model must be an object of yuima-class with non-empty sampling. If eps.var does not appear in the model specification, then it is internally added in front of the diffusion matrix to apply the asymptotic expansion scheme.

Value
An object of yuima.ae-class

Author(s)
<NAME> <<EMAIL>>

Examples
## Not run:
# model
gbm <- setModel(drift = 'mu*x', diffusion = 'sigma*x', solve.variable = 'x')

# settings
xinit <- 100
par <- list(mu = 0.01, sigma = 0.2)
sampling <- setSampling(Initial = 0, Terminal = 1, n = 1000)

# asymptotic expansion
approx <- ae(model = gbm, sampling = sampling, order = 4, true.parameter = par, xinit = xinit)

# exact density
x <- seq(50, 200, by = 0.1)
exact <- dlnorm(x = x, meanlog = log(xinit)+(par$mu-0.5*par$sigma^2)*1, sdlog = par$sigma*sqrt(1))

# compare
plot(x, exact, type = 'l', ylab = "Density")
lines(x, aeDensity(x = x, ae = approx, order = 1), col = 2)
lines(x, aeDensity(x = x, ae = approx, order = 2), col = 3)
lines(x, aeDensity(x = x, ae = approx, order = 3), col = 4)
lines(x, aeDensity(x = x, ae = approx, order = 4), col = 5)
## End(Not run)

aeCharacteristic Asymptotic Expansion - Characteristic Function

Description
Asymptotic Expansion - Characteristic Function

Usage
aeCharacteristic(..., ae, eps = 1, order = NULL)

Arguments
...
named argument, data.frame, list, or environment specifying the grid to evaluate the characteristic function. See examples.
ae an object of class yuima.ae-class.
eps numeric. The intensity of the perturbation.
order integer. The expansion order. If NULL (default), it uses the maximum order used in ae.

Value
Characteristic function evaluated on the given grid.

Examples
## Not run:
# model
gbm <- setModel(drift = 'mu*x', diffusion = 'sigma*x', solve.variable = 'x')

# settings
xinit <- 100
par <- list(mu = 0.01, sigma = 0.2)
sampling <- setSampling(Initial = 0, Terminal = 1, n = 1000)

# asymptotic expansion
approx <- ae(model = gbm, sampling = sampling, order = 4, true.parameter = par, xinit = xinit)

# The following are all equivalent methods to specify the grid via ....
# Notice that the character 'u1' corresponds to the 'u.var' of the ae object: approx@u.var

# 1) named argument
u1 <- seq(0, 1, by = 0.1)
psi <- aeCharacteristic(u1 = u1, ae = approx, order = 4)

# 2) data frame
df <- data.frame(u1 = seq(0, 1, by = 0.1))
psi <- aeCharacteristic(df, ae = approx, order = 4)

# 3) environment
env <- new.env()
env$u1 <- seq(0, 1, by = 0.1)
psi <- aeCharacteristic(env, ae = approx, order = 4)

# 4) list
lst <- list(u1 = seq(0, 1, by = 0.1))
psi <- aeCharacteristic(lst, ae = approx, order = 4)
## End(Not run)

aeDensity Asymptotic Expansion - Density

Description
Asymptotic Expansion - Density

Usage
aeDensity(..., ae, eps = 1, order = NULL)

Arguments
... named argument, data.frame, list, or environment specifying the grid to evaluate the density. See examples.
ae an object of class yuima.ae-class.
eps numeric. The intensity of the perturbation.
order integer. The expansion order. If NULL (default), it uses the maximum order used in ae.

Value
Probability density function evaluated on the given grid.
Examples ## Not run: # model gbm <- setModel(drift = 'mu*x', diffusion = 'sigma*x', solve.variable = 'x') # settings xinit <- 100 par <- list(mu = 0.01, sigma = 0.2) sampling <- setSampling(Initial = 0, Terminal = 1, n = 1000) # asymptotic expansion approx <- ae(model = gbm, sampling = sampling, order = 4, true.parameter = par, xinit = xinit) # The following are all equivalent methods to specify the grid via .... # Notice that the character 'x' corresponds to the solve.variable of the yuima model. # 1) named argument x <- seq(50, 200, by = 0.1) density <- aeDensity(x = x, ae = approx, order = 4) # 2) data frame df <- data.frame(x = seq(50, 200, by = 0.1)) density <- aeDensity(df, ae = approx, order = 4) # 3) environment env <- new.env() env$x <- seq(50, 200, by = 0.1) density <- aeDensity(env, ae = approx, order = 4) # 4) list lst <- list(x = seq(50, 200, by = 0.1)) density <- aeDensity(lst, ae = approx, order = 4) # exact density exact <- dlnorm(x = x, meanlog = log(xinit)+(par$mu-0.5*par$sigma^2)*1, sdlog = par$sigma*sqrt(1)) # compare plot(x = exact, y = density, xlab = "Exact", ylab = "Approximated") ## End(Not run) aeExpectation Asymptotic Expansion - Functionals Description Compute the expected value of functionals. Usage aeExpectation(f, bounds, ae, eps = 1, order = NULL, ...) Arguments f character. The functional. bounds named list of integration bounds in the form list(x = c(xmin, xmax), y = c(ymin, ymax), ...) ae an object of class yuima.ae-class. eps numeric. The intensity of the perturbation. order integer. The expansion order. If NULL (default), it uses the maximum order used in ae. ... additional arguments passed to cubintegrate. Value return value of cubintegrate. The expectation of the functional provided. 
Examples ## Not run: # model gbm <- setModel(drift = 'mu*x', diffusion = 'sigma*x', solve.variable = 'x') # settings xinit <- 100 par <- list(mu = 0.01, sigma = 0.2) sampling <- setSampling(Initial = 0, Terminal = 1, n = 1000) # asymptotic expansion approx <- ae(model = gbm, sampling = sampling, order = 4, true.parameter = par, xinit = xinit) # compute the mean via integration aeExpectation(f = 'x', bounds = list(x = c(0,1000)), ae = approx) # compare with the mean computed by differentiation of the characteristic function aeMean(approx) ## End(Not run) aeKurtosis Asymptotic Expansion - Kurtosis Description Asymptotic Expansion - Kurtosis Usage aeKurtosis(ae, eps = 1, order = NULL) Arguments ae an object of class yuima.ae-class. eps numeric. The intensity of the perturbation. order integer. The expansion order. If NULL (default), it uses the maximum order used in ae. Value numeric. Examples ## Not run: # model gbm <- setModel(drift = 'mu*x', diffusion = 'sigma*x', solve.variable = 'x') # settings xinit <- 100 par <- list(mu = 0.01, sigma = 0.2) sampling <- setSampling(Initial = 0, Terminal = 1, n = 1000) # asymptotic expansion approx <- ae(model = gbm, sampling = sampling, order = 4, true.parameter = par, xinit = xinit) # expansion order max aeKurtosis(ae = approx) # expansion order 1 aeKurtosis(ae = approx, order = 1) ## End(Not run) aeMarginal Asymptotic Expansion - Marginals Description Asymptotic Expansion - Marginals Usage aeMarginal(ae, var) Arguments ae an object of class yuima.ae-class. var variables of the marginal distribution to compute. 
Value An object of yuima.ae-class Examples ## Not run: # multidimensional model gbm <- setModel(drift = c('mu*x1','mu*x2'), diffusion = matrix(c('sigma1*x1',0,0,'sigma2*x2'), nrow = 2), solve.variable = c('x1','x2')) # settings xinit <- c(100, 100) par <- list(mu = 0.01, sigma1 = 0.2, sigma2 = 0.1) sampling <- setSampling(Initial = 0, Terminal = 1, n = 1000) # asymptotic expansion approx <- ae(model = gbm, sampling = sampling, order = 3, true.parameter = par, xinit = xinit) # extract marginals margin1 <- aeMarginal(ae = approx, var = "x1") margin2 <- aeMarginal(ae = approx, var = "x2") # compare with exact solution for marginal 1 x1 <- seq(50, 200, by = 0.1) exact <- dlnorm(x = x1, meanlog = log(xinit[1])+(par$mu-0.5*par$sigma1^2), sdlog = par$sigma1) plot(x1, exact, type = 'p', ylab = "Density") lines(x1, aeDensity(x1 = x1, ae = margin1, order = 3), col = 2) # compare with exact solution for marginal 2 x2 <- seq(50, 200, by = 0.1) exact <- dlnorm(x = x2, meanlog = log(xinit[2])+(par$mu-0.5*par$sigma2^2), sdlog = par$sigma2) plot(x2, exact, type = 'p', ylab = "Density") lines(x2, aeDensity(x2 = x2, ae = margin2, order = 3), col = 2) ## End(Not run) aeMean Asymptotic Expansion - Mean Description Asymptotic Expansion - Mean Usage aeMean(ae, eps = 1, order = NULL) Arguments ae an object of class yuima.ae-class. eps numeric. The intensity of the perturbation. order integer. The expansion order. If NULL (default), it uses the maximum order used in ae. Value numeric. 
Examples ## Not run: # model gbm <- setModel(drift = 'mu*x', diffusion = 'sigma*x', solve.variable = 'x') # settings xinit <- 100 par <- list(mu = 0.01, sigma = 0.2) sampling <- setSampling(Initial = 0, Terminal = 1, n = 1000) # asymptotic expansion approx <- ae(model = gbm, sampling = sampling, order = 4, true.parameter = par, xinit = xinit) # expansion order max aeMean(ae = approx) # expansion order 1 aeMean(ae = approx, order = 1) ## End(Not run) aeMoment Asymptotic Expansion - Moments Description Asymptotic Expansion - Moments Usage aeMoment(ae, m = 1, eps = 1, order = NULL) Arguments ae an object of class yuima.ae-class. m integer. The moment order. In case of multidimensional processes, it is possible to compute cross-moments by providing a vector of the same length as the state variables. eps numeric. The intensity of the perturbation. order integer. The expansion order. If NULL (default), it uses the maximum order used in ae. Value numeric. Examples ## Not run: # model gbm <- setModel(drift = 'mu*x', diffusion = 'sigma*x', solve.variable = 'x') # settings xinit <- 100 par <- list(mu = 0.01, sigma = 0.2) sampling <- setSampling(Initial = 0, Terminal = 1, n = 1000) # asymptotic expansion approx <- ae(model = gbm, sampling = sampling, order = 4, true.parameter = par, xinit = xinit) # second moment, expansion order max aeMoment(ae = approx, m = 2) # second moment, expansion order 3 aeMoment(ae = approx, m = 2, order = 3) # second moment, expansion order 2 aeMoment(ae = approx, m = 2, order = 2) # second moment, expansion order 1 aeMoment(ae = approx, m = 2, order = 1) ## End(Not run) aeSd Asymptotic Expansion - Standard Deviation Description Asymptotic Expansion - Standard Deviation Usage aeSd(ae, eps = 1, order = NULL) Arguments ae an object of class yuima.ae-class. eps numeric. The intensity of the perturbation. order integer. The expansion order. If NULL (default), it uses the maximum order used in ae. Value numeric. 
Examples
## Not run:
# model
gbm <- setModel(drift = 'mu*x', diffusion = 'sigma*x', solve.variable = 'x')

# settings
xinit <- 100
par <- list(mu = 0.01, sigma = 0.2)
sampling <- setSampling(Initial = 0, Terminal = 1, n = 1000)

# asymptotic expansion
approx <- ae(model = gbm, sampling = sampling, order = 4, true.parameter = par, xinit = xinit)

# expansion order max
aeSkewness(ae = approx)

# expansion order 1
aeSkewness(ae = approx, order = 1)
## End(Not run)

asymptotic_term asymptotic expansion of the expected value of the functional

Description
Calculate the first and second terms of the asymptotic expansion of the functional mean.

Usage
asymptotic_term(yuima, block=100, rho, g, expand.var="e")

Arguments
yuima a yuima object containing model and functional.
block the number of trapezoids for integrals.
rho specify discounting factor in mean integral.
g arbitrary measurable function for mean integral.
expand.var default expand.var="e".

Details
Calculate the first and second terms of the asymptotic expansion of the expected value of the functional associated with an SDE. The returned value d0 + epsilon * d1 is an approximation of the expected value.
Value
terms list of 1st and 2nd asymptotic terms, terms$d0 and terms$d1.

Note
we need to fix this routine.

Author(s)
YUIMA Project Team

Examples
## Not run:
# to the Black-Scholes economy:
# dXt^e = Xt^e * dt + e * Xt^e * dWt
diff.matrix <- "x*e"
model <- setModel(drift = "x", diffusion = diff.matrix)
# call option is evaluated by averaging
# max{ (1/T)*int_0^T Xt^e dt, 0}, the first argument is the functional of interest:
Terminal <- 1
xinit <- c(1)
f <- list( c(expression(x/Terminal)), c(expression(0)))
F <- 0
division <- 1000
e <- .3
yuima <- setYuima(model = model, sampling = setSampling(Terminal=Terminal, n=division))
yuima <- setFunctional( yuima, f=f,F=F, xinit=xinit,e=e)
# asymptotic expansion
rho <- expression(0)
F0 <- F0(yuima)
get_ge <- function(x,epsilon,K,F0){
  tmp <- (F0 - K) + (epsilon * x)
  tmp[(epsilon * x) < (K-F0)] <- 0
  return( tmp )
}
g <- function(x) get_ge(x,epsilon=e,K=1,F0=F0)
set.seed(123)
asymp <- asymptotic_term(yuima, block=10, rho,g)
asymp
sum(asymp$d0 + e * asymp$d1)

### An example of multivariate case: Heston model
## a <- 1;C <- 1;d <- 10;R<-.1
## diff.matrix <- matrix( c("x1*sqrt(x2)*e", "e*R*sqrt(x2)",0,"sqrt(x2*(1-R^2))*e"), 2,2)
## model <- setModel(drift = c("a*x1","C*(10-x2)"),
## diffusion = diff.matrix,solve.variable=c("x1","x2"),state.variable=c("x1","x2"))
## call option is evaluated by averaging
## max{ (1/T)*int_0^T Xt^e dt, 0}, the first argument is the functional of interest:
##
## Terminal <- 1
## xinit <- c(1,1)
##
## f <- list( c(expression(0), expression(0)),
## c(expression(0), expression(0)) , c(expression(0), expression(0)) )
## F <- expression(x1,x2)
##
## division <- 1000
## e <- .3
##
## yuima <- setYuima(model = model, sampling = setSampling(Terminal=Terminal, n=division))
## yuima <- setFunctional( yuima, f=f,F=F, xinit=xinit,e=e)
##
## rho <- expression(x1)
## F0 <- F0(yuima)
## get_ge <- function(x){
## return( max(x[1],0))
## }
## g <- function(x) get_ge(x)
## set.seed(123)
## asymp <-
asymptotic_term(yuima, block=10, rho,g)
## sum(asymp$d0 + e * asymp$d1)
## End(Not run)

bns.test Barndorff-Nielsen and Shephard’s Test for the Presence of Jumps Using Bipower Variation

Description
Tests the presence of jumps using the statistic proposed in Barndorff-Nielsen and Shephard (2004, 2006) for each component.

Usage
bns.test(yuima, r = rep(1, 4), type = "standard", adj = TRUE)

Arguments
yuima an object of yuima-class or yuima.data-class.
r a vector of non-negative numbers or a list of vectors of non-negative numbers. Theoretically, it is necessary that sum(r)=4 and max(r)<2.
type type of the test statistic to use. standard is default.
adj logical; if TRUE, the maximum adjustment suggested in Barndorff-Nielsen and Shephard (2004) is applied to the test statistic when type is equal to either “log” or “ratio”.

Details
For the i-th component, the test statistic is equal to the i-th component of
sqrt(n)*(mpv(yuima,2)-mpv(yuima,c(1,1)))/sqrt(vartheta*mpv(yuima,r))
when type="standard",
sqrt(n)*log(mpv(yuima,2)/mpv(yuima,c(1,1)))/sqrt(vartheta*mpv(yuima,r)/mpv(yuima,c(1,1))^2)
when type="log" and
sqrt(n)*(1-mpv(yuima,c(1,1))/mpv(yuima,2))/sqrt(vartheta*mpv(yuima,r)/mpv(yuima,c(1,1))^2)
when type="ratio". Here, n is equal to the length of the i-th component of the zoo.data of yuima minus 1 and vartheta is pi^2/4+pi-5. When adj=TRUE, (mpv(yuima,r)/mpv(yuima,c(1,1))^2)[i] is replaced with 1 if it is less than 1.

Value
A list with the same length as the zoo.data of yuima. Each component of the list has class “htest” and contains the following components:
statistic the value of the test statistic of the corresponding component of the zoo.data of yuima.
p.value an approximate p-value for the test of the corresponding component.
method the character string “Barndorff-Nielsen and Shephard jump test”.
data.name the character string “xi”, where i is the number of the component.

Note
Theoretically, this test may be invalid if sampling is irregular.
Author(s) <NAME> with YUIMA Project Team References Barndorff-Nielsen, <NAME>. and <NAME>. (2004) Power and bipower variation with stochastic volatility and jumps, Journal of Financial Econometrics, 2, no. 1, 1–37. Barndorff-Nielsen, <NAME>. and <NAME>. (2006) Econometrics of testing for jumps in financial economics using bipower variation, Journal of Financial Econometrics, 4, no. 1, 1–30. <NAME>. and <NAME>. (2005) The relative contribution of jumps to total price variance, Journal of Financial Econometrics, 3, no. 4, 456–499. See Also lm.jumptest, mpv, minrv.test, medrv.test, pz.test Examples set.seed(123) # One-dimensional case ## Model: dXt=t*dWt+t*dzt, ## where zt is a compound Poisson process with intensity 5 and jump sizes distribution N(0,0.1). model <- setModel(drift=0,diffusion="t",jump.coeff="t",measure.type="CP", measure=list(intensity=5,df=list("dnorm(z,0,sqrt(0.1))")), time.variable="t") yuima.samp <- setSampling(Terminal = 1, n = 390) yuima <- setYuima(model = model, sampling = yuima.samp) yuima <- simulate(yuima) plot(yuima) # The path seems to involve some jumps bns.test(yuima) # standard type bns.test(yuima,type="log") # log type bns.test(yuima,type="ratio") # ratio type # Multi-dimensional case ## Model: dXkt=t*dWk_t (k=1,2,3) (no jump case). diff.matrix <- diag(3) diag(diff.matrix) <- c("t","t","t") model <- setModel(drift=c(0,0,0),diffusion=diff.matrix,time.variable="t", solve.variable=c("x1","x2","x3")) yuima.samp <- setSampling(Terminal = 1, n = 390) yuima <- setYuima(model = model, sampling = yuima.samp) yuima <- simulate(yuima) plot(yuima) bns.test(yuima) carma.info-class Class for information about CARMA(p,q) model Description The carma.info-class is a class of the yuima package. Details The carma.info-class object cannot be directly specified by the user but it is constructed when the yuima.carma-class object is constructed via setCarma. Slots p: Number of autoregressive coefficients. q: Number of moving average coefficients. 
loc.par: Label of location coefficient.
scale.par: Label of scale coefficient.
ar.par: Label of autoregressive coefficients.
ma.par: Label of moving average coefficients.
lin.par: Label of linear coefficients.
Carma.var: Label of the observed process.
Latent.var: Label of the unobserved process.
XinExpr: Logical variable. If XinExpr=FALSE, the starting condition of Latent.var is zero; otherwise each component of Latent.var has a parameter as a starting point.

Author(s)
The YUIMA Project Team

CarmaNoise Estimation for the underlying Levy in a carma model

Description
Retrieve the increments of the underlying Levy for the carma(p,q) process using the approach developed in Brockwell et al. (2011).

Usage
CarmaNoise(yuima, param, data=NULL, NoNeg.Noise=FALSE)

Arguments
yuima a yuima object or an object of yuima.carma-class.
param list of parameters for the carma.
data an object of class yuima.data-class that contains the observations available at uniformly spaced times. If data=NULL, the default, the ’CarmaNoise’ uses the data in an object of yuima.data-class.
NoNeg.Noise Estimate a non-negative Levy-driven Carma process. By default NoNeg.Noise=FALSE.

Value
incr.Levy a numeric object that contains the estimated increments.

Note
The function qmle uses the function CarmaNoise for estimation of the underlying Levy in the carma model.

Author(s)
The YUIMA Project Team

References
<NAME>., <NAME>. and <NAME>. (2011) Estimation for Non-Negative Levy-Driven CARMA Process, Journal of Business And Economic Statistics, 29-2, 250-259.

Examples
## Not run:
# Ex.1: Carma(p=3, q=0) process driven by a Brownian motion.
mod0<-setCarma(p=3,q=0)
# We fix the autoregressive and moving average parameters
# to ensure the existence of a second order stationary solution for the process.
true.parm0 <-list(a1=4,a2=4.75,a3=1.5,b0=1)
# We simulate a trajectory of the Carma model.
numb.sim<-1000
samp0<-setSampling(Terminal=100,n=numb.sim)
set.seed(100)
incr.W<-matrix(rnorm(n=numb.sim,mean=0,sd=sqrt(100/numb.sim)),1,numb.sim)
sim0<-simulate(mod0, true.parameter=true.parm0, sampling=samp0, increment.W=incr.W)
# Applying the CarmaNoise
system.time(
  inc.Levy0<-CarmaNoise(sim0,true.parm0)
)
# We compare the original with the estimated noise increments
par(mfrow=c(1,2))
plot(t(incr.W)[1:998],type="l", ylab="",xlab="time")
title(main="True Brownian Motion",font.main="1")
plot(inc.Levy0,type="l", main="Filtered Brownian Motion",font.main="1",ylab="",xlab="time")

# Ex.2: carma(2,1) driven by a compound Poisson
# where the jump size is normally distributed and
# the lambda is equal to 1.
mod1<-setCarma(p=2, q=1,
  measure=list(intensity="Lamb",df=list("dnorm(z, 0, 1)")),
  measure.type="CP")
true.parm1 <-list(a1=1.39631, a2=0.05029, b0=1,b1=2, Lamb=1)
# We generate a sample path.
samp1<-setSampling(Terminal=100,n=200)
set.seed(123)
sim1<-simulate(mod1, true.parameter=true.parm1, sampling=samp1)
# We estimate the parameter using qmle.
carmaopt1 <- qmle(sim1, start=true.parm1)
summary(carmaopt1)
# Internally qmle uses CarmaNoise. The result is in
plot(carmaopt1)

# Ex.3: Carma(p=2,q=1) with scale and location parameters
# driven by a Compound Poisson
# with jump size normally distributed.
mod2<-setCarma(p=2, q=1, loc.par="mu", scale.par="sig",
  measure=list(intensity="Lamb",df=list("dnorm(z, 0, 1)")),
  measure.type="CP")
true.parm2 <-list(a1=1.39631, a2=0.05029, b0=1, b1=2, Lamb=1, mu=0.5, sig=0.23)
# We simulate the sample path
set.seed(123)
sim2<-simulate(mod2, true.parameter=true.parm2, sampling=samp1)
# We estimate the Carma and we plot the underlying noise.
carmaopt2 <- qmle(sim2, start=true.parm2)
summary(carmaopt2)
# Increments estimated by CarmaNoise
plot(carmaopt2)
## End(Not run)

cce Nonsynchronous Cumulative Covariance Estimator

Description
This function estimates the covariance between two Ito processes when they are observed at discrete times possibly nonsynchronously. It can apply to irregularly sampled one-dimensional data as a special case.

Usage
cce(x, method="HY", theta, kn, g=function(x)min(x,1-x),
  refreshing = TRUE, cwise = TRUE,
  delta = 0, adj = TRUE, K, c.two, J = 1,
  c.multi, kernel, H, c.RK, eta = 3/5, m = 2, ftregion = 0,
  vol.init = NA, covol.init = NA, nvar.init = NA, ncov.init = NA,
  mn, alpha = 0.4, frequency = 300, avg = TRUE,
  threshold, utime, psd = FALSE)

Arguments
x an object of yuima-class or yuima.data-class.
method the method to be used. See ‘Details’.
theta a numeric vector or matrix. If it is a matrix, each of its components indicates the tuning parameter which determines the pre-averaging window lengths kn to be used for estimating the corresponding component. If it is a numeric vector, it is converted to a matrix as (C+t(C))/2, where C=matrix(theta,d,d) and d=dim(x). The default value is 0.15 for the method "PHY" or "PTHY" following Christensen et al. (2013), while it is 1 for the method "MRC" following Christensen et al. (2010).
kn an integer-valued vector or matrix indicating the pre-averaging window length(s). For the methods "PHY" or "PTHY", see ‘Details’ for the default value. For the method "MRC", the default value is ceiling(theta*n^(1+delta)), where n is the number of the refresh times associated with the data minus 1.
g a function indicating the weight function to be used. The default value is the Bartlett window: function(x)min(x,1-x).
refreshing logical. If TRUE, the data is pre-synchronized by the next-tick interpolation in the refresh times.
cwise logical. If TRUE, the estimator is calculated componentwise.
delta a non-negative number indicating the order of the pre-averaging window length(s) kn.
adj logical. If TRUE, a finite-sample adjustment is performed. For the method "MRC", see Christensen et al. (2010) for details. For the method "TSCV", see Zhang (2011) and Zhang et al. (2005) for details.
K a positive integer indicating the large time-scale parameter. The default value is ceiling(c.two*n^(2/3)), where n is the number of the refresh times associated with the data minus 1.
c.two a positive number indicating the tuning parameter which determines the scale of the large time-scale parameter K. The default value is the average of the numeric vector each of whose components is the roughly estimated optimal value in the sense of the minimizer of the theoretical asymptotic variance of the estimator of the corresponding diagonal component. The theoretical asymptotic variance is considered in the standard case and given by Eq.(63) of Zhang et al. (2005).
J a positive integer indicating the small time-scale parameter.
c.multi a numeric vector or matrix. If it is a matrix, each of its components indicates the tuning parameter which determines (the scale of) the number of the time scales to be used for estimating the corresponding component. If it is a numeric vector, it is converted to a matrix as (C+t(C))/2, where C=matrix(c.multi,d,d) and d=dim(x). The default value is the numeric vector each of whose components is the roughly estimated optimal value in the sense of minimizing the theoretical asymptotic variance of the estimator of the corresponding diagonal component. The theoretical asymptotic variance is considered in the standard case and given by Eq.(37) of Zhang (2006).
kernel a function indicating the kernel function to be used. The default value is the Parzen kernel, which is recommended in Barndorff-Nielsen et al. (2009, 2011).
H a positive number indicating the bandwidth parameter.
The default value is c.RK*n^eta, where n is the number of the refresh times associated with the data minus 1.

c.RK        a positive number indicating the tuning parameter which determines the scale of the bandwidth parameter H. The default value is the average of the numeric vector each of whose components is the roughly estimated optimal value in the sense of minimizing the theoretical asymptotic variance of the estimator of the corresponding diagonal component. The theoretical asymptotic variance is considered in the standard case and given in Barndorff-Nielsen et al. (2009, 2011).

eta         a positive number indicating the tuning parameter which determines the order of the bandwidth parameter H.

m           a positive integer indicating the number of the end points to be jittered.

ftregion    a non-negative number indicating the length of the flat-top region. ftregion=0 (the default) means that a non-flat-top realized kernel studied in Barndorff-Nielsen et al. (2011) is used. ftregion=1/H means that a flat-top realized kernel studied in Barndorff-Nielsen et al. (2008) is used. See Varneskov (2015) for other values.

vol.init    a numeric vector each of whose components indicates the initial value to be used to estimate the integrated volatility of the corresponding component, which is passed to the optimizer.

covol.init  a numeric matrix each of whose columns indicates the initial value to be used to estimate the integrated covariance of the corresponding component, which is passed to the optimizer.

nvar.init   a numeric vector each of whose components indicates the initial value to be used to estimate the variance of noise of the corresponding component, which is passed to the optimizer.

ncov.init   a numeric matrix each of whose columns indicates the initial value to be used to estimate the covariance of noise of the corresponding component, which is passed to the optimizer.

mn          a positive integer indicating the number of terms to be used for calculating the SIML estimator.
The default value is ceiling(n^alpha), where n is the number of the refresh times associated with the data minus 1.

alpha       a positive number indicating the order of mn.

frequency   a positive integer indicating the frequency (seconds) of the calendar time sampling to be used.

avg         logical. If TRUE, the averaged subsampling estimator is calculated. Otherwise the simple sparsely subsampled estimator is calculated.

threshold   a numeric vector or list indicating the threshold parameter(s). Each of its components indicates the threshold parameter or process to be used for estimating the corresponding component. If it is a numeric vector, the elements in threshold are recycled if there are too few elements in threshold. The default value is determined following Koike (2014) (for the method "THY") and Koike (2015) (for the method "PTHY").

utime       a positive number indicating what seconds the interval [0,1] corresponds to. The default value is the difference between the maximum and the minimum of the sampling times, multiplied by 23,400. Here, 23,400 seconds correspond to 6.5 hours, hence if the data is sampled on the interval [0,1], then the sampling interval is regarded as 6.5 hours.

psd         logical. If TRUE, the estimated covariance matrix C is converted to (C%*%C)^(1/2) for ensuring the positive semi-definiteness. In this case the absolute values of the estimated correlations are always ensured to be less than or equal to 1.

Details

This function is a method for objects of yuima.data-class and yuima-class. It extracts the data slot when applied to an object of yuima-class.
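The psd conversion described above can be sketched in a few lines. This is an illustrative stand-alone snippet, not yuima's internal code, and the helper name make_psd is ours: it replaces an estimate C by (C %*% C)^(1/2), computed via the eigendecomposition of C %*% C.

```r
## Sketch of the psd correction (helper name make_psd is ad hoc, not yuima's):
## for symmetric C with eigendecomposition C = V L V', C %*% C = V L^2 V',
## so (C %*% C)^(1/2) = V |L| V', which is positive semi-definite.
make_psd <- function(C) {
  ed <- eigen(C %*% C, symmetric = TRUE)
  lam <- pmax(ed$values, 0)  # guard against small negative round-off
  ed$vectors %*% diag(sqrt(lam), nrow(C)) %*% t(ed$vectors)
}

C <- matrix(c(1, 1.2, 1.2, 1), 2, 2)  # "correlation" of 1.2: not psd
round(cov2cor(make_psd(C)), 3)        # off-diagonal entries now bounded by 1
```

The projection only rescales the negative part of the spectrum, which is why the implied correlations end up within [-1, 1].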
Typical usages are

cce(x,psd=FALSE)
cce(x,method="PHY",theta,kn,g,refreshing=TRUE,cwise=TRUE,psd=FALSE)
cce(x,method="MRC",theta,kn,g,delta=0,avg=TRUE,psd=FALSE)
cce(x,method="TSCV",K,c.two,J=1,adj=TRUE,utime,psd=FALSE)
cce(x,method="GME",c.multi,utime,psd=FALSE)
cce(x,method="RK",kernel,H,c.RK,eta=3/5,m=2,ftregion=0,utime,psd=FALSE)
cce(x,method="QMLE",vol.init=NULL,covol.init=NULL,
    nvar.init=NULL,ncov.init=NULL,psd=FALSE)
cce(x,method="SIML",mn,alpha=0.4,psd=FALSE)
cce(x,method="THY",threshold,psd=FALSE)
cce(x,method="PTHY",theta,kn,g,threshold,refreshing=TRUE,cwise=TRUE,psd=FALSE)
cce(x,method="SRC",frequency=300,avg=TRUE,utime,psd=FALSE)
cce(x,method="SBPC",frequency=300,avg=TRUE,utime,psd=FALSE)

The default method is method "HY", which is an implementation of the Hayashi-Yoshida estimator proposed in Hayashi and Yoshida (2005).

Method "PHY" is an implementation of the Pre-averaged Hayashi-Yoshida estimator proposed in Christensen et al. (2010).

Method "MRC" is an implementation of the Modulated Realized Covariance based on refresh time sampling proposed in Christensen et al. (2010).

Method "TSCV" is an implementation of the previous tick Two Scales realized CoVariance based on refresh time sampling proposed in Zhang (2011).

Method "GME" is an implementation of the Generalized Multiscale Estimator proposed in Bibinger (2011).

Method "RK" is an implementation of the multivariate Realized Kernel based on refresh time sampling proposed in Barndorff-Nielsen et al. (2011).

Method "QMLE" is an implementation of the nonparametric Quasi Maximum Likelihood Estimator proposed in Ait-Sahalia et al. (2010).

Method "SIML" is an implementation of the Separating Information Maximum Likelihood estimator proposed in Kunitomo and Sato (2013) with the basis of refresh time sampling.

Method "THY" is an implementation of the Truncated Hayashi-Yoshida estimator proposed in Mancini and Gobbi (2012).
Method "PTHY" is an implementation of the Pre-averaged Truncated Hayashi-Yoshida estimator, which is a thresholding version of the pre-averaged Hayashi-Yoshida estimator.

Method "SRC" is an implementation of the calendar time Subsampled Realized Covariance.

Method "SBPC" is an implementation of the calendar time Subsampled realized BiPower Covariation.

The rough estimation procedures for selecting the default values of the tuning parameters are based on those in Barndorff-Nielsen et al. (2009).

For the methods "PHY" or "PTHY", the default value of kn changes depending on the values of refreshing and cwise. If both refreshing and cwise are TRUE (the default), the default value of kn is given by the matrix ceiling(theta*N), where N is a matrix whose diagonal components are identical with the vector length(x)-1 and whose (i, j)-th component is identical with the number of the refresh times associated with the i-th and j-th components of x minus 1. If refreshing is TRUE while cwise is FALSE, the default value of kn is given by ceiling(mean(theta)*sqrt(n)), where n is the number of the refresh times associated with the data minus 1. If refreshing is FALSE while cwise is TRUE, the default value of kn is given by the matrix ceiling(theta*N0), where N0 is a matrix whose diagonal components are identical with the vector length(x)-1 and whose (i, j)-th component is identical with (length(x)[i]-1)+(length(x)[j]-1). If both refreshing and cwise are FALSE, the default value of kn is given by ceiling(mean(theta)*sqrt(sum(length(x)-1))) (following Christensen et al. (2013)).

For the method "QMLE", the optimization of the quasi-likelihood function is implemented via arima0, using the fact that it can be seen as the quasi-likelihood of an MA(1) model: see Hansen et al. (2008) for details.

Value

A list with components:

covmat      the estimated covariance matrix
cormat      the estimated correlation matrix

Note

The example shows the central limit theorem for the nonsynchronous covariance estimator.
Estimation of the asymptotic variance can be implemented by hyavar. The second-order correction will be provided in a future version of the package.

Author(s)

<NAME> with YUIMA Project Team

References

Ait-Sahalia, Y., Fan, J. and Xiu, D. (2010) High-frequency covariance estimates with noisy and asynchronous financial data, Journal of the American Statistical Association, 105, no. 492, 1504–1517.

Barndorff-Nielsen, O. E., Hansen, P. R., Lunde, A. and Shephard, N. (2008) Designing realised kernels to measure the ex-post variation of equity prices in the presence of noise, Econometrica, 76, no. 6, 1481–1536.

Barndorff-Nielsen, O. E., Hansen, P. R., Lunde, A. and Shephard, N. (2009) Realized kernels in practice: trades and quotes, Econometrics Journal, 12, C1–C32.

Barndorff-Nielsen, O. E., Hansen, P. R., Lunde, A. and Shephard, N. (2011) Multivariate realised kernels: Consistent positive semi-definite estimators of the covariation of equity prices with noise and non-synchronous trading, Journal of Econometrics, 162, 149–169.

Bibinger, M. (2011) Efficient covariance estimation for asynchronous noisy high-frequency data, Scandinavian Journal of Statistics, 38, 23–45.

Bibinger, M. (2012) An estimator for the quadratic covariation of asynchronously observed Ito processes with noise: asymptotic distribution theory, Stochastic Processes and their Applications, 122, 2411–2453.

Christensen, K., Kinnebrock, S. and Podolskij, M. (2010) Pre-averaging estimators of the ex-post covariance matrix in noisy diffusion models with non-synchronous data, Journal of Econometrics, 159, 116–133.

Christensen, K., Podolskij, M. and Vetter, M. (2013) On covariation estimation for multivariate continuous Ito semimartingales with noise in non-synchronous observation schemes, Journal of Multivariate Analysis, 120, 59–84.

Hansen, P. R., Large, J. and Lunde, A. (2008) Moving average-based estimators of integrated variance, Econometric Reviews, 27, 79–111.

Hayashi, T. and Yoshida, N. (2005) On covariance estimation of non-synchronously observed diffusion processes, Bernoulli, 11, no. 2, 359–379.

Hayashi, T. and Yoshida, N.
(2008) Asymptotic normality of a covariance estimator for nonsynchronously observed diffusion processes, Annals of the Institute of Statistical Mathematics, 60, no. 2, 367–406.

Koike, Y. (2016) Estimation of integrated covariances in the simultaneous presence of nonsynchronicity, microstructure noise and jumps, Econometric Theory, 32, 533–611.

Koike, Y. (2014) An estimator for the cumulative co-volatility of asynchronously observed semimartingales with jumps, Scandinavian Journal of Statistics, 41, 460–481.

Kunitomo, N. and Sato, S. (2013) Separating information maximum likelihood estimation of realized volatility and covariance with micro-market noise, North American Journal of Economics and Finance, 26, 282–309.

Mancini, C. and Gobbi, F. (2012) Identifying the Brownian covariation from the co-jumps given discrete observations, Econometric Theory, 28, 249–273.

Varneskov, R. T. (2016) Flat-top realized kernel estimation of quadratic covariation with non-synchronous and noisy asset prices, Journal of Business & Economic Statistics, 34, no. 1, 1–22.

Zhang, L. (2006) Efficient estimation of stochastic volatility using noisy observations: a multi-scale approach, Bernoulli, 12, no. 6, 1019–1043.

Zhang, L. (2011) Estimating covariation: Epps effect, microstructure noise, Journal of Econometrics, 160, 33–47.

Zhang, L., Mykland, P. A. and Ait-Sahalia, Y. (2005) A tale of two time scales: Determining integrated volatility with noisy high-frequency data, Journal of the American Statistical Association, 100, no. 472, 1394–1411.
See Also

setModel, setData, hyavar, lmm, cce.factor

Examples

## Not run:
## Set a model
diff.coef.1 <- function(t, x1 = 0, x2 = 0) sqrt(1+t)
diff.coef.2 <- function(t, x1 = 0, x2 = 0) sqrt(1+t^2)
cor.rho <- function(t, x1 = 0, x2 = 0) sqrt(1/2)
diff.coef.matrix <- matrix(c("diff.coef.1(t,x1,x2)",
"diff.coef.2(t,x1,x2) * cor.rho(t,x1,x2)", "",
"diff.coef.2(t,x1,x2) * sqrt(1-cor.rho(t,x1,x2)^2)"), 2, 2)
cor.mod <- setModel(drift = c("", ""),
diffusion = diff.coef.matrix, solve.variable = c("x1", "x2"))

set.seed(111)

## We use a function poisson.random.sampling
## to get observation by Poisson sampling.
yuima.samp <- setSampling(Terminal = 1, n = 1200)
yuima <- setYuima(model = cor.mod, sampling = yuima.samp)
yuima <- simulate(yuima)
psample <- poisson.random.sampling(yuima, rate = c(0.2,0.3), n = 1000)

## cce takes the psample and returns an estimate of the quadratic covariation.
cce(psample)$covmat[1, 2]
##cce(psample)[1, 2]

## True value of the quadratic covariation.
cc.theta <- function(T, sigma1, sigma2, rho) {
  tmp <- function(t) return(sigma1(t) * sigma2(t) * rho(t))
  integrate(tmp, 0, T)
}

theta <- cc.theta(T = 1, diff.coef.1, diff.coef.2, cor.rho)$value
cat(sprintf("theta =%.5f\n", theta))

names(psample@zoo.data)

# Example. A stochastic differential equation with nonlinear feedback.
## Set a model
drift.coef.1 <- function(x1,x2) x2
drift.coef.2 <- function(x1,x2) -x1
drift.coef.vector <- c("drift.coef.1","drift.coef.2")
diff.coef.1 <- function(t,x1,x2) sqrt(abs(x1))*sqrt(1+t)
diff.coef.2 <- function(t,x1,x2) sqrt(abs(x2))
cor.rho <- function(t,x1,x2) 1/(1+x1^2)
diff.coef.matrix <- matrix(c("diff.coef.1(t,x1,x2)",
"diff.coef.2(t,x1,x2) * cor.rho(t,x1,x2)", "",
"diff.coef.2(t,x1,x2) * sqrt(1-cor.rho(t,x1,x2)^2)"), 2, 2)
cor.mod <- setModel(drift = drift.coef.vector,
diffusion = diff.coef.matrix, solve.variable = c("x1", "x2"))

## Generate a path of the process
set.seed(111)
yuima.samp <- setSampling(Terminal = 1, n = 10000)
yuima <- setYuima(model = cor.mod, sampling = yuima.samp)
yuima <- simulate(yuima, xinit=c(2,3))
plot(yuima)

## The "true" value of the quadratic covariation.
cce(yuima)

## We use the function poisson.random.sampling to generate nonsynchronous
## observations by Poisson sampling.
psample <- poisson.random.sampling(yuima, rate = c(0.2,0.3), n = 3000)

## cce takes the psample to return an estimated value of the quadratic covariation.
## The off-diagonal elements are the value of the Hayashi-Yoshida estimator.
cce(psample)

# Example. Epps effect for the realized covariance estimator
## Set a model
drift <- c(0,0)
sigma1 <- 1
sigma2 <- 1
rho <- 0.5
diffusion <- matrix(c(sigma1,sigma2*rho,0,sigma2*sqrt(1-rho^2)),2,2)
model <- setModel(drift=drift,diffusion=diffusion,
state.variable=c("x1","x2"),solve.variable=c("x1","x2"))

## Generate a path of the latent process
set.seed(116)

## We regard the unit interval as 6.5 hours and generate the path on it
## with the step size equal to 2 seconds
yuima.samp <- setSampling(Terminal = 1, n = 11700)
yuima <- setYuima(model = model, sampling = yuima.samp)
yuima <- simulate(yuima)

## We extract nonsynchronous observations from the path generated above
## by Poisson random sampling with the average duration equal to 10 seconds
psample <- poisson.random.sampling(yuima, rate = c(1/5,1/5), n = 11700)

## Hayashi-Yoshida estimator consistently estimates the true correlation
cce(psample)$cormat[1,2]

## If we synchronize the observation data on some regular grid
## by previous-tick interpolations and compute the correlation
## by the realized covariance based on such synchronized observations,
## we underestimate the true correlation (known as the Epps effect).
## This is illustrated by the following examples.
## Synchronization on the grid with 5 seconds steps
suppressWarnings(s1 <- cce(subsampling(psample,
sampling = setSampling(n = 4680)))$cormat[1,2])
s1

## Synchronization on the grid with 10 seconds steps
suppressWarnings(s2 <- cce(subsampling(psample,
sampling = setSampling(n = 2340)))$cormat[1,2])
s2

## Synchronization on the grid with 20 seconds steps
suppressWarnings(s3 <- cce(subsampling(psample,
sampling = setSampling(n = 1170)))$cormat[1,2])
s3

## Synchronization on the grid with 30 seconds steps
suppressWarnings(s4 <- cce(subsampling(psample,
sampling = setSampling(n = 780)))$cormat[1,2])
s4

## Synchronization on the grid with 1 minute steps
suppressWarnings(s5 <- cce(subsampling(psample,
sampling = setSampling(n = 390)))$cormat[1,2])
s5

plot(zoo(c(s1,s2,s3,s4,s5),c(5,10,20,30,60)),type="b",xlab="seconds",
ylab="correlation", main = "Epps effect for the realized covariance")

# Example. Non-synchronous and noisy observations
# of a correlated bivariate Brownian motion
## Generate noisy observations from the model used in the previous example
Omega <- 0.005*matrix(c(1,rho,rho,1),2,2) # covariance matrix of noise
noisy.psample <- noisy.sampling(psample,var.adj=Omega)
plot(noisy.psample)

## Hayashi-Yoshida estimator: inconsistent
cce(noisy.psample)$covmat

## Pre-averaged Hayashi-Yoshida estimator: consistent
cce(noisy.psample,method="PHY")$covmat

## Generalized multiscale estimator: consistent
cce(noisy.psample,method="GME")$covmat

## Multivariate realized kernel: consistent
cce(noisy.psample,method="RK")$covmat

## Nonparametric QMLE: consistent
cce(noisy.psample,method="QMLE")$covmat

## End(Not run)

cce.factor              High-Dimensional Cumulative Covariance Estimator by Factor Modeling and Regularization

Description

This function estimates the covariance and precision matrices of a high-dimensional Ito process by factor modeling and regularization when it is observed at discrete times possibly nonsynchronously with noise.
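The factor-based construction spelled out under 'Details' below can be illustrated numerically. This is a toy sketch with made-up covariation matrices, not yuima internals: given [Y,Y]_T, [X,X]_T and [Y,X]_T, the loadings and the residual covariation follow by plain matrix algebra.

```r
## Toy numeric sketch of the factor decomposition behind cce.factor
## (all objects here are fabricated for illustration, not yuima internals).
set.seed(1)
d <- 5; r <- 1
beta <- matrix(runif(d * r), d, r)      # true factor loadings
cxx  <- matrix(0.04, r, r)              # factor covariation [X,X]_T
czz  <- diag(0.01, d)                   # residual covariation [Z,Z]_T
cyy  <- beta %*% cxx %*% t(beta) + czz  # [Y,Y]_T = beta [X,X]_T beta' + [Z,Z]_T
cyx  <- beta %*% cxx                    # [Y,X]_T = beta [X,X]_T

beta.hat <- cyx %*% solve(cxx)                      # beta = [Y,X]_T [X,X]_T^(-1)
czz.hat  <- cyy - beta.hat %*% cxx %*% t(beta.hat)  # implied residual covariation
max(abs(beta.hat - beta))  # essentially zero: loadings recovered
max(abs(czz.hat - czz))    # essentially zero: residual part recovered
```

In the real estimator the covariations are replaced by cce estimates and czz.hat is regularized (glasso, tapering, thresholding or eigenvalue cleaning) before being plugged back into the decomposition.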
Usage

cce.factor(yuima, method = "HY", factor = NULL, PCA = FALSE,
           nfactor = "interactive", regularize = "glasso", taper,
           group = 1:(dim(yuima) - length(factor)), lambda = "bic",
           weight = TRUE, nlambda = 10, ratio, N, thr.type = "soft",
           thr = NULL, tau = NULL, par.alasso = 1, par.scad = 3.7,
           thr.delta = 0.01, frequency = 300, utime, ...)

Arguments

yuima       an object of yuima-class or yuima.data-class.

method      the method to be used in cce.

factor      an integer or character vector indicating which components of yuima are factors. If NULL, no factor structure is taken account of.

PCA         logical. If TRUE, a principal component analysis is performed to construct factors.

nfactor     the number of factors constructed when PCA is TRUE. If nfactor = "interactive", the scree plot of the principal component analysis is depicted and the user can set this argument interactively.

regularize  the regularization method to be used. Possible choices are "glasso" (the default), "tapering", "thresholding" and "eigen.cleaning". See 'Details'.

taper       the tapering matrix used when regularize = "tapering". If missing, the tapering matrix is constructed according to group. See 'Details'.

group       an integer vector having the length equal to dim(yuima)-length(factor).

lambda      the penalty parameter used when regularize = "glasso". If it is "aic" (resp. "bic"), it is selected by minimizing the formally defined AIC (resp. BIC). See 'Details'.

weight      logical. If TRUE, a weighted version is used for regularize = "glasso" as in Koike (2020).

nlambda     a positive integer indicating the number of candidate penalty parameters for which AIC or BIC is evaluated when lambda is "aic" or "bic".

ratio       a positive number indicating the ratio of the largest and smallest values in candidate penalty parameters for which AIC or BIC is evaluated when lambda is "aic" or "bic". See 'Details'. The default value is sqrt(log(d)/N), where d is the dimension of yuima.
N           a positive integer indicating the "effective" sampling size, which is necessary to evaluate AIC and BIC when lambda is "aic" or "bic". In a standard situation, it is equal to the sample size minus 1, but it might be different when the data are observed nonsynchronously and/or with noise. If missing, it is automatically determined according to method.

thr.type    a character string indicating the type of the thresholding method used when regularize = "thresholding". Possible choices are "hard", "soft", "alasso" and "scad". See Section 2.3 of Dai et al. (2019) for the definition of each method.

thr         a numeric matrix indicating the threshold levels used when regularize = "thresholding". Its entries indicate the threshold levels for the corresponding entries of the covariance matrix (values for λ in the notation of Dai et al. (2019)). A single number is converted to the matrix with common entries equal to that number. If NULL, it is determined according to tau. See 'Details'.

tau         a number between 0 and 1 used to determine the threshold levels used when regularize = "thresholding" and thr=NULL (a value for τ in the notation of Dai et al. (2019)). If NULL, it is determined by a grid search procedure as suggested in Section 4.3 of Dai et al. (2019). See 'Details'.

par.alasso  the tuning parameter for thr.type = "alasso" (a value for η in the notation of Dai et al. (2019)).

par.scad    the tuning parameter for thr.type = "scad" (a value for a in the notation of Dai et al. (2019)).

thr.delta   a positive number indicating the step size used in the grid search procedure to determine tau.

frequency   passed to cce.

utime       passed to cce.

...         passed to cce.

Details

One basic approach to estimate the covariance matrix of high-dimensional time series is to take account of the factor structure and perform regularization for the residual covariance matrix. This function implements such an estimation procedure for high-frequency data modeled as a discretely observed semimartingale.
Specifically, let Y be a d-dimensional semimartingale which describes the dynamics of the observation data. We consider the following continuous-time factor model:

    Y_t = β X_t + Z_t,   0 ≤ t ≤ T,

where X is an r-dimensional semimartingale (the factor process), Z is a d-dimensional semimartingale (the residual process), and β is a constant d × r matrix (the factor loading matrix). We assume that X and Z are orthogonal in the sense that [X, Z]_T = 0. Then, the quadratic covariation matrix of Y is given by

    [Y, Y]_T = β [X, X]_T β' + [Z, Z]_T.

Also, β can be written as β = [Y, X]_T [X, X]_T^(-1).

Thus, if we have observation data both for Y and X, we can construct estimators for [Y, Y]_T, [X, X]_T and β by cce. Moreover, plugging these estimators into the above equation, we can also construct an estimator for [Z, Z]_T. Since this estimator is often poor due to the high-dimensionality, we regularize it by some method. Then, by plugging the regularized estimator for [Z, Z]_T into the above equation, we obtain the final estimator for [Y, Y]_T.

Even if we do not have observation data for X, we can (at least formally) construct a pseudo factor process by performing principal component analysis for the initial estimator of [Y, Y]_T. See Ait-Sahalia and Xiu (2017) and Dai et al. (2019) for details.

Currently, the following four options are available for the regularization method applied to the residual covariance matrix estimate:

1. regularize = "glasso" (the default). This performs the graphical Lasso. When weight=TRUE (the default), a weighted version of the graphical Lasso is performed as in Koike (2020). Otherwise, the standard graphical Lasso is performed as in Brownlees et al. (2018). If lambda="aic" (resp. lambda="bic"), the penalty parameter for the graphical Lasso is selected by minimizing the formally defined AIC (resp. BIC). The minimization is carried out by grid search, where the grid is determined as in Section 5.1 of Koike (2020).
The optimization problem in the graphical Lasso is solved by the GLASSOFAST algorithm of Sustik and Calderhead (2012), which is available from the package glassoFast.

2. regularize = "tapering". This performs tapering, i.e. taking the entry-wise product of the residual covariance matrix estimate and a tapering matrix specified by taper. See Section 3.5.1 of Pourahmadi (2011) for an overview of this method. If taper is missing, it is constructed according to group as follows: taper is a 0-1 matrix and the (i, j)-th entry is equal to 1 if and only if group[i]==group[j]. Thus, by default it makes the residual covariance matrix diagonal.

3. regularize = "thresholding". This performs thresholding, i.e. entries of the residual covariance matrix are shrunk toward 0 according to a thresholding rule (specified by thr.type) and a threshold level (specified by thr). If thr=NULL, the (i, j)-th entry of thr is given by τ * sqrt([Z^i, Z^i]_T * [Z^j, Z^j]_T), where [Z^i, Z^i]_T (resp. [Z^j, Z^j]_T) denotes the i-th (resp. j-th) diagonal entry of the non-regularized estimator for the residual covariance matrix [Z, Z]_T, and τ is a tuning parameter specified by tau. When tau=NULL, the value of τ is set to the smallest value in the grid with step size thr.delta such that the regularized estimate of the residual covariance matrix becomes positive definite.

4. regularize = "eigen.cleaning". This performs the eigenvalue cleaning algorithm described in Hautsch et al. (2012).

Value

A list with components:

covmat.y    the estimated covariance matrix
premat.y    the estimated precision matrix
beta.hat    the estimated factor loading matrix
covmat.x    the estimated factor covariance matrix
covmat.z    the estimated residual covariance matrix
premat.z    the estimated residual precision matrix
sigma.z     the estimated residual covariance matrix before regularization
pc          the variances of the principal components (it is NULL if PCA = FALSE)

Author(s)

<NAME> with YUIMA Project Team

References

Ait-Sahalia, Y. and Xiu, D.
(2017). Using principal component analysis to estimate a high dimensional factor model with high-frequency data, Journal of Econometrics, 201, 384–399.

Brownlees, C., Nualart, E. and Sun, Y. (2018). Realized networks, Journal of Applied Econometrics, 33, 986–1006.

Dai, C., Lu, K. and Xiu, D. (2019). Knowing factors or factor loadings, or neither? Evaluating estimators of large covariance matrices with noisy and asynchronous data, Journal of Econometrics, 208, 43–79.

Hautsch, N., Kyj, L. M. and Oomen, R. C. A. (2012). A blocking and regularization approach to high-dimensional realized covariance estimation, Journal of Applied Econometrics, 27, 625–645.

Koike, Y. (2020). De-biased graphical Lasso for high-frequency data, Entropy, 22, 456.

Pourahmadi, M. (2011). Covariance estimation: The GLM and regularization perspectives. Statistical Science, 26, 369–387.

Sustik, M. A. and Calderhead, B. (2012). GLASSOFAST: An efficient GLASSO implementation, UTCS Technical Report TR-12-29, The University of Texas at Austin.

See Also

cce, lmm, glassoFast

Examples

## Not run:
set.seed(123)

## Simulating a factor process (Heston model)
drift <- c("mu*S", "-theta*(V-v)")
diffusion <- matrix(c("sqrt(max(V,0))*S", "gamma*sqrt(max(V,0))*rho",
                      0, "gamma*sqrt(max(V,0))*sqrt(1-rho^2)"), 2, 2)
mod <- setModel(drift = drift, diffusion = diffusion,
                state.variable = c("S", "V"))
n <- 2340
samp <- setSampling(n = n)
heston <- setYuima(model = mod, sampling = samp)
param <- list(mu = 0.03, theta = 3, v = 0.09, gamma = 0.3, rho = -0.6)
result <- simulate(heston, xinit = c(1, 0.1), true.parameter = param)
zdata <- get.zoo.data(result) # extract the zoo data
X <- log(zdata[[1]]) # log-price process
V <- zdata[[2]] # squared volatility process

## Simulating a residual process (correlated BM)
d <- 100 # dimension
Q <- 0.1 * toeplitz(0.7^(1:d-1)) # residual covariance matrix
dZ <- matrix(rnorm(n*d),n,d) %*% chol(Q)/sqrt(n)
Z <- zoo(apply(dZ, 2, "diffinv"), samp@grid[[1]])

## Constructing observation data
b <- runif(d, 0.25, 2.25) # factor loadings
Y <- X %o% b + Z
yuima <- setData(cbind(X, Y))

# We subsample yuima to construct observation data
yuima <- subsampling(yuima, setSampling(n = 78))

## Estimating the covariance matrix (factor is known)
cmat <- tcrossprod(b) * mean(V[-1]) + Q # true covariance matrix
pmat <- solve(cmat) # true precision matrix

# (1) Regularization method is glasso (the default)
est <- cce.factor(yuima, factor = 1)
norm(est$covmat.y - cmat, type = "2")
norm(est$premat.y - pmat, type = "2")

# (2) Regularization method is tapering
est <- cce.factor(yuima, factor = 1, regularize = "tapering")
norm(est$covmat.y - cmat, type = "2")
norm(est$premat.y - pmat, type = "2")

# (3) Regularization method is thresholding
est <- cce.factor(yuima, factor = 1, regularize = "thresholding")
norm(est$covmat.y - cmat, type = "2")
norm(est$premat.y - pmat, type = "2")

# (4) Regularization method is eigen.cleaning
est <- cce.factor(yuima, factor = 1, regularize = "eigen.cleaning")
norm(est$covmat.y - cmat, type = "2")
norm(est$premat.y - pmat, type = "2")

## Estimating the covariance matrix (factor is unknown)
yuima2 <- setData(Y)

# We subsample yuima to construct observation data
yuima2 <- subsampling(yuima2, setSampling(n = 78))

# (A) Ignoring the factor structure (regularize = "glasso")
est <- cce.factor(yuima2)
norm(est$covmat.y - cmat, type = "2")
norm(est$premat.y - pmat, type = "2")

# (B) Estimating the factor by PCA (regularize = "glasso")
est <- cce.factor(yuima2, PCA = TRUE, nfactor = 1) # use 1 factor
norm(est$covmat.y - cmat, type = "2")
norm(est$premat.y - pmat, type = "2")

# One can interactively select the number of factors
# after implementing PCA (the scree plot is depicted)
# Try: est <- cce.factor(yuima2, PCA = TRUE)

## End(Not run)

Class for Quasi Maximum Likelihood Estimation of Point Process Regression Models
Description

The yuima.PPR.qmle class is a class of the yuima package that extends the mle-class of the stats4 package.

Slots

call: is an object of class language.
coef: is an object of class numeric that contains estimated parameters.
fullcoef: is an object of class numeric that contains estimated and fixed parameters.
vcov: is an object of class matrix.
min: is an object of class numeric.
minuslogl: is an object of class function.
method: is an object of class character.
model: is an object of class yuima.PPR-class.

Methods

Methods mle  All methods for mle-class are available.

Author(s)

The YUIMA Project Team

cogarch.est.-class      Class for Generalized Method of Moments Estimation for COGARCH(p,q) model

Description

The cogarch.est class is a class of the yuima package that contains estimated parameters obtained by the function gmm or qmle.

Slots

yuima: is an object of yuima-class.
objFun: is an object of class character that indicates the objective function used in the minimization problem. See the documentation of the function gmm or qmle for more details.
call: is an object of class language.
coef: is an object of class numeric that contains estimated parameters.
fullcoef: is an object of class numeric that contains estimated and fixed parameters.
vcov: is an object of class matrix.
min: is an object of class numeric.
minuslogl: is an object of class function.
method: is an object of class character.

Methods

Methods mle  All methods for mle-class are available.

Author(s)

The YUIMA Project Team

cogarch.est.incr-class  Class for Estimation of COGARCH(p,q) model with underlying increments

Description

The cogarch.est.incr class is a class of the yuima package that extends the cogarch.est-class and is filled by the function gmm or qmle.

Slots

Incr.Lev: is an object of class zoo that contains the estimated increments of the noise obtained using cogarchNoise.
yuima: is an object of yuima-class.
logL.Incr: is an object of class numeric that contains the value of the log-likelihood for estimated Levy increments.
objFun: is an object of class character that indicates the objective function used in the minimization problem. See the documentation of the function gmm or qmle for more details.
call: is an object of class language.
coef: is an object of class numeric that contains estimated parameters.
fullcoef: is an object of class numeric that contains estimated and fixed parameters.
vcov: is an object of class matrix.
min: is an object of class numeric.
minuslogl: is an object of class function.
method: is an object of class character.

Methods

simulate  simulation method. For more information see simulate.
plot  Plot method for estimated increment of the noise.
Methods mle  All methods for mle-class are available.

Author(s)

The YUIMA Project Team

cogarch.info-class      Class for information about CoGarch(p,q)

Description

The cogarch.info-class is a class of the yuima package.

Slots

p: Number of autoregressive coefficients in the variance process.
q: Number of moving average coefficients in the variance process.
ar.par: Label of autoregressive coefficients.
ma.par: Label of moving average coefficients.
loc.par: Label of location coefficient in the variance process.
Cogarch.var: Label of the observed process.
V.var: Label of the variance process.
Latent.var: Label of the latent process in the state representation of the variance.
XinExpr: Logical variable. If XinExpr=FALSE, the starting condition of Latent.var is zero; otherwise each component of Latent.var has a parameter as a starting point.
measure: Levy measure for jump and quadratic part.
measure.type: Type specification for Levy measure.

Note

The cogarch.info-class object cannot be directly specified by the user but it is built when the yuima.cogarch-class object is constructed via setCogarch.
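As a hedged sketch of how such an object typically comes into being: building a COGARCH model with setCogarch populates an info slot with the labels documented above. The compound-Poisson measure specification below follows yuima's usual conventions but is an assumption here; consult ?setCogarch for the authoritative signature.

```r
## Hedged sketch (not from this manual page): a cogarch.info object is
## created indirectly when a COGARCH(1,1) model is built with setCogarch.
## The measure specification is assumed to follow yuima's compound-Poisson
## convention; argument names mirror the slot labels documented above.
library(yuima)
mod <- setCogarch(p = 1, q = 1,
                  measure = list(intensity = "1", df = list("dnorm(z, 0, 1)")),
                  measure.type = "CP",
                  Cogarch.var = "G", V.var = "v", Latent.var = "x",
                  XinExpr = TRUE)
mod@info  # the cogarch.info slot: p, q, the labels above, measure, measure.type
```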
Author(s) The YUIMA Project Team cogarchNoise Estimation for the underlying Levy in a COGARCH(p,q) model Description Retrieve the increments of the underlying Levy process for the COGARCH(p,q) process. Usage cogarchNoise(yuima, data=NULL, param, mu=1) Arguments yuima a yuima object or an object of yuima.cogarch-class. data an object of class yuima.data-class that contains the observations available at uniformly spaced times. If data=NULL, the default, cogarchNoise uses the data in an object of yuima.data-class. param list of parameters for the COGARCH(p,q). mu a numeric object that contains the value of the second moment of the Levy measure. Value incr.Levy a numeric object that contains the estimated increments. model an object of class yuima containing the state, the variance and the cogarch process. Note The function cogarchNoise assumes the underlying Levy process is centered at zero. The function gmm uses the function cogarchNoise for estimation of the underlying Levy process in the COGARCH(p,q) model. Author(s) The YUIMA Project Team References Chadraa. (2009) Statistical Modelling with COGARCH(P,Q) Processes, PhD Thesis. Examples # Insert here some examples CPoint Volatility structural change point estimator Description Volatility structural change point estimator Usage CPoint(yuima, param1, param2, print=FALSE, symmetrized=FALSE, plot=FALSE) qmleL(yuima, t, ...) qmleR(yuima, t, ...) Arguments yuima a yuima object. param1 parameter values before the change point t param2 parameter values after the change point t plot plot test statistics? Default is FALSE. print print some debug output. Default is FALSE. t time value. See Details. symmetrized if TRUE uses the symmetrized version of the quasi maximum-likelihood approximation. ... passed to qmle method. See Examples. Details CPoint estimates the change point using a quasi-maximum likelihood approach. Function qmleL estimates the parameters in the diffusion matrix using observations up to time t.
Function qmleR estimates the parameters in the diffusion matrix using observations from time t to the end. Arguments in both qmleL and qmleR follow the same rules as in qmle. Value ans a list with the change point instant, and parameters before and after the change point. Author(s) The YUIMA Project Team Examples ## Not run: diff.matrix <- matrix(c("theta1.1*x1","0*x2","0*x1","theta1.2*x2"), 2, 2) drift.c <- c("1-x1", "3-x2") drift.matrix <- matrix(drift.c, 2, 1) ymodel <- setModel(drift=drift.matrix, diffusion=diff.matrix, time.variable="t", state.variable=c("x1", "x2"), solve.variable=c("x1", "x2")) n <- 1000 set.seed(123) t1 <- list(theta1.1=.1, theta1.2=0.2) t2 <- list(theta1.1=.6, theta1.2=.6) tau <- 0.4 ysamp1 <- setSampling(n=tau*n, Initial=0, delta=0.01) yuima1 <- setYuima(model=ymodel, sampling=ysamp1) yuima1 <- simulate(yuima1, xinit=c(1, 1), true.parameter=t1) x1 <- yuima1@data@zoo.data[[1]] x1 <- as.numeric(x1[length(x1)]) x2 <- yuima1@data@zoo.data[[2]] x2 <- as.numeric(x2[length(x2)]) ysamp2 <- setSampling(Initial=n*tau*0.01, n=n*(1-tau), delta=0.01) yuima2 <- setYuima(model=ymodel, sampling=ysamp2) yuima2 <- simulate(yuima2, xinit=c(x1, x2), true.parameter=t2) yuima <- yuima1 yuima@data@zoo.data[[1]] <- c(yuima1@data@zoo.data[[1]], yuima2@data@zoo.data[[1]][-1]) yuima@data@zoo.data[[2]] <- c(yuima1@data@zoo.data[[2]], yuima2@data@zoo.data[[2]][-1]) plot(yuima) # estimation of change point for given parameter values t.est <- CPoint(yuima,param1=t1,param2=t2, plot=TRUE) low <- list(theta1.1=0, theta1.2=0) # first stage estimate of parameters using small # portion of data in the tails tmp1 <- qmleL(yuima,start=list(theta1.1=0.3,theta1.2=0.5),t=1.5, lower=low, method="L-BFGS-B") tmp1 tmp2 <- qmleR(yuima,start=list(theta1.1=0.3,theta1.2=0.5), t=8.5, lower=low, method="L-BFGS-B") tmp2 # first stage changepoint estimator t.est2 <- CPoint(yuima,param1=coef(tmp1),param2=coef(tmp2)) t.est2$tau # second stage estimation of parameters given first stage # change
point estimator tmp11 <- qmleL(yuima,start=as.list(coef(tmp1)), t=t.est2$tau-0.1, lower=low, method="L-BFGS-B") tmp11 tmp21 <- qmleR(yuima,start=as.list(coef(tmp2)), t=t.est2$tau+0.1, lower=low, method="L-BFGS-B") tmp21 # second stage estimator of the change point CPoint(yuima,param1=coef(tmp11),param2=coef(tmp21)) ## One dimensional example: non linear case diff.matrix <- matrix("(1+x1^2)^theta1", 1, 1) drift.c <- c("x1") ymodel <- setModel(drift=drift.c, diffusion=diff.matrix, time.variable="t", state.variable=c("x1"), solve.variable=c("x1")) n <- 500 set.seed(123) y0 <- 5 # initial value theta00 <- 1/5 gamma <- 1/4 theta01 <- theta00+n^(-gamma) t1 <- list(theta1= theta00) t2 <- list(theta1= theta01) tau <- 0.4 ysamp1 <- setSampling(n=tau*n, Initial=0, delta=1/n) yuima1 <- setYuima(model=ymodel, sampling=ysamp1) yuima1 <- simulate(yuima1, xinit=c(5), true.parameter=t1) x1 <- yuima1@data@zoo.data[[1]] x1 <- as.numeric(x1[length(x1)]) ysamp2 <- setSampling(Initial=tau, n=n*(1-tau), delta=1/n) yuima2 <- setYuima(model=ymodel, sampling=ysamp2) yuima2 <- simulate(yuima2, xinit=c(x1), true.parameter=t2) yuima <- yuima1 yuima@data@zoo.data[[1]] <- c(yuima1@data@zoo.data[[1]], yuima2@data@zoo.data[[1]][-1]) plot(yuima) t.est <- CPoint(yuima,param1=t1,param2=t2) t.est$tau low <- list(theta1=0) upp <- list(theta1=1) # first stage estimate of parameters using small # portion of data in the tails tmp1 <- qmleL(yuima,start=list(theta1=0.5),t=.15,lower=low, upper=upp,method="L-BFGS-B") tmp1 tmp2 <- qmleR(yuima,start=list(theta1=0.5), t=.85,lower=low, upper=upp,method="L-BFGS-B") tmp2 # first stage changepoint estimator t.est2 <- CPoint(yuima,param1=coef(tmp1),param2=coef(tmp2)) t.est2$tau # second stage estimation of parameters given first stage # change point estimator tmp11 <- qmleL(yuima,start=as.list(coef(tmp1)), t=t.est2$tau-0.1, lower=low, upper=upp,method="L-BFGS-B") tmp11 tmp21 <- qmleR(yuima,start=as.list(coef(tmp2)), t=t.est2$tau+0.1, lower=low, upper=upp,method="L-BFGS-B") tmp21 # second stage estimator of the
change point CPoint(yuima,param1=coef(tmp11),param2=coef(tmp21),plot=TRUE) ## End(Not run) DataPPR From zoo data to yuima.PPR. Description The function converts an object of class zoo to an object of class yuima.PPR. Usage DataPPR(CountVar, yuimaPPR, samp) Arguments CountVar An object of class zoo that contains counting variables and covariates. index(CountVar) returns the arrival times. yuimaPPR An object of class yuima.PPR that contains a mathematical description of the point process regression model assumed to be the generator of the observed data. samp An object of class yuima.sampling. Value The function returns an object of class yuima.PPR where the slot model contains the point process described in yuimaPPR@model, and the slot data contains the counting variables and the covariates observed on the grid in samp. Examples ## Not run: # In this example we generate a dataset that contains the counting variable N # and the covariate X. # The covariate X is an OU process driven by a Gamma process. # Values of parameters.
mu <- 2 alpha <- 4 beta <-5 # Law definition my.rKern <- function(n,t){ res0 <- t(t(rgamma(n, 0.1*t))) res1 <- t(t(rep(1,n))) res <- cbind(res0,res1) return(res) } Law.PPRKern <- setLaw(rng = my.rKern) # Point Process definition modKern <- setModel(drift = c("0.4*(0.1-X)","0"), diffusion = c("0","0"), jump.coeff = matrix(c("1","0","0","1"),2,2), measure = list(df = Law.PPRKern), measure.type = c("code","code"), solve.variable = c("X","N"), xinit=c("0.25","0")) gFun <- "exp(mu*log(1+X))" Kernel <- "alpha*exp(-beta*(t-s))" prvKern <- setPPR(yuima = modKern, counting.var="N", gFun=gFun, Kernel = as.matrix(Kernel), lambda.var = "lambda", var.dx = "N", lower.var="0", upper.var = "t") # Simulation Term<-200 seed<-1 n<-20000 true.parKern <- list(mu=mu, alpha=alpha, beta=beta) set.seed(seed) # set.seed(1) time.simKern <-system.time( simprvKern <- simulate(object = prvKern, true.parameter = true.parKern, sampling = setSampling(Terminal =Term, n=n)) ) plot(simprvKern,main ="Counting Process with covariates" ,cex.main=0.9) # Using the function get.counting.data we extract from an object of class # yuima.PPR the counting process N and the covariate X at the arrival times. CountVar <- get.counting.data(simprvKern) plot(CountVar) # We convert the zoo object into the yuima.PPR object. sim2 <- DataPPR(CountVar, yuimaPPR=simprvKern, samp=simprvKern@sampling) ## End(Not run) Diagnostic.Carma Diagnostic Carma model Description This function verifies whether the condition of stationarity is satisfied. Usage Diagnostic.Carma(carma) Arguments carma An object of class yuima.qmle-class where the slot model is a carma process. Value Logical variable. If TRUE, the Carma process is stationary.
Author(s) YUIMA TEAM Examples mod1 <- setCarma(p = 2, q = 1, scale.par = "sig", Carma.var = "y") param1 <- list(a1 = 1.39631, a2 = 0.05029, b0 = 1, b1 = 1, sig = 1) samp1 <- setSampling(Terminal = 100, n = 200) set.seed(123) sim1 <- simulate(mod1, true.parameter = param1, sampling = samp1) est1 <- qmle(sim1, start = param1) Diagnostic.Carma(est1) Diagnostic.Cogarch Function for checking the statistical properties of the COGARCH(p,q) model Description The function checks the statistical properties of the COGARCH(p,q) model. We verify whether the process has a strictly positive stationary variance. Usage Diagnostic.Cogarch(yuima.cogarch, param = list(), matrixS = NULL, mu = 1, display = TRUE) Arguments yuima.cogarch an object of class yuima.cogarch, yuima or a class cogarch.gmm-class param a list containing the values of the parameters matrixS a square matrix. mu first moment of the Levy measure. display a logical variable; if TRUE the function displays the result in the console. Value The function returns a list with entries: meanVarianceProc Unconditional stationary mean of the variance process. meanStateVariable Unconditional stationary mean of the state process. stationary If TRUE, the COGARCH(p,q) has stationary variance. positivity If TRUE, the variance process is strictly positive.
Author(s) YUIMA Project Team Examples ## Not run: # Definition of the COGARCH(1,1) process driven by a Variance Gamma noise: param.VG <- list(a1 = 0.038, b1 = 0.053, a0 = 0.04/0.053,lambda = 1, alpha = sqrt(2), beta = 0, mu = 0, x01 = 50.33) cog.VG <- setCogarch(p = 1, q = 1, work = FALSE, measure=list(df="rvgamma(z, lambda, alpha, beta, mu)"), measure.type = "code", Cogarch.var = "y", V.var = "v", Latent.var="x", XinExpr=TRUE) # Verify the stationarity and the positivity of the variance process test <- Diagnostic.Cogarch(cog.VG,param=param.VG) show(test) # Simulate a sample path set.seed(210) Term=800 num=24000 samp.VG <- setSampling(Terminal=Term, n=num) sim.VG <- simulate(cog.VG, true.parameter=param.VG, sampling=samp.VG, method="euler") plot(sim.VG) # Estimate the model res.VG <- gmm(sim.VG, start = param.VG, Est.Incr = "IncrPar") summary(res.VG) # Check if the estimated COGARCH(1,1) has a positive and stationary variance test1<-Diagnostic.Cogarch(res.VG) show(test1) # Simulate a COGARCH sample path using the estimated COGARCH(1,1) # and the recovered increments of underlying Variance Gamma Noise esttraj<-simulate(res.VG) plot(esttraj) ## End(Not run) fitCIR Calculate preliminary estimator and one-step improvements of a Cox-Ingersoll-Ross diffusion Description This is a function to compute the preliminary estimator and the corresponding one-step estimators based on the Newton-Raphson and the scoring method for the Cox-Ingersoll-Ross process given via the SDE dXt = (α − βXt)dt + √(γXt)dWt with parameters β > 0, 2α > 5γ > 0 and a Brownian motion (Wt)t≥0. This function uses the Gaussian quasi-likelihood, hence it requires that the data is sampled at high frequency. Usage fitCIR(data) Arguments data a numeric matrix containing the realization of (t0, Xt0), . . . , (tn, Xtn), with tj denoting the j-th sampling time. data[1,] contains the sampling times t0, . . . , tn and data[2,] the corresponding values of the process Xt0, . . . , Xtn.
In other words data[,j] = (tj, Xtj). The observations should be equidistant. Details The estimators calculated by this function can be found in the reference below. Value A list with three entries, each containing a vector, in the following order: the result of the preliminary estimator, the Newton-Raphson method and the method of scoring. If the sampling points are not equidistant the function will return 'Please use equidistant sampling points'. Author(s) <NAME> Contacts: <<EMAIL>> References <NAME>, <NAME>, <NAME>. Estimation of ergodic square-root diffusion under high-frequency sampling. Econometrics and Statistics, Article Number: 346 (2022). Examples #You can make use of the function simCIR to generate the data data <- simCIR(alpha=3,beta=1,gamma=1, n=5000, h=0.05, equi.dist=TRUE) results <- fitCIR(data) get.counting.data Extract arrival times from an object of class yuima.PPR Description This function extracts arrival times from an object of class yuima.PPR. Usage get.counting.data(yuimaPPR,type="zoo") Arguments yuimaPPR An object of class yuima.PPR. type By default type="zoo" the function returns an object of class zoo. Other values are yuima.PPR and matrix. Value By default the function returns an object of class zoo. The arrival times can be extracted by applying the method index to the output. Examples ## Not run: ################## # Hawkes Process # ################## # Values of parameters.
mu <- 2 alpha <- 4 beta <-5 # Law definition my.rHawkes <- function(n){ res <- t(t(rep(1,n))) return(res) } Law.Hawkes <- setLaw(rng = my.rHawkes) # Point Process Definition gFun <- "mu" Kernel <- "alpha*exp(-beta*(t-s))" modHawkes <- setModel(drift = c("0"), diffusion = matrix("0",1,1), jump.coeff = matrix(c("1"),1,1), measure = list(df = Law.Hawkes), measure.type = "code", solve.variable = c("N"), xinit=c("0")) prvHawkes <- setPPR(yuima = modHawkes, counting.var="N", gFun=gFun, Kernel = as.matrix(Kernel), lambda.var = "lambda", var.dx = "N", lower.var="0", upper.var = "t") true.par <- list(mu=mu, alpha=alpha, beta=beta) set.seed(1) Term<-70 n<-7000 # Simulation trajectory time.Hawkes <-system.time( simHawkes <- simulate(object = prvHawkes, true.parameter = true.par, sampling = setSampling(Terminal =Term, n=n)) ) # Arrival times of the Counting process. DataHawkes <- get.counting.data(simHawkes) TimeArr <- index(DataHawkes) ################################## # Point Process Regression Model # ################################## # Values of parameters. 
mu <- 2 alpha <- 4 beta <-5 # Law definition my.rKern <- function(n,t){ res0 <- t(t(rgamma(n, 0.1*t))) res1 <- t(t(rep(1,n))) res <- cbind(res0,res1) return(res) } Law.PPRKern <- setLaw(rng = my.rKern) # Point Process definition modKern <- setModel(drift = c("0.4*(0.1-X)","0"), diffusion = c("0","0"), jump.coeff = matrix(c("1","0","0","1"),2,2), measure = list(df = Law.PPRKern), measure.type = c("code","code"), solve.variable = c("X","N"), xinit=c("0.25","0")) gFun <- "exp(mu*log(1+X))" Kernel <- "alpha*exp(-beta*(t-s))" prvKern <- setPPR(yuima = modKern, counting.var="N", gFun=gFun, Kernel = as.matrix(Kernel), lambda.var = "lambda", var.dx = "N", lower.var="0", upper.var = "t") # Simulation Term<-100 seed<-1 n<-10000 true.parKern <- list(mu=mu, alpha=alpha, beta=beta) set.seed(seed) # set.seed(1) time.simKern <-system.time( simprvKern <- simulate(object = prvKern, true.parameter = true.parKern, sampling = setSampling(Terminal =Term, n=n)) ) plot(simprvKern,main ="Counting Process with covariates" ,cex.main=0.9) # Arrival Times CountVar <- get.counting.data(simprvKern) TimeArr <- index(CountVar) ## End(Not run) gmm Method of Moments for COGARCH(P,Q). Description The function returns the estimated parameters of a COGARCH(P,Q) model. The parameters are obtained by matching the theoretical and empirical autocorrelation functions. The theoretical autocorrelation function is computed according to the methodology developed in Chadraa (2009). Usage gmm(yuima, data = NULL, start, method="BFGS", fixed = list(), lower, upper, lag.max = NULL, equally.spaced = FALSE, aggregation=TRUE, Est.Incr = "NoIncr", objFun = "L2") Arguments yuima a yuima object or an object of yuima.cogarch-class. data an object of class yuima.data-class that contains the observations available at uniformly spaced times. If data=NULL, the default, the function uses the data in an object of yuima-class. start a list containing the starting values for the optimization routine.
method a string indicating one of the methods available in optim. fixed a list of fixed parameters in the optimization routine. lower a named list for specifying lower bounds of parameters. upper a named list for specifying upper bounds of parameters. lag.max maximum lag at which to calculate the theoretical and empirical acf. Default is sqrt(N), where N is the number of observations. equally.spaced Logical variable. If equally.spaced = TRUE, the function uses the returns of the COGARCH(P,Q) evaluated at unitary length for the computation of the empirical autocorrelations. If equally.spaced = FALSE, the increments are evaluated on the interval with the frequency specified in an object of class yuima.data-class that contains the observed time series. aggregation If aggregation=TRUE, before the estimation of the Levy parameters we aggregate the estimated increments. Est.Incr a string variable. If Est.Incr = "NoIncr", the default value, gmm returns an object of class cogarch.est-class that contains the COGARCH parameters. If Est.Incr = "Incr" or Est.Incr = "IncrPar" the output is an object of class cogarch.est.incr-class. In the first case the object contains the increments of the underlying noise while in the second case it also contains the estimated parameters of the Levy measure. objFun a string variable that identifies the objective function in the optimization step. With objFun = "L2", the default value, the objective function is a quadratic form where the weighting matrix is the identity one. With objFun = "L2CUE" the weighting matrix is estimated using Continuously Updating GMM (L2CUE). With objFun = "L1", the objective function is the mean absolute error. In the last case standard errors for the estimators are not available. Details The routine is based on three steps: estimation of the COGARCH parameters, recovering the increments of the underlying Levy process and estimation of the Levy measure parameters. The last two steps are available on request by the user.
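A minimal sketch of how the three steps map onto the Est.Incr argument, assuming a simulated COGARCH object sim1 and a starting list param as in the Examples section:

```r
# Step 1 only: estimate the COGARCH parameters.
est0 <- gmm(yuima = sim1, start = param, Est.Incr = "NoIncr")

# Steps 1-2: also recover the increments of the underlying Levy process.
est1 <- gmm(yuima = sim1, start = param, Est.Incr = "Incr")

# Steps 1-3: additionally estimate the Levy measure parameters, here with
# the mean absolute error objective (no standard errors in this case).
est2 <- gmm(yuima = sim1, start = param, Est.Incr = "IncrPar", objFun = "L1")
```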
Value The function returns a list with the same components as the object obtained when the function optim is used. Author(s) The YUIMA Project Team. References Chadraa, E. (2009) Statistical Modeling with COGARCH(P,Q) Processes. Phd Thesis Examples ## Not run: # Example COGARCH(1,1): the parameters are the same used in Haugh et al. 2005. In this case # we assume the underlying noise is a symmetric variance gamma. # As first step we define the COGARCH(1,1) in yuima: mod1 <- setCogarch(p = 1, q = 1, work = FALSE, measure=list(df="rbgamma(z,1,sqrt(2),1,sqrt(2))"), measure.type = "code", Cogarch.var = "y", V.var = "v", Latent.var="x",XinExpr=TRUE) param <- list(a1 = 0.038, b1 = 0.053, a0 = 0.04/0.053, x01 = 20) # We generate a trajectory samp <- setSampling(Terminal=10000, n=100000) set.seed(210) sim1 <- simulate(mod1, sampling = samp, true.parameter = param) # We estimate the model res1 <- gmm(yuima = sim1, start = param) summary(res1) ## End(Not run) hyavar Asymptotic Variance Estimator for the Hayashi-Yoshida estimator Description This function estimates the asymptotic variances of covariance and correlation estimates by the Hayashi-Yoshida estimator. Usage hyavar(yuima, bw, nonneg = TRUE, psd = TRUE) Arguments yuima an object of yuima-class or yuima.data-class. bw a positive number or a numeric matrix. If it is a matrix, each component indicates the bandwidth parameter for the kernel estimators used to estimate the asymptotic variance of the corresponding component (necessary only for off-diagonal components). If it is a number, it is converted to a matrix as matrix(bw,d,d), where d=dim(x). The default value is the matrix whose (i, j)-th component is given by min(ni, nj)^0.45, where ni denotes the number of observations for the i-th component of the data. nonneg logical. If TRUE, the asymptotic variance estimates for correlations are always ensured to be non-negative. See ‘Details’. psd passed to cce.
Details The precise description of the method used to estimate the asymptotic variances is as follows. For diagonal components, they are estimated by the realized quarticity multiplied by 2/3. Its theoretical validity is ensured by Hayashi et al. (2011), for example. For off-diagonal components, they are estimated by the naive kernel approach described in Section 8.2 of Hayashi and Yoshida (2011). Note that the asymptotic covariance between a diagonal component and another component, which is necessary to evaluate the asymptotic variances of correlation estimates, is not provided in Hayashi and Yoshida (2011), but it can be derived in a similar manner to that paper. If nonneg is TRUE, negative values of the asymptotic variances of correlations are avoided in the following way. The computed asymptotic variance-covariance matrix of the vector (HYii, HYij, HYjj) is converted to its spectral absolute value. Here, HYij denotes the Hayashi-Yoshida estimator for the (i, j)-th component. The function also returns the covariance and correlation matrices calculated by the Hayashi-Yoshida estimator (using cce). Value A list with components: covmat the estimated covariance matrix cormat the estimated correlation matrix avar.cov the estimated asymptotic variances for covariances avar.cor the estimated asymptotic variances for correlations Note Construction of kernel-type estimators for off-diagonal components is implemented after pseudo-aggregation described in Bibinger (2011). Author(s) <NAME> with YUIMA Project Team References <NAME>. and <NAME>. (2004) Econometric analysis of realized covariation: High frequency based covariance, regression, and correlation in financial economics, Econometrica, 72, no. 3, 885–925. <NAME>. (2011) Asymptotics of Asynchronicity, technical report, Available at doi:10.48550/arXiv.1106.4222. <NAME>., <NAME>. and <NAME>.
(2011) Irregular sampling and central limit theorems for power variations: The continuous case, Annales de l’Institut Henri Poincare - Probabilites et Statistiques, 47, no. 4, 1197–1218. <NAME>. and <NAME>. (2011) Nonsynchronous covariation process and limit theorems, Stochastic processes and their applications, 121, 2416–2454. See Also setData, cce Examples ## Not run: ## Set a model diff.coef.1 <- function(t, x1 = 0, x2 = 0) sqrt(1+t) diff.coef.2 <- function(t, x1 = 0, x2 = 0) sqrt(1+t^2) cor.rho <- function(t, x1 = 0, x2 = 0) sqrt(1/2) diff.coef.matrix <- matrix(c("diff.coef.1(t,x1,x2)", "diff.coef.2(t,x1,x2) * cor.rho(t,x1,x2)", "", "diff.coef.2(t,x1,x2) * sqrt(1-cor.rho(t,x1,x2)^2)"), 2, 2) cor.mod <- setModel(drift = c("", ""), diffusion = diff.coef.matrix,solve.variable = c("x1", "x2")) set.seed(111) ## We use a function poisson.random.sampling to get observation by Poisson sampling. yuima.samp <- setSampling(Terminal = 1, n = 1200) yuima <- setYuima(model = cor.mod, sampling = yuima.samp) yuima <- simulate(yuima) psample<- poisson.random.sampling(yuima, rate = c(0.2,0.3), n = 1000) ## Constructing a 95% confidence interval for the quadratic covariation from psample result <- hyavar(psample) thetahat <- result$covmat[1,2] # estimate of the quadratic covariation se <- sqrt(result$avar.cov[1,2]) # estimated standard error c(lower = thetahat + qnorm(0.025) * se, upper = thetahat + qnorm(0.975) * se) ## True value of the quadratic covariation. cc.theta <- function(T, sigma1, sigma2, rho) { tmp <- function(t) return(sigma1(t) * sigma2(t) * rho(t)) integrate(tmp, 0, T) } # contained in the constructed confidence interval cc.theta(T = 1, diff.coef.1, diff.coef.2, cor.rho)$value # Example. A stochastic differential equation with nonlinear feedback.
## Set a model drift.coef.1 <- function(x1,x2) x2 drift.coef.2 <- function(x1,x2) -x1 drift.coef.vector <- c("drift.coef.1","drift.coef.2") diff.coef.1 <- function(t,x1,x2) sqrt(abs(x1))*sqrt(1+t) diff.coef.2 <- function(t,x1,x2) sqrt(abs(x2)) cor.rho <- function(t,x1,x2) 1/(1+x1^2) diff.coef.matrix <- matrix(c("diff.coef.1(t,x1,x2)", "diff.coef.2(t,x1,x2) * cor.rho(t,x1,x2)","", "diff.coef.2(t,x1,x2) * sqrt(1-cor.rho(t,x1,x2)^2)"), 2, 2) cor.mod <- setModel(drift = drift.coef.vector, diffusion = diff.coef.matrix,solve.variable = c("x1", "x2")) ## Generate a path of the process set.seed(111) yuima.samp <- setSampling(Terminal = 1, n = 10000) yuima <- setYuima(model = cor.mod, sampling = yuima.samp) yuima <- simulate(yuima, xinit=c(2,3)) plot(yuima) ## The "true" values of the covariance and correlation. result.full <- cce(yuima) (cov.true <- result.full$covmat[1,2]) # covariance (cor.true <- result.full$cormat[1,2]) # correlation ## We use the function poisson.random.sampling to generate nonsynchronous ## observations by Poisson sampling. psample<- poisson.random.sampling(yuima, rate = c(0.2,0.3), n = 3000) ## Constructing 95% confidence intervals for the covariation from psample result <- hyavar(psample) cov.est <- result$covmat[1,2] # estimate of covariance cor.est <- result$cormat[1,2] # estimate of correlation se.cov <- sqrt(result$avar.cov[1,2]) # estimated standard error of covariance se.cor <- sqrt(result$avar.cor[1,2]) # estimated standard error of correlation ## 95% confidence interval for covariance c(lower = cov.est + qnorm(0.025) * se.cov, upper = cov.est + qnorm(0.975) * se.cov) # contains cov.true ## 95% confidence interval for correlation c(lower = cor.est + qnorm(0.025) * se.cor, upper = cor.est + qnorm(0.975) * se.cor) # contains cor.true ## We can also use the Fisher z transformation to construct a ## 95% confidence interval for correlation ## It often improves the finite sample behavior of the asymptotic ## theory (cf. 
Section 4.2.3 of Barndorff-Nielsen and Shephard (2004)) z <- atanh(cor.est) # the Fisher z transformation of the estimated correlation se.z <- se.cor/(1 - cor.est^2) # standard error for z (calculated by the delta method) ## 95% confidence interval for correlation via the Fisher z transformation c(lower = tanh(z + qnorm(0.025) * se.z), upper = tanh(z + qnorm(0.975) * se.z)) ## End(Not run) IC Information criteria for the stochastic differential equation Description Information criteria BIC, Quasi-BIC (QBIC) and CIC for the stochastic differential equation. Usage IC(drif = NULL, diff = NULL, jump.coeff = NULL, data = NULL, Terminal = 1, add.settings = list(), start, lower, upper, ergodic = TRUE, stepwise = FALSE, weight = FALSE, rcpp = FALSE, ...) Arguments drif a character vector in which each element gives a candidate drift coefficient. diff a character vector in which each element gives a candidate diffusion coefficient. jump.coeff a character vector in which each element gives a candidate scale coefficient. data the data to be used. Terminal terminal time of the grid. add.settings details of model settings (see setModel). start a named list of the initial values of the parameters for optimization. lower a named list for specifying lower bounds of the parameters. upper a named list for specifying upper bounds of the parameters. ergodic whether the candidate models are ergodic SDEs or not (default ergodic=TRUE). stepwise specifies the joint procedure or the stepwise procedure (default stepwise=FALSE). weight calculate model weight? (default weight=FALSE) rcpp use C++ code? (default rcpp=FALSE) ... Details Calculate the information criteria BIC, QBIC, and CIC for stochastic processes. The calculation and model selection are performed by the joint procedure or the stepwise procedure. Value BIC values of BIC for all candidates. QBIC values of QBIC for all candidates. AIC values of AIC-type information criterion for all candidates. model information of all candidate models.
par quasi-maximum likelihood estimator for each candidate. weight model weights for all candidates. selected selected model number and selected drift and diffusion coefficients. Note The function IC uses the function qmle with method="L-BFGS-B" internally. Author(s) The YUIMA Project Team Contacts: <NAME> <<EMAIL>> References ## AIC, BIC <NAME>. (1973). Information theory and an extension of the maximum likelihood principle. In Second International Symposium on Information Theory (Tsahkadsor, 1971), 267-281. doi:10.1007/9781461216940_15 <NAME>. (1978). Estimating the dimension of a model. The Annals of Statistics, 6(2), 461-464. doi:10.1214/aos/1176344136 ## BIC, Quasi-BIC <NAME>. and <NAME>. (2018). Schwarz type model comparison for LAQ models. Bernoulli, 24(3), 2278-2327. doi:10.3150/17BEJ928. ## CIC <NAME>. (2010). Contrast-based information criterion for ergodic diffusion processes from discrete observations. Annals of the Institute of Statistical Mathematics, 62(1), 161-187. doi:10.1007/s1046300902451 ## Model weight <NAME>. and <NAME>. (2002). Model Selection and Multimodel Inference. Springer-Verlag, New York.
Examples ## Not run: ### Ex.1 set.seed(123) N <- 1000 # number of data h <- N^(-2/3) # sampling stepsize Ter <- N*h # terminal sampling time ## Data generate (dXt = -Xt*dt + exp((-2*cos(Xt) + 1)/2)*dWt) mod <- setModel(drift="theta21*x", diffusion="exp((theta11*cos(x)+theta12)/2)") samp <- setSampling(Terminal=Ter, n = N) yuima <- setYuima(model=mod, sampling=setSampling(Terminal=Ter, n=50*N)) simu.yuima <- simulate(yuima, xinit=1, true.parameter=list(theta11=-2, theta12=1, theta21=-1), subsampling=samp) Xt <- NULL for(i in 1:(N+1)){ Xt <- c(Xt, simu.yuima@data@original.data[50*(i-1)+1]) } ## Candidate coefficients diffusion <- c("exp((theta11*cos(x)+theta12*sin(x)+theta13)/2)", "exp((theta11*cos(x)+theta12*sin(x))/2)", "exp((theta11*cos(x)+theta13)/2)", "exp((theta12*sin(x)+theta13)/2)") drift <- c("theta21*x + theta22", "theta21*x") ## Parameter settings para.init <- list(theta11=runif(1,max=5,min=-5), theta12=runif(1,max=5,min=-5), theta13=runif(1,max=5,min=-5), theta21=runif(1,max=-0.5,min=-1.5), theta22=runif(1,max=-0.5,min=-1.5)) para.low <- list(theta11=-10, theta12=-10, theta13=-10, theta21=-5, theta22=-5) para.upp <- list(theta11=10, theta12=10, theta13=10, theta21=-0.001, theta22=-0.001) ## Ex.1.1 Joint ic1 <- IC(drif=drift, diff=diffusion, data=Xt, Terminal=Ter, start=para.init, lower=para.low, upper=para.upp, stepwise = FALSE, weight = FALSE, rcpp = TRUE) ic1 ## Ex.1.2 Stepwise ic2 <- IC(drif=drift, diff=diffusion, data=Xt, Terminal=Ter, start=para.init, lower=para.low, upper=para.upp, stepwise = TRUE, weight = FALSE, rcpp = TRUE) ic2 ### Ex.2 (multidimensional case) set.seed(123) N <- 3000 # number of data h <- N^(-2/3) # sampling stepsize Ter <- N*h # terminal sampling time ## Data generate diff.coef.matrix <- matrix(c("beta1*x1+beta3", "1", "-1", "beta1*x1+beta3"), 2, 2) drif.coef.vec <- c("alpha1*x1", "alpha2*x2") mod <- setModel(drift = drif.coef.vec, diffusion = diff.coef.matrix, state.variable = c("x1", "x2"), solve.variable = c("x1", "x2")) samp <- setSampling(Terminal
= Ter, n = N) yuima <- setYuima(model = mod, sampling = setSampling(Terminal = N^(1/3), n = 50*N)) simu.yuima <- simulate(yuima, xinit = c(1,1), true.parameter = list(alpha1=-2, alpha2=-1, beta1=-1, beta3=2), subsampling = samp) Xt <- matrix(0,(N+1),2) for(i in 1:(N+1)){ Xt[i,] <- simu.yuima@data@original.data[50*(i-1)+1,] } ## Candidate coefficients diffusion <- list(matrix(c("beta1*x1+beta2*x2+beta3", "1", "-1", "beta1*x1+beta2*x2+beta3"), 2, 2), matrix(c("beta1*x1+beta2*x2", "1", "-1", "beta1*x1+beta2*x2"), 2, 2), matrix(c("beta1*x1+beta3", "1", "-1", "beta1*x1+beta3"), 2, 2), matrix(c("beta2*x2+beta3", "1", "-1", "beta2*x2+beta3"), 2, 2), matrix(c("beta1*x1", "1", "-1", "beta1*x1"), 2, 2), matrix(c("beta2*x2", "1", "-1", "beta2*x2"), 2, 2), matrix(c("beta3", "1", "-1", "beta3"), 2, 2)) drift <- list(c("alpha1*x1", "alpha2*x2"), c("alpha1*x2", "alpha2*x1")) modsettings <- list(state.variable = c("x1", "x2"), solve.variable = c("x1", "x2")) ## Parameter settings para.init <- list(alpha1 = runif(1,min=-3,max=-1), alpha2 = runif(1,min=-2,max=0), beta1 = runif(1,min=-2,max=0), beta2 = runif(1,min=0,max=2), beta3 = runif(1,min=1,max=3)) para.low <- list(alpha1 = -5, alpha2 = -5, beta1 = -5, beta2 = -5, beta3 = 1) para.upp <- list(alpha1 = 0.01, alpha2 = -0.01, beta1 = 5, beta2 = 5, beta3 = 10) ## Ex.2.1 Joint ic3 <- IC(drif=drift, diff=diffusion, data=Xt, Terminal=Ter, add.settings=modsettings, start=para.init, lower=para.low, upper=para.upp, weight=FALSE, rcpp=FALSE) ic3 ## Ex.2.2 Stepwise ic4 <- IC(drif=drift, diff=diffusion, data=Xt, Terminal=Ter, add.settings=modsettings, start=para.init, lower=para.low, upper=para.upp, stepwise = TRUE, weight=FALSE, rcpp=FALSE) ic4 ## End(Not run) info.Map-class Class for information about Map/Operators Description Auxiliary class for the definition of an object of class yuima.Map. See the documentation of yuima.Map for more details.
info.PPR Class for information about Point Process

Description
Auxiliary class for the definition of an object of class yuima.PPR and yuima.Hawkes. See the documentation for more details.

Integral.sde Class for the mathematical description of integral of a stochastic process

Description
Auxiliary class for the definition of an object of class yuima.Integral. See the documentation of yuima.Integral for more details.

Integrand Class for the mathematical description of integral of a stochastic process

Description
Auxiliary class for the definition of an object of class yuima.Integral. See the documentation of yuima.Integral for more details.

Intensity.PPR Intensity Process for the Point Process Regression Model

Description
This function returns the intensity process of a Point Process Regression Model.

Usage
Intensity.PPR(yuimaPPR, param)

Arguments
yuimaPPR An object of class yuima.PPR
param Model parameters

Value
An object of class yuima.data

Author(s)
YUIMA TEAM

Examples
#INSERT HERE AN EXAMPLE

JBtest Remove jumps and calculate the Gaussian quasi-likelihood estimator based on the Jarque-Bera normality test

Description
Remove jumps and calculate the Gaussian quasi-likelihood estimator based on the Jarque-Bera normality test.

Usage
JBtest(yuima,start,lower,upper,alpha,skewness=TRUE,kurtosis=TRUE,withdrift=FALSE)

Arguments
yuima a yuima object (diffusion with compound Poisson jumps).
lower a named list for specifying lower bounds of parameters.
upper a named list for specifying upper bounds of parameters.
alpha Insert Description Here.
start initial values to be passed to the optimizer.
skewness use third moment information? By default, skewness=TRUE.
kurtosis use fourth moment information? By default, kurtosis=TRUE.
withdrift use drift information for constructing self-normalized residuals or not? By default, withdrift = FALSE.

Details
This function removes large increments which are regarded as jumps based on the iterative Jarque-Bera normality test, and after that calculates the Gaussian quasi-maximum likelihood estimator.

Value
Removed Removed jumps and jump times
OGQMLE Gaussian quasi-maximum likelihood estimator before jump removal
JRGQMLE Gaussian quasi-maximum likelihood estimator after jump removal
Figures For visualization, the jump points are presented. In addition, the histogram of the jump-removed self-normalized residuals, the transition of the estimators and the logarithm of the Jarque-Bera statistics are given as figures.

Author(s)
The YUIMA Project Team
Contacts: <NAME> <<EMAIL>>

References
<NAME>. (2013). Asymptotics for functionals of self-normalized residuals of discretely observed stochastic processes. Stochastic Processes and their Applications 123 (2013), 2752–2778.
Masuda, H. and <NAME>. (2018). Estimating Diffusion With Compound Poisson Jumps Based On Self-normalized Residuals, arXiv:1802.03945.

Examples
## Not run: 
set.seed(123)
mod <- setModel(drift="10-3*x", diffusion="theta*(2+x^2)/(1+x^2)", jump.coeff="1",
                measure=list(intensity="1",df=list("dunif(z, 3, 5)")), measure.type="CP")
T <- 10 ## Terminal
n <- 5000 ## generation size
samp <- setSampling(Terminal=T, n=n) ## define sampling scheme
yuima <- setYuima(model = mod, sampling = samp)
yuima <- simulate(yuima, xinit=1, true.parameter=list(theta=sqrt(2)), sampling = samp)
JBtest(yuima,start=list(theta=0.5),upper=c(theta=100),lower=c(theta=0),alpha=0.01)
## End(Not run)

lambdaFromData Intensity of a Point Process Regression Model

Description
This function returns the intensity process of a PPR model when covariates and counting processes are observed on discrete time.

Usage
lambdaFromData(yuimaPPR, PPRData = NULL, parLambda = list())

Arguments
yuimaPPR Mathematical description of the PPR model
PPRData Observed data
parLambda Values of intensity parameters

Details
...

Value
...

Note
...
Author(s)
YUIMA TEAM

References
...

See Also
...

lasso Adaptive LASSO estimation for stochastic differential equations

Description
Adaptive LASSO estimation for stochastic differential equations.

Usage
lasso(yuima, lambda0, start, delta=1, ...)

Arguments
yuima a yuima object.
lambda0 a named list with penalty for each parameter.
start initial values to be passed to the optimizer.
delta controls the amount of shrinking in the adaptive sequences.
... passed to optim method. See Examples.

Details
lasso behaves much like the standard qmle function, and the argument method is one of the methods available in optim. From an initial guess given by the QML estimates, it performs adaptive LASSO estimation using the Least Squares Approximation (LSA) as in Wang and Leng (2007, JASA).

Value
ans a list with both QMLE and LASSO estimates.

Author(s)
The YUIMA Project Team

Examples
## Not run: 
## multidimension case
diff.matrix <- matrix(c("theta1.1","theta1.2", "1", "1"), 2, 2)
drift.c <- c("-theta2.1*x1", "-theta2.2*x2", "-theta2.2", "-theta2.1")
drift.matrix <- matrix(drift.c, 2, 2)
ymodel <- setModel(drift=drift.matrix, diffusion=diff.matrix, time.variable="t",
                   state.variable=c("x1", "x2"), solve.variable=c("x1", "x2"))
n <- 100
ysamp <- setSampling(Terminal=(n)^(1/3), n=n)
yuima <- setYuima(model=ymodel, sampling=ysamp)
set.seed(123)
truep <- list(theta1.1=0.6, theta1.2=0, theta2.1=0.5, theta2.2=0)
yuima <- simulate(yuima, xinit=c(1, 1), true.parameter=truep)
est <- lasso(yuima, start=list(theta2.1=0.8, theta2.2=0.2, theta1.1=0.7, theta1.2=0.1),
             lower=list(theta1.1=1e-10,theta1.2=1e-10,theta2.1=.1,theta2.2=1e-10),
             upper=list(theta1.1=4,theta1.2=4,theta2.1=4,theta2.2=4),
             method="L-BFGS-B")
# TRUE
unlist(truep)
# QMLE
round(est$mle,3)
# LASSO
round(est$lasso,3)
## End(Not run)

LawMethods Methods for an object of class yuima.law

Description
Methods for yuima.law.

Usage
rand(object, n, param, ...)
dens(object, x, param, log = FALSE, ...)
cdf(object, q, param, ...)
quant(object, p, param, ...)

Arguments
object ...
n ...
param ...
... ...
x ...
log ...
q ...
p ...

Value
Methods for an object of yuima.law-class

Note
Insert additional info

Author(s)
YUIMA TEAM

limiting.gamma Calculate the value of limiting covariance matrices: Gamma

Description
To confirm asymptotic normality of theta estimators.

Usage
limiting.gamma(obj,theta,verbose=FALSE)

Arguments
obj a yuima or yuima.model object.
theta true theta
verbose an option to display a verbose process.

Details
Calculate the value of the limiting covariance matrices Gamma. The returned values gamma1 and gamma2 are used to confirm asymptotic normality of theta estimators. This program is limited to one-dimensional SDE models for now.

Value
gamma1 a theoretical figure for the variance of the theta1 estimator
gamma2 a theoretical figure for the variance of the theta2 estimator

Note
We need to fix this routine.

Author(s)
The YUIMA Project Team

Examples
set.seed(123)

## Yuima
diff.matrix <- matrix(c("theta1"), 1, 1)
myModel <- setModel(drift=c("(-1)*theta2*x"), diffusion=diff.matrix,
                    time.variable="t", state.variable="x")
n <- 100
mySampling <- setSampling(Terminal=(n)^(1/3), n=n)
myYuima <- setYuima(model=myModel, sampling=mySampling)
myYuima <- simulate(myYuima, xinit=1, true.parameter=list(theta1=0.6, theta2=0.3))

## theoretical figure of theta
theta1 <- 3.5
theta2 <- 1.3
theta <- list(theta1, theta2)
lim.gamma <- limiting.gamma(obj=myYuima, theta=theta, verbose=TRUE)

## return theta1 and theta2 with list
lim.gamma$list
## return theta1 and theta2 with vector
lim.gamma$vec

llag Lead Lag Estimator

Description
Estimate the lead-lag parameters of discretely observed processes by maximizing the shifted Hayashi-Yoshida covariation contrast functions, following Hoffmann et al. (2013).

Usage
llag(x, from = -Inf, to = Inf, division = FALSE, verbose = (ci || ccor),
     grid, psd = TRUE, plot = ci, ccor = ci, ci = FALSE, alpha = 0.01,
     fisher = TRUE, bw, tol = 1e-6)

Arguments
x an object of yuima-class or yuima.data-class.
verbose whether llag returns matrices or not. The default is FALSE.
from a numeric vector each of whose component(s) indicates the lower end of a finite grid on which the contrast function is evaluated, if grid is missing.
to a numeric vector each of whose component(s) indicates the upper end of a finite grid on which the contrast function is evaluated, if grid is missing.
division a numeric vector each of whose component(s) indicates the number of the points of a finite grid on which the contrast function is evaluated, if grid is missing.
grid a numeric vector or a list of numeric vectors. See 'Details'.
psd logical. If TRUE, the estimated cross-correlation functions are converted to the interval [-1,1]. See 'Details'.
plot logical. If TRUE, the estimated cross-correlation functions are plotted. If ci is also TRUE, the pointwise confidence intervals (under the null hypothesis that the corresponding correlation is zero) are also plotted. The default is FALSE.
ccor logical. If TRUE, the estimated cross-correlation functions are returned. This argument is ignored if verbose is FALSE. The default is FALSE.
ci logical. If TRUE, (pointwise) confidence intervals of the estimated cross-correlation functions and p-values for the significance of the correlations at the estimated lead-lag parameters are calculated. Note that the confidence intervals are only plotted when plot=TRUE.
alpha a positive number indicating the significance level of the confidence intervals for the cross-correlation functions.
fisher logical. If TRUE, the p-values and the confidence intervals for the cross-correlation functions are evaluated after applying the Fisher z transformation. This argument is only meaningful if pval = "corr".
bw bandwidth parameter to compute the asymptotic variances. See 'Details' and hyavar for details.
tol tolerance parameter to avoid numerical errors in comparison of time stamps. All time stamps are divided by tol and rounded to integers.
Note that the values of grid are also divided by tol and rounded to integers. A reasonable choice of tol is the minimum unit of time stamps. The default value 1e-6 supposes that the minimum unit of time stamps is greater than or equal to 1 micro-second.

Details
Let d be the number of the components of the zoo.data of the object x. Let X^i_{t^i_0}, X^i_{t^i_1}, ..., X^i_{t^i_{n(i)}} be the observation data of the i-th component (i.e. the i-th component of the zoo.data of the object x).
The shifted Hayashi-Yoshida covariation contrast function U^{ij}(θ) of the observations X^i and X^j (i < j) is defined in the same way as in Hoffmann et al. (2013), and corresponds to their cross-covariance function. The lead-lag parameter θ_{ij} is defined as a maximizer of |U^{ij}(θ)|. U^{ij}(θ) is evaluated on a finite grid G_{ij} defined below; thus θ_{ij} belongs to this grid. If there exists more than one maximizer, the lowest one is selected.
If psd is TRUE, for any i, j the matrix C := (U^{kl}(θ))_{k,l∈{i,j}} is converted to (C%*%C)^(1/2) to ensure positive semi-definiteness, and U^{ij}(θ) is redefined as the (1,2)-component of the converted C. Here, U^{kk}(θ) is set to the realized volatility of X^k. In this case θ_{ij} is given as a maximizer of the cross-correlation functions.
The grid G_{ij} is defined as follows. First, if grid is missing, G_{ij} is given by
a, a + (b − a)/(N − 1), ..., a + (N − 2)(b − a)/(N − 1), b,
where a, b and N are the (d(i − 1) − (i − 1)i/2 + (j − i))-th components of from, to and division respectively. If the corresponding component of from (resp. to) is -Inf (resp. Inf), a = −(t^j_{n(j)} − t^i_0) (resp. b = t^i_{n(i)} − t^j_0) is used, while if the corresponding component of division is FALSE, N = round(2 max(n(i), n(j))) + 1 is used. Missing components are filled with -Inf (resp. Inf, FALSE). The default value -Inf (resp. Inf, FALSE) means that all components are -Inf (resp. Inf, FALSE). Next, if grid is a numeric vector, G_{ij} is given by grid. If grid is a list of numeric vectors, G_{ij} is given by the (d(i − 1) − (i − 1)i/2 + (j − i))-th component of grid.
The estimated lead-lag parameters are returned as the skew-symmetric matrix (θ_{ij})_{i,j=1,...,d}. If verbose is TRUE, the covariance matrix (U^{ij}(θ_{ij}))_{i,j=1,...,d} corresponding to the estimated lead-lag parameters, the corresponding correlation matrix and the computed contrast functions are also returned. If further ccor is TRUE, the computed cross-correlation functions are returned as a list with length d(d − 1)/2. For i < j, the (d(i − 1) − (i − 1)i/2 + (j − i))-th component of the list consists of an object U^{ij}(θ)/sqrt(U^{ii}(θ) * U^{jj}(θ)) of class zoo indexed by G_{ij}.
If plot is TRUE, the computed cross-correlation functions are plotted sequentially.
If ci is TRUE, the asymptotic variances of the cross-correlations are calculated at each point of the grid by using the naive kernel approach described in Section 8.2 of Hayashi and Yoshida (2011). The implementation is the same as that of hyavar and a more detailed description is found there.

Value
If verbose is FALSE, a skew-symmetric matrix corresponding to the estimated lead-lag parameters is returned. Otherwise, an object of class "yuima.llag", which is a list with the following components, is returned:
lagcce a skew-symmetric matrix corresponding to the estimated lead-lag parameters.
covmat a covariance matrix corresponding to the estimated lead-lag parameters.
cormat a correlation matrix corresponding to the estimated lead-lag parameters.
LLR a matrix consisting of lead-lag ratios. See Huth and Abergel (2014) for details.
If ci is TRUE, the following component is added to the returned list:
p.values a matrix of p-values for the significance of the correlations corresponding to the estimated lead-lag parameters.
If further ccor is TRUE, the following components are added to the returned list:
ccor a list of computed cross-correlation functions.
avar a list of computed asymptotic variances of the cross-correlations (if ci = TRUE).

Note
The default grid usually contains too many points, so it is better for users to specify this argument in order to reduce the computational time. See 'Examples' below for an example of the specification.
The evaluated p-values should be interpreted carefully because they are calculated based on pointwise confidence intervals rather than simultaneous confidence intervals (so there would be a multiple testing problem). Evaluation of p-values based on the latter will be implemented in a future extension of this function: indeed, so far no theory has been developed for this. However, it is conjectured that the error distributions of the estimated cross-correlation functions are asymptotically independent if the grid is not too dense, so p-values evaluated by this function will still be meaningful as long as sufficiently low significance levels are used.

Author(s)
<NAME> with YUIMA Project Team

References
<NAME>. and <NAME>. (2011) Nonsynchronous covariation process and limit theorems, Stochastic processes and their applications, 121, 2416–2454.
<NAME>., <NAME>. and <NAME>. (2013) Estimation of the lead-lag parameter from non-synchronous data, Bernoulli, 19, no. 2, 426–461.
<NAME>. and <NAME>. (2014) High frequency lead/lag relationships — Empirical facts, Journal of Empirical Finance, 26, 41–58.

See Also
cce, hyavar, mllag, llag.test

Examples
## Set a model
diff.coef.matrix <- matrix(c("sqrt(x1)", "3/5*sqrt(x2)", "1/3*sqrt(x3)",
                             "", "4/5*sqrt(x2)", "2/3*sqrt(x3)",
                             "", "", "2/3*sqrt(x3)"), 3, 3)
drift <- c("1-x1", "2*(10-x2)", "3*(4-x3)")
cor.mod <- setModel(drift = drift, diffusion = diff.coef.matrix,
                    solve.variable = c("x1", "x2", "x3"))
set.seed(111)

## We use a function poisson.random.sampling
## to get observation by Poisson sampling.
yuima.samp <- setSampling(Terminal = 1, n = 1200)
yuima <- setYuima(model = cor.mod, sampling = yuima.samp)
yuima <- simulate(yuima, xinit=c(1,7,5))

## intentionally displace the second time series
data2 <- yuima@data@zoo.data[[2]]
time2 <- time(data2)
theta2 <- 0.05 # the lag of x2 behind x1
stime2 <- time2 + theta2
time(yuima@data@zoo.data[[2]]) <- stime2

data3 <- yuima@data@zoo.data[[3]]
time3 <- time(data3)
theta3 <- 0.12 # the lag of x3 behind x1
stime3 <- time3 + theta3
time(yuima@data@zoo.data[[3]]) <- stime3

## sampled data by Poisson rules
psample <- poisson.random.sampling(yuima, rate = c(0.2,0.3,0.4), n = 1000)

## plot
plot(psample)

## cce
cce(psample)

## lead-lag estimation (with cross-correlation plots)
par(mfcol=c(3,1))
result <- llag(psample, plot=TRUE)

## estimated lead-lag parameter
result

## computing pointwise confidence intervals
llag(psample, ci = TRUE)

## In practice, it is better to specify the grid because the default grid contains too many points.
## Here we give an example for how to specify it.
## We search lead-lag parameters on the interval [-0.1, 0.1] with step size 0.01
G <- seq(-0.1,0.1,by=0.01)

## lead-lag estimation (with computing confidence intervals)
result <- llag(psample, grid = G, ci = TRUE)

## Since the true lead-lag parameter 0.12 between x1 and x3 is not contained
## in the searching grid G, we see that the corresponding cross-correlation
## does not exceed the confidence interval

## detailed output
## the p-value for the (1,3)-th component is high
result

## Finally, we can examine confidence intervals of other significance levels
## and/or without the Fisher z-transformation via the plot-method defined
## for yuima.llag-class objects as follows
plot(result, alpha = 0.001)
plot(result, fisher = FALSE)
par(mfcol=c(1,1))

llag.test Wild Bootstrap Test for the Absence of Lead-Lag Effects

Description
Tests the absence of lead-lag effects (time-lagged correlations) by the wild bootstrap procedure proposed in Koike (2017) for each pair of components.

Usage
llag.test(x, from = -Inf, to = Inf, division = FALSE, grid, R = 999,
          parallel = "no", ncpus = getOption("boot.ncpus", 1L), cl = NULL,
          tol = 1e-06)

Arguments
x an object of yuima-class or yuima.data-class.
from a numeric vector each of whose component(s) indicates the lower end of a finite grid on which the contrast function is evaluated, if grid is missing.
to a numeric vector each of whose component(s) indicates the upper end of a finite grid on which the contrast function is evaluated, if grid is missing.
division a numeric vector each of whose component(s) indicates the number of the points of a finite grid on which the contrast function is evaluated, if grid is missing.
grid a numeric vector or a list of numeric vectors. See 'Details' of llag.
R a single positive integer indicating the number of bootstrap replicates.
parallel passed to boot.
ncpus passed to boot.
cl passed to boot.
tol tolerance parameter to avoid numerical errors in comparison of time stamps.
All time stamps are divided by tol and rounded to integers. Note that the values of grid are also divided by tol and rounded to integers. A reasonable choice of tol is the minimum unit of time stamps. The default value 1e-6 supposes that the minimum unit of time stamps is greater than or equal to 1 micro-second.

Details
For each pair of components, this function performs the wild bootstrap procedure proposed in Koike (2017) to test whether there is a (possibly) time-lagged correlation. The null hypothesis of the test is that there is no time-lagged correlation, and the alternative is its negative. The test rejects the null hypothesis if the maximum of the absolute values of cross-covariances is too large. The critical region is constructed by a wild bootstrap procedure with Rademacher variables as the multiplier variables.

Value
p.values a matrix whose components indicate the bootstrap p-values for the corresponding pair of components.
max.cov a matrix whose components indicate the maxima of the absolute values of cross-covariances for the corresponding pair of components.
max.corr a matrix whose components indicate the maxima of the absolute values of cross-correlations for the corresponding pair of components.

Author(s)
<NAME> with YUIMA Project Team

References
Koike, Y. (2019). Gaussian approximation of maxima of Wiener functionals and its application to high-frequency data, Annals of Statistics, 47, 1663–1687. doi:10.1214/18AOS1731.

See Also
cce, hyavar, mllag, llag

Examples
## Not run: 
# The following example is taken from mllag

## Set a model
diff.coef.matrix <- matrix(c("sqrt(x1)", "3/5*sqrt(x2)", "1/3*sqrt(x3)",
                             "", "4/5*sqrt(x2)", "2/3*sqrt(x3)",
                             "", "", "2/3*sqrt(x3)"), 3, 3)
drift <- c("1-x1", "2*(10-x2)", "3*(4-x3)")
cor.mod <- setModel(drift = drift, diffusion = diff.coef.matrix,
                    solve.variable = c("x1", "x2", "x3"))
set.seed(111)

## We use a function poisson.random.sampling
## to get observation by Poisson sampling.
yuima.samp <- setSampling(Terminal = 1, n = 1200)
yuima <- setYuima(model = cor.mod, sampling = yuima.samp)
yuima <- simulate(yuima, xinit=c(1,7,5))

## intentionally displace the second time series
data2 <- yuima@data@zoo.data[[2]]
time2 <- time(data2)
theta2 <- 0.05 # the lag of x2 behind x1
stime2 <- time2 + theta2
time(yuima@data@zoo.data[[2]]) <- stime2

data3 <- yuima@data@zoo.data[[3]]
time3 <- time(data3)
theta3 <- 0.12 # the lag of x3 behind x1
stime3 <- time3 + theta3
time(yuima@data@zoo.data[[3]]) <- stime3

## sampled data by Poisson rules
psample <- poisson.random.sampling(yuima, rate = c(0.2,0.3,0.4), n = 1000)

## We search lead-lag parameters on the interval [-0.1, 0.1] with step size 0.01
G <- seq(-0.1,0.1,by=0.01)

## perform lead-lag test
llag.test(psample, grid = G, R = 999)

## Since the lead-lag parameter for the pair (x1, x3) is not contained in G,
## the null hypothesis is not rejected for this pair
## End(Not run)

lm.jumptest Lee and Mykland's Test for the Presence of Jumps Using Normalized Returns

Description
Performs a test for the null hypothesis that the realized path has no jump following Lee and Mykland (2008).

Usage
lm.jumptest(yuima, K)

Arguments
yuima an object of yuima-class or yuima.data-class.
K a positive integer indicating the window size to compute local variance estimates. It can be specified as a vector to use different window sizes for different components. The default value is K=pmin(floor(sqrt(252*n)), n) with n=length(yuima)-1, following Lee and Mykland (2008) as well as Dumitru and Urga (2012).

Value
A list with the same length as dim(yuima). Each component of the list has class "htest" and contains the following components:
statistic the value of the test statistic of the corresponding component of yuima.
p.value an approximate p-value for the test of the corresponding component.
method the character string "Lee and Mykland jump test".
data.name the character string "xi", where i is the number of the component.
Author(s)
<NAME> with YUIMA Project Team

References
<NAME>. and <NAME>. (2012) Identifying jumps in financial assets: A comparison between nonparametric jump tests. Journal of Business and Economic Statistics, 30, 242–255.
<NAME>. and <NAME>. (2008) Jumps in financial markets: A new nonparametric test and jump dynamics. Review of Financial Studies, 21, 2535–2563.
<NAME>., <NAME>. and <NAME>. (2020) High-frequency jump tests: Which test should we use? Journal of Econometrics, 219, 478–487.
<NAME>. and <NAME>. (2011) A comprehensive comparison of alternative tests for jumps in asset prices. Central Bank of Cyprus Working Paper 2011-2.

See Also
bns.test, minrv.test, medrv.test, pz.test

Examples
set.seed(123)

# One-dimensional case
## Model: dXt=t*dWt+t*dzt,
## where zt is a compound Poisson process with intensity 5
## and jump sizes distribution N(0,0.1).
model <- setModel(drift=0, diffusion="t", jump.coeff="t", measure.type="CP",
                  measure=list(intensity=5, df=list("dnorm(z,0,sqrt(0.1))")),
                  time.variable="t")
yuima.samp <- setSampling(Terminal = 1, n = 390)
yuima <- setYuima(model = model, sampling = yuima.samp)
yuima <- simulate(yuima)
plot(yuima) # The path seems to involve some jumps

lm.jumptest(yuima) # p-value is very small, so the path would have a jump
lm.jumptest(yuima, K = floor(sqrt(390))) # different value of K

# Multi-dimensional case
## Model: Bivariate standard BM + CP
## Only the first component has jumps
mod <- setModel(drift = c(0, 0), diffusion = diag(2),
                jump.coeff = diag(c(1, 0)),
                measure = list(intensity = 5, df = "dmvnorm(z,c(0,0),diag(2))"),
                jump.variable = c("z"), measure.type=c("CP"),
                solve.variable=c("x1","x2"))
samp <- setSampling(Terminal = 1, n = 390)
yuima <- setYuima(model = mod, sampling = samp)
yuima <- simulate(yuima)
plot(yuima)
lm.jumptest(yuima) # test is performed component-wise

LogSPX Five-minute Log SPX prices

Description
Intraday five-minute Standard and Poor 500 log-price data ranging from 09 July 2012 to 01 April 2015.

Usage
data(LogSPX)

Details
The dataset is a list whose element Data$allObs contains the intraday five-minute Standard and Poor cumulative log-return data computed as Log(P_t)-Log(P_0), where P_0 is the open SPX price on 09 July 2012. Data$logdayprice contains daily SPX log prices. Each day has the same number of observations, and the value is reported in Data$obsinday.

Examples
data(LogSPX)

lseBayes Adaptive Bayes estimator for the parameters in sde model by using LSE functions

Description
Adaptive Bayes estimator for the parameters in a specific type of sde by using LSE functions.

Usage
lseBayes(yuima, start, prior, lower, upper, method = "mcmc", mcmc = 1000,
         rate = 1, algorithm = "randomwalk")

Arguments
yuima a 'yuima' object.
start initial suggestion for parameter values
prior a list of prior distributions for the parameters specified by 'code'. Currently, dunif(z, min, max), dnorm(z, mean, sd), dbeta(z, shape1, shape2), dgamma(z, shape, rate) are available.
lower a named list for specifying lower bounds of parameters
upper a named list for specifying upper bounds of parameters
method nomcmc requires package cubature
mcmc number of iterations of the Markov chain Monte Carlo method
rate a thinning parameter. Only the first n^rate observations will be used for inference.
algorithm Logical value when method = mcmc. If algorithm = "randomwalk" (default), the random-walk Metropolis algorithm will be performed. If algorithm = "MpCN", the Mixed preconditioned Crank-Nicolson algorithm will be performed.

Details
lseBayes is always performed by Rcpp code. It calculates the Bayes estimator for stochastic processes by using least squares estimate (LSE) functions. The calculation is performed by the Markov chain Monte Carlo method.
Currently, the random-walk Metropolis algorithm and the Mixed preconditioned Crank-Nicolson algorithm are implemented. In lseBayes, the LSE function for estimating the diffusion parameter differs from the LSE function for estimating the drift parameter. lseBayes is similar to adaBayes, but lseBayes calculates faster than adaBayes because of the LSE functions.

Value
vector a vector of the parameter estimates

Note
algorithm = "nomcmc" is unstable. nomcmc is going to be stopped.

Author(s)
<NAME> with YUIMA Project Team

References
<NAME>. (2011). Polynomial type large deviation inequalities and quasi-likelihood analysis for stochastic differential equations. Annals of the Institute of Statistical Mathematics, 63(3), 431-479.
Uchida, M. and <NAME>. (2014). Adaptive Bayes type estimators of ergodic diffusion processes from discrete observations. Statistical Inference for Stochastic Processes, 17(2), 181-219.
<NAME>. (2017). Ergodicity of Markov chain Monte Carlo with reversible proposal. Journal of Applied Probability, 54(2).
Examples
## Not run: 
#### 2-dim model
set.seed(123)
b <- c("-theta1*x1+theta2*sin(x2)+50", "-theta3*x2+theta4*cos(x1)+25")
a <- matrix(c("4+theta5*sin(x1)^2", "1", "1", "2+theta6*sin(x2)^2"), 2, 2)

true = list(theta1 = 0.5, theta2 = 5, theta3 = 0.3,
            theta4 = 5, theta5 = 1, theta6 = 1)
lower = list(theta1 = 0.1, theta2 = 0.1, theta3 = 0,
             theta4 = 0.1, theta5 = 0.1, theta6 = 0.1)
upper = list(theta1 = 1, theta2 = 10, theta3 = 0.9,
             theta4 = 10, theta5 = 10, theta6 = 10)
start = list(theta1 = runif(1), theta2 = rnorm(1), theta3 = rbeta(1,1,1),
             theta4 = rnorm(1), theta5 = rgamma(1,1,1), theta6 = rexp(1))

yuimamodel <- setModel(drift = b, diffusion = a,
                       state.variable = c("x1", "x2"),
                       solve.variable = c("x1", "x2"))
yuimasamp <- setSampling(Terminal = 50, n = 50*100)
yuima <- setYuima(model = yuimamodel, sampling = yuimasamp)
yuima <- simulate(yuima, xinit = c(100,80), true.parameter = true, sampling = yuimasamp)

prior <- list(
  theta1 = list(measure.type = "code", df = "dunif(z,0,1)"),
  theta2 = list(measure.type = "code", df = "dnorm(z,0,1)"),
  theta3 = list(measure.type = "code", df = "dbeta(z,1,1)"),
  theta4 = list(measure.type = "code", df = "dgamma(z,1,1)"),
  theta5 = list(measure.type = "code", df = "dnorm(z,0,1)"),
  theta6 = list(measure.type = "code", df = "dnorm(z,0,1)")
)

mle <- qmle(yuima, start = start, lower = lower, upper = upper,
            method = "L-BFGS-B", rcpp = TRUE)
print(mle@coef)

set.seed(123)
bayes1 <- lseBayes(yuima, start = start, prior = prior, method = "mcmc",
                   mcmc = 1000, lower = lower, upper = upper,
                   algorithm = "randomwalk")
bayes1@coef

set.seed(123)
bayes2 <- lseBayes(yuima, start = start, prior = prior, method = "mcmc",
                   mcmc = 1000, lower = lower, upper = upper,
                   algorithm = "MpCN")
bayes2@coef
## End(Not run)

mllag Multiple Lead-Lag Detector

Description
Detecting the lead-lag parameters of discretely observed processes by picking time shifts at which the Hayashi-Yoshida cross-correlation functions exceed thresholds, which are constructed based on the asymptotic theory of Hayashi and Yoshida (2011).
Usage
mllag(x, from = -Inf, to = Inf, division = FALSE, grid, psd = TRUE,
      plot = TRUE, alpha = 0.01, fisher = TRUE, bw)

Arguments
x an object of yuima-class or yuima.data-class or yuima.llag-class (output of llag) or yuima.mllag-class (output of this function).
from passed to llag.
to passed to llag.
division passed to llag.
grid passed to llag.
psd passed to llag.
plot logical. If TRUE, the estimated cross-correlation functions and the pointwise confidence intervals (under the null hypothesis that the corresponding correlation is zero) as well as the detected lead-lag parameters are plotted.
alpha a positive number indicating the significance level of the confidence intervals for the cross-correlation functions.
fisher logical. If TRUE, the p-values and the confidence intervals for the cross-correlation functions are evaluated after applying the Fisher z transformation.
bw passed to llag.

Details
The computation method of cross-correlation functions and confidence intervals is the same as the one used in llag. The difference between this function and llag is how the lead-lag parameters are detected. While llag only returns the maximizer of the absolute value of the cross-correlations following the theory of Hoffmann et al. (2013), this function returns all the time shifts at which the cross-correlations exceed the thresholds (so there is also the possibility that no lead-lag parameter is returned). Note that this approach is mathematically debatable because there would be a multiple testing problem (see also 'Note' of llag), so the interpretation of the result from this function should be addressed carefully. In particular, the significance level alpha probably does not give the "correct" level.

Value
An object of class "yuima.mllag", which is a list with the following elements:
mlagcce a list of data.frame-class objects consisting of lagcce (lead-lag parameters), p.value and correlation.
LLR a matrix consisting of lead-lag ratios. See Huth and Abergel (2014) for details.
ccor a list of computed cross-correlation functions.
avar a list of computed asymptotic variances of the cross-correlations (if ci = TRUE).
CI a list of computed confidence intervals.

Author(s)
<NAME> with YUIMA Project Team

References
<NAME>. and <NAME>. (2011) Nonsynchronous covariation process and limit theorems, Stochastic processes and their applications, 121, 2416–2454.
<NAME>., <NAME>. and <NAME>. (2013) Estimation of the lead-lag parameter from non-synchronous data, Bernoulli, 19, no. 2, 426–461.
<NAME>. and <NAME>. (2014) High frequency lead/lag relationships — Empirical facts, Journal of Empirical Finance, 26, 41–58.

See Also
llag, hyavar, llag.test

Examples
# The first example is taken from llag

## Set a model
diff.coef.matrix <- matrix(c("sqrt(x1)", "3/5*sqrt(x2)", "1/3*sqrt(x3)",
                             "", "4/5*sqrt(x2)", "2/3*sqrt(x3)",
                             "", "", "2/3*sqrt(x3)"), 3, 3)
drift <- c("1-x1", "2*(10-x2)", "3*(4-x3)")
cor.mod <- setModel(drift = drift, diffusion = diff.coef.matrix,
                    solve.variable = c("x1", "x2", "x3"))
set.seed(111)

## We use a function poisson.random.sampling
## to get observation by Poisson sampling.
yuima.samp <- setSampling(Terminal = 1, n = 1200)
yuima <- setYuima(model = cor.mod, sampling = yuima.samp)
yuima <- simulate(yuima, xinit=c(1,7,5))

## intentionally displace the second time series
data2 <- yuima@data@zoo.data[[2]]
time2 <- time(data2)
theta2 <- 0.05 # the lag of x2 behind x1
stime2 <- time2 + theta2
time(yuima@data@zoo.data[[2]]) <- stime2

data3 <- yuima@data@zoo.data[[3]]
time3 <- time(data3)
theta3 <- 0.12 # the lag of x3 behind x1
stime3 <- time3 + theta3
time(yuima@data@zoo.data[[3]]) <- stime3

## sampled data by Poisson rules
psample <- poisson.random.sampling(yuima, rate = c(0.2,0.3,0.4), n = 1000)

## We search lead-lag parameters on the interval [-0.1, 0.1] with step size 0.01
G <- seq(-0.1,0.1,by=0.01)

## lead-lag estimation by mllag
par(mfcol=c(3,1))
result <- mllag(psample, grid = G)

## Since the lead-lag parameter for the pair (x1, x3) is not contained in G,
## no lead-lag parameter is detected for this pair
par(mfcol=c(1,1))

# The second example is a situation where multiple lead-lag effects exist
set.seed(222)
n <- 3600
Times <- seq(0, 1, by = 1/n)
R1 <- 0.6
R2 <- -0.3
dW1 <- rnorm(n + 10)/sqrt(n)
dW2 <- rnorm(n + 5)/sqrt(n)
dW3 <- rnorm(n)/sqrt(n)
x <- zoo(diffinv(dW1[-(1:10)] + dW2[1:n]), Times)
y <- zoo(diffinv(R1 * dW1[1:n] + R2 * dW2[-(1:5)] + sqrt(1- R1^2 - R2^2) * dW3), Times)

## In this setting, both x and y have a component leading to the other,
## but x's leading component dominates y's
yuima <- setData(list(x, y))

## Lead-lag estimation by llag
G <- seq(-30/n, 30/n, by = 1/n)
est <- llag(yuima, grid = G, ci = TRUE)

## The shape of the plotted cross-correlation is evidently bimodal,
## so there are likely two lead-lag parameters

## Lead-lag estimation by mllag
mllag(est) # succeeds in detecting two lead-lag parameters

## Next consider a non-synchronous sampling case
psample <- poisson.random.sampling(yuima, n = n, rate = c(0.8, 0.7))

## Lead-lag estimation by mllag
est <- mllag(psample, grid = G)
est # detects too many lead-lag parameters
## Using a lower significance level
mllag(est, alpha = 0.001) # insufficient

## As the plot reveals, one reason is because the grid is too dense
## In fact, this phenomenon can be avoided by using a coarser grid
mllag(psample, grid = seq(-30/n, 30/n, by=5/n)) # succeeds!

mmfrac mmfrac

Description

Estimates the drift of a fractional Ornstein-Uhlenbeck process and, if necessary, also the Hurst and diffusion parameters.

Usage

mmfrac(yuima, ...)

Arguments

yuima a yuima object.
... arguments passed to qgv.

Details

Estimates the drift of a fractional Ornstein-Uhlenbeck process and, if necessary, also the Hurst and diffusion parameters.

Value

an object of class mmfrac

Author(s)

The YUIMA Project Team

References

<NAME>., <NAME>. (2013) Parameter estimation for the discretely observed fractional Ornstein-Uhlenbeck process and the Yuima R package, Computational Statistics, pp. 1129–1147.

See Also

See also qgv.

Examples

# Estimating the Hurst parameter, diffusion coefficient and drift coefficient
# in the fractional Ornstein-Uhlenbeck process
model<-setModel(drift="-x*lambda",hurst=NA,diffusion="theta")
sampling<-setSampling(T=100,n=10000)
yui1<-simulate(model,true.param=list(theta=1,lambda=4),hurst=0.7,sampling=sampling)
mmfrac(yui1)

model.parameter-class Class for the parameter description of stochastic differential equations

Description

The model.parameter-class is a class of the yuima package.

Details

The model.parameter-class object cannot be directly specified by the user but it is constructed when the yuima.model-class object is constructed via setModel. All the terms which are not in the list of solution, state, time, jump variables are considered as parameters. These parameters are identified in the different components of the model (drift, diffusion and jump part). This information is later used to draw inference jointly or separately for the different parameters depending on the model at hand.

Slots

drift: A vector of names of parameters belonging to the drift coefficient.
diffusion: A vector of names of parameters belonging to the diffusion coefficient.
jump: A vector of names of parameters belonging to the jump coefficient.
measure: A vector of names of parameters belonging to the Levy measure.
xinit: A vector of names of parameters belonging to the initial condition.
all: A vector of names of all the parameters found in the components of the model.
common: A vector of names of the parameters in common among drift, diffusion, jump and measure term.

Author(s)

The YUIMA Project Team

mpv Realized Multipower Variation

Description

The function returns the realized MultiPower Variation (mpv), defined in Barndorff-Nielsen and Shephard (2004), for each component.

Usage

mpv(yuima, r = 2, normalize = TRUE)

Arguments

yuima an object of yuima-class or yuima.data-class.
r a vector of non-negative numbers or a list of vectors of non-negative numbers.
normalize logical. See 'Details'.

Details

Let d be the number of the components of the zoo.data of yuima. Let X^i_{t_0}, X^i_{t_1}, ..., X^i_{t_n} be the observation data of the i-th component (i.e. the i-th component of the zoo.data of yuima), and write Delta X^i_{t_j} = X^i_{t_j} - X^i_{t_{j-1}}.

When r is a k-dimensional vector of non-negative numbers, mpv(yuima, r, normalize=TRUE) is defined as the d-dimensional vector with i-th element equal to

mu_{r[1]}^{-1} ... mu_{r[k]}^{-1} n^{(r[1]+...+r[k])/2 - 1} sum_{j=1}^{n-k+1} |Delta X^i_{t_j}|^{r[1]} |Delta X^i_{t_{j+1}}|^{r[2]} ... |Delta X^i_{t_{j+k-1}}|^{r[k]},

where mu_p is the p-th absolute moment of the standard normal distribution. If normalize is FALSE the result is not multiplied by mu_{r[1]}^{-1} ... mu_{r[k]}^{-1}.

When r is a list of vectors of non-negative numbers, mpv(yuima, r, normalize=TRUE) is defined as the d-dimensional vector with i-th element equal to

mu_{r^i_1}^{-1} ... mu_{r^i_{k_i}}^{-1} n^{(r^i_1+...+r^i_{k_i})/2 - 1} sum_{j=1}^{n-k_i+1} |Delta X^i_{t_j}|^{r^i_1} ... |Delta X^i_{t_{j+k_i-1}}|^{r^i_{k_i}},

where r^i_1, ..., r^i_{k_i} is the i-th component of r. If normalize is FALSE the result is not multiplied by mu_{r^i_1}^{-1} ... mu_{r^i_{k_i}}^{-1}.

Value

A numeric vector with the same length as the zoo.data of yuima

Author(s)

<NAME> with YUIMA Project Team

References

Barndorff-Nielsen, <NAME>. and <NAME>.
(2004) Power and bipower variation with stochastic volatility and jumps, Journal of Financial Econometrics, 2, no. 1, 1–37.
<NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2006) A central limit theorem for realised power and bipower variations of continuous semimartingales, in: <NAME>., <NAME>., <NAME>. (Eds.), From Stochastic Calculus to Mathematical Finance: The Shiryaev Festschrift, Springer-Verlag, Berlin, pp. 33–68.

See Also

setData, cce, minrv, medrv

Examples

## Not run:
set.seed(123)

# One-dimensional case
## Model: dXt=t*dWt+t*dzt,
## where zt is a compound Poisson process with intensity 5 and jump sizes distribution N(0,0.1).
model <- setModel(drift=0,diffusion="t",jump.coeff="t",measure.type="CP", measure=list(intensity=5,df=list("dnorm(z,0,sqrt(0.1))")), time.variable="t")
yuima.samp <- setSampling(Terminal = 1, n = 390)
yuima <- setYuima(model = model, sampling = yuima.samp)
yuima <- simulate(yuima)
plot(yuima)

mpv(yuima) # true value is 1/3
mpv(yuima,1) # true value is 1/2
mpv(yuima,rep(2/3,3)) # true value is 1/3

# Multi-dimensional case
## Model: dXk_t=t*dWk_t (k=1,2,3).
diff.matrix <- diag(3)
diag(diff.matrix) <- c("t","t","t")
model <- setModel(drift=c(0,0,0),diffusion=diff.matrix,time.variable="t", solve.variable=c("x1","x2","x3"))
yuima.samp <- setSampling(Terminal = 1, n = 390)
yuima <- setYuima(model = model, sampling = yuima.samp)
yuima <- simulate(yuima)
plot(yuima)

mpv(yuima,list(c(1,1),1,rep(2/3,3))) # true value is c(1/3,1/2,1/3)

## End(Not run)

MWK151 Graybill - Methuselah Walk - PILO - ITRDB CA535

Description

Graybill - Methuselah Walk - PILO - ITRDB CA535, pine tree width in mm from -608 to 1957.

Usage

data(MWK151)

Details

The full data records of past temperature, precipitation, and climate and environmental change derived from tree ring measurements. Parameter keywords describe what was measured in this data set.
Additional summary information can be found in the abstracts of papers listed in the data set citations; however, many of the data sets arise from unpublished research contributed to the International Tree Ring Data Bank. Additional information on data processing and analysis for International Tree Ring Data Bank (ITRDB) data sets can be found on the Tree Ring Page https://www.ncei.noaa.gov/products/paleoclimatology.

The MWK151 data set is only a small part of the data, relative to one tree, and contains measurements of the tree's ring width in mm, from -608 to 1957.

Source

ftp://ftp.ncdc.noaa.gov/pub/data/paleo/treering/measurements/northamerica/usa/ca535.rwl

References

Graybill, D.A., and <NAME>., Dendroclimatic evidence from the northern Soviet Union, in Climate since A.D. 1500, edited by <NAME> and <NAME>, Routledge, London, 393-414, 1992.

Examples

data(MWK151)

noisy.sampling Noisy Observation Generator

Description

Generates new observation data contaminated by noise.

Usage

noisy.sampling(x, var.adj = 0, rng = "rnorm", mean.adj = 0, ..., end.coef = 0, n, order.adj = 0, znoise)

Arguments

x an object of yuima-class or yuima.data-class.
var.adj a matrix or list to be used for adjusting the variance matrix of the exogenous noise.
rng a function to be used for generating the random numbers for the exogenous noise.
mean.adj a numeric vector to be used for adjusting the mean vector of the exogenous noise.
... passed to rng.
end.coef a numeric vector or list to be used for adjusting the variance of the endogenous noise.
n a numeric vector to be used for adjusting the scale of the endogenous noise.
order.adj a positive number to be used for adjusting the order of the noise.
znoise a list indicating other sources of noise processes. The default value is as.list(double(dim(x))).

Details

This function simulates microstructure noise and adds it to the path of x. Currently, this function can deal with Kalnina and Linton (2008) type microstructure noise.
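Conceptually, each observed value in this noise model is the latent path plus an exogenous noise term, optionally with an endogenous component proportional to the returns. The following pure-Python sketch mimics that observation scheme; it is only an illustration of the idea, not the yuima implementation, and the helper name `contaminate` is hypothetical:

```python
import random

def contaminate(path, noise_sd=0.1, end_coef=0.0, seed=123):
    """Return a copy of `path` contaminated by i.i.d. exogenous noise
    (standard deviation noise_sd) plus an endogenous term proportional to
    the returns: Y_i = X_i + u_i + end_coef * (X_i - X_{i-1})."""
    rng = random.Random(seed)
    noisy = []
    for i, x in enumerate(path):
        u = rng.gauss(0.0, noise_sd)            # exogenous noise
        dx = path[i] - path[i - 1] if i > 0 else 0.0
        noisy.append(x + u + end_coef * dx)     # endogenous component
    return noisy

latent = [0.01 * i for i in range(100)]          # toy latent path
observed = contaminate(latent, noise_sd=0.05, end_coef=-0.1)
```

In noisy.sampling, var.adj and mean.adj play the role of the exogenous noise moments and end.coef that of the endogenous coefficient; the sketch collapses all of them to scalars for clarity.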
See 'Examples' below for more details.

Value

an object of yuima.data-class.

Author(s)

The YUIMA Project Team

References

<NAME>. and <NAME>. (2008) Estimating quadratic variation consistently in the presence of endogenous and diurnal measurement error, Journal of Econometrics, 147, 47–59.

See Also

cce, lmm

Examples

## Set a model (a two-dimensional normal model sampled by a Poisson random sampling)
set.seed(123)
drift <- c(0,0)
sigma1 <- 1
sigma2 <- 1
rho <- 0.7
diffusion <- matrix(c(sigma1,sigma2*rho,0,sigma2*sqrt(1-rho^2)),2,2)
model <- setModel(drift=drift,diffusion=diffusion, state.variable=c("x1","x2"),solve.variable=c("x1","x2"))
yuima.samp <- setSampling(Terminal = 1, n = 2340)
yuima <- setYuima(model = model, sampling = yuima.samp)
yuima <- simulate(yuima)

## Poisson random sampling
psample <- poisson.random.sampling(yuima, rate = c(1/3,1/6), n = 2340)

## Plot the path without noise
plot(psample)

# Set a matrix as the variance of noise
Omega <- 0.01*diffusion %*% t(diffusion)

## Contaminate the observation data by centered normally distributed noise
## with the variance matrix equal to 1% of the diffusion
noisy.psample1 <- noisy.sampling(psample,var.adj=Omega)
plot(noisy.psample1)

## Contaminate the observation data by centered uniformly distributed noise
## with the variance matrix equal to 1% of the diffusion
noisy.psample2 <- noisy.sampling(psample,var.adj=Omega,rng="runif",min=-sqrt(3),max=sqrt(3))
plot(noisy.psample2)

## Contaminate the observation data by centered exponentially distributed noise
## with the variance matrix equal to 1% of the diffusion
noisy.psample3 <- noisy.sampling(psample,var.adj=Omega,rng="rexp",rate=1,mean.adj=1)
plot(noisy.psample3)

## Contaminate the observation data by its return series
## multiplied by -0.1 times the square root of the intensity vector
## of the Poisson random sampling
noisy.psample4 <- noisy.sampling(psample,end.coef=-0.1,n=2340*c(1/3,1/6))
plot(noisy.psample4)

## An application:
## Adding a compound Poisson
## jumps to the observation data

## Set a compound Poisson process
intensity <- 5
j.num <- rpois(1,intensity) # Set a number of jumps
j.idx <- unique(ceiling(2340*runif(j.num))) # Set time indices of jumps
jump <- matrix(0,2,2341)
jump[,j.idx+1] <- sqrt(0.25/intensity)*diffusion %*% matrix(rnorm(length(j.idx)),2,length(j.idx))
grid <- seq(0,1,by=1/2340)
CPprocess <- list(zoo(cumsum(jump[1,]),grid),zoo(cumsum(jump[2,]),grid))

## Adding the jumps
yuima.jump <- noisy.sampling(yuima,znoise=CPprocess)
plot(yuima.jump)

## Poisson random sampling
psample.jump <- poisson.random.sampling(yuima.jump, rate = c(1/3,1/6), n = 2340)
plot(psample.jump)

ntv Volatility Estimation and Jump Test Using Nearest Neighbor Truncation

Description

minrv and medrv respectively compute the MinRV and MedRV estimators introduced in Andersen, Dobrev and Schaumburg (2012). minrv.test and medrv.test respectively perform Hausman-type tests for the null hypothesis that the realized path has no jump using the MinRV and MedRV estimators. See Section 4.4 in Andersen, Dobrev and Schaumburg (2014) for a concise discussion.

Usage

minrv(yuima)
medrv(yuima)
minrv.test(yuima, type = "ratio", adj = TRUE)
medrv.test(yuima, type = "ratio", adj = TRUE)

Arguments

yuima an object of yuima-class or yuima.data-class.
type type of the test statistic to use. ratio is default.
adj logical; if TRUE, the maximum adjustment suggested in Barndorff-Nielsen and Shephard (2004) is applied to the test statistic when type is equal to either “log” or “ratio”. See also Section 2.5 in Dumitru and Urga (2012).

Value

minrv and medrv return a numeric vector with the same length as dim(yuima). Each component of the vector is a volatility estimate for the corresponding component of yuima.

minrv.test and medrv.test return a list with the same length as dim(yuima). Each component of the list has class “htest” and contains the following components:

statistic the value of the test statistic of the corresponding component of yuima.
p.value an approximate p-value for the test of the corresponding component. method the character string “Andersen-Dobrev-Schaumburg jump test based on xxx”, where xxx is either MinRV or MedRV. data.name the character string “xi”, where i is the number of the component. Author(s) <NAME> with YUIMA Project Team References <NAME>., <NAME>. and <NAME>. (2012) Jump-robust volatility estimation using nearest neighbor truncation. Journal of Econometrics, 169, 75–93. <NAME>., <NAME>. and <NAME>. (2014) A robust neighborhood truncation approach to estimation of integrated quarticity. Econometric Theory, 30, 3–59. <NAME>. and <NAME>. (2012) Identifying jumps in financial assets: A comparison between nonparametric jump tests. Journal of Business and Economic Statistics, 30, 242–255. <NAME>., <NAME>. and <NAME>. (2020) High-frequency jump tests: Which test should we use? Journal of Econometrics, 219, 478–487. Theodosiou, M. and <NAME>. (2011) A comprehensive comparison of alternative tests for jumps in asset prices. Central Bank of Cyprus Working Paper 2011-2. See Also mpv, cce, bns.test, lm.jumptest, pz.test Examples ## Not run: set.seed(123) # One-dimensional case ## Model: dXt=t*dWt+t*dzt, ## where zt is a compound Poisson process with intensity 5 ## and jump sizes distribution N(0,1). 
model <- setModel(drift=0,diffusion="t",jump.coeff="t",measure.type="CP", measure=list(intensity=5,df=list("dnorm(z,0,1)")), time.variable="t")
yuima.samp <- setSampling(Terminal = 1, n = 390)
yuima <- setYuima(model = model, sampling = yuima.samp)
yuima <- simulate(yuima)
plot(yuima) # The path evidently has some jumps

## Volatility estimation
minrv(yuima) # minRV (true value = 1/3)
medrv(yuima) # medRV (true value = 1/3)

## Jump test
minrv.test(yuima, type = "standard")
minrv.test(yuima,type="log")
minrv.test(yuima,type="ratio")
medrv.test(yuima, type = "standard")
medrv.test(yuima,type="log")
medrv.test(yuima,type="ratio")

# Multi-dimensional case
## Model: Bivariate standard BM + CP
## Only the first component has jumps
mod <- setModel(drift = c(0, 0), diffusion = diag(2), jump.coeff = diag(c(1, 0)), measure = list(intensity = 5, df = "dmvnorm(z,c(0,0),diag(2))"), jump.variable = c("z"), measure.type=c("CP"), solve.variable=c("x1","x2"))
samp <- setSampling(Terminal = 1, n = 390)
yuima <- simulate(object = mod, sampling = samp)
plot(yuima)

## Volatility estimation
minrv(yuima) # minRV (true value = c(1, 1))
medrv(yuima) # medRV (true value = c(1, 1))

## Jump test
minrv.test(yuima) # test is performed component-wise
medrv.test(yuima) # test is performed component-wise

## End(Not run)

param.Integral Class for the mathematical description of the integral of a stochastic process

Description

Auxiliary class for the definition of an object of class yuima.Integral. See the documentation of yuima.Integral for more details.

param.Map-class Class for information about Map/Operators

Description

Auxiliary class for the definition of an object of class yuima.Map. See the documentation of yuima.Map for more details.

phi.test Phi-divergence test statistic for stochastic differential equations

Description

Phi-divergence test statistic for stochastic differential equations.

Usage

phi.test(yuima, H0, H1, phi, print=FALSE,...)
Arguments

yuima a yuima object.
H0 a named list of parameters under H0.
H1 a named list of parameters under H1.
phi the phi function to be used in the test. See Details.
print you can see a progress of the estimation when print is TRUE.
... passed to qmle function.

Details

phi.test executes a Phi-divergence test. If H1 is not specified, this hypothesis is filled with the QMLE estimates. If phi is missing, then phi(x)=1-x+x*log(x) and the Phi-divergence statistic corresponds to the likelihood ratio test statistic.

Value

ans an object of class phitest.

Author(s)

The YUIMA Project Team

Examples

## Not run:
model<- setModel(drift="t1*(t2-x)",diffusion="t3")
T<-10
n<-1000
sampling <- setSampling(Terminal=T,n=n)
yuima<-setYuima(model=model, sampling=sampling)

h0 <- list(t1=0.3, t2=1, t3=0.25)
X <- simulate(yuima, xinit=1, true=h0)
h1 <- list(t1=0.3, t2=0.2, t3=0.1)

phi1 <- function(x) 1-x+x*log(x)

phi.test(X, H0=h0, H1=h1,phi=phi1)
phi.test(X, H0=h0, phi=phi1, start=h0, lower=list(t1=0.1, t2=0.1, t3=0.1), upper=list(t1=2,t2=2,t3=2),method="L-BFGS-B")
phi.test(X, H0=h1, phi=phi1, start=h0, lower=list(t1=0.1, t2=0.1, t3=0.1), upper=list(t1=2,t2=2,t3=2),method="L-BFGS-B")

## End(Not run)

poisson.random.sampling Poisson random sampling method

Description

Poisson random sampling method.

Usage

poisson.random.sampling(x, rate, n)

Arguments

x an object of yuima.data-class or yuima-class.
rate a Poisson intensity or a vector of Poisson intensities.
n a common multiplier to the Poisson intensities. The default value is 1.

Details

It returns an object of type yuima.data-class which is a copy of the original input data where observations are sampled according to the Poisson process. The unsampled data are set to NA.

Value

an object of yuima.data-class.
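The sampling scheme just described can be sketched in a few lines of any language: simulate Poisson arrival times with the given intensity, keep the grid observations closest to those arrivals, and mark everything else as missing (NA in R, None below). This is a hedged pure-Python illustration of the idea, not the package code, and the function name is an assumption:

```python
import random

def poisson_random_sampling(times, values, intensity, seed=111):
    """Thin a regularly observed path: an observation survives only if its
    grid point is the closest one to some arrival of a Poisson process with
    the given intensity on [0, T]; all other values become None."""
    rng = random.Random(seed)
    horizon = times[-1]
    # Poisson arrival times = cumulative sums of exponential waiting times
    arrivals, t = [], 0.0
    while True:
        t += rng.expovariate(intensity)
        if t > horizon:
            break
        arrivals.append(t)
    dt = times[1] - times[0]
    kept = {min(round(a / dt), len(times) - 1) for a in arrivals}
    return [v if i in kept else None for i, v in enumerate(values)]

grid = [i / 1000 for i in range(1001)]
path = [t ** 2 for t in grid]                    # toy path
sampled = poisson_random_sampling(grid, path, intensity=200)
```

On average about intensity * T observations survive (here roughly 200 of the 1001 grid points), mirroring the role of rate * n in the R function.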
Author(s)

The YUIMA Project Team

See Also

cce

Examples

## Set a model
diff.coef.1 <- function(t, x1=0, x2) x2*(1+t)
diff.coef.2 <- function(t, x1, x2=0) x1*sqrt(1+t^2)
cor.rho <- function(t, x1=0, x2=0) sqrt((1+cos(x1*x2))/2)
diff.coef.matrix <- matrix(c("diff.coef.1(t,x1,x2)", "diff.coef.2(t,x1,x2)*cor.rho(t,x1,x2)", "", "diff.coef.2(t,x1,x2)*sqrt(1-cor.rho(t,x1,x2)^2)"),2,2)
cor.mod <- setModel(drift=c("",""), diffusion=diff.coef.matrix, solve.variable=c("x1", "x2"), xinit=c(3,2))
set.seed(111)

## We first simulate the two dimensional diffusion model
yuima.samp <- setSampling(Terminal=1, n=1200)
yuima <- setYuima(model=cor.mod, sampling=yuima.samp)
yuima.sim <- simulate(yuima)

## Then we use function poisson.random.sampling to get observations
## by Poisson sampling.
psample <- poisson.random.sampling(yuima.sim, rate = c(0.2, 0.3), n=1000)
str(psample)

pz.test Podolskij and Ziggel's Test for the Presence of Jumps Using Power Variation with Perturbed Truncation

Description

Performs a test for the null hypothesis that the realized path has no jump following Podolskij and Ziggel (2010).

Usage

pz.test(yuima, p = 4, threshold = "local", tau = 0.05)

Arguments

yuima an object of yuima-class or yuima.data-class.
p a positive number indicating the exponent of the (truncated) power variation to compute test statistic(s). Theoretically, it must be greater than or equal to 2.
threshold a numeric vector or list indicating the threshold parameter(s). Each of its components indicates the threshold parameter or process to be used for estimating the corresponding component. If it is a numeric vector, the elements in threshold are recycled if there are too few elements in threshold. Alternatively, you can specify either "PZ" or "local" to automatically select a (hopefully) appropriate threshold. When threshold="PZ", selection is performed following Section 5.1 in Podolskij and Ziggel (2010). When threshold="local", selection is performed following Section 5.1 in Koike (2014).
The default is threshold="local".
tau a probability controlling the strength of perturbation. See Section 2.3 in Podolskij and Ziggel (2010) for details. Podolskij and Ziggel (2010) suggest using a relatively small value for tau, e.g. tau=0.1 or tau=0.05.

Value

A list with the same length as dim(yuima). Each component of the list has class “htest” and contains the following components:

statistic the value of the test statistic of the corresponding component of yuima.
p.value an approximate p-value for the test of the corresponding component.
method the character string “Podolskij and Ziggel jump test”.
data.name the character string “xi”, where i is the number of the component.

Note

Podolskij and Ziggel (2010) also introduce a pre-averaged version of the test to deal with noisy observations. Such a test will be implemented in a future version of the package.

Author(s)

<NAME> with YUIMA Project Team

References

<NAME>. and <NAME>. (2012) Identifying jumps in financial assets: A comparison between nonparametric jump tests. Journal of Business and Economic Statistics, 30, 242–255.
Koike, Y. (2014) An estimator for the cumulative co-volatility of asynchronously observed semimartingales with jumps, Scandinavian Journal of Statistics, 41, 460–481.
<NAME>., <NAME>. and <NAME>. (2020) High-frequency jump tests: Which test should we use? Journal of Econometrics, 219, 478–487.
<NAME>. and <NAME>. (2010) New tests for jumps in semimartingale models, Statistical Inference for Stochastic Processes, 13, 15–41.
<NAME>. and <NAME>. (2011) A comprehensive comparison of alternative tests for jumps in asset prices. Central Bank of Cyprus Working Paper 2011-2.

See Also

bns.test, lm.jumptest, minrv.test, medrv.test

Examples

## Not run:
set.seed(123)

# One-dimensional case
## Model: dXt=t*dWt+t*dzt,
## where zt is a compound Poisson process with intensity 5 and jump sizes distribution N(0,0.1).
model <- setModel(drift=0,diffusion="t",jump.coeff="t",measure.type="CP", measure=list(intensity=5,df=list("dnorm(z,0,sqrt(0.1))")), time.variable="t")
yuima.samp <- setSampling(Terminal = 1, n = 390)
yuima <- setYuima(model = model, sampling = yuima.samp)
yuima <- simulate(yuima)
plot(yuima) # The path seems to involve some jumps

#lm.jumptest(yuima) # p-value is very small, so the path would have a jump
#lm.jumptest(yuima, K = floor(sqrt(390))) # different value of K
pz.test(yuima) # p-value is very small, so the path would have a jump
pz.test(yuima, p = 2) # different value of p
pz.test(yuima, tau = 0.1) # different value of tau

# Multi-dimensional case
## Model: Bivariate standard BM + CP
## Only the first component has jumps
mod <- setModel(drift = c(0, 0), diffusion = diag(2), jump.coeff = diag(c(1, 0)), measure = list(intensity = 5, df = "dmvnorm(z,c(0,0),diag(2))"), jump.variable = c("z"), measure.type=c("CP"), solve.variable=c("x1","x2"))
samp <- setSampling(Terminal = 1, n = 390)
yuima <- simulate(object = mod, sampling = samp)
plot(yuima)

pz.test(yuima) # test is performed component-wise

## End(Not run)

qgv qgv

Description

Estimate the local Hölder exponent with the quadratic generalized variations method.

Usage

qgv(yuima, filter.type = "Daubechies", order = 2, a = NULL)

Arguments

yuima A yuima object.
filter.type The filter.type can be set to "Daubechies" or "Classical".
order The order of the filter a to be chosen.
a Any other filter.

Details

Estimation of the Hurst index and the constant of the fractional Ornstein-Uhlenbeck process.

Value

an object of class qgv

Author(s)

The YUIMA Project Team

References

<NAME>., <NAME>. (2013) Parameter estimation for the discretely observed fractional Ornstein-Uhlenbeck process and the Yuima R package, Computational Statistics, pp. 1129–1147.

See Also

See also mmfrac.
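To convey the idea behind quadratic generalized variations (a rough sketch, not the qgv implementation): with a second-order filter such as a = (1, -2, 1), the filtered quadratic variation of a path with Hurst index H scales like Delta^{2H}, and the same constant appears when the filter is dilated to (1, 0, -2, 0, 1), so the ratio of the two variations gives H ≈ 0.5 * log2(V2/V1). A pure-Python illustration under these assumptions:

```python
import math
import random

def qgv_hurst(x):
    """Estimate the Hurst exponent from a discretely observed path using
    second-order quadratic generalized variations: V1 uses the filter
    (1,-2,1), V2 its dilation (1,0,-2,0,1); then H ~ 0.5*log2(V2/V1)."""
    v1 = sum((x[i] - 2 * x[i + 1] + x[i + 2]) ** 2 for i in range(len(x) - 2))
    v2 = sum((x[i] - 2 * x[i + 2] + x[i + 4]) ** 2 for i in range(len(x) - 4))
    return 0.5 * math.log(v2 / v1, 2)

# Sanity check on standard Brownian motion, where H = 1/2:
# the filtered variations have expectations ~2*dt and ~4*dt, so V2/V1 ~ 2.
rng = random.Random(123)
n = 20000
bm, s = [0.0], 0.0
for _ in range(n):
    s += rng.gauss(0.0, 1.0 / math.sqrt(n))
    bm.append(s)
print(round(qgv_hurst(bm), 2))  # close to 0.5
```

Second-order filters are used instead of raw increments so that the estimator remains consistent for H > 3/4; the actual qgv function additionally estimates the diffusion constant and supports other filters.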
Examples

# Estimating both the Hurst parameter and the diffusion coefficient
# in the fractional Ornstein-Uhlenbeck process
model<-setModel(drift="-x*lambda",hurst=NA,diffusion="theta")
sampling<-setSampling(T=100,n=10000)
yui1<-simulate(model,true.param=list(theta=1,lambda=4),hurst=0.7,sampling=sampling)
qgv(yui1)

# Estimating the Hurst parameter only in diffusion processes
model2<-setModel(drift="-x*lambda",hurst=NA,diffusion="theta*sqrt(x)")
sampling<-setSampling(T=1,n=10000)
yui2<-simulate(model2,true.param=list(theta=1,lambda=4),hurst=0.7,sampling=sampling,xinit=10)
qgv(yui2)

qmle Calculate quasi-likelihood and ML estimator of least squares estimator

Description

Calculate the quasi-likelihood and estimate the parameters of the stochastic differential equation by the maximum likelihood method or the least squares estimator of the drift parameter.

Usage

qmle(yuima, start, method = "L-BFGS-B", fixed = list(), print = FALSE, envir = globalenv(), lower, upper, joint = FALSE, Est.Incr ="NoIncr", aggregation = TRUE, threshold = NULL, rcpp =FALSE, ...)
quasilogl(yuima, param, print = FALSE, rcpp = FALSE)
lse(yuima, start, lower, upper, method = "BFGS", ...)

Arguments

yuima a yuima object.
print you can see a progress of the estimation when print is TRUE.
envir an environment where the model coefficients are evaluated.
method see Details.
param list of parameters for the quasi loglikelihood.
lower a named list for specifying lower bounds of parameters.
upper a named list for specifying upper bounds of parameters.
start initial values to be passed to the optimizer.
fixed for conditional (quasi)maximum likelihood estimation.
joint perform joint estimation or two stage estimation? by default joint=FALSE.
Est.Incr If the yuima model is an object of yuima.carma-class or yuima.cogarch-class, qmle returns an object of yuima.carma.qmle-class, cogarch.est.incr-class, cogarch.est-class or an object of class mle-class. By default Est.Incr="NoIncr"; alternative values are IncrPar and Incr.
aggregation If aggregation=TRUE, before the estimation of the Levy parameters, we aggregate the increments.
threshold If the model has Compound Poisson type jumps, the threshold is used to perform thresholding of the increments.
... passed to optim method. See Examples.
rcpp use C++ code?

Details

qmle behaves much like the standard mle function in stats4 and argument method is one of the methods available in optim.

lse calculates least squares estimators of the drift parameters. This is useful for an initial guess in qmle estimation.

quasilogl returns the value of the quasi loglikelihood for a given yuima object and list of parameters coef.

Value

QL a real value.
opt a list with components the same as 'optim' function.
carmaopt if the model is an object of yuima.carma-class, qmle returns an object yuima.carma.qmle-class
cogarchopt if the model is an object of yuima.cogarch-class, qmle returns an object of class cogarch.est-class. The estimates are obtained by maximizing the pseudo-loglikelihood function as shown in Iacus et al. (2015)

Note

The function qmle uses the function optim internally. The function qmle uses the function CarmaNoise internally for estimation of the underlying Levy if the model is an object of yuima.carma-class.

Author(s)

The YUIMA Project Team

References

## Non-ergodic diffusion

<NAME>., & <NAME>. (1993). On the estimation of the diffusion coefficient for multi-dimensional diffusion processes. In Annales de l'IHP Probabilités et statistiques, 29(1), 119-151.
<NAME>., & <NAME>. (2013). Quasi likelihood analysis of volatility and nondegeneracy of statistical random field. Stochastic Processes and their Applications, 123(7), 2851-2876.

## Ergodic diffusion

<NAME>. (1997). Estimation of an ergodic diffusion from discrete observations. Scandinavian Journal of Statistics, 24(2), 211-229.

## Jump diffusion

<NAME>., & <NAME>. (2006). Estimation of parameters for diffusion processes with jumps from discrete observations.
Statistical Inference for Stochastic Processes, 9(3), 227-277.
<NAME>., & <NAME>. (2011). Quasi-likelihood analysis for the stochastic differential equation with jumps. Statistical Inference for Stochastic Processes, 14(3), 189-229.

## COGARCH
<NAME>., <NAME>. and <NAME>. (2015) Discrete time approximation of a COGARCH(p, q) model and its estimation. doi:10.48550/arXiv.1511.00253

## CARMA
<NAME>., <NAME>. (2015) Implementation of Levy CARMA model in Yuima package. Comp. Stat. (30) 1111-1141. doi:10.1007/s00180-015-0569-7

Examples

# dXt^e = -theta2 * Xt^e * dt + theta1 * dWt
diff.matrix <- matrix(c("theta1"), 1, 1)
ymodel <- setModel(drift = c("(-1)*theta2*x"), diffusion = diff.matrix,
                   time.variable = "t", state.variable = "x", solve.variable = "x")
n <- 100
ysamp <- setSampling(Terminal = (n)^(1/3), n = n)
yuima <- setYuima(model = ymodel, sampling = ysamp)
set.seed(123)
yuima <- simulate(yuima, xinit = 1, true.parameter = list(theta1 = 0.3, theta2 = 0.1))
QL <- quasilogl(yuima, param = list(theta2 = 0.8, theta1 = 0.7))
## QL <- ql(yuima, 0.8, 0.7, h = 1/((n)^(2/3)))
QL

## another way of parameter specification
## param <- list(theta2 = 0.8, theta1 = 0.7)
## QL <- ql(yuima, h = 1/((n)^(2/3)), param = param)
## QL

## old code
## system.time(
##   opt <- ml.ql(yuima, 0.8, 0.7, h = 1/((n)^(2/3)), c(0, 1), c(0, 1))
## )
## cat(sprintf("\nTrue param. theta2 = .3, theta1 = .1\n"))
## print(coef(opt))

system.time(
  opt2 <- qmle(yuima, start = list(theta1 = 0.8, theta2 = 0.7),
               lower = list(theta1 = 0, theta2 = 0),
               upper = list(theta1 = 1, theta2 = 1), method = "L-BFGS-B")
)
cat(sprintf("\nTrue param. theta1 = .3, theta2 = .1\n"))
print(coef(opt2))

## initial guess for theta2 by least squares estimator
tmp <- lse(yuima, start = list(theta2 = 0.7), lower = list(theta2 = 0), upper = list(theta2 = 1))
tmp
system.time(
  opt3 <- qmle(yuima, start = list(theta1 = 0.8, theta2 = tmp),
               lower = list(theta1 = 0, theta2 = 0),
               upper = list(theta1 = 1, theta2 = 1), method = "L-BFGS-B")
)
cat(sprintf("\nTrue param. theta1 = .3, theta2 = .1\n"))
print(coef(opt3))

## perform joint estimation? Non-optimal, just for didactic purposes
system.time(
  opt4 <- qmle(yuima, start = list(theta1 = 0.8, theta2 = 0.7),
               lower = list(theta1 = 0, theta2 = 0),
               upper = list(theta1 = 1, theta2 = 1), method = "L-BFGS-B", joint = TRUE)
)
cat(sprintf("\nTrue param. theta1 = .3, theta2 = .1\n"))
print(coef(opt4))

## fix theta1 to the true value
system.time(
  opt5 <- qmle(yuima, start = list(theta2 = 0.7),
               lower = list(theta2 = 0), upper = list(theta2 = 1),
               fixed = list(theta1 = 0.3), method = "L-BFGS-B")
)
cat(sprintf("\nTrue param. theta1 = .3, theta2 = .1\n"))
print(coef(opt5))

## old code
## system.time(
##   opt <- ml.ql(yuima, 0.8, 0.7, h = 1/((n)^(2/3)), c(0, 1), c(0, 1), method = "Newton")
## )
## cat(sprintf("\nTrue param. theta1 = .3, theta2 = .1\n"))
## print(coef(opt))

## Not run:
### multidimensional case
## dXt^e = - drift.matrix * Xt^e * dt + diff.matrix * dWt
diff.matrix <- matrix(c("theta1.1", "theta1.2", "1", "1"), 2, 2)
drift.c <- c("-theta2.1*x1", "-theta2.2*x2", "-theta2.2", "-theta2.1")
drift.matrix <- matrix(drift.c, 2, 2)
ymodel <- setModel(drift = drift.matrix, diffusion = diff.matrix,
                   time.variable = "t", state.variable = c("x1", "x2"),
                   solve.variable = c("x1", "x2"))
n <- 100
ysamp <- setSampling(Terminal = (n)^(1/3), n = n)
yuima <- setYuima(model = ymodel, sampling = ysamp)
set.seed(123)
## xinit = c(x1, x2), true.parameter = c(theta2.1, theta2.2, theta1.1, theta1.2)
yuima <- simulate(yuima, xinit = c(1, 1),
                  true.parameter = list(theta2.1 = 0.5, theta2.2 = 0.3,
                                        theta1.1 = 0.6, theta1.2 = 0.2))
## theta2 <- c(0.8, 0.2) # c(theta2.1, theta2.2)
## theta1 <- c(0.7, 0.1) # c(theta1.1, theta1.2)
## QL <- ql(yuima, theta2, theta1, h = 1/((n)^(2/3)))
## QL
## another way of parameter specification
## # param <- list(theta2 = theta2, theta1 = theta1)
## # QL <- ql(yuima, h = 1/((n)^(2/3)), param = param)
## # QL
## theta2.1.lim <- c(0, 1)
## theta2.2.lim <- c(0, 1)
## theta1.1.lim <- c(0, 1)
## theta1.2.lim <- c(0, 1)
## theta2.lim <- t( matrix( c(theta2.1.lim, theta2.2.lim), 2, 2) )
## theta1.lim <- t( matrix( c(theta1.1.lim, theta1.2.lim), 2, 2) )
## system.time(
##   opt <- ml.ql(yuima, theta2, theta1, h = 1/((n)^(2/3)), theta2.lim, theta1.lim)
## )
## opt@coef

system.time(
  opt2 <- qmle(yuima, start = list(theta2.1 = 0.8, theta2.2 = 0.2, theta1.1 = 0.7, theta1.2 = 0.1),
               lower = list(theta1.1 = .1, theta1.2 = .1, theta2.1 = .1, theta2.2 = .1),
               upper = list(theta1.1 = 4, theta1.2 = 4, theta2.1 = 4, theta2.2 = 4),
               method = "L-BFGS-B")
)
opt2@coef
summary(opt2)

## unconstrained optimization
system.time(
  opt3 <- qmle(yuima, start = list(theta2.1 = 0.8, theta2.2 = 0.2, theta1.1 = 0.7, theta1.2 = 0.1))
)
opt3@coef
summary(opt3)
quasilogl(yuima, param = list(theta2.1 = 0.8, theta2.2 = 0.2, theta1.1 = 0.7, theta1.2 = 0.1))
## system.time(
##   opt <- ml.ql(yuima, theta2, theta1, h = 1/((n)^(2/3)), theta2.lim, theta1.lim, method = "Newton")
## )
## opt@coef

# carma(p=2, q=0) driven by a Brownian motion without location parameter
mod0 <- setCarma(p = 2, q = 0, scale.par = "sigma")
true.parm0 <- list(a1 = 1.39631, a2 = 0.05029, b0 = 1, sigma = 0.23)
samp0 <- setSampling(Terminal = 100, n = 250)
set.seed(123)
sim0 <- simulate(mod0, true.parameter = true.parm0, sampling = samp0)
system.time(
  carmaopt0 <- qmle(sim0, start = list(a1 = 1.39631, a2 = 0.05029, b0 = 1, sigma = 0.23))
)
summary(carmaopt0)

# carma(p=2, q=1) driven by a Brownian motion without location parameter
mod1 <- setCarma(p = 2, q = 1)
true.parm1 <- list(a1 = 1.39631, a2 = 0.05029, b0 = 1, b1 = 2)
samp1 <- setSampling(Terminal = 100, n = 250)
set.seed(123)
sim1 <- simulate(mod1, true.parameter = true.parm1, sampling = samp1)
system.time(
  carmaopt1 <- qmle(sim1, start = list(a1 = 1.39631, a2 = 0.05029, b0 = 1, b1 = 2), joint = TRUE)
)
summary(carmaopt1)

# carma(p=2, q=1) driven by a compound Poisson process with normally distributed jump size
mod2 <- setCarma(p = 2, q = 1,
                 measure = list(intensity = "1", df = list("dnorm(z, 0, 1)")),
                 measure.type = "CP")
true.parm2 <- list(a1 = 1.39631, a2 = 0.05029, b0 = 1, b1 = 2)
samp2 <- setSampling(Terminal = 100, n = 250)
set.seed(123)
sim2 <- simulate(mod2, true.parameter = true.parm2, sampling = samp2)
system.time(
  carmaopt2 <- qmle(sim2, start = list(a1 = 1.39631, a2 = 0.05029, b0 = 1, b1 = 2), joint = TRUE)
)
summary(carmaopt2)

# carma(p=2, q=1) driven by a normal inverse Gaussian process
mod3 <- setCarma(p = 2, q = 1,
                 measure = list(df = list("rNIG(z, alpha, beta, delta1, mu)")),
                 measure.type = "code")
# True param
true.param3 <- list(a1 = 1.39631, a2 = 0.05029, b0 = 1, b1 = 2,
                    alpha = 1, beta = 0, delta1 = 1, mu = 0)
samp3 <- setSampling(Terminal = 100, n = 200)
set.seed(123)
sim3 <- simulate(mod3, true.parameter = true.param3, sampling = samp3)
carmaopt3 <- qmle(sim3, start = true.param3)
summary(carmaopt3)

# Simulation and estimation of a COGARCH(1,1) with compound-Poisson driven noise
# Model parameters
eta <- 0.053
b1 <- eta
beta <- 0.04
a0 <- beta/b1
phi <- 0.038
a1 <- phi
# Definition
cog11 <- setCogarch(p = 1, q = 1,
                    measure = list(intensity = "1", df = list("dnorm(z, 0, 1)")),
                    measure.type = "CP", XinExpr = TRUE)
# Parameter
paramCP11 <- list(a1 = a1, b1 = b1, a0 = a0, y01 = 50.31)
# Sampling scheme
samp11 <- setSampling(0, 3200, n = 64000)
# Simulation
set.seed(125)
SimTime11 <- system.time(
  sim11 <- simulate(object = cog11, true.parameter = paramCP11,
                    sampling = samp11, method = "mixed")
)
plot(sim11)
# Estimation
timeComp11 <- system.time(
  res11 <- qmle(yuima = sim11, start = paramCP11, grideq = TRUE, method = "Nelder-Mead")
)
timeComp11
unlist(paramCP11)
coef(res11)

# COGARCH(2,2) model driven by a compound Poisson process
cog22 <- setCogarch(p = 2, q = 2,
                    measure = list(intensity = "1", df = list("dnorm(z, 0, 1)")),
                    measure.type = "CP", XinExpr = TRUE)
# Parameter
paramCP22 <- list(a1 = 0.04, a2 = 0.001, b1 = 0.705, b2 = 0.1,
                  a0 = 0.1, y01 = (1 + 2/3), y02 = 0)
# Use Diagnostic.Cogarch for checking stationarity and positivity
check22 <- Diagnostic.Cogarch(cog22, param = paramCP22)
# Sampling scheme
samp22
<- setSampling(0, 3600, n = 64000)
# Simulation
set.seed(125)
SimTime22 <- system.time(
  sim22 <- simulate(object = cog22, true.parameter = paramCP22, sampling = samp22, method = "Mixed")
)
plot(sim22)
timeComp22 <- system.time(
  res22 <- qmle(yuima = sim22, start = paramCP22, grideq = TRUE, method = "Nelder-Mead")
)
timeComp22
unlist(paramCP22)
coef(res22)
## End(Not run)

qmleLevy    Gaussian quasi-likelihood estimation for Levy driven SDE

Description
Calculate the Gaussian quasi-likelihood and the Gaussian quasi-likelihood estimators of a Levy driven SDE.

Usage
qmleLevy(yuima, start, lower, upper, joint = FALSE, third = FALSE,
         Est.Incr = "NoIncr", aggregation = TRUE)

Arguments
yuima        a yuima object.
lower        a named list specifying the lower bounds of the parameters.
upper        a named list specifying the upper bounds of the parameters.
start        initial values to be passed to the optimizer.
joint        perform joint estimation or two-stage estimation; by default joint=FALSE. If there is an overlapping parameter, joint=TRUE does not work, for theoretical reasons.
third        perform third-stage estimation; by default third=FALSE. If there is an overlapping parameter, third=TRUE does not work, for theoretical reasons.
Est.Incr     qmleLevy returns an object of mle-class; by default Est.Incr="NoIncr". Other options are "Incr" and "IncrPar".
aggregation  if aggregation=TRUE, the function returns the unit-time Levy increments. If Est.Incr="IncrPar", the function estimates the Levy parameters using the unit-time Levy increments.

Details
This function performs Gaussian quasi-likelihood estimation for a Levy driven SDE.

Value
first     estimated values from the first stage (scale parameters)
second    estimated values from the second stage (drift parameters)
third     estimated values from the third stage (scale parameters)

Note
The function qmleLevy uses the function qmle internally. It can be applied only to standardized Levy noise whose moments of any order exist.
In the present yuima package, the bilateral gamma (bgamma) process, the normal inverse Gaussian (NIG) process, the variance gamma (VG) process, and the normal tempered stable process are such candidates.
In the current version, the standardization condition on the driving noise is internally checked only for one-dimensional noise. The standardization condition for multivariate noise is given in https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnx5dW1hdWVoYXJhMTkyOHxneDo3Z or https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnx5dW1hdWVoYXJhMTkyOHxneDo3Z . These documents also contain a more precise explanation of this function.

Author(s)
The YUIMA Project Team
Contacts: <NAME> <<EMAIL>>

References
<NAME>. (2013). Convergence of Gaussian quasi-likelihood random fields for ergodic Levy driven SDE observed at high frequency. The Annals of Statistics, 41(3), 1593-1641.
<NAME>. and <NAME>. (2017). On stepwise estimation of Levy driven stochastic differential equation (in Japanese). Proc. Inst. Statist. Math., accepted.
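The Gaussian quasi-likelihood idea behind this estimator can be illustrated outside of yuima. The sketch below is a minimal, language-agnostic illustration in Python (ours, not part of the package; every name in it is hypothetical): it fits the drift parameter of a discretely observed Ornstein-Uhlenbeck process by maximizing the Gaussian quasi-likelihood of the Euler increments, which in this simple case has a closed-form maximizer.

```python
import math
import random

# Illustrative sketch (not yuima code): Gaussian quasi-likelihood estimation of
# the drift of an OU process dX_t = -theta * X_t dt + dW_t from discrete data.
random.seed(0)
theta_true, T, n = 1.0, 200.0, 20000
h = T / n
x = [1.0]
for _ in range(n):  # Euler scheme
    x.append(x[-1] - theta_true * x[-1] * h + math.sqrt(h) * random.gauss(0.0, 1.0))

# The Gaussian quasi-(log)likelihood of the increments,
#   -sum_i (dX_i + theta * X_i * h)^2 / (2h) + const,
# is maximized in closed form by theta_hat = -sum(X_i dX_i) / (h sum(X_i^2)).
num = sum(x[i] * (x[i + 1] - x[i]) for i in range(n))
den = h * sum(xi * xi for xi in x[:-1])
theta_hat = -num / den
print(round(theta_hat, 2))
```

For a long observation window the estimate lands close to the true theta = 1; qmleLevy applies the same principle stagewise to the scale and drift parameters of Levy driven SDEs.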
Examples
## Not run:
## One-dimensional case
dri <- "-theta0*x" ## set drift
jum <- "theta1/(1+x^2)^(-1/2)" ## set jump
yuima <- setModel(drift = dri, jump.coeff = jum,
                  solve.variable = "x", state.variable = "x",
                  measure.type = "code",
                  measure = list(df = "rbgamma(z,1,sqrt(2),1,sqrt(2))")) ## set true model
n <- 3000
T <- 30 ## terminal
hn <- T/n ## stepsize
sam <- setSampling(Terminal = T, n = n) ## set sampling scheme
yuima <- setYuima(model = yuima, sampling = sam) ## model
true <- list(theta0 = 1, theta1 = 2) ## true values
upper <- list(theta0 = 4, theta1 = 4) ## set upper bound
lower <- list(theta0 = 0.5, theta1 = 1) ## set lower bound
set.seed(123)
yuima <- simulate(yuima, xinit = 0, true.parameter = true, sampling = sam) ## generate a path
start <- list(theta0 = runif(1, 0.5, 4), theta1 = runif(1, 1, 4)) ## set initial values
qmleLevy(yuima, start = start, lower = lower, upper = upper, joint = TRUE)

## Multi-dimensional case
lambda <- 1/2
alpha <- 1
beta <- c(0, 0)
mu <- c(0, 0)
Lambda <- matrix(c(1, 0, 0, 1), 2, 2) ## set parameters in noise
dri <- c("1-theta0*x1-x2", "-theta1*x2")
jum <- matrix(c("x1*theta2+1", "0", "0", "1"), 2, 2) ## set coefficients
yuima <- setModel(drift = dri, solve.variable = c("x1", "x2"),
                  state.variable = c("x1", "x2"),
                  jump.coeff = jum, measure.type = "code",
                  measure = list(df = "rvgamma(z, lambda, alpha, beta, mu, Lambda)"))
n <- 3000 ## the number of total samples
T <- 30 ## terminal
hn <- T/n ## stepsize
sam <- setSampling(Terminal = T, n = n) ## set sampling scheme
yuima <- setYuima(model = yuima, sampling = sam) ## model
true <- list(theta0 = 1, theta1 = 2, theta2 = 3, lambda = lambda, alpha = alpha,
             beta = beta, mu = mu, Lambda = Lambda) ## true values
upper <- list(theta0 = 4, theta1 = 4, theta2 = 5, lambda = lambda, alpha = alpha,
              beta = beta, mu = mu, Lambda = Lambda) ## set upper bound
lower <- list(theta0 = 0.5, theta1 = 1, theta2 = 1, lambda = lambda, alpha = alpha,
              beta = beta, mu = mu, Lambda = Lambda) ## set lower bound
set.seed(123)
yuima <- simulate(yuima, xinit = c(0, 0), true.parameter = true, sampling = sam) ## generate a path
plot(yuima)
start <- list(theta0 = runif(1, 0.5, 4), theta1 = runif(1, 1, 4), theta2 = runif(1, 1, 5),
              lambda = lambda, alpha = alpha, beta = beta, mu = mu, Lambda = Lambda) ## set initial values
qmleLevy(yuima, start = start, lower = lower, upper = upper, joint = FALSE, third = TRUE)
## End(Not run)

rconst    Fictitious rng for the constant random variable used to generate and describe Poisson jumps

Description
Fictitious rng for the constant random variable used to generate and describe Poisson jumps.

Usage
rconst(n, k = 1)
dconst(x, k = 1)

Arguments
n    number of replications
k    the size of the jump
x    the fictitious argument

Value
Returns a numeric vector.

Author(s)
The YUIMA Project Team

Examples
dconst(1, 1)
dconst(2, 1)
dconst(2, 2)
rconst(10, 3)

rng    Random numbers and densities

Description
The simulate function can use these specific random number generators to generate Levy paths.

Usage
rGIG(x, lambda, delta, gamma)
dGIG(x, lambda, delta, gamma)
rGH(x, lambda, alpha, beta, delta, mu, Lambda)
dGH(x, lambda, alpha, beta, delta, mu, Lambda)
rIG(x, delta, gamma)
dIG(x, delta, gamma)
rNIG(x, alpha, beta, delta, mu, Lambda)
dNIG(x, alpha, beta, delta, mu, Lambda)
rvgamma(x, lambda, alpha, beta, mu, Lambda)
dvgamma(x, lambda, alpha, beta, mu, Lambda)
rbgamma(x, delta.plus, gamma.plus, delta.minus, gamma.minus)
dbgamma(x, delta.plus, gamma.plus, delta.minus, gamma.minus)
rstable(x, alpha, beta, sigma, gamma)
rpts(x, alpha, a, b)
rnts(x, alpha, a, b, beta, mu, Lambda)

Arguments
x    number of random numbers to be generated.
a              parameter
b              parameter
delta          parameter written as δ below
gamma          parameter written as γ below
mu             parameter written as µ below
Lambda         parameter written as Λ below
alpha          parameter written as α below
lambda         parameter written as λ below
sigma          parameter written as σ below
beta           parameter written as β below
delta.plus     parameter written as δ+ below
gamma.plus     parameter written as γ+ below
delta.minus    parameter written as δ− below
gamma.minus    parameter written as γ− below

Details
GIG (generalized inverse Gaussian): The density function of the GIG distribution is expressed as

f(x) = 1/2 * (γ/δ)^λ * 1/K_λ(γδ) * x^(λ−1) * exp(−1/2 * (δ^2/x + γ^2 x)),

where K_λ() is the modified Bessel function of the third kind with order λ. The parameters λ, δ and γ vary within the following regions:

δ >= 0, γ > 0 if λ > 0;  δ > 0, γ > 0 if λ = 0;  δ > 0, γ >= 0 if λ < 0.

The corresponding Levy measure is given in Eberlein, E., & <NAME>. (2004) (it contains IG).

GH (generalized hyperbolic): The generalized hyperbolic distribution is defined as the normal mean-variance mixture of the generalized inverse Gaussian distribution. The parameters α, β, δ, µ express heaviness of tails, degree of asymmetry, scale and location, respectively. Here the parameter Λ is supposed to be symmetric and positive definite with det(Λ) = 1, and the parameters vary within the following regions:

δ >= 0, α > 0, α^2 > β^T Λ β if λ > 0;  δ > 0, α > 0, α^2 > β^T Λ β if λ = 0;  δ > 0, α >= 0, α^2 >= β^T Λ β if λ < 0.

The corresponding Levy measure is given in Eberlein, E., & <NAME>. (2004) (it contains NIG and vgamma).

IG (inverse Gaussian, an element of GIG): δ and γ are positive (the case γ = 0 corresponds to the positive half-stable distribution, provided by "rstable").

NIG (normal inverse Gaussian, an element of GH): The normal inverse Gaussian distribution is defined as the normal mean-variance mixture of the inverse Gaussian distribution.
The parameters α, β, δ and µ express the heaviness of tails, degree of asymmetry, scale and location, respectively. They satisfy the following conditions: Λ is symmetric and positive definite with det(Λ) = 1; δ > 0; α > 0 with α^2 − β^T Λ β > 0.

vgamma (variance gamma, an element of GH): The variance gamma distribution is defined as the normal mean-variance mixture of the gamma distribution. The parameters satisfy the following conditions: Λ is symmetric and positive definite with det(Λ) = 1; λ > 0; α > 0 with α^2 − β^T Λ β > 0. In particular, the case β = 0 gives the symmetric variance gamma distribution.

bgamma (bilateral gamma): The bilateral gamma distribution is defined as the difference of independent gamma distributions Gamma(δ+, γ+) and Gamma(δ−, γ−). Its Levy density f(z) is given by

f(z) = δ+/z * exp(−γ+ z) * ind(z > 0) + δ−/|z| * exp(−γ− |z|) * ind(z < 0),

where the function ind() denotes an indicator function.

stable (stable): The parameters α, β, σ and γ express stability, degree of skewness, scale and location, respectively. They satisfy the following conditions: 0 < α <= 2; −1 <= β <= 1; σ > 0; γ is a real number.

pts (positive tempered stable): The positive tempered stable distribution is defined by tilting the positive stable distribution. The parameters α, a and b express stability, scale and degree of tilting, respectively. They satisfy the following conditions: 0 < α < 1; a > 0; b > 0. Its Levy density f(z) is given by

f(z) = a z^(−1−α) exp(−b z).

nts (normal tempered stable): The normal tempered stable distribution is defined as the normal mean-variance mixture of the positive tempered stable distribution. The parameters α, a, b, β, µ and Λ express stability, scale, degree of tilting, degree of asymmetry, location and degree of mixture, respectively. They satisfy the following conditions: Λ is symmetric and positive definite with det(Λ) = 1; 0 < α < 1; a > 0; b > 0.
In the one-dimensional case, its Levy density f(z) is given by

f(z) = 2a/(2π)^(1/2) * exp(βz) * (z^2/(2b + β^2))^(−α/2 − 1/4) * K_(α+1/2)((z^2 (2b + β^2))^(1/2)).

Value
rXXX    a collection of random numbers or vectors
dXXX    the density function

Note
Some density-plot functions are still missing: for the non-Gaussian stable densities one can use, e.g., the stabledist package. The acceptance-rejection method is used for generating pts and nts. It should be noted that its acceptance rate decreases at exponential order as a and b become larger: specifically, the rate is given by exp(a * Γ(−α) * b^α).

Author(s)
The YUIMA Project Team
Contacts: <NAME> <<EMAIL>> and <NAME> <<EMAIL>>

References

## rGIG, dGIG, rIG, dIG
<NAME>. (1988). The Inverse Gaussian Distribution: Theory, Methodology, and Applications (Vol. 95). CRC Press.
<NAME>., & <NAME>. (2014). Generating generalized inverse Gaussian random variates. Statistics and Computing, 24(4), 547-557. doi:10.1111/1467-9469.00045
<NAME>. (2012). Statistical properties of the generalized inverse Gaussian distribution (Vol. 9). Springer Science & Business Media. https://link.springer.com/book/10.1007/978-1-4612-5698-4
<NAME>., <NAME>., & <NAME>. (1976). Generating random variates using transformations with multiple roots. The American Statistician, 30(2), 88-90. doi:10.1080/00031305.1976.10479147

## rGH, dGH, rNIG, dNIG, rvgamma, dvgamma
<NAME>. (1977). Exponentially decreasing distributions for the logarithm of particle size. In Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences (Vol. 353, No. 1674, pp. 401-419). The Royal Society. doi:10.1098/rspa.1977.0041
<NAME>. (1997). Processes of normal inverse Gaussian type. Finance and Stochastics, 2(1), 41-68. doi:10.1007/s007800050032
<NAME>. (2001). Application of generalized hyperbolic Levy motions to finance. In Levy processes (pp. 319-336). Birkhauser Boston. doi:10.1007/9781461201977_14
<NAME>., & <NAME>. (2004).
Generalized hyperbolic and inverse Gaussian distributions: limiting cases and approximation of processes. In Seminar on Stochastic Analysis, Random Fields and Applications IV (pp. 221-264). Birkhäuser Basel. doi:10.1007/9781461201977_14
<NAME>., <NAME>., & <NAME>. (1998). The variance gamma process and option pricing. European Finance Review, 2(1), 79-105. doi:10.1111/1467-9469.00045

## rbgamma, dbgamma
<NAME>., & <NAME>. (2008). Bilateral Gamma distributions and processes in financial mathematics. Stochastic Processes and their Applications, 118(2), 261-283. doi:10.1016/j.spa.2007.04.006
<NAME>., & <NAME>. (2008). On the shapes of bilateral Gamma densities. Statistics & Probability Letters, 78(15), 2478-2484. doi:10.1016/j.spa.2007.04.006

## rstable
Chambers, <NAME>., <NAME>, and <NAME>. (1976) A method for simulating stable random variables. Journal of the American Statistical Association, 71(354), 340-344. doi:10.1080/01621459.1976.10480344
<NAME>. (1996) On the Chambers-Mallows-Stuck method for simulating skewed stable random variables. Statistics & Probability Letters, 28(2), 165-171. doi:10.1016/0167-7152(95)00113-1
<NAME>. (2010) Correction to: "On the Chambers-Mallows-Stuck Method for Simulating Skewed Stable Random Variables", No. 20761, University Library of Munich, Germany. https://ideas.repec.org/p/pra/mprapa

## rpts
<NAME>., & <NAME>. (2011). On simulation of tempered stable random variates. Journal of Computational and Applied Mathematics, 235(8), 2873-2887. doi:10.1016/j.cam.2010.12.014

## rnts
<NAME>., & <NAME>. (2001). Normal modified stable processes. Aarhus: MaPhySto, Department of Mathematical Sciences, University of Aarhus.

Examples
## Not run:
set.seed(123)

# Ex 1. (One-dimensional standard Cauchy distribution)
# The parameter values are alpha=1, beta=0, sigma=1, gamma=0.
# Choose the value of x.
x <- 10 # the number of random numbers
rstable(x, 1, 0, 1, 0)

# Ex 2. (One-dimensional Levy distribution)
# Choose the values of sigma, gamma, x.
# alpha = 0.5, beta = 1
x <- 10 # the number of random numbers
beta <- 1
sigma <- 0.1
gamma <- 0.1
rstable(x, 0.5, beta, sigma, gamma)

# Ex 3. (Symmetric bilateral gamma)
# delta = delta.plus = delta.minus, gamma = gamma.plus = gamma.minus.
# Choose the values of delta, gamma and x.
x <- 10 # the number of random numbers
rbgamma(x, 1, 1, 1, 1)

# Ex 4. ((Possibly skewed) variance gamma)
# lambda, alpha, beta, mu
# Choose the values of lambda, alpha, beta, mu and x.
x <- 10 # the number of random numbers
rvgamma(x, 2, 1, -0.5, 0)

# Ex 5. (One-dimensional normal inverse Gaussian distribution)
# Lambda = 1.
# Choose the parameter values and x.
x <- 10 # the number of random numbers
rNIG(x, 1, 1, 1, 1)

# Ex 6. (Multi-dimensional normal inverse Gaussian distribution)
# Choose the parameter values and x.
beta <- c(.5, .5)
mu <- c(0, 0)
Lambda <- matrix(c(1, 0, 0, 1), 2, 2)
x <- 10 # the number of random numbers
rNIG(x, 1, beta, 1, mu, Lambda)

# Ex 7. (Positive tempered stable)
# Choose the parameter values and x.
alpha <- 0.7
a <- 0.2
b <- 1
x <- 10 # the number of random numbers
rpts(x, alpha, a, b)

# Ex 8. (Generalized inverse Gaussian)
# Choose the parameter values and x.
lambda <- 0.3
delta <- 1
gamma <- 0.5
x <- 10 # the number of random numbers
rGIG(x, lambda, delta, gamma)

# Ex 9. (Multivariate generalized hyperbolic)
# Choose the parameter values and x.
lambda <- 0.4
alpha <- 1
beta <- c(0, 0.5)
delta <- 1
mu <- c(0, 0)
Lambda <- matrix(c(1, 0, 0, 1), 2, 2)
x <- 10 # the number of random numbers
rGH(x, lambda, alpha, beta, delta, mu, Lambda)
## End(Not run)

setCarma    Continuous Autoregressive Moving Average (p, q) model

Description
'setCarma' describes the following model:

Vt = c0 + sigma (b0 Xt(0) + ... + b(q) Xt(q))
dXt(0) = Xt(1) dt
...
dXt(p-2) = Xt(p-1) dt
dXt(p-1) = (-a(p) Xt(0) - ... - a(1) Xt(p-1)) dt + (gamma(0) + gamma(1) Xt(0) + ... + gamma(p) Xt(p-1)) dZt

The continuous ARMA process using the state-space representation as in Brockwell (2000) is obtained by choosing:

gamma(0) = 1, gamma(1) = gamma(2) = ... = gamma(p) = 0.

Please refer to the vignettes and the examples or the yuima documentation for details.
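The state-space system above is straightforward to simulate directly. The following is an illustrative Euler-scheme sketch in Python (ours, not yuima code; the parameter values echo the qmle examples elsewhere in this manual) for the Brownian-driven CARMA(2,1) case with gamma(0) = 1 and gamma(1) = gamma(2) = 0:

```python
import math
import random

# Illustrative Euler scheme (not yuima code) for a Brownian-driven CARMA(2,1):
#   V_t = b0 * X0_t + b1 * X1_t
#   dX0_t = X1_t dt
#   dX1_t = (-a2 * X0_t - a1 * X1_t) dt + dW_t
random.seed(1)
a1, a2, b0, b1 = 1.39631, 0.05029, 1.0, 2.0   # values from the qmle examples
T, n = 100.0, 10000
h = T / n
x0, x1 = 0.0, 0.0
V = [b0 * x0 + b1 * x1]
for _ in range(n):
    dW = math.sqrt(h) * random.gauss(0.0, 1.0)
    # simultaneous update so both increments use the current state
    x0, x1 = x0 + x1 * h, x1 + (-a2 * x0 - a1 * x1) * h + dW
    V.append(b0 * x0 + b1 * x1)
print(len(V), all(math.isfinite(v) for v in V))  # 10001 True
```

With these coefficients the autoregressive polynomial has roots with negative real parts, so the simulated observed process V stays stable; setCarma together with simulate produces the same kind of path within yuima.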
Usage
setCarma(p, q, loc.par = NULL, scale.par = NULL, ar.par = "a", ma.par = "b",
         lin.par = NULL, Carma.var = "v", Latent.var = "x", XinExpr = FALSE,
         Cogarch = FALSE, ...)

Arguments
p             a non-negative integer that indicates the number of the autoregressive coefficients.
q             a non-negative integer that indicates the number of the moving average coefficients.
loc.par       location coefficient. The default value loc.par=NULL implies that c0=0.
scale.par     scale coefficient. The default value scale.par=NULL implies that sigma=1.
ar.par        a character string that is the label of the autoregressive coefficients. The default value is ar.par="a".
ma.par        a character string that is the label of the moving average coefficients. The default value is ma.par="b".
Carma.var     a character string that is the label of the observed process. Defaults to "v".
Latent.var    a character string that is the label of the unobserved process. Defaults to "x".
lin.par       a character string that is the label of the linear coefficients. If lin.par=NULL, the default, 'setCarma' builds the CARMA(p, q) model defined as in Brockwell (2000).
XinExpr       a logical variable. The default value XinExpr=FALSE implies that the starting condition for Latent.var is zero. If XinExpr=TRUE, each component of Latent.var has a parameter as an initial value.
Cogarch       a logical variable. The default value Cogarch=FALSE implies that the parameters are specified according to Brockwell (2000).
...           arguments to be passed to 'setCarma', such as the slots of yuima.model-class:
measure       Levy measure of jump variables.
measure.type  type specification for Levy measure.
xinit         a vector of expressions identifying the starting conditions for the CARMA model.

Details
Please refer to the vignettes and the examples or to the yuimadocs package.
An object of yuima.carma-class contains:
info: an object of carma.info-class, which is a list of arguments that identifies the CARMA(p,q) model,
and the same slots as in an object of yuima.model-class.
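Whether a given CARMA specification is stationary can be read off the roots of the autoregressive polynomial, i.e. the eigenvalues of the companion matrix of the latent state process. This small Python sketch (ours, not a yuima function) checks the p = 2 case with the quadratic formula, using the coefficient values that appear in the qmle examples:

```python
import cmath

# Illustrative check (not a yuima function): a CARMA(2, q) model is stationary
# when the roots of the AR polynomial z^2 + a1*z + a2 (the eigenvalues of the
# companion matrix of the latent state) have negative real parts.
def carma2_is_stationary(a1, a2):
    disc = cmath.sqrt(a1 * a1 - 4.0 * a2)
    roots = ((-a1 + disc) / 2.0, (-a1 - disc) / 2.0)
    return all(r.real < 0 for r in roots)

print(carma2_is_stationary(1.39631, 0.05029))  # True: values from the qmle examples
print(carma2_is_stationary(-1.0, 0.5))         # False: roots 0.5 +/- 0.5i
```

For p = 2 this reduces to the Routh-Hurwitz condition a1 > 0 and a2 > 0; within yuima, the analogous diagnostics for COGARCH models are provided by Diagnostic.Cogarch.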
Value
model    an object of yuima.carma-class.

Note
There may be missing information in the model description. Please contribute with suggestions and fixes.

Author(s)
The YUIMA Project Team

References
<NAME>. (2000) Continuous-time ARMA processes, Stochastic Processes: Theory and Methods. Handbook of Statistics, 19, (<NAME> and <NAME>, eds.) 249-276. North-Holland, Amsterdam.

Examples
# Ex 1. (Continuous ARMA process driven by a Brownian motion)
# To describe the state-space representation of a CARMA(p=3, q=1) model:
#   Vt = c0 + alpha0*X0t + alpha1*X1t
#   dX0t = X1t*dt
#   dX1t = X2t*dt
#   dX2t = (-beta3*X0t - beta2*X1t - beta1*X2t)dt + dWt
# we set
mod1 <- setCarma(p = 3, q = 1, loc.par = "c0")
# Look at the model structure by
str(mod1)

# Ex 2. (General setCarma model driven by a Brownian motion)
# To describe the model defined as:
#   Vt = c0 + alpha0*X0t + alpha1*X1t
#   dX0t = X1t*dt
#   dX1t = X2t*dt
#   dX2t = (-beta3*X0t - beta2*X1t - beta1*X2t)dt + (c0 + alpha0*X0t)dWt
# we set
mod2 <- setCarma(p = 3, q = 1, loc.par = "c0",
                 ma.par = "alpha", ar.par = "beta", lin.par = "alpha")
# Look at the model structure by
str(mod2)

# Ex 3. (Continuous ARMA model driven by a Levy process)
# To specify the CARMA(p=3, q=1) model driven by a compound Poisson process defined as:
#   Vt = c0 + alpha0*X0t + alpha1*X1t
#   dX0t = X1t*dt
#   dX1t = X2t*dt
#   dX2t = (-beta3*X0t - beta2*X1t - beta1*X2t)dt + dzt
# we set the Levy measure as in setModel
mod3 <- setCarma(p = 3, q = 1, loc.par = "c0",
                 measure = list(intensity = "1", df = list("dnorm(z, 0, 1)")),
                 measure.type = "CP")
# Look at the model structure by
str(mod3)

# Ex 4.
(General setCarma model driven by a Levy process)
#   Vt = c0 + alpha0*X0t + alpha1*X1t
#   dX0t = X1t*dt
#   dX1t = X2t*dt
#   dX2t = (-beta3*X0t - beta2*X1t - beta1*X2t)dt + (c0 + alpha0*X0t)dzt
mod4 <- setCarma(p = 3, q = 1, loc.par = "c0",
                 ma.par = "alpha", ar.par = "beta", lin.par = "alpha",
                 measure = list(intensity = "1", df = list("dnorm(z, 0, 1)")),
                 measure.type = "CP")
# Look at the model structure by
str(mod4)

setCharacteristic    Set characteristic information and create a 'characteristic' object

Description
setCharacteristic is a constructor for the characteristic class.

Usage
setCharacteristic(equation.number, time.scale)

Arguments
equation.number    the number of equations modeled in the yuima object.
time.scale         the time scale assumed in the model.

Details
The class characteristic has two slots: equation.number is the number of equations handled in the yuima object, and time.scale is the time scale of the model.

Value
An object of class characteristic.

Author(s)
The YUIMA Project Team

setCogarch    Continuous-time GARCH (p,q) process

Description
setCogarch describes the Cogarch(p,q) model introduced in Brockwell et al. (2006):

dGt = sqrt(Vt) dZt
Vt = a0 + (a1 Yt(1) + ... + a(p) Yt(p))
dYt(1) = Yt(2) dt
...
dYt(q-1) = Yt(q) dt
dYt(q) = (-b(q) Yt(1) - ... - b(1) Yt(q)) dt + (a0 + a1 Yt(1) + ... + a(p) Yt(p)) d[ZtZt]^{q}

Usage
setCogarch(p, q, ar.par = "b", ma.par = "a", loc.par = "a0", Cogarch.var = "g",
           V.var = "v", Latent.var = "y", jump.variable = "z", time.variable = "t",
           measure = NULL, measure.type = NULL, XinExpr = FALSE, startCogarch = 0,
           work = FALSE, ...)

Arguments
p         a non-negative integer: the number of the moving average coefficients of the variance process.
q         a non-negative integer: the number of the autoregressive coefficients of the variance process.
ar.par    a character string that is the label of the autoregressive coefficients.
ma.par    a character string that is the label of the moving average coefficients.
loc.par   the location coefficient.
Cogarch.var    a character string that is the label of the observed Cogarch process.
V.var          a character string that is the label of the latent variance process.
Latent.var     a character string that is the label of the latent process in the state-space representation of the variance process.
jump.variable  the jump variable.
time.variable  the time variable.
measure        Levy measure of jump variables.
measure.type   type specification for Levy measure.
XinExpr        a vector of expressions identifying the starting conditions for the Cogarch model.
startCogarch   start condition for the Cogarch process.
work           internal variable. In the final release this input will be removed.
...            arguments to be passed to setCogarch, such as the slots of the yuima.model-class.

Details
We remark that yuima describes a Cogarch(p,q) model using the formulation proposed in Brockwell et al. (2006). This representation has the Cogarch(1,1) model introduced in Kluppelberg et al. (2004) as a special case. Indeed, by choosing beta = a0 b1, eta = b1 and phi = a1, we obtain the Cogarch(1,1) model proposed in Kluppelberg et al. (2004), defined as the solution of the SDEs:

dGt = sqrt(Vt) dZt
dVt = (beta - eta Vt) dt + phi Vt d[ZtZt]^{q}

Please refer to the vignettes and the examples.
An object of yuima.cogarch-class contains:
info: an object of cogarch.info-class, which is a list of arguments that identifies the Cogarch(p,q) model,
and the same slots as in an object of yuima.model-class.

Value
model    an object of yuima.cogarch-class.

Note
There may be missing information in the model description. Please contribute with suggestions and fixes.

Author(s)
The YUIMA Project Team

References
<NAME>., <NAME>. and <NAME>. (2006) Continuous-time GARCH processes, The Annals of Applied Probability, 16, 790-826.
<NAME>., <NAME>., and <NAME>. (2004) A continuous-time GARCH process driven by a Levy process: Stationarity and second-order behaviour, Journal of Applied Probability, 41, 601-622.
<NAME>, <NAME>, <NAME> (2017) COGARCH(p,q): Simulation and Inference with the yuima Package, Journal of Statistical Software, 80(4), 1-49.

Examples
# Ex 1. (Continuous-time GARCH process driven by a compound Poisson process)
prova <- setCogarch(p = 1, q = 3, work = FALSE,
                    measure = list(intensity = "1", df = list("dnorm(z, 0, 1)")),
                    measure.type = "CP",
                    Cogarch.var = "y", V.var = "v", Latent.var = "x")

setData    Set and access data of an object of type "yuima.data" or "yuima"

Description
setData constructs an object of yuima.data-class.
get.zoo.data returns the content of the zoo.data slot of a yuima.data-class object. (Note: the value is a list of zoo objects.)
plot: plot method for objects of yuima.data-class or yuima-class.
dim returns the dim of the zoo.data slot of a yuima.data-class object.
length returns the length of the time series in the zoo.data slot of a yuima.data-class object.
cbind.yuima binds yuima.data objects.

Usage
setData(original.data, delta = NULL, t0 = 0)
get.zoo.data(x)

Arguments
original.data    some type of data, usually some sort of time series. The function always tries to convert the input data into an object of zoo type. See Details.
x                an object of type yuima.data-class or yuima-class.
delta            used if there is a need to redefine on the fly the delta increment of the data, to make it consistent with statistical theory. See Details.
t0               the time origin for the internal zoo.data slot; defaults to 0.

Details
Objects of the yuima.data-class contain two slots:
original.data: the slot original.data contains, as the name suggests, a copy of the original data passed to the function setData. It is intended for backup purposes.
zoo.data: the function setData tries to convert original.data into an object of class zoo. The coerced zoo data are stored in the slot zoo.data. If the conversion fails, the function exits with an error. Internally, the yuima package stores and operates on zoo-type objects.
The function get.zoo.data returns the content of the slot zoo.data of x if x is of yuima.data-class, or the content of the zoo.data slot of the data slot of x if x is of yuima-class.

Value
value    a list of object(s) of yuima.data-class for setData; the content of the zoo.data slot for get.zoo.data.

Author(s)
The YUIMA Project Team

Examples
X <- ts(matrix(rnorm(200), 100, 2))
mydata <- setData(X)
str(get.zoo.data(mydata))
dim(mydata)
length(mydata)
plot(mydata)

# exactly the same output
mysde <- setYuima(data = setData(X))
str(get.zoo.data(mysde))
plot(mysde)
dim(mysde)
length(mysde)

# changing delta on the fly to 1/252
mysde2 <- setYuima(data = setData(X, delta = 1/252))
str(get.zoo.data(mysde2))
plot(mysde2)
dim(mysde2)
length(mysde2)

# changing delta on the fly to 1/252 and shifting time to t0=1
mysde2 <- setYuima(data = setData(X, delta = 1/252, t0 = 1))
str(get.zoo.data(mysde2))
plot(mysde2)
dim(mysde2)
length(mysde2)

setFunctional    Description of a functional associated with a perturbed stochastic differential equation

Description
This function is used to give a description of the stochastic differential equation. The functional represents, for example, the price of an option in financial economics.

Usage
setFunctional(model, F, f, xinit, e)

Arguments
model    a yuima or yuima.model object.
F        function of $X_t$ and $epsilon$.
f        list of functions of $X_t$ and $epsilon$.
xinit    initial values of the state variable.
e        epsilon parameter.

Details
You should look at the vignette and examples.
The object foi contains several "slots". To see inside its structure we use the R command str. f and F are R (lists of) expressions which contain the specification of the functional of interest. e is a small parameter on which we conduct the asymptotic expansion of the functional.

Value
yuima    an object of class 'yuima' containing an object of class 'functional'. If a yuima object was given as the 'model' argument, the result is just added and the other slots of the object are maintained.

Note
There may be missing information in the model description.
Please contribute with suggestions and fixes.

Author(s)
The YUIMA Project Team

Examples
set.seed(123)
# to the Black-Scholes economy:
# dXt^e = Xt^e * dt + e * Xt^e * dWt
diff.matrix <- matrix(c("x*e"), 1, 1)
model <- setModel(drift = c("x"), diffusion = diff.matrix)
# call option is evaluated by averaging
# max{ (1/T)*int_0^T Xt^e dt, 0}; the first argument is the functional of interest:
Terminal <- 1
xinit <- c(1)
f <- list(c(expression(x/Terminal)), c(expression(0)))
F <- 0
division <- 1000
e <- .3
yuima <- setYuima(model = model, sampling = setSampling(Terminal = Terminal, n = division))
yuima <- setFunctional(model = yuima, xinit = xinit, f = f, F = F, e = e)
# look at the model structure
str(yuima@functional)

setHawkes    Constructor of Hawkes model

Description
'setHawkes' constructs an object of class yuima.Hawkes that is a mathematical description of a multivariate Hawkes model.

Usage
setHawkes(lower.var = "0", upper.var = "t", var.dt = "s", process = "N",
          dimension = 1, intensity = "lambda", ExpKernParm1 = "c",
          ExpKernParm2 = "a", const = "nu", measure = NULL, measure.type = NULL)

Arguments
lower.var       lower bound in the integral.
upper.var       upper bound in the integral.
var.dt          time variable.
process         counting process.
dimension       an integer that indicates the components of the counting process.
intensity       intensity process.
ExpKernParm1    kernel parameters.
ExpKernParm2    kernel parameters.
const           constant term in the intensity process.
measure         jump size. By default 1.
measure.type    type. By default code.

Details
By default the object is a univariate Hawkes process.

Value
The function returns an object of class yuima.Hawkes.
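In the univariate exponential-kernel case constructed by setHawkes, the intensity is lambda(t) = nu + sum over past events t_i of c * exp(-a * (t - t_i)). Such a process can be simulated by Ogata's thinning algorithm; the following Python sketch (ours, not yuima code; the parameter values mirror nu1, c11, a11 from the example below) shows the idea:

```python
import math
import random

# Illustrative Ogata thinning sketch (not yuima code) for a univariate Hawkes
# process with exponential kernel:
#   lambda(t) = nu + sum_{t_i < t} c * exp(-a * (t - t_i))
def simulate_hawkes(nu, c, a, T, seed=0):
    rng = random.Random(seed)
    t, events = 0.0, []
    intensity = lambda s: nu + sum(c * math.exp(-a * (s - ti)) for ti in events)
    while True:
        lam_bar = intensity(t)  # valid upper bound: lambda decays between events
        t += rng.expovariate(lam_bar)
        if t > T:
            return events
        if rng.random() * lam_bar <= intensity(t):  # accept w.p. lambda(t)/lam_bar
            events.append(t)

# Parameter values as in the setHawkes example (nu1, c11, a11); c/a < 1 keeps
# the process subcritical.
ev = simulate_hawkes(nu=0.5, c=3.5, a=4.5, T=70.0)
print(len(ev), all(s < t for s, t in zip(ev, ev[1:])))
```

Within yuima the same path would be produced by simulate on a setHawkes object, and the fitted intensity recovered with Intensity.PPR.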
Author(s) YUIMA Team Examples ## Not run: # Definition of a univariate Hawkes model provaHawkes2<-setHawkes() str(provaHawkes2) # Simulation true.par <- list(nu1=0.5, c11=3.5, a11=4.5) simprv1 <- simulate(object = provaHawkes2, true.parameter = true.par, sampling = setSampling(Terminal =70, n=7000)) plot(simprv1) # Computation of intensity lambda1 <- Intensity.PPR(simprv1, param = true.par) plot(lambda1) # qmle res1 <- qmle(simprv1, method="Nelder-Mead", start = true.par) summary(res1) ## End(Not run) setIntegral Integral of Stochastic Differential Equation Description ’setIntegral’ is the constructor of an object of class yuima.Integral Usage setIntegral(yuima, integrand, var.dx, lower.var, upper.var, out.var = "", nrow = 1, ncol = 1) Arguments yuima an object of class yuima.model that is the SDE. integrand A matrix or a vector of strings that describe each component of the integrand. var.dx A label that indicates the variable of integration lower.var A label that indicates the lower variable in the support of integration, by default lower.var = 0. upper.var A label that indicates the upper variable in the support of integration, by default upper.var = t. out.var Label for the output nrow Dimension of output if integrand is a vector of string. ncol Dimension of output if integrand is a vector of string. Value The constructor returns an object of class yuima.Integral.
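Numerically, an object of this kind represents a Riemann-sum approximation along a path. A base-R sketch for an integral of the form used in the Examples of this entry, I(t) = int_0^t b*exp(-a*(t-s))*(X_s - a1*s) dX_s (riemann_integral is an illustrative helper for this sketch, not a yuima function):

```r
# Left-point Riemann-sum approximation of
#   I(t) = int_0^t b*exp(-a*(t-s)) * (X_s - a1*s) dX_s
# on a grid s[1], ..., s[n+1], with increments dX_i = X[i+1] - X[i].
riemann_integral <- function(s, X, b, a, a1) {
  t <- s[length(s)]
  n <- length(s) - 1
  h <- b * exp(-a * (t - s[1:n])) * (X[1:n] - a1 * s[1:n])  # integrand at left points
  sum(h * diff(X))
}

s <- seq(0, 1, by = 0.001)
X <- 2 * s  # deterministic test path, so dX = 2 ds
# with b = 1, a = 0, a1 = 1 the integrand is (2s - s) = s, so I = int_0^1 2*s ds = 1
riemann_integral(s, X, b = 1, a = 0, a1 = 1)  # close to 1
```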
Author(s) The YUIMA Project Team References Yuima Documentation Examples ## Not run: # Definition Model Mod1<-setModel(drift=c("a1"), diffusion = matrix(c("s1"),1,1), solve.variable = c("X"), time.variable = "s") # In this example we define an integral of SDE such as # \[ # I=\int^{t}_{0} b*exp(-a*(t-s))*(X_s-a1*s)dX_s # \] integ <- matrix("b*exp(-a*(t-s))*(X-a1*s)",1,1) Integral <- setIntegral(yuima = Mod1,integrand = integ, var.dx = "X", lower.var = "0", upper.var = "t", out.var = "", nrow =1 ,ncol=1) # Structure of slots is(Integral) # Function h in the above definition Integral@Integral@Integrand@IntegrandList # Dimension of Integrand Integral@Integral@Integrand@dimIntegrand # all parameters are $\left(b,a,a1,s1\right)$ Integral@Integral@param.Integral@allparam # the parameters in the integrand are $\left(b,a,a1\right)$ Integral@Integral@param.Integral@Integrandparam # common parameters are $a1$ Integral@Integral@param.Integral@common # integral variable dX_s Integral@Integral@variable.Integral@var.dx Integral@Integral@variable.Integral@var.time # lower and upper vars Integral@Integral@variable.Integral@lower.var Integral@Integral@variable.Integral@upper.var ## End(Not run) setLaw Random variable constructor Description Constructor of a random variable Usage setLaw(rng = function(n, ...) { NULL }, density = function(x, ...) { NULL }, cdf = function(q, ...) { NULL }, quant = function(p, ...) { NULL }, characteristic = function(u, ...) { NULL }, time.var = "t", dim = NA) Arguments rng function density function cdf function characteristic function quant function time.var label dim label Details Insert additional info Value object of class yuima.law Note Insert additional info Author(s) YUIMA TEAM setMap Map of a Stochastic Differential Equation Description ’setMap’ is the constructor of an object of class yuima.Map that describes a map of a SDE Usage setMap(func, yuima, out.var = "", nrow = 1, ncol = 1) Arguments func a matrix or a vector of strings that describe each component of the map.
yuima an object of class yuima.model that is the SDE. out.var label for the output nrow dimension of Map if func is a vector of string. ncol dimension of output if func is a vector of string. Value The constructor returns an object of class yuima.Map. Author(s) The YUIMA Project Team References Yuima Documentation Examples ## Not run: # Definition of a yuima model mod <- setModel(drift=c("a1", "a2"), diffusion = matrix(c("s1","0","0","s2"),2,2), solve.variable = c("X","Y")) # Definition of a map my.Map <- matrix(c("(X+Y)","-X-Y", "a*exp(X-a1*t)","b*exp(Y-a2*t)"), nrow=2,ncol=2) # Construction of yuima.Map yuimaMap <- setMap(func = my.Map, yuima = mod, out.var = c("f11","f21","f12","f22")) # Simulation of a Map set.seed(123) samp <- setSampling(0, 100,n = 1000) mypar <- list(a=1, b=1, s1=0.1, s2=0.2, a1=0.1, a2=0.1) sim1 <- simulate(object = yuimaMap, true.parameter = mypar, sampling = samp) # plot plot(sim1, ylab = yuimaMap@Output@param@out.var, main = "simulation Map", cex.main = 0.8) ## End(Not run) setModel Basic description of stochastic differential equations (SDE) Description ’setModel’ gives a description of stochastic differential equation with or without jumps of the following form: dXt = a(t,Xt, alpha)dt + b(t,Xt,beta)dWt + c(t,Xt,gamma)dZt, X0=x0 All functions relying on the yuima package will get as much information as possible from the different slots of the yuima-class structure without replicating the same code twice. If there are missing pieces of information, some default values can be assumed. Usage setModel(drift = NULL, diffusion = NULL, hurst = 0.5, jump.coeff = NULL, measure = list(), measure.type = character(), state.variable = "x", jump.variable = "z", time.variable = "t", solve.variable, xinit) Arguments drift a vector of expressions (the default value is 0 when drift=NULL). diffusion a matrix of expressions (the default value is 0 when diffusion=NULL). hurst the Hurst parameter of the Gaussian noise.
If h=0.5, the default, the process is Wiener otherwise it is fractional Brownian motion with that precise value of the Hurst index. Can be set to NA for further specification. jump.coeff a matrix of expressions for the jump component. measure Levy measure for jump variables. measure.type type specification for Levy measures. state.variable a vector of names of the state variables in the drift and diffusion coefficients. jump.variable a vector of names of the jump variables in the jump coefficient. time.variable the name of the time variable. solve.variable a vector of names of the variables in the left-hand-side of the equations in the model; solve.variable equals state.variable as long as we have no ex- ogenous variable other than statistical parameters in the coefficients (drift and diffusion). xinit a vector of numbers identifying the initial value of the solve.variable. Details Please refer to the vignettes and the examples or to the yuimadocs package. An object of yuima.model-class contains several slots: drift: an R expression which specifies the drift coefficient (a vector). diffusion: an R expression which specifies the diffusion coefficient (a matrix). jump.coeff: coefficient of the jump term. measure: the Levy measure of the driving Levy process. measure.type: specifies the type of the measure, such as CP, code or density. See below. parameter: a short name for “parameters”. It is an object of model.parameter-class which is a list of vectors of names of parameters belonging to the single components of the model (drift, diffusion, jump and measure), the names of common parameters and the names of all parameters. For more details see model.parameter-class documentation page. solve.variable: a vector of variable names, each element corresponds to the name of the solution variable (left-hand-side) of each equation in the model, in the corresponding order. state.variable: identifies the state variables in the R expression. By default, it is assumed to be x. 
jump.variable: the variable for the jump coefficient. By default, it is assumed to be z. time: the time variable. By default, it is assumed to be t. solve.variable: used to identify the solution variables in the R expression, i.e. the variable with respect to which the stochastic differential equation has to be solved. By default, it is assumed to be x, otherwise the user can choose any other model specification. noise.number: denotes the number of sources of noise. Currently only for the Gaussian part. equation.number: denotes the dimension of the stochastic differential equation. dimension: the dimensions of the parameters in the parameter slot. xinit: denotes the initial value of the stochastic differential equation. The yuima.model-class structure assumes that the user either uses the default names for state.variable, jump.variable, solution.variable and time.variable or specifies his/her own names. All the rest of the terms in the R expressions are considered as parameters and identified accordingly in the parameter slot. Value model an object of yuima.model-class. Note There may be missing information in the model description. Please contribute with suggestions and fixings. Author(s) The YUIMA Project Team Examples # Ex 1. (One-dimensional diffusion process) # To describe # dXt = -3*Xt*dt + (1/(1+Xt^2+t))dWt, # we set mod1 <- setModel(drift = "-3*x", diffusion = "1/(1+x^2+t)", solve.variable = c("x")) # We may omit the solve.variable; then the default variable x is used mod1 <- setModel(drift = "-3*x", diffusion = "1/(1+x^2+t)") # Look at the model structure by str(mod1) # Ex 2. 
(Two-dimensional diffusion process with three factors) # To describe # dX1t = -3*X1t*dt + dW1t +X2t*dW3t, # dX2t = -(X1t + 2*X2t)*dt + X1t*dW1t + 3*dW2t, # we set the drift coefficient a <- c("-3*x1","-x1-2*x2") # and also the diffusion coefficient b <- matrix(c("1","x1","0","3","x2","0"),2,3) # Then set mod2 <- setModel(drift = a, diffusion = b, solve.variable = c("x1","x2")) # Look at the model structure by str(mod2) # The noise.number is automatically determined by inputting the diffusion matrix expression. # If the dimension of the drift differs from the number of rows of the diffusion, # an error message is returned. # Ex 3. (Process with jumps (compound Poisson process)) # To describe # dXt = -theta*Xt*dt+sigma*dZt mod3 <- setModel(drift=c("-theta*x"), diffusion="sigma", jump.coeff="1", measure=list(intensity="1", df=list("dnorm(z, 0, 1)")), measure.type="CP", solve.variable="x") # Look at the model structure by str(mod3) # Ex 4. (Process with jumps (stable process)) # To describe # dXt = -theta*Xt*dt+sigma*dZt mod4 <- setModel(drift=c("-theta*x"), diffusion="sigma", jump.coeff="1", measure.type="code",measure=list(df="rstable(z,1,0,1,0)"), solve.variable="x") # Look at the model structure by str(mod4) # See rng about other candidates of Levy noises. # Ex 5. (Two-dimensional stochastic differential equation with Levy noise) # To describe # dX1t = (1 - X1t - X2t)*dt+dZ1t # dX2t = (0.5 - X1t - X2t)*dt+dZ2t beta<-c(.5,.5) mu<-c(0,0) Lambda<-matrix(c(1,0,0,1),2,2) mod5 <- setModel(drift=c("1 - x1-x2",".5 - x1-x2"), solve.variable=c("x1","x2"), jump.coeff=Lambda, measure.type="code", measure=list(df="rNIG(z, alpha, beta, delta0, mu, Lambda)")) # Look at the model structure by str(mod5) # Ex 6.
(Process with fractional Gaussian noise) # dYt = 3*Yt*dt + dWt^h mod6 <- setModel(drift="3*y", diffusion=1, hurst=0.3, solve.variable=c("y")) # Look at the model structure by str(mod6) setPoisson Basic constructor for Compound Poisson processes Description ’setPoisson’ constructs a Compound Poisson model specification for a process of the form: Mt = m0+sum_{i=0}^Nt c*Y_{tau_i}, M0=m0 where Nt is a homogeneous or time-inhomogeneous Poisson process, tau_i is the sequence of random times of Nt and Y is a sequence of i.i.d. random jumps. Usage setPoisson(intensity = 1, df = NULL, scale = 1, dimension=1, ...) Arguments intensity either an expression or a numerical value representing the intensity function of the Poisson process Nt. df is the density of jump random variables Y. scale this is the scaling factor c. dimension this is the dimension of the jump component. ... passed to setModel Details An object of yuima.model-class where the model slot is of class yuima.poisson-class. Value model an object of yuima.model-class. Author(s) The YUIMA Project Team Examples ## Not run: Terminal <- 10 samp <- setSampling(T=Terminal,n=1000) # Ex 1. (Simple homogeneous Poisson process) mod1 <- setPoisson(intensity="lambda", df=list("dconst(z,1)")) set.seed(123) y1 <- simulate(mod1, true.par=list(lambda=1),sampling=samp) plot(y1) # scaling the jumps mod2 <- setPoisson(intensity="lambda", df=list("dconst(z,1)"),scale=5) set.seed(123) y2 <- simulate(mod2, true.par=list(lambda=1),sampling=samp) plot(y2) # scaling the jumps through the constant distribution mod3 <- setPoisson(intensity="lambda", df=list("dconst(z,5)")) set.seed(123) y3 <- simulate(mod3, true.par=list(lambda=1),sampling=samp) plot(y3) # Ex 2.
(Time inhomogeneous Poisson process) mod4 <- setPoisson(intensity="beta*(1+sin(lambda*t))", df=list("dconst(z,1)")) set.seed(123) lambda <- 3 beta <- 5 y4 <- simulate(mod4, true.par=list(lambda=lambda,beta=beta),sampling=samp) par(mfrow=c(2,1)) par(mar=c(3,3,1,1)) plot(y4) f <- function(t) beta*(1+sin(lambda*t)) curve(f, 0, Terminal, col="red") # Ex 3. (Time inhomogeneous Compound Poisson process with Gaussian Jumps) mod5 <- setPoisson(intensity="beta*(1+sin(lambda*t))", df=list("dnorm(z,mu,sigma)")) set.seed(123) y5 <- simulate(mod5, true.par=list(lambda=lambda,beta=beta,mu=0, sigma=2),sampling=samp) plot(y5) f <- function(t) beta*(1+sin(lambda*t)) curve(f, 0, Terminal, col="red") ## End(Not run) setPPR Point Process Description Constructor of a Point Process Regression Model Usage setPPR(yuima, counting.var = "N", gFun, Kernel, var.dx = "s", var.dt = "s", lambda.var = "lambda", lower.var = "0", upper.var = "t", nrow = 1, ncol = 1) Arguments yuima an object of yuima.model-class that describes the mathematical features of counting and covariates processes Y[t]=(X[t],N[t]). counting.var a label denoting the name of the counting process. gFun a vector string that is the mathematical expression of the vector function g(t,Y[t-],theta) in the intensity process. Kernel a matrix string that is the kernel kappa(t-s,Y[s],theta) in the definition of the intensity process. var.dx a string denoting the integration variable in the intensity process. var.dt a string denoting the integration time variable in the intensity process. lambda.var name of the intensity process. lower.var Lower bound of the support for the integral in the definition of the intensity process. upper.var Upper bound of the support for the integral in the definition of the intensity process. nrow number of rows in the kernel. ncol number of columns in the kernel. Value An object of yuima.PPR Note There may be missing information in the model description. Please contribute with suggestions and fixes.
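The intensity encoded by gFun and Kernel can be made concrete with a base-R sketch for the power-law kernel used in the Examples of this entry (ppr_intensity and the numeric values are illustrative for this sketch, not part of the package):

```r
# For a counting process N with jump times t_i, a PPR intensity with
# g(t) = mu and kernel kappa(t-s) = alpha/(1+(t-s))^beta reduces to
#   lambda(t) = mu + sum over t_i < t of alpha/(1 + (t - t_i))^beta
ppr_intensity <- function(t, jump.times, mu, alpha, beta) {
  past <- jump.times[jump.times < t]
  mu + sum(alpha / (1 + (t - past))^beta)
}

jt <- c(1, 2, 4)  # jump times of N
ppr_intensity(0.5, jt, mu = 0.3, alpha = 1, beta = 2)  # baseline only: 0.3
ppr_intensity(3, jt, mu = 0.3, alpha = 1, beta = 2)    # 0.3 + 1/3^2 + 1/2^2
```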
Author(s) The YUIMA Project Team Contacts: <NAME> <<EMAIL>> References Insert Here References Examples ## Not run: ## Hawkes process with power law kernel # I. Law Definition: my.rHwk2 <- function(n){ as.matrix(rep(1,n)) } Law.Hwk2 <- setLaw(rng = my.rHwk2, dim = 1) # II. Definition of the counting process N_t mod.Hwk2 <- setModel(drift = c("0"), diffusion = matrix("0",1,1), jump.coeff = matrix(c("1"),1,1), measure = list(df = Law.Hwk2), measure.type = "code", solve.variable = c("N"), xinit=c("0")) # III. Definition of g() and kappa() g.Hwk2 <- "mu" Kern.Hwk2 <- "alpha/(1+(t-s))^beta" # IV. Construction of a yuima.PPR object PPR.Hwk2 <- setPPR(yuima = mod.Hwk2, gFun=g.Hwk2, Kernel = as.matrix(Kern.Hwk2),var.dx = "N") ## End(Not run) setSampling Set sampling information and create a ‘sampling’ object. Description setSampling is a constructor for yuima.sampling-class. Usage setSampling(Initial = 0, Terminal = 1, n = 100, delta, grid, random = FALSE, sdelta=as.numeric(NULL), sgrid=as.numeric(NULL), interpolation="pt" ) Arguments Initial Initial time of the grid. Terminal Terminal time of the grid. n number of time intervals. delta mesh size in case of regular time grid. grid a grid of times for the simulation, possibly empty. random specify if it is random sampling. See Details. sdelta mesh size in case of regular space grid. sgrid a grid in space for the simulation, possibly empty. interpolation a rule of interpolation in case of subsampling. By default, the previous tick interpolation. See Details. Details The function creates an object of type yuima.sampling-class with several slots. Initial: initial time of the grid. Terminal: terminal time of the grid. n: the number of observations - 1. delta: in case of a regular time grid it is the mesh. grid: the grid of times. random: either FALSE or the distribution of the random times. regular: indicator of whether the grid is regular or not. For internal use only. sdelta: in case of a regular space grid it is the mesh.
sgrid: the grid in space. oindex: in case of interpolation, a vector of indexes corresponding to the original observations used for the approximation. interpolation: the name of the interpolation method used. In case of subsampling, the observations are subsampled on some given grid/sgrid or according to some random times. When the original observations do not exist at a given point of the grid they are obtained by some approximation method. Available methods are "pt" or "previous tick" observation method, "nt" or "next tick" observation method, or "linear" interpolation. In case of interpolation, the slot oindex contains the vector of indexes corresponding to the original observations used for the approximation. For the linear method the index corresponds to the leftmost observation. The slot random is used as information in case a grid is already determined (e.g. n or delta, etc., or the grid itself are given) or if some subsampling has occurred or if some particular method which causes a random grid is used in simulation (for example the space discretized Euler scheme). The slot random contains a list of two elements distr and scale, where distr is the distribution of independent random times and scale is either a scaling constant or a scaling function. If the grid of times is deterministic, then random is FALSE. If not specified and random=FALSE, the slot grid is filled automatically by the function. It is eventually modified or created after the call to the function simulate. If delta is not specified, it is calculated as (Terminal-Initial)/n. If delta is specified, the Terminal is adjusted to be equal to Initial+n*delta. The vectors delta, n, Initial and Terminal may have different lengths, but then they are extended to the maximal length to keep consistency. See examples. If grid is specified, it takes precedence over all other arguments. Value An object of type yuima.sampling-class.
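The delta/Terminal arithmetic described above can be illustrated with a small base-R sketch (make_grid is an illustrative helper mimicking the stated rules, not the actual setSampling internals):

```r
# Mimics the rules above: if delta is missing it defaults to (Terminal-Initial)/n;
# if delta is given, Terminal is adjusted to Initial + n*delta.
make_grid <- function(Initial = 0, Terminal = 1, n = 100, delta = NULL) {
  if (is.null(delta)) {
    delta <- (Terminal - Initial) / n
  } else {
    Terminal <- Initial + n * delta
  }
  seq(Initial, Terminal, by = delta)
}

g1 <- make_grid(Terminal = 1, n = 4)   # delta defaults to 0.25
g2 <- make_grid(n = 4, delta = 0.25)   # Terminal adjusted to 0 + 4*0.25 = 1
length(g1)   # n + 1 = 5 grid points
tail(g2, 1)  # 1
```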
Author(s) The YUIMA Project Team Examples samp <- setSampling(Terminal=1, n=1000) str(samp) samp <- setSampling(Terminal=1, n=1000, delta=0.3) str(samp) samp <- setSampling(Terminal=1, n=1000, delta=c(0.1,0.3)) str(samp) samp <- setSampling(Terminal=1:3, n=1000) str(samp) setYuima Creates a "yuima" object by combining "model", "data", "sampling", "characteristic" and "functional" slots. Description setYuima constructs an object of yuima-class. Usage setYuima(data, model, sampling, characteristic, functional) Arguments data an object of yuima.data-class. model an object of yuima.model-class. sampling an object of yuima.sampling-class. characteristic an object of yuima.characteristic-class. functional an object of class yuima.functional-class. Details The yuima-class object is the main object of the yuima package. Some of the slots can be missing. The slot data contains the data, either empirical or simulated. The slot model contains the description of the (statistical) model which is used to generate the data via different simulation schemes, to draw inference from the data or both. The sampling slot contains information on how the data have been collected or how they should be simulated. The slot characteristic contains information on PLEASE FINISH THIS. The slot functional contains information on PLEASE FINISH THIS. Please refer to the vignettes and the examples in the yuimadocs package for more information. Value an object of yuima-class.
Author(s) The YUIMA Project Team Examples # Creation of a yuima object with all slots for a # stochastic differential equation # dXt^e = -theta2 * Xt^e * dt + theta1 * dWt diffusion <- matrix(c("theta1"), 1, 1) drift <- c("-1*theta2*x") ymodel <- setModel(drift=drift, diffusion=diffusion) n <- 100 ysamp <- setSampling(Terminal=1, n=n) yuima <- setYuima(model=ymodel, sampling=ysamp) str(yuima) simBmllag Simulation of increments of bivariate Brownian motions with multi-scale lead-lag relationships Description This function simulates increments of bivariate Brownian motions with multi-scale lead-lag relationships introduced in Hayashi and Koike (2018a) by the multi-dimensional circulant embedding method of Chan and Wood (1999). Usage simBmllag(n, J, rho, theta, delta = 1/2^(J + 1), imaginary = FALSE) simBmllag.coef(n, J, rho, theta, delta = 1/2^(J + 1)) Arguments n the number of increments to be simulated. J a positive integer to determine the finest time resolution: 2^(-J-1) is regarded as the finest time resolution. rho a vector of scale-by-scale correlation coefficients. If length(rho) < J, zeros are appended to make the length equal to J. theta a vector of scale-by-scale lead-lag parameters. If length(theta) < J, zeros are appended to make the length equal to J. delta the step size of time increments. This must be smaller than or equal to 2^(-J-1). imaginary logical. See ‘Details’. Details Let B(t) be a bivariate Gaussian process with stationary increments such that its marginal processes are standard Brownian motions and its cross-spectral density is given by Eq.(14) of Hayashi and Koike (2018a). The function simBmllag simulates the increments B(iδ)−B((i−1)δ), i = 1, . . . , n. The parameters R_j and theta_j in Eq.(14) of Hayashi and Koike (2018a) are specified by rho and theta, while δ and n are specified by delta and n, respectively. Simulation is implemented by the multi-dimensional circulant embedding algorithm of Chan and Wood (1999).
The last step of this algorithm returns a bivariate complex-valued sequence whose real and imaginary parts are independent and have the same law as B(kδ) − B((k−1)δ), k = 1, . . . , n; see Step 3 of Chan and Wood (1999, Section 3.2). If imaginary = TRUE, the function simBmllag directly returns this bivariate complex-valued sequence, so we obtain two sets of simulated increments of B(t) by taking its real and imaginary parts. If imaginary = FALSE (default), the function returns only the real part of this sequence, so we directly obtain simulated increments of B(t). The function simBmllag.coef is internally used to compute the sequence of coefficient matrices R(k)Λ(k)^{1/2} in Step 2 of Chan and Wood (1999, Section 3.2). This procedure can be implemented before generating random numbers. Since this step typically takes the most computational cost, this function is useful to reduce computational time when we conduct a Monte Carlo simulation for (B(kδ) − B((k−1)δ))_{k=1}^n with a fixed set of parameters. See ‘Examples’ for how to use this function to simulate (B(kδ) − B((k−1)δ))_{k=1}^n. Value simBmllag returns a n x 2 matrix if imaginary = FALSE (default). Otherwise, simBmllag returns a complex-valued n x 2 matrix. simBmllag.coef returns a complex-valued m x 2 x 2 array, where m is an integer determined by the rule described at the end of Chan and Wood (1999, Section 2.3). Note There are typos in the first and second displayed equations on page 1221 of Hayashi and Koike (2018a): The j-th summands on their right hand sides should be multiplied by 2^j. Author(s) <NAME> with YUIMA project Team References <NAME>. and <NAME>. (1999). Simulation of stationary Gaussian vector fields, Statistics and Computing, 9, 265–268. <NAME>. and <NAME>. (2018a). Wavelet-based methods for high-frequency lead-lag analysis, SIAM Journal on Financial Mathematics, 9, 1208–1248. <NAME>. and <NAME>. (2018b). Multi-scale analysis of lead-lag relationships in high-frequency financial markets.
doi:10.48550/arXiv.1708.03992. See Also wllag Examples ## Example 1 ## Simulation setting of Hayashi and Koike (2018a, Section 4). n <- 15000 J <- 13 rho <- c(0.3,0.5,0.7,0.5,0.5,0.5,0.5,0.5) theta <- c(-1,-1, -2, -2, -3, -5, -7, -10)/2^(J + 1) set.seed(123) dB <- simBmllag(n, J, rho, theta) str(dB) n/2^(J + 1) # about 0.9155 sum(dB[ ,1]^2) # should be close to n/2^(J + 1) sum(dB[ ,2]^2) # should be close to n/2^(J + 1) # Plot the sample path of the process B <- apply(dB, 2, "diffinv") # construct the sample path Time <- seq(0, by = 1/2^(J+1), length.out = n) # Time index plot(zoo(B, Time), main = "Sample path of B(t)") # Using simBmllag.coef to implement the same simulation a <- simBmllag.coef(n, J, rho, theta) m <- dim(a)[1] set.seed(123) z1 <- rnorm(m) + 1i * rnorm(m) z2 <- rnorm(m) + 1i * rnorm(m) y1 <- a[ ,1,1] * z1 + a[ ,1,2] * z2 y2 <- a[ ,2,1] * z1 + a[ ,2,2] * z2 dW <- mvfft(cbind(y1, y2))[1:n, ]/sqrt(m) dB2 <- Re(dW) plot(diff(dB - dB2)) # identically equal to zero ## Example 2 ## Simulation Scenario 2 of Hayashi and Koike (2018b, Section 5). 
# Simulation of Bm driving the log-price processes n <- 30000 J <- 14 rho <- c(0.3,0.5,0.7,0.5,0.5,0.5,0.5,0.5) theta <- c(-1,-1, -2, -2, -3, -5, -7, -10)/2^(J + 1) dB <- simBmllag(n, J, rho, theta) # Simulation of Bm driving the volatility processes R <- -0.5 # leverage parameter delta <- 1/2^(J+1) # step size of time increments dW1 <- R * dB[ ,1] + sqrt(1 - R^2) * rnorm(n, sd = sqrt(delta)) dW2 <- R * dB[ ,2] + sqrt(1 - R^2) * rnorm(n, sd = sqrt(delta)) # Simulation of the model by the simulate function dW <- rbind(dB[,1], dB[,2], dW1, dW2) # increments of the driving Bm # defining the yuima object drift <- c(0, 0, "kappa*(eta - x3)", "kappa*(eta - x4)") diffusion <- diag(4) diag(diffusion) <- c("sqrt(max(x3,0))", "sqrt(max(x4,0))", "xi*sqrt(max(x3,0))", "xi*sqrt(max(x4,0))") xinit <- c(0,0,"rgamma(1, 2*kappa*eta/xi^2,2*kappa/xi^2)", "rgamma(1, 2*kappa*eta/xi^2,2*kappa/xi^2)") mod <- setModel(drift = drift, diffusion = diffusion, xinit = xinit, state.variable = c("x1","x2","x3","x4")) samp <- setSampling(Terminal = n * delta, n = n) yuima <- setYuima(model = mod, sampling = samp) # simulation result <- simulate(yuima, increment.W = dW, true.parameter = list(kappa = 5, eta = 0.04, xi = 0.5)) plot(result) simCIR Simulation of the Cox-Ingersoll-Ross diffusion Description This is a function to simulate a Cox-Ingersoll-Ross process given via the SDE dXt = (α − βXt)dt + √(γXt)dWt with a Brownian motion (Wt)t≥0 and parameters α, β, γ > 0. We use an exact CIR simulator for (X_{t_j})_{j=1,...,n} through the non-central chi-square distribution. Usage simCIR(time.points, n, h, alpha, beta, gamma, equi.dist=FALSE ) Arguments alpha, beta, gamma numbers given as in the SDE above. equi.dist a logical value indicating whether the sampling points are equidistant (default equi.dist=FALSE). n a number indicating the quantity of sampling points in the case equi.dist=TRUE. h a number indicating the step size in the case equi.dist=TRUE.
time.points a numeric vector of sampling times (necessary if equi.dist=FALSE). Value A numeric matrix containing the realization of (t0, Xt0), . . . , (tn, Xtn) with tj denoting the j-th sampling time. Author(s) <NAME> Contacts: <<EMAIL>> References <NAME> and <NAME>. Chi-square simulation of the CIR process and the Heston model. Int. J. Theor. Appl. Finance, 16(3):1350014, 38, 2013. Examples ## You always need the parameters alpha, beta and gamma ## Additionally e.g. time.points data <- simCIR(alpha=3,beta=1,gamma=1, time.points = c(0,0.1,0.2,0.25,0.3)) ## or n, number of observations, h, distance between observations, ## and equi.dist=TRUE data <- simCIR(alpha=3,beta=1,gamma=1,n=1000,h=0.1,equi.dist=TRUE) plot(data[1,],data[2,], type="l",col=4) ## If you input every value and equi.dist=TRUE, time.points are not ## used for the simulations. data <- simCIR(alpha=3,beta=1,gamma=1,n=1000,h=0.1, time.points = c(0,0.1,0.2,0.25,0.3), equi.dist=TRUE) ## If you leave equi.dist=FALSE, the parameters n and h are not ## used for the simulation. data <- simCIR(alpha=3,beta=1,gamma=1,n=1000,h=0.1, time.points = c(0,0.1,0.2,0.25,0.3)) simFunctional Calculate the value of functional Description Calculate the value of a functional associated with an SDE by the Euler scheme. Usage simFunctional(yuima, expand.var="e") Fnorm(yuima, expand.var="e") F0(yuima, expand.var="e") Arguments yuima a yuima object containing model, functional and data. expand.var default expand.var="e". Details Calculate the value of the functional of interest. Fnorm returns the normalized one, and F0 returns the value for the case small parameter epsilon = 0. In simFunctional and Fnorm, yuima MUST contain the ’data’ slot (X in legacy version) Value Fe a real value Note we need to fix this routine.
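For intuition, the Euler-scheme functional evaluation can be sketched in base R for the Black-Scholes example of this entry in the deterministic case epsilon = 0, where F0 has the closed form (exp(T)-1)/T (euler_F0 is an illustrative helper for this sketch, not the package routine):

```r
# Euler scheme for dXt = Xt dt (epsilon = 0, no noise), followed by the
# time-average functional (1/T) * int_0^T Xt dt via a left-point sum.
euler_F0 <- function(Terminal = 1, n = 1000, xinit = 1) {
  dt <- Terminal / n
  X <- numeric(n + 1)
  X[1] <- xinit
  for (i in 1:n) X[i + 1] <- X[i] + X[i] * dt  # Euler step without diffusion
  mean(X[1:n])  # (1/T) * sum of X_i * dt equals the mean of the left points
}

euler_F0(Terminal = 1, n = 1000)  # close to (exp(1) - 1)/1 = 1.71828...
```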
Author(s) YUIMA Project Team Examples set.seed(123) # to the Black-Scholes economy: # dXt^e = Xt^e * dt + e * Xt^e * dWt diff.matrix <- matrix( c("x*e"), 1,1) model <- setModel(drift = c("x"), diffusion = diff.matrix) # call option is evaluated by averaging # max{ (1/T)*int_0^T Xt^e dt, 0}, the first argument is the functional of interest: Terminal <- 1 xinit <- c(1) f <- list( c(expression(x/Terminal)), c(expression(0))) F <- 0 division <- 1000 e <- .3 samp <- setSampling(Terminal = Terminal, n = division) yuima <- setYuima(model = model,sampling = samp) yuima <- setFunctional( yuima, xinit=xinit, f=f,F=F,e=e) # evaluate the functional value yuima <- simulate(yuima,xinit=xinit,true.par=e) Fe <- simFunctional(yuima) Fe Fenorm <- Fnorm(yuima) Fenorm simulate Simulator function for multi-dimensional stochastic processes Description Simulate multi-dimensional stochastic processes. Usage simulate(object, nsim=1, seed=NULL, xinit, true.parameter, space.discretized = FALSE, increment.W = NULL, increment.L = NULL, method = "euler", hurst, methodfGn = "WoodChan", sampling=sampling, subsampling=subsampling, ...) Arguments object a yuima-class, yuima.model-class or yuima.carma-class object. xinit initial value vector of state variables. true.parameter named list of parameters. space.discretized flag to switch to space-discretized Euler Maruyama method. increment.W to specify Wiener increments for each time tick in advance. increment.L to specify Levy increments for each time tick in advance. method a string variable for the simulation scheme. The default value method="euler" uses the Euler discretization for the simulation of a sample path. nsim Not used yet. Included only to match the standard generic in package stats. seed Not used yet. Included only to match the standard generic in package stats. hurst value of Hurst parameter for simulation of the fGn. Overrides the specified hurst slot. methodfGn simulation methods for fractional Gaussian noise. ...
passed to setSampling to create a sampling sampling a yuima.sampling-class object. subsampling a yuima.sampling-class object. Details simulate is a function to solve SDE using the Euler-Maruyama method. This function supports the usual Euler-Maruyama method for multidimensional SDE, and the space discretized Euler-Maruyama method for one dimensional SDE. It simulates solutions of stochastic differential equations with Gaussian noise, fractional Gaussian noise with/without jumps. If a yuima-class object is passed as input, then the sampling information is taken from the slot sampling of the object. If a yuima.carma-class object, a yuima.model-class object or a yuima-class object with missing sampling slot is passed as input the sampling argument is used. If this argument is missing then the sampling structure is constructed from Initial, Terminal, etc. arguments (see setSampling for details on how to use these arguments). For a COGARCH(p,q) process setting method=mixed implies that the simulation scheme is based on the solution of the state space process. For the case in which the underlying noise is a compound Poisson Levy process, the trajectory is built first by simulation of the jump times, then the quadratic variation and the increment noise are simulated exactly at the jump times. For the other Levy processes, the simulation scheme is based on the discretization of the state space process solution. Value yuima a yuima-class object. Note In the simulation of multi-variate Levy processes, the values of parameters have to be defined outside of the simulate function in advance (see examples below). Author(s) The YUIMA Project Team Examples set.seed(123) # Path-simulation for 1-dim diffusion process. # dXt = -0.3*Xt*dt + dWt mod <- setModel(drift="-0.3*y", diffusion=1, solve.variable=c("y")) str(mod) # Set the model in a `yuima' object with a sampling scheme.
T <- 1
n <- 1000
samp <- setSampling(Terminal=T, n=n)
ou <- setYuima(model=mod, sampling=samp)
# Solve SDEs using Euler-Maruyama method.
par(mfrow=c(3,1))
ou <- simulate(ou, xinit=1)
plot(ou)
set.seed(123)
ouB <- simulate(mod, xinit=1, sampling=samp)
plot(ouB)
set.seed(123)
ouC <- simulate(mod, xinit=1, Terminal=1, n=1000)
plot(ouC)
par(mfrow=c(1,1))
# Path-simulation for 1-dim diffusion process.
# dXt = theta*Xt*dt + dWt
mod1 <- setModel(drift="theta*y", diffusion=1, solve.variable=c("y"))
str(mod1)
ou1 <- setYuima(model=mod1, sampling=samp)
# Solve SDEs using Euler-Maruyama method.
ou1 <- simulate(ou1, xinit=1, true.p = list(theta=-0.3))
plot(ou1)

## Not run:
# A multi-dimensional (correlated) diffusion process.
# To describe the following model:
# X=(X1,X2,X3); dXt = U(t,Xt)dt + V(t)dWt
# For drift coefficient
U <- c("-x1","-2*x2","-t*x3")
# For diffusion coefficient of X1
v1 <- function(t) 0.5*sqrt(t)
# For diffusion coefficient of X2
v2 <- function(t) sqrt(t)
# For diffusion coefficient of X3
v3 <- function(t) 2*sqrt(t)
# correlation
rho <- function(t) sqrt(1/2)
# coefficient matrix for diffusion term
V <- matrix(c("v1(t)",
              "v2(t) * rho(t)",
              "v3(t) * rho(t)",
              "",
              "v2(t) * sqrt(1-rho(t)^2)",
              "",
              "",
              "",
              "v3(t) * sqrt(1-rho(t)^2)"), 3, 3)
# Model sde using "setModel" function
cor.mod <- setModel(drift = U, diffusion = V,
                    state.variable=c("x1","x2","x3"),
                    solve.variable=c("x1","x2","x3"))
str(cor.mod)
# Set the `yuima' object.
cor.samp <- setSampling(Terminal=T, n=n)
cor <- setYuima(model=cor.mod, sampling=cor.samp)
# Solve SDEs using Euler-Maruyama method.
set.seed(123)
cor <- simulate(cor)
plot(cor)

# A non-negative process (CIR process)
# dXt = a*(c-Xt)*dt + b*sqrt(Xt)*dWt
sq <- function(x){y = 0; if(x>0){y = sqrt(x)}; return(y)}
model <- setModel(drift="0.8*(0.2-x)", diffusion="0.5*sq(x)", solve.variable=c("x"))
T <- 10
n <- 1000
sampling <- setSampling(Terminal=T, n=n)
yuima <- setYuima(model=model, sampling=sampling)
cir <- simulate(yuima, xinit=0.1)
plot(cir)

# solve SDEs using space-discretized Euler-Maruyama method
v4 <- function(t,x){
  return(0.5*(1-x)*sqrt(t))
}
mod_sd <- setModel(drift = c("0.1*x1", "0.2*x2"),
                   diffusion = c("v1(t)","v4(t,x2)"),
                   solve.var=c("x1","x2"))
samp_sd <- setSampling(Terminal=T, n=n)
sd <- setYuima(model=mod_sd, sampling=samp_sd)
sd <- simulate(sd, xinit=c(1,1), space.discretized=TRUE)
plot(sd)

## example of simulation by specifying increments
## Path-simulation for 1-dim diffusion process
## dXt = -0.3*Xt*dt + dWt
mod <- setModel(drift="-0.3*y", diffusion=1, solve.variable=c("y"))
str(mod)
## Set the model in a `yuima' object with a sampling scheme.
Terminal <- 1
n <- 500
mod.sampling <- setSampling(Terminal=Terminal, n=n)
yuima.mod <- setYuima(model=mod, sampling=mod.sampling)
## use original increment
delta <- Terminal/n
my.dW <- rnorm(n * yuima.mod@model@noise.number, 0, sqrt(delta))
my.dW <- t(matrix(my.dW, nrow=n, ncol=yuima.mod@model@noise.number))
## Solve SDEs using Euler-Maruyama method.
yuima.mod <- simulate(yuima.mod, xinit=1,
                      space.discretized=FALSE,
                      increment.W=my.dW)
if( !is.null(yuima.mod) ){
  dev.new()
  # x11()
  plot(yuima.mod)
}

## A multi-dimensional (correlated) diffusion process.
## To describe the following model:
## X=(X1,X2,X3); dXt = U(t,Xt)dt + V(t)dWt
## For drift coefficient
U <- c("-x1","-2*x2","-t*x3")
## For process 1
diff.coef.1 <- function(t) 0.5*sqrt(t)
## For process 2
diff.coef.2 <- function(t) sqrt(t)
## For process 3
diff.coef.3 <- function(t) 2*sqrt(t)
## correlation
cor.rho <- function(t) sqrt(1/2)
## coefficient matrix for diffusion term
V <- matrix(c("diff.coef.1(t)",
              "diff.coef.2(t) * cor.rho(t)",
              "diff.coef.3(t) * cor.rho(t)",
              "",
              "diff.coef.2(t)",
              "diff.coef.3(t) * sqrt(1-cor.rho(t)^2)",
              "diff.coef.1(t) * cor.rho(t)",
              "",
              "diff.coef.3(t)"), 3, 3)
## Model sde using "setModel" function
cor.mod <- setModel(drift = U, diffusion = V,
                    solve.variable=c("x1","x2","x3"))
str(cor.mod)
## Set the `yuima' object.
set.seed(123)
obj.sampling <- setSampling(Terminal=Terminal, n=n)
yuima.obj <- setYuima(model=cor.mod, sampling=obj.sampling)
## use original dW
my.dW <- rnorm(n * yuima.obj@model@noise.number, 0, sqrt(delta))
my.dW <- t(matrix(my.dW, nrow=n, ncol=yuima.obj@model@noise.number))
## Solve SDEs using Euler-Maruyama method.
yuima.obj.path <- simulate(yuima.obj, space.discretized=FALSE, increment.W=my.dW)
if( !is.null(yuima.obj.path) ){
  dev.new()
  # x11()
  plot(yuima.obj.path)
}

##:: sample for Levy process ("CP" type)
## specify the jump term as c(x,t)dz
obj.model <- setModel(drift=c("-theta*x"), diffusion="sigma",
                      jump.coeff="1",
                      measure=list(intensity="1", df=list("dnorm(z, 0, 1)")),
                      measure.type="CP", solve.variable="x")
##:: Parameters
lambda <- 3
theta <- 6
sigma <- 1
xinit <- runif(1)
N <- 500
h <- N^(-0.7)
eps <- h/50
n <- 50*N
T <- N*h
set.seed(123)
obj.sampling <- setSampling(Terminal=T, n=n)
obj.yuima <- setYuima(model=obj.model, sampling=obj.sampling)
X <- simulate(obj.yuima, xinit=xinit, true.parameter=list(theta=theta, sigma=sigma))
dev.new()
plot(X)

##:: sample for Levy process ("CP" type)
## specify the jump term as c(x,t,z)
## same plot as above example
obj.model <- setModel(drift=c("-theta*x"), diffusion="sigma",
                      jump.coeff="z",
                      measure=list(intensity="1", df=list("dnorm(z, 0, 1)")),
                      measure.type="CP", solve.variable="x")
set.seed(123)
obj.sampling <- setSampling(Terminal=T, n=n)
obj.yuima <- setYuima(model=obj.model, sampling=obj.sampling)
X <- simulate(obj.yuima, xinit=xinit, true.parameter=list(theta=theta, sigma=sigma))
dev.new()
plot(X)

##:: sample for Levy process ("code" type)
## dX_{t} = -x dt + dZ_t
obj.model <- setModel(drift="-x", xinit=1, jump.coeff="1",
                      measure.type="code", measure=list(df="rIG(z, 1, 0.1)"))
obj.sampling <- setSampling(Terminal=10, n=10000)
obj.yuima <- setYuima(model=obj.model, sampling=obj.sampling)
result <- simulate(obj.yuima)
dev.new()
plot(result)

##:: sample for multidimensional Levy process ("code" type)
## dX = (theta - A X)dt + dZ,
## theta=(theta_1, theta_2) = c(1,.5)
## A=[a_ij], a_11 = 2, a_12 = 1, a_21 = 1, a_22 = 2
require(yuima)
x0 <- c(1,1)
beta <- c(.1,.1)
mu <- c(0,0)
delta0 <- 1
alpha <- 1
Lambda <- matrix(c(1,0,0,1),2,2)
cc <- matrix(c(1,0,0,1),2,2)
obj.model <- setModel(drift=c("1 - 2*x1-x2",".5-x1-2*x2"), xinit=x0,
solve.variable=c("x1","x2"), jump.coeff=cc, measure.type="code",
                      measure=list(df="rNIG(z, alpha, beta, delta0, mu, Lambda)"))
obj.sampling <- setSampling(Terminal=10, n=10000)
obj.yuima <- setYuima(model=obj.model, sampling=obj.sampling)
result <- simulate(obj.yuima, true.par=list(alpha=alpha,
                   beta=beta, delta0=delta0, mu=mu, Lambda=Lambda))
plot(result)

# Path-simulation for a CARMA(p=2,q=1) model driven by a Brownian motion:
carma1 <- setCarma(p=2, q=1)
str(carma1)
# Set the sampling scheme
samp <- setSampling(Terminal=100, n=10000)
# Set the values of the model parameters
par.carma1 <- list(b0=1, b1=2.8, a1=2.66, a2=0.3)
set.seed(123)
sim.carma1 <- simulate(carma1, true.parameter=par.carma1, sampling=samp)
plot(sim.carma1)

# Path-simulation for a CARMA(p=2,q=1) model driven by a compound Poisson process.
carma1 <- setCarma(p=2, q=1,
                   measure=list(intensity="1", df=list("dnorm(z, 0, 1)")),
                   measure.type="CP")
# Set sampling scheme
samp <- setSampling(Terminal=100, n=10000)
# Fix carma parameters
par.carma1 <- list(b0=1, b1=2.8, a1=2.66, a2=0.3)
set.seed(123)
sim.carma1 <- simulate(carma1, true.parameter=par.carma1, sampling=samp)
plot(sim.carma1)

## End(Not run)

snr    Calculating self-normalized residuals for SDEs

Description
Calculate self-normalized residuals based on the Gaussian quasi-likelihood estimator.

Usage
snr(yuima, start, lower, upper, withdrift)

Arguments
yuima    a yuima object.
lower    a named list for specifying lower bounds of parameters.
upper    a named list for specifying upper bounds of parameters.
start    initial values to be passed to the optimizer.
withdrift    use drift information for constructing self-normalized residuals. By default, withdrift = FALSE.

Details
This function calculates the Gaussian quasi maximum likelihood estimator and associated self-normalized residuals.
Value
estimator    Gaussian quasi maximum likelihood estimator
snr    self-normalized residuals based on the Gaussian quasi maximum likelihood estimator

Author(s)
The YUIMA Project Team
Contacts: <NAME> <<EMAIL>>

References
<NAME>. (2013). Asymptotics for functionals of self-normalized residuals of discretely observed stochastic processes. Stochastic Processes and their Applications 123 (2013), 2752–2778.

Examples
## Not run:
# Test code (1. diffusion case)
yuima.mod <- setModel(drift="-theta*x", diffusion="theta1/sqrt(1+x^2)")
n <- 10000
ysamp <- setSampling(Terminal=n^(1/3), n=n)
yuima <- setYuima(model=yuima.mod, sampling=ysamp)
set.seed(123)
yuima <- simulate(yuima, xinit=0, true.parameter = list(theta=2, theta1=3))
start=list(theta=3, theta1=0.5)
lower=list(theta=1, theta1=0.3)
upper=list(theta=5, theta1=3)
res <- snr(yuima, start, lower, upper)
str(res)

# Test code (2. jump diffusion case)
a <- 3
b <- 5
mod <- setModel(drift="10-theta*x",
                # drift="10-3*x/(1+x^2)",
                diffusion="theta1*(2+x^2)/(1+x^2)",
                jump.coeff="1",
                # measure=list(intensity="10", df=list("dgamma(z, a, b)")),
                measure=list(intensity="10", df=list("dunif(z, a, b)")),
                measure.type="CP")
T <- 100    ## Terminal
n <- 10000  ## generation size
samp <- setSampling(Terminal=T, n=n)  ## define sampling scheme
yuima <- setYuima(model = mod, sampling = samp)
yuima <- simulate(yuima, xinit=1,
                  true.parameter=list(theta=2, theta1=sqrt(2), a=a, b=b),
                  sampling = samp)
start=list(theta=3, theta1=0.5)
lower=list(theta=1, theta1=0.3)
upper=list(theta=5, theta1=3)
res <- snr(yuima, start, lower, upper)
str(res)
## End(Not run)

spectralcov    Spectral Method for Cumulative Covariance Estimation

Description
This function implements the local method of moments proposed in Bibinger et al. (2014) to estimate the cumulative covariance matrix of a non-synchronously observed multi-dimensional Ito process with noise.
Usage
lmm(x, block = 20, freq = 50, freq.p = 10, K = 4, interval = c(0, 1),
    Sigma.p = NULL, noise.var = "AMZ", samp.adj = "direct", psd = TRUE)

Arguments
x    an object of yuima-class or yuima.data-class.
block    a positive integer indicating the number of the blocks which the observation interval is split into.
freq    a positive integer indicating the number of the frequencies used to compute the final estimator.
freq.p    a positive integer indicating the number of the frequencies used to compute the pilot estimator for the spot covariance matrix (corresponding to the number Jn in Eq.(29) from Altmeyer and Bibinger (2015)).
K    a positive integer indicating the number of the blocks used to compute the pilot estimator for the spot covariance matrix (corresponding to the number Kn in Eq.(29) from Altmeyer and Bibinger (2015)).
interval    a vector indicating the observation interval. The first component represents the initial value and the second component represents the terminal value.
Sigma.p    a block by dim(x) matrix giving the pilot estimates of the spot covariance matrix plugged into the optimal weight matrices. If NULL (the default), it is computed by using formula (29) from Altmeyer and Bibinger (2015).
noise.var    character string giving the method to estimate the noise variances. There are several options: "AMZ" (the default) uses equation (3.7) from Gatheral and Oomen (2010), i.e. the quasi-maximum likelihood estimator proposed by Ait-Sahalia et al. (2005) (see also Xiu (2010)). "BR" uses equation (3.9) from Gatheral and Oomen (2010), i.e. the sample average of the squared returns divided by 2, the estimator proposed by Bandi and Russell (2006). "O" uses equation (3.8) from Gatheral and Oomen (2010), i.e. another method-of-moments estimator proposed by Oomen (2006). It is also possible to directly specify the noise variances by setting this argument to a numeric vector.
In this case the i-th component of noise.var must indicate the variance of the noise for the i-th component of the observation process.
samp.adj    character string giving the method to adjust the effect of the sampling times on the variances of the spectral statistics for the noise part. The default method "direct" uses the local sums of the squares of the one-skip differences of the sampling times divided by 2, which directly appears in the representation of the variances of the spectral statistics for the noise part. Another choice is "QVT", which uses the local quadratic variations of time as in Altmeyer and Bibinger (2015) and Bibinger et al. (2014).
psd    logical. If TRUE (the default), the estimated covariance matrix and variance-covariance matrix are converted to their spectral absolute values to ensure their positive semi-definiteness. This procedure does not matter in terms of the asymptotic theory.

Details
The default implementation is the adaptive version of the local method of moments estimator, which is only based on observation data. It is possible to implement oracle versions of the estimator by setting user-specified Sigma.p and/or noise.var. An example is given below.

Value
An object of class "yuima.specv", which is a list with the following elements:
covmat    the estimated covariance matrix
vcov    the estimated variance-covariance matrix of as.vector(covmat)
Sigma.p    the pilot estimates of the spot covariance matrix

Author(s)
<NAME> with YUIMA Project Team

References
<NAME>., <NAME>. and <NAME>. (2005) How often to sample a continuous-time process in the presence of market microstructure noise, The Review of Financial Studies, 18, 351–416.
<NAME>. and <NAME>. (2015) Functional stable limit theorems for quasi-efficient spectral covolatility estimators, to appear in Stochastic Processes and their Applications, doi:10.1016/j.spa.2015.07.009.
<NAME>. and <NAME>.
(2006) Separating microstructure noise from volatility, Journal of Financial Economics, 79, 655–692.
<NAME>., <NAME>., <NAME>. and <NAME>. (2014) Estimating the quadratic covariation matrix from noisy observations: local method of moments and efficiency, Annals of Statistics, 42, 80–114.
<NAME>. and <NAME>. (2010) Zero-intelligence realized variance estimation, Finance and Stochastics, 14, 249–283.
<NAME>. (2006) Properties of realized variance under alternative sampling schemes, Journal of Business and Economic Statistics, 24, 219–237.
<NAME>. (2011) Asymptotic equivalence for inference on the volatility from noisy observations, Annals of Statistics, 39, 772–802.
<NAME>. (2010) Quasi-maximum likelihood estimation of volatility with high frequency data, Journal of Econometrics, 159, 235–250.

See Also
cce, setData

Examples
# Example. One-dimensional and regular sampling case
# Here the simulated model is taken from Reiss (2011)
## Set a model
sigma <- function(t) sqrt(0.02 + 0.2 * (t - 0.5)^4)
modI <- setModel(drift = 0, diffusion = "sigma(t)")
## Generate a path of the process
set.seed(117)
n <- 12000
yuima.samp <- setSampling(Terminal = 1, n = n)
yuima <- setYuima(model = modI, sampling = yuima.samp)
yuima <- simulate(yuima, xinit = 0)
delta <- 0.01 # standard deviation of microstructure noise
yuima <- noisy.sampling(yuima, var.adj = delta^2) # generate noisy observations
plot(yuima)
## Estimation of the integrated volatility
est <- lmm(yuima)
est
## True integrated volatility and theoretical standard error
disc <- seq(0, 1, by = 1/n)
cat("true integrated volatility\n")
print(mean(sigma(disc[-1])^2))
cat("theoretical standard error\n")
print(sqrt(8*delta*mean(sigma(disc[-1])^3))/n^(1/4))
# Plotting the pilot estimate of the spot variance path
block <- 20
G <- seq(0,1,by=1/block)[1:block]
Sigma.p <- sigma(G)^2 # true spot variance
plot(zoo(Sigma.p, G), col = "blue", xlab = "time",
     ylab = expression(sigma(t)^2))
lines(zoo(est$Sigma.p, G))
## "Oracle" implementation
lmm(yuima, block = block, Sigma.p = Sigma.p, noise.var = delta^2)

# Example. Multi-dimensional case
# We simulate noisy observations of a correlated bivariate Brownian motion
# First we examine the regular sampling case since in this situation the theoretical standard
# error can easily be computed via the formulae given in p.88 of Bibinger et al. (2014)
## Set a model
drift <- c(0,0)
rho <- 0.5 # correlation
diffusion <- matrix(c(1,rho,0,sqrt(1-rho^2)),2,2)
modII <- setModel(drift=drift, diffusion=diffusion,
                  state.variable=c("x1","x2"), solve.variable=c("x1","x2"))
## Generate a path of the latent process
set.seed(123)
## We regard the unit interval as 6.5 hours and generate the path on it
## with the step size equal to 1 second
n <- 8000
yuima.samp <- setSampling(Terminal = 1, n = n)
yuima <- setYuima(model = modII, sampling = yuima.samp)
yuima <- simulate(yuima)
## Generate noisy observations
eta <- 0.05
yuima <- noisy.sampling(yuima, var.adj = diag(eta^2, 2))
plot(yuima)
## Estimation of the integrated covariance matrix
est <- lmm(yuima)
est
## Theoretical standard error
a <- sqrt(4 * eta * (sqrt(1 + rho) + sqrt(1 - rho)))
b <- sqrt(2 * eta * ((1 + rho)^(3/2) + (1 - rho)^(3/2)))
cat("theoretical standard error\n")
print(matrix(c(a,b,b,a),2,2)/n^(1/4))
## "Oracle" implementation
block <- 20
Sigma.p <- matrix(c(1,rho,rho,1),block,4,byrow=TRUE) # true spot covariance matrix
lmm(yuima, block = block, Sigma.p = Sigma.p, noise.var = rep(eta^2,2))
# Next we extract nonsynchronous observations from
# the path generated above by Poisson random sampling
psample <- poisson.random.sampling(yuima, rate = c(1/2,1/2), n = n)
## Estimation of the integrated covariance matrix
lmm(psample)
## "Oracle" implementation
lmm(psample, block = block, Sigma.p = Sigma.p, noise.var = rep(eta^2,2))
## Other choices of tuning parameters (estimated values are not varied so much)
lmm(psample, block = 25)
lmm(psample, freq = 100)
lmm(psample, freq.p = 15)
lmm(psample, K = 8)

subsampling    Subsampling
Description
subsampling

Usage
subsampling(x, sampling, ...)

Arguments
x    a yuima-class or yuima.model-class object.
sampling    a yuima.sampling-class object.
...    used to create a sampling structure

Details
When subsampling on some grid of times, it may happen that no data is available at a given grid point. In this case it is possible to use several techniques. Different options are available by specifying the argument, or the slot, interpolation:
"none" or "exact"    no interpolation. If no data point exists at a given grid point, NA is returned in the subsampled data.
"pt" or "previous"    the first data point on the left of the grid point instant is used.
"nt" or "next"    the first data point on the right of the grid point instant is used.
"lin" or "linear"    the average of the values of the first data point on the left and the first data point on the right of the grid point instant is used.

Value
yuima    a yuima.data-class object.

Author(s)
The YUIMA Project Team

Examples
## Set a model
diff.coef.1 <- function(t, x1=0, x2) x2*(1+t)
diff.coef.2 <- function(t, x1, x2=0) x1*sqrt(1+t^2)
cor.rho <- function(t, x1=0, x2=0) sqrt((1+cos(x1*x2))/2)
diff.coef.matrix <- matrix(c("diff.coef.1(t,x1,x2)",
                             "diff.coef.2(t,x1,x2)*cor.rho(t,x1,x2)", "",
                             "diff.coef.2(t,x1,x2)*sqrt(1-cor.rho(t,x1,x2)^2)"), 2, 2)
cor.mod <- setModel(drift=c("",""), diffusion=diff.coef.matrix,
                    solve.variable=c("x1", "x2"), xinit=c(3,2))
set.seed(111)
## We first simulate the two dimensional diffusion model
yuima.samp <- setSampling(Terminal=1, n=1200)
yuima <- setYuima(model=cor.mod, sampling=yuima.samp)
yuima.sim <- simulate(yuima)
plot(yuima.sim, plot.type="single")
## random sampling with exponential times
## one random sequence per time series
newsamp <- setSampling(
  random=list(rdist=c(function(x) rexp(x, rate=10),
                      function(x) rexp(x, rate=20))))
newdata <- subsampling(yuima.sim, sampling=newsamp)
points(get.zoo.data(newdata)[[1]], col="red")
points(get.zoo.data(newdata)[[2]], col="green")
plot(yuima.sim, plot.type="single")
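The interpolation options listed under Details can also be exercised here; the following sketch is illustrative rather than part of the manual's examples, and assumes that interpolation is accepted when building the sampling structure:

```r
## deterministic grid with previous-tick interpolation: grid points with
## no observation reuse the last available data point instead of NA
## (assumption: `interpolation` is accepted by setSampling, per Details)
samp.pt <- setSampling(delta=0.1, interpolation="previous")
data.pt <- subsampling(yuima.sim, sampling=samp.pt)
get.zoo.data(data.pt)[[1]]
```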
## deterministic subsampling with different
## frequency for each time series
newsamp <- setSampling(delta=c(0.1,0.2))
newdata <- subsampling(yuima.sim, sampling=newsamp)
points(get.zoo.data(newdata)[[1]], col="red")
points(get.zoo.data(newdata)[[2]], col="green")

toLatex    Additional Methods for LaTeX Representations for Yuima objects

Description
These methods convert yuima-class, yuima.model-class, yuima.carma-class or yuima.cogarch-class objects to character vectors with LaTeX markup.

Usage
## S3 method for class 'yuima'
toLatex(object,...)
## S3 method for class 'yuima.model'
toLatex(object,...)
## S3 method for class 'yuima.carma'
toLatex(object,...)
## S3 method for class 'yuima.cogarch'
toLatex(object,...)

Arguments
object    object of a class yuima, yuima.model or yuima.carma.
...    currently not used.

Details
This method tries to convert a formal description of the model slot of the yuima object into a LaTeX formula. This is just a simple proof of concept, and the output probably needs further LaTeX manipulation before use in papers. Copy and paste of the output of toLatex into a real LaTeX file should do the job.

Examples
# dXt = theta*Xt*dt + dWt
mod1 <- setModel(drift="theta*y", diffusion=1, solve.variable=c("y"))
str(mod1)
toLatex(mod1)

# A multi-dimensional (correlated) diffusion process.
# To describe the following model:
# X=(X1,X2,X3); dXt = U(t,Xt)dt + V(t)dWt
# For drift coefficient
U <- c("-x1","-2*x2","-t*x3")
# For diffusion coefficient of X1
v1 <- function(t) 0.5*sqrt(t)
# For diffusion coefficient of X2
v2 <- function(t) sqrt(t)
# For diffusion coefficient of X3
v3 <- function(t) 2*sqrt(t)
# correlation
rho <- function(t) sqrt(1/2)
# coefficient matrix for diffusion term
V <- matrix(c("v1(t)",
              "v2(t) * rho(t)",
              "v3(t) * rho(t)",
              "",
              "v2(t) * sqrt(1-rho(t)^2)",
              "",
              "",
              "",
              "v3(t) * sqrt(1-rho(t)^2)"), 3, 3)
# Model sde using "setModel" function
cor.mod <- setModel(drift = U, diffusion = V,
                    state.variable=c("x1","x2","x3"),
                    solve.variable=c("x1","x2","x3"))
str(cor.mod)
toLatex(cor.mod)

# A CARMA(p=3,q=1) process.
carma1 <- setCarma(p=3, q=1, loc.par="c", scale.par="s")
str(carma1)
toLatex(carma1)

# A COGARCH(p=3,q=5) process.
cogarch1 <- setCogarch(p=3, q=5,
                       measure=list(df=list("rNIG(z, mu00, bu00, 1, 0)")),
                       measure.type="code")
str(cogarch1)
toLatex(cogarch1)

variable.Integral    Class for the mathematical description of the integral of a stochastic process

Description
Auxiliary class for the definition of an object of class yuima.Integral. See the documentation of yuima.Integral for more details.

wllag    Scale-by-scale lead-lag estimation

Description
This function estimates lead-lag parameters on a scale-by-scale basis from non-synchronously observed bivariate processes, using the estimators proposed in Hayashi and Koike (2018b).

Usage
wllag(x, y, J = 8, N = 10, tau = 1e-3, from = -to, to = 100,
      verbose = FALSE, in.tau = FALSE, tol = 1e-6)

Arguments
x    a zoo object for observation data of the first process.
y    a zoo object for observation data of the second process.
J    a positive integer. Scale-by-scale lead-lag parameters are estimated up to the level J.
N    the number of vanishing moments of Daubechies' compactly supported wavelets. This should be an integer between 1 and 10.
tau    the step size of a finite grid on which objective functions are evaluated.
Note that this value is identified with the finest time resolution of the underlying model. The default value 1e-3 corresponds to 1 millisecond if the unit time corresponds to 1 second.
from    a negative integer. from*tau gives the lower end of a finite grid on which objective functions are evaluated.
to    a positive integer. to*tau gives the upper end of a finite grid on which objective functions are evaluated.
verbose    a logical. If FALSE (default), the function returns only the estimated scale-by-scale lead-lag parameters. Otherwise, the function also returns some other statistics such as values of the signed objective functions. See 'Value'.
in.tau    a logical. If TRUE, the estimated lead-lag parameters are returned in increments of tau. That is, the estimated lead-lag parameters are divided by tau.
tol    tolerance parameter to avoid numerical errors in comparison of time stamps. All time stamps are divided by tol and rounded to integers. A reasonable choice of tol is the minimum unit of time stamps. The default value 1e-6 supposes that the minimum unit of time stamps is greater than or equal to 1 microsecond.

Details
Hayashi and Koike (2018a) introduced a bivariate continuous-time model having different lead-lag relationships at different time scales. The wavelet cross-covariance functions of this model, computed based on the Littlewood-Paley wavelets, have unique maximizers in absolute values at each time scale. These maximizers can be considered as lead-lag parameters at each time scale. To estimate these parameters from discrete observation data, Hayashi and Koike (2018b) constructed objective functions mimicking the behavior of the wavelet cross-covariance functions of the underlying model. Then, estimates of the scale-by-scale lead-lag parameters can be obtained by maximizing these objective functions in absolute values.

Value
If verbose is FALSE, a numeric vector with length J, corresponding to the estimated scale-by-scale lead-lag parameters, is returned.
Note that their positive values indicate that the first process leads the second process.
Otherwise, an object of class "yuima.wllag", which is a list with the following components, is returned:
lagtheta    the estimated scale-by-scale lead-lag parameters. The j-th component corresponds to the estimate at the level j. A positive value indicates that the first process leads the second process.
obj.values    the values of the objective functions evaluated at the estimated lead-lag parameters.
obj.fun    a list of values of the objective functions. The j-th component of the list corresponds to a zoo object for values of the signed objective function at the level j indexed by the search grid.
theta.hry    the lead-lag parameter estimate in the sense of Hoffmann, Rosenbaum and Yoshida (2013).
cor.hry    the correlation coefficient in the sense of Hoffmann, Rosenbaum and Yoshida (2013), evaluated at the estimated lead-lag parameter.
ccor.hry    a zoo object for values of the cross-correlation function in the sense of Hoffmann, Rosenbaum and Yoshida (2013) indexed by the search grid.

Note
Smaller levels correspond to finer time scales. In particular, the first level corresponds to the finest time resolution, which is defined by the argument tau.
If there are multiple maximizers in an objective function, wllag takes a maximizer farthest from zero (if there are two such values, the function takes the negative one). This behavior is different from llag.
The objective functions themselves do NOT consistently estimate the corresponding wavelet covariance functions. This means that values in obj.values and obj.fun cannot be interpreted as covariance estimates (their scales depend on the degree of non-synchronicity of the observation times).

Author(s)
<NAME> with YUIMA Project Team

References
<NAME>. and <NAME>. (2018a). Wavelet-based methods for high-frequency lead-lag analysis, SIAM Journal of Financial Mathematics, 9, 1208–1248.
<NAME>. and <NAME>. (2018b).
Multi-scale analysis of lead-lag relationships in high-frequency financial markets. doi:10.48550/arXiv.1708.03992.
<NAME>., <NAME>. and <NAME>. (2013) Estimation of the lead-lag parameter from non-synchronous data, Bernoulli, 19, no. 2, 426–461.

See Also
simBmllag, llag

Examples
## An example from a simulation setting of Hayashi and Koike (2018b)
set.seed(123)
# Simulation of Bm driving the log-price processes
n <- 15000
J <- 13
tau <- 1/2^(J+1)
rho <- c(0.3,0.5,0.7,0.5,0.5,0.5,0.5,0.5)
theta <- c(-1,-1, -2, -2, -3, -5, -7, -10) * tau
dB <- simBmllag(n, J, rho, theta)
Time <- seq(0, by = tau, length.out = n) # Time index
x <- zoo(diffinv(dB[ ,1]), Time) # simulated path of the first process
y <- zoo(diffinv(dB[ ,2]), Time) # simulated path of the second process
# Generate non-synchronously observed data
x <- x[as.logical(rbinom(n + 1, size = 1, prob = 0.5))]
y <- y[as.logical(rbinom(n + 1, size = 1, prob = 0.5))]
# Estimation of scale-by-scale lead-lag parameters (compare with theta/tau)
wllag(x, y, J = 8, tau = tau, tol = tau, in.tau = TRUE)
# Estimation with other information
out <- wllag(x, y, tau = tau, tol = tau, in.tau = TRUE, verbose = TRUE)
out
# Plot of the HRY cross-correlation function
plot(out$ccor.hry, xlab = expression(theta), ylab = expression(U(theta)))
dev.off()
# Plot of the objective functions
op <- par(mfrow = c(4,2))
plot(out)
par(op)

ybook    R code for the Yuima Book

Description
Shows the R code corresponding to each chapter in the Yuima Book.

Usage
ybook(chapter)

Arguments
chapter    a number in 1:7

Details
This is an accessory function which opens the R code corresponding to Chapter "chapter" in the Yuima Book so that the reader can replicate the code.

Examples
ybook(1)

yuima-class    Class for stochastic differential equations

Description
The yuima S4 class is a class of the yuima package.

Details
The yuima-class object is the main object of the yuima package. Some of the slots may be missing.
The data slot contains the data, either empirical or simulated.
The model slot contains the description of the (statistical) model which is used to generate the data via different simulation schemes, to draw inference from the data, or both.
The sampling slot contains information on how the data have been collected or how they should be generated.
The slot characteristic contains information on PLEASE FINISH THIS. The slot functional contains information on PLEASE FINISH THIS.

Slots
data: an object of class yuima.data-class
model: an object of class yuima.model-class
sampling: an object of class yuima.sampling-class
characteristic: an object of class yuima.characteristic-class
functional: an object of class yuima.functional-class

Methods
new signature(x = "yuima", data = "yuima.data", model = "yuima.model", sampling = "yuima.sampling", characteristic = "yuima.characteristic"): the function makes a copy of the prototype object from the class definition of yuima-class, then calls the initialize method passing as arguments the newly created object and the remaining arguments.
initialize signature(x = "yuima", data = "yuima.data", model = "yuima.model", sampling = "yuima.sampling", characteristic = "yuima.characteristic"): makes a copy of each argument in the corresponding slots of the object x.
get.data signature(x = "yuima"): returns the content of the slot data.
plot signature(x = "yuima", ...): calls plot from the zoo package with argument x@data@original.data. Additional arguments ... are passed as is to the plot function.
dim signature(x = "yuima"): the number of SDEs in the yuima object.
length signature(x = "yuima"): a vector of the length of each SDE described in the yuima object.
cce signature(x = "yuima"): calculates the asynchronous covariance estimator on the data contained in x@data. For more details see cce.
llag signature(x = "yuima"): calculates the lead-lag estimate on the data contained in x@data. For more details see llag.
simulate simulation method.
For more information see simulate.
cbind signature(x = "yuima"): bind yuima.data objects.

Author(s)
The YUIMA Project Team

yuima.ae-class    Class for the asymptotic expansion of diffusion processes

Description
The yuima.ae class is used to describe the output of the functions ae and aeMarginal.

Slots
order    integer. The order of the expansion.
var    character. The state variables.
u.var    character. The variables of the characteristic function.
eps.var    character. The perturbation variable.
characteristic    expression. The characteristic function.
density    expression. The probability density function.
Z0    numeric. The solution to the deterministic process obtained by setting the perturbation to zero.
Mu    numeric. The drift vector for the representation of Z1.
Sigma    matrix. The diffusion matrix for the representation of Z1.
c.gamma    list. The coefficients of the Hermite polynomials.
h.gamma    list. Hermite polynomials.

yuima.carma-class    Class for the mathematical description of CARMA(p,q) model

Description
The yuima.carma class is a class of the yuima package that extends the yuima.model-class.

Slots
info: is a carma.info-class object that describes the structure of the CARMA(p,q) model.
drift: is an R expression which specifies the drift coefficient (a vector).
diffusion: is an R expression which specifies the diffusion coefficient (a matrix).
hurst: the Hurst parameter of the Gaussian noise. If h=0.5, the process is Wiener; otherwise it is fractional Brownian motion with that precise value of the Hurst index. Can be set to NA for further specification.
jump.coeff: a vector of expressions for the jump component.
measure: Levy measure for jump variables.
measure.type: type specification for Levy measures.
state.variable: a vector of names identifying the names used to denote the state variable in the drift and diffusion specifications.
parameter: which is a short name for "parameters", is an object of class model.parameter-class.
For more details see the model.parameter-class documentation page. state.variable: identifies the state variables in the R expression. jump.variable: identifies the variable for the jump coefficient. time.variable: the time variable. noise.number: denotes the number of sources of noise. Currently only for the Gaussian part. equation.number: denotes the dimension of the stochastic differential equation. dimension: the dimensions of the parameter given in the parameter slot. solve.variable: identifies the variable with respect to which the stochastic differential equation has to be solved. xinit: contains the initial value of the stochastic differential equation. J.flag: whether jump.coeff includes jump.variable. Methods simulate simulation method. For more information see simulate. toLatex This method converts an object of yuima.carma-class to character vectors with LaTeX markup. CarmaNoise Recovers the underlying Levy noise. For more information see CarmaNoise. qmle Quasi maximum likelihood estimation procedure. For more information see qmle. Author(s) The YUIMA Project Team yuima.carma.qmle-class Class for Quasi Maximum Likelihood Estimation of CARMA(p,q) model Description The yuima.carma.qmle class is a class of the yuima package that extends the mle-class of the stats4 package. Slots Incr.Lev: is an object of class zoo that contains the estimated increments of the noise obtained using CarmaNoise. model: is an object of class yuima.carma-class. logL.Incr: is an object of class numeric that contains the value of the log-likelihood for the estimated Levy increments. call: is an object of class language. coef: is an object of class numeric that contains the estimated parameters. fullcoef: is an object of class numeric that contains the estimated and fixed parameters. vcov: is an object of class matrix. min: is an object of class numeric. minuslogl: is an object of class function. method: is an object of class character. Methods plot Plot method for the estimated increments of the noise.
Methods mle All methods for mle-class are available. Author(s) The YUIMA Project Team yuima.characteristic-class Class for the characteristic scheme of stochastic differential equations Description The yuima.characteristic class is a class of the yuima package. Slots equation.number: The number of equations modeled in the yuima object. time.scale: The time scale assumed in the yuima object. Author(s) The YUIMA Project Team yuima.cogarch-class Class for the mathematical description of CoGarch(p,q) model Description The yuima.cogarch class is a class of the yuima package that extends the yuima.model-class. Objects from the Class Objects can be created by calls of the function setCogarch. Slots info: is a cogarch.info-class object that describes the structure of the Cogarch(p,q) model. drift: is an R expression which specifies the drift coefficient (a vector). diffusion: is an R expression which specifies the diffusion coefficient (a matrix). hurst: the Hurst parameter of the Gaussian noise. jump.coeff: a vector of "expressions" for the jump component. measure: Levy measure for the jump component. measure.type: Type of specification for the Levy measure. parameter: is an object of class model.parameter-class. state.variable: the state variable. jump.variable: the jump variable. time.variable: the time variable. noise.number: Object of class "numeric". equation.number: dimension of the stochastic differential equation. dimension: number of parameters. solve.variable: the solve variable. xinit: Object of class "expression" that contains the starting function for the SDE. J.flag: whether jump.coeff includes jump.variable. Extends Class "yuima.model", directly. Methods simulate simulation method. For more information see simulate. toLatex This method converts an object of yuima.cogarch-class to character vectors with LaTeX markup. qmle Quasi maximum likelihood estimation procedure. For more information see qmle.
Author(s) The YUIMA Project Team yuima.CP.qmle-class Class for Quasi Maximum Likelihood Estimation of Compound Poisson-based and SDE models Description The yuima.CP.qmle class is a class of the yuima package that extends the mle-class of the stats4 package. Slots Jump.times: a vector which contains the estimated times of the jumps. Jump.values: a vector which contains the jumps. X.values: the value of the process at the jump times. model: is an object of class yuima.model-class. call: is an object of class language. coef: is an object of class numeric that contains the estimated parameters. fullcoef: is an object of class numeric that contains the estimated and fixed parameters. vcov: is an object of class matrix. min: is an object of class numeric. minuslogl: is an object of class function. method: is an object of class character. Methods plot Plot method for plotting the jump times. Methods mle All methods for mle-class are available. Author(s) The YUIMA Project Team yuima.data-class Class "yuima.data" for the data slot of a "yuima" class object Description The yuima.data-class is a class of the yuima package used to store the data which are held in the slot data of an object of the yuima-class. Objects of this class contain either true data or simulated data. Details Objects in this class are created or initialized using the methods new or initialize or via the function setData. The preferred way to construct an object in this class is to use the function setData. Objects in this class are used to store the data which are held in the slot data of an object of the yuima-class. Objects in this class contain two slots described here. original.data: The slot original.data contains, as the name suggests, a copy of the original data passed by the user to the methods new or initialize or to the function setData. It is intended for backup purposes.
zoo.data: When a new object of this class is created or initialized using the original.data, the package tries to convert original.data into an object of class zoo. Once coerced to zoo, the data are stored in the slot zoo.data. If the conversion fails, the initialization or creation of the object fails. Internally, the yuima package stores and operates on zoo-type objects. If data are obtained by simulation, the original.data slot is usually empty. Slots original.data: The original data. zoo.data: A list of zoo format data. Methods new signature(x = "yuima.data", original.data): the function makes a copy of the prototype object from the class definition of yuima.data-class, then calls the initialize method passing as arguments the newly created object and the original.data. initialize signature(x = "yuima.data", original.data): makes a copy of original.data into the slot original.data of x and tries to coerce original.data into an object of class zoo. The result is put in the slot zoo.data of x. If coercion fails, the initialize method fails as well. get.zoo.data signature(x = "yuima.data"): returns the content of the slot zoo.data of x. plot signature(x = "yuima.data", ...): calls plot from the zoo package with argument x@zoo.data. Additional arguments ... are passed as is to the plot function. dim signature(x = "yuima.data"): calls dim from the zoo package with argument x@zoo.data. length signature(x = "yuima.data"): calls length from the zoo package with argument x@zoo.data. cce signature(x = "yuima.data"): calculates the asynchronous covariance estimator on the data contained in x@zoo.data. For more details see cce. llag signature(x = "yuima.data"): calculates the lead lag estimate on the data contained in x@zoo.data. For more details see llag. cbind.yuima signature(x = "yuima.data"): binds yuima.data objects.
Author(s) The YUIMA Project Team yuima.functional-class Class for stochastic differential equation functional objects Description The yuima.functional class is a class of the yuima package. Author(s) YUIMA Project yuima.Hawkes Class for a mathematical description of a Point Process Description The yuima.Hawkes-class is a class of the yuima package that extends the yuima.PPR-class. An object of this class contains all the information about a Hawkes process with exponential kernel. An object of this class can be created by calls of the function setHawkes. yuima.Integral-class Class for the mathematical description of the integral of a stochastic process Description The yuima.Integral class is a class of the yuima package that extends the yuima-class. It represents the integral of a stochastic process: zt = int_0^t h(theta, X_s, s) dX_s Slots In the following we report the additional slots of an object of class yuima.Integral with respect to the yuima-class: Integral: It is an object of class Integral.sde and it is composed of the following slots: param.Integral: it is an object of class param.Integral and it is composed of the following slots: allparam: labels of all parameters (model and integral). common: common parameters. Integrandparam: labels of the parameters appearing only in the integral. variable.Integral: it is an object of class variable.Integral and it is composed of the following slots: var.dx: the integral variable. lower.var: lower bound of the support. upper.var: upper bound of the support. out.var: labels of the output. var.time: label of the time variable. Integrand: it is an object of class variable.Integral and it is composed of the following slots: IntegrandList: It is a list that contains the components of the integrand h(theta, X_s, s). dimIntegrand: a numeric object that gives the dimension of the output. Methods simulate simulation method. For more information see simulate.
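The stochastic integral described above can be approximated numerically on a sampled path by a left-point Riemann-Stieltjes sum. The following base R sketch is purely illustrative and is not yuima's implementation; the function name ito_sum is ours:

```r
# Approximate z_T = int_0^T h(X_s, s) dX_s by a left-point sum
# sum_i h(X_{t_i}, t_i) * (X_{t_{i+1}} - X_{t_i}).
ito_sum <- function(X, times, h) {
  k  <- length(X)
  dX <- diff(X)                      # path increments
  sum(h(X[-k], times[-k]) * dX)      # left-endpoint evaluation of h
}
```

With h identically 1 the sum telescopes to X_T - X_0, which gives a quick sanity check of any implementation.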
yuima.law-class Class of yuima law Description A class used to describe a user-defined distribution (law) of the driving noise, specified through its random number generator, density, cumulative distribution, quantile and characteristic functions. Slots rng: the random number generator function. density: the density function. cdf: the cumulative distribution function. quantile: the quantile function. characteristic: the characteristic function. param.measure: the parameters of the law. time.var: the label of the time variable. dim: the dimension. Methods rand signature(object = "yuima.law", n = "numeric", param = "list", ...): generates n random numbers from the law. dens signature(object = "yuima.law", x = "numeric", param = "list", log = FALSE, ...): evaluates the density at x. cdf signature(object = "yuima.law", q = "numeric", param = "list", ...): evaluates the cumulative distribution function at q. quant signature(object = "yuima.law", p = "numeric", param = "list", ...): evaluates the quantile function at p. char signature(object = "yuima.law", u = "numeric", param = "list", ...): evaluates the characteristic function at u. Author(s) The YUIMA Project Team yuima.Map-class Class for the mathematical description of a function of a stochastic process Description The yuima.Map class is a class of the yuima package that extends the yuima-class. It represents a map of a stochastic process zt = g(theta, Xt, t) : R^{q x d x 1} -> R^{l1 x l2 x ...} or an operator between two independent stochastic processes: zt = h(theta, Xt, Yt, t), where Xt and Yt are objects of class yuima.model-class or yuima-class with the same dimension. Slots Here we report the additional slots of an object of class yuima.Map with respect to the yuima-class: Output: It is an object of class info.Map and it is composed of the following slots: formula: It is a vector that contains the components of the map g(theta, Xt, t) or the operator h(theta, Xt, Yt, t). dimension: a numeric object that gives the dimensions of the Map. type: If type = "Maps", the Map is a map of a stochastic process; if type = "Operator", the result is an operator between two independent stochastic processes. param: it is an object of class param.Map and it is composed of the following slots: out.var: labels for the Map. allparam: labels of all parameters (model and map/operators). allparamMap: labels of the map/operator parameters. common: common parameters.
Input.var: labels for the inputs. time.var: label for the time variable. Methods simulate simulation method. For more information see simulate. Author(s) The YUIMA Project Team yuima.model-class Classes for the mathematical description of stochastic differential equations Description The yuima.model class is a class of the yuima package. Slots drift: is an R expression which specifies the drift coefficient (a vector). diffusion: is an R expression which specifies the diffusion coefficient (a matrix). hurst: the Hurst parameter of the Gaussian noise. If h=0.5, the process is a Wiener process; otherwise it is fractional Brownian motion with that precise value of the Hurst index. Can be set to NA for further specification. jump.coeff: a matrix of expressions for the jump component. measure: Levy measure for jump variables. measure.type: Type specification for Levy measures. state.variable: a vector of names identifying the names used to denote the state variable in the drift and diffusion specifications. parameter: which is a short name for “parameters”, is an object of class model.parameter-class. For more details see the model.parameter-class documentation page. state.variable: identifies the state variables in the R expression. jump.variable: identifies the variable for the jump coefficient. time.variable: the time variable. noise.number: denotes the number of sources of noise. Currently only for the Gaussian part. equation.number: denotes the dimension of the stochastic differential equation. dimension: the dimensions of the parameter given in the parameter slot. solve.variable: identifies the variable with respect to which the stochastic differential equation has to be solved. xinit: contains the initial value of the stochastic differential equation. J.flag: whether jump.coeff includes jump.variable.
Author(s) The YUIMA Project Team yuima.multimodel-class Class for the mathematical description of multi-dimensional jump diffusion processes Description The yuima.multimodel class is a class of the yuima package that extends the yuima.model-class. Slots drift: always expression((0)). diffusion: a list of expression((0)). hurst: always h=0.5, but ignored for this model. jump.coeff: set according to scale in setPoisson. measure: a list containing the intensity measure and the jump distribution. measure.type: always "CP". state.variable: a vector of names identifying the names used to denote the state variable in the drift and diffusion specifications. parameter: which is a short name for “parameters”, is an object of class model.parameter-class. For more details see the model.parameter-class documentation page. state.variable: identifies the state variables in the R expression. jump.variable: identifies the variable for the jump coefficient. time.variable: the time variable. noise.number: denotes the number of sources of noise. equation.number: denotes the dimension of the stochastic differential equation. dimension: the dimensions of the parameter given in the parameter slot. solve.variable: identifies the variable with respect to which the stochastic differential equation has to be solved. xinit: contains the initial value of the stochastic differential equation. J.flag: whether jump.coeff includes jump.variable. Methods simulate simulation method. For more information see simulate. qmle Quasi maximum likelihood estimation procedure. For more information see qmle.
Author(s) The YUIMA Project Team Examples ## Not run: # We define the density function of the underlying Levy dmyexp <- function(z, sig1, sig2, sig3){ rep(0,3) } # We define the random number generator rmyexp <- function(z, sig1, sig2, sig3){ cbind(rnorm(z,0,sig1), rgamma(z,1,sig2), rnorm(z,0,sig3)) } # Model definition: in this case we consider only a multivariate # compound Poisson process with a common intensity as underlying # noise mod <- setModel(drift = matrix(c("0","0","0"),3,1), diffusion = NULL, jump.coeff = matrix(c("1","0","0","0","1","-1","1","0","0"),3,3), measure = list( intensity = "lambda1", df = "dmyexp(z,sig1,sig2,sig3)"), jump.variable = c("z"), measure.type=c("CP"), solve.variable=c("X1","X2","X3")) # Sampling scheme samp <- setSampling(0,100,n=1000) param <- list(lambda1 = 1, sig1 = 0.1, sig2 = 0.1, sig3 = 0.1) # Simulation traj <- simulate(object = mod, sampling = samp, true.parameter = param) # Plot plot(traj, main = "driven noise. Multidimensional CP", cex.main = 0.8) # We construct a multidimensional SDE driven by a multivariate # Levy process without CP components. # Definition of the multivariate density dmyexp1 <- function(z, sig1, sig2, sig3){ rep(0,3) } # Definition of the random number generator # In this case the user must define the delta parameter in order to # control the effect of the time interval in the simulation.
rmyexp1 <- function(z, sig1, sig2, sig3, delta){ cbind(rexp(z,sig1*delta), rgamma(z,1*delta,sig2), rexp(z,sig3*delta)) } # Model definition mod1 <- setModel(drift=matrix(c("0.1*(0.01-X1)", "0.05*(1-X2)","0.1*(0.1-X3)"),3,1), diffusion=NULL, jump.coeff = matrix(c("0.01","0","0","0","0.01", "0","0","0","0.01"),3,3), measure = list(df="dmyexp1(z,sig1,sig2,sig3)"), jump.variable = c("z"), measure.type=c("code"), solve.variable=c("X1","X2","X3"),xinit=c("10","1.2","10")) # Simulation of sample paths samp <- setSampling(0,100,n=1000) param <- list(sig1 = 1, sig2 = 1, sig3 = 1) # Simulation set.seed(1) traj1 <- simulate(object = mod1, sampling = samp, true.parameter = param) # Plot plot(traj1, main = "driven noise: multi Levy without CP", cex.main = 0.8) # We construct a multidimensional SDE driven by a multivariate # Levy process. # We consider a mixed situation where some # noises are driven by a multivariate compound Poisson that # shares a common intensity parameter. ### Multi Levy model rmyexample2 <- function(z,sig1,sig2,sig3, delta){ if(missing(delta)){ delta <- 1 } cbind(rexp(z,sig1*delta), rgamma(z,1*delta,sig2), rexp(z,sig3*delta), rep(1,z), rep(1,z)) } dmyexample2 <- function(z,sig1,sig2,sig3){ rep(0,5) } # Model definition mod2 <- setModel(drift=matrix(c("0.1*(0.01-X1)", "0.05*(1-X2)","0.1*(0.1-X3)", "0", "0"),5,1), diffusion=NULL, jump.coeff = matrix(c("0.01","0","0","0","0", "0","0.01","0","0","0", "0","0","0.01","0","0", "0","0","0","0.01","0", "0","0","0","0","0.01"),5,5), measure = list(df = "dmyexample2(z,sig1,sig2,sig3)", intensity = "lambda1"), jump.variable = c("z"), measure.type=c("code","code","code","CP","CP"), solve.variable=c("X1","X2","X3","X4","X5"), xinit=c("10","1.2","10","0","0")) # Simulation scheme samp <- setSampling(0, 100, n = 1000) param <- list(sig1 = 1, sig2 = 1, sig3 = 1, lambda1 = 1) # Simulation set.seed(1) traj2 <- simulate(object = mod2, sampling = samp, true.parameter = param) plot(traj2, main = "driven noise: general multi Levy", cex.main
= 0.8) ## End(Not run) yuima.poisson-class Class for the mathematical description of Compound Poisson processes Description The yuima.poisson class is a class of the yuima package that extends the yuima.model-class. Slots drift: always expression((0)). diffusion: a list of expression((0)). hurst: always h=0.5, but ignored for this model. jump.coeff: set according to scale in setPoisson. measure: a list containing the intensity measure and the jump distribution. measure.type: always "CP". state.variable: a vector of names identifying the names used to denote the state variable in the drift and diffusion specifications. parameter: which is a short name for “parameters”, is an object of class model.parameter-class. For more details see the model.parameter-class documentation page. state.variable: identifies the state variables in the R expression. jump.variable: identifies the variable for the jump coefficient. time.variable: the time variable. noise.number: denotes the number of sources of noise. equation.number: denotes the dimension of the stochastic differential equation. dimension: the dimensions of the parameter given in the parameter slot. solve.variable: identifies the variable with respect to which the stochastic differential equation has to be solved. xinit: contains the initial value of the stochastic differential equation. J.flag: whether jump.coeff includes jump.variable. Methods simulate simulation method. For more information see simulate. qmle Quasi maximum likelihood estimation procedure. For more information see qmle. Author(s) The YUIMA Project Team yuima.PPR Class for a mathematical description of a Point Process Description The yuima.PPR class is a class of the yuima package that extends the yuima-class. An object of this class contains all the information about the Point Process Regression Model. Objects from the Class Objects can be created by calls of the function setPPR. Slots PPR: is an object of class info.PPR.
gFun: is an object of class info.Map. Kernel: is an object of class Integral.sde. data: is an object of class yuima.data-class. The slot contains either true data or simulated data. model: is an object of class yuima.model-class. The slot contains all the information about the covariates. sampling: is an object of class yuima.sampling-class. characteristic: is an object of class yuima.characteristic-class. functional: is an object of class yuima.functional-class. Author(s) The YUIMA Project Team yuima.qmleLevy.incr Class for Quasi Maximum Likelihood Estimation of Levy SDE model Description The yuima.qmleLevy.incr-class is a class of the yuima package that extends the mle-class of the stats4 package. Slots Incr.Lev: is an object of class yuima.data-class that contains the estimated increments of the noise. logL.Incr: a numeric object that represents the value of the log-likelihood for the estimated Levy increments. minusloglLevy: an R function that evaluates the log-likelihood of the estimated Levy increments. The function is used internally in qmleLevy for the estimation of the Levy measure parameters. Levydetails: a list containing additional information about the optimization procedure in the estimation of the Levy measure parameters. See the optim help for the meaning of the components of this list. Data: is an object of yuima.data-class containing the observation data. model: is an object of class yuima.carma-class. call: is an object of class language. coef: is an object of class numeric that contains the estimated parameters. fullcoef: is an object of class numeric that contains the estimated and fixed parameters. vcov: is an object of class matrix. min: is an object of class numeric. minuslogl: is an object of class function. nobs: an object of class numeric. method: is an object of class character. Methods mle All methods for mle-class are available.
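The quasi-likelihoods maximized by the qmle-type estimators above can be illustrated for a one-dimensional Euler scheme: each increment is treated as Gaussian with mean b(x, theta)*delta and variance sigma(x, theta)^2*delta. The base R sketch below is a simplified illustration of that idea, not yuima's implementation; the names quasi_loglik, b and sigma are ours:

```r
# Gaussian quasi-log-likelihood for an Euler discretization of
# dX = b(X, theta) dt + sigma(X, theta) dW observed at step delta.
quasi_loglik <- function(X, delta, b, sigma, theta) {
  k  <- length(X)
  x  <- X[-k]       # left endpoints of each interval
  dX <- diff(X)     # observed increments
  sum(dnorm(dX,
            mean = b(x, theta) * delta,
            sd   = sigma(x, theta) * sqrt(delta),
            log  = TRUE))
}
```

On data whose increments exactly match the drift, the quasi-log-likelihood is larger at the true drift parameter than at a misspecified one, which is the property the estimators exploit.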
Author(s) The YUIMA Project Team yuima.sampling-class Class for the sampling scheme of stochastic differential equations Description The yuima.sampling class is a class of the yuima package. Details This object is created by setSampling or as a result of a simulation scheme by the simulate function or after subsampling via the function subsampling. Slots Initial: initial time of the grid. Terminal: terminal time of the grid. n: the number of observations - 1. delta: in the case of a regular time grid, the mesh. grid: the grid of times. random: either FALSE or the distribution of the random times. regular: indicator of whether the grid is regular or not. For internal use only. sdelta: in the case of a regular space grid, the mesh. sgrid: the grid in space. oindex: in the case of interpolation, a vector of indexes corresponding to the original observations used for the approximation. interpolation: the name of the interpolation method used. Author(s) The YUIMA Project Team yuima.snr-class Class "yuima.snr" for self-normalized residuals of SDE "yuima" class object Description The yuima.snr-class is a class of the yuima package used to store the calculated self-normalized residuals of an SDE. Slots call: The original call. coef: A numeric vector. snr: A numeric vector of residuals. model: A yuima.model object. Methods show Print method. Author(s) The YUIMA Project Team
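As background for the simulate method referenced throughout these class pages, here is a minimal base R Euler-Maruyama sketch for a one-dimensional SDE dX = b(X) dt + s(X) dW on a regular grid. It is illustrative only; yuima's simulate is far more general (jumps, fractional noise, systems of equations), and the function name euler_maruyama is ours:

```r
# Euler-Maruyama on the regular grid 0, dt, 2*dt, ..., Terminal.
euler_maruyama <- function(xinit, b, s, Terminal, n) {
  dt <- Terminal / n
  x  <- numeric(n + 1)
  x[1] <- xinit
  dW <- rnorm(n, sd = sqrt(dt))     # Brownian increments
  for (i in seq_len(n))
    x[i + 1] <- x[i] + b(x[i]) * dt + s(x[i]) * dW[i]
  x
}
```

Setting the diffusion to zero reduces the scheme to the forward Euler ODE solver, so with b(x) = x and xinit = 1 the terminal value approximates exp(Terminal).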
Package ‘presize’ February 27, 2023 Type Package Title Precision Based Sample Size Calculation Version 0.3.7 Maintainer <NAME> <<EMAIL>> Description Bland (2009) <doi:10.1136/bmj.b3985> recommended to base study sizes on the width of the confidence interval rather than the power of a statistical test. The goal of 'presize' is to provide functions for such precision based sample size calculations. For a given sample size, the functions will return the precision (width of the confidence interval), and vice versa. License GPL-3 URL https://github.com/CTU-Bern/presize, https://ctu-bern.github.io/presize/ BugReports https://github.com/CTU-Bern/presize/issues Encoding UTF-8 RoxygenNote 7.2.2 Suggests binom, dplyr, ggplot2, gt, Hmisc, knitr, magrittr, markdown, rmarkdown, shinydashboard, shinytest, testthat, tidyr Imports kappaSize (>= 1.2), shiny VignetteBuilder knitr NeedsCompilation no Author <NAME> [aut], <NAME> [cre, aut], <NAME> [aut], <NAME> [ctb], <NAME> [ctb] Repository CRAN Date/Publication 2023-02-27 21:02:29 UTC R topics documented: launch_presize_app, prec_auc, prec_cor, prec_cronb, prec_icc, prec_kappa, prec_lim_agree, prec_lr, prec_mean, prec_meandiff, prec_or, prec_prop, prec_rate, prec_rateratio, prec_riskdiff, prec_riskratio, prec_sens launch_presize_app Presize shiny app Description Besides the programmatic approach to using presize, we also supply a shiny app, enabling point-and-click interaction with the program. The app will open in a new window. Select the appropriate method from the menu on the left and enter the relevant parameters indicated in the panel on the right. The output is then displayed lower down the page. Usage launch_presize_app() Details The main disadvantage to the app is that it only allows a single scenario at a time. The app is also available at https://shiny.ctu.unibe.ch/presize/.
Examples # launch the app ## Not run: launch_presize_app() ## End(Not run) prec_auc Sample size or precision for AUC Description Calculate the sample size from AUC, prevalence and confidence interval width, or the expected confidence interval width from AUC, prevalence and sample size, following Hanley and McNeil (1982). Usage prec_auc(auc, prev, n = NULL, conf.width = NULL, conf.level = 0.95, ...) Arguments auc AUC value. prev prevalence. n number of observations. conf.width precision (the full width of the confidence interval). conf.level confidence level. ... other arguments to optimize. Details Sample size is derived by minimizing the difference between conf.width and the distance between the lower and upper limits of the confidence interval. Value Object of class "presize", a list of arguments (including the computed one) augmented with method and note elements. References Hanley, JA and McNeil, BJ (1982) The Meaning and Use of the Area under a Receiver Operating Characteristic (ROC) Curve. Radiology 148, 29-36. Examples # confidence interval width N <- 500 prev <- .1 auc <- .65 (prec <- prec_auc(auc, prev, n = N)) cwidth <- prec$conf.width # sample size prec_auc(auc, prev, conf.width = cwidth) prec_cor Sample size or precision for correlation coefficient Description prec_cor returns the sample size or the precision for the given Pearson, Spearman, or Kendall correlation coefficient. Usage prec_cor( r, n = NULL, conf.width = NULL, conf.level = 0.95, method = c("pearson", "kendall", "spearman"), ... ) Arguments r desired correlation coefficient. n sample size. conf.width precision (the full width of the confidence interval). conf.level confidence level. method Exactly one of pearson (default), kendall, or spearman. Methods can be abbreviated. ... other options to uniroot (e.g. tol). Details Exactly one of the parameters n or conf.width must be passed as NULL, and that parameter is determined from the other.
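To make the n/conf.width duality concrete, here is a base R sketch for a Pearson correlation using the standard Fisher z approximation. prec_cor itself uses the Bonett and Wright (2000) refinement of the same idea, so results will differ slightly; the function names below are ours:

```r
# Width of an approximate CI for a correlation r at sample size n,
# via the Fisher z transform: atanh(r) +/- z_crit / sqrt(n - 3).
cor_ci_width <- function(r, n, conf.level = 0.95) {
  z  <- atanh(r)
  se <- 1 / sqrt(n - 3)
  zc <- qnorm(1 - (1 - conf.level) / 2)
  tanh(z + zc * se) - tanh(z - zc * se)
}

# Invert for n given a target width, using uniroot as prec_cor does.
cor_n_for_width <- function(r, width, conf.level = 0.95) {
  f <- function(n) cor_ci_width(r, n, conf.level) - width
  ceiling(uniroot(f, c(4, 1e7))$root)
}
```

Because the width is strictly decreasing in n, solving for n with uniroot and rounding up with ceiling recovers (approximately) the sample size that produced a given width.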
Sample size or precision is calculated according to formula 2 in Bonett and Wright (2000). The use of pearson is only recommended if n ≥ 25. The Pearson correlation coefficient assumes bivariate normality. If the assumption of bivariate normality cannot be met, spearman or kendall should be considered. n is rounded up to the next whole number using ceiling. uniroot is used to solve for n. Value Object of class "presize", a list of arguments (including the computed one) augmented with method and note elements. References <NAME> and Wright TA (2000) Sample size requirements for estimating Pearson, Kendall and Spearman correlations. Psychometrika 65:23-28. doi:10.1007/BF02294183 Examples # calculate confidence interval width... # Pearson correlation coefficient prec_cor(r = 0.5, n = 100) # Kendall rank correlation coefficient (tau) prec_cor(r = 0.5, n = 100, method = "kendall") # Spearman's rank correlation coefficient prec_cor(r = 0.5, n = 100, method = "spearman") # calculate N required for a given confidence interval width... # Pearson correlation coefficient prec_cor(r = 0.5, conf.width = .15) # Kendall rank correlation coefficient (tau) prec_cor(r = 0.5, conf.width = .15, method = "kendall") # Spearman's rank correlation coefficient prec_cor(r = 0.5, conf.width = .15, method = "spearman") prec_cronb Sample size or precision for Cronbach's alpha Description prec_cronb returns the sample size or the precision for the given Cronbach's alpha. Usage prec_cronb(k, calpha, n = NULL, conf.level = 0.95, conf.width = NULL) Arguments k number of measurements/items. calpha desired Cronbach's alpha. n sample size. conf.level confidence level. conf.width precision (the full width of the confidence interval). Details Exactly one of the parameters n or conf.width must be passed as NULL, and that parameter is determined from the other. Sample size or precision is calculated according to the formula and code provided in Bonett and Wright (2015).
n is rounded up to the next whole number using ceiling. Value Object of class "presize", a list of arguments (including the computed one) augmented with method and note elements. References <NAME>. and <NAME>. (2015) Cronbach's alpha reliability: Interval estimation, hypothesis testing, and sample size planning. J. Organiz. Behav., 36, pages 3-15. doi:10.1002/job.1960. Examples # k = number of items # calculate confidence interval width... prec_cronb(k = 5, calpha = 0.7, n = 349, conf.level = 0.95, conf.width = NULL) # calculate N required for a given confidence interval width... prec_cronb(k = 5, calpha = 0.7, n = NULL, conf.level = 0.95, conf.width = 0.1) prec_icc Sample size or precision for an intraclass correlation Description prec_icc returns the sample size or the precision for the given intraclass correlation. Usage prec_icc(rho, k, n = NULL, conf.width = NULL, conf.level = 0.95) Arguments rho desired intraclass correlation. k number of observations per n (subject). n number of subjects. conf.width precision (the full width of the confidence interval). conf.level confidence level. Details Exactly one of the parameters n or conf.width must be passed as NULL, and that parameter is determined from the others. Sample size or precision is calculated according to formula 3 in Bonett (2002), which is an approximation. Whether the ICC is calculated for a one-way or a two-way ANOVA does not matter in the approximation. As suggested by the author, 5*rho is added to n if k = 2 and rho ≥ 0.7. This makes the assumption that there is no interaction between rater and subject. n is rounded up to the next whole number using ceiling. Value Object of class "presize", a list of arguments (including the computed one) augmented with method and note elements. References <NAME> (2002). Sample size requirements for estimating intraclass correlations with desired precision. Statistics in Medicine, 21:1331-1335.
doi:10.1002/sim.1108 Examples # Bonett (2002) gives an example using 4 raters, with an ICC of 0.85, and wants # a confidence width of 0.2. Bonett calculated that a sample size of 19.2 was # required. This can be done via prec_icc(0.85, 4, conf.width = 0.2) # note that presize rounds up to the nearest integer. # Bonett then goes on to estimate the width given the sample size, finding a # value 'close to 0.2': prec_icc(0.85, 4, 20) prec_kappa Sample size or precision for Cohen's kappa Description prec_kappa returns the sample size or the precision for the provided Cohen's kappa coefficient. Usage prec_kappa( kappa, n = NULL, raters = 2, n_category = 2, props, conf.width = NULL, conf.level = 0.95 ) Arguments kappa expected value of Cohen's kappa. n sample size. raters number of raters (maximum of 6). n_category number of categories of outcomes (maximum of 5). props expected proportions of each outcome (should have length n_category). conf.width precision (the full width of the confidence interval). conf.level confidence level. Details This function wraps the FixedN and CI functions in the kappaSize package. The FixedN functions in kappaSize return a one-sided confidence interval. The values that are passed to kappaSize ensure that two-sided confidence intervals are returned, although we assume that confidence intervals are symmetrical. Value Object of class "presize", a list of arguments (including the computed one) augmented with method and note elements.
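As a reminder of the quantity whose precision prec_kappa targets, Cohen's kappa for a two-rater agreement table takes only a few lines of base R. This is illustrative only (prec_kappa itself works through kappaSize, not through raw tables), and the function name cohen_kappa is ours:

```r
# Cohen's kappa = (po - pe) / (1 - pe) for a square agreement table,
# where po is observed agreement and pe is chance-expected agreement.
cohen_kappa <- function(tab) {
  tab <- tab / sum(tab)                    # cell proportions
  po  <- sum(diag(tab))                    # observed agreement
  pe  <- sum(rowSums(tab) * colSums(tab))  # chance agreement
  (po - pe) / (1 - pe)
}
```

Perfect agreement (all mass on the diagonal) gives kappa = 1, while a table of independent margins gives kappa = 0.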
See Also FixedNBinary, FixedN3Cats, CIBinary, CI3Cats Examples # precision based on sample size # two categories with proportions of 30 and 70\%, four raters prec_kappa(kappa = .5, n = 200, raters = 4, n_category = 2, props = c(.3,.7)) # sample size to get a given precision prec_kappa(kappa = .5, conf.width = .15, raters = 4, n_category = 2, props = c(.3,.7)) # as above, but with two scenarios for kappa prec_kappa(kappa = c(.5, .75), conf.width = .15, raters = 4, n_category = 2, props = c(.3,.7)) prec_kappa(kappa = c(.5, .75), conf.width = c(.15, 0.3), raters = 4, n_category = 2, props = c(.3,.7)) prec_lim_agree Sample size or precision for limit of agreement on Bland-Altman plots Description prec_lim_agree returns the sample size or the precision for the limit of agreement, i.e. the confidence interval around the limit of agreement, expressed in SD-units. It is an approximation based on the Normal distribution, instead of a Student t distribution. Usage prec_lim_agree(n = NULL, conf.width = NULL, conf.level = 0.95) Arguments n sample size. conf.width precision (the full width of the confidence interval). conf.level confidence level. Details Exactly one of the parameters n or conf.width must be passed as NULL, and that parameter is determined from the other. The sample size and precision are calculated according to formulae in Bland & Altman (1986). The CI width is a simple function of the sample size only. Value Object of class "presize", a list of arguments (including the computed one) augmented with method and note elements.
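The Bland & Altman (1986) relationship can be sketched as follows. This is a hedged Python sketch assuming the classic approximation Var(limit of agreement) ≈ 3·sd²/n from that paper; the exact constant presize uses may differ slightly:

```python
import math
from statistics import NormalDist

Z = NormalDist().inv_cdf(0.975)  # 1.96 for a 95% interval

def loa_ci_width(n: int) -> float:
    """Full CI width of a limit of agreement, in SD units,
    assuming Var(LoA) ~ 3 * sd^2 / n (Bland & Altman 1986)."""
    return 2 * Z * math.sqrt(3 / n)

def loa_n(conf_width: float) -> int:
    """Sample size so the CI around a limit of agreement has the given width."""
    return math.ceil(3 * (2 * Z / conf_width) ** 2)

print(loa_ci_width(200))  # width (in SD units) with 200 subjects
print(loa_n(0.1))         # subjects needed for a width of 0.1 SD units
```

As the Details note says, under this approximation the CI width depends on the sample size only, which is why prec_lim_agree needs no other inputs.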
References Bland & Altman (1986) Statistical methods for assessing agreement between two methods of clinical measurement Lancet i(8476):307-310 doi:10.1016/S0140-6736(86)90837-8 Examples # calculate confidence interval width, given N prec_lim_agree(200) # calculate N, given confidence interval width prec_lim_agree(conf.width = .1) prec_lr Sample size or precision for likelihood ratios Description These functions calculate the precision or sample size for likelihood ratios (LRs). prec_lr is a generalized method that can be used for positive and negative LRs as well as conditional LRs. prec_pos_lr is a wrapper to prec_lr to ease calculations for positive likelihood ratios by allowing sensitivity and specificity to be given explicitly. prec_neg_lr is a wrapper to prec_lr to ease calculations for negative likelihood ratios by allowing sensitivity and specificity to be given explicitly. Usage prec_lr(prev, p1, p2, n = NULL, conf.width = NULL, conf.level = 0.95, ...) prec_pos_lr( prev, sens, spec, n = NULL, conf.width = NULL, conf.level = 0.95, ... ) prec_neg_lr( prev, sens, spec, n = NULL, conf.width = NULL, conf.level = 0.95, ... ) Arguments prev disease/case prevalence in the study group. p1 proportion of positives in group 1 (e.g. sensitivity). p2 proportion of positives in group 2 (e.g. 1 - specificity). n total group size. conf.width precision (the full width of the confidence interval). conf.level confidence level (defaults to 0.95). ... other arguments to uniroot (e.g. tol). sens sensitivity. spec specificity. Details These functions implement formula 10 from Simel et al 1991. prec_lr is a generalized function allowing for many scenarios, while prec_pos_lr and prec_neg_lr are specific to positive and negative likelihood ratios in the 2*2 setting (e.g. disease status and test positive/negative). For the positive likelihood ratio (LR+), in a 2x2 style experiment, p1 should be sensitivity, p2 should be 1-specificity. Alternatively, use prec_pos_lr.
For the negative likelihood ratio (LR-), in a 2x2 style experiment, p1 should be 1-sensitivity, p2 should be specificity. Alternatively, use prec_neg_lr. For conditional likelihood ratios with 3x2 tables, such as positive or negative tests against inconclusive ones (yields), p1 would be the proportion of positive or negative tests in the diseased group and p2 would be the proportion of positive or negative tests in the non-diseased group. Value Object of class "presize", a list of arguments (including the computed one) augmented with method and note elements. Functions • prec_pos_lr(): "Positive likelihood ratio" • prec_neg_lr(): "Negative likelihood ratio" References Simel, DL, Samsa, GP and Matchar, DB (1991) Likelihood ratios with confidence: Sample size estimation for diagnostic test studies. J Clin Epidemiol 44(8), 763-770 Examples # equal numbers of diseased/non-diseased, 80% sens, 73% spec, 74 participants total prec_lr(.5, .8, .27, 74) # Simel et al 1991, problem 1 - LR+ CI width from N # Sensitivity of a new test is at least 80%, specificity is 73% and the LR+ # is 2.96 (= 0.8/(1-0.73)). We have as many diseased as not diseased # (n1 = n2, n = 2*n1 = 146.8, prevalence = .5) prec_lr(prev = .5, p1 = .8, p2 = 1-.73, n = 146.8) prec_pos_lr(prev = .5, sens = .8, spec = .73, n = 146.8) # problem 1 of Simel et al actually derives n1 rather than the width of the # confidence interval (i.e. N from CI width).
If we know that the lower limit # of the CI should be 2.0, the confidence interval width is approximately # exp(2*(log(2.96) - log(2))) = 2.19 (approximate because the CI of the LR # is only symmetrical on the log(LR) scale), which we can put in conf.width prec_lr(prev = .5, p1 = .8, p2 = 1-.73, conf.width = 2.2) # same, but using the wrapper to specify sens and spec prec_pos_lr(prev = .5, sens = .8, spec = .73, conf.width = 2.2) # Simel et al 1991, problem 2 - LR- CI width from N # p1 = 1 - sens = .1, p2 = spec = .5 # n1 = n2, n = 160, prev = .5 prec_lr(prev = .5, p1 = .1, p2 = .5, n = 160) # same, but using the wrapper to specify sens and spec prec_neg_lr(prev = .5, sens = .9, spec = .5, n = 160) prec_mean Sample size or precision for a mean Description prec_mean returns the sample size or the precision for the provided mean and standard deviation. Usage prec_mean( mean, sd, n = NULL, conf.width = NULL, conf.level = 0.95, ..., mu = NULL ) Arguments mean mean. sd standard deviation. n number of observations. conf.width precision (the full width of the confidence interval). conf.level confidence level. ... other arguments to uniroot (e.g. tol). mu deprecated argument. Details Exactly one of the parameters n or conf.width must be passed as NULL, and that parameter is determined from the other. The precision is defined as the full width of the confidence interval. The confidence interval is calculated as mean ± t(n-1) * sd/sqrt(n), with t(n-1) from the t-distribution with n-1 degrees of freedom. This function is also suitable for a difference in paired means, as this reduces to a single value per individual - the difference. uniroot is used to solve n. Value Object of class "presize", a list with mean mean, sd standard deviation, n sample size, conf.width precision (the width of the confidence interval), lwr lower bound of confidence interval, upr upper bound of confidence interval, augmented with method and note elements.
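The width formula above (a full width of 2 * t(n-1) * sd/sqrt(n)) can be checked with a small Python sketch. The t quantile is computed numerically here, since the Python standard library has no t-distribution; this is an illustration of the formula, not presize's implementation:

```python
import math

def t_quantile(p: float, df: int) -> float:
    """Quantile of Student's t via numeric integration of the pdf + bisection."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))

    def pdf(x: float) -> float:
        return c * (1.0 + x * x / df) ** (-(df + 1) / 2)

    def cdf(x: float) -> float:
        # midpoint rule from 0 to x; the pdf is symmetric, so cdf(0) = 0.5
        steps = 5000
        h = x / steps
        return 0.5 + h * sum(pdf((i + 0.5) * h) for i in range(steps))

    lo, hi = 0.0, 50.0
    for _ in range(50):
        mid = (lo + hi) / 2
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def mean_ci_width(sd: float, n: int, conf_level: float = 0.95) -> float:
    """Full CI width for a mean: 2 * t(n-1) * sd / sqrt(n)."""
    t = t_quantile(1 - (1 - conf_level) / 2, n - 1)
    return 2 * t * sd / math.sqrt(n)

# mean irrelevant to the width; SD 2.5 with 20 participants
print(round(mean_ci_width(2.5, 20), 2))
```

This reproduces the CI width of 2.34 used in the prec_mean examples that follow.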
Examples # mean of 5, SD of 2.5, what's the confidence interval width with 20 participants? prec_mean(mean = 5, sd = 2.5, n = 20) # mean of 5, SD of 2.5, how many participants for CI width of 2.34? prec_mean(mean = 5, sd = 2.5, conf.width = 2.34) # approximately the inverse of above prec_meandiff Sample size or precision for a mean difference Description prec_meandiff returns the sample size or the precision for the provided mean difference and standard deviations. For paired differences, use prec_mean, as it is equivalent to a simple mean. Usage prec_meandiff( delta, sd1, sd2 = sd1, n1 = NULL, r = 1, conf.width = NULL, conf.level = 0.95, variance = c("equal", "unequal"), ... ) Arguments delta difference in means between the two groups. sd1 standard deviation in group 1. sd2 standard deviation in group 2. n1 number of patients in group 1. r allocation ratio (relative size of group 2 and group 1 (n2 / n1)). conf.width precision (the full width of the confidence interval). conf.level confidence level. variance equal (default) or unequal variance. ... other options to uniroot (e.g. tol). Details Exactly one of the parameters n1 or conf.width must be passed as NULL, and that parameter is determined from the other. Value Object of class "presize", a list of arguments (including the computed one) augmented with method and note elements. Examples # mean difference of 5, SD of 2.5, CI width with 20 participants assuming equal variances prec_meandiff(delta = 5, sd1 = 2.5, n1 = 20, var = "equal") # mean difference of 5, SD of 2.5, number of participants for a CI width of 3, # assuming equal variances prec_meandiff(delta = 5, sd1 = 2.5, conf.width = 3, var = "equal") prec_or Sample size or precision for an odds ratio Description prec_or returns the sample size or the precision for the provided proportions. Usage prec_or( p1, p2, n1 = NULL, r = 1, conf.width = NULL, conf.level = 0.95, method = c("gart", "woolf", "indip_smooth"), ... ) Arguments p1 risk among exposed.
p2 risk among unexposed. n1 number of patients in exposed group. r allocation ratio (relative size of unexposed and exposed cohort (n2 / n1)). conf.width precision (the full width of the confidence interval). conf.level confidence level. method Exactly one of indip_smooth (default), gart, or woolf. Methods can be abbreviated. ... other arguments to uniroot (e.g. tol). Details Exactly one of the parameters n1 or conf.width must be passed as NULL, and that parameter is determined from the other. Woolf (woolf), Gart (gart), and Independence-smoothed logit (indip_smooth) belong to a general family of adjusted confidence intervals, adding 0 (woolf) to each cell, 0.5 (gart) to each cell, or an adjustment for each cell based on observed data (independence-smoothed). In gart and indip_smooth, an estimate of the CI is not possible if p1 = 0, in which case the OR becomes 0, but the lower level of the CI is > 0. Further, if p1 = 1 and p2 < 1, or if p1 > 0 and p2 = 0, the OR becomes ∞, but the upper limit of the CI is finite. For the approximate intervals, gart and indip_smooth are the recommended intervals (Fagerland et al. 2015). uniroot is used to solve n for the woolf, gart, and indip_smooth methods. Value Object of class "presize", a list of arguments (including the computed one) augmented with method and note elements. References <NAME>, <NAME>, <NAME> (2015). Recommended confidence intervals for two independent binomial proportions. Statistical Methods in Medical Research, 24(2):224-254. doi:10.1177/0962280211415469.
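The adjusted log-odds-ratio intervals described above (add 0 to each cell for woolf, 0.5 for gart) can be sketched in Python. This is a simplified sketch of the general recipe, not presize's exact implementation (which solves n from the target width via uniroot):

```python
import math
from statistics import NormalDist

Z = NormalDist().inv_cdf(0.975)  # 95% interval

def or_ci(p1: float, p2: float, n1: int, n2: int, add: float = 0.5):
    """Adjusted log-OR interval: add 0 to each cell (Woolf) or 0.5 (Gart)."""
    a, b = p1 * n1 + add, (1 - p1) * n1 + add  # exposed: events / non-events
    c, d = p2 * n2 + add, (1 - p2) * n2 + add  # unexposed
    log_or = math.log(a * d / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return math.exp(log_or - Z * se), math.exp(log_or + Z * se)

# 10% vs 15% risk, 100 per group, Gart adjustment (add = 0.5)
lwr, upr = or_ci(0.1, 0.15, 100, 100)
print(lwr, upr, upr - lwr)  # upr - lwr is the conf.width presize solves for
```

Adding 0.5 to each cell also shows why gart keeps the interval finite in the p1 = 0 and p2 = 0 edge cases discussed above.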
Examples # 10\% events in one group, 15\% in the other, 200 participants total # (= 100 in each group), estimate confidence interval width prec_or(p1 = .1, p2 = .15, n1 = 200/2) # formula by Gart prec_or(p1 = .1, p2 = .15, n1 = 200/2, method = "gart") # formula by Woolf prec_or(p1 = .1, p2 = .15, n1 = 200/2, method = "woolf") # 10\% odds in one group, 15\% in the other, desired CI width of 0.1, # estimate N prec_or(p1 = .1, p2 = .15, conf.width = .1) # formula by Gart prec_or(p1 = .1, p2 = .15, conf.width = .1, method = "gart") # formula by Woolf prec_or(p1 = .1, p2 = .15, conf.width = .1, method = "woolf") prec_prop Sample size or precision for a proportion Description prec_prop returns the sample size or the precision for the provided proportion. Usage prec_prop( p, n = NULL, conf.width = NULL, conf.level = 0.95, method = c("wilson", "agresti-coull", "exact", "wald"), ... ) Arguments p proportion. n number of observations. conf.width precision (the full width of the confidence interval). conf.level confidence level. method The method to use to calculate precision. Exactly one method may be provided. Methods can be abbreviated. ... other arguments to uniroot (e.g. tol). Details Exactly one of the parameters n or conf.width must be passed as NULL, and that parameter is determined from the other. The wilson, agresti-coull, exact, and wald methods are implemented. The wilson method is suggested for small n (< 40), and the agresti-coull method is suggested for larger n (see reference). The wald method is not suggested, but provided due to its widely distributed use. uniroot is used to solve n for the agresti-coull, wilson, and exact methods. Agresti-coull can be abbreviated by ac. Value Object of class "presize", a list of arguments (including the computed one) augmented with method and note elements. In the wilson and agresti-coull formula, the p from which the confidence interval is calculated is adjusted by a term (i.e. p + term ± ci).
This adjusted p is returned in padj. References <NAME>, <NAME>, DasGupta A (2001) Interval Estimation for a Binomial Proportion, Statistical Science, 16:2, 101-117, doi:10.1214/ss/1009213286 See Also binom.test, binom.confint in package binom, and binconf in package Hmisc Examples # CI width for 15\% with 50 participants prec_prop(0.15, n = 50) # number of participants for 15\% with a CI width of 0.2 prec_prop(0.15, conf.width = 0.2) # confidence interval width for a range of scenarios between 10 and 90\% with # 100 participants via the wilson method prec_prop(p = 1:9 / 10, n = 100, method = "wilson") # number of participants for a range of scenarios between 10 and 90\% with # a CI of 0.192 via the wilson method prec_prop(p = 1:9 / 10, conf.width = .192, method = "wilson") prec_rate Sample size or precision for a rate Description prec_rate returns the sample size or the precision for the provided rate. Usage prec_rate( r, x = NULL, conf.width = NULL, conf.level = 0.95, method = c("score", "vs", "exact", "wald"), ... ) Arguments r rate or rate ratio. x number of events. conf.width precision (the full width of the confidence interval). Should not exceed 5 times r. conf.level confidence level. method The method to use to calculate precision. Exactly one method may be provided. Methods can be abbreviated. ... other arguments to uniroot (e.g. tol). Details Exactly one of the parameters x or conf.width must be passed as NULL, and that parameter is determined from the other. The score, variance stabilizing (vs), exact, and wald methods are implemented to calculate the rate and the precision. For few events x (< 5), the exact method is recommended. If more than one method is specified or the method is mis-specified, the 'score' method will be used. uniroot is used to solve n for the score and exact methods. Value Object of class "presize", a list of arguments (including the computed one) augmented with method and note elements. References <NAME>.
(2002) A Comparison of Nine Confidence Intervals for a Poisson Parameter When the Expected Number of Events is ≤ 5, The American Statistician, 56:2, 85-89, doi:10.1198/000313002317572736 See Also poisson.test Examples # confidence interval width for a rate of 2.5 events per unit and 20 events, # using the score method prec_rate(2.5, x = 20, met = "score") # number of events to yield a CI width of 2.243 for a rate of 2.5 events per # unit and 20 events, using the score method prec_rate(2.5, conf.width = 2.243, met = "score") # confidence interval width for a rate of 2.5 events per unit and 20 events, # using the exact method prec_rate(2.5, x = 20, met = "exact") # vs and wald have the same conf.width, but different lwr and upr prec_rate(2.5, x = 20, met = "vs") prec_rate(2.5, x = 20, met = "wald") prec_rateratio Sample size or precision for a rate ratio Description prec_rateratio returns the sample size or the precision for the provided rates. Usage prec_rateratio( n1 = NULL, rate1 = NULL, rate2 = 2 * rate1, prec.level = NULL, r = 1, conf.level = 0.95 ) Arguments n1 number of patients in exposed group. rate1 event rate in the exposed group. rate2 event rate in the unexposed group. prec.level ratio of the upper limit over the lower limit of the rate ratio confidence interval. r allocation ratio (relative size of unexposed and exposed cohort (n2 / n1)). conf.level confidence level. Details Exactly one of the parameters n1 or prec.level must be passed as NULL, and that parameter is determined from the other. Event rates in the two groups should also be provided (rate1, rate2). If only rate1 is provided, rate2 is assumed to be 2 times rate1. References <NAME>, <NAME> (2018). Planning Study Size Based on Precision Rather Than Power. Epidemiology, 29:599-603. doi:10.1097/EDE.0000000000000876.
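The prec.level quantity (upper over lower CI limit) follows from the usual standard error of a log rate ratio, sqrt(1/x1 + 1/x2) with x1 and x2 the expected event counts. A minimal Python sketch (an assumption about the exact formula presize uses, though it reproduces the 3.81 in the example below):

```python
import math
from statistics import NormalDist

Z = NormalDist().inv_cdf(0.975)  # 95% interval

def rr_prec_level(n1: int, rate1: float, rate2: float, r: float = 1.0) -> float:
    """Ratio of the upper over the lower CI limit of a rate ratio (log scale)."""
    x1, x2 = n1 * rate1, r * n1 * rate2   # expected event counts per group
    se = math.sqrt(1 / x1 + 1 / x2)       # SE of log(rate ratio)
    return math.exp(2 * Z * se)

# 20 exposed participants, rates of 0.5 vs 3 events per unit time
print(round(rr_prec_level(20, 0.5, 3), 2))
```

Since the CI is symmetric only on the log scale, a ratio of limits (rather than a width) is the natural precision measure here.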
Examples # 20 participants, a rate of 50\% against a rate of 300\% prec_rateratio(20, .5, 3) # sample size required to attain a CI whose upper limit is not more than 3.81 times # larger than the lower limit prec_rateratio(rate1 = .5, rate2 = 3, prec.level = 3.81) prec_riskdiff Sample size or precision for risk difference Description prec_riskdiff returns the risk difference and the sample size or the precision for the provided proportions. Usage prec_riskdiff( p1, p2, n1 = NULL, conf.width = NULL, r = 1, conf.level = 0.95, method = c("newcombe", "mn", "ac", "wald"), ... ) Arguments p1 risk among exposed. p2 risk among unexposed. n1 number of patients in exposed group. conf.width precision (the full width of the confidence interval). r allocation ratio (relative size of exposed and unexposed cohort (n1 / n2)). conf.level confidence level. method Exactly one of newcombe (default), mn (Miettinen-Nurminen), ac (Agresti-Caffo), wald. Methods can be abbreviated. ... other options to uniroot (e.g. tol). Details Exactly one of the parameters n1 or conf.width must be passed as NULL, and that parameter is determined from the other. Newcombe (newcombe) proposed a confidence interval based on the wilson score method for the single proportion (see prec_prop). The confidence interval without continuity correction is implemented from equation 10 in Newcombe (1998). Miettinen-Nurminen (mn) provide a closed-form equation for the restricted maximum likelihood estimate. The implementation is based on code provided by Yongyi Min on https://users.stat.ufl.edu/~aa/cda/R/two-sample/R2/index.html. Agresti-Caffo (ac) confidence interval is based on the Wald confidence interval, adding 1 success to each cell of the 2 x 2 table (see Agresti and Caffo 2000). uniroot is used to solve n for the newcombe, ac, and mn methods. References <NAME> (2003) Categorical Data Analysis, Second Edition, Wiley Series in Probability and Statistics, doi:10.1002/0471249688.
<NAME> and <NAME> (2000) Simple and Effective Confidence Intervals for Proportions and Differences of Proportions Result from Adding Two Successes and Two Failures, The American Statistician, 54(4):280-288. <NAME> and <NAME> (1985) Comparative analysis of two rates, Statistics in Medicine, 4:213-226. Newcombe RG (1998) Interval estimation for the difference between independent proportions: comparison of eleven methods, Statistics in Medicine, 17:873-890. <NAME>, <NAME>, and <NAME> (2015). Recommended confidence intervals for two independent binomial proportions, Statistical Methods in Medical Research, 24(2):224-254. Examples # proportions of 40 and 30\%, 50 participants, how wide is the CI? prec_riskdiff(p1 = .4, p2 = .3, n1 = 50) # proportions of 40 and 30\%, how many participants for a CI 0.2 wide? prec_riskdiff(p1 = .4, p2 = .3, conf.width = .2) # Validate Newcombe (1998) prec_riskdiff(p1 = 56/70, p2 = 48/80, n1 = 70, r = 70/80, met = "newcombe") # Table IIa prec_riskdiff(p1 = 10/10, p2 = 0/10, n1 = 10, met = "newcombe") # Table IIh # multiple scenarios prec_riskdiff(p1 = c(56/70, 9/10, 6/7, 5/56), p2 = c(48/80, 3/10, 2/7, 0/29), n1 = c(70, 10, 7, 56), r = c(70/80, 1, 1, 56/29), method = "wald") prec_riskratio Sample size or precision for risk ratio Description prec_riskratio returns the risk ratio and the sample size or the precision for the provided proportions. Usage prec_riskratio( p1, p2, n1 = NULL, r = 1, conf.width = NULL, conf.level = 0.95, method = c("koopman", "katz"), ... ) Arguments p1 risk among exposed. p2 risk among unexposed. n1 number of patients in exposed group. r allocation ratio (relative size of unexposed and exposed cohort (n2 / n1)). conf.width precision (the full width of the confidence interval). conf.level confidence level. method Exactly one of koopman (default), katz. Methods can be abbreviated. ... other arguments to uniroot (e.g. tol).
Details Exactly one of the parameters n1 or conf.width must be passed as NULL, and that parameter is determined from the other. Koopman (koopman) provides an asymptotic score confidence interval that is always consistent with Pearson's chi-squared test. It is the recommended interval (Fagerland et al. 2015). Katz (katz) uses a logarithmic transformation to calculate the confidence interval. The CI cannot be computed if one of the proportions is zero. If both proportions are 1, the estimate of the standard error becomes zero, resulting in a CI of [1, 1]. uniroot is used to solve n for the katz and koopman methods. References <NAME>, <NAME>, and <NAME> (2015). Recommended confidence intervals for two independent binomial proportions, Statistical Methods in Medical Research, 24(2):224-254. <NAME>, <NAME>, Azen SP, and Pike MC (1978) Obtaining Confidence Intervals for the Risk Ratio in Cohort Studies, Biometrics 34:469-474. Koopman PAR (1984) Confidence Intervals for the Ratio of Two Binomial Proportions, Biometrics 40:513-517. Examples # Validate function with example in Fagerland et al. (2015), Table 5. prec_riskratio(p1 = 7/34, p2 = 1/34, n1 = 34, r = 1, met = "katz") # 7 (0.91 to 54) prec_riskratio(p1 = 7/34, p2 = 1/34, n1 = 34, r = 1, met = "koopman") # 7 (1.21 to 43) # Validate the Koopman method with example in Koopman (1984) prec_riskratio(p1 = 36/40, p2 = 16/80, n1 = 40, r = 2, met = "koopman") # 4.5 (2.94 to 7.15) prec_sens Sample size and precision of sensitivity and specificity Description Because sensitivity (true positives/total number of positives) and specificity (true negatives/total number of negatives) are simple proportions, these functions act as wrappers for prec_prop. Usage prec_sens( sens, n = NULL, ntot = NULL, prev = NULL, conf.width = NULL, round = "ceiling", ... ) prec_spec( spec, n = NULL, ntot = NULL, prev = NULL, conf.width = NULL, round = "ceiling", ... ) Arguments sens, spec proportions. n number of observations. ntot total sample size.
prev prevalence of cases/disease (i.e. proportion of ntot with the disease). conf.width precision (the full width of the confidence interval). round string, round calculated n up (ceiling) or down (floor). ... options passed to prec_prop (e.g. method, conf.width, conf.level). Details If ntot and prev are given, they are used to calculate n. Value Object of class "presize", a list of arguments (including the computed one) augmented with method and note elements. Note Calculated n can take on non-integer numbers, but prec_prop requires integers, so the calculated n is rounded according to the approach indicated in round. See Also prec_prop Examples # confidence interval width with n prec_sens(.6, 50) # confidence interval width with ntot and prevalence (assuming 50% prev) prec_sens(.6, ntot = 100, prev = .5) # sample size with confidence interval width prec_sens(.6, conf.width = 0.262)
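Since prec_sens and prec_spec delegate to prec_prop, the Wilson score width (prec_prop's default method) can be sketched directly. A minimal Python sketch of the standard Wilson interval, offered as an illustration rather than presize's exact code:

```python
import math
from statistics import NormalDist

Z = NormalDist().inv_cdf(0.975)  # 95% interval

def wilson_width(p: float, n: int) -> float:
    """Full width of the Wilson score interval for a proportion."""
    z2 = Z * Z
    half = Z * math.sqrt(p * (1 - p) / n + z2 / (4 * n * n)) / (1 + z2 / n)
    return 2 * half

# sensitivity 0.6 with 50 positives, and a proportion of 0.5 with 100 observations
print(round(wilson_width(0.6, 50), 3))
print(round(wilson_width(0.5, 100), 3))
```

wilson_width(0.6, 50) reproduces the 0.262 used in the prec_sens examples above, and wilson_width(0.5, 100) the 0.192 from the prec_prop examples.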
Crate concrete_shortint === Welcome to the `concrete-shortint` documentation! Description --- This library makes it possible to execute modular operations over encrypted short integers. It allows an integer circuit to be executed on an untrusted server, because both circuit inputs and outputs are kept private. Data are encrypted on the client side before being sent to the server. On the server side, every computation is performed on ciphertexts. The server, however, has to know the integer circuit to be evaluated. At the end of the computation, the server returns the encryption of the result to the user. Keys --- This crate exposes two types of keys: * The ClientKey is used to encrypt and decrypt, and has to be kept secret; * The ServerKey is used to perform homomorphic operations on the server side, and is meant to be published (the client sends it to the server). Quick Example --- The following piece of code shows how to generate keys and run a small integer circuit homomorphically.

```
use concrete_shortint::{gen_keys, Parameters};

// We generate a set of client/server keys, using the default parameters:
let (mut client_key, mut server_key) = gen_keys(Parameters::default());

let msg1 = 1;
let msg2 = 0;

// We use the client key to encrypt two messages:
let ct_1 = client_key.encrypt(msg1);
let ct_2 = client_key.encrypt(msg2);

// We use the server public key to execute an integer circuit:
let ct_3 = server_key.unchecked_add(&ct_1, &ct_2);

// We use the client key to decrypt the output of the circuit:
let output = client_key.decrypt(&ct_3);
assert_eq!(output, 1);
```

Re-exports --- `pub use ciphertext::Ciphertext;` `pub use client_key::ClientKey;` `pub use parameters::Parameters;` `pub use server_key::CheckError;` `pub use server_key::ServerKey;`

Modules --- * ciphertext - Module with the definition of a short-integer ciphertext. * client_key - Module with the definition of the ClientKey. * engine * parameters - Module with the definition of parameters for short-integers. * server_key - Module with the definition of the ServerKey. * wopbs - Module with the definition of the WopbsKey (WithOut padding PBS Key).

Functions --- * gen_keys - Generate a couple of client and server keys.

Struct concrete_shortint::parameters::Parameters ===

```
pub struct Parameters {
    pub lwe_dimension: LweDimension,
    pub glwe_dimension: GlweDimension,
    pub polynomial_size: PolynomialSize,
    pub lwe_modular_std_dev: StandardDev,
    pub glwe_modular_std_dev: StandardDev,
    pub pbs_base_log: DecompositionBaseLog,
    pub pbs_level: DecompositionLevelCount,
    pub ks_base_log: DecompositionBaseLog,
    pub ks_level: DecompositionLevelCount,
    pub pfks_level: DecompositionLevelCount,
    pub pfks_base_log: DecompositionBaseLog,
    pub pfks_modular_std_dev: StandardDev,
    pub cbs_level: DecompositionLevelCount,
    pub cbs_base_log: DecompositionBaseLog,
    pub message_modulus: MessageModulus,
    pub carry_modulus: CarryModulus,
}
```

A structure defining the set of cryptographic parameters for homomorphic integer circuit evaluation.
Fields --- `lwe_dimension: LweDimension` `glwe_dimension: GlweDimension` `polynomial_size: PolynomialSize` `lwe_modular_std_dev: StandardDev` `glwe_modular_std_dev: StandardDev` `pbs_base_log: DecompositionBaseLog` `pbs_level: DecompositionLevelCount` `ks_base_log: DecompositionBaseLog` `ks_level: DecompositionLevelCount` `pfks_level: DecompositionLevelCount` `pfks_base_log: DecompositionBaseLog` `pfks_modular_std_dev: StandardDev` `cbs_level: DecompositionLevelCount` `cbs_base_log: DecompositionBaseLog` `message_modulus: MessageModulus` `carry_modulus: CarryModulus`

Implementations --- ### impl Parameters #### pub unsafe fn new_unsecure(lwe_dimension: LweDimension, glwe_dimension: GlweDimension, polynomial_size: PolynomialSize, lwe_modular_std_dev: StandardDev, glwe_modular_std_dev: StandardDev, pbs_base_log: DecompositionBaseLog, pbs_level: DecompositionLevelCount, ks_base_log: DecompositionBaseLog, ks_level: DecompositionLevelCount, pfks_level: DecompositionLevelCount, pfks_base_log: DecompositionBaseLog, pfks_modular_std_dev: StandardDev, cbs_level: DecompositionLevelCount, cbs_base_log: DecompositionBaseLog, message_modulus: MessageModulus, carry_modulus: CarryModulus) -> Parameters

Constructs a new set of parameters for integer circuit evaluation. ##### Safety This function is unsafe, as failing to fix the parameters properly would yield incorrect and insecure computation. Unless you are a cryptographer who really knows the impact of each of those parameters, you **must** stick with the provided parameters.

Trait Implementations --- ### impl Clone for Parameters #### fn clone(&self) -> Parameters Returns a copy of the value. #### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. ### impl Debug for Parameters #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. ### impl Default for Parameters #### fn default() -> Self Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where    __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. #### fn eq(&self, other: &Parameters) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason. #### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where    __S: Serializer, Serialize this value into the given Serde serializer. ### impl StructuralPartialEq for Parameters Auto Trait Implementations --- ### impl RefUnwindSafe for Parameters ### impl Send for Parameters ### impl Sync for Parameters ### impl Unpin for Parameters ### impl UnwindSafe for Parameters Blanket Implementations --- ### impl<T> Any for Twhere    T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> Pointable for T #### const ALIGN: usize = mem::align_of::<T>() The alignment of pointer.#### type Init = T The type for initializers.#### unsafe fn init(init: <T as Pointable>::Init) -> usize Initializes a with the given initializer. Dereferences the given pointer. Mutably dereferences the given pointer. Drops the object pointed to by the given pointer. 
#### type Output = T
Should always be `Self`.

### impl<T> ToOwned for T where T: Clone

#### type Owned = T
The resulting type after obtaining ownership.

#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.

Uses borrowed data to replace owned data, usually by cloning.

### impl<T, U> TryFrom<U> for T where U: Into<T>

#### type Error = Infallible
The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>

#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.

### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>

Enum concrete_shortint::server_key::CheckError
===

```
pub enum CheckError {
    CarryFull,
}
```

Error returned when the carry buffer is full.

Variants
---
### `CarryFull`

Trait Implementations
---
### impl Debug for CheckError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for CheckError

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for CheckError

#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.

#### fn description(&self) -> &str
👎Deprecated since 1.42.0: use the Display impl or to_string()

#### fn cause(&self) -> Option<&dyn Error>
👎Deprecated since 1.33.0: replaced by Error::source, which can support downcasting

#### fn provide<'a>(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`error_generic_member_access`) Provides type based access to context intended for error reports.
Auto Trait Implementations
---
### impl RefUnwindSafe for CheckError
### impl Send for CheckError
### impl Sync for CheckError
### impl Unpin for CheckError
### impl UnwindSafe for CheckError

Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized

#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.

#### fn borrow(&self) -> &T
Immutably borrows from an owned value.

#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.

#### fn from(t: T) -> T
Returns the argument unchanged.

### impl<T, U> Into<U> for T where U: From<T>

#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> Pointable for T

#### const ALIGN: usize = mem::align_of::<T>()
The alignment of pointer.

#### type Init = T
The type for initializers.

#### unsafe fn init(init: <T as Pointable>::Init) -> usize
Initializes a pointer with the given initializer.

- Dereferences the given pointer.
- Mutably dereferences the given pointer.
- Drops the object pointed to by the given pointer.

#### fn provide<'a>(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`provide_any`) Data providers should implement this method to provide *all* values they are able to provide by using `demand`.

#### type Output = T
Should always be `Self`.

### impl<T> ToString for T where T: Display + ?Sized

#### default fn to_string(&self) -> String
Converts the given value to a `String`.
#### type Error = Infallible
The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>

#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.

Module concrete_shortint::ciphertext
===

Module with the definition of a short-integer ciphertext.

Structs
---
- Ciphertext: A structure representing a short-integer ciphertext, used to evaluate short-integer circuits homomorphically. Internally, it uses an LWE ciphertext.
- Degree: This indicates the number of operations that have been done.

Module concrete_shortint::client_key
===

Module with the definition of the ClientKey.

Structs
---
- ClientKey: A structure containing the client key, which must be kept secret.

Module concrete_shortint::parameters
===

Module with the definition of parameters for short-integers.

This module provides the structure containing the cryptographic parameters required for the homomorphic evaluation of integer circuits, as well as a list of secure cryptographic parameter sets.
Modules
---
- parameters_wopbs
- parameters_wopbs_message_carry

Structs
---
- CarryModulus: The number of bits on which the carry will be encoded.
- DecompositionBaseLog: The logarithm of the base used in a decomposition.
- DecompositionLevelCount: The number of levels used in a decomposition.
- GlweDimension: The number of polynomials of a GLWE mask, or the size of a GLWE secret key.
- LweDimension: The number of scalars in an LWE mask, or the length of an LWE secret key.
- MessageModulus: The number of bits on which the message will be encoded.
- Parameters: A structure defining the set of cryptographic parameters for homomorphic integer circuit evaluation.
- PolynomialSize: The number of coefficients of a polynomial.
- StandardDev: A distribution parameter that uses the standard deviation as representation.

Constants
---
- ALL_PARAMETER_VEC: Vector containing all parameter sets.
- BIVARIATE_PBS_COMPLIANT_PARAMETER_SET_VEC: Vector containing all parameter sets where the carry space is strictly greater than one.
- DEFAULT_PARAMETERS: Default parameter set.
- PARAM_MESSAGE_1_CARRY_0: Nomenclature: PARAM_MESSAGE_X_CARRY_Y: the message (resp. carry) modulus is encoded over X (resp. Y) bits, i.e., message_modulus = 2^{X} (resp. carry_modulus = 2^{Y}).
- PARAM_MESSAGE_1_CARRY_1, PARAM_MESSAGE_1_CARRY_2, PARAM_MESSAGE_1_CARRY_3, PARAM_MESSAGE_1_CARRY_4, PARAM_MESSAGE_1_CARRY_5, PARAM_MESSAGE_1_CARRY_6, PARAM_MESSAGE_1_CARRY_7
- PARAM_MESSAGE_2_CARRY_0, PARAM_MESSAGE_2_CARRY_1, PARAM_MESSAGE_2_CARRY_2, PARAM_MESSAGE_2_CARRY_3, PARAM_MESSAGE_2_CARRY_4, PARAM_MESSAGE_2_CARRY_5, PARAM_MESSAGE_2_CARRY_6
- PARAM_MESSAGE_3_CARRY_0, PARAM_MESSAGE_3_CARRY_1, PARAM_MESSAGE_3_CARRY_2, PARAM_MESSAGE_3_CARRY_3, PARAM_MESSAGE_3_CARRY_4, PARAM_MESSAGE_3_CARRY_5
- PARAM_MESSAGE_4_CARRY_0, PARAM_MESSAGE_4_CARRY_1, PARAM_MESSAGE_4_CARRY_2, PARAM_MESSAGE_4_CARRY_3, PARAM_MESSAGE_4_CARRY_4
- PARAM_MESSAGE_5_CARRY_0, PARAM_MESSAGE_5_CARRY_1, PARAM_MESSAGE_5_CARRY_2, PARAM_MESSAGE_5_CARRY_3
- PARAM_MESSAGE_6_CARRY_0, PARAM_MESSAGE_6_CARRY_1, PARAM_MESSAGE_6_CARRY_2
- PARAM_MESSAGE_7_CARRY_0, PARAM_MESSAGE_7_CARRY_1
- PARAM_MESSAGE_8_CARRY_0
- WITH_CARRY_PARAMETERS_VEC: Vector containing all parameter sets where the carry space is strictly greater than one.

Traits
---
- DispersionParameter: A trait for types representing distribution parameters, for a given unsigned integer type.

Functions
---
- get_parameters_from_message_and_carry: Return a parameter set from message and carry moduli.

Module concrete_shortint::server_key
===

Module with the definition of the ServerKey.

This module implements the generation of the server public key, together with all the available homomorphic integer operations.

Structs
---
- MaxDegree: Maximum value that the degree can reach.
- ServerKey: A structure containing the server public key.

Enums
---
- CheckError: Error returned when the carry buffer is full.

Module concrete_shortint::wopbs
===

Module with the definition of the WopbsKey (WithOut padding PBS Key).

This module implements the generation of another server public key, which allows computing an alternative version of the programmable bootstrapping that does not require the use of a bit of padding. In the case where a padding bit is defined, keys are generated so that they are compatible with both uses.
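The PARAM_MESSAGE_X_CARRY_Y nomenclature above maps directly to moduli. A minimal Python sketch of that mapping (purely illustrative, not part of the crate):

```python
def moduli_from_name(x_bits: int, y_bits: int) -> tuple[int, int]:
    """For PARAM_MESSAGE_X_CARRY_Y, the message (resp. carry) modulus
    is encoded over X (resp. Y) bits, i.e. 2**X (resp. 2**Y)."""
    return 2 ** x_bits, 2 ** y_bits

# PARAM_MESSAGE_2_CARRY_2 encodes the message on 2 bits and the carry on 2 bits:
message_modulus, carry_modulus = moduli_from_name(2, 2)
print(message_modulus, carry_modulus)  # 4 4
```

Note that sets with Y = 0 (e.g. PARAM_MESSAGE_1_CARRY_0) have carry_modulus = 1, which is why they are excluded from the vectors requiring a carry space strictly greater than one.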
Structs
---
- WopbsKey

Function concrete_shortint::gen_keys
===

```
pub fn gen_keys(parameters_set: Parameters) -> (ClientKey, ServerKey)
```

Generate a matching pair of client and server keys.

Example
---

Generating a pair of ClientKey and ServerKey using the default parameters:

```
use concrete_shortint::gen_keys;

// Generate the client key and the server key:
let (cks, sks) = gen_keys(Default::default());
```
hll
hex
Elixir
HLL v0.1.1 API Reference
===

Modules
---
[HLL](HLL.html) Default HyperLogLog module
[HLL.Redis](HLL.Redis.html) Redis compatible HyperLogLog module

HLL
===

Default HyperLogLog module.

Note that this module is not Redis compatible. Use the alternative [`HLL.Redis`](HLL.Redis.html) module if you need to interact with Redis and need it to be Redis compatible.

This module uses `:erlang.phash2` as its hash function.

Example
---

```
iex> hll = HLL.new(14)
iex> hll = Enum.reduce(1..2000, hll, fn i, acc -> HLL.add(acc, i) end)
iex> HLL.cardinality(hll)
1998
```

Serialization
---

There are two representations: sparse (space-efficient for low cardinality) and dense (space-efficient for high cardinality). When encoding a HyperLogLog with `HLL.encode`, this module automatically chooses the representation with the smaller encoded size.

```
# sparse representation:
<<0::4, (p - 8)::4, index0::p, count0::6, index1::p, count1::6 ..., padding::xx>>

# dense representation:
<<1::4, (p - 8)::4, count0::6, count1::6, count2::6 ...>>
```

Summary
===

Types
---
[t()](#t:t/0)

Functions
---
[add(hll, item)](#add/2) Add a value to HyperLogLog instance
[cardinality(hll)](#cardinality/1) Estimate cardinality of HyperLogLog instance
[decode(hll_binary)](#decode/1) Decode HLL binary format to HyperLogLog instance
[encode(hll)](#encode/1) Encode HyperLogLog instance to HLL binary format
[merge(list_of_hll)](#merge/1) Merge multiple HyperLogLog instances into one
[new(p)](#new/1) Create a HyperLogLog instance with specified precision in range from 8 to 16

Types
===

```
t() :: {HLL, 8..16, [map](https://hexdocs.pm/elixir/typespecs.html#basic-types)()}
```

Functions
===

```
add([t](#t:t/0)(), [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: [t](#t:t/0)()
```

Add a value to HyperLogLog instance.
Example
---

```
iex> h = HLL.new(12)
{HLL, 12, %{}}
iex> HLL.add(h, "hello")
{HLL, 12, %{1581 => 2}}
```

```
cardinality([t](#t:t/0)()) :: [non_neg_integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)()
```

Estimate cardinality of HyperLogLog instance.

Example
---

```
iex> h = HLL.new(14)
iex> HLL.cardinality(h)
0
iex> h = HLL.add(h, "foo")
iex> HLL.cardinality(h)
1
iex> h = HLL.add(h, "bar")
iex> HLL.cardinality(h)
2
```

```
decode([binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()) :: [t](#t:t/0)()
```

Decode HLL binary format to HyperLogLog instance.

Example
---

```
iex> h = HLL.new(14) |> HLL.add("foo")
{HLL, 14, %{617 => 1}}
iex> encoded = HLL.encode(h)
<<6, 9, 164, 16>>
iex> HLL.decode(encoded)
{HLL, 14, %{617 => 1}}
```

```
encode([t](#t:t/0)()) :: [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()
```

Encode HyperLogLog instance to HLL binary format.

Example
---

```
iex> HLL.new(14) |> HLL.encode()
<<6>>
iex> HLL.new(14) |> HLL.add("foo") |> HLL.encode()
<<6, 9, 164, 16>>
iex> HLL.new(14) |> HLL.add("foo") |> HLL.add("bar") |> HLL.encode()
<<6, 9, 164, 16, 219, 129, 0>>
```

```
merge([[t](#t:t/0)()]) :: [t](#t:t/0)()
```

Merge multiple HyperLogLog instances into one.

Example
---

```
iex> h1 = HLL.new(12) |> HLL.add("foo")
iex> h2 = HLL.new(12) |> HLL.add("bar")
iex> h3 = HLL.new(12) |> HLL.add("foo") |> HLL.add("bar")
iex> h_merged = HLL.merge([h1, h2])
iex> h3 == h_merged
true
```

```
new(8..16) :: [t](#t:t/0)()
```

Create a HyperLogLog instance with specified precision in range from 8 to 16.

Example
---

```
iex> HLL.new(12)
{HLL, 12, %{}}
iex> HLL.new(14)
{HLL, 14, %{}}
```

HLL.Redis
===

Redis compatible HyperLogLog module.

This module is Redis (v5) compatible. It uses the same hash algorithm, same HyperLogLog estimation algorithm and same serialization format as Redis (v5) does.
Therefore, it can consume HyperLogLog sketches from Redis, and it can generate HyperLogLog sketches for Redis as well.

It has a fixed precision of 14 (16384 buckets), as Redis does. If you want a different precision, use the [`HLL`](HLL.html) module instead.

The [`HLL.Redis`](#content) module is generally slower than the alternative [`HLL`](HLL.html) module:

* [`HLL.Redis`](#content) hashing is slower: the hash function in [`HLL.Redis`](#content) is ported from Redis and written in Elixir, while the hash function in [`HLL`](HLL.html) is `:erlang.phash2`, which runs in native code.
* [`HLL.Redis`](#content) serialization is slower: [`HLL.Redis`](#content) uses the Redis binary format for serialization, while [`HLL`](HLL.html) uses a binary format closer to [`HLL`](HLL.html)'s internal data structure, which makes it faster to encode and decode.

Therefore, if you do not require Redis compatibility, it is recommended to use the [`HLL`](HLL.html) module for the performance gain.

Example
---

```
iex> hll_redis = HLL.Redis.new()
iex> hll_redis = Enum.reduce(1..2000, hll_redis, fn i, acc -> HLL.Redis.add(acc, Integer.to_string(i)) end)
iex> HLL.Redis.cardinality(hll_redis)
2006
```

Summary
===

Types
---
[t()](#t:t/0)

Functions
---
[add(hll_redis, item)](#add/2) Add a value to Redis compatible HyperLogLog instance
[cardinality(hll_redis)](#cardinality/1) Estimate cardinality of Redis compatible instance
[decode(redis_binary)](#decode/1) Decode Redis HyperLogLog binary format to Redis compatible HyperLogLog instance
[encode(hll_redis)](#encode/1) Encode Redis compatible HyperLogLog instance to Redis HyperLogLog binary format
[merge(list_of_hll_redis)](#merge/1) Merge multiple Redis compatible HyperLogLog instances into one
[new()](#new/0) Create a Redis compatible HyperLogLog instance with precision = 14

Types
===

```
t() :: {HLL.Redis, [map](https://hexdocs.pm/elixir/typespecs.html#basic-types)()}
```

Functions
===

```
add([t](#t:t/0)(), [any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()) :: [t](#t:t/0)()
```

Add a value to Redis compatible HyperLogLog instance.

If `item` is a binary, the Redis compatible murmur2 hash function is applied directly. If `item` is not a binary, it is first transformed to a binary via [`:erlang.term_to_binary/1`](http://www.erlang.org/doc/man/erlang.html#term_to_binary-1) and then hashed with the Redis compatible murmur2 hash function.

Example
---

```
iex> HLL.Redis.new() |> HLL.Redis.add("hello")
{HLL.Redis, %{9216 => 1}}
```

```
cardinality([t](#t:t/0)()) :: [non_neg_integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)()
```

Estimate cardinality of Redis compatible instance.

Example
---

```
iex> data = Enum.map(1..5000, &Integer.to_string/1)
iex> h = HLL.Redis.new()
iex> h = Enum.reduce(data, h, fn x, acc -> HLL.Redis.add(acc, x) end)
iex> HLL.Redis.cardinality(h)
4985
iex> {:ok, conn} = Redix.start_link()
iex> for x <- data do Redix.command!(conn, ["PFADD", "test_hll_redis_cardinality", x]) end
iex> Redix.command!(conn, ["PFCOUNT", "test_hll_redis_cardinality"])
4985
```

```
decode([binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()) :: [t](#t:t/0)()
```

Decode Redis HyperLogLog binary format to Redis compatible HyperLogLog instance.

Example
---

```
iex> {:ok, conn} = Redix.start_link()
iex> Redix.command!(conn, ["PFADD", "test_hll_redis_decode", "okk"])
iex> bin = Redix.command!(conn, ["GET", "test_hll_redis_decode"])
<<72, 89, 76, 76, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 128, 108, 180, 132, 83, 73>>
iex> HLL.Redis.decode(bin)
{HLL.Redis, %{11445 => 2}}
iex> HLL.Redis.new() |> HLL.Redis.add("okk")
{HLL.Redis, %{11445 => 2}}
```

```
encode([t](#t:t/0)()) :: [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()
```

Encode Redis compatible HyperLogLog instance to Redis HyperLogLog binary format.
Example
---

```
iex> HLL.Redis.new() |> HLL.Redis.add("hello") |> HLL.Redis.encode()
<<72, 89, 76, 76, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 128, 99, 255, 128, 91, 254>>
iex> {:ok, conn} = Redix.start_link()
iex> Redix.command!(conn, ["PFADD", "test_hll_redis_encode", "hello"])
iex> Redix.command!(conn, ["GET", "test_hll_redis_encode"])
<<72, 89, 76, 76, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 128, 99, 255, 128, 91, 254>>
```

Merge multiple Redis compatible HyperLogLog instances into one.

Example
---

```
iex> h1 = HLL.Redis.new() |> HLL.Redis.add("foo")
iex> h2 = HLL.Redis.new() |> HLL.Redis.add("bar")
iex> h3 = HLL.Redis.new() |> HLL.Redis.add("foo") |> HLL.Redis.add("bar")
iex> h_merged = HLL.Redis.merge([h1, h2])
iex> h3 == h_merged
true
```

```
new() :: [t](#t:t/0)()
```

Create a Redis compatible HyperLogLog instance with precision = 14.

Example
---

```
iex> HLL.Redis.new()
{HLL.Redis, %{}}
```
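The sparse layout documented for `HLL.encode` (`<<0::4, (p - 8)::4, index0::p, count0::6, ..., padding>>`) can be reproduced with plain bit packing. The Python sketch below is an illustration of that documented layout, not the library's actual encoder; for instance, the exact padding the library emits for multi-register sketches may differ:

```python
def encode_sparse(p: int, registers: dict[int, int]) -> bytes:
    """Pack an {index: count} register map into the documented sparse
    layout: a 4-bit tag (0), a 4-bit (p - 8) field, then index::p and
    count::6 bit fields per register, zero-padded to whole bytes."""
    bits, nbits = (0 << 4) | (p - 8), 8
    for index in sorted(registers):
        bits = (bits << p) | index          # index on p bits
        bits = (bits << 6) | registers[index]  # count on 6 bits
        nbits += p + 6
    pad = -nbits % 8                        # zero-pad to a byte boundary
    return (bits << pad).to_bytes((nbits + pad) // 8, "big")

# Matches the doctest values above for p = 14:
print(encode_sparse(14, {}))        # b'\x06'           (HLL.new(14) |> HLL.encode())
print(encode_sparse(14, {617: 1}))  # b'\x06\t\xa4\x10' (after adding "foo")
```

The header byte alone explains why an empty precision-14 sketch encodes to `<<6>>`: the tag nibble is 0 and the precision nibble is 14 - 8 = 6.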
@types/level-js
npm
JavaScript
[Installation](#installation) === > `npm install --save @types/level-js` [Summary](#summary) === This package contains type definitions for level-js (<https://github.com/Level/level-js>). [Details](#details) === Files were exported from <https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/level-js>. [index.d.ts](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/level-js/index.d.ts) --- ``` import { AbstractLevelDOWN, AbstractOptions } from "abstract-leveldown"; interface Level extends AbstractLevelDOWN { readonly location: string; readonly prefix: string; readonly version: string | number; destroy(location: string, cb: (err: Error | undefined) => void): void; destroy(location: string, prefix: string, cb: (err: Error | undefined) => void): void; } interface LevelOptions { readonly prefix?: string | undefined; readonly version?: string | number | undefined; } interface LevelConstructor { new(location: string, options?: LevelOptions): Level; (location: string, options?: LevelOptions): Level; } declare const Level: LevelConstructor; export = Level; ``` ### [Additional Details](#additional-details) * Last updated: Wed, 18 Oct 2023 05:47:07 GMT * Dependencies: [@types/abstract-leveldown](https://npmjs.com/package/@types/abstract-leveldown) [Credits](#credits) === These definitions were written by [<NAME>](https://github.com/danwbyrne). Readme --- ### Keywords none
mmaqshiny
cran
R
Package ‘mmaqshiny’

October 13, 2022

Title Explore Air-Quality Mobile-Monitoring Data

Version 1.0.0

Description Mobile-monitoring, or “sensors on a mobile platform”, is an increasingly popular approach to measure high-resolution pollution data at the street level. Coupled with location data, spatial visualisation of air-quality parameters helps detect localized areas of high air-pollution, also called hotspots. In this approach, portable sensors are mounted on a vehicle and driven on predetermined routes to collect high frequency data (1 Hz). 'mmaqshiny' is for analysing, visualising and spatial mapping of high-resolution air-quality data collected by specific devices installed on a moving platform: 1 Hz data of PM2.5 (mass concentrations of particulate matter with size less than 2.5 microns), black carbon mass concentrations (BC), ultra-fine particle number concentrations and carbon dioxide, along with GPS coordinates and relative humidity (RH) data, collected by popular portable instruments (TSI DustTrak-8530, Aethlabs microAeth-AE51, TSI CPC3007, LICOR Li-830, Garmin GPSMAP 64s and Omega USB RH probe, respectively). It incorporates device-specific cleaning and correction algorithms. RH correction is applied to DustTrak PM2.5 following Chakrabarti et al. (2004) <doi:10.1016/j.atmosenv.2004.03.007>. Provision is given to add linear regression coefficients for correcting the PM2.5 data (if required). BC data will be cleaned of vibration-generated noise by adopting the statistical procedure explained in Apte et al. (2011) <doi:10.1016/j.atmosenv.2011.05.028>, followed by a loading correction as suggested by Ban-Weiss et al. (2009) <doi:10.1021/es8021039>. For the number concentration data, provision is given for a dilution correction factor (if a diluter is used with the CPC3007; default value is 1). The package joins the raw, cleaned and corrected data from the above instruments and outputs a downloadable csv file.
Depends R (>= 3.5.0)
License MIT + file LICENSE
Encoding UTF-8
LazyData true
Imports htmltools, Cairo, xts, lubridate, zoo, caTools, ggplot2, data.table, DT, dplyr, leaflet, stringr, shiny, XML, shinyjs, plotly
Suggests testthat, devtools, usethis, shinytest
RoxygenNote 7.1.0
URL https://github.com/meenakshi-kushwaha/mmaqshiny
BugReports https://github.com/meenakshi-kushwaha/mmaqshiny/issues
NeedsCompilation no
Author <NAME> [aut, cre, cph], <NAME> [dtc], <NAME> [aui, ctb], <NAME> [aut, cph]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2020-06-26 16:00:23 UTC

R topics documented:
mmaqshiny_ru...

mmaqshiny_run: Explore Air-Quality Mobile-Monitoring Data

Description

Mobile-monitoring, or “sensors on a mobile platform”, is an increasingly popular approach to measure high-resolution pollution data at the street level. Coupled with location data, spatial visualisation of air-quality parameters helps detect localized areas of high air-pollution, also called hotspots. In this approach, portable sensors are mounted on a vehicle and driven on predetermined routes to collect high frequency data (1 Hz). The package is for analysing, visualising and spatial mapping of high-resolution air-quality data collected by specific devices installed on a moving platform.

Usage

mmaqshiny_run()

Examples

if(interactive()){
  mmaqshiny::mmaqshiny_run()
}
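Two of the corrections described in the DESCRIPTION are simple arithmetic: the optional user-supplied linear regression for DustTrak PM2.5, and the dilution correction factor for CPC3007 number concentrations (default 1). A small sketch of those two steps (illustrative only; the package itself is an R Shiny app, and the coefficient names here are assumptions, not the package's argument names):

```python
def correct_pm25(raw: float, slope: float = 1.0, intercept: float = 0.0) -> float:
    """Optional linear regression correction for DustTrak PM2.5,
    applied after the device-specific cleaning and RH correction."""
    return slope * raw + intercept

def correct_number_conc(raw: float, dilution_factor: float = 1.0) -> float:
    """CPC3007 dilution correction; the default factor of 1 leaves
    readings unchanged when no diluter is used."""
    return raw * dilution_factor

print(correct_pm25(50.0))                              # 50.0 (identity by default)
print(correct_number_conc(1.2e4, dilution_factor=10))  # 120000.0
```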
WHC_ModelSqliteKit
cocoapods
Objective-C
WHC_ModelSqliteKit
===

WHC_ModelSqliteKit Documentation
---

Welcome to the documentation for WHC_ModelSqliteKit! This library provides a convenient and efficient way to use an SQLite database in your iOS projects. It offers a set of tools and utilities to simplify database operations and improve performance. Whether you are new to SQLite or an experienced developer, this documentation will guide you through the essentials of using WHC_ModelSqliteKit effectively in your iOS app.

### Installation

To use WHC_ModelSqliteKit in your project, follow these steps:

* Open your project in Xcode.
* Go to “File” > “Swift Packages” > “Add Package Dependency”.
* Enter the following URL: https://github.com/netyouli/WHC_ModelSqliteKit.git
* Choose the latest available version of WHC_ModelSqliteKit.
* Click “Next” and then “Finish”.

### Quick Start

To get started with WHC_ModelSqliteKit, follow these steps:

1. Create a new SQLite database:

```
import WHC_ModelSqliteKit

let databasePath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0] + "/TestDB.sqlite"

if let database = WHC_ModelSqlite.createDatabase(databasePath: databasePath) {
    // Database created successfully
} else {
    // Error creating database
}
```

Next, create a table and model:

```
class Person: WHC_SqliteModel {
    var name = ""
    var age = 0
}

WHC_ModelSqlite.createTable(modelClass: Person.self, dbName: databasePath, needVerify: true)
```

Finally, insert, update, or query the database using model objects:

```
let person = Person()
person.name = "John"
person.age = 25

let insertResult = WHC_ModelSqlite.insert(model: person)
if insertResult.result {
    // Insert successful
} else {
    // Insert failed
}

let updateResult = WHC_ModelSqlite.update(model: person, where: "name = 'John'", dbName: databasePath)
if updateResult.result {
    // Update successful
} else {
    // Update failed
}

let queryResult = WHC_ModelSqlite.query(model: Person.self, where: "age > 20", dbName: databasePath) as?
[Person]

if let results = queryResult {
    // Use the query results
} else {
    // Query failed
}
```

### Key Features

* **Automatic mapping:** WHC_ModelSqliteKit automatically maps model properties to database columns, saving you from writing repetitive code.
* **Efficient queries:** Perform complex queries with ease using WHC_ModelSqliteKit’s powerful query API.
* **Concurrency support:** WHC_ModelSqliteKit handles concurrent database operations and ensures data integrity.
* **Data migration:** Easily migrate database schema and upgrade data models without losing any data.

### Conclusion

With WHC_ModelSqliteKit, you can leverage the power of SQLite in your iOS app without the hassle of manual database management. Start using WHC_ModelSqliteKit today and simplify your database operations!