Dataset Preview
The full dataset viewer is not available for this dataset; only a preview of the rows is shown below.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 1 missing columns ({'meta'})

This happened while the json dataset builder was generating data using

hf://datasets/recursal/Devopedia/data/dev_index.json (at revision b7c3b8cc33cd387698951d3e4b14f4356566bb68)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              text: string
              to
              {'text': Value(dtype='string', id=None), 'meta': {'title': Value(dtype='string', id=None), 'href': Value(dtype='string', id=None)}}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1323, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 938, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 1 missing columns ({'meta'})
              
              This happened while the json dataset builder was generating data using
              
              hf://datasets/recursal/Devopedia/data/dev_index.json (at revision b7c3b8cc33cd387698951d3e4b14f4356566bb68)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)


Each row has two columns: `text` (string) and `meta` (dict).
# Hypothesis Testing and Types of Errors

## Summary

Suppose we want to study the income of a population. We study a sample from the population and draw conclusions. The sample should represent the population for our study to be reliable.

The **null hypothesis** \((H\_0)\) is that the sample represents the population. Hypothesis testing provides us with a framework to conclude if we have sufficient evidence to either accept or reject the null hypothesis.

Population characteristics are either assumed or drawn from third-party sources or judgements by subject matter experts. Population data and sample data are characterised by the moments of their distributions (mean, variance, skewness and kurtosis). We test the null hypothesis for equality of moments where the population characteristic is available, and conclude whether the sample represents the population. For example, given only the mean income of the population, we validate whether the mean income of the sample is close to the population mean, to conclude whether the sample represents the population.

## Discussion

### What are the math representations of population and sample parameters?

Population mean and population variance are denoted by the Greek letters \(\mu\) and \(\sigma^2\) respectively, while sample mean and sample variance are denoted by the Latin letters \(\bar x\) and \(s^2\) respectively.

### What's the relevance of sampling error to hypothesis testing?

Suppose we obtain a sample mean of \(\bar x\) from a population of mean \(\mu\). The two are related by \(|\bar x - \mu| \geq 0\):

+ If the difference is not significant, we conclude the difference is due to sampling. This is called **sampling error** and it happens due to chance.
+ If the difference is significant, we conclude the sample does not represent the population. The reason has to be more than chance for the difference to be explained.

Hypothesis testing helps us conclude whether the difference is due to sampling error or due to reasons beyond sampling error.

### What are some assumptions behind hypothesis testing?

A common assumption is that the observations are independent and come from a random sample. The population distribution must be Normal, or the sample size must be large enough. If the sample size is large enough, we can invoke the *Central Limit Theorem (CLT)* regardless of the underlying population distribution. Due to the CLT, the sampling distribution of the sample statistic (such as the sample mean) will be approximately Normal. A rule of thumb is 30 observations, but in some cases even 10 observations may be sufficient to invoke the CLT, while other cases require at least 50 observations.

### What are one-tailed and two-tailed tests?

When acceptance of \(H\_0\) involves boundaries on both sides, we invoke the **two-tailed test**. For example, if we define \(H\_0\) as the sample being drawn from a population with age limits in the range of 25 to 35, then testing \(H\_0\) involves limits on both sides.

Suppose we define the population as those older than age 50. We are interested in rejecting a sample if its age is less than or equal to 50; we are not concerned about any upper limit. Here we invoke the **one-tailed test**. A one-tailed test could be left-tailed or right-tailed.

Consider the average gas price in California compared to the national average of $2.62. If we believe that the price is higher in California, we use a right-tailed test. If we believe that the California price is different from the national average but we don't know if it's higher or lower, we use a two-tailed test.

Symbolically, given the **alternative or research hypothesis** \(H\_1\), we state:

+ \(H\_0\): \(\mu = \$ 2.62\)
+ \(H\_1\) right-tailed: \(\mu > \$ 2.62\)
+ \(H\_1\) two-tailed: \(\mu \neq \$ 2.62\)

### What are the types of errors in hypothesis testing?

In concluding whether the sample represents the population, there is scope for committing errors on the following counts:

+ Not accepting that the sample represents the population when in reality it does. This is called a **type-I** or **\(\alpha\) error**.
+ Accepting that the sample represents the population when in reality it does not. This is called a **type-II** or **\(\beta\) error**.

For instance, granting a loan to an applicant with a low credit score is an \(\alpha\) error. Not granting a loan to an applicant with a high credit score is a \(\beta\) error. The symbols \(\alpha\) and \(\beta\) are used to represent the probability of type-I and type-II errors respectively.

### How do we measure type-I or \(\alpha\) error?

The p-value can be interpreted as the probability of getting a result that's the same as, or more extreme than, the observed one when the null hypothesis is true. The observed sample mean \(\bar x\) is overlaid on the population distribution of values with mean \(\mu\) and variance \(\sigma^2\). The proportion of values beyond \(\bar x\) and away from \(\mu\) (either in the left tail, the right tail, or both tails) is the **p-value**. If p-value \(\leq \alpha\), we reject the null hypothesis. The results are then said to be **statistically significant** and not due to chance.

Assuming \(\alpha = 0.05\), if the p-value is greater than 5% we conclude that the sample is highly likely to be drawn from a population with mean \(\mu\) and variance \(\sigma^2\), and we accept \(H\_0\). Otherwise, there's insufficient evidence that the sample is part of the population and we reject \(H\_0\).

We preselect \(\alpha\) based on how much type-I error we're willing to tolerate. \(\alpha\) is called the **level of significance**. The standard level of significance is 0.05, but in some studies it may be 0.01 or 0.1. In the case of two-tailed tests, it's \(\alpha/2\) on either side.
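To make the gas price example concrete, here's a minimal sketch of a one-sample z-test using only the Python standard library. The sample mean, standard deviation and sample size below are illustrative assumptions, not figures from the article; a z-test further assumes the population standard deviation is known.

```python
from math import sqrt
from statistics import NormalDist

# Illustrative numbers for California gas prices (assumed, not from the article)
mu0 = 2.62       # hypothesised population mean under H0 (national average, $)
x_bar = 2.71     # observed sample mean ($)
sigma = 0.25     # population standard deviation, assumed known -> z-test
n = 40           # sample size

z = (x_bar - mu0) / (sigma / sqrt(n))          # test statistic
p_right = 1 - NormalDist().cdf(z)              # right-tailed: H1 is mu > 2.62
p_two = 2 * (1 - NormalDist().cdf(abs(z)))     # two-tailed: H1 is mu != 2.62

alpha = 0.05
print(f"z = {z:.2f}, right-tailed p = {p_right:.4f}, two-tailed p = {p_two:.4f}")
print("reject H0" if p_right <= alpha else "fail to reject H0")
```

With these assumed numbers, z is about 2.28 and the right-tailed p-value is about 0.011, so at \(\alpha = 0.05\) we would reject \(H\_0\).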
### How do we determine sample size and confidence interval for a sample estimate?

The **Law of Large Numbers** suggests that the larger the sample size, the more accurate the estimate. Accuracy means the variance of the estimate will tend towards zero as the sample size increases. Sample size can be determined to suit an accepted level of tolerance for deviation. The confidence interval of the sample mean is determined from the sample mean, offset by the estimate's variability on either side.

If the population variance is known, we conduct a z-test based on the Normal distribution. Otherwise, the variance has to be estimated and we use a t-test based on the t-distribution. The formulae for determining sample size and confidence interval depend on what we want to estimate (mean, variance or others), the sampling distribution of the estimate, and the standard deviation of the estimate's sampling distribution.

### How do we measure type-II or \(\beta\) error?

We overlay the sample mean's distribution on the population distribution. The proportion of overlap of the sampling estimate's distribution with the population distribution is the **\(\beta\) error**. The larger the overlap, the larger the chance that the sample does belong to the population with mean \(\mu\) and variance \(\sigma^2\). Incidentally, despite the overlap, the p-value may be less than 5%. This happens when the sample mean is way off the population mean, but the variance of the sample mean is such that the overlap is significant.
### How do we control \(\alpha\) and \(\beta\) errors?

Errors \(\alpha\) and \(\beta\) are dependent on each other: increasing one decreases the other. Choosing suitable values for them depends on the cost of making these errors. Perhaps it's worse to convict an innocent person (type-I error) than to acquit a guilty person (type-II error), in which case we choose a lower \(\alpha\). But it's possible to decrease both errors by collecting more data.

Just as the p-value manifests \(\alpha\), the **Power of Test** manifests \(\beta\). The power of a test is \(1-\beta\). Among the various ways to interpret power are:

+ Probability of rejecting the null hypothesis when, in fact, it is false.
+ Probability that a test of significance will pick up on an effect that is present.
+ Probability of avoiding a Type II error.

A low p-value and high power help us decisively conclude that the sample doesn't belong to the population. When we cannot conclude decisively, it's advisable to go for larger samples and multiple samples. In fact, power is increased by increasing sample size, effect size and significance level. Variance also affects power.

### What are some misconceptions in hypothesis testing?

A common misconception is to treat the p-value as the probability that the null hypothesis is true. In fact, the p-value is computed under the assumption that the null hypothesis is true. The p-value is the probability of observing the values, or more extreme values, if the null hypothesis is true.

Another misconception, sometimes called the **base rate fallacy**, is that under controlled \(\alpha\) and adequate power, statistically significant results correspond to true differences. This is not the case, as shown in the figure. Even with \(\alpha\)=5% and power=80%, 36% of statistically significant p-values will not report the true difference. This is because only 10% of the null hypotheses are false (base rate) and 80% power on these gives only 80 true positives.

The p-value doesn't measure the size of the effect, for which a **confidence interval** is a better approach. A drug that gives a 25% improvement may not mean much if the symptoms are innocuous, compared to another drug that gives a small improvement for a disease that leads to certain death. Context is therefore important.
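The 36% figure quoted above can be reproduced with simple arithmetic. The sketch below assumes a batch of 1,000 hypothesis tests, of which 10% correspond to real effects, matching the base rate stated in the article.

```python
# Reproducing the base rate fallacy arithmetic (assumed batch of 1000 tests)
n_tests = 1000
base_rate = 0.10    # fraction of null hypotheses that are actually false
alpha = 0.05        # level of significance
power = 0.80        # 1 - beta

real_effects = n_tests * base_rate                    # 100 real effects
true_positives = real_effects * power                 # 80 detected real effects
false_positives = (n_tests - real_effects) * alpha    # 45 false alarms

significant = true_positives + false_positives
print(f"significant results: {significant:.0f}")
print(f"share that are false alarms: {false_positives / significant:.0%}")  # ~36%
```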
## Milestones

1710

The field of **statistical testing** probably starts with John Arbuthnot, who applies it to test sex ratios at birth. Subsequently, others in the 18th and 19th centuries use it in other fields. However, modern terminology (null hypothesis, p-value, type-I or type-II errors) is formed only in the 20th century.

1900

Pearson introduces the concept of the **p-value** with the chi-squared test. He gives equations for calculating P and states that it's "the measure of the probability of a complex system of n errors occurring with a frequency as great or greater than that of the observed system."

1925

Ronald A. Fisher develops the concept of the p-value and shows how to calculate it in a wide variety of situations. He also notes that a value of 0.05 may be considered a conventional cut-off.

1933

Neyman and Pearson publish *On the problem of the most efficient tests of statistical hypotheses*. They introduce the notion of **alternative hypotheses**. They also describe both **type-I and type-II errors** (although they don't use these terms). They state, "Without hoping to know whether each separate hypothesis is true or false, we may search for rules to govern our behaviour with regard to them, in following which we insure that, in the long run of experience, we shall not be too often wrong."

1949

Johnson's textbook titled *Statistical methods in research* is perhaps the first to introduce students to Neyman-Pearson hypothesis testing at a time when most textbooks follow Fisher's significance testing. Johnson uses the terms "error of the first kind" and "error of the second kind". In time, Fisher's approach comes to be called the **P-value approach** and the Neyman-Pearson approach the **fixed-α approach**.

1993

Carver makes the following suggestions: use the term "statistically significant"; interpret results with respect to the data first and statistical significance second; and pay attention to the size of the effect.
{ "title": "Hypothesis Testing and Types of Errors", "href": "hypothesis-testing-and-types-of-errors" }
# Polygonal Modelling

## Summary

Polygonal modelling is a 3D modelling approach that uses edges, vertices and faces to form models. Modellers start with simple shapes and add details to build on them. They alter the shapes by adjusting the coordinates of one or more vertices. A polygonal model is called faceted because polygonal faces determine its shape.

Polygonal or polyhedral modelling fits best where visualization matters more than precision. It's extensively used by video game designers and animation studios. Assets in video games form whole worlds for gamers, and the features of these assets are built using polygonal modelling. Computers take less time to render polygonal models, so polygonal modelling software runs well in browsers.

For higher precision, advanced 3D models such as NURBS are suitable. However, NURBS models can't be 3D printed unless they are converted to polygons. Many industrial applications easily handle polygonal model representations.

## Discussion

### Can you describe the basic elements of polygonal modelling?

A **vertex** is the smallest component of a 3D model. Two or more edges of a polygon meet at a vertex. **Edges** define the shape of the polygons and the 3D model. They are straight lines connecting the vertices. Triangles and quadrilaterals are the polygons generally used. Some applications offer polygons with any number of edges (N-gons) to work with.

Faces of polygons combine to form polygonal **meshes**. One can **deform** meshes; that is, one may move, twist or turn meshes to create 3D objects using deformation tools in the software. The number of polygons in a mesh makes up its **polycount**.

**UV coordinates** are the horizontal (U) and vertical (V) axes of 2D space. 3D meshes are converted into 2D information so that textures can be wrapped around them.

Polygon density in a mesh is its **resolution**. Higher resolution indicates better detailing. Good 3D models contain high-resolution meshes where fine detailing matters and low-resolution meshes where detailing isn't important.

### How are polygonal meshes generated?

Polygonal meshes are generated by converting a set of spatial points into vertices, faces and edges. These components meet at shared boundaries to form physical models.

Polygonal mesh generation (aka meshing) is of two types: **manual** and **automatic**. In manual meshing, the positions of vertices are edited one by one. In automatic meshing, values are fed into the software, which automatically constructs meshes based on the specified values. The automatic method enables the rapid creation of 3D objects in games, movies and VR.

Meshing is performed at two levels. At the model's surface level, it's called **surface meshing**. Surface meshes won't have free edges or a common edge shared by more than two polygons. Meshing in the volume dimension is called **solid meshing**. The solid surfaces in solid meshing are either polyhedral or trimmed.

There are many ways to produce polygonal meshes. Forming primitives from standard shapes is one way. Meshes can also be drawn by interpolating edges or points of other objects. Converting existing solid models and stretching custom-made meshes into fresh meshes are two other options.

### What are free edges, manifold edges and non-manifold edges?

A **free edge** in a mesh is an edge that doesn't fully merge with the edge of its neighbouring element. The nodes of meshes with free edges won't be accurately connected. Such edges within the geometry will affect the overall output.
Therefore, unwanted free edges should be removed.

A **manifold edge** is an edge shared by at most two faces. When a third face shares the edge, it becomes a **non-manifold edge**. A non-manifold edge cannot be replicated in the real world, hence it should be removed while modelling. In 3D printing, non-manifold edges will produce failed models.

### How would you classify the polygonal meshing process based on grid structure?

A grid structure works on the principle of Finite Element Analysis (FEA). An FEA node can be thought of as the vertex of a polygon in polygonal modelling. An FEA element can represent an edge, a shape or a solid, depending on the dimension. Dividing the expanse of a polygonal model into small elements before computing forms a grid. Grid structure-wise, meshing is of two types:

+ **Structured meshing** displays a definite pattern in the arrangement of nodes or elements. The size of each element is nearly the same. It enables easy access to the coordinates of these elements. It's applicable to uniform grids made of rectangles, ellipses and spheres, which form regular grids.
+ **Unstructured meshing** is arbitrary and forms irregular geometric shapes. The connectivity between elements is not uniform, so unstructured meshes do not follow a definite pattern. It requires that the connectivity between elements is well-defined and properly stored. The axes of these elements are unaligned (non-orthogonal).

### How are mesh generation algorithms written for polygonal modelling?

Mesh generation algorithms are written according to the principles of the chosen mesh generation method. There are many methods of generating meshes, and the choice depends on the mesh type. A mesh generation method serves the purposes of generating nodes (geometry) and connecting nodes (topology).

Let's take the Delaunay triangulation method as an example. In it, the surface domain is discretized into non-overlapping triangles. The triangles are formed so as to avoid small sliver angles: Delaunay triangulation maximizes the minimum angle across all triangles. The circumcircle drawn about each triangle cannot contain any other node within it.

Delaunay triangulation is applied through several algorithms; the Bowyer-Watson algorithm is one of them. It's an incremental algorithm that adds one node at a time to a given triangulation. If the new point falls within the circumcircle of a triangle, the triangle is removed. Fresh triangles are then formed using the new point.
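As a small illustration of automatic meshing, the sketch below triangulates a made-up set of 2D points with SciPy's Delaunay implementation and then counts boundary (free) edges by checking how many triangles share each edge. It assumes NumPy and SciPy are installed; the point set is purely illustrative.

```python
from collections import Counter

import numpy as np
from scipy.spatial import Delaunay

# Illustrative point cloud: 20 random 2D points
points = np.random.default_rng(42).random((20, 2))
tri = Delaunay(points)                     # Delaunay triangulation of the points

print("vertices:", len(points))
print("triangles (polycount):", len(tri.simplices))
print("first triangle (vertex indices):", tri.simplices[0])

# Count how many triangles share each edge: an edge used by only one triangle
# lies on the boundary (a "free" edge); interior edges are shared by exactly two.
edge_faces = Counter()
for a, b, c in tri.simplices:
    for edge in ((a, b), (b, c), (c, a)):
        edge_faces[tuple(sorted(edge))] += 1

free_edges = [e for e, n in edge_faces.items() if n == 1]
print("boundary (free) edges:", len(free_edges))
```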
### How does one fix the polygon count for models?

Polygon count, or polycount, gives a measure of visual quality. Detailing needs a high number of polygons and gives a photorealistic effect. But a high polycount impacts efficiency. The model may take more time to load and render. When a model takes more time to download, users may run out of patience. Real-time rendering delays cause a video or animation to stop and start. So, a good polygonal model is a combination of high visual quality and low polycount.

The threshold at which a polygon count is called high is subjective. For mobile devices, anywhere between 300 and 1500 polygons is good. Desktops can comfortably accommodate 1500 to 4000 polygons without affecting performance. These polycount numbers vary depending on the CPU configuration and other hardware capabilities. Advanced rendering capabilities smoothly handle anywhere between 10k and 40k polygons. Global mobile markets are vying to produce CPUs that can render 100k to 1 million polygons for an immersive 3D experience.

A higher polycount also increases the file sizes of 3D assets, and websites have upload limits. So it's important to keep file sizes in mind while fixing the polygon count.

### What are some beginner pitfalls in polygonal modelling?

**Irregular meshes**: As beginners, we may miss triangles and create self-intersecting surfaces, or we may leave holes on mesh surfaces or fill them with backward triangles. Irregular meshes will affect the model's overall appearance. Eyeball checks and the use of mesh generation software will help us avoid mesh-related errors.

**Incorrect measurements**: These may distort the model's proportionality and ruin the output. It's best to train our eyes to compare images and estimate the difference in depths. Comparing our model with the reference piece in an image viewer tool will tell us the difference.

**Too many subdivisions early in the modelling**: This prevents us from making changes without tampering with the measurements, so we may end up creating uneven surfaces. Instead, it's better to start with fewer polygons and add to them as we build the model.

**Topology errors**: We may get the edge structure and mesh distributions wrong. We need to equip ourselves by learning how to use mesh tools. It's important to learn where to use triangles, quads and higher polygons. Duplicates are to be watched out for. Understanding the flow of edges is vital.

## Milestones

1952

Geoffrey Colin Shepherd furthers Thomas Bradwardine's 14th-century work on non-convex polygons. He extends polygon formation to the imaginary plane. This paves the way for the construction of complex polygons. In polygonal modelling, complex polygons have circuitous boundaries; a polygon with a hole inside is one example.

1972

Bruce G. Baumgart introduces a paper on the **winged edge data structure** at Stanford University. The winged edge data structure is a way of representing polyhedrons on a computer. The paper states its exclusive use in AI for computer graphics and world modelling.

1972

Newell introduces the **painter's algorithm**, which paints polygons while considering the distance of each plane from the viewer. The algorithm paints the farthest polygon from the viewer first and proceeds to the nearest.

1972

Edwin Catmull and Fredrick Parke create the **world's first 3D rendered movie**. In the movie, the animation of Edwin's left hand features precisely drawn and measured polygons.

1992

Fowler et al. present *Modelling Seashells* at ACM SIGGRAPH, Chicago. They use polygonal meshes, among other techniques, to create comprehensive computer imagery of seashells.

1998

Andreas Raab suggests a **classification of edges** of a polygonal mesh: they can be grouped as sharp, smooth, contour and triangulation edges. This solves the problem of choosing the right lines to draw.

1999

Deussen et al. successfully apply Andreas Raab's algorithm, which constructs a skeleton from a 3D polygonal model. They use it in connection with intersecting planes.
{ "title": "Polygonal Modelling", "href": "polygonal-modelling" }
# Relation Extraction

## Summary

Consider the phrase "President Clinton was in Washington today". This describes a *Located* relation between Clinton and Washington. Another example is "Steve Balmer, CEO of Microsoft, said…", which describes a *Role* relation of Steve Balmer within Microsoft.

The task of extracting semantic relations between entities in text is called **Relation Extraction (RE)**. While Named Entity Recognition (NER) is about identifying entities in text, RE is about finding the relations among the entities. Given unstructured text, NER and RE help us obtain useful structured representations. Both tasks are part of the discipline of Information Extraction (IE).

Supervised, semi-supervised, and unsupervised approaches exist to do RE. In the 2010s, neural network architectures were applied to RE. Sometimes the term **Relation Classification** is used, particularly in approaches that treat the task as a classification problem.

## Discussion

### What sort of relations are captured in relation extraction?

Here are some relations with examples:

+ *located-in*: CMU is in Pittsburgh
+ *father-of*: Manuel Blum is the father of Avrim Blum
+ *person-affiliation*: Bill Gates works at Microsoft Inc.
+ *capital-of*: Beijing is the capital of China
+ *part-of*: American Airlines, a unit of AMR Corp., immediately matched the move

In general, affiliations involve persons, organizations or artifacts. Geospatial relations involve locations. Part-of relations involve organizations or geo-political entities.

An **entity tuple** is the common way to represent entities bound in a relation. Given n entities in a relation r, the notation is \(r(e\_{1},e\_{2},...,e\_{n})\). An example use of this notation is *Located-In(CMU, Pittsburgh)*. RE mostly deals with binary relations where n=2. For n>2, the term used is **higher-order relations**. An example of a 4-ary biomedical relation is *point\_mutation(codon, 12, G, T)*, from the sentence "At codons 12, the occurrence of point mutations from G to T were observed".

### What are some common applications of relation extraction?

Since structured information is easier to use than unstructured text, relation extraction is useful in many NLP applications. RE enriches existing information. Once relations are obtained, they can be stored in databases for future queries. They can be visualized and correlated with other information in the system.

In question answering, one might ask "When was Gandhi born?" Such a factoid question can be answered if our relation database has stored the relation *Born-In(Gandhi, 1869)*.

In the biomedical domain, protein binding relations can lead to drug discovery. When relations are extracted from a sentence such as "Gene X with mutation Y leads to malignancy Z", they can help us detect cancerous genes. Another example is finding the location of a protein in an organism. This ternary relation is split into two binary relations (Protein-Organism and Protein-Location). Once these are classified, the results are merged into a ternary relation.

### Which are the main techniques for doing relation extraction?

With **supervised learning**, the model is trained on annotated text. Entities and their relations are annotated. Training involves a binary classifier that detects the presence of a relation, and a classifier to label the relation. For labelling, we could use SVMs, decision trees, Naive Bayes or MaxEnt. The two types of supervision are feature-based and kernel-based.

Since finding large annotated datasets is difficult, a **semi-supervised** approach is more practical. One approach is to do a phrasal search with wildcards. For example, `[ORG] has a hub at [LOC]` would return organizations and their hub locations. If we relax the pattern, we'll get more matches but also false positives. An alternative is to use a set of specific patterns, induced from an initial set of seed patterns and seed tuples. This approach is called **bootstrapping**. For example, given the seed tuple *hub(Ryanair, Charleroi)*, we can discover many phrasal patterns in unlabelled text. Using these patterns, we can discover more patterns and tuples. However, we have to be careful of **semantic drift**, in which one wrong tuple or pattern can lead to further errors.
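The wildcard-pattern idea can be sketched in a few lines of Python. The toy corpus and the regular expression below stand in for sentences found from seed tuples and for an induced pattern; they are illustrative only, not part of any real bootstrapping system.

```python
import re

# Toy corpus: in bootstrapping, such sentences would be retrieved using seed tuples
corpus = [
    "Ryanair has a hub at Charleroi.",
    "Delta has a hub at Atlanta.",
    "Beijing is the capital of China.",
]

# Hand-written surface pattern standing in for an induced "[ORG] has a hub at [LOC]"
pattern = re.compile(r"(?P<org>[A-Z][\w&]*) has a hub at (?P<loc>[A-Z]\w*)")

tuples = [(m.group("org"), m.group("loc"))
          for text in corpus for m in pattern.finditer(text)]
print(tuples)   # [('Ryanair', 'Charleroi'), ('Delta', 'Atlanta')]
```

In a real bootstrapping loop, the extracted tuples would be used to induce further patterns, with filtering to limit semantic drift.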
### What sort of features are useful for relation extraction?

Supervised learning uses features. The named entities themselves are useful features. This includes an entity's bag of words, head words and its entity type. It's also useful to look at words surrounding the entities, including words that are in between the two entities. Stems of these words can also be included. The distance between the entities could be useful.

The **syntactic structure** of the sentence can signal the relations. A syntax tree could be obtained via base-phrase chunking, dependency parsing or full constituent parsing. The paths in these trees can be used to train binary classifiers to detect specific syntactic constructions. The accompanying figure shows possible features for the sentence "[ORG American Airlines], a unit of AMR Corp., immediately matched the move, spokesman [PERS Tim Wagner] said."

When using syntax, expert knowledge of linguistics is needed to know which syntactic constructions correspond to which relations. However, this can be automated via machine learning.

### Could you explain kernel-based methods for supervised relation classification?

Unlike feature-based methods, kernel-based methods don't require explicit feature engineering. They can explore a large feature space in polynomial computation time. The essence of a kernel is to compute the **similarity** between two sequences. A kernel could be designed to measure the structural similarity of character sequences, word sequences, or parse trees involving the entities. In practice, a kernel is used as a similarity function in classifiers such as SVM or Voted Perceptron. We note a few kernel designs:

+ **Subsequence**: Uses a sequence of words made of the entities and their surrounding words. The word representation includes POS tag and entity type.
+ **Syntactic Tree**: A constituent parse tree is used. The Convolution Parse Tree Kernel is one way to compare the similarity of two syntactic trees.
+ **Dependency Tree**: Similarity is computed between two dependency parse trees. This could be enhanced with shallow semantic parsers. A variation is to use dependency graph paths, in which the shortest path between entities represents a relation.
+ **Composite**: Combines the above approaches. Subsequence kernels capture lexical information whereas tree kernels capture syntactic information.
### Could you explain the distant supervision approach to relation extraction?

Due to the extensive work done for the Semantic Web, we already have many knowledge bases that contain `entity-relation-entity` triplets. Examples include DBpedia (3K relations), Freebase (38K relations), YAGO, and Google Knowledge Graph (35K relations). These can be used for relation extraction without requiring annotated text.

Distant supervision is a combination of unsupervised and supervised approaches. It extracts relations without supervision, and it induces thousands of features using a probabilistic classifier. The process starts by linking named entities to those in the knowledge bases. Using relations in the knowledge base, patterns are picked up in the text. The patterns are then applied to find more relations.

Early work used DBpedia and Freebase, with Wikipedia as the text corpus. Later work utilized semi-structured data (HTML tables, Wikipedia list pages, etc.) or even web search to fill gaps in knowledge graphs.

### Could you compare the semi-supervised or unsupervised approaches of some relation extraction tools?

DIPRE's algorithm (1998) starts with seed relations, applies them to text, induces patterns, and applies the patterns to obtain more tuples. These steps are iterated. When applied to the *(author, book)* relation, patterns take the form `(longest-common-suffix of prefix strings, author, middle, book, longest-common-prefix of suffix strings)`. DIPRE is an application of the Yarowsky algorithm (1995) invented for WSD.

Like DIPRE, Snowball (2000) uses seed relations but doesn't look for exact pattern matches. Tuples are represented as vectors and grouped using similarity functions. Each term is also weighted, and the weights are adjusted with each iteration. Snowball can handle variations in tokens or punctuation.

KnowItAll (2005) starts with domain-independent extraction patterns. Relation-specific and domain-specific rules are derived from the generic patterns. The rules are applied at large scale on online text. It uses the pointwise mutual information (PMI) measure to retain the most likely patterns and relations.

Unlike earlier algorithms, TextRunner (2007) doesn't require a pre-defined set of rules. It learns relations, classes and entities on its own from a large corpus.

### How are neural networks being used to do relation extraction?

Neural networks were increasingly applied to relation extraction from the early 2010s. Early approaches used **Recursive Neural Networks** applied to syntactic parse trees. The use of **Convolutional Neural Networks (CNNs)** came next, to extract sentence-level features and the context surrounding words. A combination of these two networks has also been used.

Since CNNs failed to learn long-distance dependencies, **Recurrent Neural Networks (RNNs)** were found to be more effective in this regard. By 2017, basic RNNs gave way to gated variants called GRU and LSTM. A comparative study showed that CNNs are good at capturing local and position-invariant features whereas RNNs are better at capturing order information and long-range context dependency.

The next evolution was towards the **attention mechanism** and **pre-trained language models** such as BERT. For example, the attention mechanism can pick out the most relevant words and use CNNs or LSTMs to learn relations. Thus, we don't need explicit dependency trees. As of January 2020, BERT-based models represent the state of the art with an F1 score close to 90.
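As a rough sketch of how a BERT-style relation classifier is used at inference time (in the spirit of R-BERT, described under Milestones), the snippet below assumes the Hugging Face `transformers` library and a hypothetical fine-tuned checkpoint; the model id, entity markers and predicted label are placeholders, not real artifacts.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint id: any model fine-tuned for relation classification
# on SemEval-2010 Task 8 with entity markers would be used the same way.
MODEL = "my-org/bert-relation-semeval"   # hypothetical, not a real model id

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

# Entities are delimited with special markers before classification
text = "[E1] American Airlines [/E1], a unit of [E2] AMR Corp. [/E2], matched the move."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # one score per relation label

pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])       # e.g. a label like 'part-of'
```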
### How do we evaluate algorithms for relation extraction?

Recall, precision and F-measures are typically used to evaluate against a gold standard of human-annotated relations. These are typically used for supervised methods. For unsupervised methods, it may be sufficient to check if a relation has been captured correctly; there's no need to check if every mention of the relation has been detected. Precision here is simply the correct relations against all extracted relations, as judged by human experts. Recall is more difficult to compute. Gazetteers and web resources may be used for this purpose.

### Could you mention some resources for working with relation extraction?

Papers With Code has useful links to recent publications on relation classification. GitHub has a topic page on relation classification. Another useful resource is a curated list of papers, tutorials and datasets. The current state of the art is captured on the NLP-progress page for relation extraction.

Among the useful datasets for training or evaluation are ACE-2005 (7 major relation types) and SemEval-2010 Task 8 (19 relation types). For distant supervision, the Riedel or NYT dataset was formed by aligning Freebase relations with the New York Times corpus. There's also the Google Distant Supervision (GIDS) dataset and FewRel. TACRED is a large dataset containing 41 relation types from newswire and web text.

## Milestones

1998

At the 7th Message Understanding Conference (MUC), the task of extracting relations between entities is considered. Since this is considered part of template filling, they call it **template relations**. Relations are limited to organizations: employee\_of, product\_of, and location\_of.

Jun 2000

Agichtein and Gravano propose *Snowball*, a semi-supervised approach to generating patterns and extracting relations from a small set of seed relations. At each iteration, it evaluates for quality and keeps only the most reliable patterns and relations.

Feb 2003

Zelenko et al. obtain **shallow parse trees** from text for use in binary relation classification. They use contiguous and sparse subtree kernels to assess the similarity of two parse trees. Subsequently, this **kernel-based** approach is followed by other researchers: kernels on dependency parse trees by Culotta and Sorensen (2004); subsequence and shortest dependency path kernels by Bunescu and Mooney (2005); convolutional parse kernels by Zhang et al. (2006); and composite kernels by Choi et al. (2009).

2004

Kambhatla takes a **feature-based** supervised classifier approach to relation extraction. A MaxEnt model is used along with lexical, syntactic and semantic features. Since kernel methods are a generalization of feature-based algorithms, Zhao and Grishman (2005) extend Kambhatla's work by including more syntactic features using kernels, then use an SVM to pick out the most suitable features.

Jun 2005

Since binary classifiers have been well studied, McDonald et al. cast the problem of extracting **higher-order relations** into many binary relations. This also makes the data less sparse and eases computation. Binary relations are represented as a graph, from which cliques are extracted. They find that probabilistic cliques perform better than maximal cliques. The figure corresponds to some binary relations extracted for the sentence "John and Jane are CEOs at Inc. Corp. and Biz. Corp. respectively."

Jan 2007

Banko et al. propose **Open Information Extraction** along with an implementation that they call *TextRunner*. In an unsupervised manner, the system is able to extract relations without any human input. Each tuple is assigned a probability and indexed for efficient information retrieval. TextRunner has three components: a self-supervised learner, a single-pass extractor, and a redundancy-based assessor.

Aug 2009

Mintz et al. propose **distant supervision** to avoid the cost of producing a hand-annotated corpus.
Using entity pairs that appear in Freebase, they find all sentences in which each pair occurs in unlabelled text, extract textual features and train a relation classifier. They include both lexical and syntactic features. They note that syntactic features are useful when patterns are nearby in the dependency tree but distant in terms of words. In the early 2010s, distant supervision becomes an active area of research.

Aug 2014

Neural networks and word embeddings were first explored by Collobert et al. (2011) for a number of NLP tasks. Zeng et al. apply **word embeddings** and a **Convolutional Neural Network (CNN)** to relation classification. They treat relation classification as a multi-class classification problem. Lexical features include the entities, their surrounding tokens, and WordNet hypernyms. A CNN is used to extract sentence-level features, for which each token is represented as *word features (WF)* and *position features (PF)*.

Jul 2015

Dependency shortest paths and subtrees have been shown to be effective for relation classification. Liu et al. propose a recursive neural network to model the dependency subtrees, and a convolutional neural network to capture the most important features on the shortest path.

Oct 2015

Song et al. present *PKDE4J*, a framework for dictionary-based entity extraction and rule-based relation extraction. Primarily meant for the biomedical field, they report F-measures of 85% for entity extraction and 81% for relation extraction. The RE algorithm uses dependency parse trees, which are analyzed to extract heuristic rules. They come up with 17 rules that can be applied to discern relations. Examples of rules include verb in dependency path, nominalization, negation, active/passive voice, entity order, etc.

Aug 2016

Miwa and Bansal propose to **jointly model the tasks of NER and RE**. A BiLSTM is used on word sequences to obtain the named entities. Another BiLSTM is used on dependency tree structures to obtain the relations. They also find that shortest-path dependency trees perform better than subtrees or full trees.

May 2019

Wu and He apply the **BERT pre-trained language model** to relation extraction. They call their model *R-BERT*. Named entities are identified beforehand and are delimited with special tokens. Since an entity can span multiple tokens, their start/end hidden token representations are averaged. The output is a softmax layer with cross-entropy as the loss function. On SemEval-2010 Task 8, R-BERT achieves a state-of-the-art Macro-F1 score of 89.25. Other BERT-based models learn NER and RE jointly, or rely on topological features of an entity pair graph.
{ "title": "Relation Extraction", "href": "relation-extraction" }
# React Native

## Summary

Traditionally, *native mobile apps* have been developed in specific languages that call platform-specific APIs: for example, Objective-C and Swift for iOS app development; Java and Kotlin for Android app development. This means that developers who wish to release their app on multiple platforms have to implement it in different languages.

To avoid this duplication, *hybrid apps* came along. The app was implemented using web technologies, but instead of running inside a web browser, it was wrapped and distributed as an app. However, hybrid apps had performance limitations.

React Native enables web developers to write code once, deploy it on any mobile platform, and also use the platform's native APIs. **React Native** is a platform to build native mobile apps using JavaScript and React.

## Discussion

### As a developer, why should I adopt React Native?

Since React Native allows developers to maintain a single codebase even when targeting multiple mobile platforms, development work is considerably reduced. Code can be reused across platforms.

If you're a web developer new to mobile app development, there's no need to learn a new language. You can reuse your current web programming skills and apply them to the mobile app world. Your knowledge of HTML, CSS and JS will be useful, although you'll be applying it in a different form in React Native.

React Native uses ReactJS, a JS library invented and later open sourced by Facebook. ReactJS itself has been gaining adoption because it's easy to learn for a JS programmer. It's performant due to the use of a *virtual DOM*. The recommended syntax is ES6 and JSX. ES6 brings simplicity and readability to JS code. JSX is a combination of XML and JS used to build reusable component-based UI.

### How is React Native different from ReactJS?

React Native is a framework whereas ReactJS is a library. In ReactJS projects, we typically use a bundler such as *Webpack* to bundle the necessary JS files for use in a browser. In React Native, we need only a single command to start a new project. All basic modules required for the project will be installed. We also need to install Android Studio for Android development and Xcode for iOS development.

In ReactJS, we are allowed to use HTML tags. In React Native, we create UI components using React Native components specified with JSX syntax. These components are mapped to native UI components. Thus, we can't reuse any ReactJS libraries that render HTML, SVG or Canvas.

In ReactJS, styling is done using CSS, like in any web app. In React Native, styling is done using JS objects. For component layout, React Native's *Flexbox* can be used. CSS animations are also replaced with the *Animated* API.

### How does React Native work under the hood?

Between the native and JavaScript worlds is a bridge (implemented in C++) through which data flows. Native code can call JS code and vice versa. To pass data between the two, data is serialized. For example, a UI event is captured as a native event but the processing for it is done in JavaScript. The result is serialized and sent over the bridge to the native world. The native world deserializes the response, does any necessary processing and updates the UI.

### What are some useful developer features of React Native?

React Native offers the following:

+ **Hot Reloading**: Small changes to your app are immediately visible during development. If business logic is changed, Live Reload can be used instead.
+ **Debugging**: Chrome Dev Tools can be used for debugging your app. In fact, your debugging skills from the web world can be applied here.
+ **Publishing**: Publishing your app is easy using CodePush, now part of Visual Studio App Center.
+ **Device Access**: React Native gets access to the camera, sensors, contacts, geolocation, etc.
+ **Declarative**: UI components are written in a declarative manner. Component-based architecture also means that one developer need not worry about breaking another's work.
+ **Animations**: For performance, these are serialized and sent to the native driver. They run independently of the JS event loop.
+ **Native Code**: Native code and React Native code can coexist. This is important because React Native APIs may not support all native functionality.

### How does React Native compare against other platforms in terms of performance?

Since React Native is regularly being improved with each release, we can expect better performance than what we state below.

A comparison of React Native against native iOS programming in Swift showed comparable CPU usage for list views. When resizing maps, Swift was better by 10%, but React Native used far less memory there. For GPU usage, Swift outperforms marginally except for list views.

React Native apps can leak memory. Therefore, `FlatList`, `SectionList`, or `VirtualizedList` should be used rather than `ListView`. The communication between native and JS runtimes over the bridge is via message queues. This is also a performance bottleneck. For better performance, React Navigation is recommended over the Navigator component.

When compared against the Ionic platform, React Native outperforms Ionic across metrics such as CPU usage, memory usage, power consumption and list scrolling.

### Are there real-world examples of who's using React Native?

Facebook and Instagram use React Native. Other companies or products using it include Bloomberg, Pinterest, Skype, Tesla, Uber, Walmart, Wix, Discord, Gyroscope, SoundCloud Pulse, Tencent QQ, Vogue, and many more.

Walmart moved to React Native because it was hard to find skilled developers for native development. They used an incremental approach, migrating parts of their code to React Native. They were able to reuse 95% of their code between iOS and Android. They could reuse business logic with their web apps as well. They could deliver quick updates from their own servers rather than through an app store.

Bloomberg developed their app in half the time using React Native. They were also able to push updates, do A/B testing and iterate quickly.

Airbnb engineers write code for the web, iOS and Android. With React Native, they stated,

> It's now feasible for us to have the same engineer skilled in JavaScript and React write the feature for all three platforms.

However, in June 2018, Airbnb decided to move away from React Native and back to native development due to technical and organizational challenges.

### What backend should I use for my React Native app?

React Native provides UI components. However, the React Native ecosystem is vast. There are frameworks/libraries for AR/VR, various editors and IDEs that support React Native, local databases (client-side storage), performance monitoring tools, CI/CD tools, authentication libraries, deep linking libraries, UI frameworks, and more.

Specifically for backends, **Mobile Backend as a Service (MBaaS)** is now available. Some options include RN Firebase, Baqend, RN Back, Feathers and Graphcool.
These services make it easy for developers to build their React Native apps.

The more traditional approach is to build and manage your own backend. Some developers choose Node.js or Express.js because these are based on JavaScript, which they're already using to build the React Native UI. This can be paired with a database such as Firebase, MySQL, or MongoDB. Another option is to use Django with GraphQL. Even WordPress can be used, especially if the app is content driven. These are merely some examples. Developers can use any backend that suits their expertise and app requirements.

### Could you point me to some useful React Native developer resources?

Here are some useful resources:

+ Expo is a free and open source toolchain for your React Native projects. Expo also has a collection of apps developed and shared by others. The easiest way to create a new app is to use the create-react-native-app codebase.
+ If you wish to learn by studying app code written by others, React Active News maintains a curated list of open source React Native apps.
+ React.parts is a place to find reusable components for React Native.
+ Visual Studio App Center is a useful tool to build and release your app.
+ Use React Navigation for routing and navigation in React Native apps.
+ React Native provides only the UI, but there's a great selection of tools to complement React Native.

## Milestones

2011

At Facebook, Jordan Walke and his team release ReactJS, a JavaScript library that brings a new way of rendering pages with more responsive user interactions. A web page can be built from a hierarchy of UI components.

2013

React Native starts as an internal hackathon project within Facebook. Meanwhile, ReactJS is open sourced.

Mar 2015

Facebook open sources React Native for iOS on GitHub. The release for Android comes in September.

2016

Microsoft and Samsung commit to adopting React Native for Windows and Tizen.

2017

React Native sees a number of improvements over the year: better navigation, smoother list rendering, more performant animations, and more.
{ "title": "React Native", "href": "react-native" }
# Web of Things

## Summary

Web of Things (WoT) is a set of building blocks that seeks to make the Internet of Things (IoT) more interoperable and usable. It simplifies application development (including cross-domain applications) by adopting the web paradigm. Web developers have a low barrier to entry when programming for the IoT.

The key concepts of WoT include Thing Description, Thing Model, Interaction Model, Hypermedia Controls, Protocol Bindings, Profiles, Discovery and Binding Templates. IoT devices (aka Things) are treated as web resources, which makes WoT a Resource-Oriented Architecture (ROA).

WoT is standardized by the W3C. There are developer tools and implementations. As of December 2023, widespread industry adoption of WoT is yet to happen. Highly resource-constrained devices that can't run a web stack will not be able to adopt WoT.

## Discussion

### Why do we need the Web of Things (WoT)?

The IoT ecosystem is fragmented. Applications or devices from different vendors don't talk to one another due to differing data models. Consumers need to use multiple mobile apps to interact with their IoT devices. While IoT has managed to network different devices via various connectivity protocols (Zigbee, IEEE 802.15.4, NB-IoT, Thread, etc.), there's a disconnect at the application layer.

For developers, this disconnect translates to more effort integrating new devices and services. Each application exposes its own APIs. This results in tight coupling between clients and service providers. It's more effort maintaining and evolving these services.

WoT brings interoperability at the application layer with a unifying data model. It reuses the web paradigm. IoT devices can be treated as web resources. Just as documents on the web are interlinked and easily navigated, Things can be linked, discovered, queried and acted upon. Mature web standards such as REST, HTTP, JSON, AJAX and URI can be used to achieve this. This means that web developers can become IoT developers. They can create reusable IoT building blocks rather than custom proprietary implementations that work for limited use cases.

### What integration patterns does WoT cover?

An IoT device can directly expose a WoT API. This is the simplest integration pattern. It's also challenging from a security perspective or if the device is behind a firewall. For more resource-constrained devices running LPWAN protocols, direct access is difficult. They would connect to the cloud via a gateway, which exposes the WoT API. When devices spread over a large area need to cooperate, they would connect to the cloud in different ways and the cloud exposes the WoT API.

Let's consider specific use cases. A remote controller connects directly to an electrical appliance in a trusted environment. Similarly, a sensor acting as a control agent connects to an electrical appliance. A remote control outside a trusted environment connects to a gateway or an edge device, which then connects to an electrical appliance. Connected devices are mapped to digital twins that can be accessed via a client device. A device can be controlled via its digital twin in the cloud. These various integration patterns can be combined through system integration.

### What's the architecture of WoT?

WoT standardizes a layered architecture of four layers (lower to higher): Access, Find, Share and Compose. The protocols or techniques used at each of these layers are already widely used on the web.
These four layers can't be mapped to the OSI model, nor are they strictly defined at the interfaces. They're really a collection of services to ease the development of IoT solutions.

At the access layer, solution architects have to think about resource, representation and interface designs. They should also define how resources are interlinked. At the find layer, web clients can discover root URLs, and the syntax and semantics of interacting with Things. At the compose layer, tools such as Node-RED and IFTTT can help create mashups.

### What are Thing Description (TD) and Thing Model (TM) in WoT?

A TD is something like the business card of the Thing. It reveals everything about the Thing: the protocol, data encoding, data structure, and security mechanism used by the Thing. The TD itself is in JSON-LD format and is exposed by the Thing, or can be discovered by consumers from a Thing Description Directory (TDD).

In object-oriented programming, objects are instantiated from classes. Likewise, a TD can be seen as an instantiation of a TM. A TM is a logical description of a Thing's interface and interactions. However, it doesn't contain instance-specific information such as an IP address, serial number or GPS location. A TM can include security details if those are applicable to all instances of that TM.

Both TD and TM are represented and serialized in JSON-LD format. Whereas a TD can be validated against its TM, a TM can't be validated.

### What's the WoT interaction model?

Apart from links, a Thing may expose three types of interaction affordances:

+ **Properties**: A property is a state of the Thing. State may be read-only or read-write. Properties can be made observable. Sensor values, stateful actuators, configuration, status and computation results are examples.
+ **Actions**: An action invokes a function of the Thing. An action can be used to update one or more properties, including read-only ones.
+ **Events**: An event is used to asynchronously send data from the Thing to a consumer. The focus is on state transitions rather than the state itself. Examples include alarms or samples of a time series.

Like documents on the web, WoT also uses links and forms. These are called **hypermedia controls**. Links are used to discover and interlink Things. Forms enable more complex operations than what's possible by simply dereferencing a URI.

### What are protocol bindings in WoT?

WoT's abstractions make it protocol agnostic. It doesn't matter if a Thing uses MQTT, CoAP, Modbus or any other connectivity protocol. WoT's interaction model unifies all these so that applications talk in terms of properties, actions and events. But abstractions have to be translated into protocol actions. This is provided by **protocol bindings**. For a door handle, for example, the protocol binding tells how to open or close the door at the level of the knob or lever.

The W3C has published a non-normative document called **WoT Binding Templates**. This gives blueprints on how to write TDs for different IoT platforms or standards. This includes protocol-specific metadata, payload formats, and usage in specific IoT platforms. The consumer of a TD would implement the template, that is, the protocol stack, media type encoder/decoder and platform stack.
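To make the TD and protocol binding ideas concrete, here is a minimal sketch of a consumer reading a property affordance. The Thing Description, its `href` and the device behind it are hypothetical; a real TD would be fetched from the Thing or from a Thing Description Directory, and the snippet assumes the `requests` package is available.

```python
import json
import requests  # assumed available; only needed for the actual HTTP read

# Hypothetical Thing Description (JSON-LD) for a temperature sensor
TD_DOC = """
{
  "@context": "https://www.w3.org/2022/wot/td/v1.1",
  "title": "ExampleSensor",
  "securityDefinitions": {"nosec_sc": {"scheme": "nosec"}},
  "security": "nosec_sc",
  "properties": {
    "temperature": {
      "type": "number",
      "readOnly": true,
      "forms": [{"href": "http://sensor.example/properties/temperature"}]
    }
  }
}
"""

td = json.loads(TD_DOC)

# The form is the protocol binding: it tells the consumer how to read the property.
form = td["properties"]["temperature"]["forms"][0]
print("Read", td["title"], "temperature via", form["href"])

# With a real device on the network, reading the property is just an HTTP GET:
# value = requests.get(form["href"], timeout=5).json()
```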
Among the TD directories are TinyIoT Thing Directory and WoTHive. Major WoT deployments during 2012-2021 have been documented. Krellian Ltd. offers WebThings Gateway and WebThings Framework. WebThings was initially developed at Mozilla. However, its API differs from W3C specifications in many ways. The sayWoT! platform from evosoft (a Siemens subsidiary) gives web and cloud developers an easy way to develop IoT solutions. One study compared many WoT platforms including WoT-SDN, HomeWeb, KNX-WoT, EXIP, WTIF, SOCRADES, WoTKit, µWoTO, and more. WoT is being leveraged to create digital twins. WoTwins and Eclipse Ditto with WoT integration are examples of this. Ortiz et al. used WoT TD effectively for real-time IoT data processing in a smart ports use case. WoTemu is an emulation framework for WoT edge architecture. ### What standards cover WoT? The W3C is standardizing WoT. The following are the main normative specifications: + WoT Architecture 1.1 (Recommendation) + WoT Thing Description 1.1 (Recommendation) + WoT Discovery (Recommendation) + WoT Profile (Working Draft). Informative specifications include WoT Scripting API, WoT Binding Templates, WoT Security and Privacy Guidelines, and WoT Use Cases and Requirements. Beginners can start at the W3C WoT webpage for the latest updates, community groups, documentation and tooling. At the IETF, there's a draft titled *Guidance on RESTful Design for Internet of Things Systems*. This is relevant to WoT. ### What are some limitations of WoT? WoT depends on the web stack. Hence, it's not suited for very low-power devices or mesh deployments. **Matter** protocol, known earlier as Project CHIP, is an alternative to WoT. This is promoted by the Connectivity Standards Alliance (CSA), formerly called Zigbee Alliance. Matter is based on Thread, IPv6 and Dotdot. While Matter is not web friendly like WoT, it appears to have better industry traction. However, Matter devices that expose WoT TDs can talk to WoT devices. There's a claim that WoT hasn't adequately addressed security, privacy and data sharing issues. This is especially important when IoT devices are directly exposed to the web. Devices are energy inefficient since they're always on. They're vulnerable to DoS attacks. WoT alone can't solve complex problems such as optimizing workflows across many IoT devices or applications. Hypermedea and EnvGuard are two approaches to solving this. Larian et al. compared many WoT platforms. They noted that current IoT middleware and WoT resource discovery need to be improved. Legacy systems would require custom code to interface to the WoT architecture. ## Milestones Nov 2007 Wilde uses the term "Web of Things" in a paper titled *Putting Things to REST*. He makes the case for treating a Thing (such as a sensor) as a web resource. It could then be accessed via RESTful calls rather than the more restrictive SOAP/WSDL API calls. Web concepts of URI, HTTP, HTML, XML and loose coupling can be applied effectively towards Things. 2011 Guinard publishes his Doctor of Science dissertation in the field of Web of Things. In 2016, he co-authors (with Trifa) a book titled *Building the Web of Things*. Guinard sees WoT as > A refinement of the Internet of Things (IoT) by integrating smart things not only into the Internet (the network), but into the Web (the application layer). Jul 2013 **Web of Things Community Group** is created. Subsequently in 2014, a workshop is held (June) and an Interest Group is formed (November).
Dec 2016 Following the first in-person meeting and a WoT Plugfest in 2015, the **W3C WoT Working Group** is formed. Its aim is to produce two normative specifications (Architecture, Thing Description) and two informative specifications (Scripting API, Binding Templates). Jun 2018 From the Eclipse Foundation, the first commit on GitHub is made for the **Eclipse Thingweb** project. The project aims to provide Node.js components and tools for developers to build IoT systems that conform to W3C WoT standards. The project releases v0.5.0 in October. Apr 2020 W3C publishes WoT Architecture and WoT Thing Description as separate **W3C Recommendation** documents. Jul 2022 Tzavaras et al. propose using **OpenAPI** descriptions and ontologies to bring Things closer to the world of the Semantic Web. Thing Descriptions can be created in OpenAPI while also conforming to W3C WoT architecture. They argue that OpenAPI is already a mature standard. It provides a uniform way to interact with web services and Things. Nov 2022 Markus Reigl at Siemens comments that WoT will do for IoT what HTML did for the WWW in the 1990s. TD is not a mere concept. It leads to executable software code. He predicts IoT standardization will gain momentum. Dec 2023 W3C publishes WoT Architecture 1.1 and WoT Thing Description 1.1 as W3C Recommendation documents. In addition, WoT Discovery is also published as a W3C Recommendation.
{ "title": "Web of Things", "href": "web-of-things" }
# TensorFlow ## Summary TensorFlow is an open source software library for numerical computation using **data flow graphs**. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) that flow between them. This dataflow paradigm enables parallelism, distributed execution, optimal compilation and portability. The typical use of TensorFlow is for Machine Learning (ML), particularly Deep Learning (DL) that uses large scale multi-layered neural networks. More specifically, it's best for classification, perception, understanding, discovery, prediction and creation. TensorFlow was originally developed by researchers and engineers working on the Google Brain team within Google's Machine Intelligence Research organization for ML/DL research. The system is general enough to be applicable in a wide variety of other domains as well. ## Discussion ### For which use cases is TensorFlow best suited? TensorFlow can be used in any domain where ML/DL can be employed. It can also be used for other ML techniques such as reinforcement learning and logistic regression. On mobile devices, applications include speech recognition, image recognition, object localization, gesture recognition, optical character recognition, translation, text classification, voice synthesis, and more. Some of the areas are: + **Voice/Speech Recognition**: For voice-based interfaces as popularized by Apple Siri, Amazon Alexa or Microsoft Cortana. For sentiment analysis in CRM. For flaw detection (noise analysis) in industrial systems. + **Text-Based Applications**: For sentiment analysis (CRM, Social Media), threat detection (Social Media, Government) and fraud detection (Insurance, Finance). For machine translation such as with Google Translate. For text summarization using sequence-to-sequence learning. For language detection. For automated email replies such as with Google SmartReply. + **Image Recognition**: For face recognition, image search, machine vision and photo clustering. For object classification and identification within larger images. For cancer detection in medical applications. + **Time-Series Analysis**: For forecasting. For customer recommendations. For risk detection, predictive analytics and resource planning. + **Video Detection**: For motion detection in gaming and security systems. For large-scale video understanding. ### Could you name some applications where TensorFlow is being used? TensorFlow is being used by Google in the following areas: + RankBrain: Google search engine. + SmartReply: Deep LSTM model to automatically generate email responses. + Massively Multitask Networks for Drug Discovery: A deep neural network model for identifying promising drug candidates. + On-Device Computer Vision for OCR - On-device computer vision model to do optical character recognition to enable real-time translation. + Retinal imaging - Early detection of diabetic retinopathy using a deep neural network of 26 layers. + SyntaxNet - Built for Natural Language Understanding (NLU), this is based on TensorFlow and open sourced by Google in 2016. Outside Google, we mention some known real-world examples. Mozilla uses TensorFlow for speech recognition. UK supermarket Ocado uses it for route planning for its robots, demand forecasting, and product recommendations. A Japanese farmer has used it to classify cucumbers based on shape, length and level of distortion. As an experiment, Intel used TensorFlow on traffic videos for pedestrian detection.
Further examples were noted at the TensorFlow Developer Summit, 2018. ### Which platforms and languages support TensorFlow? TensorFlow is available on 64-bit Linux, macOS, Windows and also on mobile computing platforms like Android and iOS. Google has announced a software stack specifically for Android development called TensorFlow Lite. TensorFlow has official APIs available in the following languages: Python, JavaScript, C++, Java, Go, Swift. The Python API is recommended. Bindings in other languages are available from the community: C#, Haskell, Julia, Ruby, Rust, Scala. There's also a C++ API reference for TensorFlow Serving. R's `tensorflow` package provides access to the complete TensorFlow API from within R. Nvidia's **TensorRT**, a Programmable Inference Accelerator, allows you to optimize your models for inference by lowering precision and thereby reducing latency. ### How is TensorFlow different from other ML/DL platforms? TensorFlow is relatively painless to set up. With its growing community adoption, it offers a healthy ecosystem of updates, tutorials and example code. It can run on a variety of hardware. It's cross platform. It has APIs or bindings in many popular programming languages. It supports GPU acceleration. Through TensorBoard, you get an intuitive view of your computation pipeline. Keras, a DL library, can run on TensorFlow. However, TensorFlow has been criticized for being more complex and slower than alternative frameworks. Created in 2007, **Theano** is one of the first DL frameworks but it's been perceived as too low-level. Support for Theano is also ending. Written in Lua, **Torch** is meant for GPUs. Its Python port released by Facebook, called **PyTorch**, is popular for analyzing unstructured data. It's developer friendly and memory efficient. **Caffe2** does well for modeling convolutional neural networks. **Apache MXNet**, along with its simplified DL interface called **Gluon**, is supported by Amazon and Microsoft. Microsoft also has **Microsoft Cognitive Toolkit (CNTK)** that can handle large datasets. For Java and Scala programmers, there's **Deeplearning4j**. ### Which are the tools closely related to TensorFlow? The following are closely associated with or variants of TensorFlow: + **TensorFlow Lite**: Enables low-latency inferences on mobile and embedded devices. + **TensorFlow Mobile**: To use TensorFlow from within iOS or Android mobile apps, where TensorFlow Lite cannot be used. + **TensorFlow Serving**: A high performance, open source serving system for machine learning models, designed for production environments and optimized for TensorFlow. + **TensorLayer**: Provides popular DL and RL modules that can be easily customized and assembled for tackling real-world machine learning problems. + **TensorFlow Hub**: A library for the publication, discovery, and consumption of reusable parts of machine learning models. + **TensorFlow Model Analysis**: A library for evaluating TensorFlow models. + **TensorFlow Debugger**: Allows us to view the internal structure and states of running TensorFlow graphs during training and inference. + **TensorFlow Playground**: A browser-based interface for beginners to tinker with neural networks. Written in TypeScript and D3.js. Doesn't actually use TensorFlow. + **TensorFlow.js**: Build and train models entirely in the browser or Node.js runtime. + **TensorBoard**: A suite of visualization tools that helps to understand, debug, and optimize TensorFlow programs.
+ **TensorFlow Transform**: A library for preprocessing data with TensorFlow. ### What's the architecture of TensorFlow? TensorFlow can be deployed across platforms, details of which are abstracted away from higher layers. The core itself is implemented in C++ and exposes its features via APIs in many languages, with Python being the most recommended. Above these language APIs is the **Layers** API that offers commonly used layers in deep learning models. To read data, the **Datasets** API is the recommended way; it creates input pipelines. With **Estimators**, we can create custom models or bring in models pre-made for common ML tasks. **XLA (Accelerated Linear Algebra)** is a domain-specific compiler for linear algebra that optimizes TensorFlow computations. It offers improvements in speed, memory usage, and portability on server and mobile platforms. ### Could you explain how TensorFlow's data graph works? TensorFlow uses a **dataflow graph**, which is a common programming model for parallel computing. Graph nodes represent **operations** and edges represent data consumed or produced by the nodes. Edges are called **tensors** that carry data. In the example figure, we show five graph nodes: `a` and `b` are placeholders to accept inputs; `c`, `d` and `e` are simple arithmetic operations. In TensorFlow 1.x, when a graph is created, tensors don't contain the results of operations. The graph is evaluated through **sessions**, which encapsulate the TensorFlow runtime. However, with **eager execution**, operations are evaluated immediately instead of building a graph for later execution. This is useful for debugging and iterating quickly on small models or data (see the short code sketch at the end of this article). For ingesting data into the graph, **placeholders** can be used for the simplest cases but otherwise, **datasets** should be preferred. To train models, **layers** are used to modify values in the graph. To simplify usage, a high-level API called **estimators** should be used. They encapsulate training, evaluation, prediction and export for serving. Estimators themselves are built on layers and build the graph for you. ### How is TensorFlow 2.0 different from TensorFlow 1.x? It makes sense to write any new code in TensorFlow 2.0. Existing 1.x code can be migrated to 2.0. The recommended path is to move to TensorFlow 1.14 and then to 2.0. The compatibility module `tf.compat` should help. Here are the key changes in TensorFlow 2.0: + **API Cleanup**: Many APIs are removed or moved. For example, the `absl-py` package replaces `tf.app`, `tf.flags`, and `tf.logging`. The main namespace `tf.*` is cleaned up by moving some items into subpackages such as `tf.math`. Examples of new modules are `tf.summary`, `tf.keras.metrics`, and `tf.keras.optimizers`. + **Eager Execution**: As in regular Python code, eager execution is the default behaviour. Code executes in order, making `tf.control_dependencies()` redundant. + **No More Globals**: We need to keep track of variables. An untracked `tf.Variable` will get garbage collected. + **Functions, Not Sessions**: Functions are more familiar to developers. Although `session.run()` is gone, for efficiency and JIT compilation, the `tf.function()` decorator can be used. This automatically invokes *AutoGraph* to convert Python constructs into TensorFlow graph equivalents. Functions can be shared and reused. ## Milestones 2011 Google Brain invents **DistBelief**, a framework to train large models for machine learning. DistBelief can make use of computing clusters of thousands of machines for accelerated training.
The framework manages details of parallelism (multithreading, message passing), synchronization and communication. Compared to MapReduce, DistBelief is better at deep network training. Compared to GraphLab, DistBelief is better at structured graphs. Nov 2015 Under Apache 2.0 licensing, Google open sources TensorFlow, which is Google Brain's second-generation machine learning system. While other open source ML frameworks exist (Caffe, Theano, Torch), Google's competence in ML is supposedly 5-7 years ahead of the rest. However, Google doesn't open source algorithms that run on TensorFlow, nor its advanced hardware infrastructure. Apr 2016 Version 0.8 of TensorFlow is released. It comes with distributed training support. Powered by gRPC, models can be trained on hundreds of machines in parallel. For example, the Inception image classification network was trained using 100 GPUs with an overall speedup of 56x compared to a single GPU. More generally, the system can map the dataflow graph onto heterogeneous devices (multi-core CPUs, general-purpose GPUs, mobile processors) in the available processes. May 2016 Google announces that it's been using **Tensor Processing Unit (TPU)**, a custom ASIC built specifically for machine learning and tailored for TensorFlow. Jun 2016 TensorFlow v0.9 is released with support for iOS and Raspberry Pi. Android support has been around from the beginning. Feb 2017 Version 1.0 of TensorFlow is released. The API is in Python but there are also experimental APIs in Java and Go. Nov 2017 Google releases a preview of **TensorFlow Lite** for mobile and embedded devices. This enables low-latency inferences for on-device ML models. In the future, this should be preferred over **TensorFlow Mobile**. With TensorFlow 1.4, we can build models using the high-level **Keras** API. Keras, which was previously in `tf.contrib.keras`, is now the core package `tf.keras`. Sep 2019 TensorFlow 2.0 is released following an alpha release in June. It improves workflows for both production and experimentation. It promises better performance on GPU acceleration.
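To make the dataflow discussion above concrete, here's a rough sketch of the five-node example (inputs `a` and `b`; arithmetic ops `c`, `d`, `e`). It's written with TensorFlow.js, one of the related tools listed earlier, since this collection's code snippets are JavaScript; the Python API is analogous. The specific arithmetic operations are an assumption, as the original figure doesn't spell them out. Because TensorFlow 2.x and TensorFlow.js execute eagerly, no session is needed and results are available immediately.

```javascript
// Sketch of a small dataflow: a, b are inputs; c, d, e are ops.
// Requires the @tensorflow/tfjs package.
const tf = require('@tensorflow/tfjs');

const a = tf.tensor1d([1, 2, 3]);   // input tensor (a placeholder in TF 1.x)
const b = tf.tensor1d([4, 5, 6]);   // input tensor
const c = tf.add(a, b);             // c = a + b
const d = tf.sub(b, a);             // d = b - a (arbitrary choice for illustration)
const e = tf.mul(c, d);             // e = c * d

e.print();                          // evaluated eagerly, prints [15, 21, 27]
```

In TensorFlow 1.x the same computation would only produce values once the graph was run inside a session; with eager execution each op returns its result as soon as it's called.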
{ "title": "TensorFlow", "href": "tensorflow" }
# Wi-Fi Calling ## Summary Wi-Fi Calling is a technology that allows users to make or receive voice calls via a local Wi-Fi hotspot rather than via their mobile network operator's cellular radio connection. Voice calls are thus carried over the Internet, implying that Wi-Fi Calling relies on VoIP. However, unlike other VoIP services such as Skype or Viber, Wi-Fi Calling gives operators more control. Wi-Fi Calling is possible only if the operator supports it, the user's phone has the feature and the user has enabled it. Once enabled, whether a voice call uses the cellular radio link or Wi-Fi link is almost transparent to the user. With cellular networks going all IP and offering VoLTE, Wi-Fi Calling has become practical and necessary in a competitive market. Wi-Fi Calling is also called *Voice over Wi-Fi (VoWi-Fi)*. ## Discussion ### In what scenarios can Wi-Fi Calling be useful to have? In places where cellular coverage is poor, such as in rural residences, inside concrete buildings, basements, or underground train stations, users will not be able to make or receive voice calls. In these scenarios, the presence of a local Wi-Fi network can serve as the "last-mile" connectivity to the user. Wi-Fi can therefore complement the cellular network in places where the latter's coverage is poor. For example, a user could be on an active voice call via the cellular network and suddenly enter a building with poor coverage. Without Wi-Fi Calling, the call might get dropped. With Wi-Fi Calling, the call can be seamlessly handed over to the Wi-Fi network without even the user noticing it. Astute users may notice that their call is on Wi-Fi since smartphones may indicate this via an icon. More importantly, user intervention is not required to switch between cellular and Wi-Fi. Such seamless handover has become possible because the cellular network uses IP and packet switching: VoWi-Fi can be handed off to VoLTE, and vice versa. ### Isn't Wi-Fi Calling the same as Skype, Viber or WhatsApp voice calls? Many smartphone apps allow voice (and even video) calls over the Internet. They are based on VoIP technology. We normally call them over-the-top (OTT) services since they merely use the phone's data connection and operators bill for data usage and not for the service itself. However, many of these systems require both parties to have the same app installed. Even when this constraint is removed, the service is controlled by the app provider. Wi-Fi Calling gives cellular operators greater control. Driven by competition from OTT services, Wi-Fi Calling gives operators an opportunity to regain market share for voice calls. Voice packets are carried securely over IP to the operator's core network, thus allowing the operator to reuse many resources and procedures already in place for VoIP calls. Likewise, messages and video–*Video over LTE (ViLTE)*–can also be carried over Wi-Fi. From an architectural perspective, Wi-Fi Calling is served by the operator's IP Multimedia Subsystem (IMS), whereas Skype calls are routed out of the operator's network into the Internet. ### Isn't Wi-Fi Calling the same as Wi-Fi Offload? Not exactly. Wi-Fi Calling can be seen as a form of offload but the two have different motivations. Wi-Fi Offload came about to ease network congestion and improve QoS for users in high-density areas. The offload is transparent for users whose devices are authenticated via EAP-SIM/AKA. Wi-Fi Calling is in response to OTT services stealing revenue from mobile operators.
Even when VoLTE was deployed by operators, voice calls couldn't be made over Wi-Fi, and OTT services were what users turned to when they had access to Wi-Fi. Wi-Fi Calling aims to overcome this problem. ### What are the possible benefits of Wi-Fi Calling? For subscribers, benefits include seamless connectivity and mobility between cellular and Wi-Fi. The selection is automatic and transparent to users. Data is protected using IPsec from mobile to core network, along with traditional SIM-based authentication. Users can potentially lower their monthly bills through service bundles and reduced roaming charges. Sometimes calling home from another country could be free depending on the subscribed plan and operator. Moreover, the user's phone will have a single call log (likewise for the message log). The default dialler can be used along with all saved contacts. Those receiving the call will see the caller's usual phone number. These are not possible with a third-party installed app. For operators, Wi-Fi complements cellular coverage and capacity. T-Mobile was one of the early adopters because it had poor indoor coverage. Network performance is optimized by allowing bandwidth-intensive traffic to be offloaded to Wi-Fi when so required. All their IMS-based services can now be extended to Wi-Fi access rather than losing out to OTT app/service providers. ### How does the network architecture change for Wi-Fi Calling? Two network functions are involved: + **Evolved Packet Data Gateway (ePDG)**: Serves an untrusted Wi-Fi network. An IPsec tunnel protects data between mobile and ePDG, from where it goes to the Packet Gateway (PGW). The mobile needs an IPsec client. No changes are needed for the access point. + **Trusted WLAN Access Gateway (TWAG)**: Serves a trusted Wi-Fi network, which is typically under the operator's control. In this case, data between mobile and TWAG is encrypted at radio access and IPsec is not used. From TWAG, data goes to PGW. No changes are needed for the mobile but the Wi-Fi access point needs to be updated. If the network is not an Evolved Packet Core (EPC), then Tunnel Termination Gateway (TTG) is used instead of ePDG; Wireless Access Gateway (WAG) is used instead of TWAG; GGSN is used instead of PGW. The untrusted mode is often used for Wi-Fi Calling, since public hotspots can be used without updating the access point. It's the operator who decides if a non-3GPP access can be considered trusted. ### How is an end-user device authenticated for Voice over Wi-Fi service? Within the network, the *3GPP AAA Server* is used to authenticate end devices. Authentication is based on the SIM and the usual subscriber information located in the Home Subscriber Server (HSS). The 3GPP AAA Server does not maintain a separate database and relies on the HSS. Vendors who sell AAA servers usually provide the ability to authenticate devices that don't have a SIM. For legacy networks, they can interface with HLR rather than HSS. They support AAA protocols such as RADIUS and Diameter. They support various EAP methods including TLS, PEAP and CHAP. ### What are the 3GPP standards covering Wi-Fi Calling? Documents that specify "non-3GPP access" are applicable to Wi-Fi Calling.
The following are some relevant documents (non-exhaustive list): + TR 22.814: Location services + TR 22.912: Study into network selection requirements + TS 23.402: Architectural enhancements + TS 24.234: 3GPP-WLAN interworking: WLAN UE to network protocols, Stage 3 + TS 24.302: Access to EPC, Stage 3 + TS 29.273: 3GPP EPS AAA interfaces + TS 33.402: System Architecture Evolution: security aspects + TR 33.822: Security aspects for inter-access mobility. In addition, GSMA has released a list of Permanent Reference Documents on VoWi-Fi. Wi-Fi Calling is a technology that comes from the cellular world. From the Wi-Fi perspective, there's no special IEEE standard that talks about Wi-Fi Calling. ### Are there commercial services offering Wi-Fi Calling? In June 2016, it was reported that all four major operators in the US support Wi-Fi Calling, with T-Mobile supporting as many as 38 different handsets. In November 2016, there were 40+ operators offering Wi-Fi Calling in 25+ countries. Moreover, even affordable phones or devices without SIMs are supporting Wi-Fi Calling. An operator will normally publish a list of handsets that are supported, which usually includes both Android and iPhone models. In September 2017, it was reported that AT&T has 23 phones and Verizon has 17 phones that support Wi-Fi Calling. Wi-Fi Calling may involve regulatory approval based on the country's licensing framework. For example, India's TRAI commented in October 2017 that Wi-Fi Calling can be introduced since licensing allows telephony service to be provided independent of the radio access. ### Within enterprises, how can IT teams plan for Wi-Fi Calling? Some access points have the ability to prioritize voice traffic and this can be repurposed for Wi-Fi Calling. Examples include Aerohive, Aruba, Cisco Aironet and Ruckus. Enterprises can also work with operators to deploy femto/pico cells or distributed antenna systems. A minimum of 1 Mbps may be needed to support Wi-Fi Calling, although Republic Wireless in the US claims 80 kbps is enough to hold a call, albeit with reduced voice quality. In reality, voice needs just 12 kbps but can scale down to 4.75 kbps. ### How will users be billed for Wi-Fi Calling? This is completely operator dependent and based on the subscriber's current plan. For example, Canada's Rogers says that calls and messages are deducted from airtime and messaging limits. Roaming charges may apply only for international roaming. Verizon Wireless states that a voice call will use about 1 MB/minute of data; a video call will use 6-8 MB/minute. Billing is linked to the user's current plan. ### What are some practical issues with Wi-Fi Calling? Back in 2014, T-Mobile had handoff problems but these improved later. The service was also not offered by other operators and not supported by most handsets. Even when a handset supports it, operators may not offer the service if the handset has not been purchased from the operator. Since any Wi-Fi hotspot can be used, including public ones, security is a concern. For this reason, all data over Wi-Fi must be protected and the subscriber must be authenticated by the cellular operator. Seamless call continuity across cellular and Wi-Fi could be a problem, particularly when firewalls and VPNs are involved. Some users have reported problems when using Wi-Fi behind corporate firewalls. Likewise, IT teams in enterprises may have the additional task of ensuring Wi-Fi coverage and managing traffic. Since Wi-Fi Calling often uses public hotspots, there's no QoS control.
However, it's argued that in places where cellular has poor coverage, QoS cannot be guaranteed anyway. In addition, QoS on Wi-Fi can often be achieved implicitly because of excess capacity. With the coming of 802.11ac and the ability to prioritize traffic via Wi-Fi Multimedia (WMM), QoS is unlikely to be a problem. ## Milestones 2007 T-Mobile in the US launches something called "HotSpot @ Home". This is based on a technology named *Unlicensed Mobile Access*, the commercial name of a 3GPP feature named *Generic Access Network*. GAN operates in the IP layer, which means that access can be via any protocol, not just Wi-Fi. UMA does not take off because of a lack of handsets that support it. It also has other operational issues related to interference, handover and configuration setup. Nov 2011 Republic Wireless, a mobile virtual network operator (MVNO) in the US, rolls out "Hybrid Calling". Calls are primarily on Wi-Fi and cellular will be used as a fallback option. Their General Manager, Brian Dally, states, > Every other mobile carrier talks about offloading to Wi-Fi, we talk about failing over to cellular. Sep 2014 T-Mobile introduces Wi-Fi Calling in the US. This comes on the heels of the operator's rollout of VoLTE. Meanwhile, Apple iPhone starts supporting Wi-Fi Calling. Apr 2015 Sprint introduces Wi-Fi Calling in the US. EE does the same in the UK. Meanwhile, Google gets into telecom by launching *Project Fi*, which allows seamless switching between Wi-Fi and cellular. Google doesn't have its own cellular network but uses those of Sprint, T-Mobile, and US Cellular. Oct 2015 In the US, AT&T obtains regulatory approval to launch Wi-Fi Calling. By 2016, all four major US operators roll out Wi-Fi Calling nationwide. Jun 2017 UMA, which may be called first-generation Wi-Fi Calling, is decommissioned by T-Mobile in the US. Nov 2018 Researchers discover several security vulnerabilities with Wi-Fi Calling. They propose possible solutions to overcome these.
{ "title": "Wi-Fi Calling", "href": "wi-fi-calling" }
# Design Thinking ## Summary Design thinking is a problem-solving method used to create practical and creative solutions while addressing the needs of users. The process is extremely user-centric as it focuses on understanding the needs of users and ensuring that the solutions created address those needs. It's an iterative process that favours ongoing experimentation until the right solution is found. ## Discussion ### Why is the design thinking process important? Design thinking helps us to innovate, focus on the user, and ultimately design products that solve real user problems. The design thinking process can be used in companies to reduce the time it takes to bring a product to the market. Design thinking can significantly reduce the amount of time spent on design and development. The design thinking process increases return on investment as the products are user-centric, which helps increase user engagement and user retention. It's been seen that a more efficient workflow due to design thinking gave 75% savings in design and development time, 50% reduction in defect rate, and a calculated ROI of more than 300%. ### When and where should the design thinking process be used? The design thinking process should especially be used when dealing with **human-centric challenges** and **complex challenges**. The design thinking process helps break down complex problems and experiment with multiple solutions. Design thinking can be applied in these contexts: human-centred innovation, problems affecting diverse groups, involving multiple systems, shifting markets and behaviours, complex societal challenges, problems that data can't solve, and more. A class of problems called **wicked problems** is where design thinking can help. Wicked problems are not easy to define and information about them is confusing. They have many stakeholders and complex interdependencies. On the contrary, design thinking is perhaps overkill for obvious problems, especially if they're not human centred. In such cases, traditional problem-solving methods may suffice. ### What are the principles of the design thinking process? There are some basic principles that guide us in applying design thinking: + **The human rule**: All design activity is social because all social innovation will bring us back to the "human-centric point of view". + **The ambiguity rule**: Ambiguity is inevitable, and it can't be removed or oversimplified. Experimenting at the limits of your knowledge and ability is crucial in being able to see things differently. + **The redesign rule**: While technology and social circumstances may change, basic human needs remain unchanged. So, every solution is essentially a redesign. + **The tangibility rule**: Making ideas tangible by creating prototypes allows designers to communicate them effectively. ### What are the typical steps of a design thinking process? The process involves five steps: + **Empathy**: Put yourself in the shoes of the user and look at the challenge from the point of view of the user. Refrain from making assumptions or suggesting answers. Suspend judgements throughout the process. + **Define**: Create a challenge statement based on the notes and thoughts you have gained from the empathizing step. Go back to the users and modify the challenge statement based on their inputs. Refer to the challenge statement multiple times throughout the design thinking process. + **Ideate**: Come up with ideas to solve the proposed challenge. Put down even the craziest ideas.
+ **Prototype**: Make physical representations of your ideas and solutions. Get an understanding of what the final product may look like and identify design flaws or constraints. Take feedback from users. Improve the prototype through iterations. + **Test**: Evaluate the prototype on well-defined criteria. Note that empathy and ideate are divergent steps whereas others are convergent. Divergent means expanding information with alternatives and solutions. Convergent means reducing or filtering information towards a suitable solution. ### What are the specific tools to practice design thinking? Design thinking offers tools for each step of its five-step process. These are summarized in the above figure. These tools offer individuals and teams something concrete to effectively practice design thinking. New Metrics has enumerated 14 different tools: immersion, visualization, brainstorming, empathy mapping, journey mapping, affinity mapping, rapid iteration, assumption testing, prototyping, design sprints, design criteria, finding the value proposition, and learning launch. They describe each tool briefly and note the benefits. More tools include focus groups, shadowing, concept maps, personas, positioning matrix, minimum viable product, volume model, wireframing, and storyboards. For specific software tools, we note the following: + **Empathize**: Typeform, Zoom, Creatlr + **Define**: Smaply, Userforge, MakeMyPersona + **Ideate**: SessionLab, Stormboard, IdeaFlip + **Prototype**: Boords, Mockingbird, POP + **Test**: UserTesting, HotJar, PingPong + **Complete Process**: Sprintbase, InVision, Mural, Miro ### What should I keep in mind when applying the design thinking process? Every designer can use a variation of the design thinking process that suits them and customize it for each challenge. Although distinct steps are defined, design thinking is not a linear process. Rather, it's very much **iterative**. For example, during prototyping we may go back to redefine the problem statement or look for alternative ideas. Every step gives us new information that might help us improve on previous steps. Adopt an Agile methodology. Design thinking is strong on ideation while Scrum is strong on implementation. Combine the two to make a powerful hybrid Agile approach. While the steps are clear, applying them correctly is not easy. To identify what annoys your clients, ask questions. Empathy means that you should relate to their problems. Open-ended questions will stimulate answers and help identify the problems correctly. At the end of the process, as a designer, reflect on the way you've gone through the process. Identify areas of improvement or how you could have done things differently. Gather insights on the way you went through the design thinking process. ### What do I do once the prototype is proven to work? The prototype itself can be said to "work" only after we have submitted it to the clients for feedback. Use this feedback to improve the prototype. Make the actual product after incorporating all the feedback from the prototype. Gathering feedback itself is an important activity. Present your solution to the client by describing the thought process by which the challenge was solved. Take notes from users and ensure that they are satisfied with the final product. It's important not to defend your product. It's more important to listen to what users have to say and make changes to improve the solution. Present several versions of the prototype so that users can compare and express what they like and dislike.
Consider using the *I Like, I Wish, What If* method for gathering feedback. Get feedback from regular users as well as extreme users with highly opinionated views. Be flexible and improvise during testing sessions. Allow users to contribute ideas. Recognize that prototyping and testing is an iterative process. Be prepared to do this a few times. ### How is design thinking different from user-centred design? On the surface, both design thinking and user-centred design (UCD) are focused on the needs of users. They have similar processes and methods. They aim for creative or innovative solutions. To elicit greater empathy among designers, UCD has been more recently called human-centred design (HCD). However, design thinking goes beyond usability. It considers technical feasibility, economic viability, desirability, etc. without losing focus on user needs. While UCD is dominated by usability engineers and focuses on user interfaces, design thinking has a larger scope. Design thinking brings more multi-disciplinary perspectives that can suggest innovative solutions to complex problems. While it borrows from UCD methods, it goes beyond the design discipline. Some see UCD as a framework and design thinking as a methodology that can be applied within that framework. Others see these as complementary: a team can start with design thinking for initial exploration and later shift to UCD for prototyping and implementation. ### What are some ways to get more ideas? Design thinking is not about applying standard off-the-shelf solutions. It's about solving difficult problems that typically require creative approaches and innovation. The more ideas, the better. Use different techniques such as brainstorming, mind mapping, role plays, storyboarding, etc. Innovation is not automatic and needs to be fostered. We should create the right mindsets, an open and explorative culture. Designers should combine both logic and imagination. Teams should be cross-disciplinary and collaborative. Work environments must be conducive to innovation. When framing the problem, think about how the challenge can be solved in a certain place or scenario. For example, think about how one of your ideas would function differently in a setting such as a kitchen. Write down even ideas that may not work. Further research and prototyping might help refine them. Moreover, during the prototyping and testing steps, current ideas can spark new ideas. ## Milestones Sep 1962 *The Conference on Systematic and Intuitive Methods in Engineering, Industrial Design, Architecture and Communications* is held in London. It explores design processes and new design methods. Although the birth of design methodology can be traced to Zwicky's *Morphological Method* (1948), it's this conference that recognizes design methodology as a field of academic study. 1966 The term **Design Science** is introduced. This shows that the predominant approach is to find "a single rationalised method, based on formal languages and theories". 1969 Herbert A. Simon, a Nobel Prize laureate and cognitive scientist, mentions the design thinking process in his book *The Sciences of the Artificial* and further contributes ideas that are now known as the principles of design thinking. 1970 This decade sees some resistance to the adoption of design methodology. Even early pioneers begin to dislike "the continual attempt to fix the whole of life into a logical framework". 1973 Rittel publishes *The State of the Art in Design Methods*.
He argues that the early approaches of the 1960s were simplistic, and a new generation of methodologies is beginning to emerge in the 1970s. Rather than optimizing through systematic methods, the **second generation** is about finding a satisfactory solution in which designers partner with clients, customers and users. This approach is probably more relevant to architecture and planning than to engineering and industrial design. 1980 This decade sees the development of **engineering design methodology**. An example is the series of *International Conferences on Engineering Design*. The American Society of Mechanical Engineers also launches a series of conferences on Design Theory and Methodology. Oct 1982 Nigel Cross discusses the problem-solving nature of designers in his seminal paper *Designerly Ways of Knowing*. 1987 Peter Rowe, Director of Urban Design Programs at Harvard, publishes his book *Design Thinking*. This explores the underlying structure and focus of inquiry in design thinking. 1991 IDEO, an international design and consulting firm, brings design thinking to the mainstream by developing its own customer-friendly technology.
{ "title": "Design Thinking", "href": "design-thinking" }
# Single Page Application ## Summary A web application broadly consists of two things: data (content) and control (structure, styling, behaviour). In traditional applications, these are spread across multiple pages of HTML, CSS and JS files. Each page is served in HTML with links to suitable CSS/JS files. A Single Page Application (SPA) brings a new programming paradigm for the web. With SPA, we have a single HTML page for the entire application. This page, along with the necessary CSS and JS for the site, is loaded when the page is first requested. Subsequently, as the user navigates the app, only relevant data is requested from the server. Other files are already available with the client. The page doesn't reload but the view and HTML DOM are updated. SPA (along with PWA) is the modern way to build web applications. SPA enhances user experience. There are frameworks that simplify building SPAs. ## Discussion ### Could you explain the single page application for a beginner? In a typical multi-page application, each page is generated as HTML on the server and served to the client browser. Each page has its own URL that's used by the client to request that page. When a user navigates from one page to another, the entire page loads. However, it's common for all pages to share many UI components: sidebar, header, footer, navigation menu, login/logout UI, and more. It's therefore wasteful to download these common elements with every page request. In terms of user experience, moving from one page to another might be annoying. The current page might lose UI interaction as the user waits for another page to load. In SPA, there's a single URL. When a link is clicked, relevant content is downloaded and specific UI components are updated to render that content. User experience improves because the user stays on and can interact with the current page while the new content is fetched from the server. When an update happens, there's no transition to another page. Parts of the current page are updated with new content. ### How does the lifecycle of an SPA request/response compare against a traditional multi-page app? In multi-page apps, each request is for a specific page or document. The server looks at the URL and serves the corresponding page or document. The entire app is really a collection of pages. In SPA, the first client request loads the app and all its relevant assets. These could be HTML plus JS/CSS files. If the app is complex, this initial bundle of files could be large. Therefore, the first view of the app can take some time to appear. During this phase, a loader image may be shown to the user. Subsequently, when the user navigates within the SPA, an API is called to fetch new data. The server responds with only the data, typically in JSON format. The browser receives this data and updates the app view. The user sees this new information without a page reload. The app stays on the same page. Only the view changes by updating some components of the page. SPAs are well-suited when we wish to build a rich interactive UI with lots of client-side behaviour. ### Which are the different SPA architectures? Application content might be stored in files or databases. It can be dynamic (news sites) or contextual (user specific). Therefore, the application has to transform this content into HTML so that users can read it in a web browser. This transformation process is called *rendering*.
From this perspective, we note the following SPA architectures: + **Client-Side Rendering**: When the browser requests the site, the server responds quickly with a basic HTML page. This is linked to CSS/JS files. While these files are loading, the user sees a loader image. Once data loads, JavaScript on the browser executes to complete the view and DOM. Slow client devices can spoil user experience. + **Server-Side Rendering**: The HTML page is generated on the fly at the server. Users therefore see the content quickly without any loader image. At the browser, once events are attached to the DOM, the app is ready for user interaction. + **Static Site Generators**: HTML pages are pre-generated and stored at the server. This means that the server can respond immediately. Better still, the page can be served by a CDN. This is the fastest approach. This approach is not suitable for dynamic content. ### What are the benefits of an SPA? With SPA, applications load faster and use less bandwidth. User experience is seamless, similar to a native app. Users don't have to watch slow page reloads. Developers can build feature-rich applications such as content-editing apps. On mobile devices, the experience is richer: clicks can be replaced with scrolling and amazing transitions. With browsers providing many developer tools, SPAs are also easy to debug on the client side. SPA optimizes bandwidth usage. Main resources (HTML/CSS/JS) are downloaded only once and reused. Subsequently, only data is downloaded. In addition, SPAs can cache data, thereby saving bandwidth. Caching also enables the application to work offline. ### What are some criticisms or disadvantages of an SPA? Among the disadvantages of SPA is **SEO**. SPA has a single URL and all routing happens via JavaScript. More recently, Google is able to crawl and index JavaScript-rendered pages. In general, use multi-page apps if SEO is important. Adopt SPA for SaaS platforms, social networks or closed communities where SEO doesn't matter. SPA breaks **browser navigation**. The browser's back button will go to the previous page rather than the previous app view. This can be overcome with the *HTML5 History API*. SPA could lead to **security issues**. Cross-site scripting attacks are possible. If developers are not careful, sensitive data could be part of the initial data download. Since all this data is not necessarily displayed on the UI, it can give developers a false sense of security. Developers could also unknowingly provide access to privileged functionality at the client side. SPA needs **client-side processing** and therefore may not work well on old browsers or slow devices. It won't work if users turn off JavaScript in their browsers. SPAs can be hard to maintain due to reliance on many third-party libraries. It's worth reading Adam Silver's article on the many disadvantages of SPAs. ### What are some best practices when converting a traditional app to SPA? An SPA has to implement many things that come by default in traditional apps: browsing history, routing, deep linking to particular views. Therefore, **select a framework** that facilitates these. Select a framework with a good ecosystem and a modular structure. It must be flexible and performant for even complex UI designs. After the initial page loads, subsequent data is loaded by making API calls. Building an SPA implies a **well-defined API**. Involve both frontend and backend engineers while creating this API. In one approach, serve static files separately from the data that's handled by API endpoints.
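To illustrate the core SPA mechanics discussed above, here is a minimal, framework-free sketch: a navigation click fetches only data from an API endpoint, patches part of the DOM, and uses the HTML5 History API so the back button still works. The endpoint (`/api/items`) and element ids (`app`, `items-link`) are hypothetical; real apps would usually rely on a framework's router instead.

```javascript
// Fetch data and update only part of the page, without a reload.
async function renderItems() {
  const res = await fetch('/api/items');      // data-only request (JSON, not HTML)
  const items = await res.json();
  document.getElementById('app').innerHTML =
    items.map(item => `<li>${item.name}</li>`).join('');
}

// Intercept the link click that would normally trigger a full navigation.
document.getElementById('items-link').addEventListener('click', (e) => {
  e.preventDefault();                                   // stop the full-page load
  history.pushState({ view: 'items' }, '', '/items');   // keep the URL and history meaningful
  renderItems();
});

// Restore the right view when the user presses back/forward.
window.addEventListener('popstate', (event) => {
  if (event.state && event.state.view === 'items') renderItems();
});
```

This is essentially what SPA frameworks automate: routing, history handling and re-rendering the affected components.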
Define clearly which parts of the UI are dynamic. This helps to organize project modules. Structure the project to enable **reusable components**. Due to its high reliance on JavaScript, invest in **build tools** for better dependency management. Webpack is a good choice. A build process can do code compilation (via Babel), file bundling and minification. When converting to an SPA, don't take an all-out approach. **Migrate incrementally**, perhaps one page at a time. ### How do I test and measure performance of an SPA? Testing tools Selenium, Cypress and Puppeteer can also be used to measure app performance. WebPageTest is an online tool that's easier to use. Compared to multi-page apps, there's more effort to fill forms or navigate across views. Application performance on the client side can be monitored via Navigation Timing API and Resource Timing API. But these fail to capture JavaScript execution times. To address this, User Timing API can be used. LinkedIn took this approach and improved the performance of their SPA by 20%. Among the techniques they used are lazy rendering (defer rendering outside viewport) and lazy data fetching. At Holiday Extras, their app took 23 seconds to load on a good 3G connection. To reduce this, they adopted code splitting to defer loading of non-critical libraries. CSS was also split into three parts loaded at different stages: critical, body, onload. They moved from JS rendering to HTML rendering, and then started serving static HTML from Cloudfront CDN. They did real user monitoring (RUM). Among the tools they used were React, EJS, Webpack, and Speed Curve. ### Could you mention some popular websites or web apps that are SPAs? Facebook, Google Maps, Gmail, Twitter, Google Drive, and GitHub are some examples of websites built as SPAs. For example, in Gmail we can read mails, delete mails, compose and send mails without leaving the page. It's the same with Google Maps in which new locations are loaded and displayed in a seamless manner. In Grammarly, writers get suggestions and corrections as they compose their content. All this is powered by HTML5 and AJAX to build responsive apps. Trello is another example of SPA. The card layout, overlays, and user interactions are all done without any page reloads. ### Which are some tools and frameworks to help me create an SPA? The three main frameworks for building SPAs are React, Angular and Vue on the client side, and Node.js on the server side. All these are based on JavaScript. Other JavaScript frameworks include Meteor, Backbone, Ember, Polymer, Knockout and Aurelia. Developers can choose the right framework by comparing how each implements or supports UI, routing, components, data binding, usability, scalability, performance, and testability. For example, while Ember comes with routing, React doesn't; but many modules for React support routing. React supports reusable components. React supports one-way data binding whereas Angular supports two-way data binding. Ember and Meteor are opinionated whereas React and Angular are less so and more flexible. .NET/C# developers can consider using Blazor. Blazor can work both at client side and server side. It runs in a web browser due to WebAssembly. Design tools support traditional multi-page sites. Adobe Experience Manager Sites is a tool that allows designers to create or edit SPAs. It supports drag-and-drop editing, out-of-the-box components and responsive web design. ### How does an SPA differ from PWA? 
PWAs use standard web technologies to deliver a native app-like experience on mobile. They were meant to make responsive web apps feel more native on mobile platforms. A PWA enables the app to work offline, receive push notifications and access device hardware. Unlike SPAs, PWAs use service workers, a web app manifest and HTTPS (a minimal service worker sketch appears after the milestones below). PWAs load almost instantly since service workers run in a separate thread from the UI. SPAs need to pre-fetch assets at the start and therefore there's always an initial loading screen. SPAs can also use service workers but PWAs do it better. In terms of accessibility, PWAs are better than SPAs. SPAs might be suited for data-intensive sites that are not necessarily visually stunning. But PWAs are not so different from SPAs. Both offer an app-like user experience. Many PWAs are built with the same frameworks that are used to build SPAs. In fact, an app might initially be developed as an SPA. Later, additional features such as caching, manifest icons and loading screens could be added. These make an SPA more like a PWA. ## Milestones 1995 In the mid-1990s, rich interactions on web browsers become possible due to two different technologies: **Java Applets** and **Macromedia Flash**. Browsers are merely proxies for these technologies that have to be explicitly installed as browser plugins. With these technologies, all content is either loaded upfront or loaded on demand as the view changes. No page reloads are necessary. In this sense, these are ancestors of modern SPAs. 2005 Jesse James Garrett publishes a paper titled *Ajax: A New Approach to Web Applications*. This describes a novel way to design web applications. AJAX, which expands to **Asynchronous JavaScript + XML**, makes asynchronous requests in the background while the user continues to interact with the UI in the foreground. Once the server responds with XML (or JSON or any other format) data, the browser updates the view. AJAX uses the `XMLHTTPRequest` API. While this had been around since the early 2000s, Garrett's paper popularizes the approach. 2008 With the launch of GitHub, many JavaScript libraries and frameworks are invented and shared via GitHub. These become the building blocks on which true SPAs would later be built. Sep 2010 Twitter releases a new version of its app with client-side rendering using JavaScript. Initial page load becomes slow. Due to the diversity of client devices and browsers, user experience becomes inconsistent. In 2012, Twitter updates the app towards server-side rendering and defers all JS execution until the content is rendered on the browser. They also organize the code as CommonJS modules and do lazy loading. These changes reduce the initial page load to a fifth. May 2016 Google builds an app for its Google I/O event. Google engineers call this both an SPA and a PWA. With an App Engine backend, the app uses web components, Web Animations API, material design, Polymer and Firebase. During the event the app brings more user engagement than the native app. We might say that the app started as an SPA to create a PWA. In general, it's better to plan for a PWA from the outset rather than re-engineer an SPA at a later point. Feb 2019 Google engineers compare different SPA architectures in terms of performance. One of these is called **rehydration**, which combines both server-side and client-side renderings. This has the drawback that content loads quickly but is not immediately interactive, thus frustrating the user.
May 2019 With the rise of edge computing, Section describes in a blog post how a Nuxt.js app (based on Vue.js) can be deployed at the edge. The app is housed within a Node.js module deployed at the edge. This SPA uses server-side rendering.
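Relating back to the PWA discussion earlier: a service worker is what gives a PWA (or an SPA evolving into one) offline behaviour and near-instant loads. Below is a minimal, hypothetical sketch using the standard Service Worker API; the file names and the pre-cached asset list are assumptions.

```javascript
// In the page's main script: register the worker (feature-detected).
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// In sw.js: pre-cache the app shell at install time,
// then answer requests cache-first, falling back to the network.
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open('app-shell-v1').then((cache) =>
      cache.addAll(['/', '/index.html', '/app.js', '/styles.css'])
    )
  );
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});
```

Because the worker runs in its own thread and serves cached responses, the app shell appears immediately even on flaky or absent connectivity, which is the behaviour the PWA comparison above refers to.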
{ "title": "Single Page Application", "href": "single-page-application" }
# Document Object Model ## Summary Document Object Model (DOM) is the object-oriented representation of an HTML or XML document. It defines a platform-neutral programming interface for accessing various components of a webpage, so that JavaScript programs can change document structure, style, and content programmatically. It generates a hierarchical model of the HTML or XML document in memory. Programmers can access/manipulate tags, IDs, classes, attributes and elements using commands or methods provided by the document object. It's a logical structure because DOM doesn't specify any relationship between objects. Typically you use the DOM API when documents can fit into memory. For very large documents, streaming APIs such as Simple API for XML (SAX) may be used. The W3C DOM and WHATWG DOM are standards implemented in most modern browsers. However, many browsers extend these standards. Web applications must keep in view the DOM standard used for maintaining interoperability across browsers. ## Discussion ### What are the different components of a DOM? The purpose of DOM is to mirror HTML/XML documents as an in-memory representation. It's composed of: + Set of objects/elements + Hierarchical structure to combine objects + An interface to access/modify objects. DOM lists the required interface objects, with supported methods and fields. DOM-compliant browsers are responsible for supplying a concrete implementation in a particular language (mostly JavaScript). Some HTML DOM objects, functions & attributes: + **Node** - Each tree node is a Node object. Different types of nodes inherit from the basic `Node` interface. + **Document** - The root of the DOM tree is the HTMLDocument node. Usually available directly from JavaScript as document or window. Gives access to properties associated with a webpage such as URL, stylesheets, title, or characterSet. The field `document.documentElement` represents the child node of type `HTMLElement` and corresponds to the `<html>` element. + **Attr** – An attribute in an `HTMLElement` object providing the ability to access and set an attribute. Has name and value fields. + **Text** — A leaf node containing text inside a markup element. If there is no markup inside, text is contained in a single `Text` object (only child of the element). ### Can you show with an example how a web page gets converted into its DOM? The simplest way to see the DOM generated for any webpage is using the "Inspect" option within your browser menu. The DOM element navigation window that opens allows you to scroll through the element tree on the page. You can also alter some element values and styles – text, font, colours. Event listeners associated with each element are also listed. The document is the root node of the DOM tree and offers many useful properties and methods. `document.getElementById(str)` gives you the element with `str` as id (or name). It returns a reference to the DOM tree node representing the desired element. Referring to the figure, `document.getElementById('div1')` will return the first "div" child node of the "body" node. We can also see that the "html" node has two direct children, "head" and "body". This example also shows three leaf nodes containing only text. These are one "title" and two "p" tags. Corresponding CSS and JavaScript files referenced from HTML code can also be accessed through DOM objects. ### How is JavaScript used to manipulate the DOM of a web page? The ability to manipulate webpages dynamically using client-side programming is the basic purpose behind defining a DOM.
This is achieved using DHTML. DHTML is not a markup language but a technique to make dynamic web pages using client-side programming. For uniform cross-browser support of webpages, DHTML involves three aspects: + **JavaScript** - for scripting cross-browser compatible code + **CSS** - for controlling the style and presentation + **DOM** - for a uniform programming interface to access and manipulate the web page as a document. Google Chrome, Microsoft Edge, Mozilla Firefox and other browsers support DOM through standard JavaScript. JavaScript programming can be used to manipulate the HTML page rendering, the underlying DOM and the supporting CSS. Some important DOM-related JavaScript functionalities: + Select, Create, Update and Delete DOM Elements (reference by ID/Name) + Style setting of DOM Elements – color, font, size, etc. + Get/set attributes of Elements + Navigating between DOM elements – child, parent, sibling nodes + Manipulating the BOM (Browser Object Model) to interact with the browser + Event listeners and propagation based on action triggers on DOM elements ### Can DOM be applied to documents other than HTML or XML? By definition, DOM is a language-neutral object interface. The W3C clearly defines it as an API for valid HTML and well-formed XML documents. Therefore, a DOM can be defined for any XML-compliant markup language. The WHATWG community manages the HTML DOM interface. Some Microsoft-specific XML extensions define their own DOM. **Scalable Vector Graphics (SVG)** is an XML-based markup language for describing two-dimensional vector graphics. It defines its own DOM API. **XAML** is a declarative markup language promoted by Microsoft, used in UI creation of .NET Core apps. When represented as text, XAML files are XML files with the `.xaml` extension. By treating XAML as a XAML node stream, XAML readers communicate with XAML writers and enable a program to view/alter the contents of a XAML node stream similar to the XML Document Object Model (DOM) and the `XmlReader` and `XmlWriter` classes. **Standard Generalized Markup Language (SGML)** is a standard for how to specify a document markup language or tag set. The DOM support for SGML documents is limited to parallel support for XML. While working with SGML documents, the DOM will ignore `IGNORE` marked sections and `RCDATA` sections. ### What are the disadvantages of using DOM? The biggest problem with DOM is that it is **memory intensive**. While using the DOM interface, the entire HTML/XML is parsed and a DOM tree (of all nodes) is generated and returned. Once parsed, the user can navigate the tree to access data in the document nodes. The DOM interface is easy and flexible to use but has the overhead of parsing the entire HTML/XML before you can start using it. So when the document size is large, the memory requirement is high and initial document loading time is also high. For small devices with limited on-board memory, DOM parsing might be an overhead. SAX (Simple API for XML) is another document parsing technique where the parser doesn't read in the entire document. Events are triggered when the XML is being parsed. When it encounters a tag start (e.g. `<sometag>`), it triggers the tagStarted event. When the end of the tag is seen (`</sometag>`), it triggers tagEnded. So it's better in terms of memory efficiency for heavy applications. In earlier days, the DOM standard was not uniformly adopted by various browsers, but that incompatibility issue doesn't exist anymore.
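To make the DOM-versus-SAX contrast above concrete, here is a minimal Python sketch. It uses only the standard library's `xml.dom.minidom` and `xml.sax` modules, and a small well-formed document invented for illustration; real HTML pages usually need a more forgiving parser.

```python
import xml.sax
from xml.dom.minidom import parseString

XHTML = """<html>
  <head><title>My Document</title></head>
  <body>
    <div id="div1"><p>First paragraph.</p></div>
    <div id="div2"><p>Second paragraph.</p></div>
  </body>
</html>"""

# DOM: the whole document is parsed into an in-memory tree that can be navigated and edited.
doc = parseString(XHTML)
root = doc.documentElement                      # corresponds to the <html> element
print(root.tagName)                             # 'html'
print([n.tagName for n in root.childNodes
       if n.nodeType == n.ELEMENT_NODE])        # ['head', 'body']

first_p = doc.getElementsByTagName("p")[0]
print(first_p.firstChild.data)                  # 'First paragraph.' (a Text leaf node)
first_p.setAttribute("class", "highlight")      # add or change an Attr node
first_p.firstChild.data = "Edited paragraph."   # mutate the Text node in place

# SAX: no tree is built; callbacks fire as tags are encountered, so memory use stays small.
class TagCounter(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.counts = {}

    def startElement(self, name, attrs):        # the 'tagStarted' event
        self.counts[name] = self.counts.get(name, 0) + 1

handler = TagCounter()
xml.sax.parseString(XHTML.encode("utf-8"), handler)
print(handler.counts)   # {'html': 1, 'head': 1, 'title': 1, 'body': 1, 'div': 2, 'p': 2}
```

The DOM half pays the cost of building the full tree up front but then allows random access and mutation; the SAX half only observes events, which is why it scales to documents too large to hold in memory.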
### What sort of DOM support is offered by React, Node.js and other JavaScript-based platforms? Everything in DOM is a node – document/element/attribute nodes, etc. But if you have a list of 10 items on your webpage and after some user interaction, need to update one of them, the entire DOM will be re-rendered. This is especially troublesome in Single Page Applications (SPAs). React WebUI framework solves this by creating a **virtual DOM** which is an in-memory data-structure cache for selective rendering. Differences in the node rendering are computed, browser's displayed DOM is updated efficiently, "reconciled" by the algorithm. NodeJS runtime environment has its own implementation for DOM interface, used when we need to work with HTML on server side for some reason. The DOM `Node` interface is an abstract base class upon which many other DOM API objects are based, thus letting those object types to be used similarly and often interchangeably. `jsdom` is a pure JavaScript implementation of WHATWG DOM and HTML Standards for use with Node.js. In AngularJS scripting framework, there are directives for binding application data to attributes of HTML DOM elements. Ex. `ng-disabled` directive binds AngularJS application data to the disabled attribute of HTML elements. ## Milestones 1995 Brendan Eich and Netscape design and release JavaScript, first supported in Netscape Navigator. In subsequent years, JavaScript becomes one of the core technologies of the World Wide Web, alongside HTML and CSS. All major web browsers have a dedicated JavaScript engine to execute it. In 1997, it's standardized as ECMAScript. 1996 JScript is introduced as the Microsoft dialect of the ECMAScript standard. Limited support for user-generated events and modifying HTML documents in the first generation of JavaScript & JScript is called "DOM Level 0" or **Legacy DOM**. No independent standard is developed for DOM Level 0, but it's partly described in the specifications for HTML 4. 1997 Netscape and Microsoft release version 4.0 of Netscape Navigator and Internet Explorer respectively. DHTML support is added to enable changes to a loaded HTML document. DHTML requires extensions to Legacy DOM implementations but both browsers developed them in parallel and remain incompatible. These versions of the DOM later become known as the **Intermediate DOM**. 1998 W3C DOM Working Group drafts a standard DOM specification, known as **DOM Level 1** that becomes the W3C Recommendation in 1998. This is after the standardization of ECMAScript. 2001 Microsoft Internet Explorer version 6 comes out with support for W3C DOM. 2004 Mozilla comes out with its *Design Principles for Web Application Technologies*, the consensus opinion of the Mozilla Foundation and Opera Software in the context of standards for Web Applications and Compound Documents. This defines browser code compatibility with HTML, CSS, DOM, and JavaScript. 2005 Large parts of W3C DOM are well-supported by all the common ECMAScript-enabled browsers including Safari and Gecko-based browsers (like Mozilla, Firefox, SeaMonkey and Camino). 2020 The HTML DOM living standard is a constantly updated standard maintained by WHATWG.org, with latest updates happening continuously.
{ "title": "Document Object Model", "href": "document-object-model" }
# Open Data ## Summary The idea of open data is to share data freely with others. Openness also allows others to modify, reuse or redistribute the data. Openness has two facets: legal and technical. **Legal openness** is about applying a suitable open license to the data. **Technical openness** is about removing technical barriers and making it easy to access, read, store or process the data. By opening up data, others can unlock value in the form of information and knowledge. For example, in the travel sector, locations, images, prices and reviews are data that can help us plan a holiday. Information is data within a given context. Knowledge personalizes information and helps us make decisions. Many organizations worldwide promote open data. Open licenses, datasets and tools are available. Governments are releasing open data that citizens can use. ## Discussion ### Could you describe open data? Database is really about structure and organization of data, also known as the database model. This is generally covered by copyright. In the context of open data, we're more concerned with the contents of the database, which we simply call data. Data can mean a single item or an entire collection. Particularly for factual databases, a collection can be protected but not individual items. For example, a protected collection may be about the melting point of various substances but no one can be prevented from stating a particular item, such as element E melts at temperature T. To understand the meaning of openness, we can refer to the *Open Definition* that states, "Open means anyone can freely access, use, modify, and share for any purpose (subject, at most, to requirements that preserve provenance and openness)." Open data should be accessible at a reasonable cost if not free. It should be available in bulk. There shouldn't be restrictions on who or for what purpose they wish to use the data. Tools to use the data should be freely available and not proprietary. Data shouldn't be locked up behind passwords or firewalls. ### Where could open data be useful? Open data can make governments more transparent. Citizens will have confidence that their money is being spent as budgeted or in implementing the right policies. For example, one activist noted that Canadian citizens used open data to save their government $3.2bn in fraudulent charitable donations. In Brazil, DataViva provides millions of interactive visualizations based on open government data. New business opportunities are possible. For example, once Transport for London opened their data, developers used it to build apps. Likewise, Thomson Reuters uses open data to provide better services to its customers. OpenStreetMap and Copernicus are examples that enable new GIS applications. In research, open data is a part of what is called Open Science. It leads to reproducible research and faster advancements. Open data also enables researchers to revalidate their own findings. Open data can be used to protect the environment. Some apps that do this include mWater, Save the Rain and Ecofacts. ### Could you mention some sources of open data? There are many curated lists on the web for open data. 
From some of these, we mention a few useful sources of open data by category: + **General**: DBpedia, Datasets Subreddit, Kaggle, FiveThirtyEight, Microsoft Marco + **Government**: Data.gov, Data.gov.uk, European Union Open Data Portal + **Economy**: World Bank Open Data, Global Financial Data, International Monetary Fund + **Business**: OpenCorporates, Yellowpages, EU-Startups, Glassdoor + **Health & Science**: World Health Organization, HealthData.gov, NHS Digital, Open Science Data Cloud, NASA Earth Data, LondonAir + **Research**: Google Scholar, Pew Research Center, OpenLibrary Data Dumps, CERN Open Data + **Environment**: Climate Data Online, IEA Atlas of Energy ### Which are some organizations working with or for open data? We mention a few organizations: + Open Data Institute: Works with companies and governments to build an open, trustworthy data ecosystem, where people can make better decisions using data and manage any harmful impacts. + Open Data Commons: Provides a set of legal tools to publish and use open data. They've published open licenses applicable to data. + Open Knowledge Foundation: A worldwide network of people passionate about openness, using advocacy, technology and training to unlock information and enable people to work with it to create and share knowledge. It was briefly called Open Knowledge International. The Open Definition is one of their projects. + Open Data Charter: A collaboration between governments and organisations working to open up data based on a shared set of principles. ### Why and what aspects of open data should we standardize? Data is more valuable if we can combine two different datasets to obtain new insights. **Interoperability** is the key. Given diverse systems, tools and data formats, interoperability can be almost impossible. Without standards, it becomes more difficult for us to publish, access or share data effectively. Standards also make it easier to repeat processes, compare results and reach a shared understanding. Moreover, we need **open standards** that are available to public and are defined through collaboration and consensus. Open standards should define a common data model. The data pipeline should be streamlined. It should be easy to combine data. It should promote common understanding. Open Data Institute's page on standards is a useful resource to learn more. A checklist for selecting a suitable standard looks at the licensing, project needs, maintenance of the standard, and guidance on usage. ### What sort of licensing should I adopt when opening my data? Releasing your data without a license creates uncertainty. In some jurisdictions, data lacking explicit permission may be protected by intellectual property rights. It's therefore better to attach a license. Aspects that define a license include public domain, attribution, share-alike, non-commercial, database only and no derivatives. There are a number of licenses that conform to Open Definition. *Creative Commons CC0* is an example. It releases the data into public domain. Anyone can copy, modify and distribute, even for commercial purposes, without asking permission. A similar license is *Open Data Commons Public Domain Dedication and Licence (PDDL)*. Other conformant but less reusable licenses are data licenses from governments (Germany, UK, Canada, Taiwan). UK's Open Government Licence is an example. Two other licenses worth looking at come from the Linux Foundation: CDLA–Sharing-1.0 and CDLA-Permissive-1.0, where CDLA refers to Community Data License Agreement. 
*Open Data Commons Open Database License (ODC-ODbL)* is seen as a "viral license". Any changes you make to the dataset, you're required to release the same under this license. ### What are some challenges with open data? Publishers continue to use customized licenses. This makes it hard to reuse data. It makes licenses incompatible across datasets. Instead, they should use standardized open licenses. Ambivalent or redundant clauses cause confusion. Licenses often are not clear about the data to which they apply. Data is often not linked to legal terms. Data is hard to find. Sometimes their locations change. A platform such as CKAN might help. Data could be misinterpreted, which could result in a wrong decision. This creates a fear of accountability and prevents producers from opening their datasets. Quality of data is another concern. Data should be machine-readable and in raw form. Publishing data in HTML or PDF is not research friendly. For context and interpretation, metadata should be shared. Raw data is good but also requires advanced technical skills and domain knowledge. We therefore need to build better data literacy. AI algorithms are being used to analyse data but many are black-box models. They also promote data centralization and control, which are at odds with the open data movement. Data infrastructure must of consistent quality. Data can also be biased by gender or against minority groups. ## Milestones 1942 The concept of open data starts with Robert King Merton, one of the fathers of the sociology of science. He explains how freely sharing scientific research and results can stimulate growth and innovation. 1994 In the US, the Government Printing Office (GPO) goes online and opens a few government-related documents. This is an early example of **open government data** at a time when the Internet was becoming popular. Historically and legally, the idea can be traced back to the the Freedom of Information Act of 1966. 2005 The Open Knowledge Foundation creates the **Open Definition**. This is based on established principles from the open source movement for software. This definition is later translated into more than 30 languages. In November 2015, Open Definition 2.1 is released. Feb 2006 At a TED talk, Hans Rosling presents compelling visuals of global trends in health and economics. Using publicly available datasets from different sources, he debunks myths about the developing world. He makes a case for governments and organizations to open up their data. Data must be enabled using design tools. His mantra is to animate and liberate data from closed databases. Data must also be searchable. Dec 2007 Thirty individuals meet at Sebastopol, California to discuss open public data. Many of them come from the culture of free and open source software movement. This event can be seen as a convergence of many past US and European efforts in open data. They identify **eight principles**: complete, primary, timely, accessible, machine processable, non-discriminatory, non-proprietary, and license-free. Feb 2009 At a TED talk, Tim Berners-Lee gets people to chant "Raw data, now." He makes reference to Rosling's talk of 2006 and adds that it's important to link data from different sources. He calls this **Linked Data**. He mentions *DBpedia*, which takes data from Wikipedia and connects them up. May 2009 The US government launches **Data.gov** with 47 datasets. About five years later, it has about 100,000 datasets. 
2010 Many governments attempt to open their data to public but there are concerns about privacy. It's only in 2015 that privacy principles become an essential part of discussions on open data. It's become clear that providing raw data is not always possible. Governments must balance potential benefits to public against privacy rights of individuals. 2012 Guillermo Moncecchi of Open Knowledge International writes that beyond transparency, open data is also about building a **public data infrastructure**. While the focus during 2007-2009 was on data transparency, during 2010-2016 focus shifts to public infrastructure. Consider data about street light locations. Citizens can use this data to solve problems on their own. Metrics change from number of published datasets to APIs and reuse. Jun 2017 Open Knowledge Foundation publishes the **Global Open Data Index (GODI)**. This shows that only 38% of government data is really open. This is based on Open Definition 2.1. A later update available online shows that only 166 of 1410 datasets (11%) are open. The report gives a number of recommendations to governments and policy makers to make their data more open. Jan 2018 Open Data Charter replaces their earlier ambitious call of "open by default" with a more practical "publish with purpose". Rather than getting governments to open up as much data as possible quickly, the intent is to get them to take small steps in that direction. Governments can open datasets with a clear view of tangible benefits to citizens. Feb 2018 Using open data exposed by Here.com via an API, Tjukanov uses visualization to show average traffic congestion in many of the world's largest cities. Interesting patterns emerge, plotted within a radius of 50km from the city center. He uses QGIS, an open source GIS platform. This is an example of the value that open data plus open source can unlock. Jan 2020 Using open data collated by the Institute for Government from data.gov.uk, Peter Cook produces interesting **organograms**, which are visualizations of organization structures. He does this for many UK government departments. Clicking on individual data points shows name, job title, salary range and other details. This sort of data has been available (though patchy) since March 2011.
{ "title": "Open Data", "href": "open-data" }
# Remote Pair Programming ## Summary Pair programming is a practice in which developers work in pairs. When the pair is not sitting next to each other, we call it **Remote Pair Programming (RPP)**. Remote pair programming has the same benefits as pair programming: higher quality software, fewer defects, knowledge sharing, team cohesion, faster delivery, and more. The nature of remote working demands the use of suitable tools. Selecting the right set of tools can enable teams get the best out of pairing. There are plenty of tools in the market today (Sep 2021) for RPP. Since the coming of COVID-19 in 2020 and an increased adoption of work-from-home culture, RPP has become all the more essential. RPP is also called *Distributed Pair Programming (DPP)*, a term that appears to be popular among researchers. ## Discussion ### How is remote pair programming different from pair programming? In pair programming, the pair can point to code with their fingers. In RPP, this is done using the keyboard cursor or the mouse pointer. In some tools, both developers may be able to move their own cursors or mouse pointers. In other tools, there's only one cursor and mouse pointer. The person controlling it becomes the driver. In pair programming, both developers are looking at the same monitor. RPP allows more flexibility. The navigator could be checking out other parts of the code while the driver is typing. In pair programming, it's possible to share physical artefacts such as printed documents or a whiteboard. In RPP, everything must happen online. It's therefore essential to have tools to perform these activities online. RPP can be used for diverse use cases beyond coding: mentoring, hiring, live tutorials, etc. These are use cases where participants are likely to be at remote locations. We should clarify that sharing code for reviews, making pull requests or using version control isn't RPP. These are asynchronous workflows. RPP happens only when both developers are participating concurrently within the same workspace. ### What should I look for when selecting a tool for RPP? Here are some factors to consider: + **Installation**: Often called *Cloud IDEs*, these are tools hosted in the cloud. No local installation is required other than a web browser. Other tools require local installation. Better still are *plugins* that extend popular editors/IDEs with collaborative editing capability. + **Cross-Platform**: Some tools may run on Windows but not on Mac or Linux. Plugins are better in this regard. They extend familiar software already available for various platforms. + **Editing**: Simultaneous editing by both developers. Copy/paste between systems. One developer can navigate to other parts of codebase or other applications while the other is editing. Editor/IDE agnostic. + **Multimodal**: Bidirectional live audio/video streaming. Chat window. Integrated with editor/IDE. + **Usability**: Uncluttered layouts. Awareness about what's shared and the current editing context. Automatic turn-off of notifications, thus providing a distraction-free environment. + **Performance**: Minimal lag. Supports high video resolutions. Fall back to lower resolutions on low-bandwidth connections. Visibility into performance metrics. + **Integration**: Connect to code repositories (GitHub, GitLab, Bitbucket) and other tools (Jira, Trello). + **Others**: Security, cost and customer support are also important. Open source may be important for some teams. ### What tools are available for RPP? There are dozens of tools out there. 
One way to classify tools is as follows: + **Screen Show**: Only screensharing. Before switching roles, we need to push code changes to a shared repository. Videoconferencing tools such as Skype, Google Hangouts, Google Meet and Slack Calls are examples. + **Screen Control/Share**: Temporary remote control of your partner's system. Interactions can lag. Zoom, VNC, Join.me, CoScreen, Tuple, TeamViewer and tmux are examples. + **Application Share**: True collaborative editing and hence most preferred. Each environment can be personalized. Developers can use different editors or IDEs. A developer can navigate within the codebase without interrupting the partner. Developers can even edit different parts of the code in parallel. Live Share (with Visual Studio and VS Code), CodeTogether, GitLive, Floobits, Drovio, Atom Teletype, and AWS Cloud9 are examples.Among the Cloud IDEs are AWS Cloud9, Codenvy, and Replit. For privacy, Codenvy has a self-hosted option. Among the plugins are Live Share, Remote Collab, Atom Teletype, CodeTogether, GitLive and Floobits. CodeTogether supports VS Code, IntelliJ and Eclipse. Guests can join from IDE or browser. GitLive supports VS Code, IntelliJ and Android Studio. Floobits supports Emacs, Sublime Text, Neovim, IntelliJ and Atom. ### How do I select the right tool for RPP? Teletype for Atom suits pair programming's driver-navigator style. Live Share allows more open-ended collaboration, which perhaps is not what you want. However, Live Share might suit ping-pong style of pairing. In strong-style pairing, we want the navigator to guide the driver step by step. This is best done with only screensharing. In Linux, tmux and tmate are popular. These work even on low-bandwidth connections. However, this may not be the best choice for beginners who find it hard to learn the commands. Cloud IDEs may not be optimal for all-day coding. It's also too dependent on net connection. Plugins such as CodeTogether track changes within IDEs and are therefore not demanding on bandwidth (unlike screensharing). CodeTogether team also found that allowing multiple developers to edit code is hard to follow. Their design enforces a master controller who can give/take temporary control to others. ### Could you share some tips for RPP? RPP is easiest if the developers have met before in person. If not, have icebreakers at the start of the project. Informal chats or online games can also help in building rapport. Allow for frequent breaks since remote pairs get tired more easily. When remote pairing across time zones, select a time convenient for both. Start every session with a clear agenda. Tackle one task at a time. Some companies such as GitLab have an internal app to help developers pair up. Before pairing for long hours, get the basics right. Use a good headset that mitigates external noise and echo. Use a large monitor or even two monitors. Use comfortable desk and chair. Make use of non-verbal cues. Lean forward when you wish to speak. Gesture to draw attention. Frequently check with your partner about audio/video quality and quickly take corrective action. An audio splitter with two connected headsets allows another colleague to easily join in on any conversation. Always have the audio and video on, even during breaks. This gives the feeling of being connected to the "office vibe". ## Milestones 1998 **Extreme Programming (XP)** as practiced at Chrysler is talked about. Pair programming is one of the core practices within XP. 2001 In the *Agile Manifesto*, Beck et al. 
point out that face-to-face interactions are most effective. In this light, remote pairing is guaranteed to fail. This motivates research into adapting XP for distributed teams. Schümmer and Schümmer present *TUKAN* as a "synchronous distributed team programming environment". TUKAN includes chat, audio, multi-user code editing and integration with version management. They use the term **distributed pair programming**. 2004 Hanks publishes results of an empirical study on RPP. He uses a screensharing application called Virtual Network Computing (VNC). He modifies VNC to support a second cursor that the navigator can use as a pointer. Unlike in earlier tools, the second cursor appears only when required. Jul 2015 In a literature survey, da Silva Estácio and Prikladnicki find that most studies have been from a teaching perspective. Only a few studies talk about RPP in a real-world software development setting. They also survey RPP tools. They make recommendations about tool features: shared code repository, support for specific pairing roles, role switching, gesturing, etc. Oct 2015 Tsompanoudi et al. modify a previously available Eclipse plugin and use it in an educational setting to help students learn programming collaboratively. Tasks are streamlined using **collaboration scripts** that were studied by other researchers as early as 2007. Jan 2019 **Tuple** launches alpha release. It claims to be the "best pair programming tool on macOS". The focus of Tuple is performance: low CPU usage, low latency, high resolution video and no distracting UI components (sometimes called UI chrome). It also exposes performance graphs to the user so that they can take corrective action. 2020 The COVID-19 pandemic forces teams to work remotely. Teams used to pair programming are now required to do the same remotely. The situation forces teams to better understand the dynamics of pairing remotely. It's also expected that the arrival of 5G will make RPP more reliable and sophisticated. Mar 2021 Packt Publishing Limited publishes Bolboacă's book titled *Practical Remote Pair Programming*.
{ "title": "Remote Pair Programming", "href": "remote-pair-programming" }
# Continuous Integration ## Summary Continuous Integration (CI) is the practice of routinely integrating code changes into the main branch of a repository, and testing the changes, as early and often as possible. Ideally, developers will integrate their code daily, if not multiple times a day. Martin Fowler, Chief Scientist at ThoughtWorks, has stated that, > Continuous Integration doesn't get rid of bugs, but it does make them dramatically easier to find and remove. ## Discussion ### Why do we need Continuous Integration? In the past, developers on a team might work in isolation for an extended period of time and only merge their changes to the master branch once their work was completed. This made merging code changes difficult and time-consuming, and also resulted in bugs accumulating for a long time without correction. These factors made it harder to deliver updates to customers quickly. With CI, each code change can potentially trigger a build-and-test process. Testing becomes an essential part of the build process. Bugs, if any, are highlighted early before they get a chance to grow or become hard to trace. Essentially, CI breaks down the development process into smaller pieces while also employing a repeatable process of build and test. ### What are the benefits of Continuous Integration? Among the many benefits of CI are the following: + Shorter integration cycles + Better visibility of what others are doing leading to greater communication + Issues are caught and resolved early + Better use of time for development rather than debugging + Early knowledge that your code works along with others' changes + Ultimately, enabling the team to release software faster ### How does Continuous Integration work? With continuous integration, developers frequently commit to a shared repository using a version control system such as Git. Prior to each commit, developers may choose to run local unit tests on their code as an extra verification layer before integrating. A continuous integration service automatically builds and runs unit tests on the new code changes to immediately surface any errors. Continuous integration refers to the build and unit testing stages of the software release process. Every revision that is committed triggers an automated build and test. ### What are some CI tools and how to choose among them? There are many solutions out there. Some of them include Codeship, TravisCI, SemaphoreCI, CircleCI, Jenkins, Bamboo, and TeamCity. Some factors to consider when selecting a tool include price (commercial or free), features, ease of use, integration (with other tools and frameworks), support (commercial or community) and more. ### What are the challenges to Continuous Integration? To improve and perfect your CI, you need to overcome 3 major challenges: + **No Standalone Fresh Checkout**: The single biggest hurdle to a smooth CI build is ensuring that your application's tests can be run from a fresh code checkout (e.g. a git clone). This means that all of your app's dependencies are either included in the checkout, or they're specified and can be pulled in by a script in the checkout (a minimal sketch of such a check appears at the end of this article). + **Unreliable Tests**: Now that your app can be set up with a single command, you've built a foundation for effective CI. The next challenge is to ensure that your test results are repeatable and reliable. Intermittent or "expected" failures that persist for too long are pernicious. Once the habit of treating failures as intermittent takes hold, legitimate errors often get ignored.
+ **Obscure Build Results**: Once you've produced a reliable test suite, the next challenge is to get results quickly, take appropriate action on them, and distribute information to the people who matter. ### How are Continuous Integration, Continuous Delivery, and Continuous Deployment practices related to one another? Continuous integration leads to both continuous delivery and continuous deployment. Continuous deployment is like continuous delivery, except that releases happen automatically. More specifically, continuous integration requires automated testing to ensure that nothing is broken when new commits are made. Continuous delivery takes this to the next step by automating the release process so that your customers get regular fixes and upgrades. Continuous delivery still requires manual intervention to initiate the final deployment to production. Continuous deployment automates this last step too. There's no "Release Day" as such. Customers see a steady stream of improvements and this enables early feedback. Since releases are small, they're less risky and easier to fix. Jez Humble, co-author of the book *Continuous Delivery*, says this about Continuous Deployment, > Essentially, it is the practice of releasing every good build to users. ## Milestones 1991 Grady Booch first proposes the term **Continuous Integration (CI)**. In 1994, he uses the term in his book *Object-Oriented Analysis and Design with Applications*. 1997 Kent Beck and Ron Jeffries invent **Extreme Programming (XP)** while on the Chrysler Comprehensive Compensation System project. Beck publishes about continuous integration in 1998. Extreme Programming embraces the practice of CI. 2001 **CruiseControl**, one of the first open-source CI tools, is released.
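The fresh-checkout challenge mentioned earlier can be checked mechanically. Below is a minimal sketch, not tied to any particular CI product, of a Python script that clones a repository into an empty temporary directory and runs its test suite there; the repository URL is a placeholder, and the script assumes a pip-installable project with a `pytest` suite.

```python
import subprocess
import sys
import tempfile

REPO_URL = "https://example.com/your-org/your-app.git"  # placeholder; point this at your repository


def fresh_checkout_build(repo_url: str) -> int:
    """Clone into an empty directory and run the tests there.

    If this fails while the tests pass in a developer's day-to-day working copy,
    the build depends on something that isn't in (or declared by) the repository.
    """
    with tempfile.TemporaryDirectory() as workdir:
        subprocess.run(["git", "clone", "--depth", "1", repo_url, workdir], check=True)
        # Install declared dependencies, then run the unit tests from the clean checkout.
        subprocess.run([sys.executable, "-m", "pip", "install", "-e", "."], cwd=workdir, check=True)
        result = subprocess.run([sys.executable, "-m", "pytest", "-q"], cwd=workdir)
        return result.returncode  # non-zero means a failed build


if __name__ == "__main__":
    sys.exit(fresh_checkout_build(REPO_URL))
```

A hosted CI service does essentially this on every commit, and adds reporting and notifications on top.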
{ "title": "Continuous Integration", "href": "continuous-integration" }
# Grammar and Spell Checker ## Summary A well-written article with correct grammar, punctuation and spelling along with an appropriate tone and style to match the needs of the intended reader or community is always important. Software tools offer algorithm-based solutions for grammar and spell checking and correction. Classical rule-based approaches employ a dictionary of words along with a set of rules. Recent neural network-based approaches learn from millions of published articles and offer suggestions for appropriate choice of words and way to phrase parts of sentences to adjust the tone, style and semantics of the sentence. They can alter suggestions based on the publication domain of the article like academic, news, etc. Grammar and spelling correction are tasks that belong to a more general NLP process called **lexical disambiguation**. ## Discussion ### What is a software grammar and spell checker, its general tasks and uses? A grammar and spell checker is a software tool that checks a written text for grammatical mistakes, appropriate punctuation, misspellings, and issues related to sentence structure. More recently, neural network-based tools also evaluate tone, style, and semantics to ensure that the writing is flawless. Often such tools offer a visual indication by highlighting or underlining spelling and grammar errors in different colors (often red for spelling and blue for grammar). Upon hovering or clicking on the highlighted parts, they offer appropriately ranked suggestions to correct those errors. Certain tools offer a suggestive corrected version by displaying correction as strikeout in an appropriate color. Such tools are used to improve writing, produce engaging content, and for assessment and training purposes. Several tools also offer style correction to adapt the article for specific domains like academic publications, marketing, and advertising, legal, news reporting, etc. However, till today, no tool is a perfect alternative to an expert human evaluator. ### What are some important terms relevant to a grammar and spell checker? The following NLP terms and approaches are relevant to grammar and spell checker: + **Part-of-Speech (PoS)** tagging marks words as noun, verb, adverb, etc. based on definition and context. + **Named Entity Recognition (NER)** is labeling a sequence of text into predefined categories such as name, location, etc. Labels help determine the context of words around them. + **Confusion Set** is a set of probable words that can appear in a certain context, e.g. set of articles before a noun. + **N-Gram** is a sub-sequence of n words or tokens. For example, "The sun is bright" has these 2-grams: {"the sun", "sun is", "is bright"}. + **Parallel Corpus** is a collection of text placed alongside its translation, e.g. text with errors and its corresponding corrected version(s). + **Language Model (LM)** determines the probability distribution over a sequence of words. It says how likely is a particular sequence of words. + **Machine Translation (MT)** is a software approach to translate one sequence of text into another. In grammar checking, this refers to translating erroneous text into correct text. ### What are the various types of grammar and spelling errors? We describe the following types: + **Sentence Structure**: Parts of speech are organized incorrectly. For example, "she began to singing" shows misplaced 'to' or '-ing'. 
Dependent clause without the main clause, run-on sentence due to missing conjunction, or missing subject are some structural errors. + **Syntax Error**: Violation of rules of grammar. These can be in relation to subject-verb agreement, wrong/missing article or preposition, verb tense or verb form error, or a noun number error. + **Punctuation Error**: Punctuation marks like comma, semi-colon, period, exclamation, question mark, etc. are missing, unnecessary, or wrongly placed. + **Spelling Error**: Word is not known in the dictionary. + **Semantic Error**: Grammar rules are followed but the sentence doesn't make sense, often due to a wrong choice of words. "I am going to the library to buy a book" is an example where 'bookstore' should replace 'library'. Rule-based approaches typically can't handle semantic errors. They require statistical or machine learning approaches, which can also flag other types of errors. Often a combination of approaches leads to a good solution. ### What are classical methods for implementing grammar and spell checkers? Classical methods of spelling correction match words against a given dictionary, an approach critics consider unreliable because it can't detect incorrect use of correctly spelled words, nor handle correct words that are missing from the dictionary, like technical words, acronyms, etc. Grammar checkers use hand-coded grammar rules on PoS-tagged text for correct or incorrect sentences. For instance, the rule `I + Verb (3rd person, singular form)` corresponds to the incorrect verb form usage, as in the phrase "I has a dog." These methods provide detailed explanations of flagged errors, making them helpful for learning. However, rule maintenance is tedious and devoid of context. Statistical approaches validate parts of a sentence (n-grams) against their presence in a corpus. These approaches can flag words used out of context. However, it's challenging to provide detailed explanations. Their effectiveness is limited by the choice of corpora. **Noisy channel model** is one statistical approach. An LM based on trigrams and bigrams gives better results than just unigrams. Where rare words are wrongly corrected, using a blacklist of words or a probability threshold can help. ### What are Machine Learning-based methods for implementing grammar and spell checkers? ML-based approaches are either Classification (discriminative) or Machine Translation (generative). **Classification** approaches work with well-defined errors. Each error type (article, preposition, etc.) requires training a separate multi-class classifier. For example, a preposition error classifier takes n-grams associated with prepositions in a sentence and outputs a score for every candidate preposition in the confusion set. Contextual corrections also consider features like PoS and NER. A model can be a linear classifier like a Support Vector Machine (SVM), an n-gram LM-based or Naïve Bayes classifier, or even a DNN-based classifier. **Machine Translation** approaches can be Statistical Machine Translation (SMT) or Neural Machine Translation (NMT). Both use parallel corpora to train a sequence-to-sequence model, where text with errors translates to corrected text. NMT uses an encoder-decoder architecture, where an encoder determines a latent vector for a sentence based upon the input word embeddings. The decoder then generates target tokens from the latent vector and relevant surrounding input and output tokens (attention). These benefit from transfer learning and advancements in transformer-based architecture.
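Before moving on to editor models, the dictionary-plus-edit-distance idea behind classical spell correction can be made concrete. The following is a minimal Norvig-style sketch; the tiny corpus and its word frequencies are invented purely for illustration, and packages such as `pyspellchecker` (mentioned later in this article) implement the same idea with a full frequency dictionary.

```python
import re
from collections import Counter

# Toy corpus; a real checker estimates P(word) from millions of words.
CORPUS = "the quick brown fox jumps over the lazy dog the dog barks at the fox"
WORDS = Counter(re.findall(r"[a-z]+", CORPUS.lower()))
LETTERS = "abcdefghijklmnopqrstuvwxyz"


def edits1(word):
    """All strings one edit away: deletes, transposes, replaces, inserts."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [left + right[1:] for left, right in splits if right]
    transposes = [left + right[1] + right[0] + right[2:] for left, right in splits if len(right) > 1]
    replaces = [left + c + right[1:] for left, right in splits if right for c in LETTERS]
    inserts = [left + c + right for left, right in splits for c in LETTERS]
    return set(deletes + transposes + replaces + inserts)


def known(words):
    """Keep only candidates present in the dictionary."""
    return {w for w in words if w in WORDS}


def correction(word):
    """Prefer the word itself if known, else the most frequent in-dictionary candidate one edit away."""
    candidates = known([word]) or known(edits1(word)) or {word}
    return max(candidates, key=lambda w: WORDS[w])


print(correction("teh"))    # 'the'  (transposition repaired)
print(correction("dogg"))   # 'dog'  (deletion repaired)
print(correction("fox"))    # already correct, returned unchanged
```

This captures only the spelling half of the problem; flagging words used out of context needs the n-gram or learned approaches described above.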
Editor models reduce training time by outputting edits to input tokens from a reduced confusion set instead of generating target tokens. ### How can I train an NMT model for grammar and spell checking? In general, NMT requires training an **encoder-decoder model** using cross-entropy as the loss function by comparing maximum likelihood output to the gold standard correct output. To train a good model requires a large number of parallel corpora and compute capacity. Transformers are attention-based deep seq2seq architectures. Pre-trained language models generated by transformer architectures like BERT provide contextual embeddings to find the most likely token given the surrounding tokens, making it useful to flag contextual errors in an n-gram. **Transfer learning** via fine tuning weights of a transformer using the parallel corpus of incorrect to correct examples makes it suitable for GEC use. Pre-processing or pre-training with synthetic data improves the performance and accuracy. Further enhancements can be to use separate heads for different types of errors. **Editor models** are better as they output edit sequences instead of corrected versions. Training and testing of editor models require the generation of edit sequences from source-target parallel texts. ### What datasets are available for training and evaluation of grammar and spell check models? MT or classification models need datasets with annotated errors. NMT requires a large amount of data. *Lang 8*, the largest available parallel corpora, has 100,051 English entries. *Corpus of Linguistic Acceptability (CoLA)* is a dataset of sentences labeled as either grammatically correct or incorrect. It can be used, for example, to fine tune a pre-trained model. *GitHub Typo Corpus* is harvested from GitHub and contains errors and their corrections. Benchmarking data in Standard Generalized Markup Language (SGML) format is available. Sebastian Ruder offers a detailed list of available benchmarking test datasets along with the various models (publications and source code). **Noise models** use transducers to produce erroneous sentences from correct ones with a specified probability. They induce various error types to generate a larger dataset from a smaller one, like replacing a word from its confusion set, misplace or remove punctuations, induce spelling, tense, noun number, or verb form mistakes, etc. **Round-trip MT**, such as English-German-English translation, can also generate parallel corpora. **Wikipedia edit sequences** offer millions of consecutive snapshots to serve as source-target pairs. However, only a tiny fraction of those edits are language related. ### How do I annotate or evaluate the performance of grammar and spell checkers? ERRor ANnotation Toolkit (ERRANT) enabled suggestions with explanation. It automatically annotates parallel English sentences with error type information, thereby standardizing parallel datasets and facilitating detailed error type evaluation. Training and evaluation require comparing the output to the target gold standard and giving a numerical measure of effectiveness or loss. Editor models have an advantage as the sequence length of input and output is the same. Unequal sequences need alignment with the insertion of empty tokens. Max-Match (\(M^2\)) scorer determine the smallest edit sequence out of the multiple possible ways to arrive at the gold standard using the notion of Levenshtein distance. 
The evaluation happens by computing precision, recall, and F1 measure between the set of system edits and the set of gold edits for all sentences after aligning the sequences to the same length. Dynamic programming can also align multiple sequences to the gold standard when there is more than one possible correct outcome. ### Could you mention some tools or libraries that implement grammar and spell checking? *GNU Aspell* is a standard utility used in GNU OS and other UNIX-like OS. *Hunspell* is a spell checker that's part of popular software such as LibreOffice, OpenOffice.org, Mozilla Firefox 3 & Thunderbird, Google Chrome, and more. Hunspell itself is based on MySpell. Hunspell can use one or more dictionaries, stemming, morphological analysis, and Unicode text. Python packages for spell checking include `pyspellchecker`, `textblob` and `autocorrect`. A search for "grammar spell" on GitHub brings up useful dictionaries or code implemented in various languages. There's a converter from British to American English. Spellcheckr is a JavaScript implementation for web frontends. Deep learning models include Textly-DRF-API and GECwBERT. Many online services or offline software also exist: WhiteSmoke from 2002, LanguageTool from 2005, Grammarly from 2009, Ginger from 2011, Reverso from 2013, and Trinka from 2020. Trinka focuses on an academic style of writing. Grammarly focuses on suggestions in terms of writing style, clarity, engagement, delivery, etc. ## Milestones 1960 Blair implements a simple spelling corrector using heuristics and a dictionary of correct words. Incorrect spellings are associated with the corrected ones via abbreviations that indicate similarity between the two. Blair notes that this is in some sense a form of pattern recognition. In one experiment, the program successfully corrects 89 of 117 misspelled words. In general, research interest in spell checking and correction begins in the 1960s. 1971 R. E. Gorin writes **Ispell** in PDP-10 assembly. Ispell becomes the main spell-checking program for UNIX. Ispell is also credited with introducing the generalized affix description system. Much later, Geoff Kuenning implements a C++ version with support for many European languages. This is called **International Ispell**. GNU Aspell, MySpell and Hunspell are other software inspired by Ispell. 1980 In the 1980s, GEC systems are **syntax-based systems**, such as EPISTLE. They determine the syntactic structure of each sentence and the grammatical functions fulfilled by various phrases. They detect several classes of grammatical errors, such as disagreement in number between the subject and the verb. 1990 This decade focuses on simple **linear classifiers** to flag incorrect choice of articles or statistical methods to identify and flag use of commonly confused words. Confusion can be due to identical sounding words, typos etc. 2000 Rule-based methods evolve in the 2000s. Rule generation is based on parse trees, designed heuristically or based on linguistic knowledge or statistical analysis of erratic texts. These methods don't generalize to new types of errors. New rules need to be constantly added. 2005 The mid-2000s sees methods to record and create aligned corpora of pre- and post-editing ESL (English as a Second Language) writing samples. SMTs offer improvement in identifying and correcting writing errors. GEC sees the use of semantic and syntactic features including PoS tags and NER information for determining the applicable correction. 
Support Vector Machines (SVMs), n-gram LM-based and Naïve Bayes classifiers are used to predict the potential correction. 2010 **DNN-based classifier** approaches are proposed in 2000s and early 2010s. However, a specific set of error types have to be defined. Typically only well-defined errors can be addressed with these approaches. SMT models learn mappings from source text to target text using a noisy channel model. SMT-based GEC models use parallel corpora of erratic text and grammatically correct version of the same text in the same language. Open-source SMT engines are available online and include Moses, Joshua and cdec. 2016 **Neural Machine Translation (NMT)** shows better prospects by capturing some learner errors missed by SMT models. This is because NMT can encode structural patterns from training data and is more likely to capture an unseen error. 2018 With the advent of **attention-based transformer architecture** in 2017, its application to GEC gives promising results. 2019 Methods to improve the training data by text augmentation of various types, including cyclic machine translation, emerge. These improve the performance of GEC tools significantly and enable better flagging of style or context-based errors or suggestions. Predicting edits instead of tokens allows the model to pick the output from a smaller confusion set. Thus, editor models lead to faster training and inference of GEC models.
{ "title": "Grammar and Spell Checker", "href": "grammar-and-spell-checker" }
# Question Answering ## Summary Search engines, and information retrieval systems in general, help us obtain relevant documents to any search query. In reality, people want answers. Question Answering (QA) is about giving a direct answer in the form of a grammatically correct sentence. QA is a subfield of Computer Science. It's predominantly based on Information Retrieval and Natural Language Processing. Both questions and answers are in natural language. QA is also related to an NLP subfield called *text summarization*. Where answers are long and descriptive, they're probably summarized from different sources. In this case, QA is also called **focused summarization** or **query-based summarization**. There are lots of datasets to train and evaluate QA models. By late 2010s, neural network models have brought state-of-the-art results. ## Discussion ### Which are the broad categories of questions answered by QA systems? **Factoid** questions are the simplest. An example of this is "What is the population of the Bahamas?" Answers are short and factual, often identified by named entities. Variations of factoid questions include single answer, list of answers (such as "Which are the official languages of Singapore?"), or yes/no. Questions typically ask what, where, when, which, who, or is. QA research started with factoid questions. Later, research progressed to questions that sought **descriptive** answers. "Why is the sky blue?" requires an explanation. "What is global warming?" requires a definition. Questions typically ask why, how or what. **Closed-domain** questions are about a specific domain such as medicine, environment, baseball, algebra, etc. **Open-domain** questions are regardless of the domain. Open-domain QA systems use large collections of documents or knowledge bases covering diverse domains. When the system is given a single document to answer a question, we call it **reading comprehension**. If information has to be searched in multiple documents across domains, the term **open-context open-domain QA** has been used. ### What are the main approaches or techniques used in question answering? QA systems rely on external sources from where answers can be determined. Broad approaches are the following: + **Information Retrieval-based**: Extends traditional IR pipeline. *Reading comprehension* is applied on each retrieved document to select a suitable named entity, sentence or paragraph. This has also been called *open domain QA*. The web (or CommonCrawl), PubMed and Wikipedia are possible sources. + **Knowledge-based**: Facts are stored in knowledge bases. Questions are converted (by semantic parsers) into semantic representations, which are then used to query the knowledge bases. Knowledge could be stored in relational databases or as RDF triples. This has also been called *semantic parsing-based QA*. DBpedia and Freebase are possible knowledge sources. + **Hybrid**: IBM's DeepQA is an example that combines both IR and knowledge approaches. ### What are some variations of question answering systems? We note the following variations or specializations of QA systems: + **Visual QA (VQA)**: Input is an image (or video) rather than text. VQA is at the intersection of computer vision and NLP. + **Conversational QA**: In dialogue systems, there's a continuity of context. The current question may be incomplete or ambiguous but it can be resolved by looking at past interactions. CoQA and QuAC are two datasets for this purpose. 
+ **Compositional QA**: Complex questions are decomposed into smaller parts, each answered individually, and then the final answers is composed. This technique is used in VQA as well. + **Domain-Specific QA**: Biomedical QA is a specialized field where both domain patterns and knowledge can be exploited. AQuA is a dataset specific to algebra. + **Context-Specific QA**: Social media texts are informal. Models that do well on newswire QA have been shown to do poorly on tweets. Community forums (Quora, StackOverflow) provide multi-sentence questions with often long answers that are upvoted or downvoted. ### What are the key challenges faced by question answering systems? QA systems face two challenges: **question complexity (depth)** and **domain size (breadth)**. Systems are good at either of these but not both. An example of depth is "What's the cheapest bus to Chichen Itza leaving tomorrow?" A much simpler question is "Where is Chichen Itza?" **Common sense reasoning** is challenging. For example, 'longest river' requires reverse sorting by length; 'by a margin of' involves some sort of comparison; 'at least' implies a lower cut-off. Temporal or spatial questions require reasoning about time or space relations. **Lexical gap** means that a concept can be expressed using different words. For example, we're looking for a 'city' but the question asks about a 'venue'. Approaches to solving this include string normalization, query expansion, and entailment. **Ambiguity** occurs when a word or phrase can have multiple meanings, only one of which is intended in a given context. The correct meaning can be obtained via corpus-based methods (distributional hypothesis) or resource-based methods. Sometimes the answer is **distributed** across different sources. QA systems need to align different knowledge ontologies. An alternative is to decompose the question into simpler queries and combine the answers later. ### What are the steps in a typical question answering pipeline? In IR-based factoid QA, tokens from the question or the question itself forms the query to the IR system. Sometimes stopwords may be removed, the query rephrased or expanded. From the retrieved documents, relevant sentences or passages are extracted. Named entities, n-gram overlap, question keywords, and keyword proximity are some techniques at this stage. Finally, a suitable answer is picked. We can train classifiers to extract an answer. Features include answer type, matching pattern, number of matching keywords, keyword distance, punctuation location, etc. Neural network models are also common for answer selection. For knowledge-based QA, the first step is to invoke a semantic parser to obtain a logical form for querying. Such a parser could be rule-based to extract common relations, or it could be learned via supervised machine learning. More commonly, semi-supervised or unsupervised methods are used based on web content. Such methods help us discover new knowledge relations in unstructured text. Relevant techniques include distant supervision, open information extraction and entity linking. ### How are neural networks being used in question answering? Widespread use of neural networks for NLP started with **distributed representation** for words. A feedforward model learned the representation as it was being trained on a language modelling task. In these representations, semantically similar words will be close to one another. 
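As a toy illustration of how such closeness is exploited for answer selection, the sketch below averages word vectors into crude sentence vectors and ranks candidate answers by cosine similarity to the question. The three-dimensional vectors are invented for illustration; real systems use learned embeddings such as word2vec, GloVe or contextual models.

```python
import math

# Toy 3-dimensional "embeddings"; real systems learn these from large corpora.
EMBEDDINGS = {
    "capital": [0.9, 0.1, 0.0], "city": [0.8, 0.2, 0.1], "france": [0.1, 0.9, 0.0],
    "paris": [0.4, 0.8, 0.1], "cheese": [0.0, 0.2, 0.9], "brie": [0.1, 0.3, 0.8],
    "of": [0.1, 0.1, 0.1], "is": [0.1, 0.1, 0.1], "the": [0.1, 0.1, 0.1], "a": [0.1, 0.1, 0.1],
}


def sentence_vector(text):
    """Crudest possible composition: average the vectors of known words."""
    vectors = [EMBEDDINGS[w] for w in text.lower().split() if w in EMBEDDINGS]
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]


def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))


question = "what is the capital of france"
candidates = ["paris is the capital city of france", "brie is a cheese of france"]

q_vec = sentence_vector(question)
ranked = sorted(candidates, key=lambda c: cosine(q_vec, sentence_vector(c)), reverse=True)
print(ranked[0])   # the candidate closest to the question in the embedding space
```

Plain averaging is the crudest possible composition; the compositional approaches described next learn better sentence-level representations.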
The next development was towards **compositional distributional semantics**, where sentence-level representations are composed from word representations. These were more useful for question answering. Iyyer et al. reduced dependency parse trees to vector representations that were used to train an RNN. Yu et al. used a CNN for answer selection. A common approach to answer selection is to look at the similarity between question and answer in the semantic space. Later models added an attention layer between the question and its candidate answers. Tan et al. evaluated BiLSTMs with attention and CNN. Dynamic Coattention Network (DCN) is also based on attention. Facebook researchers combined a seq2seq model with multitasking. Transformer architecture has been applied for QA. In fact, QA was one of the tasks to which BERT was fine-tuned (on SQuAD) and evaluated. BERTserini used fine-tuned BERT along with information retrieval from Wikipedia. ### What are some useful datasets for training or evaluating question answering models? Datasets are used for training and evaluating QA systems. Based on the design and makeup, each dataset might evaluate different aspects of the system better. Among the well-known datasets are Stanford Question Answering Dataset (SQuAD), Natural Question (NQ), Question Answering in Context (QuAC) and HotpotQA. All four are based on Wikipedia content. Conversational Question Answering (CoQA) is a dataset that's based on Wikipedia plus other sources. Wikipedia often presents data in tables. WikiTableQuestions is a dataset in which answers are in tables rather than freeform text. TyDi QA is a multilingual dataset. TweetQA takes its data from Twitter. Question Answering over Linked Data (QALD) is a series of datasets created from knowledge bases such as DBpedia, MusicBrainz, Drugbank and LinkedSpending. Other datasets to note are ELI5, ShARC, MS MARCO, NewsQA, CMU Wikipedia Factoid QA, CNN/DailyMail QA, Microsoft WikiQA, Quora Question Pairs, CuratedTREC, WebQuestions, WikiMovies, GeoQuery and ATIS. Papers With Code lists dozens of datasets along with their respective state-of-the-art models. ## Milestones 1961 MIT researchers implement a program named *Baseball*. It reads a question from a punched card. It references a dictionary of words and idioms to generate a "specification list", which is a canonical expression of what the question is asking. Content analysis involves syntactic phrase structures. 1963 Bertram Raphael at MIT publishes a memo titled *Operation of a Semantic Question-Answering System*. He describes a QA model that accepts a restricted form of English. Factual information comes from a relational model. Program is written in LISP. Raphael credits LISP's list-processing capability for making the implementation a lot easier. Dec 1993 Developed at MIT, *START* goes online. This is probably the world's first web-based QA system. It can answer questions on places, people, movies, dictionary definitions, etc. Jun 1997 With the growth of web, *AskJeeves* is launched as an online QA system. However, it basically does pattern matching against a knowledge base of questions and returns curated answers. If there's no match, it falls back to a web search. In February 2006, the system is rebranded as *Ask*. Nov 1999 At the 8th Text REtrieval Conference (TREC-8), a Question Answering track is introduced. This is to foster research in QA. TREC-8 focuses on only open-domain closed-class questions (fact-based short answers). 
At future TREC events, the QA track continues to produce datasets for training and evaluation.

2002: It's helpful to identify the type of question being asked. Li and Roth propose a machine learning approach to **question classification**. Such a classification imposes constraints on potential answers. Due to ambiguity, their model allows for multiple classes for a single question. For example, "What do bats eat?" could belong to three classes: food, plant, animal. The features used for learning include words, POS tags, chunks, head chunks, named entities, semantically related words, n-grams, and relations.

2010: After about three years of effort, IBM Watson competes at human expert levels in terms of precision, confidence and speed at the *Jeopardy!* quiz show. Its *DeepQA* architecture integrates many content sources and NLP techniques. Answer candidates come with confidence measures. They're then scored using supporting evidence. Watson wins *Jeopardy!* in February 2011.

Dec 2014: Yu et al. look at the specific task of answer selection. Using **distributed representations**, they look for answers that are semantically similar to the question. This is a departure from a classification approach that uses hand-crafted syntactic and semantic features. They use a bigram model with a convolutional layer and an average pooling layer. These capture syntactic structures and long-term dependencies without relying on external parse trees.

Jul 2017: Chen et al. use Wikipedia as the knowledge source for open-domain QA. Answers are predicted as text spans. Earlier research typically considered a short piece of already identified text. Since the present approach searches over multiple large documents, they call it "machine reading at scale". Called *DrQA*, this system integrates document retrieval and document reading. Bigram features and bag-of-words weighted with TF-IDF are used for retrieval. The reader uses a BiLSTM each for the question and the passages, with attention between the two.

Oct 2018: Researchers at Google release **BERT**, which is trained on 3.3 billion words of unlabelled text. BERT is a pre-trained language model. As a sample task, they fine-tune BERT for question answering. SQuAD v1.1 and v2.0 datasets are used. The question and the text containing the answer are concatenated to form the input sequence. Start and end tokens of the answer are predicted using softmax. For questions without answers, the start/end tokens point to the `[CLS]` token.

Jan 2019: Google releases the *Natural Questions (NQ)* dataset. It has 300K pairs plus 16K questions with answers from five different annotators. The answer comes from a Wikipedia page and the model is required to read the entire page. The questions themselves are based on real, anonymized, aggregated queries from Google Search. Answers can be yes/no, long, long and short, or no answer.

2019: On the SQuAD 2.0 dataset, many implementations start surpassing human performance. Many of these are based on the **transformer neural network architecture**, including BERT, RoBERTa, XLNet, and ALBERT. Let's note that SQuAD 2.0 combines 100K questions from SQuAD 1.1 plus 50K unanswerable questions. When there's no answer, models are required to abstain from answering.

Jun 2019: Since datasets are available only for some domains and languages, Lewis et al. propose a method to synthesize questions to train QA models. Passages are randomly selected from documents. Random noun phrases or named entities are picked as answers. "Fill-in-the-blanks" questions are generated.
Using neural machine translation (NMT), these are converted into natural questions.

Feb 2020: Google Research releases *TyDi QA*, a typologically diverse multilingual dataset. It has 200K question-answer pairs from 11 languages. To avoid shared words in a pair, a human was asked to frame a question when they didn't know the answer. Google Search identified a suitable Wikipedia article to answer the question. The person then marked the answer. Researchers expect their model to generalize well to many languages.
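As a concrete illustration of the extractive, transformer-based QA described above (BERT-style models fine-tuned on SQuAD), here is a minimal, hedged sketch using the Hugging Face transformers pipeline. The model checkpoint named below is simply one publicly available SQuAD-fine-tuned example and is an assumption, not something prescribed by the article.

```python
from transformers import pipeline

# Any extractive QA checkpoint fine-tuned on SQuAD would work here.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

result = qa(
    question="Where is Chichen Itza?",
    context="Chichen Itza was a large pre-Columbian city built by the Maya "
            "people. It is located in Yucatan State, Mexico.",
)
print(result["answer"], result["score"])  # predicted text span and confidence
```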
{ "title": "Question Answering", "href": "question-answering" }
# Inter-Service Communication for Microservices

## Summary

In a monolithic application, all parts of the app access a shared database. Each part can easily invoke the functionality of another part. In a microservices architecture, an app is composed of many microservices, each potentially managing its own database. What happens if one service requires data or processing from another service? This is not as trivial or efficient as in a monolithic application.

Inter-service communication (ISC) is an important consideration when designing a microservices-based application. A badly designed app can result in a lot of communication among the different services, resulting in a *chatty app* or *chatty I/O*. Communication can be reduced by having fewer microservices, but this replaces the monolith with smaller monoliths. The goal is therefore to achieve a balance by following good design principles and patterns.

## Discussion

### Why do microservices need to communicate with one another?

In the traditional approach to building monolithic applications, a lot of communication was internal to the app. Such communication was often local, fast and easily manageable. When designing a microservices architecture, we break up the monolith into independent parts, each of which has a well-defined role. While each microservice can be deployed and scaled independently, none of them delivers the full value of the application. A microservice will often require data managed by another, or require the services of another.

For example, consider a taxi booking application. Trip management and passenger management are separate microservices, but a trip cannot be initiated without some knowledge or authentication of the passenger. Hence these two independent microservices, each performing its specific role, will still need to communicate.

While the microservices architecture has brought benefits to building large-scale applications, it has also exposed the communication across microservices. Complexity that was previously hidden is now visible. Dozens or even hundreds of microservices that make up an app must be "wired together" properly to make the whole thing work.

### What are the different types of inter-service communication for microservices?

Broadly, there are two types:

+ **Synchronous**: Client sends a request and waits for a response. Client code execution itself may prefer to receive the response via a callback (thread is not blocked) or wait (thread is blocked). Either way, the communication with the external world is synchronous.
+ **Asynchronous**: Client sends a request and doesn't wait for a response.

With synchronous communication protocols, the receiver has to be available to send back the response. From an application perspective, synchronous communication implies a less responsive user experience since we have to wait for the response. If one of the services in a chain of synchronous requests delays its response, the entire call flow gets delayed.

With asynchronous communication protocols, the request (often called a message) is typically sent to a message queue. Even if the receiver is not available, the message will remain in the queue and can be processed at a later time. Even if a service fails to respond, the original asynchronous call is not affected since it's not waiting for a response.

### What protocols and data formats are suitable for inter-service communication for microservices?

**HTTP/HTTPS** is a synchronous protocol. Often service APIs are exposed as REST endpoints.
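As a hedged sketch of such a synchronous REST interaction, consider the taxi booking example from earlier. The service names, port and route below are illustrative assumptions (Flask and requests are assumed to be available); this is not a prescribed implementation.

```python
# passenger_service.py -- hypothetical passenger-management service
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/passengers/<passenger_id>")
def get_passenger(passenger_id):
    # A real service would read this from its own database.
    return jsonify({"id": passenger_id, "name": "Alice", "verified": True})

if __name__ == "__main__":
    app.run(port=5001)
```

```python
# trip_service.py -- trip-management service calling the passenger service
import requests

def create_trip(passenger_id):
    # Synchronous call: this code blocks until the passenger service responds.
    resp = requests.get(f"http://localhost:5001/passengers/{passenger_id}",
                        timeout=2)
    resp.raise_for_status()
    passenger = resp.json()
    if not passenger["verified"]:
        raise ValueError("passenger could not be verified")
    return {"trip_id": "t-1001", "passenger_id": passenger["id"]}
```

If the passenger service is slow or down, the trip service is directly affected; that coupling is exactly what the asynchronous options below try to avoid.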
**AMQP** and **MQTT** are examples of asynchronous protocols. To manage the queue, we can use a **message broker** such as RabbitMQ. Instead of a message queue, we can also use an **event bus** for updating data asynchronously.

Synchronous protocols are usually limited to one-to-one interactions. Asynchronous protocols have more options: **one-to-one** (notifications), **one-to-many** (publish/subscribe), or even responses coming back asynchronously. For example, a user sends a tweet on her Twitter account that has many followers. This is an example of the one-to-many publish/subscribe model.

The data that these protocols carry must be formatted in a manner understood by all. **Text-based formats** include JSON and XML. XML in particular is very verbose. Therefore, some implementations may prefer **binary formats**: MessagePack, Thrift, ProtoBuf, Avro. Note that these are well-known and popular formats, and using them enables easier integration of microservices. It's also possible (but not preferred) to use a proprietary non-standard format internal to an application.

### What design patterns are available for inter-service communication for microservices?

Here are a few patterns to note:

+ **Saga Pattern**: A sequence of transactions, each one local to its database. A microservice triggers another via an event or message. If something fails, reverse operations undo the changes.
+ **API Composition**: Since table joins are not possible across databases, a dedicated service (or API gateway) coordinates the "joins", which are now at the application layer rather than within the database.
+ **Command Query Responsibility Segregation (CQRS)**: Services keep materialized views based on data from multiple services. These views are updated via subscribed events. CQRS separates the writes (commands) from the reads (queries).
+ **Event Sourcing**: Rather than store state, we store events. State may be computed from these events as desired. This is often used with CQRS: write events but derive states by replaying the events.
+ **Orchestration**: A central controller or orchestrator coordinates interactions across microservices. API Composition is a form of orchestration.
+ **Choreography**: Each microservice knows what to do when an event occurs; events are posted on an event stream/bus.
+ **Service Mesh**: Push application networking functions down to the infrastructure rather than mixing them with business logic.

### Could you compare orchestration and choreography?

Orchestration is a centralized approach. Calls are often synchronous: the orchestrator calls service A, waits for the response, then calls service B, and so on. This is good if service B depends on data from service A. However, if service A is down, service B can't be called. By coupling B with A, we've created a dependency. The orchestrator also becomes a single point of failure.

Choreography enables peer-to-peer interactions without a centralized controller. It's more flexible and scalable than the orchestration approach. It's event-driven architecture applied to microservices. The logic of handling an event is built into the microservice. Choreography is asynchronous and non-blocking. The patterns CQRS and Event Sourcing are applicable to choreography.

There are also hybrid approaches where a service orchestrates a few services while it interacts with others via an event stream. In another approach, an orchestrator emits events for other services and consumes response events asynchronously from the event stream for further processing.
To conclude,

> Orchestration is about control whereas choreography is about interactions.

### How do we handle service API calls that fail?

The simplest solution is to **retry** after a specified timeout. A maximum number of retries can be attempted. However, if the operation is not idempotent (that is, it changes application state), then retry is not a safe recovery method.

The other approach is to use a **circuit breaker**. Many failed requests can result in a bottleneck, and there's no point sending further requests. This is where we "open the circuit" to prevent further requests to a service that's not responding.

We can also proactively reduce the chances of failure by **load balancing** requests. A request must be processed by a service instance, and we can select an instance that has less load. Container orchestrators (Kubernetes) or service meshes (Istio) enable this.

### Are there any best practices for defining a service API?

Microservices must be designed to be independent of one another. One approach is to use **Domain-Driven Design**. This talks about understanding the problem space, using design patterns, and refactoring continuously. The API should model the domain. It shouldn't leak internal implementations.

APIs must have well-defined semantics and versioning schemes. A microservice can support multiple versions of an API, or you could have a service for each version.

Public APIs are usually REST over HTTP. Internal APIs can adopt an RPC style, where remote calls can look like local calls. However, they should be designed carefully to avoid chattiness. Consider the trade-off between making many I/O calls and retrieving too much data. Since application state is now distributed across microservices, design for and manage **Eventual Consistency**.

While REST calls may use JSON, RPC calls can be more efficient with binary formats enabled by RPC frameworks such as gRPC, Apache Avro and Apache Thrift. To simplify API design and development, use an **Interface Definition Language (IDL)**. This will generate client code, serialization code and API documentation.

### What are some anti-patterns to avoid when microservices need to communicate?

Sharing a database across many microservices is an anti-pattern, since this introduces tighter coupling. A single data model for all microservices is another. Using synchronous protocols across many microservices increases latencies and makes your app brittle to failures. If microservices are not properly defined, this may result in chatty I/O that affects performance and responsiveness.

An application may depend on hundreds of shared libraries. In the spirit of code reuse, all microservices may be relying on these libraries. This results in another anti-pattern called the **distributed monolith**. Reusing code within a domain or service boundary is fine, but anything beyond that is coupling. This form of coupling is worse than code duplication. Shared libraries can be considered, but not made mandatory, in some areas: logging, tracing, routing.

It's not required that every event should contain full data, particularly when consumers are going to use only some of it. Consider sending essential data and URLs pointing to additional data. To communicate, consider REST plus its alternatives such as messaging and event sourcing.

### What tools can I use to implement inter-service communication for microservices?

Among the message brokers are RabbitMQ, Kafka, ActiveMQ, and Kestrel.
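As a hedged sketch of queue-based asynchronous messaging with one of these brokers, the following publishes an event to RabbitMQ using the pika client. The queue name, event fields and localhost connection are illustrative assumptions.

```python
import json
import pika  # RabbitMQ client for Python

# Producer side: publish an event and move on without waiting for consumers.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="trip_events")  # hypothetical queue name

event = {"type": "TripCreated", "trip_id": "t-1001", "passenger_id": "p-42"}
channel.basic_publish(exchange="", routing_key="trip_events",
                      body=json.dumps(event))
connection.close()

# A consumer service would separately call channel.basic_consume() on the same
# queue and process messages whenever it is available, even if it was down
# when the event was published.
```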
Cloud providers offer their own messaging systems such as Amazon SQS, Google Cloud Pub/Sub, and Firebase Cloud Messaging (FCM). Microsoft Azure offers Event Grid, Event Hubs and Service Bus for messaging. NATS, a CNCF project, is an open source messaging system.

Istio enables service mesh technology. Azure Service Fabric Mesh is an alternative. These rely on Envoy as the networking proxy. Similar proxies include Linkerd and Cilium. Conduit is a service mesh designed for Kubernetes. Netflix's Conductor can help with orchestration.

For logging and monitoring, we have Retrace, Logstash, Graylog, and Jaeger. OpenTracing is an API that enables distributed tracing. The circuit breaker pattern can be implemented with Netflix's Hystrix. To define service APIs, Swagger can be used. In Java, REST-based microservices can be created with Spring Boot.

## Milestones

2004: Eric Evans publishes *Domain-Driven Design: Tackling Complexity in the Heart of Software*. The book relates directly to object-oriented programming. Only later would its relevance to microservices be understood.

2007: Michael Nygard explains the **circuit breaker** pattern in his book *Release It!: Design and Deploy Production-Ready Software*. This helps us build fault-tolerant systems. In 2011, Netflix invents the Hystrix framework that includes the circuit breaker pattern.

2009: Netflix embraces an API-driven architecture that affects both development and operations. This is today seen as the birth of microservices.

2010: At the start of this decade, the three-tier model (web tier, app tier, database tier) is found to break under heavy load. Microservices come to the rescue, but they also bring problems relating to inter-service communication. Companies introduce libraries that are built into microservices to handle the networking aspects: Google's Stubby, Netflix's Hystrix, Twitter's Finagle. This however introduces coupling. A few years later, these evolve into the networking proxy and the **service mesh**.

Jan 2016: Version 0.0.7 of **Linkerd** is open sourced on GitHub. It's based on Twitter's Finagle and Netty. This is one of the early beginnings of a service mesh. Likewise, **Istio** version 0.1.0 is released as an alpha version in May 2017.
{ "title": "Inter-Service Communication for Microservices", "href": "inter-service-communication-for-microservices" }
End of preview.

Dataset Card for Devopedia

Waifu Husbando to catch your attention.

Dataset Description

This dataset is a ~1.15M-token (llama-2-7b-chat tokenizer) / ~999.32K-token (RWKV tokenizer) scrape of Devopedia. It serves as a training resource for large language models and other NLP tasks. This card details the dataset's origin, content, and limitations.

  • Curated by: KaraKaraWitch
  • Funded by: Recursal.ai (I work there lol)
  • Shared by: KaraKaraWitch
  • Language(s) (NLP): English
  • License: cc-by-sa-4.0

Devopedia was created under time constraints for the release of EagleX v1, and may contain biases in selection.

Supported Tasks and Leaderboards

Primarily used for language modeling.

Languages

While the dataset is focused on English, keep in mind that other languages may be present as well.

Processing and Filtering

We scraped Devopedia for a list of articles, writing them to a compiled file, dev_index.json, before scraping each individual article for its page contents.

The article contents are then split by section. Each section is converted to Markdown, including the appropriate title. No filtering was done over the dataset.
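A rough sketch of this kind of scrape-and-convert step is shown below. The URL, HTML selectors and output filename are assumptions made for illustration and do not describe the exact scraper used.

import json
import requests
from bs4 import BeautifulSoup

# Hypothetical article URL; the real article list comes from dev_index.json.
url = "https://devopedia.org/question-answering"
html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# Assumed page structure: take the page title and visible paragraph text.
title = soup.find("h1").get_text(strip=True)
body = "\n\n".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))
record = {"text": f"# {title}\n\n{body}"}

# Assumed output filename; one JSON object per line.
with open("devopedia.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")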

Data Instances

Refer to the following sample:

{
    "text": "# Hypothesis Testing and Types of Errors\n\n## Summary\n\n\nSuppose we want to study income of a population. We study a sample from the population and draw conclusions. The sample should represent the population for our study to be a reliable one.\n\n**Null hypothesis** (H0)(H\\_0) is that sample represents population. Hypothesis testing provides us with framework to conclude if we have sufficient evidence to either accept or reject null hypothesis. \n\nPopulation characteristics are either assumed or drawn from third-party sources or judgements by subject matter experts. Population data and sample data are characterised by moments of its distribution (mean, variance, skewness and kurtosis). We test null hypothesis for equality of moments where population characteristic is available and conclude if sample represents population.\n\nFor example, given only mean income of population, <TRUNCATED...>"
}

Data Keys

Each JSON line is a dictionary with a single text key containing a string.
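A minimal sketch of reading the records, assuming the data is stored as JSON Lines (the filename below is an assumption; substitute the actual data file from this repository):

import json

with open("devopedia.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)      # {"text": "..."}
        print(record["text"][:80])     # each record exposes a single "text" field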

Recursal's Vision

To make AI accessible to everyone, regardless of language or economic status

This is the collective goal of the RWKV Open Source foundation and Recursal AI, the commercial entity who backs it.

We believe that AI should not be controlled by a select few organizations, and that it should be made accessible regardless of whether you are rich or poor, or a native speaker of English.

About RWKV

RWKV is an open-source, non-profit group under the Linux Foundation, focused on developing the RWKV AI architecture in accordance with our vision.

The RWKV architecture scales efficiently and economically. As an RNN and Transformer hybrid, it provides performance similar to leading transformer models while having the compute and energy efficiency of an RNN-based architecture.

You can find out more about the project, and the latest models, at the following:

About Recursal AI

Recursal AI is the commercial entity built to provide support for RWKV model development and users, while providing commercial services via its public cloud, or private-cloud / on-premise offerings.

As part of our vision, our commitment is to ensure open-source development of, and access to, the best foundational AI models and datasets.

The datasets/models provided here are part of that commitment.

You can find out more about Recursal AI here

Dataset Curators

KaraKaraWitch. (I typically hang out in the PygmalionAI Discord, sometimes EleutherAI. If something is wrong, ping @karakarawitch on Discord.)

I'd be happy if you could spread the word and recommend this dataset.

Licensing Information

Devopedia lists its content as licensed under CC BY-SA.

Recursal Waifus [Husbandos] (the banner image) are licensed under CC-BY-SA. They do not represent the related websites in any official capacity unless otherwise stated or announced by the website. You may use them as a banner image. However, you must always link back to the dataset.

Citation Information

@misc{Devopedia,
  title         = {Devopedia},
  author        = {KaraKaraWitch, recursal.ai},
  year          = {2024},
  howpublished  = {\url{https://huggingface.co/datasets/recursal/Devopedia}},
}