content_id | page_title | section_title | breadcrumb | text |
---|---|---|---|---|
c_yyf8a91sb8zd | Uniqueness case | Summary | Uniqueness_case | In mathematical finite group theory, the uniqueness case is one of the three possibilities for groups of characteristic 2 type given by the trichotomy theorem. The uniqueness case covers groups G of characteristic 2 type with e(G) ≥ 3 that have an almost strongly p-embedded maximal 2-local subgroup for all primes p whose 2-local p-rank is sufficiently large (usually at least 3). Aschbacher (1983a, 1983b) proved that there are no finite simple groups in the uniqueness case. |
c_wiuqw5s7m1eg | Vertex of a representation | Summary | Vertex_of_a_representation | In mathematical finite group theory, the vertex of a representation of a finite group is a subgroup associated to it, that has a special representation called a source. Vertices and sources were introduced by Green (1958–1959). |
c_ak0r69vi6z84 | No free lunch theorem | Summary | No_free_lunch_theorem | In mathematical folklore, the "no free lunch" (NFL) theorem (sometimes pluralized) of David Wolpert and William Macready alludes to the saying "no such thing as a free lunch"; that is, there are no easy shortcuts to success. It appeared in the 1997 paper "No Free Lunch Theorems for Optimization". Wolpert had previously derived no free lunch theorems for machine learning (statistical inference). In 2005, Wolpert and Macready themselves indicated that the first theorem in their paper "states that any two optimization algorithms are equivalent when their performance is averaged across all possible problems". The "no free lunch" (NFL) theorem is an easily stated and easily understood consequence of theorems Wolpert and Macready actually prove. It is weaker than the proven theorems, and thus does not encapsulate them. Various investigators have extended the work of Wolpert and Macready substantively. In terms of how the NFL theorem is used in the context of the research area, no free lunch in search and optimization is a field dedicated to the mathematical analysis of data for statistical identity, particularly in search and optimization. While some scholars argue that NFL conveys important insight, others argue that NFL is of little relevance to machine learning research. |
c_zmcv9slvsx6y | Kimeme | Algorithm design | Kimeme > Features > Algorithm design | In mathematical folklore, the no free lunch theorem (sometimes pluralized) of David Wolpert and William G. Macready appears in the 1997 paper "No Free Lunch Theorems for Optimization". This mathematical result states the need for a specific effort in the design of a new algorithm, tailored to the specific problem to be optimized. Kimeme allows the design and experimentation of new optimization algorithms through the new paradigm of memetic computing, a subfield of computational intelligence which studies algorithmic structures composed of multiple interacting and evolving modules (memes). |
c_y178lgcbs02q | Minus-plus sign | In mathematics | Plus–minus_sign > Usage > In mathematics | In mathematical formulas, the ± symbol may be used to indicate a symbol that may be replaced by either the plus or the minus sign, + or −, allowing the formula to represent two values or two equations. If x^2 = 9, one may give the solution as x = ±3. This indicates that the equation has two solutions: x = +3 and x = −3. A common use of this notation is found in the quadratic formula {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}},} which describes the two solutions to the quadratic equation ax^2 + bx + c = 0. Similarly, the trigonometric identity {\displaystyle \sin(A\pm B)=\sin(A)\cos(B)\pm \cos(A)\sin(B)} can be interpreted as a shorthand for two equations: one with + on both sides of the equation, and one with − on both sides. |
c_et1ak5c099j0 | Minus-plus sign | In mathematics | Plus–minus_sign > Usage > In mathematics | {\displaystyle \sin \left(x\right)=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots \pm {\frac {1}{(2n+1)!}}x^{2n+1}+\cdots ~.} |
c_nnkx2ii5jj9r | Minus-plus sign | In mathematics | Plus–minus_sign > Usage > In mathematics | Here, the plus-or-minus sign indicates that the term may be added or subtracted depending on whether n is odd or even; a rule which can be deduced from the first few terms. A more rigorous presentation would multiply each term by a factor of (−1)^n, which gives +1 when n is even, and −1 when n is odd. |
c_t5rhdpv0dla3 | Minus-plus sign | In mathematics | Plus–minus_sign > Usage > In mathematics | In older texts one occasionally finds (−)^n, which means the same. When the standard presumption that the plus-or-minus signs all take on the same value of +1 or all −1 is not true, then the line of text that immediately follows the equation must contain a brief description of the actual connection, if any, most often of the form "where the ‘±’ signs are independent" or similar. If a brief, simple description is not possible, the equation must be re-written to provide clarity; e.g. by introducing variables such as s1, s2, ... and specifying a value of +1 or −1 separately for each, or some appropriate relation, like {\displaystyle s_{3}=s_{1}\cdot (s_{2})^{n}\,,} or similar. |
c_93zq8lrn6u68 | Penrose inequality | Summary | Penrose_inequality | In mathematical general relativity, the Penrose inequality, first conjectured by Sir Roger Penrose, estimates the mass of a spacetime in terms of the total area of its black holes and is a generalization of the positive mass theorem. The Riemannian Penrose inequality is an important special case. Specifically, if (M, g) is an asymptotically flat Riemannian 3-manifold with nonnegative scalar curvature and ADM mass m, and A is the area of the outermost minimal surface (possibly with multiple connected components), then the Riemannian Penrose inequality asserts {\displaystyle m\geq {\sqrt {\frac {A}{16\pi }}}.} |
c_f1n0z6ykaaxg | Penrose inequality | Summary | Penrose_inequality | This is purely a geometrical fact, and it corresponds to the case of a complete three-dimensional, space-like, totally geodesic submanifold of a (3 + 1)-dimensional spacetime. Such a submanifold is often called a time-symmetric initial data set for a spacetime. The condition of (M, g) having nonnegative scalar curvature is equivalent to the spacetime obeying the dominant energy condition. |
c_lsirgjald43t | Penrose inequality | Summary | Penrose_inequality | This inequality was first proved by Gerhard Huisken and Tom Ilmanen in 1997 in the case where A is the area of the largest component of the outermost minimal surface. Their proof relied on the machinery of weakly defined inverse mean curvature flow, which they developed. In 1999, Hubert Bray gave the first complete proof of the above inequality using a conformal flow of metrics. Both of the papers were published in 2001. |
c_q789s4bz4c5i | Bernstein algebra | Summary | Bernstein_problem_in_mathematical_genetics | In mathematical genetics, a genetic algebra is a (possibly non-associative) algebra used to model inheritance in genetics. Some variations of these algebras are called train algebras, special train algebras, gametic algebras, Bernstein algebras, copular algebras, zygotic algebras, and baric algebras (also called weighted algebras). The study of these algebras was started by Ivor Etherington (1939). In applications to genetics, these algebras often have a basis corresponding to the genetically different gametes, and the structure constants of the algebra encode the probabilities of producing offspring of various types. The laws of inheritance are then encoded as algebraic properties of the algebra. For surveys of genetic algebras see Bertrand (1966), Wörz-Busekros (1980) and Reed (1997). |
c_qztcsv96vslb | Higman-Sims graph | Summary | Higman-Sims_graph | In mathematical graph theory, the Higman–Sims graph is a 22-regular undirected graph with 100 vertices and 1100 edges. It is the unique strongly regular graph srg(100,22,0,6), where no neighboring pair of vertices share a common neighbor and each non-neighboring pair of vertices share six common neighbors. It was first constructed by Mesner (1956) and rediscovered in 1968 by Donald G. Higman and Charles C. Sims as a way to define the Higman–Sims group, a subgroup of index two in the group of automorphisms of the Higman–Sims graph. |
c_n1nvd2wbi5o0 | 3-transposition group | Summary | 3-transposition_group | In mathematical group theory, a 3-transposition group is a group generated by a conjugacy class of involutions, called the 3-transpositions, such that the product of any two involutions from the conjugacy class has order at most 3. They were first studied by Bernd Fischer (1964, 1970, 1971) who discovered the three Fischer groups as examples of 3-transposition groups. |
c_vevaozxmv6ym | C-group | Summary | C-group | In mathematical group theory, a C-group is a group such that the centralizer of any involution has a normal Sylow 2-subgroup. They include as special cases CIT-groups, where the centralizer of any involution is a 2-group, and TI-groups, where any Sylow 2-subgroups have trivial intersection. The simple C-groups were determined by Suzuki (1965), and his classification is summarized by Gorenstein (1980, 16.4). The classification of C-groups was used in Thompson's classification of N-groups. The simple C-groups are: the projective special linear groups PSL2(p) for p a Fermat or Mersenne prime; the projective special linear groups PSL2(9); the projective special linear groups PSL2(2^n) for n ≥ 2; the projective special linear groups PSL3(q) for q a prime power; the Suzuki groups Sz(2^(2n+1)) for n ≥ 1; and the projective unitary groups PU3(q) for q a prime power. |
c_c87kg8n4g9vi | Demushkin group | Summary | Demushkin_group | In mathematical group theory, a Demushkin group (also written as Demuškin or Demuskin) is a pro-p group G having certain properties relating to duality in group cohomology. More precisely, G must be such that the first cohomology group with coefficients in Fp = Z/pZ has finite rank, the second cohomology group has rank 1, and the cup product induces a non-degenerate pairing H1(G,Fp) × H1(G,Fp) → H2(G,Fp). Such groups were introduced by Demuškin (1959). Demushkin groups occur as the Galois groups of the maximal p-extensions of local number fields containing all p-th roots of unity. |
c_exnt22hj9qiu | Normal p-complement | Summary | P-nilpotent_group | In mathematical group theory, a normal p-complement of a finite group for a prime p is a normal subgroup of order coprime to p and index a power of p. In other words the group is a semidirect product of the normal p-complement and any Sylow p-subgroup. A group is called p-nilpotent if it has a normal p-complement. |
c_ppjvpsbonm83 | Special abelian subgroup | Summary | Special_abelian_subgroup | In mathematical group theory, a subgroup of a group is termed a special abelian subgroup or SA-subgroup if the centralizer of any nonidentity element in the subgroup is precisely the subgroup (Curtis & Reiner 1981, p. 354). Equivalently, an SA subgroup is a centrally closed abelian subgroup. Any SA subgroup is a maximal abelian subgroup, that is, it is not properly contained in another abelian subgroup. For a CA group, the SA subgroups are precisely the maximal abelian subgroups. SA subgroups are known for certain characters associated with them, termed exceptional characters. |
c_k9515ivgpvy5 | Tame group | Summary | Tame_group | In mathematical group theory, a tame group is a certain kind of group defined in model theory. Formally, we define a bad field as a structure of the form (K, T), where K is an algebraically closed field and T is an infinite, proper, distinguished subgroup of K, such that (K, T) is of finite Morley rank in its full language. A group G is then called a tame group if no bad field is interpretable in G. |
c_rvclinxpsdcw | Hall–Higman theorem | Summary | Hall–Higman_theorem | In mathematical group theory, the Hall–Higman theorem, due to Philip Hall and Graham Higman (1956, Theorem B), describes the possibilities for the minimal polynomial of an element of prime power order for a representation of a p-solvable group. |
c_fis02p4wqglx | Schur cover | Summary | Schur_multiplier | In mathematical group theory, the Schur multiplier or Schur multiplicator is the second homology group {\displaystyle H_{2}(G,\mathbb {Z} )} of a group G. It was introduced by Issai Schur (1904) in his work on projective representations. |
c_pjehhdga13xl | Automorphism group of a free group | Summary | Automorphism_group_of_a_free_group | In mathematical group theory, the automorphism group of a free group is a discrete group of automorphisms of a free group. The quotient by the inner automorphisms is the outer automorphism group of a free group, which is similar in some ways to the mapping class group of a surface. |
c_gfo6ooudq94p | Balance theorem | Summary | Balance_theorem | In mathematical group theory, the balance theorem states that if G is a group with no core then G either has disconnected Sylow 2-subgroups or it is of characteristic 2 type or it is of component type (Gorenstein 1983, p. 7). The significance of this theorem is that it splits the classification of finite simple groups into three major subcases. |
c_5nd6rbtrxlby | Root data | Summary | Root_datum | In mathematical group theory, the root datum of a connected split reductive algebraic group over a field is a generalization of a root system that determines the group up to isomorphism. Root data were introduced by Michel Demazure in SGA III, published in 1970. |
c_nkwvo4tkbocr | Green's function number | Summary | Green's_function_number | In mathematical heat conduction, the Green's function number is used to uniquely categorize certain fundamental solutions of the heat equation to make existing solutions easier to identify, store, and retrieve. |
c_gx60uskl35uh | Perpetuant | Summary | Perpetuant | In mathematical invariant theory, a perpetuant is informally an irreducible covariant of a form of infinite degree. More precisely, the dimension of the space of irreducible covariants of given degree and weight for a binary form stabilizes provided the degree of the form is larger than the weight of the covariant, and the elements of this space are called perpetuants. Perpetuants were introduced and named by Sylvester (1882, p. 105). MacMahon (1884, 1885, 1894) and Stroh (1890) classified the perpetuants. |
c_mq3j531oid1j | Perpetuant | Summary | Perpetuant | Elliott (1907) describes the early history of perpetuants and gives an annotated bibliography. MacMahon conjectured and Stroh proved that the dimension of the space of perpetuants of degree n > 2 and weight w is the coefficient of x^w in {\displaystyle {\frac {x^{2^{n-1}-1}}{(1-x^{2})(1-x^{3})\cdots (1-x^{n})}}} For n = 1 there is just one perpetuant, of weight 0, and for n = 2 the number is given by the coefficient of x^w in x^2/(1−x^2). There are very few papers after about 1910 discussing perpetuants; (Littlewood 1944) is one of the few exceptions. (Kraft & Procesi 2020) exhibited an explicit basis of the space of perpetuants. |
c_a2ltk9yo1rrm | Transvectant | Summary | Transvectant | In mathematical invariant theory, a transvectant is an invariant formed from n invariants in n variables using Cayley's Ω process. |
c_7kdx9ojzebg7 | Evectant | Summary | Evectant | In mathematical invariant theory, an evectant is a contravariant constructed from an invariant by acting on it with a differential operator called an evector. Evectants and evectors were introduced by Sylvester (1854, p.95). |
c_tjsfo7g8ztzm | Invariant of a binary form | Summary | Invariants_of_binary_form | In mathematical invariant theory, an invariant of a binary form is a polynomial in the coefficients of a binary form in two variables x and y that remains invariant under the special linear group acting on the variables x and y. |
c_8uaognnplxym | Canonizant | Summary | Canonizant | In mathematical invariant theory, the canonizant or canonisant is a covariant of forms related to a canonical form for them. |
c_mw1d7w456sqx | Catalecticant | Summary | Catalecticant | In mathematical invariant theory, the catalecticant of a form of even degree is a polynomial in its coefficients that vanishes when the form is a sum of an unusually small number of powers of linear forms. It was introduced by Sylvester (1852); see Miller (2010). The word catalectic refers to an incomplete line of verse, lacking a syllable at the end or ending with an incomplete foot. |
c_pry42kdmesja | Osculant | Summary | Osculant | In mathematical invariant theory, the osculant or tacinvariant or tact invariant is an invariant of a hypersurface that vanishes if the hypersurface touches itself, or an invariant of several hypersurfaces that osculate, meaning that they have a common point where they meet to unusually high order. |
c_hdn6e4ykd1f2 | 74 knot | Summary | 74_knot | In mathematical knot theory, 74 is the name of a 7-crossing knot which can be visually depicted in a highly-symmetric form, and so appears in the symbolism and/or artistic ornamentation of various cultures. |
c_y4yzmru03fp6 | Conway sphere | Summary | Conway_sphere | In mathematical knot theory, a Conway sphere, named after John Horton Conway, is a 2-sphere intersecting a given knot or link in a 3-manifold transversely in four points. In a knot diagram, a Conway sphere can be represented by a simple closed curve crossing four points of the knot, the cross-section of the sphere; such a curve does not always exist for an arbitrary knot diagram of a knot with a Conway sphere, but it is always possible to choose a diagram for the knot in which the sphere can be depicted in this way. A Conway sphere is essential if it is incompressible in the knot complement. Sometimes, this condition is included in the definition of Conway spheres. |
c_l5bcz1hcs78g | Link (knot theory) | Summary | Link_(knot_theory) | In mathematical knot theory, a link is a collection of knots which do not intersect, but which may be linked (or knotted) together. A knot can be described as a link with one component. Links and knots are studied in a branch of mathematics called knot theory. Implicit in this definition is that there is a trivial reference link, usually called the unlink, but the word is also sometimes used in context where there is no notion of a trivial link. |
c_53y99gt7o86j | Link (knot theory) | Summary | Link_(knot_theory) | For example, a co-dimension 2 link in 3-dimensional space is a subspace of 3-dimensional Euclidean space (or often the 3-sphere) whose connected components are homeomorphic to circles. The simplest nontrivial example of a link with more than one component is called the Hopf link, which consists of two circles (or unknots) linked together once. The circles in the Borromean rings are collectively linked despite the fact that no two of them are directly linked. The Borromean rings thus form a Brunnian link and in fact constitute the simplest such link. |
c_ogips1fqd5mb | Hopf link | Summary | Hopf_link | In mathematical knot theory, the Hopf link is the simplest nontrivial link with more than one component. It consists of two circles linked together exactly once, and is named after Heinz Hopf. |
c_q71ykl3gofzh | Equisatisfiability | Summary | Equisatisfiability | In mathematical logic (a subtopic within the field of formal logic), two formulae are equisatisfiable if the first formula is satisfiable whenever the second is and vice versa; in other words, either both formulae are satisfiable or both are not. Equisatisfiable formulae may disagree, however, for a particular choice of variables. As a result, equisatisfiability is different from logical equivalence, as two equivalent formulae always have the same models. |
c_vrkl2kut5feg | Equisatisfiability | Summary | Equisatisfiability | Whereas within equisatisfiable formulae, only the primitive proposition the formula imposes is valued. Equisatisfiability is generally used in the context of translating formulae, so that one can define a translation to be correct if the original and resulting formulae are equisatisfiable. Examples of translations involving this concept are Skolemization and some translations into conjunctive normal form. |
c_ernahugexvqf | Valuation (logic) | Mathematical logic | Valuation_(logic) > Mathematical logic | In mathematical logic (especially model theory), a valuation is an assignment of truth values to formal sentences that follows a truth schema. Valuations are also called truth assignments. In propositional logic, there are no quantifiers, and formulas are built from propositional variables using logical connectives. |
c_2vni3jgt2pdu | Valuation (logic) | Mathematical logic | Valuation_(logic) > Mathematical logic | In this context, a valuation begins with an assignment of a truth value to each propositional variable. This assignment can be uniquely extended to an assignment of truth values to all propositional formulas. In first-order logic, a language consists of a collection of constant symbols, a collection of function symbols, and a collection of relation symbols. |
c_i2mz1cwtjw96 | Valuation (logic) | Mathematical logic | Valuation_(logic) > Mathematical logic | Formulas are built out of atomic formulas using logical connectives and quantifiers. A structure consists of a set (domain of discourse) that determines the range of the quantifiers, along with interpretations of the constant, function, and relation symbols in the language. Corresponding to each structure is a unique truth assignment for all sentences (formulas with no free variables) in the language. |
c_fzrmhkzdb9nu | Resolution inference | Summary | First-order_resolution | In mathematical logic and automated theorem proving, resolution is a rule of inference leading to a refutation-complete theorem-proving technique for sentences in propositional logic and first-order logic. For propositional logic, systematically applying the resolution rule acts as a decision procedure for formula unsatisfiability, solving the (complement of the) Boolean satisfiability problem. For first-order logic, resolution can be used as the basis for a semi-algorithm for the unsatisfiability problem of first-order logic, providing a more practical method than one following from Gödel's completeness theorem. The resolution rule can be traced back to Davis and Putnam (1960); however, their algorithm required trying all ground instances of the given formula. This source of combinatorial explosion was eliminated in 1965 by John Alan Robinson's syntactical unification algorithm, which allowed one to instantiate the formula during the proof "on demand" just as far as needed to keep refutation completeness. The clause produced by a resolution rule is sometimes called a resolvent. |
c_kw6bfezbgqeq | ⊢ | Summary | Turnstile_(symbol) | In mathematical logic and computer science the symbol ⊢ ( ⊢ {\displaystyle \vdash } ) has taken the name turnstile because of its resemblance to a typical turnstile if viewed from above. It is also referred to as tee and is often read as "yields", "proves", "satisfies" or "entails". |
c_elyf3hdf76bu | Gabbay's separation theorem | Summary | Gabbay's_separation_theorem | In mathematical logic and computer science, Gabbay's separation theorem, named after Dov Gabbay, states that any arbitrary temporal logic formula can be rewritten in a logically equivalent "past → future" form. I.e. the future becomes what must be satisfied. This form can be used as execution rules; a MetateM program is a set of such rules. |
c_118525g6llis | Mu-recursive function | Summary | Mu-recursive_function | In mathematical logic and computer science, a general recursive function, partial recursive function, or μ-recursive function is a partial function from natural numbers to natural numbers that is "computable" in an intuitive sense – as well as in a formal one. If the function is total, it is also called a total recursive function (sometimes shortened to recursive function). In computability theory, it is shown that the μ-recursive functions are precisely the functions that can be computed by Turing machines (this is one of the theorems that supports the Church–Turing thesis). The μ-recursive functions are closely related to primitive recursive functions, and their inductive definition (below) builds upon that of the primitive recursive functions. |
c_yw3mbukrfglv | Mu-recursive function | Summary | Mu-recursive_function | However, not every total recursive function is a primitive recursive function—the most famous example is the Ackermann function. Other equivalent classes of functions are the functions of lambda calculus and the functions that can be computed by Markov algorithms. The subset of all total recursive functions with values in {0,1} is known in computational complexity theory as the complexity class R. |
c_8cgw3qvm2pxp | Univalence axiom | Summary | Higher_inductive_type | In mathematical logic and computer science, homotopy type theory (HoTT ) refers to various lines of development of intuitionistic type theory, based on the interpretation of types as objects to which the intuition of (abstract) homotopy theory applies. This includes, among other lines of work, the construction of homotopical and higher-categorical models for such type theories; the use of type theory as a logic (or internal language) for abstract homotopy theory and higher category theory; the development of mathematics within a type-theoretic foundation (including both previously existing mathematics and new mathematics that homotopical types make possible); and the formalization of each of these in computer proof assistants. There is a large overlap between the work referred to as homotopy type theory, and as the univalent foundations project. |
c_kqnqpt1xgj7k | Univalence axiom | Summary | Higher_inductive_type | Although neither is precisely delineated, and the terms are sometimes used interchangeably, the choice of usage also sometimes corresponds to differences in viewpoint and emphasis. As such, this article may not represent the views of all researchers in the fields equally. This kind of variability is unavoidable when a field is in rapid flux. |
c_pgwr14muxwi5 | Top type | Summary | Top_type | In mathematical logic and computer science, some type theories and type systems include a top type that is commonly denoted with top or the symbol ⊤. The top type is sometimes also called the universal type, or universal supertype, as all other types in the type system of interest are subtypes of it, and in most cases it contains every possible object of the type system. It is in contrast with the bottom type, or universal subtype, of which every other type is a supertype and which often contains no members at all. |
c_5xuxzguq6qba | Kleene star | Summary | Kleene_star | In mathematical logic and computer science, the Kleene star (or Kleene operator or Kleene closure) is a unary operation, either on sets of strings or on sets of symbols or characters. In mathematics, it is more commonly known as the free monoid construction. The application of the Kleene star to a set V is written as V*. It is widely used for regular expressions, which is the context in which it was introduced by Stephen Kleene to characterize certain automata, where it means "zero or more repetitions". |
c_7v9pxc0djz0i | Kleene star | Summary | Kleene_star | If V is a set of strings, then V* is defined as the smallest superset of V that contains the empty string ε and is closed under the string concatenation operation. If V is a set of symbols or characters, then V* is the set of all strings over symbols in V, including the empty string ε. The set V* can also be described as the set containing the empty string and all finite-length strings that can be generated by concatenating arbitrary elements of V, allowing the use of the same element multiple times. If V is either the empty set ∅ or the singleton set {ε}, then V* = {ε}; if V is any other finite set or countably infinite set, then V* is a countably infinite set. As a consequence, each formal language over a finite or countably infinite alphabet Σ is countable, since it is a subset of the countably infinite set Σ*. The operators are used in rewrite rules for generative grammars. |
c_g7wdu4a6wexk | Calculus of Constructions | Summary | Calculus_of_Inductive_Constructions | In mathematical logic and computer science, the calculus of constructions (CoC) is a type theory created by Thierry Coquand. It can serve as both a typed programming language and as constructive foundation for mathematics. For this second reason, the CoC and its variants have been the basis for Coq and other proof assistants. Some of its variants include the calculus of inductive constructions (which adds inductive types), the calculus of (co)inductive constructions (which adds coinduction), and the predicative calculus of inductive constructions (which removes some impredicativity). |
c_c7m22j7nswnm | Lambda-mu calculus | Summary | Lambda-mu_calculus | In mathematical logic and computer science, the lambda-mu calculus is an extension of the lambda calculus introduced by M. Parigot. It introduces two new operators: the μ operator (which is completely different both from the μ operator found in computability theory and from the μ operator of modal μ-calculus) and the bracket operator. Proof-theoretically, it provides a well-behaved formulation of classical natural deduction. One of the main goals of this extended calculus is to be able to describe expressions corresponding to theorems in classical logic. |
c_uui5zih03bff | Lambda-mu calculus | Summary | Lambda-mu_calculus | According to the Curry–Howard isomorphism, lambda calculus on its own can express theorems in intuitionistic logic only, and several classical logical theorems can't be written at all. However with these new operators one is able to write terms that have the type of, for example, Peirce's law. Semantically these operators correspond to continuations, found in some functional programming languages. |
c_k3b44wxsv67t | Two-variable logic with counting | Summary | Two-variable_logic_with_counting | In mathematical logic and computer science, two-variable logic is the fragment of first-order logic where formulae can be written using only two different variables. This fragment is usually studied without function symbols. |
c_yrkc2q2o60to | Analytical hierarchy | Summary | Analytical_hierarchy | In mathematical logic and descriptive set theory, the analytical hierarchy is an extension of the arithmetical hierarchy. The analytical hierarchy of formulas includes formulas in the language of second-order arithmetic, which can have quantifiers over both the set of natural numbers, ℕ, and over functions from ℕ to ℕ. The analytical hierarchy of sets classifies sets by the formulas that can be used to define them; it is the lightface version of the projective hierarchy. |
c_ws5ljpkjxpvp | Implication graph | Summary | Implication_graph | In mathematical logic and graph theory, an implication graph is a skew-symmetric, directed graph G = (V, E) composed of vertex set V and directed edge set E. Each vertex in V represents the truth status of a Boolean literal, and each directed edge from vertex u to vertex v represents the material implication "If the literal u is true then the literal v is also true". Implication graphs were originally used for analyzing complex Boolean expressions. |
c_w8b47n4kr7nx | Potential isomorphism | Summary | Potential_isomorphism | In mathematical logic and in particular in model theory, a potential isomorphism is a collection of finite partial isomorphisms between two models which satisfies certain closure conditions. Existence of a partial isomorphism entails elementary equivalence, however the converse is not generally true, but it holds for ω-saturated models. |
c_mz23ddu2plqu | Universal Horn theory | Summary | Universal_Horn_theory | In mathematical logic and logic programming, a Horn clause is a logical formula of a particular rule-like form which gives it useful properties for use in logic programming, formal specification, and model theory. Horn clauses are named for the logician Alfred Horn, who first pointed out their significance in 1951. |
c_96qtjxzbzzmi | Completeness (logic) | Summary | Completeness_(logic) | In mathematical logic and metalogic, a formal system is called complete with respect to a particular property if every formula having the property can be derived using that system, i.e. is one of its theorems; otherwise the system is said to be incomplete. The term "complete" is also used without qualification, with differing meanings depending on the context, mostly referring to the property of semantical validity. Intuitively, a system is called complete in this particular sense, if it can derive every formula that is true. |
c_7uj5b2om3zdn | Skolem paradox | Summary | Skolem_paradox | In mathematical logic and philosophy, Skolem's paradox is a seeming contradiction that arises from the downward Löwenheim–Skolem theorem. Thoralf Skolem (1922) was the first to discuss the seemingly contradictory aspects of the theorem, and to discover the relativity of set-theoretic notions now known as non-absoluteness. Although it is not an actual antinomy like Russell's paradox, the result is typically called a paradox and was described as a "paradoxical state of affairs" by Skolem (1922: p. |
c_s33uvyqvvl6v | Skolem paradox | Summary | Skolem_paradox | 295). Skolem's paradox is that every countable axiomatisation of set theory in first-order logic, if it is consistent, has a model that is countable. This appears contradictory because it is possible to prove, from those same axioms, a sentence that intuitively says (or that precisely says in the standard model of the theory) that there exist sets that are not countable. |
c_5a834ri7i3zu | Skolem paradox | Summary | Skolem_paradox | Thus the seeming contradiction is that a model that is itself countable, and which therefore contains only countable sets, satisfies the first-order sentence that intuitively states "there are uncountable sets". A mathematical explanation of the paradox, showing that it is not a contradiction in mathematics, was given by Skolem (1922). Skolem's work was harshly received by Ernst Zermelo, who argued against the limitations of first-order logic, but the result quickly came to be accepted by the mathematical community. |
c_wsi6kh0nb6hy | Skolem paradox | Summary | Skolem_paradox | The philosophical implications of Skolem's paradox have received much study. One line of inquiry questions whether it is accurate to claim that any first-order sentence actually states "there are uncountable sets". This line of thought can be extended to question whether any set is uncountable in an absolute sense. More recently, the paper "Models and Reality" by Hilary Putnam, and responses to it, led to renewed interest in the philosophical aspects of Skolem's result. |
c_w3u21j6y48d8 | Collapsing function | Summary | Collapsing_function | In mathematical logic and set theory, an ordinal collapsing function (or projection function) is a technique for defining (notations for) certain recursive large countable ordinals, whose principle is to give names to certain ordinals much larger than the one being defined, perhaps even large cardinals (though they can be replaced with recursively large ordinals at the cost of extra technical difficulty), and then "collapse" them down to a system of notations for the sought-after ordinal. For this reason, ordinal collapsing functions are described as an impredicative manner of naming ordinals. The details of the definition of ordinal collapsing functions vary, and get more complicated as greater ordinals are being defined, but the typical idea is that whenever the notation system "runs out of fuel" and cannot name a certain ordinal, a much larger ordinal is brought "from above" to give a name to that critical point. |
c_dd90lo24zrkl | Collapsing function | Summary | Collapsing_function | An example of how this works will be detailed below, for an ordinal collapsing function defining the Bachmann–Howard ordinal (i.e., defining a system of notations up to the Bachmann–Howard ordinal). The use and definition of ordinal collapsing functions is inextricably intertwined with the theory of ordinal analysis, since the large countable ordinals defined and denoted by a given collapse are used to describe the ordinal-theoretic strength of certain formal systems, typically subsystems of analysis (such as those seen in the light of reverse mathematics), extensions of Kripke–Platek set theory, Bishop-style systems of constructive mathematics or Martin-Löf-style systems of intuitionistic type theory. Ordinal collapsing functions are typically denoted using some variation of either the Greek letter ψ {\displaystyle \psi } (psi) or θ {\displaystyle \theta } (theta). |
c_szisawowvdly | Ordinal notation | Summary | Ordinal_notation | In mathematical logic and set theory, an ordinal notation is a partial function mapping the set of all finite sequences of symbols, themselves members of a finite alphabet, to a countable set of ordinals. A Gödel numbering is a function mapping the set of well-formed formulae (a finite sequence of symbols on which the ordinal notation function is defined) of some formal language to the natural numbers. This associates each well-formed formula with a unique natural number, called its Gödel number. If a Gödel numbering is fixed, then the subset relation on the ordinals induces an ordering on well-formed formulae which in turn induces a well-ordering on the subset of natural numbers. |
c_hbsygajvmqvi | Ordinal notation | Summary | Ordinal_notation | A recursive ordinal notation must satisfy the following two additional properties: (1) the subset of natural numbers is a recursive set; (2) the induced well-ordering on the subset of natural numbers is a recursive relation. There are many such schemes of ordinal notations, including schemes by Wilhelm Ackermann, Heinz Bachmann, Wilfried Buchholz, Georg Cantor, Solomon Feferman, Gerhard Jäger, Isles, Pfeiffer, Wolfram Pohlers, Kurt Schütte, Gaisi Takeuti (called ordinal diagrams), Oswald Veblen. Stephen Cole Kleene has a system of notations, called Kleene's O, which includes ordinal notations but it is not as well behaved as the other systems described here. Usually one proceeds by defining several functions from ordinals to ordinals and representing each such function by a symbol. |
c_yeznq96dv6pp | Ordinal notation | Summary | Ordinal_notation | In many systems, such as Veblen's well known system, the functions are normal functions, that is, they are strictly increasing and continuous in at least one of their arguments, and increasing in other arguments. Another desirable property for such functions is that the value of the function is greater than each of its arguments, so that an ordinal is always being described in terms of smaller ordinals. There are several such desirable properties. Unfortunately, no one system can have all of them since they contradict each other. |
c_twsyd5l85fjp | Minsky machine | Summary | Minsky_machine | In mathematical logic and theoretical computer science, a register machine is a generic class of abstract machines used in a manner similar to a Turing machine. All the models are Turing equivalent. |
c_ksi9zdg0mk5x | Convergent term rewriting system | Summary | Abstract_rewriting | In mathematical logic and theoretical computer science, an abstract rewriting system (also (abstract) reduction system or abstract rewrite system; abbreviated ARS) is a formalism that captures the quintessential notion and properties of rewriting systems. In its simplest form, an ARS is simply a set (of "objects") together with a binary relation, traditionally denoted with → {\displaystyle \rightarrow } ; this definition can be further refined if we index (label) subsets of the binary relation. Despite its simplicity, an ARS is sufficient to describe important properties of rewriting systems like normal forms, termination, and various notions of confluence. |
c_rxcru5fd1jnz | Convergent term rewriting system | Summary | Abstract_rewriting | Historically, there have been several formalizations of rewriting in an abstract setting, each with its idiosyncrasies. This is due in part to the fact that some notions are equivalent, see below in this article. The formalization that is most commonly encountered in monographs and textbooks, and which is generally followed here, is due to Gérard Huet (1980). |
c_8uqb63h257ym | Lambda cube | Summary | Lambda_cube | In mathematical logic and type theory, the λ-cube (also written lambda cube) is a framework introduced by Henk Barendregt to investigate the different dimensions in which the calculus of constructions is a generalization of the simply typed λ-calculus. Each dimension of the cube corresponds to a new kind of dependency between terms and types. Here, "dependency" refers to the capacity of a term or type to bind a term or type. The respective dimensions of the λ-cube correspond to: x-axis ( → {\displaystyle \rightarrow } ): types that can bind terms, corresponding to dependent types. |
c_vznrpqueur2c | Lambda cube | Summary | Lambda_cube | y-axis ( ↑ {\displaystyle \uparrow } ): terms that can bind types, corresponding to polymorphism. z-axis ( ↗ {\displaystyle \nearrow } ): types that can bind types, corresponding to (binding) type operators.The different ways to combine these three dimensions yield the 8 vertices of the cube, each corresponding to a different kind of typed system. The λ-cube can be generalized into the concept of a pure type system. |
c_k1ic5vx9d3hn | Löwenheim number | Summary | Löwenheim_number | In mathematical logic, the Löwenheim number of an abstract logic is the smallest cardinal number for which a weak downward Löwenheim–Skolem theorem holds. Löwenheim numbers are named after Leopold Löwenheim, who proved that they exist for a very broad class of logics. |
c_xkesrom7e758 | Theory of pure equality | Summary | Theory_of_pure_equality | In mathematical logic, the theory of pure equality is a first-order theory. It has a signature consisting of only the equality relation symbol, and includes no non-logical axioms at all. This theory is consistent but incomplete, as a non-empty set with the usual equality relation provides an interpretation making certain sentences true. It is an example of a decidable theory and is a fragment of more expressive decidable theories, including the monadic class of first-order logic (which also admits unary predicates and is, via Skolem normal form, related to set constraints in program analysis) and the monadic second-order theory of a pure set (which additionally permits quantification over predicates and whose signature extends to monadic second-order logic of k successors). |
c_c8nw7ovtlxwy | Beth definability | Summary | Beth_definability | In mathematical logic, Beth definability is a result that connects implicit definability of a property to its explicit definability. Specifically Beth definability states that the two senses of definability are equivalent. First-order logic has the Beth definability property. |
c_qkbn0tq4ev3y | Craig interpolation | Summary | Craig_interpolation | In mathematical logic, Craig's interpolation theorem is a result about the relationship between different logical theories. Roughly stated, the theorem says that if a formula φ implies a formula ψ, and the two have at least one atomic variable symbol in common, then there is a formula ρ, called an interpolant, such that every non-logical symbol in ρ occurs both in φ and ψ, φ implies ρ, and ρ implies ψ. The theorem was first proved for first-order logic by William Craig in 1957. Variants of the theorem hold for other logics, such as propositional logic. A stronger form of Craig's interpolation theorem for first-order logic was proved by Roger Lyndon in 1959; the overall result is sometimes called the Craig–Lyndon theorem. |
c_9cyi99ypd68l | Craig's theorem | Summary | Craig's_theorem | In mathematical logic, Craig's theorem (also known as Craig's trick) states that any recursively enumerable set of well-formed formulas of a first-order language is (primitively) recursively axiomatizable. This result is not related to the well-known Craig interpolation theorem, although both results are named after the same logician, William Craig. |
c_syi6v2dp3b99 | Diaconescu theorem | Summary | Diaconescu's_theorem | In mathematical logic, Diaconescu's theorem, or the Goodman–Myhill theorem, states that the full axiom of choice is sufficient to derive the law of the excluded middle or restricted forms of it. The theorem was discovered in 1975 by Radu Diaconescu and later by Goodman and Myhill. Already in 1967, Errett Bishop posed the theorem as an exercise (Problem 2 on page 58 in Foundations of constructive analysis). |
c_r6uc4nhqy2l3 | Frege's propositional calculus | Summary | Frege's_propositional_calculus | In mathematical logic, Frege's propositional calculus was the first axiomatization of propositional calculus. It was invented by Gottlob Frege, who also invented predicate calculus, in 1879 as part of his second-order predicate calculus (although Charles Peirce was the first to use the term "second-order" and developed his own version of the predicate calculus independently of Frege). It makes use of just two logical operators: implication and negation, and it is constituted by six axioms and one inference rule: modus ponens. Frege's propositional calculus is equivalent to any other classical propositional calculus, such as the "standard PC" with 11 axioms. |
c_yvcc2ejrqn1k | Frege's propositional calculus | Summary | Frege's_propositional_calculus | Frege's PC and standard PC share two common axioms: THEN-1 and THEN-2. Notice that axioms THEN-1 through THEN-3 only make use of (and define) the implication operator, whereas axioms FRG-1 through FRG-3 define the negation operator. |
c_whljmdm08z2j | Frege's propositional calculus | Summary | Frege's_propositional_calculus | The following theorems will aim to find the remaining nine axioms of standard PC within the "theorem-space" of Frege's PC, showing that the theory of standard PC is contained within the theory of Frege's PC. (A theory, also called here, for figurative purposes, a "theorem-space", is a set of theorems that are a subset of a universal set of well-formed formulas. The theorems are linked to each other in a directed manner by inference rules, forming a sort of dendritic network. At the roots of the theorem-space are found the axioms, which "generate" the theorem-space much like a generating set generates a group.) |
c_p0yb2ywqyoer | Kirby–Paris theorem | Summary | Goodstein_sequence | In mathematical logic, Goodstein's theorem is a statement about the natural numbers, proved by Reuben Goodstein in 1944, which states that every Goodstein sequence eventually terminates at 0. Laurence Kirby and Jeff Paris showed that it is unprovable in Peano arithmetic (but it can be proven in stronger systems, such as second-order arithmetic). This was the third example of a true statement that is unprovable in Peano arithmetic, after the examples provided by Gödel's incompleteness theorem and Gerhard Gentzen's 1943 direct proof of the unprovability of ε0-induction in Peano arithmetic. The Paris–Harrington theorem gave another example. |
c_2wvmv08b4rl8 | Kirby–Paris theorem | Summary | Goodstein_sequence | Kirby and Paris introduced a graph-theoretic hydra game with behavior similar to that of Goodstein sequences: the "Hydra" (named for the mythological multi-headed Hydra of Lerna) is a rooted tree, and a move consists of cutting off one of its "heads" (a branch of the tree), to which the hydra responds by growing a finite number of new heads according to certain rules. Kirby and Paris proved that the Hydra will eventually be killed, regardless of the strategy that Hercules uses to chop off its heads, though this may take a very long time. Just like for Goodstein sequences, Kirby and Paris showed that it cannot be proven in Peano arithmetic alone. |
c_6fyqk4i34ow1 | Gödel's β function | Summary | Gödel's_β_function | In mathematical logic, Gödel's β function is a function used to permit quantification over finite sequences of natural numbers in formal theories of arithmetic. The β function is used, in particular, in showing that the class of arithmetically definable functions is closed under primitive recursion, and therefore includes all primitive recursive functions. The β function was introduced without the name in the proof of the first of Gödel's incompleteness theorems (Gödel 1931). The β function lemma given below is an essential step of that proof. Gödel gave the β function its name in (Gödel 1934). |
c_hjcrcjvv2os4 | Heyting arithmetic | Summary | Heyting_arithmetic | In mathematical logic, Heyting arithmetic H A {\displaystyle {\mathsf {HA}}} is an axiomatization of arithmetic in accordance with the philosophy of intuitionism. It is named after Arend Heyting, who first proposed it. |
c_fkx473k7dlud | Lindenbaum's lemma | Summary | Lindenbaum's_lemma | In mathematical logic, Lindenbaum's lemma, named after Adolf Lindenbaum, states that any consistent theory of predicate logic can be extended to a complete consistent theory. The lemma is a special case of the ultrafilter lemma for Boolean algebras, applied to the Lindenbaum algebra of a theory. |
c_t2w0hjofc3g2 | Lindstrom's theorem | Summary | Lindström's_theorem | In mathematical logic, Lindström's theorem (named after Swedish logician Per Lindström, who published it in 1969) states that first-order logic is the strongest logic (satisfying certain conditions, e.g. closure under classical negation) having both the (countable) compactness property and the (downward) Löwenheim–Skolem property. Lindström's theorem is perhaps the best known result of what later became known as abstract model theory, the basic notion of which is an abstract logic; the more general notion of an institution was later introduced, which advances from a set-theoretical notion of model to a category-theoretical one. Lindström had previously obtained a similar result in studying first-order logics extended with Lindström quantifiers. Lindström's theorem has been extended to various other systems of logic, in particular modal logics by Johan van Benthem and Sebastian Enqvist. |
c_m4vahwo0nazg | Löb's Theorem | Summary | Löb's_Theorem | In mathematical logic, Löb's theorem states that in Peano arithmetic (PA) (or any formal system including PA), for any formula P, if it is provable in PA that "if P is provable in PA then P is true", then P is provable in PA. If Prov(P) means that the formula P is provable, we may express this more formally as: if {\displaystyle {\mathit {PA}}\vdash {\mathrm {Prov} (P)\rightarrow P}} then {\displaystyle {\mathit {PA}}\vdash P}. An immediate corollary (the contrapositive) of Löb's theorem is that, if P is not provable in PA, then "if P is provable in PA, then P is true" is not provable in PA. For example, "If 1 + 1 = 3 is provable in PA, then 1 + 1 = 3" is not provable in PA. Löb's theorem is named for Martin Hugo Löb, who formulated it in 1955. It is related to Curry's paradox. |
c_z2iyebfqa7nx | Morley rank | Summary | Morley_rank | In mathematical logic, Morley rank, introduced by Michael D. Morley (1965), is a means of measuring the size of a subset of a model of a theory, generalizing the notion of dimension in algebraic geometry. |
c_ldthcj2lr6ii | Typed set theory | Summary | Typed_set_theory | In mathematical logic, New Foundations (NF) is an axiomatic set theory, conceived by Willard Van Orman Quine as a simplification of the theory of types of Principia Mathematica. Quine first proposed NF in a 1937 article titled "New Foundations for Mathematical Logic"; hence the name. Much of this entry discusses NF with urelements (NFU), an important variant of NF due to Jensen and clarified by Holmes. In 1940 and in a revision in 1951, Quine introduced an extension of NF sometimes called "Mathematical Logic" or "ML", that included proper classes as well as sets. |
c_z0067ijyeq8i | Typed set theory | Summary | Typed_set_theory | New Foundations has a universal set, so it is a non-well-founded set theory. That is to say, it is an axiomatic set theory that allows infinite descending chains of membership, such as ... xn ∈ xn-1 ∈ ... ∈ x2 ∈ x1. It avoids Russell's paradox by permitting only stratifiable formulas to be defined using the axiom schema of comprehension. For instance, x ∈ y is a stratifiable formula, but x ∈ x is not. New Foundations is closely related to Russellian unramified typed set theory (TST), a streamlined version of the theory of types of Principia Mathematica with a linear hierarchy of types. |
c_5ijy5ppmsw0q | Peano–Russell notation | Summary | Peano–Russell_notation | In mathematical logic, Peano–Russell notation was Bertrand Russell's application of Giuseppe Peano's logical notation to the logical notions of Frege and was used in the writing of Principia Mathematica in collaboration with Alfred North Whitehead: "The notation adopted in the present work is based upon that of Peano, and the following explanations are to some extent modelled on those which he prefixes to his Formulario Mathematico." (Chapter I: Preliminary Explanations of Ideas and Notations, page 4) |
c_pbe6ylg8n1bt | Rosser's trick | Summary | Rosser's_trick | In mathematical logic, Rosser's trick is a method for proving Gödel's incompleteness theorems without the assumption that the theory being considered is ω-consistent (Smorynski 1977, p. 840; Mendelson 1977, p. 160). This method was introduced by J. Barkley Rosser in 1936, as an improvement of Gödel's original proof of the incompleteness theorems that was published in 1931. While Gödel's original proof uses a sentence that says (informally) "This sentence is not provable", Rosser's trick uses a formula that says "If this sentence is provable, there is a shorter proof of its negation". |
c_n8tpkd0xwk1v | Russell paradox | Summary | Russel's_paradox | In mathematical logic, Russell's paradox (also known as Russell's antinomy) is a set-theoretic paradox published by the British philosopher and mathematician Bertrand Russell in 1901. Russell's paradox shows that every set theory that contains an unrestricted comprehension principle leads to contradictions. The paradox had already been discovered independently in 1899 by the German mathematician Ernst Zermelo. However, Zermelo did not publish the idea, which remained known only to David Hilbert, Edmund Husserl, and other academics at the University of Göttingen. |