Dataset columns (all strings, with observed length ranges): content_id (14 characters), page_title (1–250), section_title (1–1.26k), breadcrumb (1–1.39k), text (9–3.55k).
c_u7honyogcd3s
Lagrange duality
Summary
Lagrange_duality
This fact is called weak duality. In general, the optimal values of the primal and dual problems need not be equal. Their difference is called the duality gap. For convex optimization problems, the duality gap is zero under a constraint qualification condition. This fact is called strong duality.
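In symbols (standard notation supplied here for reference, not quoted from the source): if p* denotes the optimal value of the primal problem and d* that of the dual, weak duality and the duality gap read

```latex
\[
  d^{*} \le p^{*},
  \qquad
  \text{duality gap} = p^{*} - d^{*} \ge 0 .
\]
```

Strong duality is the statement that this gap is zero; for convex problems it holds under a constraint qualification such as Slater's condition.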
c_6y99bjexkh6q
Linear complementarity problem
Summary
Linear_complementarity_problem
In mathematical optimization theory, the linear complementarity problem (LCP) arises frequently in computational mechanics and encompasses the well-known quadratic programming as a special case. It was proposed by Cottle and Dantzig in 1968.
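For reference, the standard formulation of the LCP (not stated in the summary above) is: given a matrix M ∈ R^{n×n} and a vector q ∈ R^n, find vectors z and w such that

```latex
\[
  w = Mz + q, \qquad w \ge 0, \qquad z \ge 0, \qquad z^{\mathsf{T}} w = 0 ,
\]
```

where the last condition expresses complementarity: for each index i, at least one of z_i and w_i is zero.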
c_mpyhj7dmn3h9
Mixed linear complementarity problem
Summary
Mixed_linear_complementarity_problem
In mathematical optimization theory, the mixed linear complementarity problem, often abbreviated as MLCP or LMCP, is a generalization of the linear complementarity problem to include free variables.
c_yf5hdc3k5z9j
Bland's rule
Summary
Bland's_rule
In mathematical optimization, Bland's rule (also known as Bland's algorithm, Bland's anti-cycling rule or Bland's pivot rule) is an algorithmic refinement of the simplex method for linear optimization. With Bland's rule, the simplex algorithm solves feasible linear optimization problems without cycling. The original simplex algorithm starts with an arbitrary basic feasible solution, and then changes the basis in order to decrease the minimization target and find an optimal solution. Usually, the target indeed decreases in every step, and thus after a bounded number of steps an optimal solution is found. However, there are examples of degenerate linear programs on which the original simplex algorithm cycles forever.
c_p605mcom00yh
Bland's rule
Summary
Bland's_rule
It gets stuck at a basic feasible solution (a corner of the feasible polytope) and changes bases in a cyclic way without decreasing the minimization target. Such cycles are avoided by Bland's rule for choosing a column to enter and a column to leave the basis. Bland's rule was developed by Robert G. Bland, now an Emeritus Professor of operations research at Cornell University, while he was a research fellow at the Center for Operations Research and Econometrics in Belgium.
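A minimal Python sketch of the two choices Bland's rule prescribes, assuming a minimization tableau described by its reduced costs and the ratios of the ratio test; the function and argument names here are illustrative, not taken from any particular implementation.

```python
def blands_entering(reduced_costs):
    """Entering variable: the lowest-index column with a negative reduced cost."""
    for j, c in enumerate(reduced_costs):
        if c < 0:
            return j
    return None  # no improving column: the current basis is optimal


def blands_leaving(basis, ratios):
    """Leaving variable: among rows attaining the minimum ratio in the ratio
    test, pick the one whose basic variable has the smallest index."""
    finite = [(r, i) for i, r in enumerate(ratios) if r is not None]
    if not finite:
        return None  # no finite ratio: the problem is unbounded
    best = min(r for r, _ in finite)
    ties = [i for r, i in finite if r == best]
    return min(ties, key=lambda i: basis[i])
```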
c_ds3gm8mourrp
Cunningham's rule
Summary
Cunningham's_rule
In mathematical optimization, Cunningham's rule (also known as least recently considered rule or round-robin rule) is an algorithmic refinement of the simplex method for linear optimization. The rule was proposed in 1979 by W. H. Cunningham to defeat the deformed hypercube constructions by Klee and Minty et al. (see, e.g., Klee–Minty cube). Cunningham's rule assigns a cyclic order to the variables and remembers the last variable to enter the basis. The next entering variable is chosen to be the first allowable candidate starting from the last chosen variable and following the given circular order.
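A hedged sketch of the round-robin choice just described, assuming the set of allowable entering candidates and the index of the last entered variable are already known; the names are illustrative.

```python
def cunningham_entering(allowable, last_entered, n_vars):
    """Scan the fixed cyclic order 0..n_vars-1, starting just after the
    variable that entered the basis most recently, and return the first
    allowable candidate."""
    for k in range(1, n_vars + 1):
        j = (last_entered + k) % n_vars
        if j in allowable:
            return j
    return None  # no allowable candidate: the current basis is optimal

print(cunningham_entering({1, 4}, last_entered=2, n_vars=6))  # 4
```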
c_78oja52bsjgh
Cunningham's rule
Summary
Cunningham's_rule
History-based rules defeat the deformed hypercube constructions because they tend to average out how many times a variable pivots. It has recently been shown by David Avis and Oliver Friedmann that there is a family of linear programs on which the simplex algorithm equipped with Cunningham's rule requires exponential time.
c_p4u0ispjb8y4
Simplex algorithm
Summary
Simplex_method
In mathematical optimization, Dantzig's simplex algorithm (or simplex method) is a popular algorithm for linear programming. The name of the algorithm is derived from the concept of a simplex and was suggested by T. S. Motzkin. Simplices are not actually used in the method, but one interpretation of it is that it operates on simplicial cones, and these become proper simplices with an additional constraint. The simplicial cones in question are the corners (i.e., the neighborhoods of the vertices) of a geometric object called a polytope. The shape of this polytope is defined by the constraints applied to the objective function.
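The following is not the tableau method itself but a small illustration of solving a linear program in Python with SciPy's linprog, whose "highs" backend includes simplex-type solvers; the specific problem data are made up for the example.

```python
import numpy as np
from scipy.optimize import linprog

# minimize  -x - 2y   subject to  x + y <= 4,  x <= 3,  x >= 0,  y >= 0
c = np.array([-1.0, -2.0])
A_ub = np.array([[1.0, 1.0],
                 [1.0, 0.0]])
b_ub = np.array([4.0, 3.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, res.fun)   # roughly [0. 4.] and -8.0: a vertex of the polytope
```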
c_k8k6lptl5ut1
Himmelblau's function
Summary
Himmelblau's_function
In mathematical optimization, Himmelblau's function is a multi-modal function, used to test the performance of optimization algorithms. The function is defined by f(x, y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2. It has one local maximum at x = -0.270845 and y = -0.923039, where f(x, y) = 181.617, and four identical local minima: f(3.0, 2.0) = 0.0, f(-2.805118, 3.131312) = 0.0, f(-3.779310, -3.283186) = 0.0, and f(3.584428, -1.848126) = 0.0. The locations of all the minima can be found analytically. However, because they are roots of cubic polynomials, the expressions are somewhat complicated when written in terms of radicals. The function is named after David Mautner Himmelblau (1924–2011), who introduced it.
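A short Python check of the values quoted above (the function itself is simple to code directly):

```python
def himmelblau(x, y):
    """Himmelblau's test function."""
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

# The four quoted minima evaluate to (numerically) zero,
# and the quoted local maximum to about 181.617.
for x, y in [(3.0, 2.0), (-2.805118, 3.131312),
             (-3.779310, -3.283186), (3.584428, -1.848126)]:
    print(f"f({x:.6f}, {y:.6f}) = {himmelblau(x, y):.3g}")
print(himmelblau(-0.270845, -0.923039))   # ~181.617
```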
c_3273mts99phl
Wolfe duality
Summary
Wolfe_duality
In mathematical optimization, Wolfe duality, named after Philip Wolfe, is a type of dual problem in which the objective function and constraints are all differentiable functions. Using this concept, a lower bound for a minimization problem can be found because of the weak duality principle.
c_s74xapyzdfk9
Zadeh's rule
Summary
Zadeh's_rule
In mathematical optimization, Zadeh's rule (also known as the least-entered rule) is an algorithmic refinement of the simplex method for linear optimization. The rule was proposed around 1980 by Norman Zadeh (son of Lotfi A. Zadeh), and has entered the folklore of convex optimization since then. Zadeh offered a reward of $1,000 to anyone who could show that the rule admits polynomially many iterations, or prove that there is a family of linear programs on which the pivoting rule requires subexponentially many iterations to find the optimum.
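A minimal sketch of the least-entered selection, assuming a running count of how often each variable has entered the basis; names are illustrative.

```python
def zadeh_entering(allowable, entry_counts):
    """Among the allowable entering candidates, pick the variable that has
    entered the basis the fewest times so far (ties broken by index)."""
    return min(allowable, key=lambda j: (entry_counts[j], j))

# After a pivot on column j, the caller would update entry_counts[j] += 1.
print(zadeh_entering([2, 5, 7], entry_counts={2: 3, 5: 1, 7: 1}))   # 5
```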
c_izyshrclfxbe
Zermelo's navigation problem
Summary
Zermelo's_navigation_problem
In mathematical optimization, Zermelo's navigation problem, proposed in 1931 by Ernst Zermelo, is a classic optimal control problem that deals with a boat navigating on a body of water, originating from a point A to a destination point B. The boat is capable of a certain maximum speed, and the goal is to derive the best possible control to reach B in the least possible time. Without considering external forces such as current and wind, the optimal control is for the boat to always head towards B. Its path then is a line segment from A to B, which is trivially optimal. With consideration of current and wind, if the combined force applied to the boat is non-zero, the control for the no-current, no-wind case does not yield the optimal path.
c_fcip6b30y537
Solution space
Summary
Solution_space
In mathematical optimization, a feasible region, feasible set, search space, or solution space is the set of all possible points (sets of values of the choice variables) of an optimization problem that satisfy the problem's constraints, potentially including inequalities, equalities, and integer constraints. This is the initial set of candidate solutions to the problem, before the set of candidates has been narrowed down. For example, consider the problem of minimizing the function x 2 + y 4 {\displaystyle x^{2}+y^{4}} with respect to the variables x {\displaystyle x} and y , {\displaystyle y,} subject to 1 ≤ x ≤ 10 {\displaystyle 1\leq x\leq 10} and 5 ≤ y ≤ 12. {\displaystyle 5\leq y\leq 12.\,} Here the feasible set is the set of pairs (x, y) in which the value of x is at least 1 and at most 10 and the value of y is at least 5 and at most 12.
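A tiny Python illustration of the example just given: a membership test for the feasible set, and the value of the objective at the feasible corner that minimizes it (the objective is increasing in both variables on this set, so the minimum sits at x = 1, y = 5).

```python
def is_feasible(x, y):
    """Membership test for the feasible set of the example above."""
    return 1 <= x <= 10 and 5 <= y <= 12

def objective(x, y):
    return x**2 + y**4

print(is_feasible(2, 6))     # True: a candidate solution
print(is_feasible(0.5, 6))   # False: violates 1 <= x
print(objective(1, 5))       # 626, the minimum over the feasible set
```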
c_dh94bljvv7ye
Solution space
Summary
Solution_space
The feasible set of the problem is separate from the objective function, which states the criterion to be optimized and which in the above example is x 2 + y 4 . {\displaystyle x^{2}+y^{4}.} In many problems, the feasible set reflects a constraint that one or more variables must be non-negative.
c_qdq1gs5g3uni
Solution space
Summary
Solution_space
In pure integer programming problems, the feasible set is the set of integers (or some subset thereof). In linear programming problems, the feasible set is a convex polytope: a region in multidimensional space whose boundaries are formed by hyperplanes and whose corners are vertices. Constraint satisfaction is the process of finding a point in the feasible region.
c_m7ib5of2uhz8
Quadratically constrained quadratic program
Summary
Quadratically_constrained_quadratic_program
In mathematical optimization, a quadratically constrained quadratic program (QCQP) is an optimization problem in which both the objective function and the constraints are quadratic functions. It has the form minimize 1 2 x T P 0 x + q 0 T x subject to 1 2 x T P i x + q i T x + r i ≤ 0 for i = 1 , … , m , A x = b , {\displaystyle {\begin{aligned}&{\text{minimize}}&&{\tfrac {1}{2}}x^{\mathrm {T} }P_{0}x+q_{0}^{\mathrm {T} }x\\&{\text{subject to}}&&{\tfrac {1}{2}}x^{\mathrm {T} }P_{i}x+q_{i}^{\mathrm {T} }x+r_{i}\leq 0\quad {\text{for }}i=1,\dots ,m,\\&&&Ax=b,\end{aligned}}} where P0, …, Pm are n-by-n matrices and x ∈ Rn is the optimization variable. If P0, …, Pm are all positive semidefinite, then the problem is convex. If these matrices are neither positive nor negative semidefinite, the problem is non-convex. If P1, … ,Pm are all zero, then the constraints are in fact linear and the problem is a quadratic program.
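The following is a hedged illustration, not a dedicated QCQP solver: a tiny convex instance (P0 and P1 chosen as identity matrices, so the single quadratic constraint is a ball constraint) handed to SciPy's general-purpose SLSQP method.

```python
import numpy as np
from scipy.optimize import minimize

# minimize 0.5 x^T P0 x + q0^T x   subject to   0.5 x^T P1 x + r1 <= 0
P0 = np.eye(2)
q0 = np.array([-1.0, -1.0])
P1 = np.eye(2)
r1 = -0.5                      # 0.5*||x||^2 - 0.5 <= 0, i.e. ||x|| <= 1

objective = lambda x: 0.5 * x @ P0 @ x + q0 @ x
# SLSQP expects inequality constraints in the form g(x) >= 0.
ball = {"type": "ineq", "fun": lambda x: -(0.5 * x @ P1 @ x + r1)}

res = minimize(objective, x0=np.zeros(2), method="SLSQP", constraints=[ball])
print(res.x)   # roughly [0.707, 0.707]: the unconstrained minimizer [1, 1]
               # projected onto the unit ball
```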
c_3wcysxo2zvf9
Affine scaling
Summary
Affine_scaling
In mathematical optimization, affine scaling is an algorithm for solving linear programming problems. Specifically, it is an interior point method, discovered by Soviet mathematician I. I. Dikin in 1967 and reinvented in the U.S. in the mid-1980s.
c_m7ovb2vp6d8k
Constrained minimisation
Summary
Constrained_minimisation
In mathematical optimization, constrained optimization (in some contexts called constraint optimization) is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables. The objective function is either a cost function or energy function, which is to be minimized, or a reward function or utility function, which is to be maximized. Constraints can be either hard constraints, which set conditions for the variables that are required to be satisfied, or soft constraints, which have some variable values that are penalized in the objective function if, and based on the extent that, the conditions on the variables are not satisfied.
c_kxoxlybv6ofy
Linearization
Optimization
Linearization > Uses of linearization > Optimization
In mathematical optimization, cost functions and the non-linear components within them can be linearized in order to apply a linear solution method such as the simplex algorithm. The linearized problem can be solved much more efficiently, and the result is a deterministic global optimum of the linearized model.
c_6ie9rfc7ukol
Fractional programming
Summary
Fractional_programming
In mathematical optimization, fractional programming is a generalization of linear-fractional programming. The objective function in a fractional program is a ratio of two functions that are in general nonlinear. The ratio to be optimized often describes some kind of efficiency of a system.
c_oowjla7x29vh
Linear-fractional programming
Summary
Linear-fractional_programming_(LFP)
In mathematical optimization, linear-fractional programming (LFP) is a generalization of linear programming (LP). Whereas the objective function in a linear program is a linear function, the objective function in a linear-fractional program is a ratio of two linear functions. A linear program can be regarded as a special case of a linear-fractional program in which the denominator is the constant function 1. Formally, a linear-fractional program is defined as the problem of maximizing (or minimizing) a ratio of affine functions over a polyhedron, maximize c T x + α d T x + β subject to A x ≤ b , {\displaystyle {\begin{aligned}{\text{maximize}}\quad &{\frac {\mathbf {c} ^{T}\mathbf {x} +\alpha }{\mathbf {d} ^{T}\mathbf {x} +\beta }}\\{\text{subject to}}\quad &A\mathbf {x} \leq \mathbf {b} ,\end{aligned}}} where x ∈ R n {\displaystyle \mathbf {x} \in \mathbb {R} ^{n}} represents the vector of variables to be determined, c , d ∈ R n {\displaystyle \mathbf {c} ,\mathbf {d} \in \mathbb {R} ^{n}} and b ∈ R m {\displaystyle \mathbf {b} \in \mathbb {R} ^{m}} are vectors of (known) coefficients, A ∈ R m × n {\displaystyle A\in \mathbb {R} ^{m\times n}} is a (known) matrix of coefficients and α , β ∈ R {\displaystyle \alpha ,\beta \in \mathbb {R} } are constants. The constraints have to restrict the feasible region to { x | d T x + β > 0 } {\displaystyle \{\mathbf {x} |\mathbf {d} ^{T}\mathbf {x} +\beta >0\}} , i.e. the region on which the denominator is positive. Alternatively, the denominator of the objective function has to be strictly negative in the entire feasible region.
c_ulgu7dzkhao4
Very large-scale neighborhood search
Summary
Very_large-scale_neighborhood_search
In mathematical optimization, neighborhood search is a technique that tries to find good or near-optimal solutions to a combinatorial optimisation problem by repeatedly transforming a current solution into a different solution in the neighborhood of the current solution. The neighborhood of a solution is a set of similar solutions obtained by relatively simple modifications to the original solution. For a very large-scale neighborhood search, the neighborhood is large and possibly exponentially sized. The resulting algorithms can outperform algorithms using small neighborhoods because the local improvements are larger.
c_7holcfwahbpb
Very large-scale neighborhood search
Summary
Very_large-scale_neighborhood_search
If the neighborhood searched is limited to just one or a very small number of changes from the current solution, then it can be difficult to escape from local minima, even with additional meta-heuristic techniques such as simulated annealing or tabu search. In large neighborhood search techniques, the possible changes from one solution to its neighbor may allow tens or hundreds of values to change, and this means that the size of the neighborhood may itself be sufficient to allow the search process to avoid or escape local minima, though additional meta-heuristic techniques can still improve performance.
c_qevp27s5ag6g
Oracle complexity (optimization)
Summary
Oracle_complexity_(optimization)
In mathematical optimization, oracle complexity is a standard theoretical framework to study the computational requirements for solving classes of optimization problems. It is suitable for analyzing iterative algorithms which proceed by computing local information about the objective function at various points (such as the function's value, gradient, Hessian etc.). The framework has been used to provide tight worst-case guarantees on the number of required iterations, for several important classes of optimization problems.
c_hz7s6vpihots
Ackley function
Summary
Ackley_function
In mathematical optimization, the Ackley function is a non-convex function used as a performance test problem for optimization algorithms. It was proposed by David Ackley in his 1987 PhD dissertation. On a 2-dimensional domain it is defined by f(x, y) = -20 exp(-0.2 sqrt(0.5 (x^2 + y^2))) - exp(0.5 (cos 2πx + cos 2πy)) + e + 20. Its global optimum point is f(0, 0) = 0.
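A direct Python transcription of this definition:

```python
import numpy as np

def ackley(x, y):
    """2-D Ackley function in its standard parameterization."""
    term1 = -20.0 * np.exp(-0.2 * np.sqrt(0.5 * (x**2 + y**2)))
    term2 = -np.exp(0.5 * (np.cos(2 * np.pi * x) + np.cos(2 * np.pi * y)))
    return term1 + term2 + np.e + 20.0

print(ackley(0.0, 0.0))   # ~0.0, the global optimum
print(ackley(1.0, 1.0))   # > 0, a typical non-optimal point
```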
c_aren8iggleif
KKT conditions
Summary
Constraint_qualification
In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests (sometimes called first-order necessary conditions) for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied. Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. Similar to the Lagrange approach, the constrained maximization (minimization) problem is rewritten as a Lagrange function whose optimal point is a (global) saddle point, i.e. a global maximum (minimum) over the domain of the choice variables and a global minimum (maximum) over the multipliers, which is why the Karush–Kuhn–Tucker theorem is sometimes referred to as the saddle-point theorem. The KKT conditions were originally named after Harold W. Kuhn and Albert W. Tucker, who first published the conditions in 1951. Later scholars discovered that the necessary conditions for this problem had been stated by William Karush in his master's thesis in 1939.
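For reference, the conditions themselves (standard statement, supplied here rather than quoted from the text) for minimizing f(x) subject to g_i(x) ≤ 0 and h_j(x) = 0, with multipliers μ_i and λ_j:

```latex
\begin{aligned}
  &\text{stationarity:} && \nabla f(x^{*}) + \sum_{i}\mu_{i}\nabla g_{i}(x^{*}) + \sum_{j}\lambda_{j}\nabla h_{j}(x^{*}) = 0,\\
  &\text{primal feasibility:} && g_{i}(x^{*}) \le 0,\quad h_{j}(x^{*}) = 0,\\
  &\text{dual feasibility:} && \mu_{i} \ge 0,\\
  &\text{complementary slackness:} && \mu_{i}\, g_{i}(x^{*}) = 0 .
\end{aligned}
```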
c_xci482ful4uh
Klee–Minty cube
Worst case
Klee–Minty_cube > Computational complexity > Worst case
In mathematical optimization, the Klee–Minty cube is an example that shows the worst-case computational complexity of many algorithms of linear optimization. It is a deformed cube with exactly 2^D corners in dimension D. Klee and Minty showed that Dantzig's simplex algorithm visits all corners of a (perturbed) cube in dimension D in the worst case. Modifications of the Klee–Minty construction showed similar exponential time complexity for other pivoting rules of simplex type, which maintain primal feasibility, such as Bland's rule. Another modification showed that the criss-cross algorithm, which does not maintain primal feasibility, also visits all the corners of a modified Klee–Minty cube. Like the simplex algorithm, the criss-cross algorithm visits all 8 corners of the three-dimensional cube in the worst case.
c_ulue35va3rsw
Rastrigin function
Summary
Rastrigin_function
In mathematical optimization, the Rastrigin function is a non-convex function used as a performance test problem for optimization algorithms. It is a typical example of non-linear multimodal function. It was first proposed in 1974 by Rastrigin as a 2-dimensional function and has been generalized by Rudolph.
c_i6zw0x5ovqcv
Rastrigin function
Summary
Rastrigin_function
The generalized version was popularized by Hoffmeister & Bäck and Mühlenbein et al. Finding the minimum of this function is a fairly difficult problem due to its large search space and its large number of local minima. On an n-dimensional domain it is defined by f(x) = A n + Σ_{i=1}^{n} [x_i^2 - A cos(2π x_i)], where A = 10 and x_i ∈ [-5.12, 5.12]. There are many extrema: the global minimum is at x = 0, where f(x) = 0, and the maximum function value on this domain is attained when every coordinate x_i lies near ±4.52. The abundance of local minima underlines the necessity of a global optimization algorithm when needing to find the global minimum. Local optimization algorithms are likely to get stuck in a local minimum.
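A direct Python transcription of the definition above:

```python
import numpy as np

def rastrigin(x, A=10.0):
    """n-dimensional Rastrigin function; entries of x lie in [-5.12, 5.12]."""
    x = np.asarray(x, dtype=float)
    return A * x.size + np.sum(x**2 - A * np.cos(2 * np.pi * x))

print(rastrigin(np.zeros(2)))   # 0.0, the global minimum
print(rastrigin([1.0, 1.0]))    # 2.0, a nearby local minimum
print(rastrigin([0.5, 0.5]))    # 40.5, near a local maximum between minima
```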
c_blhimew1aa15
Rosenbrock function
Summary
Rosenbrock_function
In mathematical optimization, the Rosenbrock function is a non-convex function, introduced by Howard H. Rosenbrock in 1960, which is used as a performance test problem for optimization algorithms. It is also known as Rosenbrock's valley or Rosenbrock's banana function. The global minimum is inside a long, narrow, parabolic shaped flat valley. To find the valley is trivial.
c_ix1qr69wxhx9
Rosenbrock function
Summary
Rosenbrock_function
To converge to the global minimum, however, is difficult. The function is defined by f(x, y) = (a - x)^2 + b(y - x^2)^2. It has a global minimum at (x, y) = (a, a^2), where f(x, y) = 0. Usually these parameters are set such that a = 1 and b = 100. Only in the trivial case where a = 0 is the function symmetric, with the minimum at the origin.
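A small illustration of that difficulty using SciPy's general-purpose minimizer (the starting point (-1.2, 1) is a conventional choice, not mandated by the source):

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(p, a=1.0, b=100.0):
    x, y = p
    return (a - x)**2 + b * (y - x**2)**2

# Entering the valley is easy; following it to the minimum (1, 1) takes
# many iterations even for a quasi-Newton method.
res = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method="BFGS")
print(res.x, res.nit)   # close to [1, 1] after a few dozen iterations
```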
c_8mseg9xf4a1j
Active set
Summary
Active_set_method
In mathematical optimization, the active-set method is an algorithm used to identify the active constraints in a set of inequality constraints. The active constraints are then expressed as equality constraints, thereby transforming an inequality-constrained problem into a simpler equality-constrained subproblem. An optimization problem is defined using an objective function to minimize or maximize, and a set of constraints g_1(x) ≥ 0, …, g_k(x) ≥ 0 that define the feasible region, that is, the set of all x to search for the optimal solution. Given a point x_0 in the feasible region, a constraint g_i(x) ≥ 0 is called active at x_0 if g_i(x_0) = 0, and inactive at x_0 if g_i(x_0) > 0.
c_sqhmd31rmxa1
Active set
Summary
Active_set_method
Equality constraints are always active.
c_ol7rj2xulb7e
Active set
Summary
Active_set_method
The active set at x_0 is made up of those constraints g_i(x_0) that are active at the current point (Nocedal & Wright 2006, p. 308). The active set is particularly important in optimization theory, as it determines which constraints will influence the final result of optimization. For example, in solving the linear programming problem, the active set gives the hyperplanes that intersect at the solution point. In quadratic programming, as the solution is not necessarily on one of the edges of the bounding polygon, an estimation of the active set gives us a subset of inequalities to watch while searching for the solution, which reduces the complexity of the search.
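A minimal sketch of identifying the active set numerically, given the constraints as callables g_i with the convention g_i(x) ≥ 0; the tolerance and names are illustrative.

```python
def active_set(constraints, x, tol=1e-9):
    """Indices i with g_i(x) approximately 0 among constraints g_i(x) >= 0."""
    return [i for i, g in enumerate(constraints) if abs(g(x)) <= tol]

# Feasible region 0 <= x <= 1, written as x >= 0 and 1 - x >= 0.
g = [lambda x: x, lambda x: 1.0 - x]
print(active_set(g, 0.0))   # [0]: the lower bound is active
print(active_set(g, 0.5))   # []: interior point, no active constraints
```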
c_e1mprkxxysne
Criss-cross algorithm
Summary
Criss-cross_algorithm
In mathematical optimization, the criss-cross algorithm is any of a family of algorithms for linear programming. Variants of the criss-cross algorithm also solve more general problems with linear inequality constraints and nonlinear objective functions; there are criss-cross algorithms for linear-fractional programming problems, quadratic-programming problems, and linear complementarity problems. Like the simplex algorithm of George B. Dantzig, the criss-cross algorithm is not a polynomial-time algorithm for linear programming. Both algorithms visit all 2^D corners of a (perturbed) cube in dimension D, the Klee–Minty cube (after Victor Klee and George J. Minty), in the worst case. However, when it is started at a random corner, the criss-cross algorithm on average visits only D additional corners. Thus, for the three-dimensional cube, the algorithm visits all 8 corners in the worst case and exactly 3 additional corners on average.
c_8zefkmw9shzh
Cutting plane
Summary
Cutting-plane_method
In mathematical optimization, the cutting-plane method is any of a variety of optimization methods that iteratively refine a feasible set or objective function by means of linear inequalities, termed cuts. Such procedures are commonly used to find integer solutions to mixed integer linear programming (MILP) problems, as well as to solve general, not necessarily differentiable convex optimization problems. The use of cutting planes to solve MILP was introduced by Ralph E. Gomory. Cutting plane methods for MILP work by solving a non-integer linear program, the linear relaxation of the given integer program.
c_bz1oyvgoh7xl
Cutting plane
Summary
Cutting-plane_method
The theory of Linear Programming dictates that under mild assumptions (if the linear program has an optimal solution, and if the feasible region does not contain a line), one can always find an extreme point or a corner point that is optimal. The obtained optimum is tested for being an integer solution. If it is not, there is guaranteed to exist a linear inequality that separates the optimum from the convex hull of the true feasible set.
c_vt6vlt3jqcdj
Cutting plane
Summary
Cutting-plane_method
Finding such an inequality is the separation problem, and such an inequality is a cut. A cut can be added to the relaxed linear program.
c_min7n4ambcw1
Cutting plane
Summary
Cutting-plane_method
Then, the current non-integer solution is no longer feasible to the relaxation. This process is repeated until an optimal integer solution is found.
c_mjtzo775gdel
Cutting plane
Summary
Cutting-plane_method
Cutting-plane methods for general convex continuous optimization and variants are known under various names: Kelley's method, Kelley–Cheney–Goldstein method, and bundle methods. They are popularly used for non-differentiable convex minimization, where a convex objective function and its subgradient can be evaluated efficiently but usual gradient methods for differentiable optimization can not be used. This situation is most typical for the concave maximization of Lagrangian dual functions. Another common situation is the application of the Dantzig–Wolfe decomposition to a structured optimization problem in which formulations with an exponential number of variables are obtained. Generating these variables on demand by means of delayed column generation is identical to performing a cutting plane on the respective dual problem.
c_u9b023rt005m
Ellipsoidal algorithm
Summary
Ellipsoid_algorithm
In mathematical optimization, the ellipsoid method is an iterative method for minimizing convex functions. When specialized to solving feasible linear optimization problems with rational data, the ellipsoid method is an algorithm which finds an optimal solution in a number of steps that is polynomial in the input size. The ellipsoid method generates a sequence of ellipsoids whose volume uniformly decreases at every step, thus enclosing a minimizer of a convex function.
c_3y22t7hxhr4p
Firefly algorithm
Summary
Firefly_algorithm
In mathematical optimization, the firefly algorithm is a metaheuristic proposed by Xin-She Yang and inspired by the flashing behavior of fireflies.
c_y0miyajcaepw
Lagrange multiplier
Summary
Lagrangian_multiplier
In mathematical optimization, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equation constraints (i.e., subject to the condition that one or more equations have to be satisfied exactly by the chosen values of the variables). It is named after the mathematician Joseph-Louis Lagrange. The basic idea is to convert a constrained problem into a form such that the derivative test of an unconstrained problem can still be applied. The relationship between the gradient of the function and the gradients of the constraints rather naturally leads to a reformulation of the original problem, known as the Lagrangian function. The method can be summarized as follows: in order to find the maximum or minimum of a function f(x) subject to the equality constraint g(x) = 0, form the Lagrangian function L(x, λ) ≡ f(x) + λ·g(x) and find the stationary points of L considered as a function of x and the Lagrange multiplier λ.
c_wiallmeg793o
Lagrange multiplier
Summary
Lagrangian_multiplier
This means that all partial derivatives should be zero, including the partial derivative with respect to λ: ∂L/∂x = 0 and ∂L/∂λ = 0, or equivalently ∂f(x)/∂x + λ·∂g(x)/∂x = 0 and g(x) = 0.
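A worked example with SymPy, for an illustrative problem (maximize f(x, y) = xy subject to x + y - 1 = 0; the choice of f and g is ours, not from the source): the stationary points of the Lagrangian are found by solving exactly these equations.

```python
import sympy as sp

x, y, lam = sp.symbols("x y lambda", real=True)
f = x * y              # objective (illustrative choice)
g = x + y - 1          # equality constraint g(x, y) = 0

L = f + lam * g        # the Lagrangian
stationary = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)
print(stationary)      # [{x: 1/2, y: 1/2, lambda: -1/2}]
```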
c_8ulw7ya7hsok
Lagrange multiplier
Summary
Lagrangian_multiplier
The solution corresponding to the original constrained optimization is always a saddle point of the Lagrangian function, which can be identified among the stationary points from the definiteness of the bordered Hessian matrix. The great advantage of this method is that it allows the optimization to be solved without explicit parameterization in terms of the constraints. As a result, the method of Lagrange multipliers is widely used to solve challenging constrained optimization problems. Further, the method of Lagrange multipliers is generalized by the Karush–Kuhn–Tucker conditions, which can also take into account inequality constraints of the form h(x) ≤ c for a given constant c.
c_hrou57yf27hu
Network simplex algorithm
Summary
Network_simplex_algorithm
In mathematical optimization, the network simplex algorithm is a graph theoretic specialization of the simplex algorithm. The algorithm is usually formulated in terms of a minimum-cost flow problem. The network simplex method works very well in practice, typically 200 to 300 times faster than the simplex method applied to a general linear program of the same dimensions.
c_vphenfkel71q
Ordered subset expectation maximization
Summary
Ordered_subset_expectation_maximization
In mathematical optimization, the ordered subset expectation maximization (OSEM) method is an iterative method that is used in computed tomography. In applications in medical imaging, the OSEM method is used for positron emission tomography, for single photon emission computed tomography, and for X-ray computed tomography. The OSEM method is related to the expectation maximization (EM) method of statistics. The OSEM method is also related to methods of filtered back projection.
c_spqs6g8g8bnk
Perturbation function
Summary
Perturbation_function
In mathematical optimization, the perturbation function is any function which relates to primal and dual problems. The name comes from the fact that any such function defines a perturbation of the initial problem. In many cases this takes the form of shifting the constraints. In some texts the value function is called the perturbation function, and the perturbation function is called the bifunction.
c_83utjnop0c15
Non-negative least squares
Summary
Non-negative_least_squares
In mathematical optimization, the problem of non-negative least squares (NNLS) is a type of constrained least squares problem where the coefficients are not allowed to become negative. That is, given a matrix A and a (column) vector of response variables y, the goal is to find arg min_x ‖Ax − y‖₂² subject to x ≥ 0. Here x ≥ 0 means that each component of the vector x should be non-negative, and ‖·‖₂ denotes the Euclidean norm. Non-negative least squares problems turn up as subproblems in matrix decomposition, e.g. in algorithms for PARAFAC and non-negative matrix/tensor factorization. The latter can be considered a generalization of NNLS. Another generalization of NNLS is bounded-variable least squares (BVLS), with simultaneous upper and lower bounds αi ≤ xi ≤ βi.
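SciPy ships a solver for exactly this problem; a small example (data made up for illustration) where the unconstrained least-squares solution would have a negative coefficient:

```python
import numpy as np
from scipy.optimize import nnls

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
y = np.array([2.0, 1.0, -1.0])

x, rnorm = nnls(A, y)   # argmin_x ||Ax - y||_2  subject to  x >= 0
print(x)        # roughly [1.5, 0.]; ordinary least squares would give [2, -1]
print(rnorm)    # Euclidean norm of the residual at the NNLS solution
```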
c_atws0npmu00q
Proximal operator
Summary
Proximal_operator
In mathematical optimization, the proximal operator is an operator associated with a proper, lower semi-continuous convex function f from a Hilbert space X to the extended real line (-∞, +∞], and is defined by prox_f(v) = arg min_{x ∈ X} ( f(x) + (1/2) ‖x − v‖_X² ). For any function in this class, the minimizer of the right-hand side above is unique, hence making the proximal operator well-defined. The proximal operator is used in proximal gradient methods, which are frequently used in optimization algorithms associated with non-differentiable optimization problems such as total variation denoising.
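A concrete instance: for f(x) = λ|x| (applied elementwise) the proximal operator has the well-known closed form of soft thresholding; the sketch below assumes that choice of f.

```python
import numpy as np

def prox_abs(v, lam=1.0):
    """Proximal operator of f(x) = lam*|x| (soft thresholding), i.e.
    argmin_x lam*|x| + 0.5*(x - v)**2, applied elementwise."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

print(prox_abs(np.array([-3.0, 0.4, 2.5]), lam=1.0))   # [-2.   0.   1.5]
```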
c_6rrwzc8ml0e9
Push–relabel maximum flow algorithm
Summary
Push-relabel_maximum_flow_algorithm
In mathematical optimization, the push–relabel algorithm (alternatively, preflow–push algorithm) is an algorithm for computing maximum flows in a flow network. The name "push–relabel" comes from the two basic operations used in the algorithm. Throughout its execution, the algorithm maintains a "preflow" and gradually converts it into a maximum flow by moving flow locally between neighboring nodes using push operations under the guidance of an admissible network maintained by relabel operations. In comparison, the Ford–Fulkerson algorithm performs global augmentations that send flow following paths from the source all the way to the sink. The push–relabel algorithm is considered one of the most efficient maximum flow algorithms.
c_uo5fkth7azds
Push–relabel maximum flow algorithm
Summary
Push-relabel_maximum_flow_algorithm
The generic algorithm has a strongly polynomial O(V^2 E) time complexity, which is asymptotically more efficient than the O(V E^2) Edmonds–Karp algorithm. Specific variants of the algorithm achieve even lower time complexities. The variant based on the highest label node selection rule has O(V^2 √E) time complexity and is generally regarded as the benchmark for maximum flow algorithms. Subcubic O(VE log(V^2/E)) time complexity can be achieved using dynamic trees, although in practice it is less efficient. The push–relabel algorithm has been extended to compute minimum cost flows. The idea of distance labels has led to a more efficient augmenting path algorithm, which in turn can be incorporated back into the push–relabel algorithm to create a variant with even higher empirical performance.
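As a usage illustration (assuming NetworkX's preflow_push implementation of this algorithm, with made-up capacities):

```python
import networkx as nx
from networkx.algorithms.flow import preflow_push

G = nx.DiGraph()
G.add_edge("s", "a", capacity=3.0)
G.add_edge("s", "b", capacity=2.0)
G.add_edge("a", "b", capacity=1.0)
G.add_edge("a", "t", capacity=2.0)
G.add_edge("b", "t", capacity=3.0)

# Compute a maximum s-t flow with the push-relabel (preflow-push) routine.
flow_value, flow_dict = nx.maximum_flow(G, "s", "t", flow_func=preflow_push)
print(flow_value)   # 5.0 for this small network
```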
c_uvib2gsw34cb
Revised simplex algorithm
Summary
Revised_simplex_method
In mathematical optimization, the revised simplex method is a variant of George Dantzig's simplex method for linear programming. The revised simplex method is mathematically equivalent to the standard simplex method but differs in implementation. Instead of maintaining a tableau which explicitly represents the constraints adjusted to a set of basic variables, it maintains a representation of a basis of the matrix representing the constraints. The matrix-oriented approach allows for greater computational efficiency by enabling sparse matrix operations.
c_w5g1mil8bjds
Total dual integrality
Summary
Total_dual_integrality
In mathematical optimization, total dual integrality is a sufficient condition for the integrality of a polyhedron. Thus, the optimization of a linear objective over the integral points of such a polyhedron can be done using techniques from linear programming. A linear system Ax ≤ b, where A and b are rational, is called totally dual integral (TDI) if for any c ∈ Z^n such that the linear program max { c^T x : Ax ≤ b } has a feasible, bounded solution, there is an integer optimal dual solution. Edmonds and Giles showed that if a polyhedron P is the solution set of a TDI system Ax ≤ b, where b has all integer entries, then every vertex of P is integer-valued. Thus, if a linear program as above is solved by the simplex algorithm, the optimal solution returned will be integer.
c_ipgw3nv524qx
Total dual integrality
Summary
Total_dual_integrality
Further, Giles and Pulleyblank showed that if P is a polytope whose vertices are all integer valued, then P is the solution set of some TDI system Ax ≤ b, where b is integer valued. Note that TDI is a weaker sufficient condition for integrality than total unimodularity.
c_n34pdowmxd2s
Liability threshold model
Summary
Threshold_model
In mathematical or statistical modeling a threshold model is any model where a threshold value, or set of threshold values, is used to distinguish ranges of values where the behaviour predicted by the model varies in some important way. A particularly important instance arises in toxicology, where the model for the effect of a drug may be that there is zero effect for a dose below a critical or threshold value, while an effect of some significance exists above that value. Certain types of regression model may include threshold effects.
c_wihdg2veoaq6
Prime filter
Summary
Ideal_(order_theory)
In mathematical order theory, an ideal is a special subset of a partially ordered set (poset). Although this term historically was derived from the notion of a ring ideal of abstract algebra, it has subsequently been generalized to a different notion. Ideals are of great importance for many constructions in order and lattice theory.
c_qnnf13gpfyji
Linear transport theory
Summary
Linear_transport_theory
In mathematical physics, linear transport theory is the study of equations describing the migration of particles or energy within a host medium when such migration involves random absorption, emission and scattering events. Subject to certain simplifying assumptions, this is a common and useful framework for describing the scattering of light (radiative transfer) or neutrons (neutron transport). Given the laws of individual collision events (in the form of absorption coefficients and scattering kernels/phase functions), the problem of linear transport theory is then to determine the result of a large number of random collisions governed by these laws. This involves computing exact or approximate solutions of the transport equation, and there are various forms of the transport equation that have been studied. Common varieties include steady-state vs time-dependent, scalar vs vector (the latter including polarization), and monoenergetic vs multi-energy (multi-group).
c_qcizg65s9ycf
Gravitational instantons
Summary
Gravitational_instanton
In mathematical physics and differential geometry, a gravitational instanton is a four-dimensional complete Riemannian manifold satisfying the vacuum Einstein equations. They are so named because they are analogues in quantum theories of gravity of instantons in Yang–Mills theory. In accordance with this analogy with self-dual Yang–Mills instantons, gravitational instantons are usually assumed to look like four dimensional Euclidean space at large distances, and to have a self-dual Riemann tensor. Mathematically, this means that they are asymptotically locally Euclidean (or perhaps asymptotically locally flat) hyperkähler 4-manifolds, and in this sense, they are special examples of Einstein manifolds.
c_xzm8rtp9yy2r
Gravitational instantons
Summary
Gravitational_instanton
From a physical point of view, a gravitational instanton is a non-singular solution of the vacuum Einstein equations with positive-definite, as opposed to Lorentzian, metric. There are many possible generalizations of the original conception of a gravitational instanton: for example one can allow gravitational instantons to have a nonzero cosmological constant or a Riemann tensor which is not self-dual. One can also relax the boundary condition that the metric is asymptotically Euclidean. There are many methods for constructing gravitational instantons, including the Gibbons–Hawking Ansatz, twistor theory, and the hyperkähler quotient construction.
c_58yas35pvxd0
ADHM construction
Summary
ADHM_construction
In mathematical physics and gauge theory, the ADHM construction or monad construction is the construction of all instantons using methods of linear algebra by Michael Atiyah, Vladimir Drinfeld, Nigel Hitchin, and Yuri I. Manin in their paper "Construction of Instantons."
c_m3jrp74tuo6u
Quadratic Fourier transform
Summary
Quadratic_Fourier_transform
In mathematical physics and harmonic analysis, the quadratic Fourier transform is an integral transform that generalizes the fractional Fourier transform, which in turn generalizes the Fourier transform. Roughly speaking, the Fourier transform corresponds to a change of variables from time to frequency (in the context of harmonic analysis) or from position to momentum (in the context of quantum mechanics). In phase space, this is a 90 degree rotation. The fractional Fourier transform generalizes this to any angle rotation, giving a smooth mixture of time and frequency, or of position and momentum.
c_242te1binh11
Quadratic Fourier transform
Summary
Quadratic_Fourier_transform
The quadratic Fourier transform extends this further to the group of all linear symplectic transformations in phase space (of which rotations are a subgroup). More specifically, for every member of the metaplectic group (which is a double cover of the symplectic group) there is a corresponding quadratic Fourier transform.
c_8cwvg3gokaoo
Pauli matrices
Summary
Pauli_spin_matrix
In mathematical physics and mathematics, the Pauli matrices are a set of three 2 × 2 complex matrices which are Hermitian, involutory and unitary. Usually indicated by the Greek letter sigma (σ), they are occasionally denoted by tau (τ) when used in connection with isospin symmetries. These matrices are named after the physicist Wolfgang Pauli.
c_i600yfrdjnnp
Pauli matrices
Summary
Pauli_spin_matrix
In quantum mechanics, they occur in the Pauli equation which takes into account the interaction of the spin of a particle with an external electromagnetic field. They also represent the interaction states of two polarization filters for horizontal / vertical polarization, 45 degree polarization (right/left), and circular polarization (right/left). Each Pauli matrix is Hermitian, and together with the identity matrix I (sometimes considered as the zeroth Pauli matrix σ0 ), the Pauli matrices form a basis for the real vector space of 2 × 2 Hermitian matrices.
c_clpg363yvdno
Pauli matrices
Summary
Pauli_spin_matrix
This means that any 2 × 2 Hermitian matrix can be written in a unique way as a linear combination of Pauli matrices, with all coefficients being real numbers. Hermitian operators represent observables in quantum mechanics, so the Pauli matrices span the space of observables of the complex 2 dimensional Hilbert space. In the context of Pauli's work, σk represents the observable corresponding to spin along the kth coordinate axis in three-dimensional Euclidean space R 3 .
c_ky0264l2tj2t
Pauli matrices
Summary
Pauli_spin_matrix
The Pauli matrices (after multiplication by i to make them anti-Hermitian) also generate transformations in the sense of Lie algebras: the matrices iσ1, iσ2, iσ3 form a basis for the real Lie algebra su(2), which exponentiates to the special unitary group SU(2). The algebra generated by the three matrices σ1, σ2, σ3 is isomorphic to the Clifford algebra of R^3, and the (unital associative) algebra generated by iσ1, iσ2, iσ3 functions identically (is isomorphic) to that of the quaternions (H).
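A quick numerical check of the properties listed above (Hermitian, involutory, and one instance of the algebra they generate):

```python
import numpy as np

sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)

for s in (sigma1, sigma2, sigma3):
    assert np.allclose(s, s.conj().T)        # Hermitian
    assert np.allclose(s @ s, np.eye(2))     # involutory, hence also unitary

assert np.allclose(sigma1 @ sigma2, 1j * sigma3)   # sigma_1 sigma_2 = i sigma_3
print("all Pauli identities verified")
```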
c_zt33ah67g39h
Continuum limit
Summary
Continuum_limit
In mathematical physics and mathematics, the continuum limit or scaling limit of a lattice model refers to its behaviour in the limit as the lattice spacing goes to zero. It is often useful to use lattice models to approximate real-world processes, such as Brownian motion. Indeed, according to Donsker's theorem, the discrete random walk would, in the scaling limit, approach the true Brownian motion.
c_bak88usbdlex
Gaussian q-distribution
Summary
Gaussian_q-distribution
In mathematical physics and probability and statistics, the Gaussian q-distribution is a family of probability distributions that includes, as limiting cases, the uniform distribution and the normal (Gaussian) distribution. It was introduced by Diaz and Teruel. It is a q-analog of the Gaussian or normal distribution. The distribution is symmetric about zero and is bounded, except for the limiting case of the normal distribution. The limiting uniform distribution is on the range -1 to +1.
c_z1qpj5dhrot4
Orbital stability
Summary
Orbital_stability
In mathematical physics and the theory of partial differential equations, the solitary wave solution of the form u ( x , t ) = e − i ω t ϕ ( x ) {\displaystyle u(x,t)=e^{-i\omega t}\phi (x)} is said to be orbitally stable if any solution with the initial data sufficiently close to ϕ ( x ) {\displaystyle \phi (x)} forever remains in a given small neighborhood of the trajectory of e − i ω t ϕ ( x ) . {\displaystyle e^{-i\omega t}\phi (x).}
c_27qc8imjkxhl
Higher gauge theory
Summary
Higher_gauge_theory
In mathematical physics higher gauge theory is the general study of counterparts of gauge theory that involve higher-degree differential forms instead of the traditional connection forms of gauge theories.
c_lce7zg4n6co6
KZ equation
Summary
KZ_equation
In mathematical physics the Knizhnik–Zamolodchikov equations, or KZ equations, are linear differential equations satisfied by the correlation functions (on the Riemann sphere) of two-dimensional conformal field theories associated with an affine Lie algebra at a fixed level. They form a system of complex partial differential equations with regular singular points satisfied by the N-point functions of affine primary fields and can be derived using either the formalism of Lie algebras or that of vertex algebras. The structure of the genus-zero part of the conformal field theory is encoded in the monodromy properties of these equations. In particular, the braiding and fusion of the primary fields (or their associated representations) can be deduced from the properties of the four-point functions, for which the equations reduce to a single matrix-valued first-order complex ordinary differential equation of Fuchsian type. Originally the Russian physicists Vadim Knizhnik and Alexander Zamolodchikov derived the equations for the SU(2) Wess–Zumino–Witten model using the classical formulas of Gauss for the connection coefficients of the hypergeometric differential equation.
c_ugv6icx898a1
Clebsch–Gordan coefficients for SU(3)
Summary
Clebsch–Gordan_coefficients_for_SU(3)
In mathematical physics, Clebsch–Gordan coefficients are the expansion coefficients of total angular momentum eigenstates in an uncoupled tensor product basis. Mathematically, they specify the decomposition of the tensor product of two irreducible representations into a direct sum of irreducible representations, where the type and the multiplicities of these irreducible representations are known abstractly. The name derives from the German mathematicians Alfred Clebsch (1833–1872) and Paul Gordan (1837–1912), who encountered an equivalent problem in invariant theory. Generalization to SU(3) of Clebsch–Gordan coefficients is useful because of their utility in characterizing hadronic decays, where a flavor-SU(3) symmetry exists (the eightfold way) that connects the three light quarks: up, down, and strange.
c_kbf1pazsab5b
Gleason's theorem
Summary
Gleason's_theorem
In mathematical physics, Gleason's theorem shows that the rule one uses to calculate probabilities in quantum physics, the Born rule, can be derived from the usual mathematical representation of measurements in quantum physics together with the assumption of non-contextuality. Andrew M. Gleason first proved the theorem in 1957, answering a question posed by George W. Mackey, an accomplishment that was historically significant for the role it played in showing that wide classes of hidden-variable theories are inconsistent with quantum physics. Multiple variations have been proven in the years since. Gleason's theorem is of particular importance for the field of quantum logic and its attempt to find a minimal set of mathematical axioms for quantum theory.
c_k13pu773oo6y
Hilbert-Ackermann system
Summary
Hilbert_system
In mathematical physics, Hilbert system is an infrequently used term for a physical system described by a C*-algebra. In logic, especially mathematical logic, a Hilbert system, sometimes called Hilbert calculus, Hilbert-style deductive system or Hilbert–Ackermann system, is a type of system of formal deduction attributed to Gottlob Frege and David Hilbert. These deductive systems are most often studied for first-order logic, but are of interest for other logics as well. Most variants of Hilbert systems take a characteristic tack in the way they balance a trade-off between logical axioms and rules of inference. Hilbert systems can be characterised by the choice of a large number of schemes of logical axioms and a small set of rules of inference.
c_2u6uzyd82lme
Hilbert-Ackermann system
Summary
Hilbert_system
Systems of natural deduction take the opposite tack, including many deduction rules but very few or no axiom schemes. The most commonly studied Hilbert systems have either just one rule of inference – modus ponens, for propositional logics – or two – with generalisation, to handle predicate logics, as well – and several infinite axiom schemes. Hilbert systems for propositional modal logics, sometimes called Hilbert-Lewis systems, are generally axiomatised with two additional rules, the necessitation rule and the uniform substitution rule.
c_qziuco8ao57r
Hilbert-Ackermann system
Summary
Hilbert_system
A characteristic feature of the many variants of Hilbert systems is that the context is not changed in any of their rules of inference, while both natural deduction and sequent calculus contain some context-changing rules. Thus, if one is interested only in the derivability of tautologies, not of hypothetical judgments, then one can formalize the Hilbert system in such a way that its rules of inference contain only judgments of a rather simple form. The same cannot be done with the other two deduction systems: as context is changed in some of their rules of inference, they cannot be formalized so that hypothetical judgments could be avoided – not even if we want to use them just for proving derivability of tautologies.
c_dax5p2z4qukk
Kundt spacetime
Summary
Kundt_spacetime
In mathematical physics, Kundt spacetimes are Lorentzian manifolds admitting a geodesic null congruence with vanishing optical scalars (expansion, twist and shear). A well-known member of the Kundt class is the pp-wave. Ricci-flat Kundt spacetimes in arbitrary dimension are algebraically special.
c_zcz3dco80i09
Kundt spacetime
Summary
Kundt_spacetime
In four dimensions Ricci-flat Kundt metrics of Petrov type III and N are completely known. All VSI spacetimes belong to a subset of the Kundt spacetimes.
c_ppaqkewubs6a
Geometry of special relativity
Summary
Spacelike_vector
In mathematical physics, Minkowski space (or Minkowski spacetime) combines three-dimensional Euclidean space and time into a single four-dimensional manifold. A four-vector (x, y, z, t) consists of the coordinates along three spatial axes plus a time coordinate. Such coordinates may be used to illustrate the specifics of motion relative to a non-inertial frame, but should not be confused with the spacetime model in general.
c_oxd7pvl52o5q
Geometry of special relativity
Summary
Spacelike_vector
The model helps show how a spacetime interval between any two events is independent of the inertial frame of reference in which they are recorded. Mathematician Hermann Minkowski developed it from the work of Hendrik Lorentz, Henri Poincaré, and others, and said it "was grown on experimental physical grounds." Minkowski space is closely associated with Einstein's theories of special relativity and general relativity and is the most common mathematical structure by which special relativity is formalized.
c_zxbduw571hyt
Geometry of special relativity
Summary
Spacelike_vector
While the individual components in Euclidean space and time might differ due to length contraction and time dilation, in Minkowski spacetime, all frames of reference will agree on the total interval in spacetime between events. Minkowski space differs from four-dimensional Euclidean space insofar as it treats time differently than the three spatial dimensions. In 3-dimensional Euclidean space, the isometry group (the maps preserving the regular Euclidean distance) is the Euclidean group.
c_q0attom3lip3
Geometry of special relativity
Summary
Spacelike_vector
It is generated by rotations, reflections and translations. When time is appended as a fourth dimension, the further transformations of translations in time and Lorentz boosts are added, and the group of all these transformations is called the Poincaré group. Minkowski's model follows special relativity where motion causes time dilation changing the scale applied to the frame in motion and shifts the phase of light.
c_lxuajgt51wov
Geometry of special relativity
Summary
Spacelike_vector
Spacetime is equipped with an indefinite non-degenerate bilinear form, variously called the Minkowski metric, the Minkowski norm squared or Minkowski inner product depending on the context. The Minkowski inner product is defined so as to yield the spacetime interval between two events when given their coordinate difference vector as argument. Equipped with this inner product, the mathematical model of spacetime is called Minkowski space. The group of transformations for Minkowski space that preserve the spacetime interval (as opposed to the spatial Euclidean distance) is the Poincaré group (as opposed to the isometry group).
c_o7m2er95mpo9
Yang–Mills action
Summary
Yang-Mills_field
In mathematical physics, Yang–Mills theory is a gauge theory based on a special unitary group SU(n), or more generally any compact, reductive Lie algebra. Yang–Mills theory seeks to describe the behavior of elementary particles using these non-abelian Lie groups and is at the core of the unification of the electromagnetic and weak forces (i.e. U(1) × SU(2)) as well as quantum chromodynamics, the theory of the strong force (based on SU(3)). Thus it forms the basis of our understanding of the Standard Model of particle physics.
c_uwte4vpf4lr4
Gibbons–Hawking space
Summary
Gibbons–Hawking_space
In mathematical physics, a Gibbons–Hawking space, named after Gary Gibbons and Stephen Hawking, is essentially a hyperkähler manifold with an extra U(1) symmetry. (In general, Gibbons–Hawking metrics are a subclass of hyperkähler metrics.) Gibbons–Hawking spaces, especially ambipolar ones, find an application in the study of black hole microstate geometries.
c_653y082gokie
Grassman variable
Summary
Grassmann_number
In mathematical physics, a Grassmann number, named after Hermann Grassmann (also called an anticommuting number or supernumber), is an element of the exterior algebra over the complex numbers. The special case of a 1-dimensional algebra is known as a dual number. Grassmann numbers saw an early use in physics to express a path integral representation for fermionic fields, although they are now widely used as a foundation for superspace, on which supersymmetry is constructed.
c_9slcr1p7v2od
Pöschl–Teller potential
Summary
Pöschl–Teller_potential
In mathematical physics, a Pöschl–Teller potential, named after the physicists Herta Pöschl (credited as G. Pöschl) and Edward Teller, is a special class of potentials for which the one-dimensional Schrödinger equation can be solved in terms of special functions.
c_10q5nckwr14b
Caloron
Summary
Caloron
In mathematical physics, a caloron is the finite temperature generalization of an instanton.
c_egh23d476l5n
Closed timelike loop
Summary
Closed_Timelike_Curve
In mathematical physics, a closed timelike curve (CTC) is a world line in a Lorentzian manifold, of a material particle in spacetime, that is "closed", returning to its starting point. This possibility was first discovered by Willem Jacob van Stockum in 1937 and later confirmed by Kurt Gödel in 1949, who discovered a solution to the equations of general relativity (GR) allowing CTCs known as the Gödel metric; and since then other GR solutions containing CTCs have been found, such as the Tipler cylinder and traversable wormholes. If CTCs exist, their existence would seem to imply at least the theoretical possibility of time travel backwards in time, raising the spectre of the grandfather paradox, although the Novikov self-consistency principle seems to show that such paradoxes could be avoided. Some physicists speculate that the CTCs which appear in certain GR solutions might be ruled out by a future theory of quantum gravity which would replace GR, an idea which Stephen Hawking labeled the chronology protection conjecture. Others note that if every closed timelike curve in a given space-time passes through an event horizon, a property which can be called chronological censorship, then that space-time with event horizons excised would still be causally well behaved and an observer might not be able to detect the causal violation.
c_rh3vnqivk300
Lattice model (physics)
Summary
Lattice_models
In mathematical physics, a lattice model is a mathematical model of a physical system that is defined on a lattice, as opposed to a continuum, such as the continuum of space or spacetime. Lattice models originally occurred in the context of condensed matter physics, where the atoms of a crystal automatically form a lattice. Currently, lattice models are quite popular in theoretical physics, for many reasons. Some models are exactly solvable, and thus offer insight into physics beyond what can be learned from perturbation theory.
c_vu9wyorv1j5l
Lattice model (physics)
Summary
Lattice_models
Lattice models are also ideal for study by the methods of computational physics, as the discretization of any continuum model automatically turns it into a lattice model. The exact solution to many of these models (when they are solvable) includes the presence of solitons. Techniques for solving these include the inverse scattering transform and the method of Lax pairs, the Yang–Baxter equation and quantum groups.
c_1id7bqp9arwv
Lattice model (physics)
Summary
Lattice_models
The solution of these models has given insights into the nature of phase transitions, magnetization and scaling behaviour, as well as insights into the nature of quantum field theory. Physical lattice models frequently occur as an approximation to a continuum theory, either to give an ultraviolet cutoff to the theory to prevent divergences or to perform numerical computations. An example of a continuum theory that is widely studied by lattice models is the QCD lattice model, a discretization of quantum chromodynamics.
c_peyp6u4au8eu
Lattice model (physics)
Summary
Lattice_models
However, digital physics considers nature to be fundamentally discrete at the Planck scale, which imposes an upper limit on the density of information (cf. the holographic principle). More generally, lattice gauge theory and lattice field theory are areas of study. Lattice models are also used to simulate the structure and dynamics of polymers.
c_vxt3cbxssfrh
Null dust solution
Summary
Null_dust_solution
In mathematical physics, a null dust solution (sometimes called a null fluid) is a Lorentzian manifold in which the Einstein tensor is null. Such a spacetime can be interpreted as an exact solution of Einstein's field equation, in which the only mass–energy present in the spacetime is due to some kind of massless radiation.
c_h2vh4o5g2e8j
Neveu–Schwarz boundary conditions
Summary
Neveu–Schwarz_boundary_conditions
In mathematical physics, a super Virasoro algebra is an extension of the Virasoro algebra (named after Miguel Ángel Virasoro) to a Lie superalgebra. There are two extensions with particular importance in superstring theory: the Ramond algebra (named after Pierre Ramond) and the Neveu–Schwarz algebra (named after André Neveu and John Henry Schwarz). Both algebras have N = 1 supersymmetry and an even part given by the Virasoro algebra. They describe the symmetries of a superstring in two different sectors, called the Ramond sector and the Neveu–Schwarz sector.
c_12ris5lqt3jc
Constructive quantum field theory
Summary
Constructive_quantum_field_theory
In mathematical physics, constructive quantum field theory is the field devoted to showing that quantum field theory can be defined in terms of precise mathematical structures. This demonstration requires new mathematics, in a sense analogous to classical real analysis, putting calculus on a mathematically rigorous foundation. Weak, strong, and electromagnetic forces of nature are believed to have their natural description in terms of quantum fields.
c_1tbd3yzh9c09
Constructive quantum field theory
Summary
Constructive_quantum_field_theory
Attempts to put quantum field theory on a basis of completely defined concepts have involved most branches of mathematics, including functional analysis, differential equations, probability theory, representation theory, geometry, and topology. It is known that a quantum field is inherently hard to handle using conventional mathematical techniques like explicit estimates. This is because a quantum field has the general nature of an operator-valued distribution, a type of object from mathematical analysis.
c_2e0uq8it9d4p
Constructive quantum field theory
Summary
Constructive_quantum_field_theory
The existence theorems for quantum fields can be expected to be very difficult to find, if indeed they are possible at all. One discovery of the theory that can be related in non-technical terms is that the dimension d of the spacetime involved is crucial. Notable work in the field by James Glimm and Arthur Jaffe showed that with d < 4 many examples can be found.
c_iw5sqxa1jjm4
Constructive quantum field theory
Summary
Constructive_quantum_field_theory
Along with work of their students, coworkers, and others, constructive field theory resulted in a mathematical foundation and exact interpretation to what previously was only a set of recipes, also in the case d < 4. Theoretical physicists had given these rules the name "renormalization," but most physicists had been skeptical about whether they could be turned into a mathematical theory. Today one of the most important open problems, both in theoretical physics and in mathematics, is to establish similar results for gauge theory in the realistic case d = 4.